\section{Introduction}\label{sec:intro} A major challenge towards self-organizing networks (SON) is the joint optimization of multiple SON use cases by coordinated handling of multiple configuration parameters. Widely studied SON use cases include coverage and capacity optimization (CCO), mobility load balancing (MLB) and mobility robustness optimization (MRO) \cite{3GPP36902}. However, most existing works study a single isolated use case and ignore the conflicts or interactions between the use cases \cite{giovanidis2012dist,razavi2010self}. In contrast, this paper considers a joint optimization of two strongly coupled use cases: CCO and MLB. The objective is to achieve a good trade-off between coverage and capacity performance, while ensuring a load-balanced network. The SON functionalities are usually implemented at the network management layer and are designed to deal with \lq\lq long-term\rq\rq \ network performance. Short-term optimization of individual users is left to lower layers of the protocol stack. To capture long-term global changes in a network, we consider a cluster-based network scenario, where users served by the same base station (BS) and having similar SINR distributions are adaptively grouped into clusters. Our objective is to jointly optimize the following variables: \begin{itemize} \item Cluster-based BS assignment and power allocation. \item BS-based antenna tilt optimization and power allocation. \end{itemize} The joint optimization of assignment, antenna tilts, and powers is an inherently challenging problem. The interference and the resulting performance measures depend on these variables in a complex and intertwined manner. Such a problem, to the best of the authors' knowledge, has been studied in only a few works. For example, in \cite{klessig2012improving} a problem of jointly optimizing antenna tilt and cell selection to improve the spectral and energy efficiency is stated; however, the solution derived by a structured search algorithm may not be optimal. In this paper, we propose a robust algorithmic framework built on a utility model, which enables fast and near-optimal uplink solutions and sub-optimal downlink solutions by exploiting three properties: 1) the monotonicity and fixed-point properties of monotone and strictly subhomogeneous (MSS) functions\footnote{Much of the literature uses the term {\it interference function} for functions satisfying three conditions: positivity, monotonicity and scalability \cite{yates95}. Positivity is shown to be a consequence of the other two properties \cite{leung2004convergence}, and we use the term {\it strictly subhomogeneous} in place of scalable from a contraction mapping point of view, in keeping with related literature \cite{nuzman2007contraction}.}, 2) the decoupled structure of the antenna tilt and BS assignment optimization in the uplink network, and 3) uplink-downlink duality. The first property admits a globally optimal solution via fixed-point iteration for two specific problems: utility-constrained power minimization and power-constrained max-min utility balancing \cite{vucic2011fixed,stanczak2009fundamentals,schubert2012interference,yates95}. The second and third properties enable decomposition of the high-dimensional optimization problem, as in the joint beamforming and power control approaches proposed in \cite{BocheDuality06,schubert2005iterative,huang2013joint,he2012multi}.
Our distinct contributions in this work can be summarized as follows:\\ 1) We propose a max-min utility balancing algorithm for capacity-coverage trade-off optimization over a joint space of antenna tilts, BS assignments and powers. The utility, defined as a convex combination of the average SINR and the worst-case SINR, captures the balance between capacity and coverage performance. Load balancing is improved as well due to a uniform distribution of the interference among the BSs.\\ 2) The proposed utility is formulated based on MSS functions, which allows us to find the optimal solution by applying fixed-point iterations.\\ 3) Since antenna tilts are BS-specific variables while assignments are cluster-specific, we develop two optimization problems with the same objective function, formulated either in terms of per-cluster variables or in terms of per-BS variables. We propose a two-step optimization algorithm in the uplink to iteratively optimize the per-BS variables (antenna tilts and BS power budgets) and the cluster-based variables (assignments and cluster powers). Since both problems aim at optimizing the same objective function, the algorithm is shown to be convergent.\\ 4) The decoupled property of antenna tilt and assignment in the uplink decomposes the high-dimensional optimization problem and enables a more efficient optimization algorithm. We then analyze the uplink-downlink duality by using Perron-Frobenius theory \cite{meyer2000matrix}, and propose an efficient sub-optimal solution in the downlink by utilizing the optimized variables of the dual uplink. \section{System Model}\label{sec:Model} We consider a multicell wireless network composed of a set of BSs $\set{N}:=\{1,\ldots, N\}$ and a set of users $\set{K}:=\{1,\ldots, K\}$. Using the fuzzy C-means clustering algorithm \cite{bezdek1984fcm}, we group users that are served by the same BS and have similar SINR distributions\footnote{We assume the Kullback-Leibler divergence as the distance metric.} into clusters. The clustering algorithm is beyond the scope of this paper. Let the set of user clusters be denoted by $\set{C}:=\{1,\ldots,C\}$, and let $\bm{A}$ denote a $C\times K$ binary user/cluster assignment matrix whose columns sum to one. The BS/cluster assignment is defined by an $N\times C$ binary matrix $\bm{B}$ whose columns also sum to one. Throughout the paper, we assume a frequency-flat channel. The average/long-term downlink path attenuations between the $N$ BSs and the $K$ users are collected in a channel gain matrix $\bm{H}\in {\field{R}}^{N\times K}$. We introduce the cross-link gain matrix $\bm{V}\in{\field{R}}^{K\times K}$, where the entry $v_{lk}(\theta_j)$ is the cross-link gain between user $l$ served by BS $j$, and user $k$ served by BS $i$, i.e., between the transmitter of the link $(j, l)$ and the receiver of the link $(i, k)$. Note that $v_{lk}(\theta_j)$ depends on the antenna downtilt $\theta_j$. Let the BS/user assignment matrix be denoted by $\bm{J}$ so that we have $\bm{J}:=\bm{B}\bm{A}\in\{0,1\}^{N\times K}$, and $\bm{V}:=\bm{J}^T\bm{H}$. We denote by $\bm{r}:=[r_1, \ldots, r_N]^T$, $\bm{q}:=[q_1, \ldots, q_C]^T$ and $\bm{p}:=[p_1, \ldots, p_K]^T$ the BS transmission power budgets, the cluster power allocation and the user power allocation, respectively. % \subsection{Inter-cluster and intra-cluster power sharing factors} \label{subsec:powFactor} We introduce the inter-cluster and intra-cluster power sharing factors to enable the transformation between power vectors of different dimensions.
Let $\bm{b}:=[b_1, \ldots, b_C]^T$ denote the serving BSs of clusters $\{1, \ldots, C\}$. We define the vector of the inter-cluster power sharing factors to be $\bm{\beta}:=[\beta_1, \ldots, \beta_C]^T$, where $\beta_c:=q_c/r_{b_c}$. With the BS/cluster assignment matrix $\bm{B}$, we have $\bm{q}:=\ma{B}_{\ve{\beta}}^T \bm{r}$, where $\ma{B}_{\ve{\beta}}:=\bm{B}\mathop{\mathrm{diag}}\{\bm{\beta}\}$. Since users belonging to the same cluster have similar SINR distributions, we allocate the cluster power uniformly to the users in the cluster. The intra-cluster sharing factors are represented by $\bm{\alpha}:=[\alpha_1, \ldots, \alpha_K]^T$ with $\alpha_k=1/|\set{K}_{c_k}|$ for $k\in\set{K}$, where $\set{K}_{c_k}$ denotes the set of users belonging to cluster $c_k$, while $c_k$ denotes the cluster containing user $k$. We have $\bm{p}:=\ma{A}_{\ve{\alpha}}^T\bm{q}$, where $\ma{A}_{\ve{\alpha}}:=\bm{A}\mathop{\mathrm{diag}}\{\bm{\alpha}\}$. The transformation between the BS power $\bm{r}$ and the user power $\bm{p}$ is then $\bm{p}:=\bm{T}\bm{r}$, where the transformation matrix is $\bm{T}:=\ma{A}_{\ve{\alpha}}^T\ma{B}_{\ve{\beta}}^T$. % \subsection{Signal-to-interference-plus-noise ratio}\label{subsec:SINR} Given the cross-link gain matrix $\bm{V}$, the downlink SINR of the $k$th user depends on all powers and is given by \begin{equation} \operator{SINR}_k^{(\text{d})}:=\frac{p_k \cdot v_{kk}(\theta_{n_k})}{\sum_{l\in\set{K}\setminus k} p_l \cdot v_{lk}(\theta_{n_l})+\sigma_k^2}, \quad k\in\set{K} \label{eqn:DL_SINR} \end{equation} where $n_k$ denotes the serving BS of user $k$ and $\sigma_k^2$ denotes the noise power received at user $k$. Likewise, the uplink SINR is \begin{equation} \operator{SINR}_k^{(\text{u})}:=\frac{p_k \cdot v_{kk}(\theta_{n_k})}{\sum_{l\in\set{K}\setminus k} p_l \cdot v_{kl}(\theta_{n_k})+\sigma_k^2}, \quad k\in\set{K} \label{eqn:UL_SINR} \end{equation} % Assuming that there is no self-interference, the cross-talk terms can be collected in a matrix \begin{equation} [\tilde{\ma{V}}]_{lk}:= \begin{cases} v_{lk}(\theta_{n_l}), & l\neq k\\ 0, & l=k \end{cases}. \label{eqn:PsiMat} \end{equation} Thus the downlink interference received by user $k$ can be written as $I_k^{(\text{d})}:=[\tilde{\bm{V}}^T\bm{p}]_k$, while the uplink interference is given by $I_k^{(\text{u})}:=[\tilde{\bm{V}}\bm{p}]_k$. A crucial property is that the uplink SINR of user $k$ depends on the BS assignment $n_k$ and the single antenna tilt $\theta_{n_k}$ alone, while the downlink SINR depends on the BS assignment vector $\bm{n}:=[n_1,\ldots, n_K]^T$ and the antenna tilt vector $\bm{\theta}:=[\theta_1, \ldots, \theta_N]^T$. The decoupled property of uplink transmission has been widely exploited in the context of uplink and downlink multi-user beamforming \cite{BocheDuality06} and provides a basis for the optimization algorithm in this paper. % The notation used in this paper is summarized in Table \ref{tab:CovCap_notation}.
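To make the distinction concrete, the following minimal NumPy sketch (with an arbitrary toy gain matrix and power vector assumed purely for illustration, and the tilt dependence suppressed) evaluates \eqref{eqn:DL_SINR} and \eqref{eqn:UL_SINR}; note that only the transpose of the cross-talk matrix $\tilde{\bm{V}}$ changes between the two directions.
\begin{verbatim}
# Minimal NumPy sketch (toy values, not from the paper): uplink vs.
# downlink SINR for K users, given the cross-link gain matrix V
# (diagonal entries = useful-link gains) and the user power vector p.
import numpy as np

V = np.array([[1.0, 0.1, 0.2],
              [0.3, 0.9, 0.1],
              [0.2, 0.2, 0.8]])    # v_lk: tx of link l -> rx of link k
p = np.array([0.5, 0.4, 0.6])      # user powers
sigma2 = 1e-2 * np.ones(3)         # noise powers

V_tilde = V - np.diag(np.diag(V))  # remove self-interference
useful = p * np.diag(V)            # p_k * v_kk
sinr_dl = useful / (V_tilde.T @ p + sigma2)  # downlink: [V~^T p]_k
sinr_ul = useful / (V_tilde @ p + sigma2)    # uplink:   [V~ p]_k
\end{verbatim}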
\begin{table}[t] \centering \caption{NOTATION SUMMARY} \begin{tabular}{|c|c|} \hline ${\emenge{N}}$ & set of BSs \\ ${\emenge{K}}$ & set of users \\ ${\emenge{C}}$ & set of user clusters\\ $\bm{A}$ & cluster/user assignment matrix\\ $\bm{B}$ & BS/cluster assignment matrix\\ $\bm{J}$ & BS/user assignment matrix\\ $c_k$ & cluster that user $k$ is subordinated to\\ ${\emenge{K}}_{c}$ & set of users subordinated to cluster $c$\\ $\bm{H}$ & channel gain matrix\\ $\bm{V}$ & interference coupling matrix\\ $\tilde{\bm{V}}$ & interference coupling matrix without intra-cell interference\\ $\tilde{\bm{V}}_{\bm{b}}$ & interference coupling matrix depending on BS assignments $\bm{b}$\\ $\tilde{\bm{V}}_{\bm{\theta}}$ & interference coupling matrix depending on antenna tilts $\bm{\theta}$\\ $\bm{r}$ & BS power budget vector\\ $\bm{q}$ & cluster power vector\\ $\bm{p}$ & user power vector\\ $\bm{\alpha}$ & intra-cluster power sharing factors\\ $\bm{\beta}$ & inter-cluster power sharing factors\\ $\bm{A}_{\bm{\alpha}}$ & transformation from $\bm{q}$ to $\bm{p}$, $\bm{p}:=\bm{A}_{\bm{\alpha}}^T\bm{q}$\\ $\bm{B}_{\bm{\beta}}$ & transformation from $\bm{r}$ to $\bm{q}$, $\bm{q}:=\bm{B}_{\bm{\beta}}^T\bm{r}$\\ $\bm{T}$ & transformation from $\bm{r}$ to $\bm{p}$, $\bm{p}:=\bm{T}\bm{r}$\\ $\bm{\theta}$ & BS antenna tilt vector\\ $\bm{b}$ & serving BSs of clusters\\ $b_c$ & serving BS of cluster $c$\\ $\bm{n}$ & serving BSs of the users\\ $n_k$ & serving BS of user $k$\\ $\bm{\sigma}$ & noise power vector\\ $P^{\text{max}}$ & sum power constraint\\ \hline \end{tabular} \label{tab:CovCap_notation} \end{table} \section{Utility Definition and Problem Formulation}\label{sec:ProbForm} As mentioned, the objective is a joint optimization of coverage, capacity and load balancing. We capture coverage by the worst-case SINR, while the average SINR is used to represent capacity. A cluster-based utility $U_c(\bm{\theta},\bm{r},\bm{q},\bm{b})$ is introduced as a combined function of the worst-case SINR and the average SINR, depending on the BS power allocation $\bm{r}$, the antenna downtilts $\bm{\theta}$, the cluster power allocation $\bm{q}$ and the BS/cluster assignment $\bm{b}$.\footnote{The reader should note that user-specific variables $(\bm{p},\bm{n})$ can be derived directly from cluster-specific variables $\bm{q}$ and $\bm{b}$, provided that the cluster/user assignment $\bm{A}$ and the intra-cluster power sharing factors $\bm{\alpha}$ are given.} To achieve load balancing by distributing the clusters to the BSs such that their utility targets can be achieved\footnote{The assignment of clusters also distributes the interference among the BSs.}, we formulate the following objective $$\max_{(\bm{r},\bm{\theta},\bm{q},\bm{b})}\min_{c\in\set{C}} \frac{U_c(\bm{r},\bm{\theta},\bm{q},\bm{b})}{\gamma_c}$$ where $\gamma_c$ is the predefined utility target for cluster $c$. The BS variables $(\bm{r},\bm{\theta})$ and the cluster variables $(\bm{q}, \bm{b})$ are optimized by iteratively solving\\ 1) Cluster-based BS assignment and power allocation $\max_{(\bm{q},\bm{b})}\min_{c\in\set{C}} U_c(\bm{q},\bm{b})/\gamma_c$ for fixed $(\hat{\bm{r}},\hat{\bm{\theta}})$ \\ 2) BS-based antenna tilt optimization and power allocation $\max_{(\bm{r},\bm{\theta})}\min_{c\in\set{C}} U_c(\bm{r},\bm{\theta})/\gamma_c$ for fixed $(\hat{\bm{q}},\hat{\bm{b}})$. In the following we introduce the utility definition and problem formulation for the cluster-based and the BS-based problems, respectively.
We start with the problem statement and algorithmic approaches for the uplink. We then discuss the downlink in Section \ref{sec:Duality}. % \subsection{Cluster-Based BS Assignment and Power Allocation}\label{subsec:clusterOpt} Assuming the per-BS variables $(\hat{\bm{r}}, \hat{\bm{\theta}})$ are fixed, let the interference coupling matrix in \eqref{eqn:PsiMat}, which depends on the BS assignment $\bm{b}$, be denoted by $\V_{\ve{b}}$. We first define two utility functions indicating capacity and coverage per cluster, respectively; then we introduce the joint utility as a combination of the capacity and coverage utilities. After that, we define the cluster-based max-min utility balancing problem based on the joint utility. % \subsubsection{Average SINR Utility (Capacity)}\label{subsubsec:LB_A} With the intra-cluster power sharing factors introduced in Section \ref{subsec:powFactor}, we have $\bm{p}:=\ma{A}_{\ve{\alpha}}^T \bm{q}$. Defining the noise vector $\bm{\sigma}:=[\sigma_1^2, \ldots, \sigma_K^2]^T$, the average SINR over all users in cluster $c$ can be written as \begin{align} \bar{U}_c^{(\text{u},1)}&(\bm{q}, \bm{b}) := \frac{1}{|\set{K}_c|} \sum_{k\in\set{K}_c}\operator{SINR}_k^{(\text{u})}\nonumber\\ &= \frac{1}{|\set{K}_c|} \sum_{k\in\set{K}_c}\frac{q_c \alpha_k v_{kk}}{\left[\V_{\ve{b}} \ma{A}_{\ve{\alpha}}^T \bm{q}+\bm{\sigma}\right]_k}\nonumber\\ &\geq \frac{1}{|\set{K}_c|}\frac{q_c \sum_{k\in\set{K}_c} \alpha_k v_{kk}}{\sum_{k\in\set{K}_c} \left[\V_{\ve{b}} \ma{A}_{\ve{\alpha}}^T \bm{q}+\bm{\sigma}\right]_k} =U_c^{(\text{u},1)}(\bm{q}, \bm{b}) \label{eqn:CL_cap_1} \end{align} The uplink capacity utility of cluster $c$, denoted by $U_c^{(\text{u},1)}$, is measured by the ratio between the total useful power and the total interference power received in the uplink within the cluster. Utility $U_c^{(\text{u},1)}$ is used instead of $\bar{U}_c^{(\text{u},1)}$ for two reasons: first, it is a lower bound on the average SINR; second, it has certain monotonicity properties (introduced in Section \ref{sec:OPAlgor}) that are useful for optimization. Introducing the cluster coupling term $\overline{\ma{G}}_{\ve{b}}^{(\text{u})}:=\bm{\Psi}\bm{A}\V_{\ve{b}}\ma{A}_{\ve{\alpha}}^T$, where $\bm{\Psi}:=\mathop{\mathrm{diag}}\{|\set{K}_1|/g_1, \ldots, |\set{K}_C|/g_C\}$ and $g_c:=\sum_{k\in \set{K}_c}\alpha_k v_{kk}$ for $c\in\set{C}$, and the noise term $\overline{\bm{z}}:=\bm{\Psi}\bm{A}\bm{\sigma}$, the capacity utility is simplified as \begin{align} U_c^{(\text{u},1)}(\bm{q}, \bm{b})&:=\frac{q_c}{\set{J}_c^{(\text{u},1)}(\bm{q}, \bm{b})}\label{eqn:CL_cap_2}\\ \mbox{where } \set{J}_c^{(\text{u},1)}(\bm{q}, \bm{b})&:=\left[\overline{\ma{G}}_{\ve{b}}^{(\text{u})}\bm{q}+\overline{\bm{z}}\right]_c. \label{eqn:CL_cap_inter} \end{align} % \subsubsection{Worst-Case SINR Utility (Coverage)} Roughly speaking, a coverage problem arises when a certain number of the SINRs are lower than the predefined SINR threshold. Thus, improving the coverage performance is equivalent to maximizing the worst-case SINR such that the worst-case SINR achieves the desired SINR target.
We then define the uplink coverage utility for each cluster as \begin{align} U_c^{(\text{u},2)}(\bm{q},\bm{b})&:=\min_{k\in\set{K}_c}\operator{SINR}_k^{(\text{u})}=\min_{k\in\set{K}_c} \frac{q_c\alpha_k v_{kk}}{\left[\V_{\ve{b}} \ma{A}_{\ve{\alpha}}^T \bm{q}+\bm{\sigma}\right]_k}\nonumber\\ &= \frac{q_c}{\max_{k\in\set{K}_c}\left[ \bm{\Phi}\V_{\ve{b}} \ma{A}_{\ve{\alpha}}^T \bm{q}+\bm{\Phi}\bm{\sigma}\right]_k} \label{eqn:CL_cov_1} \end{align} where $\bm{\Phi}:=\mathop{\mathrm{diag}}\{1/(\alpha_1 v_{11}), \ldots, 1/(\alpha_K v_{KK})\}$. We define a $C \times K$ matrix $\bm{X}:=[\bm{x}_1|\ldots|\bm{x}_C]^T$, where $\bm{x}_c:=\bm{e}^j_K$ and $\bm{e}^j_i$ denotes an $i$-dimensional binary vector which has exactly one entry (the $j$-th entry) equal to 1. Introducing the term $\underline{\ma{G}}_{\ve{b}}^{(\text{u})}:=\bm{\Phi}\V_{\ve{b}} \ma{A}_{\ve{\alpha}}^T$, and the noise term $\underline{\bm{z}}:=\bm{\Phi}\bm{\sigma}$, the coverage utility is given by \begin{align} U_c^{(\text{u},2)}(\bm{q},\bm{b})&:=\frac{q_c}{\set{J}_c^{(\text{u},2)}(\bm{q}, \bm{b})}\label{eqn:CL_cov_2}\\ \mbox{where } \set{J}_c^{(\text{u},2)}(\bm{q}, \bm{b}) & := \max_{\bm{x}_c:=\bm{e}_K^j, j\in\set{K}_c} \left[\bm{X}\underline{\ma{G}}_{\ve{b}}^{(\text{u})}\bm{q}+\bm{X}\underline{\bm{z}}\right]_c. \label{eqn:CL_cov_inter} \end{align} % \subsubsection{Joint Utility and Cluster-Based Max-Min Utility Balancing}\label{eqn:LB_maxmin} The joint utility $U_c^{(\text{u})}(\bm{q}, \bm{b})$ is defined as \begin{align} U_c^{(\text{u})}(\bm{q}, \bm{b})&:=\frac{q_c}{\set{J}_c^{\ul}(\bm{q}, \bm{b})}\label{eqn:LB_utility_1}\\ \mbox{where }\set{J}_c^{\ul}(\bm{q}, \bm{b})&:= \mu\set{J}_c^{(\text{u},1)}(\bm{q}, \bm{b})+(1-\mu)\set{J}_c^{(\text{u},2)}(\bm{q}, \bm{b})\label{eqn:LB_utility_2}. \end{align} In other words, the joint interference function $\set{J}_c^{\ul}$ is a convex combination of $\set{J}_c^{(\text{u},1)}$ in \eqref{eqn:CL_cap_inter} and $\set{J}_c^{(\text{u},2)}$ in \eqref{eqn:CL_cov_inter}. The cluster-based power-constrained max-min utility balancing problem in the uplink is then given by \begin{problem}[Cluster-Based Utility Balancing] \begin{equation} C^{(\text{u})}(P^{\text{max}})=\max_{\bm{q}\geq 0, \bm{b}\in \set{N}^C} \min_{c\in\set{C}} \frac{U_c^{(\text{u})}(\bm{q}, \bm{b})}{\gamma_c}, \mbox{s.t. } \|\bm{q}\|\leq P^{\text{max}} \label{eqn:LB_OP} \end{equation} Here, $\|\cdot\|$ is an arbitrary monotone norm, i.e., $\bm{q}\leq\bm{q}'$ implies $\|\bm{q}\|\leq\|\bm{q}'\|$, and $P^{\text{max}}$ denotes the total power constraint. According to the joint utility in \eqref{eqn:LB_utility_1} and \eqref{eqn:LB_utility_2}, the algorithm optimizes capacity when we set the tuning parameter $\mu=1$ (the utility is equivalent to the capacity utility in \eqref{eqn:CL_cap_2}), while with $\mu=0$ it optimizes coverage (the utility equals the coverage utility in \eqref{eqn:CL_cov_2}). By tuning $\mu$ properly, we can achieve a good trade-off between coverage and capacity performance. \label{prob:LB} \end{problem} % \subsection{BS-Based Antenna Tilt Optimization and Power Allocation}\label{subsec:AO} Given fixed $(\hat{\bm{q}},\hat{\bm{b}})$, we compute the inter-cluster power sharing factors $\bm{\beta}$, given by $\beta_c:=\hat{q}_c/\sum_{c'\in\set{C}_{b_c}}\hat{q}_{c'}$ for $c\in\set{C}$. We denote the cross-link coupling matrix depending on $\bm{\theta}$ by $\V_{\ve{\theta}}$.
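As a small worked example of this computation (toy power values assumed purely for illustration): suppose BS $n$ serves the clusters $\set{C}_n=\{1,2\}$ with $\hat{q}_1=0.6$ and $\hat{q}_2=0.3$. Then
\begin{equation*}
\beta_1=\frac{0.6}{0.6+0.3}=\frac{2}{3}, \qquad \beta_2=\frac{0.3}{0.6+0.3}=\frac{1}{3},
\end{equation*}
so that $r_n=\hat{q}_1/\beta_1=\hat{q}_2/\beta_2=0.9$ recovers the BS power budget, consistent with the definition $\beta_c:=q_c/r_{b_c}$ in Section \ref{subsec:powFactor}.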
In the following we formulate the BS-based max-min utility balancing problem such that it has the same physical meaning as the problem stated in \eqref{eqn:LB_OP}. We then introduce the BS-based joint utility expressed in terms of $(\bm{r}, \bm{\theta})$. \subsubsection{BS-Based Max-Min Utility Balancing}\label{subsubsec:AO_maxmin} To be consistent with our objective function $C^{(\text{u})}(P^{\text{max}})$ in \eqref{eqn:LB_OP}, we transform the cluster-based optimization problem into the BS-based optimization problem: % \begin{problem}[BS-Based Utility Balancing] \begin{align} C^{(\text{u})}&(P^{\text{max}})=\max\limits_{\bm{r}\geq 0, \bm{\theta}\in\Theta^N} \min\limits_{c\in\set{C}} \frac{U_c^{(\text{u})}(\bm{r},\bm{\theta})}{\gamma_c}\nonumber\\ &=\max\limits_{\bm{r}\geq 0, \bm{\theta}\in\Theta^N} \min\limits_{n\in\set{N}}\left(\min\limits_{c\in\set{C}_n}\frac{U_c^{(\text{u})}(\bm{r},\bm{\theta})}{\gamma_c}\right)\nonumber\\ & = \max\limits_{\bm{r}\geq 0, \bm{\theta}\in\Theta^N} \min\limits_{n\in\set{N}} \widehat{U}_n^{(\text{u})}(\bm{r},\bm{\theta}), \mbox{ s.t. } \|\bm{r}\|\leq P^{\text{max}} \label{eqn:maxmin_AO} \end{align} \label{prob:AO} \end{problem} where $\Theta$ denotes the predefined space of antenna tilt configurations. \subsubsection{BS-Based Joint Utility}\label{subsubsec:AO_joinyUtility} It is shown in \eqref{eqn:maxmin_AO} that the cluster-based problem is transformed into the BS-based problem by defining \begin{align} \widehat{U}_n^{(\text{u})}(\bm{r},\bm{\theta})&:=\min_{c\in\set{C}_n}\frac{U_c^{(\text{u})}(\bm{r},\bm{\theta})}{\gamma_c}= \frac{r_n}{\widehat{\set{J}}_n^{\ul}(\bm{r}, \bm{\theta})}\label{eqn:AO_utility_1}\\ \widehat{\set{J}}_n^{\ul}(\bm{r}, \bm{\theta}) &:= \max_{c\in\set{C}_n} \frac{\gamma_c}{\beta_c} \set{J}_c^{\ul}(\bm{r}, \bm{\theta}), \label{eqn:AO_utility_2} \end{align} where $\set{J}_c^{\ul}(\bm{r}, \bm{\theta})$ is obtained from $\set{J}_c^{\ul}(\bm{q}, \bm{b})$ in \eqref{eqn:LB_utility_2} by substituting $\bm{q}$ with $\bm{q}:=\ma{B}_{\ve{\beta}}^T\bm{r}$, and $\tilde{\ma{V}}_{\bm{b}}$ with $\tilde{\ma{V}}_{\bm{\theta}}$. Note that \eqref{eqn:AO_utility_1} is derived by applying the inter-cluster sharing factor such that $r_n:=q_c/\beta_c$ for $n=b_c$. Due to lack of space we omit the details of the individual per-BS capacity and coverage utilities corresponding to the cluster-based utilities \eqref{eqn:CL_cap_1} and \eqref{eqn:CL_cov_1}. % % % % \section{Optimization Algorithm}\label{sec:OPAlgor} We develop our optimization algorithm based on the fixed-point iteration proposed by Yates \cite{yates95}, exploiting the properties of monotone and strictly subhomogeneous functions. \subsection{MSS Functions and Fixed-Point Iteration}\label{subsec:contraction} The vector function $\bm{f}: {\field{R}}_+^K\mapsto {\field{R}}_+^K$ of interest has the following two properties: \begin{itemize} \item {\it Monotonicity}: $\bm{x}\leq \bm{y}$ implies $\bm{f}(\bm{x})\leq\bm{f}(\bm{y})$. \item {\it Strict subhomogeneity}: for each $\alpha>1$, $\bm{f}(\alpha \bm{x})<\alpha\bm{f}(\bm{x})$. \end{itemize} A function satisfying the above two properties is referred to as {\it monotone and strictly subhomogeneous (MSS)}. When the strict inequality is relaxed to a weak inequality, the function is said to be {\it monotone and subhomogeneous (MS)}. \begin{theorem}\cite{nuzman2007contraction} Suppose that $\bm{f}: {\field{R}}_+^K\mapsto {\field{R}}_+^K$ is MSS and that $\bm{h}(\bm{x})=\bm{x}/l(\bm{x})$, where $l:{\field{R}}_+^K \mapsto {\field{R}}_+$ is MS.
For each $\theta>0$, there is exactly one eigenvector $\bm{v}$ and associated eigenvalue $\lambda$ of $\bm{f}$ such that $l(\bm{v})=\theta$. Given an arbitrary $\theta$, repeated iteration of the mapping \begin{equation} \bm{g}(\bm{x})=\theta \bm{f}(\bm{x})/l(\bm{f}(\bm{x})) \label{eqn:fixedpointiteration} \end{equation} converges to the unique fixed point $\bm{v}$ with $l(\bm{v})=\theta$. \label{Theoremmapping} \end{theorem} The fixed-point iteration in \eqref{eqn:fixedpointiteration} is used to obtain the solution of the following max-min utility balancing problem \begin{equation} \max_{\bm{p}}\min_{k\in\set{K}} U_k(\bm{p}), \mbox{ s.t. } \|\bm{p}\|\leq P^{\text{max}} \label{eqn:prob_maxmin_1} \end{equation} where the utility function can be defined as $U_k(\bm{p}):= p_k/f_k(\bm{p})$. \subsection{Joint Optimization Algorithm}\label{subsec:JointOptAlgor} We aim at jointly solving both problems by iteratively optimizing $(\bm{q}, \bm{b})$ in Problem \ref{prob:LB} and $(\bm{r},\bm{\theta})$ in Problem \ref{prob:AO} with the fixed-point iteration. In the following we present some properties that are required to solve the problem efficiently and to guarantee the convergence of the algorithm. \subsubsection{Decoupled Variables in Uplink} In the uplink, the variables $\bm{b}$ and $\bm{\theta}$ are decoupled in the interference functions \eqref{eqn:LB_utility_2} and \eqref{eqn:AO_utility_2}, i.e., $\set{J}_c^{\ul}(\bm{q}, \bm{b})=\set{J}_c^{\ul}(\bm{q}, b_c)$ and $\widehat{\set{J}}_n^{\ul}(\bm{r}, \bm{\theta})=\widehat{\set{J}}_n^{\ul}(\bm{r}, \theta_n)$. Thus, we can decompose the BS assignment (or tilt optimization) problem into sub-problems that can be solved independently for each cluster (or BS), and the interference functions can be modified as functions of the power allocation only: \begin{align} \set{J}_c^{\ul}(\bm{q})&:=\min_{b_c\in\set{N}} \set{J}_c^{\ul}(\bm{q}, b_c)\label{eqn:modi_inter_1}\\ \widehat{\set{J}}_n^{\ul}(\bm{r})&:=\min_{\theta_n\in\Theta} \widehat{\set{J}}_n^{\ul}(\bm{r}, \theta_n) \label{eqn:modi_inter_2} \end{align} \subsubsection{Standard Interference Function} The modified interference functions \eqref{eqn:modi_inter_1} and \eqref{eqn:modi_inter_2} are \textit{standard}. Using the following three properties: 1) an affine function $\bm{\set{I}}(\bm{p}):=\bm{V}\bm{p}+\bm{\sigma}$ is standard, 2) if $\bm{\set{I}}(\bm{p})$ and $\bm{\set{I}}'(\bm{p})$ are standard, then $\mu\bm{\set{I}}(\bm{p})+(1-\mu)\bm{\set{I}}'(\bm{p})$ is standard for $\mu\in[0,1]$, and 3) if $\bm{\set{I}}(\bm{p})$ and $\bm{\set{I}}'(\bm{p})$ are standard, then $\bm{\set{I}}^{\text{min}}(\bm{p})$ and $\bm{\set{I}}^{\text{max}}(\bm{p})$ are standard, where $\bm{\set{I}}^{\text{min}}(\bm{p})$ and $\bm{\set{I}}^{\text{max}}(\bm{p})$ are defined as $\set{I}_j^{\text{min}}(\bm{p}):=\min\{\set{I}_j(\bm{p}), \set{I}_j'(\bm{p})\}$ and $\set{I}_j^{\text{max}}(\bm{p}):=\max\{\set{I}_j(\bm{p}), \set{I}_j'(\bm{p})\}$, respectively \cite{yates95}, we can easily prove that \eqref{eqn:modi_inter_1} and \eqref{eqn:modi_inter_2} are standard interference functions. Substituting \eqref{eqn:modi_inter_1} and \eqref{eqn:modi_inter_2} into Problem \ref{prob:LB} and Problem \ref{prob:AO}, and defining $U_c^{(\text{u})}(\bm{q}):=q_c/\set{J}_c^{\ul}(\bm{q})$ and $U_n^{(\text{u})}(\bm{r}):=r_n/\widehat{\set{J}}_n^{\ul}(\bm{r})$, we can write both problems within the general framework of the max-min fairness problem \eqref{eqn:prob_maxmin_1}: \begin{itemize} \item[]Problem 1.
$\max_{\bm{q}\geq 0}\min_{c\in\set{C}} U_c^{(\text{u})}(\bm{q})/\gamma_c$, s.t. $\|\bm{q}\|\leq P^{\text{max}}$. \item[]Problem 2. $\max_{\bm{r}\geq 0}\min_{n\in\set{N}} U_n^{(\text{u})}(\bm{r})$, s.t. $\|\bm{r}\|\leq P^{\text{max}}$. \end{itemize} % The decoupling of the variables in the uplink and the fact that the utilities are based on standard interference functions enable us to solve each problem efficiently in two iterative steps: 1) find the optimal variable $b_c$ (or $\theta_n$) for each cluster $c$ (or each BS $n$) independently, and 2) solve the max-min balancing power allocation problem with the fixed-point iteration. % \subsubsection{Connections between The Two Problems} Problem \ref{prob:LB} and Problem \ref{prob:AO} have the same objective $C^{(\text{u})}(P^{\text{max}})$ as stated in \eqref{eqn:LB_OP} and \eqref{eqn:maxmin_AO}, i.e., given the same variables $(\hat{\bm{q}}, \hat{\bm{b}}, \hat{\bm{r}}, \hat{\bm{\theta}})$ and using \eqref{eqn:AO_utility_1}, we have $\min_{c\in\set{C}} U_c^{(\text{u})}/\gamma_c=\min_{n\in\set{N}} \widehat{U}_n^{(\text{u})}$. Both problems are under the same sum power constraint. However, the convergence of the two-step iteration requires two more properties: 1) the BS power budget $\bm{r}$ derived by solving Problem \ref{prob:AO} at the previous step should not be violated by the cluster power allocation $\bm{q}$ found by optimizing Problem \ref{prob:LB}, and 2) when optimizing Problem \ref{prob:AO}, the inter-cluster power sharing factors $\bm{\beta}$ should be consistent with the cluster power allocation $\bm{q}$ derived in Problem \ref{prob:LB}. To fulfill the first requirement, we introduce for Problem \ref{prob:LB} the per-BS power constraints $P_n^{\text{max}}$, set equal to the BS power budgets $r_n$ derived in Problem \ref{prob:AO}. We also propose a scaled version of the fixed-point iteration, similar to the one proposed in \cite{nuzman2007contraction}, to iteratively scale the cluster power vector and achieve the max-min utility boundary under per-BS power budget constraints, as stated below. \begin{equation} q_c^{(t+1)} =\frac{\gamma_c\set{J}_c^{\ul}(\bm{q}^{(t)})}{\|\bm{B}\bm{\set{J}}^{\ul}(\bm{q}^{(t)}) \oslash {\bm{P}^{\text{max}}}^{(t)}\|_{\infty}} \label{eqn:FP_LB} \end{equation} where $\oslash$ denotes the element-wise division of vectors, $\|\cdot\|_{\infty}$ denotes the maximum norm, and ${\bm{P}^{\text{max}}}^{(t)}:=\bm{r}^{(t)}$. To fulfill the second requirement, once $\bm{q}^{(t+1)}$ is derived, the power sharing factors $\bm{\beta}$ need to be updated for solving Problem \ref{prob:AO} at the next step, given by \begin{equation} \bm{\beta}^{(t+1)}:=\bm{R}^{-1}\bm{q}^{(t+1)}, \mbox{where } \bm{R}=\mathop{\mathrm{diag}}\{\bm{B}^T\bm{r}^{(t)}\} \label{eqn:FP_LB_beta} \end{equation} The scaled fixed-point iteration to optimize Problem \ref{prob:AO} is given by \begin{equation} r_n^{(t+1)}= \frac{P^{\text{max}}}{\|\bm{\widehat{\set{J}}}^{\ul}(\bm{r}^{(t)})\|}\cdot \widehat{\set{J}}_n^{\ul}(\bm{r}^{(t)}) \label{eqn:FP_AO_1} \end{equation} % The joint optimization algorithm is given in Algorithm \ref{alg:optim-algor}.
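As a schematic illustration of the cluster-side step (a sketch only, not the exact implementation; the joint interference function is abstracted as a callable and all other inputs are assumed given), the per-cluster assignment search of \eqref{eqn:modi_inter_1} followed by the scaled fixed-point update \eqref{eqn:FP_LB} could look as follows:
\begin{verbatim}
# Schematic Python sketch of one cluster-side iteration: decoupled
# per-cluster BS-assignment search followed by the scaled fixed-point
# power update.  J(q, c, b) abstracts the joint interference function
# of cluster c under assignment b; B (N x C BS/cluster assignment),
# gamma (utility targets) and r (per-BS budgets) are assumed given.
import numpy as np

def cluster_step(q, J, B, gamma, r, bs_candidates):
    C = len(q)
    J_min = np.empty(C)
    b_opt = np.empty(C, dtype=int)
    for c in range(C):                     # decoupled search per cluster
        vals = [J(q, c, b) for b in bs_candidates]
        i = int(np.argmin(vals))
        b_opt[c], J_min[c] = bs_candidates[i], vals[i]
    # scaled fixed-point update: normalize by the most loaded BS so
    # that no per-BS power budget r_n is exceeded
    load = (B @ J_min) / r
    q_next = gamma * J_min / np.max(load)
    return q_next, b_opt
\end{verbatim}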
% \begin{algorithm}[t] \caption{Joint Optimization of Problem \ref{prob:LB} and \ref{prob:AO}} \label{alg:optim-algor} \begin{algorithmic}[1] \STATE broadcast the information required for computing $\bm{V}$, the predefined constraint $P^{\text{max}}$ and the thresholds $\epsilon_1,\epsilon_2,\epsilon_3$ \STATE arbitrary initial power vector $\bm{q}^{(t)}>0$ and iteration step $t:=0$ \REPEAT[joint optimization of Problem \ref{prob:LB} and \ref{prob:AO}] \REPEAT[fixed-point iteration for every cluster $c\in\set{C}$] \STATE broadcast $\bm{q}^{(t)}$ to all base stations \FOR{all assignment options $b_c \in \set{N}$} \STATE compute $\set{J}_c^{\ul}(\bm{q}^{(t)}, b_c)$ with \eqref{eqn:LB_utility_2} \ENDFOR \STATE compute $\set{J}_c^{\ul}(\bm{q}^{(t)})$ with \eqref{eqn:modi_inter_1} and update $b_c^{(t+1)}$ \STATE update $q_c^{(t+1)}$ with \eqref{eqn:FP_LB} \STATE $t := t+1$ \UNTIL{convergence: $\bigl| q_c^{(t+1)} - q_c^{(t)}\bigr| / q_c^{(t)} \leq \epsilon_1$} \STATE update $\bm{\beta}^{(t)}$ with \eqref{eqn:FP_LB_beta} % \REPEAT[fixed-point iteration for every BS $n\in\set{N}$] \STATE broadcast $\bm{r}^{(t)}$ to all base stations \FOR{all antenna tilt options $\theta_n \in \Theta$} \STATE compute $\widehat{\set{J}}_n^{\ul}(\bm{r}^{(t)}, \theta_n)$ with \eqref{eqn:AO_utility_2} \ENDFOR \STATE compute $\widehat{\set{J}}_n^{\ul}(\bm{r}^{(t)})$ with \eqref{eqn:modi_inter_2} and update $\theta_n^{(t+1)}$ \STATE update $r_n^{(t+1)}$ with \eqref{eqn:FP_AO_1} \STATE $t := t+1$ \UNTIL{convergence: $\bigl| r_n^{(t+1)} - r_n^{(t)}\bigr| / r_n^{(t)} \leq \epsilon_2$} \STATE update ${P_n^{\text{max}}}^{(t)}:=r_n^{(t)}$ \STATE compute $l^{(t+1)}:=\min_{n\in\set{N}} \widehat{U}^{(\text{u})}_n(\bm{r}^{(t)})$ \UNTIL{convergence: $|l^{(t+1)}-l^{(t)}|/l^{(t)}\leq\epsilon_3$} \end{algorithmic} \end{algorithm} % \section{Uplink-Downlink Duality}\label{sec:Duality} We stated the joint optimization problem in the uplink in Section \ref{sec:ProbForm} and proposed an efficient solution in Section \ref{sec:OPAlgor} by exploiting the decoupled structure of $\bm{V}$ with respect to the variables $\bm{\theta}$ and $\bm{b}$. The downlink problem, due to the coupled structure of $\bm{V}^T$, is more difficult to solve. As an extended discussion, we address the relationship between the uplink and the downlink problems, and propose a sub-optimal downlink solution that can be derived from the uplink solution. Let us consider the cluster-based max-min capacity utility balancing problem in Section \ref{subsubsec:LB_A} as an example. In the downlink the optimization problem is written as \begin{align} \vspace{-0.2em} \max_{\bm{q}, \bm{b}}\min_c &\frac{U_c^{(\text{d},1)}(\bm{q}, \bm{b})}{\gamma_c}, \mbox{s.t. } \|\bm{q}\|_1\leq P^{\text{max}}\nonumber\\ \mbox{where } & U_c^{(\text{d},1)} :=\frac{q_c}{\left[\bm{\Psi}\bm{A}\V_{\ve{b}}^T\ma{A}_{\ve{\alpha}}^T\bm{q}+\bm{\Psi}\bm{z}^{(\text{d})}\right]_c} \label{eqn:LB_dl} \vspace{-0.2em} \end{align} The cluster-based received noise is written as $\bm{z}^{(\text{d})}:=\bm{A}\bm{\sigma}^{(\text{d})}$. In the following we present a virtual dual uplink network, in terms of the feasible utility region, for the downlink network in \eqref{eqn:LB_dl} via Perron-Frobenius theory, such that the solution of problem \eqref{eqn:LB_dl} can be derived by solving the uplink problem \eqref{eqn:LB_ul} with the algorithm introduced in Section \ref{sec:OPAlgor}.
% \begin{proposition} Define a virtual uplink network where the link gain matrix is modified as $\bm{W}_{\bm{b}}:=\mathop{\mathrm{diag}}\{\bm{\alpha}\}\V_{\ve{b}}\mathop{\mathrm{diag}}^{-1}\{\bm{\alpha}\}$, i.e., $w_{lk}:=v_{lk}\frac{\alpha_l}{\alpha_k}$, and the received uplink noise is denoted by $\bm{\sigma}^{(\text{u})}:=[{\sigma^2_1}^{(\text{u})}, \ldots, {\sigma^2_K}^{(\text{u})}]^T$, where ${\sigma_k^2}^{(\text{u})}:=\frac{\Sigma_{\text{tot}}}{|\set{K}_{c_k}|\cdot C}$ for $k\in\set{K}$, and assume $\Sigma_{\text{tot}}:=\|\bm{\sigma}^{(\text{u})}\|_1=\|\bm{\sigma}^{(\text{d})}\|_1$ (which means that the total noise power is distributed equally among the clusters, while within each cluster it is distributed equally among the subordinate users). The dual uplink problem of problem \eqref{eqn:LB_dl} is given by \begin{align} \vspace{-0.2em} \max_{\bm{q},\bm{b}}\min_c & \frac{U_c^{(\text{u},1)}(\bm{q}, \bm{b})}{\gamma_c}, \mbox{s.t. } \|\bm{q}\|_1\leq P^{\text{max}}\nonumber\\ \mbox{where } & U_c^{(\text{u},1)}:=\frac{q_c}{\left[\bm{\Psi}\bm{A}\bm{W}_{\bm{b}}\ma{A}_{\ve{\alpha}}^T\bm{q}+\bm{\Psi}\bm{z}^{(\text{u})}\right]_c} \label{eqn:LB_ul} \vspace{-0.2em} \end{align} where $\bm{z}^{(\text{u})}:=\bm{A}\bm{\sigma}^{(\text{u})}$. \label{prop:Duality} \end{proposition} \begin{proof} The proof is given in the Appendix. \end{proof} Note that the optimizer $\bm{b}^{\ast}$ for the BS assignment in the downlink can be equivalently found by minimizing the spectral radius of $\bm{\Lambda}^{(\text{u})}(\bm{b})$ in the uplink. Once $\bm{b}^{\ast}$ is found, the associated optimizer for the uplink power ${\bm{q}^{(\text{u})}}^{\ast}$ is given as the dominant right eigenvector of the matrix $\bm{\Lambda}^{(\text{u})}(\bm{b}^{\ast})$, while the associated optimizer for the downlink power ${\bm{q}^{(\text{d})}}^{\ast}$ is given as the dominant right eigenvector of the matrix $\bm{\Lambda}^{(\text{d})}(\bm{b}^{\ast})$. Proposition \ref{prop:Duality} provides an efficient approach to solve the downlink problem with two iterative steps (similar to the one proposed in \cite{BocheDuality06}): 1) for a fixed power allocation $\hat{\bm{q}}$, solve the uplink problem and derive the assignment $\bm{b}^{\ast}$ associated with the spectral radius of the extended coupling matrix $\bm{\Lambda}^{(\text{u})}$, and 2) for a fixed assignment $\hat{\bm{b}}$, update the power $\bm{q}^{\ast}$ as the solution of \eqref{eqn:DL_matrixEqua}. Although we are able to find a dual uplink problem for the downlink problem in \eqref{eqn:LB_dl} with our proposed utility functions \emph{under sum power constraints}, we are not able to construct a dual network with decoupled properties for the modified problem \emph{under per-BS power constraints} \eqref{eqn:FP_LB}. However, numerical experiments show that our approach to the downlink based on the proposed uplink solution does improve the network performance, although the duality does not exactly hold between the downlink problem and our proposed uplink problem under the per-BS power constraints. % % \section{Numerical Results}\label{sec:Simu} We consider a real-world urban scenario based on a pixel-based mobility model with a realistic collection of BS locations and a pathloss model for the city of Berlin. The data was assembled within the EU project MOMENTUM and is available at \cite{MOMENTUM}. We select 15 tri-sectored BSs in the downtown area. Users are uniformly distributed and are clustered based on their SINR distributions as shown in Fig.
\ref{fig:Berlin} (UEs assigned to each sector are clustered into groups and are depicted in distinct colors). The SINR threshold is defined as $-6.5$ dB and the power constraint per BS is 46 dBm. The 3GPP antenna model defined in \cite{3GPP36942} is applied. Fig. \ref{fig:convergence} illustrates the convergence of the algorithm. Our algorithm achieves the max-min utility balancing and improves the feasibility level $C^{(\text{u})}(P^{\text{max}})$ with each iteration step. In Fig.~\ref{fig:cov_cap_mu} we show that the trade-off between coverage and capacity can be adjusted by tuning the parameter $\mu$. By increasing $\mu$ we give higher priority to the capacity utility (which is proportional to the ratio between total useful power and total interference power), while for a better coverage utility (defined as the minimum of the SINRs) we can use a small value of $\mu$ instead. Figs. \ref{fig:coverage}, \ref{fig:capacity} and \ref{fig:power} illustrate the improvement in coverage and capacity performance and the reduction in energy consumption in both the uplink and the downlink achieved by the proposed algorithm, when the average number of users per BS is chosen from the set $\{15,20,25,30,35\}$. In Fig. \ref{fig:capacity} we show that the actual average SINR is also improved, although the capacity utility is defined as a lower bound on the average SINR. Fig. \ref{fig:power} illustrates that our algorithm is more energy efficient compared with the fixed BS power budget scenario. Compared to the near-optimal uplink solutions, smaller improvements are observed for the downlink solutions, as shown in Figs. \ref{fig:coverage}, \ref{fig:capacity} and \ref{fig:power}. This is because we derive the downlink solution by exploiting an uplink problem which is not exactly its dual due to the individual power constraints (as described in Section \ref{sec:Duality}). However, the sub-optimal solutions still provide significant performance improvements. % % % % % \section{Conclusions and Further Research}\label{sec:con} We presented an efficient and robust algorithmic optimization framework built on the utility model for joint optimization of the SON use cases of coverage and capacity optimization and load balancing. The max-min utility balancing formulation is employed to enforce fairness across clusters. We propose a two-step optimization algorithm in the uplink based on fixed-point iteration to iteratively optimize the per-BS antenna tilt and power allocation as well as the cluster-based BS assignment and power allocation. We then analyze the network duality via Perron-Frobenius theory, and propose a sub-optimal solution in the downlink by exploiting the solution in the uplink. Simulation results show significant improvements in the performance of coverage, capacity and load balancing in a power-efficient way, in both uplink and downlink. In follow-up work we will propose a more complex interference coupling model and an optimization framework where frequency band assignment is taken into account. We will also examine the suboptimality under more general forms of power constraints.
\begin{figure}[t] \centering \includegraphics[width=.5\textwidth]{BerlinReceivedSignalStrengthMap_v2} \caption{Berlin Scenario.} \label{fig:Berlin} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=.5\textwidth]{convergence} \caption{Algorithm convergence.} \label{fig:convergence} \end{figure} % \begin{figure}[ht] \centering \includegraphics[width=.5\textwidth]{cov_cap_mu} \vspace{-1em} \caption{Trade-off between utilities depending on $\mu$.} \label{fig:cov_cap_mu} \vspace{-1.5em} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=.5\textwidth]{coverage} \caption{Performance of proposed algorithm: coverage.} \label{fig:coverage} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=.43\textwidth]{capacity} \caption{Performance of proposed algorithm: capacity.} \label{fig:capacity} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=.43\textwidth]{power} \caption{Performance of proposed algorithm: per-BS power budget.} \label{fig:power} \end{figure} % \input{appendices} \subsection*{Acknowledgements} We would like to thank Dr. Martin Schubert and Dr. Carl J. Nuzman for their expert advice. \ifCLASSOPTIONcaptionsoff \newpage \fi % \bibliographystyle{IEEEtran}
\section{Introduction} Level design is a core feature of what defines a video game. When constructed correctly, it is a main determinant of player experience. From the designer's perspective, level design can be either a tedious but necessary step in the game's development or a creatively freeing process - sometimes it is both. Most levels are designed with the intent to teach the game's interactable space - the mechanics - to the player in a way that is (ideally) engaging, fun, visually pleasing, intuitive, and informative \cite{rogers2014level,koster2013theory,green2017press}. Levels designed for tutorial sections of the game create simplistic and low-risk environments. These levels are direct, and sometimes oblique, in their intention so that the player can grasp the core mechanics of the game as quickly as possible. As the player becomes more familiar with the mechanics and how they work together in the game's system, the levels should also increase in complexity and challenge. The general design of these levels in turn needs to be correspondingly complex and engaging, both visually and functionally~\cite{khalifa2019intentional,anthropy2014game}. Most games demonstrate each mechanic at least once throughout the entire level space, combining and ordering them in a way that builds on the player's current skill as they become more familiar with the game \cite{totten_2016,anthropy2014game}. However, designing levels to explore multiple combinations of mechanics is an arduous task for a level designer to undertake. While it is unlikely that a player would play all of these levels, creating this possibility space of levels would allow the level designer to hand-pick and order them in a way that makes the mechanics feel coherent. Furthermore, an adaptive game with a diverse set of levels would allow the player to explore different combinations of mechanics according to their own pace and preference. For example, if a player is having difficulties with a certain necessary mechanic, such as long jumps in most Super Mario games or the spin attack in most Legend of Zelda games, having a specific subset of levels with a focus on these more challenging mechanics would allow the player to develop their skill with the mechanic better than a single tutorial level~\cite{anthropy2014game}. In contrast, if a player has fully mastered a mechanic, the game could select a level that uses the mechanic in a more challenging situation or a level with an entirely different mechanic space for them to master next. Levels could be selected automatically from the generated level space and dynamically ordered in a way that adapts to the player's current skill level as opposed to a hard-coded level ordering~\cite{yannakakis2011experience}. Generating a diversity of levels that explore multiple mechanic combinations would save time in the design process and allow for more creative flexibility in a way that manually designed levels could not provide. This paper presents a system that seeks to combine human design and AI-driven design to enable mixed-initiative collaborative game level creation. Users can choose to start their work from a blank slate, add their own edits, and then have an AI back-end evolve their work towards a pre-defined objective. This objective function can be defined by minimalism in design, maximization of game mechanic coverage, overall quality, or any other feature that could contribute to the quality of the level.
Alternatively, users may select from a variety of AI suggestions and pre-generated samples to begin their work and then make changes as necessary. This design process is not limited to the initialization step of the level; the user and AI system can switch their roles as designers at any point in the creation process. Concurrently, the AI system will look at what its previous users have created and submitted, and ask new users to design levels that complement what's already there. With this design process, the mechanic space of a game can be fully explored and every combination of mechanics can be represented by a level. With a human-based rating system, the automated system can learn to design levels of better quality, and the human users can design levels that are missing from this mechanic combination space. This project demonstrates the mixed-initiative collaborative process through level design for the independent, Sokoban-like game `Baba is You' - a game whose mechanics are defined and modified by the level design itself and the player's interaction with it. Levels can be made by users, the AI, or a combination of both, and uploaded to the level database to be used for future creations and to improve the quality of the AI's objective function. \subsection{Baba is Y'all v1 (prototype)} \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{imgs/level_matrix.png} \caption{Baba is Y'all Version 1 Main Screen (from April 2020)} \label{fig:levelMat1} \end{figure} The first version of Baba is Y'all (BiY v1) was released officially on March 29th, 2020, and promoted chiefly on Twitter. This version served as a prototype and proof-of-concept system for mixed-initiative AI-assisted game content collaboration, specifically for designing levels in the game `Baba is You' (Arvi 'Hempuli' Teikari, 2017). This system was built on concepts from three different areas of content creation: \begin{itemize} \item \textbf{Crowdsourcing:} a model used by different systems that allows a large set of users to contribute toward a common goal provided by the system~\cite{brabham2013crowdsourcing}. For example, Wikipedia users participate to fill in missing information for particular content. \item \textbf{User content creation:} allows players to create levels for a game/system and upload them online to the level database for other players to play and enjoy - e.g., Super Mario Maker (Nintendo, 2015), Line Rider (inXile Entertainment, 2006), and LittleBigPlanet (Media Molecule, 2008). \item \textbf{Quality diversity:} the underlying technique behind our system. It ensures that the levels made by combining the first two concepts are both of good quality and diverse in terms of the feature space they are defined in~\cite{pugh2016quality}. For this system, the feature space is defined as the potential game mechanics implemented in each level. \end{itemize} The Baba is Y'all website (as shown in figure~\ref{fig:levelMat1}) was a prototype example of a mixed-initiative collaborative level design system. However, the site was limited by the steep learning curve required to interact with it~\cite{charity2020baba}. Features of the site were overwhelming to use and lacked cohesion, making the site difficult to navigate.
\subsection{Baba is Y'all v2 (updated release)} \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{imgs/dark_main.png} \caption{Baba is Y'all Version 2 Main Screen (as of September 2021)} \label{fig:levelMat2} \end{figure} The second version of Baba is Y'all\footnote{http://equius.gil.engineering.nyu.edu/} (BiY v2) was released on May 27th, 2021 and designed to have a more user-friendly setup. It was similarly promoted via Twitter and on mailing lists. This version includes a cleaner, more compact, and more fluid user interface for the entire website and consolidates many of the separate features from the BiY v1 site onto fewer pages for easier access. Three main webpages were created for this updated system. Unlike the previous version, which showed all of the mechanic combination levels (both from the database and unmade) in random order, the updated level selection page adds level tabs that separate levels into recently added (New), highest rated (Top), and levels with rules that had not been made yet (Unmade). A carousel scrolling feature shows nine levels at a time so as not to overwhelm the player with choices (as shown in figure~\ref{fig:levelMat2}). The level rating system and the search feature are also included on the main page as tabs. The personal level selection tab allows users to see their previously submitted levels and log in to their account to submit levels with their username as the author or co-author. The updated level editing page consolidates the user editing and the PCG level evolution onto one page. Users can easily switch between manually editing the level themselves and allowing the PCG back-end system to edit the level, pausing in between. Users can also select rule objectives for the system to evolve towards implementing. To fight the problem of blank canvas paralysis, users can start from a set of different types of levels (both PCG and user-made)~\cite{krall2012artist}. Once a level is successfully solved, users may name the level upon submission - further personalizing the levels and assigning authorship. A slideshow tutorial is provided for users and describes every feature and function of the site, replacing the walkthrough video featured on BiY v1. Users can also play a demo version of the `Baba is You' (Arvi 'Hempuli' Teikari, 2017) game to familiarize themselves with the game mechanics/rule space and how they interact with each other (game dynamics). For quick assistance, a helper tool is provided on the level editing page as a refresher on how to use the editing tool. In addition to updating the features and collecting more data about the levels created, we conducted a formal user study with 76 participants to gather information about which features they chose to use for their level creation process and their subjective opinion on using the site overall. This user study, as well as the general level statistics collected from the site's database, showed that our new interface better facilitated the user-AI collaborative experience, allowing the creation of more diverse levels.
\section{Background and Related Work} The Baba is Y'all system uses the following methods in the collaborative level design process: procedural content generation, to create new levels from the AI back-end; quality diversity, to maintain the different kinds of levels produced by the system and show the coverage of game mechanics across the levels; crowdsourcing, so the AI may learn to create new levels from previously submitted ``valid'' levels - either those made exclusively by users, by the system itself, or by a combination of both; and finally mixed-initiative AI, so that the user and the evolutionary algorithm can develop the level together. Each method is described in the following subsections. \subsection{Procedural Content Generation} Procedural content generation (PCG) is defined as the process of using a computer program to create content with limited or indirect user input \cite{shaker2016procedural}. Such methods enable an automated, quicker, and more efficient content creation process, and can also give rise to generation-driven aesthetics. PCG has been used in games from the 1980s' Rogue to its descendant genre of rogue-likes, seen in games such as Spelunky (Mossmouth, LLC, 2008) and Hades (Supergiant Games, 2020), as well as games that revolve around level and world generation such as Minecraft (Mojang, 2011) and No Man's Sky (Hello Games, 2016). PCG can be used to build levels, as in The Binding of Isaac (Edmund McMillen, 2011), enemy encounters, as in Phoenix HD (Firi Games, 2011), or items and weapons, as in Borderlands (Gearbox Software, 2009). In academia, PCG has been explored in many different game facets for generating assets \cite{ruela2017procedural, gonzalez2020generating}, mechanics \cite{khalifa2019general, togelius2008experiment,browne2010evolutionary}, levels \cite{snodgrass2016learning,charity2020mech}, boss fights~\cite{siu2016programming}, tutorials~\cite{khalifa2019intentional,green2018atdelfi}, or even other generators \cite{kerssemakers2012procedural,earle2021learning,earle2021illuminating,khalifa2020multi}. A plethora of AI methods underpin successful PCG approaches, including evolutionary search \cite{togelius2010search}, supervised and unsupervised learning \cite{summerville2018procedural,liu2021deep}, and reinforcement learning \cite{khalifa2020pcgrl}. The results of these implementations have led to PCG processes being able to generate higher-quality, more generalizable, and more diverse content. PCG is used in the Baba is Y'all system to allow the mutator module to create new `Baba is You' levels. \subsection{Quality Diversity} Quality-diversity (QD) search-based methods are seeing increasing use among both game researchers and AI researchers \cite{pugh2016quality,gravina2019procedural}. Quality-diversity techniques are search-based techniques that try to generate a set of diverse solutions while maintaining a high level of quality for each solution. A well-known and popular example is MAP-Elites, an evolutionary algorithm that uses a multi-dimensional map instead of a population to store its solutions~\cite{mouret2015illuminating}. This map is constructed by dividing the solution space into a group of cells based on pre-defined behavior characteristics. Any new solution found is evaluated not only for its fitness but also for its behavior characteristics, and is then placed in the corresponding cell of the MAP-Elites map. If the cell is not empty, both solutions compete and only the fitter solution survives.
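As a generic illustration of this cell-placement logic (a minimal sketch, not the Baba is Y'all implementation; the behavior-descriptor and fitness functions are assumed to be supplied by the surrounding system):
\begin{verbatim}
# Generic MAP-Elites insertion sketch (Python).  `descriptor` maps a
# candidate solution to the key of its behavior-characteristic cell,
# and `fitness` scores it; both are assumed given.
def try_insert(archive, solution, descriptor, fitness):
    key = descriptor(solution)            # cell the solution lands in
    score = fitness(solution)
    incumbent = archive.get(key)          # archive is a dict of cells
    if incumbent is None or score > incumbent[1]:
        archive[key] = (solution, score)  # keep only the fitter elite
        return True
    return False
\end{verbatim}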
Because of the map maintenance and the cell competition, MAP-Elites can guarantee a map of diverse and high-quality solutions after a finite number of iterations through the generated population. The MAP-Elites algorithm has also been extended into Constrained MAP-Elites \cite{khalifa2018talakat, khalifa2019intentional, alvarez2019empowering}, Covariance Matrix Adaptation MAP-Elites (CMA-ME) \cite{fontaine2020covariance}, Monte Carlo Elites~\cite{sfikas2021monte}, and MAP-Elites via a Gradient Arborescence~\cite{fontaine2021differentiable}, among others. For this project, we use the Constrained MAP-Elites algorithm to maintain a diverse population of `Baba is You' levels, where the behavior characteristic space of the matrix is defined by the starting and ending rules of a level when it is submitted. \subsection{Crowdsourcing data and content} Relatively few games allow users to submit their own custom creations using the game's engine, as most games do not make their source code available, or even partially accessible, for modifications that add more content to the game. Whether through a built-in level editing system seen in games like Super Mario Maker (Nintendo, 2015), LittleBigPlanet (Media Molecule, 2008), or Line Rider (inXile Entertainment, 2006), or through a modding community that alters the source code of notable games such as Skyrim (Bethesda, 2011), Minecraft (Mojang, 2011), or Friday Night Funkin' (Ninjamuffin99, 2020), players can create their own content to enhance their experience and/or share it with others. In crowdsourcing, many users contribute data that can be used for a common goal. Some systems like Wikipedia rely entirely on content submitted by their user base in order to provide information to others on a given subject. Other systems like Amazon's Mechanical Turk crowdsource data collection, such as research experiments \cite{buhrmester2016amazon}, by outsourcing small tasks to multiple users for a small wage. An example of a game generator based on crowdsourced data is Barros et al.'s DATA Agent \cite{barros2018killed,green2018data}, which uses crowd-sourced open data such as Wikipedia to generate interesting point-and-click adventure games. What differentiates the Baba is Y'all system from other level editing systems or interactive PCG systems is that the Baba is Y'all site has a central goal: populate the MAP-Elites matrix with levels that cover all possible rule combinations. With this system, users may freely create the levels they want, but they may also work towards completing the global goal of making levels with a behavior characteristic that has not been covered before. Participation in this task is encouraged by the AI back-end system that keeps track of missing cells in the MAP-Elites matrix. \subsection{Mixed-Initiative AI} Mixed-initiative AI systems involve a co-creation of content between a human user and an artificially intelligent system~\cite{yannakakis2014mixed}. Previous mixed-initiative systems include selecting from and evolving a population of generated images \cite{secretan2008picbreeder,bontrager2018deep}, composing music \cite{mann2016ai,tokui2000music}, and creating game levels through suggestive feedback \cite{machado2019pitako}.
Mixed-initiative and collaborative AI level editors for game systems have been thoroughly explored in the field as well, through both direct and indirect interaction with the AI backend system \cite{shaker2013ropossum,liapis2013sentient,butler2013mixed,guzdial2018co,zhou2021toward,bhaumik2021lode,alvarez2019empowering,smith2010tanagra,delarosa2021mixed}. Since the release of the first Baba is Y'all prototype and paper~\cite{charity2020baba}, the implementation of mixed-initiative systems has grown in the game and AI research field. Bhaumik implemented an AI-constrained system with the Lode Encoder level editing tool, which only allows users to edit a level from a set of levels generated by a variational autoencoder - forcing users to edit only from a palette provided by the AI back-end tool~\cite{bhaumik2021lode}. Delarosa used a reinforcement learning agent in a mixed-initiative web app to collaboratively suggest edits to Sokoban levels \cite{delarosa2021mixed}. Zhou used levels generated with the AI-assisted level editor Morai Maker (a Super Mario level editor) to apply transfer learning for level editing to Zelda \cite{zhou2021toward}. These recent developments look more into how human users are affected by collaborating with these AI systems and how the collaboration can be improved by examining the dimensionality of the QD algorithm, the evolutionary process, or the human-system interaction itself \cite{alvarez2020exploring}. We look to incorporate these new perspectives into this updated iteration of Baba is Y'all and evaluate the effects through a user study. \section{System Description} The updated Baba is Y'all site's features were condensed into two main pages to make navigation and level editing easier and more intuitive: \begin{itemize} \item \textbf{The Home Screen:} contains the level matrix \textit{Map Module}, the search page, the \textit{Rating Module} page, and the \textit{User Profile} page. From here, users can also change the visuals of the site from light to dark mode, view the tutorial section or the site stats page by clicking on the Baba and Keke sprites respectively at the top of the page, and create a new level from scratch by clicking on the 'Create New Level' buttons placed on various subpages. Figure~\ref{fig:levelMat2} shows the starting page of the home screen. \item \textbf{The Level Editor Screen:} contains both the \textit{Editor Module} and the \textit{Mutator Module}. Users can also test their levels themselves or with the Keke solver by clicking on the Baba and Keke icons at the bottom of the canvas. Figure~\ref{fig:editor_screen} shows the starting page of the level editor screen. \end{itemize} In the following subsections, we explain the different modules that constitute these two main screens. Each module is used in the home screen, the level editor screen, or both. \subsection{Baba is You} `Baba is You' (Arvi ``Hempuli'' Teikari, 2019) is a puzzle game where players can manipulate the rules of a level and the properties of the game objects through Sokoban-like movements of pushing word blocks found on the map. These dynamically changing rules create interesting exploration spaces both for procedurally generating the levels and for solving them. The different combinations of rules can also lead to a large diversity of level types that can be made in this space. The general rules of the `Baba is You' game can be found in our previous paper~\cite{charity2020baba}.
To reiterate, there are three types of rule formats in the game: \begin{itemize} \item \textbf{X-IS-(KEYWORD)} a property rule stating that the game object class `X' has a certain property such as `WIN', `YOU', `MOVE', etc. \item \textbf{X-IS-X} a reflexive rule stating that the game object class `X' cannot be changed to another game object class. \item \textbf{X-IS-Y} a transformative rule changing all game objects of class `X' into game objects of class `Y'. \end{itemize} \begin{figure} \centering \includegraphics[width=0.4\linewidth]{imgs/simple_level.png} \caption{An example of a simple `Baba is You' level.} \label{fig:simple_map} \end{figure} The game sprites are divided into two main classes: the object class and the keyword class. Sprites in the object class represent the interactable objects on the map as well as the literal word representation for each object. Sprites in the keyword class represent the rules of the level that manipulate the properties of the objects. For example, Figure~\ref{fig:simple_map} shows four different object class sprites [BABA (object and corresponding word) and FLAG (object and corresponding word)] and three different keyword class sprites [IS (x2), YOU, and WIN]. The keyword class sprites are arranged in two rules: `BABA-IS-YOU', allowing the player to control all the Baba objects, and `FLAG-IS-WIN', indicating that reaching any flag object will make the player win the level. The system has a total of 32 different sprites: 11 object class sprites and 21 keyword class sprites. Because the game allows rule manipulation, object classes are arbitrary in the game, as they serve only to provide a variety of objects for rules to affect and for aesthetic purposes. \subsection{Game Module} The game module is responsible for simulating a `Baba is You' level. It also allows users to test the playability of levels, either by directly playing through the level themselves or by letting a solver agent attempt to solve it. This component is used on the home screen when a user selects a level to play and on the editor screen when a user tests their created level. Because the game rules are dynamic and can be altered by the player at any stage of the solution, the system keeps track of all the active rules at every state. Once the win condition has been met, the game module records the current solution, the active rules at the start of the level, and the active rules when the solution has been reached. These properties are saved to be used and interpreted by the Map module (section~\ref{sec:map_module}). The activated rules are used as the level's characteristic feature representation and saved as a chromosome to the MAP-Elites matrix. The game module provides an AI solver called 'KEKE' (based on one of the characters traditionally used as an autonomous 'NPC' in the game). KEKE uses a greedy best-first tree search algorithm that tries to solve the input level. The branching space is based on the five possible inputs a player can perform within the game: move left, move right, move up, move down, and do nothing. The algorithm uses a heuristic function based on a weighted average of the minimum Manhattan distances from the player-controlled objects to 3 different groups: keyword objects, objects associated with the `WIN' rule, and objects associated with the `PUSH' rule.
These groups were chosen based on their critical importance for solving a level: winning objects are required to complete the level, keyword objects allow for manipulation of the active rules, and pushable objects can directly and indirectly affect the layout of a level map and therefore the ability of player objects to reach winning objects. The heuristic function is represented by the following equation: \begin{equation} h = (n + w + p) / 3 \end{equation} where $h$ is the final heuristic value for placement in the priority queue, $n$ is the minimum Manhattan distance from any player object to the nearest winnable object, $w$ is the minimum Manhattan distance from any player object to the nearest word sprite, and $p$ is the minimum Manhattan distance from any player object to the nearest pushable object. As an update for this version of the system, the agent can run for a maximum of 10,000 iterations and can be stopped at any time. A user may also attempt to solve part of the level themselves, and the KEKE solver can pick up where the user left off to attempt to solve the remainder of the level. This creates a mixed-initiative approach to solving the levels in addition to editing them. However, even with this collaborative approach, the system still has limitations and difficulty solving levels with complex solutions - specifically solutions that require back-tracking across the level after a rule has been changed. The solver runs on the client side of the site and is limited by the capacity of the user's computational resources. Future work will look into improving the solver system to reduce its computational cost. We will also look into better solving algorithms, such as Monte Carlo Tree Search (MCTS) with reversibility compression~\cite{cook2021monte}, to improve the utility of the solver. \subsection{Editor Module} \begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{imgs/editor_screen.png} \caption{A screenshot of the level editor screen} \label{fig:editor_screen} \end{figure} The editor module of the system allows human users to create their own `Baba is You' levels, in the same vein as Super Mario Maker (Nintendo, 2015). Figure~\ref{fig:editor_screen} shows the editor window that is available to the user. The user can place and erase any game sprite or keyword at any location on the map using the provided tools. As a basis, the user can start modifying either a blank map, a basic map (a map with X-IS-YOU and Y-IS-WIN rules already placed, together with X and Y objects), a randomly generated map, or an elite level provided by the Map Module. Similar to Super Mario Maker (Nintendo, 2015), the created levels can only be submitted after they are tested for solvability by the human player or the AI agent. For testing the level, the editor module sends the level information to the game module to allow the user to play it. This updated version of the site also includes an undo and redo feature so that users may revert any changes they make. A selection and lasso feature is also available so users can select specific areas of the level and move them to another location. Unlike the previous version, all tiles are available to the user on the same screen, and the user may seamlessly transition from the editor module to the mutator module and vice versa for ease of access and better interactivity and collaboration between the AI system and the user.
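Returning briefly to the Game Module, the sketch below illustrates one possible implementation of the solver heuristic defined above; the level representation (lists of tile coordinates) and all function names are illustrative assumptions rather than the exact site code.
\begin{verbatim}
# Hypothetical sketch of the KEKE heuristic: h = (n + w + p) / 3,
# where each term is the minimum Manhattan distance from any
# player-controlled object to the nearest object of a group.
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def min_group_dist(players, group):
    # Fall back to 0 when a group is absent (an assumption; the
    # site code may handle missing groups differently).
    if not players or not group:
        return 0
    return min(manhattan(p, g) for p in players for g in group)

def keke_heuristic(players, winnables, words, pushables):
    n = min_group_dist(players, winnables)  # objects tied to 'WIN'
    w = min_group_dist(players, words)      # keyword/word sprites
    p = min_group_dist(players, pushables)  # objects tied to 'PUSH'
    return (n + w + p) / 3.0
\end{verbatim}
Lower heuristic values would be expanded first from the priority queue of the greedy best-first search.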
\subsection{Mutator Module}\label{sec:mutator_module} \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{imgs/evolver_screen.png} \caption{A screenshot of the level evolver page} \label{fig:evolver_screen} \end{figure} The Mutator module is a procedural content level generator. More specifically, the Baba is Y'all system uses an evolutionary level generator that defines a fitness function based on a version of the tile-pattern Kullback-Leibler Divergence (ETPKLDiv\footnote{https://github.com/amidos2006/ETPKLDiv}) algorithm~\cite{lucas2019tile}. Figure~\ref{fig:evolver_screen} shows the updated interface used by the evolver. As mentioned in the previous subsection, this version of the mutator module can interface seamlessly with the other modules to allow the user to move more easily between manual editing and evolutionary editing. The user can easily transfer the level from the editor module to the mutator module and vice versa. When switching between the editor module and the mutator module, the level loses its purely procedurally generated or purely human-designed quality and becomes a hybrid of the two - thus enabling mixed-initiative interaction between the algorithm and the user. The evolver interface provides the user with multiple customizations, such as the initialization method, the stopping criteria, evolution pausing, and the manual application of a mutation function. With these features, the user is not directly changing the evolution process itself, but instead guiding and limiting the algorithm towards generating the level they want. The ETPKLDiv algorithm uses a 1+1 evolution strategy, also known as a hillclimber, to improve the similarity between the current evolved levels and a reference level. The algorithm uses a sliding window of a fixed size to calculate the probability of each tile configuration (called a tile pattern) in both the reference level and the evolved level, and tries to minimize the Kullback-Leibler Divergence between both probability distributions. Like Lucas and Volz, we use a window size of $3 \times 3$ for the tile selection. This was done to maximize the probability of generating initial rules for a level, since rules in `Baba is You' are made up of 3 tiles. However, in our project, we used a 2+2 evolution strategy instead of the original 1+1 to allow slightly more diversity in the population~\cite{lucas2019tile}. We also modified the fitness function to allow it to compare with more than one level. The fitness value also includes the potential solvability of the level ($p$), the ratio of empty tiles ($s$), and the ratio of useless sprites ($u$). The final fitness equation for a level is as follows: \begin{equation}\label{eq:fitness} fitness_{new} = min(fitness_{old}) + u + p + 0.1 \cdot s \end{equation} where $fitness_{old}$ is the Kullback-Leibler Divergence fitness function from the Lucas and Volz work~\cite{lucas2019tile} compared to a reference level. The minimum operator is added because we are using multiple reference levels instead of one and we want to pick the fitness with respect to the most similar reference level. In the updated version of Baba is Y'all, we recalculate the ratio of useless objects ($u$) used in the original version's equation. The value $u$ is defined as the combined percentage of unnecessary object and word sprites in the level. This is broken up into two variables, $o$ and $w$, for the objects and words respectively. The $o$ value corresponds to the objects that are not required or predicted to act as a constraint or solution for the level.
The value for $o$ can be calculated as follows: \begin{equation} o = \frac{i}{j} \end{equation} where $i$ is the number of object sprites initialized in the level without a related word sprite and $j$ is the total number of object sprites initialized in the level. The $w$ value corresponds to the word sprites that have no associated object on the map (this does not apply to keyword class words such as ``KILL'' or ``MOVE''). The value for $w$ can be calculated as follows: \begin{equation} w = \frac{k}{l} \end{equation} where $k$ is the number of word sprites initialized in the level without a related object sprite and $l$ is the total number of word sprites initialized in the level. To combine the variables $o$ and $w$ into the single variable $u$, a constant weighting is applied: 0.85 to the $o$ variable and 0.15 to $w$, i.e., $u = 0.85 \cdot o + 0.15 \cdot w$. This places more weight on reducing the number of useless object sprites as opposed to useless word sprites, as word sprites can still be used to modify the properties of objects or transform other object sprites. The $u$ value is implemented in order to prevent noise within the level caused by object tiles that cannot be manipulated in any way or have no relevance to the level. A human-made level may include these ``useless'' tiles for aesthetic purposes or to give the level a theme - similar to the original `Baba is You' levels. However, the PCG algorithm optimizes towards efficient and minimalist levels, therefore ignoring the subjective aspect of a level's quality (which can be added later by the user). The playability of the level ($p$) is a binary constraint value that determines whether a level is potentially winnable or not. The value can be calculated as follows: \begin{equation} p = \begin{cases} 1, & \text{has [`X-IS-YOU' rule, `WIN' keyword]} \\ 0, & otherwise \end{cases} \end{equation} This is to ensure that any levels that are absolutely impossible to play or win are penalized in the population and less likely to be mutated and evolved from in future generations. We used a simple playability constraint check instead of checking for playability with the solver because running the solver is time-consuming. Also, the levels that the solver can actually solve usually end up being easy levels, due to the limited search space given to the best-first algorithm. The ratio of empty tiles ($s$) is the ratio of empty space tiles to all of the tiles in the level. It can be calculated as follows: \begin{equation} s = \frac{e}{t} \end{equation} where $e$ is the number of empty spaces in the level and $t$ is the total number of tiles found in the level. The value $s$ is multiplied by $0.1$ in equation~\ref{eq:fitness} to avoid heavily penalizing empty space, which would otherwise encourage levels to mutate towards an overabundance of similar tiles just to eliminate any empty space. The Mutator module is not run as a back-end process to find more levels; instead, it has to be invoked manually by the user. This is because some generated levels cannot be solved without human input. One might wonder why we do not generate a huge corpus of levels and later ask the users to test them for the system. This could result in the system generating a multitude of levels that are either impossible to solve or are solvable but not subjectively ``good'' levels - levels the user would not find pleasing or enjoyable.
This overabundance of ``garbage'' levels could lead to a waste of memory and a waste of human resources. By giving the user direct control over which levels from the generation algorithm are submitted, the system still guarantees that the levels are solvable and of sufficient quality, and it promotes using the tool in a mixed-initiative manner. Future work will explore implementing a fully autonomous generator and associated solver to expand the archive of levels without human input. \subsection{Objective Module}\label{sec:objective_module} \begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{imgs/obj_screen.png} \caption{A screenshot of the rule objective screen} \label{fig:objective_screen} \end{figure} In conjunction with the Mutator module (section~\ref{sec:mutator_module}), an Objective Module has been implemented to help guide the evolver towards generating levels that match selected objectives - or rules - set by either the Map Module or the user. As before, this nudges both the user and the evolver back-end towards creating levels with mechanic combinations that have not yet been made in the site database. Users can select from the table of mechanics which sets of rules to include in the level - whether at the start of the level, at the solution, or either. Initial rules can be detected automatically when the user or evolver edits the level; final rules can only be determined at the end of the level, when the solution has been found. Active rules are highlighted with a green backlight in the table and change accordingly when a rule is created or removed. The evolver also prioritizes levels that match as many of the selected rules as possible. A cascading function is used to rank the generated levels from the chromosome population: the evolver first evaluates how well a generated level corresponds to the selected objectives and then looks at the fitness function. With this, the evolver becomes more involved with expanding the level database for the site and actively tries to help the user fill in the missing levels. \subsection{Rating Module}\label{sec:rating_module} \begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{imgs/rating_screen.png} \caption{A screenshot of the rating screen with 2 levels shown} \label{fig:rating_screen} \end{figure} Like the original system, a rating for a single level is determined by comparison to another level within the site database. The user must determine the better level based on two qualities: level of challenge and quality of aesthetic design. A level that is considered `more challenging' could indicate that its solution takes longer to arrive at or is less intuitive and straightforward. A level that is considered to have `better design' is more visually pleasing and elegant in its map layout - a quality that is hard to generate automatically with AI. Users can choose between the two levels for each feature by shifting a slider towards one level or the other. \subsection{Map Module}\label{sec:map_module} \begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{imgs/map_screen.png} \caption{A screenshot of the map selection screen} \label{fig:select_screen} \end{figure} The Map module both stores all of the levels in the site database and recommends specific levels for the user to use in their own level creation process. The Map module is the core module of the system.
To maintain a population of levels that is both diverse and of high quality, we implemented the MAP-Elites algorithm for this module. \begin{table}[t] \caption{Chromosome Rule Representation} \centering \begin{tabular}{|p{0.2\linewidth}|p{0.7\linewidth}|} \hline Rule Type & Definition \\ \hline \hline X-IS-X & objects of class X cannot be changed to another class \\ X-IS-Y & objects of class X will transform to class Y \\ X-IS-PUSH & X can be pushed \\ X-IS-MOVE & X will autonomously move \\ X-IS-STOP & X will prevent the player from passing through it\\ X-IS-KILL & X will kill the player on contact\\ X-IS-SINK & X will destroy any object on contact\\ X-IS-[PAIR] & both rules 'X-IS-HOT' and 'X-IS-MELT' are present \\ X,Y-IS-YOU & two distinct object classes are controlled by the player \\ \hline \end{tabular} \label{tab:rrp} \end{table} When a level is submitted to be archived, the system uses the list of active rules at the start and at the end of the level as the behavior characteristic of the input level to determine its location in the map. There are 9 different rules checked for in each level, based on the possible rule mechanics that can be made in the Game module system. Table \ref{tab:rrp} shows the full list of possible rules. Since these rules can be active at the beginning or at the end, the number of behavior characteristics is 18 instead of 9, which provides us with a map of $2^{18}$ cells. The Map Module can recommend levels to start from when designing a new level. Like the Mutator Module (section~\ref{sec:mutator_module}), it also takes the Objective Module (section~\ref{sec:objective_module}) into consideration when selecting its recommendations. The Map Module can provide levels that most closely match the chosen objectives, drawn either from levels the user has previously made or from highly rated (and thus intuitively high-quality) ``elite'' levels. In this project, we use multiple populations per cell of the MAP-Elites map, similar to the Constrained MAP-Elites~\cite{khalifa2018talakat}. The quality of a level is determined by user ratings, collected by the Rating Module. \subsection{User Profiles} \begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{imgs/user_screen.png} \caption{A screenshot of the user profile screen for the user 'Milk'} \label{fig:profile_screen} \end{figure} The user profiles feature is the newest addition to the Baba is Y'all site. As in the original system, if a user creates a profile through the site's login system and submits a level, they get authorship attributed to the submitted level. Users can also find their previously made levels on the profile page - called ``My Levels'' - and replay them, edit them, or view each level's mechanic combination. A user's personal stats for their level submissions can also be viewed on the page, including the number of levels submitted, the number of rule combinations contributed, and their top rated level. This feature was implemented to provide more user agency and personalization on the site and to give users better access to their own submitted levels. Through the search page, players can search for specific levels by username or by level name. This creates a sense of authorship over each of the levels, even if a level wasn't designed with any human input (i.e., a level with PCG.js as the author), and encourages the collaborative nature of the site between AI and human. Users may also share links to site levels via the game page.
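To make the Map Module's indexing scheme concrete before moving on to the results, the sketch below derives a cell key from the 9 tracked rules at the start and at the end of a level and inserts the level into the archive; the data layout and the single-elite simplification are illustrative assumptions (the site actually keeps multiple levels per cell).
\begin{verbatim}
# Hypothetical sketch of the Map Module's MAP-Elites indexing.
# Each of the 9 tracked rules in the chromosome rule table can be
# active at the start and/or end of a level, giving 18 binary
# features and 2^18 possible cells.
RULES = ["X-IS-X", "X-IS-Y", "X-IS-PUSH", "X-IS-MOVE", "X-IS-STOP",
         "X-IS-KILL", "X-IS-SINK", "X-IS-[PAIR]", "X,Y-IS-YOU"]

def cell_key(start_rules, end_rules):
    # start_rules / end_rules: sets of rule names recorded by the
    # Game Module when the level is submitted with a valid solution.
    bits = [r in start_rules for r in RULES] + \
           [r in end_rules for r in RULES]
    return tuple(bits)

def try_insert(archive, level, start_rules, end_rules, rating):
    # Single elite per cell for brevity; the real archive stores a
    # small population per cell (constrained MAP-Elites), with
    # quality given by user ratings from the Rating Module.
    key = cell_key(start_rules, end_rules)
    stored = archive.get(key)
    if stored is None or rating > stored["rating"]:
        archive[key] = {"level": level, "rating": rating}
\end{verbatim}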
\section{Results} The following results were extracted from the entire Baba is Y'all v2 site and include data from levels made by users not involved with the study. \subsection{User and Author-based Data} All users on the Baba is Y'all site had the option of registering for a new account to easily find their saved work as well as to attribute personal authorship to any levels they submitted. Those who participated in the user study were given pre-made usernames in order to verify the levels they submitted from their responses and to protect their identities. These users only had to provide an email address to register for both the site and the survey. The site had a total of 727 unique registered users - only 78 (10\%) came from outside of the user study, while the rest of the users participated in the survey. \begin{figure}[ht] \centering \includegraphics[width=0.95\linewidth]{imgs/Level-types.png} \caption{Sample levels generated for the system. The left column is user generated levels, the middle column is evolver module levels, and the right column is mixed-initiative user and evolver levels} \label{fig:level_types} \end{figure} We looked into all the levels created by the users and divided them into three main categories based on how the mixed-initiative tool was used to create them (as shown in Figure~\ref{fig:level_types}): \begin{itemize} \item \textbf{User-Only levels:} created from a blank map exclusively by the human user without any AI assistance. \item \textbf{PCG-only levels:} created solely by the AI tool without any human input aside from choosing which tool to use and when. \item \textbf{Mixed-author levels:} involved both the human user and the AI tool in the creation process of the level. \end{itemize} \begin{table}[ht] \begin{center} \begin{tabular}{|c c c|} \hline Author Type & Number & \%\\ \hline\hline User-only & 103 & 66.45 \\ \hline PCG-only & 16 & 10.32\\ \hline Mixed-author & 36 & 23.23\\ \hline \hline Total & 155 & 100\\ \hline \end{tabular} \end{center} \caption{Authorship for levels submitted} \label{tab:level_author} \end{table} The majority of the submitted levels were user-only (66.45\%); however, almost a quarter (23.23\%) of the submitted levels had mixed authorship. Table \ref{tab:level_author} shows the full data for this area. Looking at this table, we notice that the number of submitted levels is much smaller than the total number of users ($155$ levels and $727$ users). This big difference in the numbers is due to releasing the system online with no security measures, which attracted a lot of bots that created multiple accounts so they could fill out the user survey via the link provided, but did not submit any levels. \subsection{Level-based Data} \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{site_graphs/rule_perc.png} \caption{Site results for the rule distribution across levels submitted} \label{fig:level_rule_dist} \end{figure} Looking into all the $155$ submitted levels, we found that only $74$ different cells in the MAP-Elites matrix were covered. This is less than 1\% of the total number of possible rule combinations ($2^{18}$ possible combinations). Figure~\ref{fig:level_rule_dist} shows the rule distributions over all of the submitted levels. The X-is-KILL rule was used the most, appearing in over half of the submitted levels, and the X-is-STOP rule was the second most used at 44.52\%. This may be because these rules create hazards for the player and add more depth to the level and solution.
Meanwhile, the X-is-[PAIR] rule was used the least, in only 12.9\% of the submitted levels. This is likely due to the lock-and-key nature of the rule combination, which requires more intentionally placed word blocks, and whose effect can also be accomplished with the X-is-SINK or X-is-KILL rules. \begin{table}[ht] \begin{center} \begin{tabular}{|c c c c|} \hline User Type & \# Rules & Sol. Length & Map Size (\# tiles) \\ \hline\hline User-only & 2.563 $\pm$ 2.19 & 25.834 $\pm$ 26.11 & 117.883 $\pm$ 50.84\\ \hline PCG-only & 1.00 $\pm$ 1.17 & 19.062 $\pm$ 14.68 & 95.437 $\pm$ 25.60\\ \hline Mixed-author & \textbf{2.833 $\pm$ 2.56} & \textbf{26.027 $\pm$ 20.36} & \textbf{127.722 $\pm$ 49.91}\\ \hline \end{tabular} \end{center} \caption{Averaged attributes for different types of created levels} \label{tab:avg_author} \end{table} \begin{figure}[ht] \centering \includegraphics[width=1.0\linewidth]{site_graphs/rule_dist.png} \caption{Rule distributions across the different authored levels} \label{fig:rule_dist} \end{figure} The relation between rules and the different author types is shown in Table~\ref{tab:avg_author}. Some levels may use no additional rules at all (containing only the required X-is-YOU and X-is-WIN rules). The mixed-author levels have the highest average number of rules per level ($2.833$), while PCG-only levels have the lowest average ($1$). The rule distributions for each author type are shown in Figure \ref{fig:rule_dist}. The PCG-authored levels had the least variability between rules, while the mixed-authored levels had the most variability. Mixed-author levels also had the highest average solution length and the highest average level size, with PCG levels having the lowest for both attributes. \section{User Study} The following results were extracted from a Google Form survey given to the experiment participants. Users were instructed to play a level already made on the site, create a new level using the level editor, test it, and finally submit it to the site. They were also given the option to go through the tutorial of the site if they were unfamiliar with the `Baba is You' game or needed assistance with interacting with the level editor tool. Of the $727$ users registered on the site, a total of $170$ survey responses were received; however, only $76$ of these responses were valid. Responses were validated by cross-checking the level ID that participants claimed to have authored in the survey against the saved level on the website. Many of the invalid responses contained levels that either did not exist in the database or were already claimed to be authored by another user. The following results are taken from the self-reported subjective survey given to the $76$ valid users. \subsection{Demographic Data}\label{sec:demographics} \begin{figure}[ht] \centering \includegraphics[width=1.0\linewidth]{hor_survey_graphs/freq_v2.png} \caption{A. Frequency for playing games; B. Frequency for designing levels for games} \label{fig:freq_des_play} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=1.0\linewidth]{hor_survey_graphs/pref_v2.png} \caption{Preference for solving or making puzzles} \label{fig:design_pref} \end{figure} Half of the users who completed the survey answered that they frequently play video games (more than 10 hours a week), with around 80\% of the users stating that they play for at least 2 hours a week (figure~\ref{fig:freq_des_play}).
Conversely, only 28.9\% of users responded that they spend 2 or more hours a week designing levels for games, with 40.8\% of users stating that they never design levels at all (figure~\ref{fig:freq_des_play}). When asked if they prefer to solve or make puzzles (figure~\ref{fig:design_pref}), 50\% of participants responded that they prefer to solve puzzles, while only 6.6\% preferred making them, and 40.8\% of users were split between the two. \begin{figure}[ht] \centering \includegraphics[width=1.0\linewidth]{hor_survey_graphs/experience_v2.png} \caption{A. Experience playing Sokoban; B. Experience with 'Baba is You'; C. Experience with AI-assisted level editing tools} \label{fig:exp_graph} \end{figure} We asked participants if they had ever played the original game `Baba is You' by Hempuli (either the jam version or the Steam release, as both contain the rules used in the Baba is Y'all site), whether they had played a Sokoban-like game (puzzle games with pushing-block mechanics), and whether they had experience with AI-assisted level editing tools. Figure~\ref{fig:exp_graph} shows the distribution of the users' answers to these questions. Only 30\% of participants had played the game before, while 22\% had heard of it but had never played it. For the rest, this study would be their first experience with the game. Interestingly enough, 96\% of the participants stated that they had played a Sokoban-like game, so we can infer that the learning curve would not be too steep for the new players. Concerning AI-assisted level editing tools, 75\% of users had never used them before, with 5.3\% stating they were unsure if they had ever used one - thus, AI collaboration would present a much newer and steeper learning curve for participants. \subsection{Self-Reported Site Interactions} \begin{figure}[ht] \centering \includegraphics[width=1.0\linewidth]{survey_graphs/feat.png} \caption{Survey results for users' reports on the features they used} \label{fig:feat_report} \end{figure} Figure \ref{fig:feat_report} shows the full list of features that participants interacted with on the site. Users were given the optional task of going through the tutorial section of the Baba is Y'all site to familiarize themselves with the mechanics of the original `Baba is You' game, the AI-assisted tools available to them through the level editor, and the site layout and navigation itself. 81.6\% of users went through this tutorial (whether fully or partially was not recorded). The second task for users was to play a level that had previously been submitted to the website database. 100\% of users were able to solve a level by themselves; however, 72.4\% of users also reported choosing to watch the Keke AI solver complete the submitted level. The third and final task for the participants was to submit their own `Baba is You' level using the level editor. Here, users were asked the most about their involvement with the AI system. Some users chose to create more than one level, so they may have had multiple experiences, and their design choices may not be mutually exclusive (i.e., using a blank level and also using an AI-suggested level). For the initial creation of the level, 88.2\% of users chose to start with a blank map. 9.2\% of users started with a level that had already been submitted to the level database - either a level that had been ranked as an elite level or a level created by the user themselves (in the case that they submitted more than one level during this study).
6.6\% of users started with a level that was suggested from the 'Unmade' page - ideally with the intent to make a level with a rule combination that had not been made yet, thus expanding the MAP-Elites rule combination matrix in the database. Unfortunately, we forgot to ask users in the survey if they started with the random level option that was also provided by the AI assistance tool, so we lack data to report on this statistic. For editing the level, 81.6\% of users reported editing a level completely by hand without any AI assistance. 27.6\% of users edited the level with help from either the evolver algorithm or the mutator functions provided by the AI assistance back-end. 19.7\% of users reported using the objective table to aid the evolver tool in creating the level. We attribute this low percentage to the fact that a large portion of users were unfamiliar with the system or with the `Baba is You' game overall. This - as well as the small pool of previously submitted levels available for comparison - made using the evolver tool towards specific goals too steep a task to accomplish and learn. Finally, when testing the level, 59.2\% of users reported using the Keke solver AI, and 72.4\% of users named their levels. While not required by the given tasks, we also asked participants about any extra site features they chose to explore. 23.7\% of users reported submitting a level rating from the 'Rate' page. 51.3\% of users reported using the 'Search' tool to search for specific levels (we did not ask what their search criteria were). Finally, 19.7\% of users reported using the 'Share Level' feature to share a submitted level link with others online. The least used interactions - 'Started with a database-saved map in the level editor', 'Started with a level suggestion from the Unmade page', and 'Used the objectives table to evolve levels' - were all related to the AI mixed-initiative features of the system. The first could be attributed to a lack of levels in the database overall (at the start of the experiment there were only around 40 available levels), leading to a lack of viable options for the user to choose from. However, the lack of usage of the other two features could be attributed to the opposite problem of having too many options to choose from - again a consequence of how few levels were available in the database, since almost every rule combination was still unmade. Trying to make a level with constrained parameters may also have been too steep a task for someone who was totally unfamiliar with the system or even with the `Baba is You' game overall. There was also no incentive for a player to create a level suggested by the system as opposed to making a level from scratch. We also did not explicitly instruct users to make a level from the suggested set, and instead allowed them to make whatever level they wanted with the editor - whether with the prompted ruleset or from their own ideas. \section{Discussion} \subsection{Data Analysis} It is clear from both the submitted level statistics of the site and the self-reported user survey that mixed authorship is not the preferred way for users to design levels. Many users would still prefer to have total control over their level design process from start to finish. For future work, we can look to limit user control and encourage more AI assistance with the design process, similar to the work done by Bhaumik et al.~\cite{bhaumik2021lode}.
The limitations of the AI back-end (both the evolver and the solver) may be at fault for the lack of AI interaction. The mutator and evolver systems depend on previously submitted levels and level ratings in order to ``learn'' how to effectively evolve levels towards high-quality design. As a result, the assistant tool is always learning what makes a ``good'' level from human input. If there is a lack of available data for the tool to learn from, the AI will be unable to create quality levels - making users less likely to submit mixed-initiative co-created levels, and thereby creating a negative feedback loop. The fitness function defined for the evolver and mutator tool may also be inadequate for level design. It could produce a level that is deemed ``optimal'' in quality by its internal definition, but that is actually sub-par in quality for a human user. Another flaw in the AI-collaboration system could be that users lacked direct control over the evolver and mutator; attempting to use them in the middle of creation might have been problematic, as it could destroy some of the level structures the users were working on. Future work could remedy this problem by giving users various mutation ``options'', similar to the AI selections in RLBrush \cite{delarosa2021mixed} and Pitako \cite{machado2019pitako}. Finally, the `Keke' AI solver was also lacking in performance: a few participants mentioned that the solver was unable to solve prototype levels that they themselves could solve in just a couple of moves. An improved AI solver would help with level creation efficiency. \subsection{User Comments and Feedback} We gave the participants opportunities to provide open feedback about their experience using the site in order to gather more subjective data as well as to collect suggestions for potential new features. Almost no users experienced technical difficulties or bugs that prevented them from using the site. The few that did mentioned formatting issues with the site caused by their browser (e.g., icons too close together, the helper gifs failing to load, font colors). However, one user mentioned that this issue may have occurred because they were using the site from their phone (we unfortunately did not instruct users to complete the study on a desktop or laptop). In the future, we will be sure to exhaustively test the site on as many browsers as possible - both desktop-based and mobile - to make it more accessible. Some users were confused by the tutorial and the amount of information it conveyed for the entire site, citing it as ``intimidating'', ``overwhelming'', and ``a bit complex''. However, other users noted a lack of information, saying the tutorial was ``not detailed'' or had ``sufficient information [...] but could have been delivered in a more comprehensible way.'' To make the game more accessible, we will most likely try to make the tutorial section less intimidating to new users by limiting the amount of information shown (possibly through a ``table of contents'', as suggested by one participant) while keeping it comprehensible enough to understand the level editor and tools. For feature suggestions, many users wished for larger maps and a larger vocabulary - like those found in the Steam release of the `Baba is You' game. Users also wished for a save feature that would allow them to make ``drafts'' of their level to come back to and edit later.
Many users also suggested a co-operative multiplayer feature for level editing and level solving - presumably with another human rather than an AI agent. \begin{figure}[ht] \centering \includegraphics[width=1.0\linewidth]{hor_survey_graphs/browser_v2.png} \caption{User feedback for likelihood to return using the site after the experiment} \label{fig:reuse_likelihood} \end{figure} While the statistics on the submitted levels were disappointing in terms of the involvement of the AI-assisting tool, we also asked users how likely they would be to continue using the site after the experiment. 38.2\% of users said they would continue to use the site, while 55.3\% said they would maybe use the site (figure~\ref{fig:reuse_likelihood}). Many users were optimistic and encouraging about the concept of incorporating AI and PCG technologies into level design - citing the project as a ``cool project'', ``a very unique experience'', a ``lovely game and experiment'', and ``very fun.'' At the time of writing, a few users did return, as their assigned 'Keke' usernames appeared as authors on the New page long after the study was completed. Most notably, the study participant Keke978 took up the username 'Jme7', contributed 28 more levels to the site after the study concluded, and currently holds the title for most levels submitted and most rule combinations on the site. Many users also provided us with constructive feedback on feature implementation and site usability, and with suggestions for how to further incorporate the AI back-end interactivity. As shown in figure~\ref{fig:exp_graph}, 70\% of users who played with the system had never played the game `Baba is You' and 75\% of people had never used an AI-assisted level editor tool before this experiment. Based on this information and on the users' willingness to complete the survey and provide constructive feedback, we can draw two conclusions: (1) the system stands alone, independent of `Baba is You', as an entertainment system; and (2) people with even limited AI-gaming experience, as long as they are not completely new to gaming, can be engaged by this project long enough to understand it, tinker around, and then give constructive feedback. \section{Conclusion and Future Work} The results from the user study have demonstrated both the benefits and limitations of a crowd-sourced, mixed-initiative, collaborative AI system. Currently, users still prefer to edit most of the content themselves, with minimal AI input - due to the lack of submitted content and ratings for the AI to learn from. Pretraining the AI before incorporating it into the full system would be recommended to create more intelligent systems that can effectively collaborate with their human partners in designing and editing content. This would lead to more helpful suggestions on the evolver's end as well as better designed levels overall. This project is the start of a much longer and bigger investigation into the concept of crowd-sourced mixed-initiative systems that can use quality diversity methods to produce content, and we have many more ideas to improve upon the Baba is Y'all system. As suggested by many participants in the user study, we would like to incorporate level design collaborations between multiple users and multiple types of evolutionary algorithms all at once to create levels.
Our system would take inspiration from collaboration tools such as LodeEncoder \cite{bhaumik2021lode}, RLBrush \cite{delarosa2021mixed}, and Roblox (Roblox Corporation, 2006). This would broaden the scope and possibilities of level design and development even further, allowing more creativity and evolutionary progress within the system. This collaboration setting will also open up a multitude of interesting problems to investigate, such as authorship. Outside of the `Baba is You' game, we would like to propose the development of an open-source framework to allow mixed-initiative crowd-sourced level design for any game or game clone. Such games could include Zelda, Pacman, Final Fantasy, Kirby, or any other game, as long as we have a way to differentiate between levels mechanically and can measure a minimum viable quality of levels. Adding more games to the mixed-initiative framework would lower the barrier of entry for players who may be unfamiliar with the independent game `Baba is You' but are very familiar with triple-A games produced by companies such as Nintendo. We would also like to propose a competition for the online `Keke' solver algorithm on challenging levels. In this competition, users would submit their own agents to solve the user-made and artificially created `Baba is You' levels. Ideally, this would not only improve the solver of the `Baba is Y'all' system but also introduce a novel agent capable of solving levels with dynamically changing content and rules - an area that has not been previously explored in the field. Development of the framework for this competition had already begun at the time of writing this paper. Finally, we would like to propose the creation of a fully autonomous level generator and solver that can act as a user of our system. This generator-solver pair would work in parallel to the current system's mixed-initiative approach, but with a focus on coverage, to exhaustively find and create levels for every combination of mechanics. With a redefined fitness function and an updated solver (possibly from the Keke Solver Competition), this could be more efficient than having users manually submit the levels, while still using content created by human users to maintain the mixed-initiative approach. There are many new directions in which we can take the Baba is Y'all system and the concept of crowd-sourced collaborative mixed-initiative level design as a whole. This project will hopefully serve as a stepping stone into the area and provide insight into how AI and users can work together on a crowd-sourced website to generate new and creative content. \section*{Acknowledgment} The authors would like to thank the Game Innovation Lab, Rodrigo Canaan, Mike Cook, and Jack Buckley for their feedback on the site in its beta version, as well as the numerous users who participated in the study and left feedback. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} Robots supporting people in their daily activities at home or at the workplace need to accurately and robustly perceive objects, such as containers, and their physical properties, for example when they are manipulated by a person prior to a human-to-robot handover~\cite{Sanchez-Matilla2020,Medina2016,Rosenberger2021RAL,Ortenzi2021TRO,Yang2021ICRA}. Audio-visual perception should adapt -- on-the-fly and with limited or no prior knowledge -- to changing conditions in order to guarantee the correct execution of the task and the safety of the person. For assistive scenarios at home, audio-visual perception should accurately and robustly estimate the physical properties (e.g., weight and shape) of household containers, such as cups, drinking glasses, mugs, bottles, and food boxes~\cite{Sanchez-Matilla2020,Ortenzi2021TRO,Liang2020MultimodalPouring,Modas2021ArXiv,Xompero2021_ArXiv}. However, the material, texture, transparency and shape can vary considerably across containers and also change with their content, which may not be visible due to the opaqueness of the container or occlusions, and hence should be inferred through the behaviour of the human~\cite{Sanchez-Matilla2020,Modas2021ArXiv,Xompero2021_ArXiv,Mottaghi2017ICCV,Duarte2020ICDL_EpiRob}. In this paper, we present the tasks and the results of the CORSMAL challenge at IEEE ICASSP 2022, supporting the design and evaluation of audio-visual solutions for the estimation of the physical properties of a range of containers manipulated by a person prior to a handover (see Fig.~\ref{fig:avsamples}). The specific containers and fillings are not known in advance, and the only priors are the sets of object categories ({drinking glasses}, {cups}, {food boxes}) and filling types ({water}, {pasta}, {rice}). Container mass estimation and container dimensions estimation are the two novel tasks of this challenge, and they complement the tasks of its previous version~\cite{Xompero2021_ArXiv}, such as the estimation of the container capacity and of the type, mass and amount of the content. We carefully defined a set of performance scores to directly evaluate and systematically compare the algorithms on each task. Moreover, to assess the accuracy of the estimations and visualise the safeness of human-to-robot handovers, we implemented a real-to-simulation framework~\cite{Pang2021ROMAN} that provides indirect, high-level evaluations of the impact of these tasks (see Fig.~\ref{fig:challengetasksdiagram}). The source code of the entries to the challenge and the up-to-date leaderboards are available at \mbox{\url{http://corsmal.eecs.qmul.ac.uk/challenge.html}}. \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{challenge_image.png} \caption{Sample video frames and audio spectrograms of people manipulating objects prior to handing them over to a robot.} \label{fig:avsamples} \end{figure} \begin{figure*}[t!] \centering \includegraphics[width=\textwidth]{diagram_tasks.eps} \caption{The challenge tasks feeding into the CORSMAL simulator~\cite{Pang2021ROMAN} to evaluate the impact of estimation errors. Given video frames and audio signals from the CORSMAL Containers Manipulation (CCM) dataset~\cite{Xompero2021_ArXiv,Xompero_CCM}, the results of T1 (filling level), T2 (filling type), and T3 (container capacity) are used to compute the filling mass, which is added to T4 (container mass) for estimating the mass of the object (container + filling). The estimated dimensions (T5) are used to visualise the container.
The simulator also uses object annotations, such as 6D poses over time, the true weight (container + filling), a 3D mesh model reconstructed offline with a vision baseline~\cite{Pang2021ROMAN}, and the frame where the object is ready to be grasped by the simulated robot arm, for performing and visualising the handover. } \label{fig:challengetasksdiagram} \end{figure*} \section{The tasks} \label{sec:tasks} In the scope of the challenge and based on the reference dataset~\cite{Xompero2021_ArXiv,Xompero_CCM}, containers vary in shape and size, and may be empty or filled with an unknown content at 50\% or 90\% of their capacity. We define a configuration as the manipulation of a container with a filling type and amount under a specific setting (i.e., background, illumination, scenario). The challenge features five tasks (Ts), each associated with a physical property to estimate for each configuration $j$. \begin{description} \item[Filling level classification (T1).] The goal is to classify the filling level ($\tilde{\lambda}^j$) as empty, 50\%, or 90\%. \item[Filling type classification (T2).] The goal is to classify the type of filling ($\tilde{\tau}^j$), if any, as one of these classes: 0 (no content), 1 (pasta), 2 (rice), 3 (water). \item[Container capacity estimation (T3).] The goal is to estimate the capacity of the container ($\tilde{\gamma}^j$, in mL). \item[Container mass estimation (T4).] The goal is to estimate the mass of the (empty) container ($\tilde{m}_{c}^j$, in g). \item[Container dimensions estimation (T5).] The goal is to estimate the width at the top ($\tilde{w}_t^j$, in mm) and at the bottom ($\tilde{w}_b^j$, in mm), and the height ($\tilde{h}^j$, in mm) of the container. \end{description} Algorithms designed for the challenge are expected to estimate these physical properties to compute the mass of the filling as \begin{equation} \tilde{m}_f^j = \tilde{\lambda}^j \tilde{\gamma}^j D(\tilde{\tau}^j), \label{eq:fillingmass} \end{equation} where $D(\cdot)$ selects a pre-computed density based on the classified filling type. The mass of the object is calculated as the sum of the mass of the empty container and the mass of the content, if any, i.e., $\tilde{m}^j = \tilde{m}_{c}^j + \tilde{m}_f^j$. \section{The evaluation} \label{sec:evaluation} \subsection{Data} CORSMAL Containers Manipulation (CCM)~\cite{Xompero2021_ArXiv,Xompero_CCM} is the reference dataset for the challenge and consists of 1,140 visual-audio-inertial recordings of people interacting with 15 container types: 5 drinking cups, 5 drinking glasses, and 5 food boxes. These containers are made of different materials, such as plastic, glass, and cardboard. Each container can be empty or filled with water, rice or pasta at two different levels of fullness: 50\% or 90\% with respect to the capacity of the container. In total, 12 subjects of different gender and ethnicity\footnote{An individual who performs the manipulation is referred to as \textit{subject}. Ethical approval (QMREC2344a) was obtained at Queen Mary University of London, and consent from each person was collected prior to data collection.} were invited to execute a set of 95 configurations as a result of the combination of containers and fillings, and for one of three manipulation scenarios. The scenarios are designed with an increasing level of difficulty caused by occlusions or subject motions, and are recorded with two different backgrounds and two different lighting conditions to increase the visual challenges for the algorithms.
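As a concrete illustration of how the estimates from T1--T4 are combined through Eq.~(\ref{eq:fillingmass}), a minimal sketch is given below; the density values for pasta and rice are placeholders (only water at 1 g/mL is fixed by the annotation), and all function and variable names are illustrative assumptions.
\begin{verbatim}
# Hypothetical sketch of Eq. (1): the filling mass from the estimated
# filling level (T1), filling type (T2) and container capacity (T3),
# plus the container mass (T4) for the total object mass.
DENSITY_G_PER_ML = {0: 0.0, 1: 0.4, 2: 0.9, 3: 1.0}  # pasta/rice are placeholders
LEVEL_FRACTION = {0: 0.0, 1: 0.5, 2: 0.9}             # empty, 50%, 90%

def filling_mass(level_class, filling_type, capacity_ml):
    return LEVEL_FRACTION[level_class] * capacity_ml \
           * DENSITY_G_PER_ML[filling_type]

def object_mass(container_mass_g, level_class, filling_type, capacity_ml):
    return container_mass_g + filling_mass(level_class, filling_type,
                                           capacity_ml)
\end{verbatim}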
The annotation of the data includes the capacity, mass, maximum width and height (and depth for boxes) of each container, and the type, level, and mass of the filling. The densities of pasta and rice are computed from the annotations of the filling mass, the capacity of the container, and the filling level for each container. The density of water is 1 g/mL. For validation, CCM is split into a training set (recordings of 9 containers), a public test set (recordings of 3 containers), and a private test set (recordings of 3 containers). The containers of each set are evenly distributed among the three categories. The annotations are provided publicly only for the training set. \subsection{Real-to-sim visualisation} The challenge adopts a real-to-simulation framework~\cite{Pang2021ROMAN} that complements the CCM dataset with a human-to-robot handover in the PyBullet simulation environment~\cite{coumans2019pybullet}. The framework uses the physical properties of a manipulated container estimated by a perception algorithm. The handover setup recreated in simulation consists of a 6 DoF robotic arm (UR5) equipped with a 2-finger parallel gripper (Robotiq 2F-85), and two tables. The simulator renders a 3D object model reconstructed offline by a vision baseline in manually selected frames with no occlusions~\cite{Pang2021ROMAN}. The weight of the object used by the simulator is the true, annotated value. We manually annotated the poses of the containers for each configuration of CCM every 10 frames and interpolated the intermediate frames. We also annotated the frame where the person started delivering the object to the robot arm. We use the annotated and interpolated poses to render the motion of the object in simulation and control the robot arm to approach the object at the annotated frame for the handover. If the robot is not able to reach the container before the recording ends, the last location of the container is kept for 2~s. When reaching the container, the simulated robot arm closes the gripper to 2~cm less than the object width to ensure good contact with the object, and applies an amount of force determined by the estimated weight of the object to grasp the container. Note that, in the scope of the challenge, we avoid simulating the human hands so that the object is fully visible and can be grasped by the robot arm. The simulator visualises whether the estimations enable the robot to successfully grasp the container without dropping or squeezing it. After grasping the container, the robot delivers it to a target area on a table via a predefined trajectory. \subsection{Scores} To provide sufficient granularity into the behaviour of the various components of the audio-visual algorithms and pipelines, we compute {13 performance scores} individually for the public test set (no annotations available to the participants), the private test set (neither data nor annotations available to the participants), and their combination. All scores are in the range $[0,1]$. With reference to Table~\ref{tab:scores}, the first 7 scores quantify the accuracy of the estimations for the 5 main tasks: filling level, filling type, container capacity, container width at the top, width at the bottom, and height, and container mass. The other 3 scores evaluate groups of tasks and assess the filling mass, the joint filling type and level classification, and the joint container capacity and dimensions estimation.
The last 2 scores are an indirect evaluation of the impact of the estimations (i.e., the object mass) on the quality of the human-to-robot handover and of the delivery of the container by the robot in simulation. \textbf{T1 and T2.} For filling level and type classification, we compute precision, recall, and F1-score for each class $k$ across all the configurations of that class, $J_k$. \textit{Precision} is the number of true positives divided by the total number of true positives and false positives for each class $k$ ($P_k$). \textit{Recall} is the number of true positives divided by the total number of true positives and false negatives for each class $k$ ($R_k$). \textit{F1-score} is the harmonic mean of precision and recall, defined as \begin{equation} F_k = 2\frac{P_k R_k}{P_k + R_k}. \end{equation} We compute the weighted average F1-score across $K$ classes as \begin{equation} \bar{F}_1 = \sum_{k=1}^K \frac{J_k F_k}{J}, \label{eq:wafs} \end{equation} where $J$ is the total number of configurations (for either the public test set, the private test set, or their combination). Note that $K=3$ for the task of filling level classification and $K=4$ for the task of filling type classification. \textbf{T3, T4 and T5.} For container capacity and mass estimation, we compute the relative absolute error between the estimated measure, $a \in \{\tilde{\gamma}^j, \tilde{m}_c^j \}$, and the true measure, $b \in \{\gamma^j, m_c^j \}$: \begin{equation} \varepsilon(a, b) = \frac{|a - b |}{b}. \label{eq:ware} \end{equation} For container dimensions estimation, where $a \in \left\{\tilde{w}_t^j,\tilde{w}_b^j,\tilde{h}^j\right\}$ and $b$ is the corresponding annotation, we use the normalisation function $\sigma_1(\cdot,\cdot)$~\cite{Sanchez-Matilla2020}: \begin{equation} \sigma_1(a,b)= \begin{cases} 1 - \frac{|a - b |}{b} & \text{if} \quad | a - b | < b, \\ 0 & \text{otherwise}. \end{cases} \end{equation} For filling mass estimation\footnote{Note that an algorithm with lower scores for T1, T2 and T3 may obtain a higher filling mass score than other algorithms due to the multiplicative formula to compute the filling mass for each configuration.}, we compute the relative absolute error between the estimated, $\tilde{m}_{f}^j$, and the true filling mass, $m_{f}^j$, unless the annotated mass is zero (empty filling level), \begin{equation} \epsilon(\tilde{m}_f^j, m_f^j) = \begin{cases} 0, & \text{if } m_f^j = 0 \land \tilde{m}_f^j=0, \\ \tilde{m}_f^j & \text{if } m_f^j = 0 \land \tilde{m}_f^j \neq 0, \\ \frac{|\tilde{m}_f^j - m_f^j |}{m_f^j} & \text{otherwise}. \end{cases} \label{eq:ware2} \end{equation} With reference to Table~\ref{tab:scores}, we compute the score, $s_i$, with \mbox{$i=\left\{3,\dots,8\right\}$}, across all the configurations $J$ for each measure as: \begin{equation} \noindent s_i = \begin{cases} \frac{1}{J}\sum_{j=1}^{J}{ \mathds{1}_j e^{-\varepsilon(a, b)}} & \text{if} \, a \in \left\{\tilde{\gamma}^j, \tilde{m}_c^j\right\},\\ \frac{1}{J}\sum_{j=1}^{J}{\mathds{1}_j \sigma_1(a,b)} & \text{if} \, a \in \left\{\tilde{w}_t^j,\tilde{w}_b^j,\tilde{h}^j\right\},\\ \frac{1}{J}\sum_{j=1}^{J}{ \mathds{1}_j e^{-\epsilon(a, b)}} & \text{if} \, a=\tilde{m}_f^j.\\ \end{cases} \end{equation} The value of the indicator function, \mbox{$\mathds{1}_j \in \{0,1\}$}, is 0 only when \mbox{$a \in \left\{ \tilde{\gamma}^j, \tilde{m}_c^j, \tilde{w}_t^j,\tilde{w}_b^j,\tilde{h}^j, \tilde{m}_f^j \right\}$} is not estimated in configuration $j$.
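For concreteness, the aggregation of the per-configuration errors into the scores $s_3,\dots,s_8$ can be sketched in a few lines of Python; the listing below assumes plain lists of estimated and annotated values (with a missing estimate marked as \texttt{None}) and is only an illustration, not the official evaluation code.
\begin{verbatim}
import math

def rel_err_score(estimates, annotations):
    """Average of exp(-relative absolute error); a missing estimate (None) scores 0."""
    total = 0.0
    for a, b in zip(estimates, annotations):
        if a is not None:                      # indicator function
            total += math.exp(-abs(a - b) / b)
    return total / len(annotations)

def dimension_score(estimates, annotations):
    """Average of sigma_1(a, b) = max(0, 1 - |a - b| / b)."""
    total = 0.0
    for a, b in zip(estimates, annotations):
        if a is not None:
            total += max(0.0, 1.0 - abs(a - b) / b)
    return total / len(annotations)

# Example: capacity estimates (mL) for three configurations, one missing.
print(rel_err_score([510.0, None, 240.0], [500.0, 180.0, 300.0]))  # ~0.60
\end{verbatim}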
Note that estimated and annotated measures are strictly positive, $a>0$ and $b>0$, except for filling mass in the empty case (i.e., $\tilde{\lambda}^j = 0$ or $\tilde{\tau}^j = 0$). \begin{table*}[t!] \centering \scriptsize \renewcommand{\arraystretch}{1.2} \setlength\tabcolsep{1.3pt} \caption{Results of the CORSMAL challenge entries on the combination of the public and private CCM test sets~\cite{Xompero2021_ArXiv,Xompero_CCM}. For a measure $a$, its corresponding ground-truth value is $\hat{a}$. All scores are normalised and presented in percentages. $\bar{F}_1(\cdot)$ is the weighted average F1-score. Filling amount and type are sets of classes (no unit). } \begin{tabular}{ccccclllllccrrrrrrrrr} \specialrule{1.2pt}{3pt}{0.6pt} T1 & T2 & T3 & T4 & T5 & Description & Unit & Measure & Score & Weight & Type & R2S & RAN & AVG & \cite{Donaher2021EUSIPCO_ACC} & \cite{Liu2020ICPR} & \cite{Ishikawa2020ICPR} & \cite{Iashin2020ICPR} & \cite{Apicella_GC_ICASSP22} & \cite{Matsubara_GC_ICASSP22} & \cite{Wang_GC_ICASSP22} \\ \specialrule{1.2pt}{3pt}{1pt} \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & Filling level & & $\lambda^j$ & $s_1 = \bar{F}_1(\lambda^1, \ldots, \lambda^J, \hat{\lambda}^1, \ldots, \hat{\lambda}^J)$ & $\pi_1 = 1/8$ & D & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & 37.62 & 33.15 & \textbf{80.84} & 43.53 & 78.56 & 79.65 & \multicolumn{1}{c}{--} & 65.73 & 77.40 \\ \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & Filling type & & $\tau^j$ & $s_2 = \bar{F}_1(\tau^1, \ldots, \tau^J, \hat{\tau}^1, \ldots, \hat{\tau}^J)$ & $\pi_2 = 1/8$ & D & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & 24.38 & 23.01 & 94.50 & 41.83 & 96.95 & 94.26 & \multicolumn{1}{c}{--} & 80.72 & \textbf{99.13} \\ \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & Capacity & mL & $\gamma^j$ & $s_3 = \frac{1}{J} \sum_{j=1}^{J} \mathds{1}_j e^{-\varepsilon^j(\gamma^j, \hat{\gamma}^j)}$ & $\pi_3 = 1/8$ & D & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & 24.58 & 40.73 & \multicolumn{1}{c}{--} & 62.57 & 54.79 & 60.57 & \multicolumn{1}{c}{--} & \textbf{72.26} & 59.51 \\ \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] 
(1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & Container mass & g & $m_c^j$ & $s_4 = \frac{1}{J} \sum_{j=1}^{J} \mathds{1}_j e^{-\varepsilon^j(m_c^j, \hat{m}_c^j)}$ & $\pi_4 = 1/8$ & D & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & 29.42 & 22.06 & \multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} & 49.64 & 40.19 & \textbf{58.78} \\ \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & Width at top & mm & $w_t^j$ & $s_5 = \frac{1}{J}\sum_{j=1}^{J}{\mathds{1}_j \sigma_1(w_t^j, \hat{w_t}^j)}$ & $\pi_5 = 1/24$ & D & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & 32.33 & 76.89 & \multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} & 69.09 & \textbf{80.01} \\ \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & Width at bottom & mm & $w_b^j$ & $s_6 = \frac{1}{J}\sum_{j=1}^{J}{\mathds{1}_j \sigma_1(w_b^j, \hat{w_b}^j)}$ & $\pi_6 = 1/24$ & D & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & 25.36 & 58.19 & \multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} & 59.74 & \textbf{76.09} \\ \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & Height & mm & $h^j$ & $s_7 = \frac{1}{J}\sum_{j=1}^{J}{ \mathds{1}_j \sigma_1(h^j, \hat{h}^j)}$ & $\pi_7 = 1/24$ & D & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & 42.48 & 64.32 & \multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} & 70.07 & \textbf{74.33} \\ \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & 
\protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & Filling mass & g & $m_f^j$ & $s_8 = \frac{1}{J} \sum_{j=1}^{J} \mathds{1}_j e^{-\epsilon^j(m_f^j, \hat{m}_f^j)}$ & $\pi_8 = 1/8$* & I & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & 35.06 & 42.31 & 25.07 & 53.47 & 62.16 & 65.06 & \multicolumn{1}{c}{--} & \textbf{70.50} & 65.25 \\ \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & Object mass & g & $m^j$ & $s_9 = \frac{1}{J}\sum_{j=1}^{J}{\mathds{1}_j \psi^j(m^j, \hat{F}^j)}$ & $\pi_9 = 1/8$* & I & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & 56.31 & 58.30 & 55.22 & 64.13 & 66.84 & 65.04 & 53.54 & 60.41 & \textbf{71.19} \\ \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & Pose at delivery & (mm, $^\circ$) & ($\alpha^j$,$\beta^j$) & $s_{10} = \frac{1}{J}\sum_{j=1}^{J}{\Delta_j(\alpha^j,\beta^j,\eta,\phi)}$ & $\pi_{10} = 1/8$* & I & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & 72.11 & 70.01 & 73.94 & 78.76 & 72.91 & \textbf{80.40} & 60.54 & 73.17 & 79.32 \\ \midrule \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \multicolumn{3}{l}{Joint filling type and level} & $s_{11} = \bar{F}_1(\lambda^1, \tau^1, \ldots, \hat{\lambda}^1, \hat{\tau}^1, \ldots)$ & \multicolumn{1}{c}{--} & D & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & 10.49 & 8.88 & 77.15 & 24.32 & 77.81 & 76.45 & \multicolumn{1}{c}{--} & 59.32 & \textbf{78.16} \\ \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & \multicolumn{3}{l}{Container capacity and dimensions} & 
$s_{12} = {s_3}/{2} + (s_5 + s_6 + s_7)/{6}$ & \multicolumn{1}{c}{--} & D & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=white] (1,1) circle (0.5ex);} & 28.99 & 53.60 & \multicolumn{1}{c}{--} & 31.28 & 27.39 & 30.28 & \multicolumn{1}{c}{--} & \textbf{69.28} & 68.16 \\ \specialrule{1.2pt}{3pt}{1pt} \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & \protect\raisebox{1pt}{\protect\tikz \protect\draw[black,fill=black] (1,1) circle (0.5ex);} & Overall score & & & $S = \sum_{l=1}^{10} \pi_l s_l$ & \multicolumn{1}{c}{--} & I & \multicolumn{1}{c}{--} & 39.11 & 44.51 & 31.52 & 35.89 & 47.04 & 48.35 & 9.05 & 66.16 & \textbf{73.43} \\ \specialrule{1.2pt}{3pt}{1pt} \multicolumn{21}{l}{\scriptsize{Best performing results for each row highlighted in bold. Results of tasks not addressed shown with a hyphen (--).}}\\ \multicolumn{21}{l}{\scriptsize{For $s_9$ and $s_{10}$, configurations with failures in grasping and/or delivering the containers in simulation using true physical properties as input are annotated and discarded.}}\\ \multicolumn{21}{l}{\scriptsize{For fairness, the residual between 100 and the scores obtained with true measures of the physical properties is added to $s_9$ and $s_{10}$ to remove the impact of the simulator.}}\\ \multicolumn{21}{l}{\scriptsize{KEY -- T:~task, D:~direct score, I:~indirect score, R2S:~measured in the real-to-simulation framework, RAN:~random estimation, AVG:~average from the training set.}}\\ \multicolumn{21}{l}{\scriptsize{* weighted by the number of performed tasks.}}\\ \end{tabular} \label{tab:scores} \vspace{-10pt} \end{table*} \textbf{Object safety and accuracy of delivery.} Object safety is the probability that the force applied by the robot, $\tilde{F}$, enables a gripper to hold the container without dropping it or breaking it~\cite{Pang2021ROMAN}. We approximate the force required to hold the container as \begin{equation} \hat{F} \approx \frac{\hat{m} (g + a_{max})}{\mu}, \label{equ:grasp_force_theoretical} \end{equation} where $\hat{m}$ is the annotated object mass; $g=9.81$~m/s$^2$ is the gravitational acceleration; $a_{max}$ is the maximum acceleration of the robot arm when carrying the object; and $\mu$ is the coefficient of friction between the container and the gripper ($\mu=1.0$~\cite{Pang2021ROMAN}). The value of the force applied by the robot to grasp the object is calculated with Eq.~\ref{equ:grasp_force_theoretical} using the predicted object mass $\tilde{m}$. We compute object safety as an exponential function that accounts for the difference between the applied normal force $\tilde{F}^j$ (measured in simulation) and the required normal force, $\hat{F}^j$: \begin{equation} \psi^j = e^{\frac{| \tilde{F}^j - \hat{F}^j |}{\hat{F}^j} \ln{(1-c)}} = \left(1-c\right)^{\frac{| \tilde{F}^j - \hat{F}^j |}{\hat{F}^j}}, \label{eq:forcesafety} \end{equation} where the normal force is the component of the contact force perpendicular to the contact surface and $c$ controls the sensitivity of $\psi^j$~\cite{Pang2021ROMAN}. A negative difference represents a higher probability of dropping the container, and a positive difference represents a higher probability of breaking the container.
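A compact Python sketch of this computation, from the estimated properties of Eq.~\ref{equ:grasp_force_theoretical} to the safety measure of Eq.~\ref{eq:forcesafety}, is given below; the filling densities, the maximum arm acceleration and the sensitivity $c$ are placeholder values rather than the settings used in the simulator.
\begin{verbatim}
import math

G, MU = 9.81, 1.0        # gravity (m/s^2) and gripper friction coefficient
A_MAX, C = 2.0, 0.5      # placeholder arm acceleration (m/s^2) and sensitivity c
DENSITY = {0: 0.0, 1: 0.41, 2: 0.85, 3: 1.0}  # g/mL; pasta/rice values are placeholders

def object_mass(container_mass_g, filling_level, filling_type, capacity_ml):
    """Container mass plus filling mass m_f = lambda * gamma * D(tau)."""
    return container_mass_g + filling_level * capacity_ml * DENSITY[filling_type]

def required_force(mass_g):
    """F ~= m (g + a_max) / mu, with the mass converted to kg."""
    return (mass_g / 1000.0) * (G + A_MAX) / MU

def object_safety(f_applied, f_required):
    """psi = (1 - c)^(|F_applied - F_required| / F_required)."""
    return (1.0 - C) ** (abs(f_applied - f_required) / f_required)

# Force computed from the *estimated* properties vs. force needed for the *true* mass.
f_est = required_force(object_mass(30.0, 0.5, 3, 500.0))  # estimated object mass
f_true = required_force(350.0)                             # annotated object mass (g)
print(object_safety(f_est, f_true))
\end{verbatim}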
We quantify the accuracy of delivering the container upright and within the target area as \begin{equation} \Delta_j = \begin{cases} 1 - \frac{\alpha}{\eta} & \text{if } (\alpha < \eta) \text{ and } (\beta < \phi), \\ 0 & \text{otherwise}, \\ \end{cases} \end{equation} where $\alpha$ is the distance from the centre of the base of the container to the target location $\boldsymbol{d}$; $\eta$ is the maximum distance allowed from the delivery target location; $\beta$ is the angle between the vertical axis of the container and the vertical axis of the world coordinate system; and $\phi$ is the value of $\beta$ at which the container would tip over. We compute the score for object safety, $s_9$, as \begin{equation} s_{9} = \frac{1}{J}\sum_{j=1}^{J}{ \mathds{1}_j \psi^j(m^j, \hat{F}^j)}, \label{eq:totobjsafetyscore} \end{equation} where the value of the indicator function, $\mathds{1}_j$, is 0 only when either the filling mass or the container mass is not estimated for each configuration $j$; and the score for the delivery accuracy, $s_{10}$, as \begin{equation} s_{10} = \frac{1}{J}\sum_{j=1}^{J}{ \Delta_j(\alpha^j,\beta^j,\eta,\phi)}. \label{eq:deliveryscore} \end{equation} The scores $s_9$ and $s_{10}$ are partially influenced by the simulator conditions (e.g., friction, contact, robot control), but we aimed at making the simulated handover reproducible across different algorithms through the annotated object trajectory, starting handover frame, and reconstructed 3D model. \textbf{Group tasks and overall score.} For joint filling type and level classification ($s_{11}$), estimations and annotations of both filling type and level are combined in $K=7$ feasible classes, and $\bar{F}_1$ is recomputed based on these classes. For joint container capacity and dimensions estimation, we compute the following weighted average: \begin{equation} s_{12} = \frac{s_3}{2} + \frac{s_5 + s_6 + s_7}{6}. \end{equation} Finally, the overall score is computed as the weighted average of the scores from $s_1$ to $s_{10}$. Note that $s_8$, $s_9$, and $s_{10}$ may use random estimations for any of the tasks not addressed by an algorithm. \subsection{IEEE ICASSP 2022 Challenge entries} Nine teams registered for the IEEE ICASSP 2022 challenge, and three of them submitted entries: three algorithms for container mass estimation (T4), two for classifying the filling level (T1) and type (T2), and two for estimating the container properties (T3, T4, T5). We refer to the submissions of the three teams as A1~\cite{Apicella_GC_ICASSP22}, A2~\cite{Matsubara_GC_ICASSP22}, and A3~\cite{Wang_GC_ICASSP22}. A1 solved only the task of container mass estimation (T4) using RGB-D data from the fixed frontal view and by regressing the mass with a shallow Convolutional Neural Network (CNN)~\cite{Christmann2020NTNU}. To increase the accuracy, A1 extracted a set of patches of the detected container from automatically selected frames in a video, and averaged their predicted masses. To classify the filling level (T1) and type (T2), A2 used Vision Transformers~\cite{Dosovitskiy2021ICLR}, whereas A3 used pre-trained CNNs (e.g., Mobilenets~\cite{Howard2017Arxiv}) combined with Long Short-Term Memory units or majority voting~\cite{Hochreiter1997LSTM}. Only audio, or audio combined with visual information (RGB) from the fixed frontal view, is used as input. To estimate the container properties (T3, T4, T5), A2 used RGB data from the three fixed views, and A3 used RGB-D data from the fixed frontal view.
A2 used a modified multi-view geometric approach that iteratively fits a hypothetical 3D model~\cite{Xompero2020ICASSP_LoDE}. A3 fine-tuned multiple Mobilenets via transfer learning from the task of dimensions estimation (T5) to the tasks of container capacity (T3) and mass (T4) estimation~\cite{Wang_GC_ICASSP22}. These Mobilenets regress the properties using patches extracted from automatically selected frames where the container is mostly visible~\cite{Wang_GC_ICASSP22}. To overcome over-fitting of the limited training data and improve generalisation on novel containers, these Mobilenets were fine-tuned with geometric-based augmentations and variance evaluation~\cite{Wang_GC_ICASSP22}. Overall, A3 is designed to process a continuous stream (online), thus being more suitable for human-to-robot handovers. Table~\ref{tab:scores} shows the scores of the submissions on the combined CCM test sets. As a reference, we provide the results for the properties estimated by a pseudo-random generator (RAN), by using the average (AVG) of the training set for container capacity, mass, and dimensions; or by the algorithms of four earlier entries to the challenge~\cite{Donaher2021EUSIPCO_ACC,Liu2020ICPR,Ishikawa2020ICPR,Iashin2020ICPR}. A3 achieves the highest $\bar{F}_1$ for filling type classification ($s_2 = 99.13$), and joint filling type and level classification ($s_{11} = 78.16$). A3 is also the most accurate in estimating the container mass ($s_4 = 58.78$), followed by A1 ($s_4=49.64$), and the container dimensions. A2 is the most accurate in estimating the capacity ($s_3 = 72.26$). A2 is also the most accurate for filling mass ($s_8 = 70.50$). A3 achieves high accuracy for filling level and type classification, but its filling mass score is affected by its lower accuracy for capacity estimation. Among the entries of the challenge at IEEE ICASSP 2022, A3 achieves the best score for object safety ($s_9 = 71.19$) and delivery accuracy ($s_{10} = 79.32$). In conclusion, A3 reaches the highest overall score ($S = 73.43$), followed by A2 ($S = 66.16$). \section{Conclusion} \label{sec:conclusion} Recent fast advances in machine learning and artificial intelligence have created the expectation that robots can seamlessly operate in the real world by accurately and robustly perceiving and understanding dynamic environments, including the actions and intentions of humans. However, several challenges in audio-visual perception and in modelling humans, with their hard-to-predict behaviours, still hamper the deployment of robots in real-world scenarios. We presented the tasks, the real-to-simulation framework, the scores and the entries to the CORSMAL challenge at IEEE ICASSP 2022. These new entries complement the algorithms previously submitted to the challenge~\cite{Donaher2021EUSIPCO_ACC,Liu2020ICPR,Ishikawa2020ICPR,Iashin2020ICPR}. \bibliographystyle{IEEEbib}
\section{Introduction}\label{sec:intro} Time-dependent manipulation of few and many-particle quantum systems is important across all implementations of quantum computing and simulation. In such processes, decoherence and undesired transitions reducing the state fidelity are ubiquitous. One important example is given by the undesired transitions that can occur between instantaneous eigenstates of the dynamical Hamiltonian upon the application of an external drive. This is why many driving protocols rely on adiabatic dynamics, where the system follows the instantaneous eigenstates and transitions are naturally suppressed. Ideal adiabatic processes are reversible, making them, in principle, robust. However, to approach ideal adiabatic processes the dynamics must always be very slow, requiring compromises on the time-scales of competing heating and decoherence processes. Speeding up adiabatic protocols to enable their completion within the system's coherence time is important for the development of any quantum technologies relying on such protocols \cite{acin2018quantum}. One approach to do this is the implementation of optimal driving protocols, which aim to steer the system into a desired final state. For example, numerically optimised paths can be employed to avoid points where gaps in the spectrum of the system become small, or additional control fields can be tuned to increase the size of these gaps \cite{kirk2004optimal,glaser2015training,AlessandroBook2007}. In broad terms, this is the goal of protocols collectively referred to as quantum optimal control. Another option is to design techniques which speed up the adiabatic dynamics, often termed shortcuts to adiabaticity (STA). The primary aim of STA is to entirely remove or suppress diabatic transitions between instantaneous eigenstates of the dynamical Hamiltonian \cite{Torrontegui2013,GueryOdelin2019}. One particularly successful technique is counterdiabatic driving (CD), which was first utilised in physical chemistry by Demirplak and Rice \cite{demirplak2003,demirplak2005}, and was independently introduced by Berry \cite{berry_transitionless_2009} under the name `transitionless driving'. CD aims to suppress losses that arise due to fast deformations of the system far from the adiabatic limit by analytically compensating for them in the Hamiltonian. In general, to suppress diabatic losses exactly, the full analytical or numerical solutions of the Schr\"odinger equation are required. This makes the implementation of CD in complex systems, e.g.~for many-body dynamics, difficult and requires new techniques to be introduced. Links between optimal control and STA have existed throughout the development of both approaches \cite{stefanatos2021,Zhang2021connection}, but there are few examples of their explicit combination in a way that exploits their complementary nature. Some attempts to achieve this have included an emulation of CD through fast oscillations of the original Hamiltonian \cite{Petiziol2018fast, Petiziol2019accelerating} as well as through recent advances in reinforcement learning methods aimed at optimizing quantum protocols~\cite{bukov_reinforcement_2018}. Such methods have been shown to achieve a significant improvement in performance when implemented using concepts borrowed from CD~\cite{yao_reinforcement_2020}.
In this work, we offer a significantly different approach that combines elements of STA and quantum optimal control, which we will call \textit{counterdiabatic optimised local driving} (COLD). A key ingredient in the development of COLD is a recent approach designed for implementing CD in the setting of larger, more complex systems: local counterdiabatic driving (LCD) \cite{sels_minimizing_2017,Kolodrubetz2017geometry, gjonbalaj2021counterdiabatic}. LCD offers a method to derive \emph{approximate} CD protocols, with the aim of suppressing undesired transitions instead of fully eliminating them. This allows it to account for some physical constraints of the system, e.g.~locality conditions. However, the approximate nature of the LCD protocol can lead to poor performance, necessitating the introduction of additional non-local, long-range corrections \cite{sels_minimizing_2017}. If all possible corrections are added, then LCD is equivalent to the normal analytical approaches of CD, but the additional terms are generally difficult to control experimentally. COLD offers an alternative approach, with additional control fields which allow for an optimisation of the dynamical Hamiltonian for a given local form of LCD. The impact of more complex corrections can then be radically reduced, giving a corresponding improvement in the desired protocol. The structure of this paper is as follows: first, we give a detailed description of the new method, COLD, with a focus on the elements of quantum optimal control and CD required for its implementation. In Sec.~\ref{sec:TwoSpin} we explore a 2-spin annealing protocol that showcases the strengths of COLD. Sec.~\ref{sec:1dIsing} describes and analyses the improvements gained with COLD and its combination with other optimal control techniques in the case of state preparation in the Ising model. Then in Sec.~\ref{sec:lattice} we show the improvement that COLD can achieve on the recently realised example of LCD for state transfer on a synthetic lattice in ultracold atoms. A list of abbreviations used in this work can be found in Table~\ref{table} for reference. \begin{table}[h] \begin{tabular}{p{2cm} | p{6cm}} \toprule \multicolumn{1}{m{2cm}}{\centering Abbreviation} & \multicolumn{1}{m{6cm}}{\centering Meaning} \\ \midrule STA & shortcuts to adiabaticity \\ \hline CD & counterdiabatic driving \\ \hline LCD & local counterdiabatic driving \\ \hline COLD & counterdiabatic optimised local driving \\ \hline BPO & bare Powell optimisation \\ \hline CRAB & chopped randomised basis \\ \hline ARP & adiabatic rapid passage \\ \bottomrule \end{tabular} \caption{List of abbreviations used throughout the manuscript.}\label{table} \end{table} \section{An Introduction to Counterdiabatic Optimised Local Driving}\label{sec:intro_olcd} \subsection{Quantum Optimal Control}\label{sec:OptCont} In the context we consider, we employ quantum optimal control to optimise the dynamics generated by $f(\psi,\boldsymbol{\beta})$ in the Schr\"odinger equation \begin{equation} \dot{\psi} = f(\psi,\boldsymbol{\beta}), \label{eq:Control} \end{equation} where $\psi$ is the quantum wave function and $\boldsymbol{\beta}$ is the set of optimisable control parameters. Optimisation of Eq.~\eqref{eq:Control} in most cases means taking the system from an initial state $\ket{\psi_0}$ to a final target state $\ket{\psi_T}$ by finding the optimal values of $\boldsymbol{\beta}$ with respect to some target metric (e.g.~the time taken to evolve the system from $\ket{\psi_0}$ to $\ket{\psi_T}$).
There is a large variety of techniques available to achieve this goal \cite{glaser2015training,koch2016controlling}. The success/target metric needs to be defined prior to the optimisation of $\boldsymbol{\beta}$. Often this is done by constructing a \emph{cost function}, which in turn defines the optimisation landscape. In general, we can optimise for any desired property of the final state of the system, with some examples being the entropy, the energy, energy fluctuations, or some other observable. A commonly used cost function in state preparation is related to the fidelity of the final, post-evolution state $\ket{\psi_f}$ with respect to the target state: \begin{align} \label{eq:lossfunc} \mathcal{C}(\ket{\psi_f}) = 1 - \left|\braket{\psi_T}{\psi_f}\right|^2. \end{align} In performing such a numerical optimisation, it is common to take the dynamics to be generated by a Hamiltonian split into two parts. The first is the so-called \emph{bare} Hamiltonian $H_0(t)$, which can be time-dependent and describes the dynamics of the quantum system in question. The second part is then an additional driving term that includes the control parameters $\boldsymbol{\beta}(t)$ and operators $\mathcal{O}_{\rm opt}$ which provide additional degrees of freedom in the dynamics of the system. The full Hamiltonian of the control system is then: \begin{align} \label{eq:h_optimal_control} H_{\beta}(t,\boldsymbol{\beta}) = H_0(t) + \boldsymbol{\beta}(t)\mathcal{O}_{\rm opt}. \end{align} The parameters $\boldsymbol{\beta}(t)$ can then be optimised with respect to the metric defined by the cost function. In this work, we use the Powell minimization approach \cite{powell1964efficient} for the numerical optimisation as implemented in Python's \textit{SciPy} \cite{Pauli2020SciPy}. When performing this optimisation without any CD terms in the Hamiltonian, we refer to this approach as bare Powell optimisation (BPO), with bare referring to the lack of CD. Furthermore, we implement the chopped randomised basis (CRAB) approach \cite{Caneva_chopped_2011,muller2021} and combine its methodology with that of COLD, to obtain the method of COLD-CRAB. CRAB expands the size of the parameter landscape by employing randomisation, usually in the optimised pulse driving the system. The approach was first developed for quantum many-body problems, whose simulation requires the time-dependent density matrix renormalization group and which were previously thought not to be tractable in the quantum control setting \cite{brif2010,muller2021}. CRAB has benefits in that it can avoid traps in the control landscape \cite{Rach2015}, and has built-in flexibility for open-loop or closed-loop optimisation \cite{heck2018,muller2021}, although these advantages come at a higher computational cost due to requiring a far larger search-space for the optimisation. \subsection{Counterdiabatic Driving}\label{sec:LCD} An important class of optimisation problems deals with the case where the initial and final states are ground states of a Hamiltonian $H_0(t)$ at some initial $t=t_i$ and final $t=t_f$ times. In these cases, the adiabatic theorem guarantees that for an infinitesimally slow transformation of the system $t_f-t_i\to\infty$, it should follow the instantaneous (non-degenerate) ground state of $H_0(t)$ and hence reach the target state with unit fidelity. This process is generally known as quantum annealing.
In large, complex systems with many degrees of freedom, quantum annealing tends to require prohibitively long protocol times due to vanishingly small gaps typically present in such systems. This often makes annealing protocols impractical \cite{farhi2008how}. It has been found that this problem can be formally overcome by using CD, where velocity-dependent terms are added to the Hamiltonian analytically enforcing the adiabatic wave function to be the solution of the time-dependent Schr\"odinger equation \cite{demirplak2003,demirplak2005,berry_transitionless_2009}. In this case, the dynamical state will follow the instantaneous eigenstate with no transitions regardless of the driving time. The form of the dynamical Hamiltonian enforcing this is \cite{berry_transitionless_2009}: \begin{equation}\label{eq:counterdiabatic} \begin{aligned} & H_{\mathrm{CD}}(t) = H_0 (t) \\ & + i\hbar \sum_n (\ket{\partial_t n}\bra{n} - \bra{n}\ket{\partial_t n}\ket{n}\bra{n}), \end{aligned} \end{equation} with $\ket{n}\equiv \ket{n(t)}$ the $n$-th eigenstate of the instantaneous Hamiltonian $H_0(t)$. The last term enforces the phases ($\bra{n}\ket{\partial_t n}$) on the instantaneous eigenstates, which are arbitrary and thus will be omitted. In general, knowledge of the CD Hamiltonian of Eq.~\eqref{eq:counterdiabatic} requires knowledge of the full spectrum of $H_0(t)$ at each instantaneous moment in time. \subsection{Counterdiabatic Optimised Local Driving} \label{sec:OLCD} We will now introduce the main idea of COLD. The principle is to take the same approach as Sec.~\ref{sec:LCD} but with the original Hamiltonian given by $H_\beta(t,\boldsymbol{\beta})$, see Eq.~\eqref{eq:h_optimal_control}. Quantum Annealing then applies to the whole family of control Hamiltonians $ H_{\beta}(t,\boldsymbol{\beta})$ as long as the additional control terms $\boldsymbol{\beta}(t)$ vanish at the protocol boundaries: $\boldsymbol{\beta}(t_i)=\boldsymbol{\beta}(t_f)=0$. This flexibility was explored in finding the optimal adiabatic path characterized by e.g. the shortest distance between the initial and the final states, i.e. a geodesic \cite{tomka2016geodesic}. A similar geodesic approach was developed in the context of dissipative systems to minimize energy losses~\cite{Sivak2012Geodesic}. During the protocol, a dynamical Hamiltonian $H_\beta(t,\boldsymbol{\beta})$ generally induces transitions between the quantum states that it drives and the question about what is the optimal path remains open. The Hamiltonian $H_\beta(t,\boldsymbol{\beta})$ and its eigenstates depend on time only through the driving parameters, which include $\boldsymbol{\beta}$ and any additional control terms in the particular protocol. This makes it convenient to introduce a path in the coupling space parametrized by a dimensionless parameter $\lambda\in [0,1]$ such that both $H_0$ and $\boldsymbol{\beta}$ are functions of $\lambda$ satisfying $H_\beta(\lambda=0)=H_{0}(t_i)$ and $H_\beta(\lambda=1)=H_{0}(t_f)$, i.e. being equal to the initial and the final Hamiltonian at the protocol boundaries. By construction this implies that any additional fields introduced to the bare Hamiltonian $H_0$ must go to zero at the boundaries. In this way, any protocol can be uniquely characterized by first specifying the path $\boldsymbol{\beta}(\lambda)$ in the coupling space manifold and then the time dependence $\lambda(t)$ along it. 
The path determines the sequence of couplings of the Hamiltonian during time evolution and hence the sequence of ground state wave functions followed by the driven state. Furthermore, the time dependence encodes the speed of traversing this path. We can then introduce a Hermitian operator called the (path-dependent) adiabatic gauge potential~\cite{sels_minimizing_2017}: $\mathcal A_\lambda=i \hbar \sum_n \ket{\partial_\lambda n}\bra{n}$, which satisfies a closed form equation, \begin{equation} \label{eq:AGP_eq} \left[\partial_\lambda H_\beta+{i\over \hbar} [ \mathcal A_\lambda,H_\beta],H_\beta\right]=0, \end{equation} where $H_\beta \equiv H_\beta(\lambda,\boldsymbol{\beta}(\lambda))$ and $\mathcal{A}_\lambda \equiv \mathcal{A}_\lambda(\lambda,\boldsymbol{\beta}(\lambda))$. Then the CD Hamiltonian reads \begin{align}\label{eq:counterdiabaticLCD} H_{\mathrm{CD}}(\lambda,\boldsymbol{\beta}(\lambda)) = H_\beta(\lambda,\boldsymbol{\beta}(\lambda)) +\dot \lambda \mathcal A_\lambda(\lambda,\boldsymbol{\beta}(\lambda)), \end{align} and is equivalent to the CD Hamiltonian of Eq.~\eqref{eq:counterdiabatic} given knowledge of the exact adiabatic gauge potential. However, generally the adiabatic gauge potential is a very non-local object and solutions of Eq.~\eqref{eq:AGP_eq} are unstable to small perturbations, generally containing exponentially many terms in the number of degrees of freedom. LCD aims to find approximate gauge potentials that satisfy particular requirements like robustness and locality, thus circumventing many of the difficulties in determining the second component in Eqs.~\eqref{eq:counterdiabatic} and~\eqref{eq:counterdiabaticLCD} exactly. The goal, in essence, is to suppress the most relevant diabatic effects rather than completely eliminate them. This method has recently been experimentally implemented to speed up state transfer for synthetic lattices in ultracold atoms \cite{meier_counterdiabatic_2020}, for preparing states in nuclear-magnetic-resonance systems \cite{zhou_experimental_2020}, and for annealing protocols on an IBM quantum computer \cite{Hegade2021Shortcuts}. Following the methods of Ref.~\cite{sels_minimizing_2017}, the problem of finding the optimal adiabatic gauge potential can be cast as the minimisation of the Hilbert-Schmidt norm of the operators \begin{equation}\label{eq:Goperator} G_{\lambda}= \partial_{\lambda}H_\beta + i\comm{\mathcal{A}_\lambda}{H_\beta}, \end{equation} which is equivalent to minimisation of the action \begin{equation}\label{eq:actionCD} \mathcal{S}(\mathcal{A}_{\lambda}) = \Trace{\left[G_{\lambda}(\mathcal{A}_{\lambda})^2\right]}, \end{equation} with respect to $\mathcal{A}_{\lambda}$. In most cases, this is achieved by first choosing an operator ansatz, i.e.~a set of linearly independent operators $\{\mathcal{O}_{\rm LCD}\}$, and then using this set as an operator basis for the adiabatic gauge potential $\mathcal A_\lambda=\sum_j \alpha_j \mathcal O_{\rm LCD}^{(j)}$. The action can then be minimized with respect to the set of coefficients, ${\bm \alpha}$. In the example of an Ising spin chain we may take $\mathcal{A}_\lambda = \sum_j^N \alpha_j \sigma^y_j$, where $j$ labels the $N$ chain sites, and $\{\mathcal{O}_{\rm LCD}\}$ is the set of $y$-Pauli matrices. Without any additional control fields $\boldsymbol{\beta}$, LCD is essentially an informed choice of the operator set $\{\mathcal{O}_{\rm LCD}\}$ in a way that the resulting control protocol from the minimisation of Eq.~\eqref{eq:actionCD} is optimal for a given $H_0(\lambda)$.
In this case we explore the family of Hamiltonians \begin{equation} H_{\rm LCD}(\lambda) = H_0(\lambda) +\sum_j \alpha_j(\lambda) \mathcal{O}_{\rm LCD}^{(j)}. \end{equation} The performance of such LCD protocols is determined by how accurately the variational manifold spanned by the set $\{\mathcal{O}_{\rm LCD}\}$ can approximate an exact $\mathcal{A}_{\lambda}$ such that Eq.~\eqref{eq:AGP_eq} holds. In the case of the new protocol COLD, we allow for extra exploration of the family of Hamiltonians due to the additional control fields as in Eq.~\eqref{eq:h_optimal_control}. This expands the family of Hamiltonians to \begin{equation}\label{eq:expandedH} H_\beta(\lambda,\boldsymbol{\beta}) = H_0(\lambda) + {\bm \alpha}(\lambda,\boldsymbol{\beta}) \mathcal{O}_{\rm LCD} + \boldsymbol{\beta}(\lambda) \mathcal{O}_{\rm opt}. \end{equation} Note, that the coefficients of the optimal control field change the form of the LCD driving coefficients, i.e. $\bm \alpha = f(\lambda,\boldsymbol{\beta})$. The aim of COLD is then to choose $\boldsymbol{\beta}(\lambda)$ and $\mathcal{O}_{\rm opt}$ such that the LCD term is optimal for the dynamical Hamiltonian $H_0(\lambda)+\boldsymbol{\beta}(\lambda)\mathcal{O}_{\rm opt}$. We will focus on the optimisation of the control parameters $\boldsymbol{\beta}(\lambda)$ for a given choice of $\mathcal{O}_{\rm opt}$, although the choice of operators $\mathcal{O}_{\rm opt}$ can also be optimised over as an extension. With COLD, we have two methods to improve on the existing LCD protocol. As previously shown in Refs.~\cite{claeys_floquet-engineering_2019,prielinger_diabatic_2020}, there is a possibility to add more terms to the LCD, making it less local, e.g.~through long-range interactions. In the spin chain case, we could take the aforementioned sum over $\sigma^y$ terms to be the \textit{first-order} ansatz for the LCD, where higher-order ans\"{a}tze might contain sets of operators $\{\mathcal{O}_{\rm LCD}\}$ with terms odd in $\sigma^y$ such as $\sigma^y_j\sigma^{(z,x)}_{j+1}$. This procedure generally improves the performance of CD protocols at the expense of adding more complex operators which may be experimentally impractical depending on the scenario. Alternatively, with COLD and the introduction of additional local control fields to the Hamiltonian, we can improve the performance of LCD at a fixed complexity of the CD term. If these extra control fields vanish at the beginning and at the end of the protocol, they do not affect the adiabatically connected states, but they can significantly modify the adiabatic landscape at intermediate couplings to enhance the performance of the given order of LCD. In order to optimize for the additional fields, in this work, we use methods of quantum optimal control. However, we note that such optimization can be done locally by requiring that the next order variational correction to the CD terms is small along the optimal landscape. This local optimization may be advantageous in that it does not require knowledge of the wave function and in this sense is not limited to small system sizes. Furthermore, we compare COLD to the use of CRAB, as discussed in Sec.~\ref{sec:OptCont}. An advantage of COLD is that it can be combined with many advanced optimal control procedures, owing to the standard way additional control fields are introduced to the Hamiltonian. In this work we find the combination of COLD and CRAB particularly useful and we will refer to this as COLD-CRAB.
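As an illustration of the variational step behind Eq.~\eqref{eq:actionCD}, the short Python sketch below minimises the action numerically for a chosen operator basis at a fixed point along the path; it uses dense matrices and SciPy's Powell routine purely for readability, the operator values anticipate the two-spin example of the following section, and this is a sketch rather than the code used for the results reported in this work.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def action(alpha, dH, H, ops, hbar=1.0):
    """S = Tr[G^2], with G = dH/dlambda + (i/hbar)[A, H] and A = sum_j alpha_j O_j."""
    A = sum(a * O for a, O in zip(alpha, ops))
    G = dH + 1j / hbar * (A @ H - H @ A)
    return np.real(np.trace(G @ G))

def lcd_coefficients(dH, H, ops):
    """Variational coefficients of the chosen ansatz at a fixed point of the path."""
    res = minimize(lambda a: action(a, dH, H, ops), np.zeros(len(ops)), method="Powell")
    return res.x

# Example: two-spin annealing Hamiltonian of the next section at lambda = 0.3, beta = 0.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
J, h, lam = 0.5, 1.0, 0.3
SX, SY, SZ = (np.kron(s, I2) + np.kron(I2, s) for s in (sx, sy, sz))
H = -2*J*np.kron(sz, sz) - h*SZ + 2*h*lam*SX
dH = 2*h*SX                               # derivative of H with respect to lambda
print(lcd_coefficients(dH, H, [SY]))      # compare with the analytic first-order alpha
\end{verbatim}
Because the action is quadratic in the coefficients, this minimisation could also be done by solving a linear system; the generic optimiser is used here only to keep the sketch short.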
\section{Two Spin Quantum Annealing}\label{sec:TwoSpin} To showcase and explore the use of COLD in a relatively simple setting we will consider a two spin quantum annealing problem with bare Hamiltonian \begin{equation}\label{eq:Hanneal} H_0(t) = -2J \sigma_1^z \sigma_2^z - h ( \sigma_{1}^z + \sigma_{2}^z) + 2h \lambda(t) (\sigma_{1}^x + \sigma_{2}^x), \end{equation} where $\sigma^a_j$, $a \in \{x,y,z\}$ are the Pauli matrices applied to spins indexed by $j$. For the scaling function $\lambda(t)$ we pick the form \begin{equation}\label{eq:Scalingfunc} \lambda(t) = \sin^2\left(\frac{\pi}{2} \sin^2 \left( \frac{\pi t}{2 \tau} \right) \right) \end{equation} such that $\lambda(0) = 0$ and $\lambda(\tau) = 1$. We consider the case of $J/h=0.5$, which means the spins start in the initial state of $\ket{\uparrow \uparrow}$ and finish in a superposition of all of the symmetric states. As discussed in Ref.~\citep{sels_minimizing_2017}, since $H_0$ has a standard Ising spin chain form, the first-order LCD terms are given by the following ansatz for the adiabatic gauge potential: \begin{equation}\label{eq:LCD1st} \mathcal{A}(\lambda) = \alpha \sum_{i=1}^2 \sigma_i^y, \end{equation} with the sum being over the full length of the $N$ spin chain. Minimising Eq.~\eqref{eq:actionCD} for this $\mathcal{A}_{\lambda}$ with respect to the coefficient $\alpha$ gives \begin{equation} \alpha = - \frac{h^2}{4(h\lambda)^2 + h^2 + 4J^2}. \end{equation} To further improve on the first-order LCD we can implement COLD, as we will discuss shortly, or we can introduce higher-order terms to the ansatz for $\mathcal{A}_{\lambda}$. This second method serves as a good benchmark against COLD, since it offers an improvement to first-order LCD in the same way as COLD does, but requires more complicated interactions between the two spins increasing the implementation overhead. The second-order LCD can be found by taking an ansatz for the adiabatic gauge potential: \begin{equation} \begin{aligned} \mathcal{A}^{(2)}(\lambda) =& \alpha \sum_{j} \sigma_j^y + \gamma (\sigma_1^x \sigma_{2}^y + \sigma_1^y \sigma_{2}^x) \\ & + \zeta (\sigma_1^z \sigma_{2}^y + \sigma_1^y \sigma_{2}^z), \end{aligned} \end{equation} where to solve for $\alpha$, $\gamma$, and $\zeta$ we once again minimize the action given by Eq.~\eqref{eq:actionCD} and obtain three coupled equations which can be solved numerically (see Appendix \ref{app:derivation} for a detailed derivation). \begin{figure}[t] \includegraphics[width=0.98\linewidth]{TwoSpin.png} \caption{Optimisation of the annealing protocol for two spin Hamiltonian given by Eq.~\eqref{eq:Hanneal} for $h/J=2$. (a) Final fidelities of the annealing protocol with triangles (black) representing the case where no CD is applied and circles showing the case of first-order LCD (pink) as well as the combination of first- and second-order LCD (orange). (b) Final fidelities achieved when using the optimal control method BPO (red diamonds) and the new approach of COLD (blue circles), both with $N_k=1$.}\label{fig:TwoSpin} \end{figure} We now consider three distinct cases in this two spin quantum annealing example: no LCD, first-order LCD, and second-order LCD. The fidelity of the final state for each case over a wide range of driving times is shown in Fig.~\ref{fig:TwoSpin}(a), with an easily distinguishable advantage in the case of LCD. The final fidelity where no LCD is implemented decreases rapidly as the ramp times are made short, with the system getting stuck in its initial state. 
On the contrary, first-order LCD retains good final state fidelities into short times, as the driving Hamiltonian becomes that of only the LCD term. The second-order LCD then gives unit fidelity, in agreement with previous observations \cite{claeys_floquet-engineering_2019}, as for a two spin Hamiltonian the highest-order corrections are those including two spin terms. We now add an optimisable term, as described in Sec.~\ref{sec:OptCont}, so that the new Hamiltonian reads: \begin{equation}\label{eq:HannealOpt} H_\beta(t) = H_0(t) + \sum_{k=1}^{N_k} \beta^k \sin (\pi k t / \tau) \sum_i \sigma_i^z, \end{equation} \noindent with $N_k$ the number of optimisation coefficients, and $\beta^k$ the coefficient of the $k$th frequency of the control function. Note that we consider \begin{equation} \boldsymbol{\beta}(t) = \sum_{k=1}^{N_k} \beta^k \sin (\pi k t / \tau) = \sum_{k=1}^{N_k} \beta^k \sin (\pi k f(\lambda)), \end{equation} with \begin{equation}\label{eq:lambda} f(\lambda) = \frac{2}{\pi}\arcsin\left(\sqrt{\frac{2}{\pi} \arcsin\left(\sqrt{\lambda}\right)}\right). \end{equation} The form of the additional control field fulfils the requirement that the boundary conditions are $H(0) = H_0(0)$ and $H(\tau) = H_0(\tau)$. Numerically optimising the $\beta^k$ for the best final state fidelity \emph{without} adding LCD terms results in the BPO method introduced in Sec.~\ref{sec:OptCont}. We show the results of BPO in Fig.~\ref{fig:TwoSpin}(b), where it is observed that BPO gives better results than the case of no LCD in Fig.~\ref{fig:TwoSpin}(a). However, for short times the BPO approach still results in the system getting stuck in the initial state. \begin{figure*}[t] \includegraphics[width=0.9\linewidth]{IsingUnconstrained.png} \caption{Optimisation of the annealing protocol for the Ising model given by Eq.~\eqref{eq:h0_ising} for $N=5$ spins. (a) A comparison of final state fidelities for different driving times using the optimal control technique BPO (blue diamonds), first-order LCD (black dash-dot line) and COLD (red circles). The same is shown in (b) for CRAB (green diamonds) and COLD-CRAB (purple circles). CD enhanced techniques (COLD and COLD-CRAB) introduced in this work show a clear convergence to good fidelities at short driving times. All results are for the best final fidelity achieved over $500$ optimisations.}\label{fig:IsingUnconstrained} \end{figure*} Finally, we present and compare the results of the new method, COLD. In this case the Hamiltonian before adding LCD terms is given by Eq.~\eqref{eq:HannealOpt} and the coefficient of the first-order LCD is \begin{equation} \alpha = -\frac{h(h+\boldsymbol{\beta}) + h\frac{\lambda}{\dot{\lambda}} \dot{\boldsymbol{\beta}}}{4(h\lambda)^2 + (h+\boldsymbol{\beta})^2 + 4J^2}. \end{equation} Note that the optimisation of the additional control field also feeds into the coefficient of the adiabatic gauge potential during the dynamics as discussed in Sec.~\ref{sec:OLCD}. The results of the COLD approach for this two spin annealing protocol are shown in Fig.~\ref{fig:TwoSpin}(b), where we observe an improvement of the final state fidelity beyond what is possible with first-order LCD alone in Fig.~\ref{fig:TwoSpin}(a). In this example, LCD alone reaches a final state infidelity of $1-F=3\%$ at short times; however, COLD improves this error in the final state to $1-F=0.005\%$. This is due to the extended family of dynamical Hamiltonians in Eq.~\eqref{eq:expandedH} owing to the addition of an optimisable control field.
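The LCD-only limit of this protocol is straightforward to check numerically; the following Python sketch propagates the two-spin state under $H_0(t)$ plus the first-order LCD term (i.e.~with $\boldsymbol{\beta}=0$), using a simple step-wise matrix exponential. The driving time and step count are illustrative and this is not the optimisation code used for Fig.~\ref{fig:TwoSpin}.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

J, h = 0.5, 1.0
tau, steps = 0.01, 2000          # illustrative short driving time and step number

SX, SY, SZ = (np.kron(s, I2) + np.kron(I2, s) for s in (sx, sy, sz))
ZZ = np.kron(sz, sz)

lam = lambda t: np.sin(0.5*np.pi*np.sin(0.5*np.pi*t/tau)**2)**2
dlam = lambda t, d=1e-8: (lam(t + d) - lam(t)) / d          # numerical d(lambda)/dt
alpha = lambda l: -h**2 / (4*(h*l)**2 + h**2 + 4*J**2)      # first-order LCD coefficient
H0 = lambda l: -2*J*ZZ - h*SZ + 2*h*l*SX

psi = np.zeros(4, dtype=complex)
psi[0] = 1.0                                  # initial state |up, up>
target = np.linalg.eigh(H0(1.0))[1][:, 0]     # ground state at lambda = 1

dt = tau / steps
for n in range(steps):
    t = (n + 0.5) * dt
    H = H0(lam(t)) + dlam(t) * alpha(lam(t)) * SY   # H_0 + lambda-dot * A_lambda
    psi = expm(-1j * dt * H) @ psi

print("final state fidelity:", abs(np.vdot(target, psi))**2)
\end{verbatim}
Adding the optimisable field of Eq.~\eqref{eq:HannealOpt} with the corresponding $\alpha(\lambda,\boldsymbol{\beta})$, and optimising the $\beta^k$ with the methods of Sec.~\ref{sec:OptCont}, turns this loop into the COLD protocol discussed above.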
This result shows that COLD can provide an advantageous alternative to the addition of higher-order LCD which may be experimentally impractical. We have found that COLD performs better than LCD of the same order or BPO when the system dynamics are calculated numerically. This does not, however, imply anything about the performance of COLD in more complex scenarios, like in the case of an unknown target ground state. In that case the fidelity is a poor optimisation metric. There is, however, a way to come to the same conclusions as those presented in Fig.~\ref{fig:TwoSpin} without the need to compute the dynamics exactly. We can do this by first using a guess for the COLD protocol to find the approximate adiabatic gauge potential and then minimising the integral of the norm of the second-order correction to the adiabatic gauge potential along the path. Note, that the ground state can be in turn obtained through first order COLD, so there is no need to diagonalize the Hamiltonian. This integral should be small if COLD has implemented a dynamical Hamiltonian that makes the first-order adiabatic gauge potential the leading term. It is effectively a measure of the error of COLD and can be given by \begin{equation} \begin{aligned} \mathcal{I}_1 = \int_0^\tau dt^\prime \Big[& \bra{\psi_g(t^\prime)} \Gamma^2(t^\prime) \ket{\psi_g(t^\prime)} \\ &- (\bra{\psi_g(t^\prime)} \Gamma(t^\prime) \ket{\psi_g(t^\prime)})^2\Big]^{1/2}, \end{aligned} \end{equation} \noindent with $\ket{\psi_g(t)}$ the instantaneous ground state along the path and \begin{equation} \Gamma(t) = \gamma(t) \left( \sigma_1^y \sigma_2^x + \sigma_1^x \sigma_2^y \right), \end{equation} \noindent one of the second-order correction terms. In order to confirm this is the case, we compare the different paths -- COLD and LCD only -- in the two-spin example in order to determine if $\mathcal{I}_1$ is small for COLD. If $\mathcal{I}_1$ is small when compared to the same measure for lower-order LCD as $t\rightarrow 0$, then we know that COLD is enforcing a better dynamical Hamiltonian. In the case of the two spin annealing protocol we find that as $t\rightarrow0$, $\mathcal{I}_1\rightarrow 0.04$ for COLD and $\mathcal{I}_1\rightarrow 0.2$ for LCD, showing that COLD is minimising the second-order correction along the path. A simpler integral \begin{equation} \mathcal{I}_2 = \int_0^\tau dt^\prime |\gamma(t^\prime)|, \end{equation} also reflects this correction in this two spin example, with $\mathcal{I}_2 \rightarrow 0.03$ for COLD and $\mathcal{I}_2 \rightarrow 0.1$ for LCD as $t\rightarrow0$. This could be useful in more complex scenarios as $\mathcal{I}_2$ will be relatively simple to calculate. We also observe the reduction of the corresponding integrals of the $(\sigma^y_1\sigma^z_2 + \sigma^z_1\sigma^y_2)$ term of the second-order LCD. By minimising these integrals, it could be possible to extend the COLD approach to more complex scenarios, including where the exact calculation of the dynamics is not possible. \section{1D Ising Model} \label{sec:1dIsing} In this section we apply COLD for state preparation on a 1D Ising spin chain in the presence of a transverse and longitudinal field. We consider an annealing protocol where the aim is to prepare the ground state across the Ising phase transition. 
The annealing Hamiltonian is given by \begin{align}\label{eq:h0_ising} \begin{split} H_{0}(t) &= - J \sum_{j}^{N-1} \sigma^z_j\sigma_{j+1}^z + Z_0\sum_j^N \sigma_j^z \\ &+ \lambda(t) X_f \sum_j^N \sigma_j^x, \end{split} \end{align} where $Z_0$ is a small offset parameter to break ground state degeneracies and $X_f$ is the final x-field strength. Note, the breaking of the ground state degeneracies is not a requirement but allows for easier consideration of the adiabatic path. As before, $\lambda(t)$ is a scaling function that has the boundary conditions $\lambda(0) = 0$ and $\lambda(\tau) = 1$, with $\tau$ the driving time. This means we start from the ground state of all spins up and drive across the quantum phase transition to the ground state which is a superposition of all basis states. We again take the scaling function to be given by Eq.~\eqref{eq:Scalingfunc}. In this example, we use $X_f = 10J$ and $Z_0 = 0.02J$. For the Hamiltonian of Eq.~\eqref{eq:h0_ising}, the LCD to first and second order is well known, as the wave functions are entirely real. We take the first-order adiabatic gauge potential to be given by \begin{equation}\label{eq:LCD1} \mathcal{A}(\lambda) = \alpha \sum_{j}^N\sigma_j^y, \end{equation} \noindent where the coefficients for the general periodic spin chain of Eq.~\eqref{eq:h0_ising} are \begin{align} \label{eq:alphas} \alpha(\lambda) = \frac{1}{2} \frac{Z_0 X_f}{Z_0^2 + \lambda^2 X_f^2 + 2J^2}. \end{align} Note, that the quoted $\alpha$ above is technically for a periodic or infinite size system, with $J^2 \rightarrow J^2(1-1/N)$ for a finite system. However, we find that the inclusion of the factor for the finite system sizes we consider only changes the final achieved converged fidelities at short times by $\sim 10^{-6}\%$. The second-order adiabatic gauge potential is of the form \begin{equation} \begin{aligned} \mathcal{A}^{(2)}(\lambda) =& \alpha \sum_{j} \sigma_j^y + \gamma \sum_{j} (\sigma_j^x \sigma_{j+1}^y + \sigma_j^y \sigma_{j+1}^x) \\ & + \zeta \sum_{j} (\sigma_j^z \sigma_{j+1}^y + \sigma_j^y \sigma_{j+1}^z) , \end{aligned} \label{eq:SecondLCD} \end{equation} with the coefficients $\alpha$, $\gamma$ and $\zeta$ again obtained by minimising the action given by Eq.~\eqref{eq:actionCD} and solving the coupled set of equations numerically (see Appendix \ref{app:derivation} for a detailed derivation). \begin{figure}[t] \includegraphics[width=0.98\linewidth]{MaxAmp.png} \caption{Maximum amplitudes of CD terms in the Ising model annealing protocol for (a) first- and second-order LCD only with no additional optimal control fields and (b) the COLD approach optimised for the best final state fidelity implementing first-order LCD as shown in Fig.~\ref{fig:IsingUnconstrained} (a). The plot shows the maximum amplitude at each driving time for the first-order $\alpha$ (red circles) and the two second-order terms $\gamma$ (blue diamonds) and $\zeta$ (green triangles) as given in Eq.~\eqref{eq:SecondLCD} (although the second-order LCD is not actually implemented in COLD). 
An inversion in the strength of the second-order and first-order LCD terms for (a) no additional optimal control fields and (b) the addition of optimal control fields shows that COLD implements a dynamical Hamiltonain which is favourable for the applied order of LCD (first-order in this case).}\label{fig:MaxAmp} \end{figure} In this example, optimal control is implemented by introducing an additional driving field so that the dynamical Hamiltonian is given by \begin{align}\label{eq:h_beta} H_{\beta}(t, \boldsymbol{\beta}) = H_0(t) + \sum_j \boldsymbol{\beta}(t)\sigma^z_j, \end{align} with $\boldsymbol{\beta}$ being the terms to optimise over. We take our additional terms to again respect the boundary conditions $\boldsymbol{\beta}(0) = 0$ and $\boldsymbol{\beta}(\tau) = 0$, meaning a natural choice is \begin{align}\label{eq:optimsable_1} \boldsymbol{\beta}(t) = \sum_k^{N_k} \beta^k \sin(\omega_k t / \tau) = \sum_k^{N_k} \beta^k \sin(\omega_k f(\lambda)), \end{align} with $\omega_k = k 2\pi$ the $k$th principal frequency and $f(\lambda)$ given by Eq.~\eqref{eq:lambda}. To implement the CRAB algorithm discussed in Sec.~\ref{sec:OptCont}, we will use $k \rightarrow k(1+r)$ instead with $r$ drawn from a uniform random distribution $r \in [-0.5,0.5]$. As before, we choose the first order adiabatic gauge potential given by Eq.~\eqref{eq:LCD1} and find that the coefficients are \begin{align} \alpha(\lambda,\boldsymbol{\beta}) = \frac{X_f}{2} \frac{(Z_0 + \boldsymbol{\beta}) - \lambda\dot{\boldsymbol{\beta}}/\dot{\lambda}}{(Z_0 + \boldsymbol{\beta})^2 + \lambda^2 X_f^2 + 2 J^2}. \end{align} Note, with the introduction of the additional control fields $\boldsymbol{\beta}$ it is possible for $\alpha$ to be non-zero at the start or end of the protocol, as $\dot{\boldsymbol{\beta}}$ is not fixed to be zero. However, this can be enforced by a suitable choice of the additional control field, we will consider replacing $\alpha \rightarrow S(\lambda)\alpha$ where $S(\lambda)$ is a scaling function that tends to zero as $\lambda \rightarrow 0$ and $\lambda \rightarrow 1$. We find that the scaling function only has a minimal effect on the final fidelities observed. This issue could also be resolved by a suitable choice of $\boldsymbol{\beta}$, with our example drive being an extreme case as $\dot{\boldsymbol{\beta}}$ is maximal at the boundaries of the protocol. The suitable choice of the form of $\boldsymbol{\beta}$ in a given example is a problem we will leave for future work, with our focus being on the introduction of the COLD protocol. \begin{figure}[t] \includegraphics[width=0.98\linewidth]{ScalingN.png} \caption{Scaling of fidelities in the annealing protocol for the Ising model with (a) system size $N$ and (b) optimisation parameters $N_k$ at driving time $\tau=10^{-2}J^{-1}$. Plots show a comparison between BPO (blue diamonds) and COLD (red circles). In (a) we see that the COLD fidelity decreases as a function of $N$ but remains quite high when compared to BPO while (b) shows the non-existent improvement for both BPO and COLD with an increasing number of parameters in the $N=5$ spin case. Once again, plotted best fidelities are obtained across 500 optimisations.}\label{fig:ScalingN} \end{figure} We first compare the final state fidelity when using COLD versus BPO as shown in Fig.~\ref{fig:IsingUnconstrained}(a) for different driving times in a system of $N=5$ spins and a single $N_k=1$ optimisation coefficient. 
As expected, at long timescales the two methods agree as we approach the adiabatic limit of the dynamics. However, at shorter time scales the difference in behaviour is dramatic. We observe that the BPO approach fails in the case of very fast driving as the state gets stuck in the initial state but the COLD approach converges to $1-F \sim 10^{-3}$. Note, this is not achieved by the introduction of first-order LCD terms alone, as this will result in $F=0.0440$ for $\tau=10^{-3}J^{-1}$. COLD is instead achieving this by making the LCD term dominant for the dynamical Hamiltonian through the additional control fields. \begin{figure*}[t] \includegraphics[width=0.9\textwidth]{IsingcConstrained.png} \caption{Optimisation of the \emph{constrained} annealing protocol for the Ising model for $N=5$ spins with a maximum amplitude limit on each term in the Hamiltonian of Eq.~\eqref{eq:h0_ising} of $10J$. (a) Shows a comparison between BPO (blue diamonds) and COLD (red circles) which both give much lower fidelities than in the unconstrained case in Fig.~\ref{fig:IsingUnconstrained}, although COLD persists in giving better results. In (b) the comparison is between CRAB (green diamonds) and COLD-CRAB (purple circles) which show orders of magnitude better fidelities than those in (a), with COLD-CRAB eking out higher fidelities at short driving times. The plotted best results are obtained from 200 optimisations for each method.}\label{fig:IsingcConstrained} \end{figure*} To confirm this, we plot the maximum amplitudes of both the first- and second-order adiabatic gauge potentials in Fig.~\ref{fig:MaxAmp}, where Fig.~\ref{fig:MaxAmp}(a) shows the case of no optimisation and Fig.~\ref{fig:MaxAmp}(b) the case of applying COLD. We can see that without COLD the second-order $(\sigma^x_j\sigma^y_{j+1} + \sigma^y_j\sigma^x_{j+1})$ corrections to the LCD are far larger than the first-order, resulting in the small final state fidelities when only first-order LCD is implemented. In the case of COLD, this relationship reverses and the first-order LCD terms dominate the dynamics. These results are further evidence that the implementation of COLD through the minimisation of the second-order corrections discussed in Sec.~\ref{sec:TwoSpin} could be fruitful in more complex and/or larger systems, where the dynamics can not be calculated. We find that the results of BPO and COLD at short driving times are stable against increasing system size, as shown for $\tau = 10^{-2}J^{-1}$ in Fig.~\ref{fig:ScalingN}(a), with only a small decrease in final state fidelity for larger systems with COLD. Similarly, increasing the number of optimisation coefficients $N_k$ results in little improvement in the values obtained at short times for this example, as shown in Fig.~\ref{fig:ScalingN}(b). It is possible that in more complex systems, more optimisation coefficients will be needed to gain a larger advantage. We also note that by increasing the number of coefficients, we are increasing the complexity of the cost function landscape to be explored by the minimisation procedure, hence leading to slightly worse final fidelities This can mean that alternative approaches than the Powell minimisation used so far, \@e.g.~that of CRAB, could be better suited to probing the cost function for high $N_k$. We also note that this lack of improvement in the results is likely the consequence of the form of the control field given by Eq.~\eqref{eq:optimsable_1} rather than due to a failure of the optimiser in the face of a complex parameter space. 
We find that the parameter space is relatively smooth in the case of $N_k = 1,2,3$ and a better solution for this form of control field simply does not exist. We now consider the combined method of COLD-CRAB for this annealing example as shown in Fig.~\ref{fig:IsingUnconstrained}(b). We point out that with our application of CRAB in this scenario we are not enforcing $\boldsymbol{\beta}$ to be zero at the start and end of the dynamics, allowing for their to be a tuning of the $z$-field offset. This is consistent between CRAB and COLD-CRAB and therefore does not influence our comparison of the two. First, it is important to note that CRAB alone results in a overall speedup of the dynamics for a high final state fidelity $1-F\sim 10^{-3}$ at long time-scales. However, CRAB still suffers from getting stuck in the initial state at fast driving times and the final state fidelity again tends to zero. This is not the case for COLD-CRAB, which converges to large final state fidelities $1-F \sim 10^{-3}$ at short driving times $\tau \leq 10^{-1}J^{-1}$. Note that the difference between the convergence to final state fidelities are only marginally different between COLD and COLD-CRAB at longer times, but at short times COLD-CRAB performs a lot better. Further improvement could be gained by combining COLD with more advanced versions of CRAB or other optimal control methods. As shown in Fig.~\ref{fig:MaxAmp}, the amplitude of the driving required to achieve the fidelities discussed so far scales with the driving time. Practical scenarios will necessarily place limits both on achievable driving times and the maximum amplitude of any term that is being driven. However, the scaling of the drivings shown do not mean that everything diverges in the limit of $\tau \rightarrow 0$. To see this we can first write the Scr\"odinger equation for COLD as \begin{equation} i \hbar d_t \ket{\psi}=\left(H_\beta+\dot\lambda \mathcal{A}_\lambda\right) \ket{\psi}, \end{equation} we then divide through by $\dot{\lambda}$ to get \begin{equation} i \hbar d_\lambda \ket{\psi}=\left(\frac{H_\beta}{\dot \lambda}+\mathcal{A}_\lambda\right) \ket{\psi}, \end{equation} in the limit of $\tau \rightarrow 0$ then $\dot\lambda\to \infty$ to result in the Hamiltonian term disappearing, or in other words, we turn off the Hamiltonian. We then only drive the system in the $\tau \to 0$ limit with the COLD or LCD driving term: \begin{equation} \label{eq:OnlyA} i \hbar d_\lambda \ket{\psi}=\mathcal{A}_\lambda \ket{\psi}. \end{equation} In this limit then $\lambda$ plays the role of time, and this could then be implemented in a practical scenario in finite time as it corresponds to some manipulation of the couplings in the system. This renormalised time cannot then be infinitesimally short if the couplings are bounded but we have shown that the protocol does not diverge as $\tau \rightarrow 0$. In the case of a spin chain, evolving under Eq.~\eqref{eq:OnlyA} is effectively to first order in LCD implementing independent single spin rotations along the chain, and COLD can be easily applied. If it is not possible to switch off the Hamiltonian as discussed above then as an alternative we can implement COLD with experimental constraints accounted for directly in the optimal control minimisation. We consider an extreme example of constraints to show that even in this scenario COLD can provide an advantage and corresponding speed-up. 
In the constrained case the annealing protocol remains that of Hamiltonian~\eqref{eq:h0_ising} but we choose to introduce a bound of $X_f$ on the maximum amplitude of all drivings. This makes it so that no optimal control or LCD term can go beyond the original amplitude of the $x$-field drive. We show the final state fidelities achieved for the constrained example in Fig.~\ref{fig:IsingcConstrained}. As can be seen in Fig.~\ref{fig:IsingcConstrained}(a), COLD provides a substantial improvement beyond what is achievable with BPO. BPO manages $F < 0.5$ for $\tau < 1J^{-1}$, but COLD can reach final state fidelities $F\sim 0.9$ for $\tau < 1J^{-1}$. The real improvement, however, comes with the application of CRAB and COLD-CRAB. CRAB already improves the fidelities substantially, and would allow for a speed up in the annealing protocol but with COLD-CRAB the final state fidelities are even better, with $F\sim 0.99$ achievable when approaching $\tau\sim 0.1J^{-1}$. Signs are seen of the onset of the convergence to small values for COLD-CRAB in Fig.~\ref{fig:IsingcConstrained}(b) before the maximum amplitude required becomes too large and the short time results tend towards zero fidelity and the state being stuck again. With this example and the discussion on implementation via turning off the Hamiltonian, we have shown that COLD is capable of delivering improvements beyond other schemes even for practical problems with strict and rather extreme constraints. \section{Transport in a Synthetic Lattice}\label{sec:lattice} \begin{figure*}[t] \includegraphics[width=0.98\textwidth]{synthetic.png} \caption{Optimisation of state transfer in a synthetic lattice. In (a) we compare the fidelities obtained via the bare ARP protocol (pink dashed line) and first-order LCD previously implemented in Ref.~\cite{meier_counterdiabatic_2020} (purple dash-dot line) to BPO (blue diamonds) and the COLD method (red circles). (c) Maximum amplitude of the tunneling term at each driving time for LCD (green diamonds) as given by Eq.~\eqref{eq:tunneling} as well as COLD (red triangles) which includes additional control parameters as shown in Eq.~\eqref{eq:tunneling_opt} and BPO (blue triangles) which omits the modifications due to CD but retains the control terms $\boldsymbol{\beta}$. In both (a) and (c) we simulate $N = 7$ lattice sites and use $N_k = 1$ parameter for optimisation of BPO and COLD. (b) Scaling of fidelities with increasing number of lattice sites (where $N_k = 1$) for both COLD (red circles) and BPO (blue diamonds) noting that the latter performs very poorly for $N > 9$. (d) does the same for the number of parameters while keeping $N=7$, with the trend indicating that increasing $N_k$ does not lead to better fidelities in either the BPO or COLD case. Note that both (b) and (d) are simulated for driving time $\tau = 0.5 J^{-1}$ and the best fidelities are obtained across 500 optimisations.}\label{fig:Synthetic} \end{figure*} The efficient transfer of states between opposite ends of a lattice could have future applications in the settings of quantum computation and simulation due to its promise of efficient transport of information \cite{lang2017}. This objective is often tackled in the setting of ultracold atoms in optical lattices. 
While the problem can be tuned to be a single-particle system and the analytical solutions of the corresponding instantaneous Schr\"odinger equation are known \cite{Hatsugai1993,Hugel2014} even for a finite system \cite{Duncan2018exact}, efficient evolution for state transfer is not straight-forward. This is due to the fact that the majority of the states are delocalised across the lattice, meaning that the $\ket{\psi}\bra{\psi}$ terms of the CD Hamiltonian of Eq.~\eqref{eq:counterdiabatic} are global in reach. It is normal to consider this system in the tight-binding limit where the implementation of global terms is not straightforward. Such terms can be generated via the interactions of the atoms with cavity modes \cite{landig2016,keller2017} or from dipolar interactions \cite{baranov2002,menotti2008,trefzger2011}. However, it would be ambitious to expect this control to be general enough to implement the CD Hamiltonian of the exact solutions. This is one of the reasons that LCD has been pursued in this setting. Recently, LCD has been successfully applied to improve an adiabatic rapid passage (ARP) protocol for population transfer across a synthetic lattice \cite{meier_counterdiabatic_2020}. In this realisation, population transfer was achieved in a synthetic tight-binding lattice of laser coupled atomic momentum states. We will consider the same problem as in Ref.~\cite{meier_counterdiabatic_2020} but with the improvement that can be gained by COLD. This system is described by the Hamiltonian \begin{align}\label{eq:Hlattice} \begin{split} H_0(t) &= - \sum_n J_n(t)(c_n^{\dag}c_{n+1} + H.c.) \\ &+ \sum_n V_n(t) c_n^{\dag}c_n, \end{split} \end{align} where $J_n(t)$ is the time-dependent tunnelling that describes the nearest-neighbour coupling, $V_n(t)$ is the on-site energy offset with respect to neighbouring sites and $c_n^{\dag}$($c_{n}$) is the creation(annihilation) operator on a given synthetic lattice site. In the ARP protocol, the population gets moved from one end of the lattice to the other by linearly ramping the lattice from a positive tilt to a negative tilt via \begin{align}\label{eq:t_and_v} J_n(t) = J_0(1.1 - \lambda) = J_0\Big(0.1 + \frac{t}{\tau}\Big), \\ V_n(t) = n V_0 2 (\lambda - 1/2) = nV_0\Big(1 - \frac{2t}{\tau}\Big), \end{align} where $V_0 = 4J_0$ is the initial site energy slope and $J_0$ is the characteristic tunnelling scale of the lattice. The scaling function in this case is given by \begin{equation} \lambda(t) = 1-\frac{t}{\tau}. \end{equation} In order to implement LCD as shown in Ref.~\cite{meier_counterdiabatic_2020}, the first order LCD can be accounted for by taking \begin{align}\label{eq:tunneling} J_n(t) \rightarrow J_{n, \mathrm{CD}}(t) e^{-i\phi_{n, \mathrm{CD}}(t)}, \end{align} \noindent where \begin{align}\label{eq:t_phi_cd} J_{n, \mathrm{CD}}(t) = \sqrt{J_n(t)^2 + (\alpha_n(t)/\tau)^2}, \\ \phi_{n, \mathrm{CD}}(t) = \arctan\left(-\frac{J_n(t)\tau}{\alpha_n(t)}\right), \end{align} \noindent and $\alpha_n(t)$ is the CD terms which can be found by solving a set of linear equations \begin{align} \begin{split} &-3(J_n J_{n+1})\alpha_{n+1} + (J_{n-1}^2 + 4J^2_n + J_{n+1}^2)\alpha_j \\ &- 3(J_n J_{n-1})\alpha_{n-1} + (V_{n+1} - V_n)^2 \alpha_n \\ &= -\partial_{\lambda}J_n (V_{n+1} - V_{n}). 
\end{split} \end{align} In order to implement COLD we include additional terms to the tunnelling of the lattice \begin{align}\label{eq:tunneling_opt} J_n(t) \rightarrow J_n(t, \boldsymbol{\beta}) = J_n(t) + \boldsymbol{\beta}(t), \end{align} which can then be incorporated into the forms of both $J_{n, CD}(t)$ and $\phi_{n, CD}(t)$. We again want the additional control terms to go to zero around the problem boundaries and a natural choice is the same as in the Ising spin chain example in Eq.~\eqref{eq:optimsable_1}. The parameters $\boldsymbol{\beta}$ are optimised as before by minimizing with respect to the fidelity of the final state, where the population has been fully transferred to the opposite lattice site. We first consider a system size of $N=7$ sites which was successfully experimentally probed in Ref.~\cite{meier_counterdiabatic_2020}, where final state fidelities of $0.75$ were achieved for $\tau = 1$ms with a final tunnelling strength of $J/\hbar = 1/2\pi kHz$ (equivalent to $\tau \sim 1 J^{-1}$ in our units). We initially confirm the breakdown of ARP in this setting for fast times, and the success of the LCD protocol at short times, as shown in Fig.~\ref{fig:Synthetic} (a) and found in Ref.~\cite{meier_counterdiabatic_2020}. Implementing BPO on its own manages to enhance the achievable fidelities at intermediate times of $\tau > 0.03 J^{-1}$. However, eventually, as observed in all scenarios in this work, BPO becomes stuck in the initial state at fast times, and the fidelity goes to zero. Implementing the newly introduced COLD protocol achieves an order of magnitude improvement in the fidelity over LCD. This is also plotted in Fig.~\ref{fig:Synthetic}(a) alongside previous results of ARP and first-order LCD. One concern could be that COLD is achieving this improvement by simply pumping power into the tunnelling term, but as we can see in Fig.~\ref{fig:Synthetic}(c) the maximum amplitude of the tunnelling term tracks that of LCD. A key issue for experiments is the maximum amplitude achievable by a driving term and with this result we can stipulate that COLD is likely to be feasible in the same regimes as LCD in this synthetic lattice system. There is single outlier at intermediate times as indicated by the single point peaking in maximum amplitude in Fig.~\ref{fig:Synthetic}(c), this is the exception to the rule, where the optimisation has found a marginally higher fidelity (see the offset point in Fig.~\ref{fig:Synthetic}(a)) by pumping in power. A large concern for state transfer techniques is the robustness of a protocol with respect to an increasing system size. We show the best achievable fidelities with increasing system size for both BPO and COLD in Fig.~\ref{fig:Synthetic}(c). While both protocols show a decreasing fidelity with system size as is to be expected, once again COLD does not suffer from getting stuck in the initial state. This is shown by the BPO fidelities going to unity for large systems in Fig.~\ref{fig:Synthetic}(c), and is the same mechanism for this as for the short driving times in Fig.~\ref{fig:Synthetic}(a). Another concern could be that BPO will beat COLD if enough parameters are allowed for the optimisation, i.e. if we increase $N_k$ enough. We observed no evidence of this for the Ising model example and we again do not observe this in this synthetic lattice example, as is shown in Fig.~\ref{fig:Synthetic}(d). Small improvements are made in the fidelities achieved with BPO and COLD for larger $N_k$ but this is not substantial. 
\section{Discussion and outlook} \label{sec:conclusions} We have introduced a new hybrid approach combining quantum optimal control and shortcuts to adiabaticity: COLD. Inspired by the successes of LCD, where diabatic transitions are suppressed and locality conditions can be met, COLD improves on its methodology by combining it with quantum optimal control. The natural way to enhance the performance of LCD is by introducing higher order CD terms, but these are often non-local and difficult to engineer in experiments. COLD circumvents this by allowing for additional control fields that extend the family of dynamical Hamiltonians which can be explored. In this way, our method may find the best possible path where the effect of lower-order LCD is most relevant and higher order corrections are suppressed. COLD has a clear potential in efficiently speeding up adiabatic evolution in various settings. We demonstrate this numerically via several example protocols which indicate improvements beyond a classical optimisation approach BPO as well as LCD of different orders. Our work shows that COLD reduces the strength of higher order LCD corrections, and that it performs well for increasing system sizes. We have shown that COLD can be implemented in the limit of fast driving by a `switching off' of the original dynamical Hamiltonian. For scenarios where removing the Hamiltonian is not possible, we have shown that an alternative way to implement COLD is to use a bounded optimisation where amplitudes are restricted. We find that both the COLD and COLD-CRAB protocols perform extremely well in this setting. COLD will be most beneficial when the LCD is only realisable to a certain order but the higher order corrections are large. This means the diabatic transitions are not being sufficiently suppressed by the choice of LCD and COLD can be used to find the dynamical Hamiltonian for which the required order of LCD term dominates. Note, that this goes the other way too, with COLD not providing substantial improvements when the chosen lower order LCD is small across the path. This can be thought of as being the case in two limits. First is the adiabatic limit, for which any CD correction is small and COLD will tend towards the adiabatic result. Second, the low-order LCD terms can be small compared to the driving as the exact CD would be correcting transitions due to interactions at longer ranges. In this scenario, the order of LCD being implemented with COLD needs to be increased, so that the CD term is accounting for the longer range terms. While we have only utilised COLD with first-order LCD in this work, it can be applied with higher-order LCD terms. We found this to be the case when we tried to apply COLD to the protocols for \@e.g.~Greenberger-Horne-Zeilinger state generation in arrays of Rydberg atoms in Ref.~\cite{omran_generation_2019}. COLD with first- or second-order LCD could not improve upon the results of BPO or CRAB. This was due to both the first- and second-order corrections being smaller than the dynamical terms of the original Hamiltonian by two orders of magnitude. It is perhaps not surprising when we consider that the Hamiltonian of this protocol includes long-range couplings. In this case, higher-order corrections would need to be implemented with COLD, and finding methods for executing these now highly non-local terms may be a future path of research. COLD may also be applied to more complex systems where exact dynamics are not possible, \@e.g. due to an excessively large Hilbert space. 
This may be achieved by variationally minimising the integrals of the coefficients for the higher order corrections to the LCD. This was briefly sketched out in the example of two spin annealing, and we have shown that COLD functions by minimising the strength of the higher order LCD corrections along a given path. A further option is to combine COLD with one of a large variety of numerical optimal control methods, as we have done for the example of CRAB. We have shown a substantial improvement for state preparation in the Ising model that can be obtained from the COLD-CRAB combination - particularly in the constrained case. Both fusions of COLD with advanced optimal control methods as well as the minimisation of higher order LCD term amplitudes for complex systems are two potential extensions which could prove fruitful with further study. \begin{acknowledgements} Work at the University of Strathclyde was supported by the EPSRC Quantum Technologies Hub for Quantum Computing and Simulation (EP/T001062/1), and the European Union’s Horizon 2020 research and innovation program under grant agreement No. 817482 PASQuanS. A.P. acknowledges support from NSF under Grant DMR-2103658 and by the AFOSR under Grants No. FA9550-16-1-0334 and FA9550-21-1-0342. \end{acknowledgements} \section{Introduction}\label{sec:intro} Time-dependent manipulation of few and many-particle quantum systems is important across all implementations of quantum computing and simulation. In such processes, decoherence and undesired transitions reducing the state fidelity are relatively ubiquitous. One important example is given by the undesired transitions that can occur between instantaneous eigenstates of the dynamical Hamiltonian upon the application of an external drive. This is why many driving protocols rely on adiabatic dynamics, where the system follows the instantaneous eigenstates and transitions are naturally suppressed. Ideal adiabatic processes are reversible making them - in principle - robust. However, to approach ideal adiabatic processes the dynamics must always be very slow, requiring compromises on the time-scales of competing heating and decoherence processes. Speeding up adiabatic protocols to enable their completion within the system's coherence time is important for the development of any quantum technologies relying on such protocols \cite{acin2018quantum}. One approach to do this is the implementation of optimal driving protocols, which aim to end up with the system in a desired final state. For example, numerically optimised paths can be employed to avoid points where gaps in the spectrum of the system become small, or additional control fields can be tuned to increase the size of these gaps \cite{kirk2004optimal,glaser2015training,AlessandroBook2007}. In broad terms, this is the goal of protocols collectively referred to as quantum optimal control. Another option is to design techniques which speed up the adiabatic dynamics, often termed shortcuts to adiabaticity (STA). The primary aim of STA is to entirely remove or suppress diabatic transitions between instantaneous eigenstates of the dynamical Hamiltonian \cite{Torrontegui2013,GueryOdelin2019}. One particularly successful technique is counterdiabatic driving (CD), which was first utilised in physical chemistry by Demirplak and Rice \cite{demirplak2003,demirplak2005}, and was independently introduced by Berry \cite{berry_transitionless_2009} under the name `transitionless driving'. 
CD aims to suppress losses that arise due to fast deformations of the system far from the adiabatic limit by analytically compensating for them in the Hamiltonian. In general, to suppress diabatic losses exactly, the full analytical or numerical solutions of the Schr\"odinger equation are required. This makes the implementation of CD in complex systems - \@e.g.~for many-body dynamics - difficult and requires the need for new techniques to be introduced. Links between optimal control and STA have existed throughout the development of both approaches \cite{stefanatos2021,Zhang2021connection}, but there are few examples of their explicit combination in a way that exploits their complementary nature. Some attempts to achieve this have included an emulation of CD through fast oscillations of the original Hamiltonian \cite{Petiziol2018fast, Petiziol2019accelerating} as well as through recent advances in reinforcement learning methods aimed at optimizing quantum protocols~\cite{bukov_reinforcement_2018}. Such methods have been shown to achieve a significant improvement in performance when implemented using concepts borrowed from CD~\cite{yao_reinforcement_2020}. In this work, we offer a significantly different new approach in combining elements from STA and quantum optimal control which we will call \textit{counterdiabatic optimised local driving} (COLD). A key ingredient in the development of COLD is a recent approach designed for implementing CD in the setting of larger, more complex systems: local counterdiabatic driving (LCD) \cite{sels_minimizing_2017,Kolodrubetz2017geometry, gjonbalaj2021counterdiabatic}. LCD offers a method to derive \emph{approximate} CD protocols, with the aim of suppressing undesired transitions instead of fully eliminating them. This allows it to account for some physical constraints of the system, \@e.g.~locality conditions. However, the approximate nature of the LCD protocol can lead to poor performance, necessitating the introduction of additional non-local, long-range corrections \cite{sels_minimizing_2017}. If all possible corrections are added, then LCD is equivalent to the normal analytical approaches of CD, but the additional terms are generally difficult to control experimentally. COLD offers an alternative approach, with additional control fields which allow for an optimisation of the dynamical Hamiltonian for a given local form of LCD. The impact of more complex corrections can then be radically reduced, giving a corresponding improvement in the desired protocol. The structure of this paper is as follows: first, we give a detailed description of the new method, COLD, with a focus on the elements of quantum optimal control and CD required for its implementation. In Sec.~\ref{sec:TwoSpin} we explore a 2-spin annealing protocol, that showcases the strengths of COLD. Sec.~\ref{sec:1dIsing} describes and analyses the improvements gained with COLD and its combination with other optimal control techniques in the case of state preparation in the Ising model. Then in Sec.~\ref{sec:lattice} we show the improvement that COLD can achieve on the recently realised example of LCD for state transfer on a synthetic lattice in ultracold atoms. A list of abbreviations used in this work can be found in Table.~\ref{table} for reference. 
\begin{table}[h] \begin{tabular}{p{2cm} | p{6cm}} \toprule \multicolumn{1}{m{2cm}}{\centering Abbreviation} & \multicolumn{1}{m{6cm}}{\centering Meaning} \\ \midrule STA & shortcuts to adiabaticity \\ \hline CD & counterdiabatic driving \\ \hline LCD & local counterdiabatic driving \\ \hline COLD & counterdiabatic optimised local driving \\ \hline BPO & bare Powell optimisation \\ \hline CRAB & chopped randomised basis \\ \hline ARP & adiabatic rapid passage \\ \bottomrule \end{tabular} \caption{List of abbreviations used throughout the manuscript.}\label{table} \end{table} \section{An Introduction to Counterdiabatic Optimised Local Driving}\label{sec:intro_olcd} \subsection{Quantum Optimal Control}\label{sec:OptCont} In the context we consider, we employ quantum optimal control to optimise the function $f(\psi,\boldsymbol{\beta})$ in the Schr\"odinger equation \begin{equation} \dot{\psi} = f(\psi,\boldsymbol{\beta}), \label{eq:Control} \end{equation} where $\psi$ is the quantum wave function and $\boldsymbol{\beta}$ is the set of optimisable control parameters. Optimisation of Eq.~\eqref{eq:Control} in most cases means taking the system from an initial state $\ket{\psi_0}$ to a final target state $\ket{\psi_T}$ by finding the optimal values of $\boldsymbol{\beta}$ with respect to some target metric (e.g.~the time taken to evolve the system from $\ket{\psi_0}$ to $\ket{\psi_T}$). There is a large variety of techniques available to achieve this goal \cite{glaser2015training,koch2016controlling}. The success/target metric needs to be defined prior to the optimisation of $\boldsymbol{\beta}$. Often this is done by constructing a \emph{cost function}, which in turn defines the optimisation landscape. In general, we can optimise for any desired property of the final state of the system, with some examples being the entropy, energy, energy fluctuations or some other observable. A commonly used cost function in state preparation is related to the fidelity of the final, post-evolution state $\ket{\psi_f}$ with respect to the target state: \begin{align} \label{eq:lossfunc} \mathcal{C}(\ket{\psi_f}) = 1 - \left|\braket{\psi_T}{\psi_f}\right|^2. \end{align} In performing such a numerical optimisation, it is common to take the target state to be parameterised via a Hamiltonian split into two parts. The first is the so-called \emph{bare} Hamiltonian $H_0(t)$, which can be time-dependent and describes the dynamics of the quantum system in question. The second part is then an additional driving term that includes the control parameters $\boldsymbol{\beta}(t)$ and operators $\mathcal{O}_{\rm opt}$ which provide additional degrees of freedom in the dynamics of the system. The full Hamiltonian of the control system is then: \begin{align} \label{eq:h_optimal_control} H_{\beta}(t,\boldsymbol{\beta}) = H_0(t) + \boldsymbol{\beta}(t)\mathcal{O}_{\rm opt}. \end{align} The parameters $\boldsymbol{\beta}(t)$ can then be optimised for the optimal dynamics with respect to the metric defined by the cost function. In this work, we use the Powell minimization approach \cite{powell1964efficient} for the numerical optimisation as implemented in Python's \textit{SciPy} \cite{Pauli2020SciPy}. When performing this optimisation without any CD terms in the Hamiltonian, we refer to this approach as bare Powell optimisation (BPO), with bare referring to the lack of CD. 
Furthermore, we implement the chopped rendomised basis (CRAB) approach \cite{Caneva_chopped_2011,muller2021} and combine its methodology with that of COLD, to obtain the method of COLD-CRAB. CRAB expands the size of the parameter landscape by employing randomisation, usually in the optimised pulse driving the system. The approach was first developed for quantum many-body problems whose simulation requires the time-dependent density matrix renormalization group, despite the fact that these were thought to not be tractable in the quantum control setting \cite{brif2010,muller2021}. CRAB has benefits in that it can avoid traps in the control landscape \cite{Rach2015}, and has built-in flexibility for open-loop or closed-loop optimisation \cite{heck2018,muller2021} although these advantages come at a higher computational cost due to requiring a far larger search-space for the optimisation. \subsection{Counterdiabatic Driving}\label{sec:LCD} An important class of optimisation problems deals with the case where the initial and final states are ground states of a Hamiltonian $H_0(t)$ at some initial $t=t_i$ and final $t=t_f$ time. In these cases, the adiabatic theorem guarantees that for an infinitesimally slow transformation of the system $t_f-t_i\to\infty$, it should follow the instantaneous (non-degenerate) ground state of $H_0(t)$ and hence reach the target state with unit fidelity. This process is generally known as quantum annealing. In large, complex systems with many degrees of freedom, quantum annealing tends to require prohibitively long protocol times due to vanishingly small gaps typically present in such systems. This often makes annealing protocols impractical \cite{farhi2008how}. It has been found that this problem can be formally overcome by using CD, where velocity-dependent terms are added to the Hamiltonian analytically enforcing the adiabatic wave function to be the solution of the time-dependent Schr\"odinger equation \cite{demirplak2003,demirplak2005,berry_transitionless_2009}. In this case, the dynamical state will follow the instantaneous eigenstate with no transitions regardless of the driving time. The form of the dynamical Hamiltonian enforcing this is \cite{berry_transitionless_2009}: \begin{equation}\label{eq:counterdiabatic} \begin{aligned} & H_{\mathrm{CD}}(t) = H_0 (t) \\ & + i\hbar \sum_n (\ket{\partial_t n}\bra{n} - \bra{n}\ket{\partial_t n}\ket{n}\bra{n}), \end{aligned} \end{equation} with $\ket{n}\equiv \ket{n(t)}$ the $n$-th eigenstate of the instantaneous Hamiltonian $H_0(t)$. The last term enforces the phases ($\bra{n}\ket{\partial_t n}$) on the instantaneous eigenstates, which are arbitrary and thus will be omitted. In general, knowledge of the CD Hamiltonian of Eq.~\eqref{eq:counterdiabatic} requires knowledge of the full spectrum of $H_0(t)$ at each instantaneous moment in time. \subsection{Counterdiabatic Optimised Local Driving} \label{sec:OLCD} We will now introduce the main idea of COLD. The principle is to take the same approach as Sec.~\ref{sec:LCD} but with the original Hamiltonian given by $H_\beta(t,\boldsymbol{\beta})$, see Eq.~\eqref{eq:h_optimal_control}. Quantum Annealing then applies to the whole family of control Hamiltonians $ H_{\beta}(t,\boldsymbol{\beta})$ as long as the additional control terms $\boldsymbol{\beta}(t)$ vanish at the protocol boundaries: $\boldsymbol{\beta}(t_i)=\boldsymbol{\beta}(t_f)=0$. This flexibility was explored in finding the optimal adiabatic path characterized by e.g. 
the shortest distance between the initial and the final states, i.e. a geodesic \cite{tomka2016geodesic}. A similar geodesic approach was developed in the context of dissipative systems to minimize energy losses~\cite{Sivak2012Geodesic}. During the protocol, a dynamical Hamiltonian $H_\beta(t,\boldsymbol{\beta})$ generally induces transitions between the quantum states that it drives and the question about what is the optimal path remains open. The Hamiltonian $H_\beta(t,\boldsymbol{\beta})$ and its eigenstates depend on time only through the driving parameters, which include $\boldsymbol{\beta}$ and any additional control terms in the particular protocol. This makes it convenient to introduce a path in the coupling space parametrized by a dimensionless parameter $\lambda\in [0,1]$ such that both $H_0$ and $\boldsymbol{\beta}$ are functions of $\lambda$ satisfying $H_\beta(\lambda=0)=H_{0}(t_i)$ and $H_\beta(\lambda=1)=H_{0}(t_f)$, i.e. being equal to the initial and the final Hamiltonian at the protocol boundaries. By construction this implies that any additional fields introduced to the bare Hamiltonian $H_0$ must go to zero at the boundaries. In this way, any protocol can be uniquely characterized by first specifying the path $\boldsymbol{\beta}(\lambda)$ in the coupling space manifold and then the time dependence $\lambda(t)$ along it. The path determines the sequence of couplings of the Hamiltonian during time evolution and hence the sequence of ground state wave functions followed by the driven state. Furthermore, the time dependence encodes the speed of traversing this path. We can then introduce a hermitian operator called the (path-dependent) adiabatic gauge potential~\cite{sels_minimizing_2017}: $\mathcal A_\lambda=i \hbar \sum_n \ket{\partial_\lambda n}\bra{n}$, which satisfies a closed form equation, \begin{equation} \label{eq:AGP_eq} \left[\partial_\lambda H_\beta+{i\over \hbar} [ \mathcal A_\lambda,H_\beta],H_\beta\right]=0, \end{equation} where $H_\beta \equiv H_\beta(\lambda,\boldsymbol{\beta}(\lambda))$ and $\mathcal{A}_\lambda \equiv \mathcal{A}_\lambda(\lambda,\boldsymbol{\beta}(\lambda))$. Then the CD Hamiltonian reads \begin{align}\label{eq:counterdiabaticLCD} H_{\mathrm{CD}}(\lambda,\boldsymbol{\beta}(\lambda)) = H_\beta(\lambda,\boldsymbol{\beta}(\lambda)) +\dot \lambda \mathcal A_\lambda(\lambda,\boldsymbol{\beta}(\lambda)), \end{align} and is equivalent to the CD Hamiltonian of Eq.~\eqref{eq:counterdiabatic} given knowledge of the exact adiabatic gauge potential. However, generally the adiabatic gauge potential is a very non-local object and solutions of Eq.~\eqref{eq:AGP_eq} are unstable to small perturbations containing exponentially many terms in the number of degrees of freedom. LCD aims to find approximate gauge potentials that satisfy particular requirements like robustness and locality, thus circumventing many of the difficulties in determining the second component in Eq.~\eqref{eq:counterdiabatic} and~\eqref{eq:counterdiabaticLCD} exactly. The goal, in essence, is to suppress the most relevant diabatic effects rather than completely eliminate them. This method has recently been experimentally implemented to speed up state transfer for synthetic lattices in ultracold atoms \cite{meier_counterdiabatic_2020}, for preparing states in nuclear-magnetic-resonance systems \cite{zhou_experimental_2020}, and annealing protocols on an IBM quantum computer \cite{Hegade2021Shortcuts}. 
Following the methods of Ref.~\cite{sels_minimizing_2017}, the problem of finding the optimal adiabatic gauge potential can be cast as the minimisation of the Hilbert-Schmidt norm of the operators \begin{equation}\label{eq:Goperator} G_{\lambda}= \partial_{\lambda}H_\beta + i\comm{\mathcal{A}_\lambda}{H_\beta}, \end{equation} which is equivalent to minimisation of the action \begin{equation}\label{eq:actionCD} \mathcal{S}(\mathcal{A}_{\lambda}) = \Trace{\left[G_{\lambda}(\mathcal{A}_{\lambda})^2\right]}, \end{equation} with respect to $\mathcal{A}_{\lambda}$. In most cases, this is achieved by first choosing an operator ansatz - \@i.e. a set of linearly independent operators $\{\mathcal{O}_{\rm LCD}\}$ - and then using this set as an operator basis for the adiabatic gauge potential $\mathcal A_\lambda=\sum_j \alpha_j \mathcal O_{\rm LCD}^{(j)}$. The action can then be minimized with respect to the the set of coefficients, ${\bm \alpha}$. In the example of an Ising spin chain we may take $\mathcal{A}_\lambda = \sum_j^N \alpha_j \sigma^y_j$, where $j$ labels the $N$ chain sites, and $\{\mathcal{O}_{\rm LCD}\}$ is a set the $y$-pauli matrices. Without any additional control fields $\boldsymbol{\beta}$, LCD is essentially an informed choice of the operator set $\{\mathcal{O}_{\rm LCD}\}$ in a way that the resulting control protocol from the minimisation of Eq.~\eqref{eq:actionCD} is optimal for a given $H_0(\lambda)$. In this case we explore the family of Hamiltonians \begin{equation} H_{\rm LCD}(\lambda) = H_0(\lambda) +\sum_j \alpha_j(\lambda) \mathcal{O}_{\rm LCD}^{(j)}. \end{equation} The performance of such LCD protocols is determined by how accurately the variational manifold spanned by the set $\{\mathcal{O}_{\rm LCD}\}$ can approximate an exact $\mathcal{A}_{\lambda}$ such that Eq.~\eqref{eq:AGP_eq} holds. In the case of the new protocol COLD, we allow for extra exploration of the family of Hamiltonians due to the additional control fields as in Eq.~\eqref{eq:h_optimal_control}. This expands the family of Hamiltonians to \begin{equation}\label{eq:expandedH} H_\beta(\lambda,\boldsymbol{\beta}) = H_0(\lambda) + {\bm \alpha}(\lambda,\boldsymbol{\beta}) \mathcal{O}_{\rm LCD} + \boldsymbol{\beta}(\lambda) \mathcal{O}_{\rm opt}. \end{equation} Note, that the coefficients of the optimal control field change the form of the LCD driving coefficients, i.e. $\bm \alpha = f(\lambda,\boldsymbol{\beta})$. The aim of COLD is then to choose $\boldsymbol{\beta}(\lambda)$ and $\mathcal{O}_{\rm opt}$ such that the LCD term is optimal for the dynamical Hamiltonian $H_0(\lambda)+\boldsymbol{\beta}(\lambda)\mathcal{O}_{\rm opt}$. We will focus on the optimisation of the control parameters $\boldsymbol{\beta}(\lambda)$ for a given choice of $\mathcal{O}_{\rm opt}$, although the choice of operators $\mathcal{O}_{\rm opt}$ can also be optimised over as an extension. With COLD, we have two methods to improve on the existing LCD protocol. As previously shown in Refs.~\cite{claeys_floquet-engineering_2019,prielinger_diabatic_2020}, there is a possibility to add more terms to the LCD making it less local, \@e.g.~through long-range interactions. In the spin chain case, we could take the aforementioned sum over $\sigma^y$ terms to be the \textit{first-order} anzatz for the LCD, where higher-order ans\"{a}tze might contain sets of operators $\{\mathcal{O}_{\rm LCD}\}$ with terms odd in $\sigma^y$ such as $\sigma^y_j\sigma^{(z,x)}_{j+1}$. 
This procedure generally improves the performance of CD protocols at the expense of adding more complex operators which may be experimentally impractical depending on the scenario. Alternately, with COLD and the introduction of additional local control fields to the Hamiltonian, we can improve the performance of LCD at a fixed complexity of the CD term. If these extra control fields vanish at the beginning and at the end of the protocol, they do not affect the adiabatically connected states, but they can significantly modify the adiabatic landscape at intermediate couplings to enhance the performance of the given order of LCD. In order to optimize for the additional fields, in this work, we use methods of quantum optimal control. However, we note that such optimization can be done locally by requiring that the next order variational correction to the CD terms is small along the optimal landscape. This local optimization may be advantageous in that it does not require knowledge of the wave function and in this sense is not limited by small system sizes. Furthermore, we compare COLD to the use of CRAB, as discussed in Sec.~\ref{sec:OptCont}. An advantage of COLD is that it can be combined with many advanced optimal control procedures, owing to the standard way additional control fields are introduced to the Hamiltonian. In this work we find the combination of COLD and CRAB particularly useful and we will refer to this as COLD-CRAB. \section{Two Spin Quantum Annealing}\label{sec:TwoSpin} To showcase and explore the use of COLD in a relatively simple setting we will consider a two spin quantum annealing problem with bare Hamiltonian \begin{equation}\label{eq:Hanneal} H_0(t) = -2J \sigma_1^z \sigma_2^z - h ( \sigma_{1}^z + \sigma_{2}^z) + 2h \lambda(t) (\sigma_{1}^x + \sigma_{2}^x), \end{equation} where $\sigma^a_j$, $a \in \{x,y,z\}$ are the Pauli matrices applied to spins indexed by $j$. For the scaling function $\lambda(t)$ we pick the form \begin{equation}\label{eq:Scalingfunc} \lambda(t) = \sin^2\left(\frac{\pi}{2} \sin^2 \left( \frac{\pi t}{2 \tau} \right) \right) \end{equation} such that $\lambda(0) = 0$ and $\lambda(\tau) = 1$. We consider the case of $J/h=0.5$, which means the spins start in the initial state of $\ket{\uparrow \uparrow}$ and finish in a superposition of all of the symmetric states. As discussed in Ref.~\citep{sels_minimizing_2017}, since $H_0$ has a standard Ising spin chain form, the first-order LCD terms are given by the following ansatz for the adiabatic gauge potential: \begin{equation}\label{eq:LCD1st} \mathcal{A}(\lambda) = \alpha \sum_{i=1}^2 \sigma_i^y, \end{equation} with the sum being over the full length of the $N$ spin chain. Minimising Eq.~\eqref{eq:actionCD} for this $\mathcal{A}_{\lambda}$ with respect to the coefficient $\alpha$ gives \begin{equation} \alpha = - \frac{h^2}{4(h\lambda)^2 + h^2 + 4J^2}. \end{equation} To further improve on the first-order LCD we can implement COLD, as we will discuss shortly, or we can introduce higher-order terms to the ansatz for $\mathcal{A}_{\lambda}$. This second method serves as a good benchmark against COLD, since it offers an improvement to first-order LCD in the same way as COLD does, but requires more complicated interactions between the two spins increasing the implementation overhead. 
The second-order LCD can be found by taking an ansatz for the adiabatic gauge potential: \begin{equation} \begin{aligned} \mathcal{A}^{(2)}(\lambda) =& \alpha \sum_{j} \sigma_j^y + \gamma (\sigma_1^x \sigma_{2}^y + \sigma_1^y \sigma_{2}^x) \\ & + \zeta (\sigma_1^z \sigma_{2}^y + \sigma_1^y \sigma_{2}^z), \end{aligned} \end{equation} where to solve for $\alpha$, $\gamma$, and $\zeta$ we once again minimize the action given by Eq.~\eqref{eq:actionCD} and obtain three coupled equations which can be solved numerically (see Appendix \ref{app:derivation} for a detailed derivation). \begin{figure}[t] \includegraphics[width=0.98\linewidth]{TwoSpin.png} \caption{Optimisation of the annealing protocol for two spin Hamiltonian given by Eq.~\eqref{eq:Hanneal} for $h/J=2$. (a) Final fidelities of the annealing protocol with triangles (black) representing the case where no CD is applied and circles showing the case of first-order LCD (pink) as well as the combination of first- and second-order LCD (orange). (b) Final fidelities achieved when using the optimal control method BPO (red diamonds) and the new approach of COLD (blue circles), both with $N_k=1$.}\label{fig:TwoSpin} \end{figure} We now consider three distinct cases in this two spin quantum annealing example: no LCD, first-order LCD, and second-order LCD. The fidelity of the final state for each case over a wide range of driving times is shown in Fig.~\ref{fig:TwoSpin}(a), with an easily distinguishable advantage in the case of LCD. The final fidelity where no LCD is implemented decreases rapidly as the ramp times are made short, with the system getting stuck in its initial state. On the contrary, first-order LCD retains good final state fidelities into short times, as the driving Hamiltonian becomes that of only the LCD term. The second-order LCD then gives unit fidelity, in agreement with previous observations \cite{claeys_floquet-engineering_2019}, as for a two spin Hamiltonian the highest order corrections are that including two spin terms. We now add an optimisable term, as described in Sec.~\ref{sec:OptCont}, so that the new Hamiltonian reads: \begin{equation}\label{eq:HannealOpt} H_\beta(t) = H_0(t) + \sum_{k=1}^{N_k} \beta^k \sin (\pi k t / \tau) \sum_i \sigma_i^z, \end{equation} \noindent with $N_k$ the number of optimisation coefficients, and $\beta^k$ the coefficient of the $k$th frequency of the control function. Note that we consider \begin{equation} \boldsymbol{\beta}(t) = \sum_{k=1}^{N_k} \beta^k \sin (\pi k t / \tau) = \sum_{k=1}^{N_k} \beta^k \sin (\pi k f(\lambda)), \end{equation} with \begin{equation}\label{eq:lambda} f(\lambda) = \frac{2}{\pi}\arcsin\left(\sqrt{\frac{2}{\pi} \arcsin\left(\sqrt{\lambda}\right)}\right). \end{equation} The form of the additional control field fulfils the requirement that the boundary conditions are $H(0) = H_0(0)$ and $H(\tau) = H_0(\tau)$. Numerically optimising the $\beta^k$ for the best final state fidelity \emph{without} adding LCD terms results in the BPO method introduced in Sec.~\ref{sec:OptCont}. We show the results of BPO in Fig.~\ref{fig:TwoSpin}(b), where it is observed that BPO gives better results than the case of no LCD in Fig.~\ref{fig:TwoSpin}(a). However, for short times the BPO approach still results in the system getting stuck in the initial state. \begin{figure*}[t] \includegraphics[width=0.9\linewidth]{IsingUnconstrained.png} \caption{Optimisation of the annealing protocol for the Ising model given by Eq.~\eqref{eq:h0_ising} for $N=5$ spins. 
(a) A comparison of final state fidelities for different driving times using the optimal control technique BPO (blue diamonds), first-order LCD (black dash-dot line) and COLD (red circles). The same is shown in (b) for CRAB (green diamonds) and COLD-CRAB (purple circles). CD enhanced techniques (COLD and COLD-CRAB) introduced in this work show a clear convergence to good fidelities at short driving times. All results are for the best (lowest) fidelity achieved over $500$ optimisations.}\label{fig:IsingUnconstrained} \end{figure*} Finally we present and compare the results of the new method, COLD. In this case the Hamiltonian before adding LCD terms is given by Eq.~\eqref{eq:HannealOpt} and the coefficient of the first-order LCD is \begin{equation} \alpha = -\frac{h(h+\boldsymbol{\beta}) + h\frac{\lambda}{\dot{\lambda}} \dot{\boldsymbol{\beta}}}{4(h\lambda)^2 + (h+\boldsymbol{\beta})^2 + 4J^2}. \end{equation} Note that the optimisation of the additional control field also feeds into the coefficient of the adiabatic gauge potential during the dynamics as discussed in Sec.~\ref{sec:OLCD}. The results of the COLD approach for this two spin annealing protocol are shown in Fig.~\ref{fig:TwoSpin}(b), where we observe an improvement of the final state fidelity beyond what is possible with first-order LCD alone in Fig.~\ref{fig:TwoSpin}(a). In this example, LCD alone reaches a final state fidelity of $1-F=3\%$ at short times, however COLD improves this error in the final state to $1-F=0.005\%$. This is due to the extended family of dynamical Hamiltonians in Eq.~\eqref{eq:expandedH} owing to the addition of an optimisable control field. This result shows that COLD can provide an advantageous alternative to the addition of higher-order LCD which may be experimentally impractical. We have found that COLD performs better than LCD of the same order or BPO when the system dynamics are calculated numerically. This does not, however, imply anything about the performance of COLD in more complex scenarios, like in the case of an unknown target ground state. In that case the fidelity is a poor optimisation metric. There is, however, a way to come to the same conclusions as those presented in Fig.~\ref{fig:TwoSpin} without the need to compute the dynamics exactly. We can do this by first using a guess for the COLD protocol to find the approximate adiabatic gauge potential and then minimising the integral of the norm of the second-order correction to the adiabatic gauge potential along the path. Note, that the ground state can be in turn obtained through first order COLD, so there is no need to diagonalize the Hamiltonian. This integral should be small if COLD has implemented a dynamical Hamiltonian that makes the first-order adiabatic gauge potential the leading term. It is effectively a measure of the error of COLD and can be given by \begin{equation} \begin{aligned} \mathcal{I}_1 = \int_0^\tau dt^\prime \Big[& \bra{\psi_g(t^\prime)} \Gamma^2(t^\prime) \ket{\psi_g(t^\prime)} \\ &- (\bra{\psi_g(t^\prime)} \Gamma(t^\prime) \ket{\psi_g(t^\prime)})^2\Big]^{1/2}, \end{aligned} \end{equation} \noindent with $\ket{\psi_g(t)}$ the instantaneous ground state along the path and \begin{equation} \Gamma(t) = \gamma(t) \left( \sigma_1^y \sigma_2^x + \sigma_1^x \sigma_2^y \right), \end{equation} \noindent one of the second-order correction terms. In order to confirm this is the case, we compare the different paths -- COLD and LCD only -- in the two-spin example in order to determine if $\mathcal{I}_1$ is small for COLD. 
If $\mathcal{I}_1$ is small when compared to the same measure for lower-order LCD as $t\rightarrow 0$, then we know that COLD is enforcing a better dynamical Hamiltonian. In the case of the two spin annealing protocol we find that as $t\rightarrow0$, $\mathcal{I}_1\rightarrow 0.04$ for COLD and $\mathcal{I}_1\rightarrow 0.2$ for LCD, showing that COLD is minimising the second-order correction along the path. A simpler integral \begin{equation} \mathcal{I}_2 = \int_0^\tau dt^\prime |\gamma(t^\prime)|, \end{equation} also reflects this correction in this two spin example, with $\mathcal{I}_2 \rightarrow 0.03$ for COLD and $\mathcal{I}_2 \rightarrow 0.1$ for LCD as $t\rightarrow0$. This could be useful in more complex scenarios as $\mathcal{I}_2$ will be relatively simple to calculate. We also observe the reduction of the corresponding integrals of the $(\sigma^y_1\sigma^z_2 + \sigma^z_1\sigma^y_2)$ term of the second-order LCD. By minimising these integrals, it could be possible to extend the COLD approach to more complex scenarios, including where the exact calculation of the dynamics is not possible. \section{1D Ising Model} \label{sec:1dIsing} In this section we apply COLD for state preparation on a 1D Ising spin chain in the presence of a transverse and longitudinal field. We consider an annealing protocol where the aim is to prepare the ground state across the Ising phase transition. The annealing Hamiltonian is given by \begin{align}\label{eq:h0_ising} \begin{split} H_{0}(t) &= - J \sum_{j}^{N-1} \sigma^z_j\sigma_{j+1}^z + Z_0\sum_j^N \sigma_j^z \\ &+ \lambda(t) X_f \sum_j^N \sigma_j^x, \end{split} \end{align} where $Z_0$ is a small offset parameter to break ground state degeneracies and $X_f$ is the final x-field strength. Note, the breaking of the ground state degeneracies is not a requirement but allows for easier consideration of the adiabatic path. As before, $\lambda(t)$ is a scaling function that has the boundary conditions $\lambda(0) = 0$ and $\lambda(\tau) = 1$, with $\tau$ the driving time. This means we start from the ground state of all spins up and drive across the quantum phase transition to the ground state which is a superposition of all basis states. We again take the scaling function to be given by Eq.~\eqref{eq:Scalingfunc}. In this example, we use $X_f = 10J$ and $Z_0 = 0.02J$. For the Hamiltonian of Eq.~\eqref{eq:h0_ising}, the LCD to first and second order is well known, as the wave functions are entirely real. We take the first-order adiabatic gauge potential to be given by \begin{equation}\label{eq:LCD1} \mathcal{A}(\lambda) = \alpha \sum_{j}^N\sigma_j^y, \end{equation} \noindent where the coefficients for the general periodic spin chain of Eq.~\eqref{eq:h0_ising} are \begin{align} \label{eq:alphas} \alpha(\lambda) = \frac{1}{2} \frac{Z_0 X_f}{Z_0^2 + \lambda^2 X_f^2 + 2J^2}. \end{align} Note, that the quoted $\alpha$ above is technically for a periodic or infinite size system, with $J^2 \rightarrow J^2(1-1/N)$ for a finite system. However, we find that the inclusion of the factor for the finite system sizes we consider only changes the final achieved converged fidelities at short times by $\sim 10^{-6}\%$. 
The second-order adiabatic gauge potential is of the form \begin{equation} \begin{aligned} \mathcal{A}^{(2)}(\lambda) =& \alpha \sum_{j} \sigma_j^y + \gamma \sum_{j} (\sigma_j^x \sigma_{j+1}^y + \sigma_j^y \sigma_{j+1}^x) \\ & + \zeta \sum_{j} (\sigma_j^z \sigma_{j+1}^y + \sigma_j^y \sigma_{j+1}^z) , \end{aligned} \label{eq:SecondLCD} \end{equation} with the coefficients $\alpha$, $\gamma$ and $\zeta$ again obtained by minimising the action given by Eq.~\eqref{eq:actionCD} and solving the coupled set of equations numerically (see Appendix \ref{app:derivation} for a detailed derivation). \begin{figure}[t] \includegraphics[width=0.98\linewidth]{MaxAmp.png} \caption{Maximum amplitudes of CD terms in the Ising model annealing protocol for (a) first- and second-order LCD only with no additional optimal control fields and (b) the COLD approach optimised for the best final state fidelity implementing first-order LCD as shown in Fig.~\ref{fig:IsingUnconstrained}(a). The plot shows the maximum amplitude at each driving time for the first-order $\alpha$ (red circles) and the two second-order terms $\gamma$ (blue diamonds) and $\zeta$ (green triangles) as given in Eq.~\eqref{eq:SecondLCD} (although the second-order LCD is not actually implemented in COLD). An inversion in the strength of the second-order and first-order LCD terms for (a) no additional optimal control fields and (b) the addition of optimal control fields shows that COLD implements a dynamical Hamiltonian which is favourable for the applied order of LCD (first-order in this case).}\label{fig:MaxAmp} \end{figure} In this example, optimal control is implemented by introducing an additional driving field so that the dynamical Hamiltonian is given by \begin{align}\label{eq:h_beta} H_{\beta}(t, \boldsymbol{\beta}) = H_0(t) + \sum_j \boldsymbol{\beta}(t)\sigma^z_j, \end{align} with $\boldsymbol{\beta}$ being the terms to optimise over. We take our additional terms to again respect the boundary conditions $\boldsymbol{\beta}(0) = 0$ and $\boldsymbol{\beta}(\tau) = 0$, meaning a natural choice is \begin{align}\label{eq:optimsable_1} \boldsymbol{\beta}(t) = \sum_k^{N_k} \beta^k \sin(\omega_k t / \tau) = \sum_k^{N_k} \beta^k \sin(\omega_k f(\lambda)), \end{align} with $\omega_k = 2\pi k$ the $k$th principal frequency and $f(\lambda)$ given by Eq.~\eqref{eq:lambda}. To implement the CRAB algorithm discussed in Sec.~\ref{sec:OptCont}, we will use $k \rightarrow k(1+r)$ instead with $r$ drawn from a uniform random distribution $r \in [-0.5,0.5]$. As before, we choose the first-order adiabatic gauge potential given by Eq.~\eqref{eq:LCD1} and find that the coefficients are \begin{align} \alpha(\lambda,\boldsymbol{\beta}) = \frac{X_f}{2} \frac{(Z_0 + \boldsymbol{\beta}) - \lambda\dot{\boldsymbol{\beta}}/\dot{\lambda}}{(Z_0 + \boldsymbol{\beta})^2 + \lambda^2 X_f^2 + 2 J^2}. \end{align} Note, with the introduction of the additional control fields $\boldsymbol{\beta}$ it is possible for $\alpha$ to be non-zero at the start or end of the protocol, as $\dot{\boldsymbol{\beta}}$ is not fixed to be zero. However, this can be enforced by a suitable choice of the additional control field; here we instead consider replacing $\alpha \rightarrow S(\lambda)\alpha$, where $S(\lambda)$ is a scaling function that tends to zero as $\lambda \rightarrow 0$ and $\lambda \rightarrow 1$. We find that the scaling function only has a minimal effect on the final fidelities observed.
This issue could also be resolved by a suitable choice of $\boldsymbol{\beta}$, with our example drive being an extreme case as $\dot{\boldsymbol{\beta}}$ is maximal at the boundaries of the protocol. The suitable choice of the form of $\boldsymbol{\beta}$ in a given example is a problem we will leave for future work, with our focus being on the introduction of the COLD protocol. \begin{figure}[t] \includegraphics[width=0.98\linewidth]{ScalingN.png} \caption{Scaling of fidelities in the annealing protocol for the Ising model with (a) system size $N$ and (b) optimisation parameters $N_k$ at driving time $\tau=10^{-2}J^{-1}$. Plots show a comparison between BPO (blue diamonds) and COLD (red circles). In (a) we see that the COLD fidelity decreases as a function of $N$ but remains quite high when compared to BPO while (b) shows the non-existent improvement for both BPO and COLD with an increasing number of parameters in the $N=5$ spin case. Once again, plotted best fidelities are obtained across 500 optimisations.}\label{fig:ScalingN} \end{figure} We first compare the final state fidelity when using COLD versus BPO as shown in Fig.~\ref{fig:IsingUnconstrained}(a) for different driving times in a system of $N=5$ spins and a single $N_k=1$ optimisation coefficient. As expected, at long timescales the two methods agree as we approach the adiabatic limit of the dynamics. However, at shorter time scales the difference in behaviour is dramatic. We observe that the BPO approach fails in the case of very fast driving as the state gets stuck in the initial state but the COLD approach converges to $1-F \sim 10^{-3}$. Note, this is not achieved by the introduction of first-order LCD terms alone, as this will result in $F=0.0440$ for $\tau=10^{-3}J^{-1}$. COLD is instead achieving this by making the LCD term dominant for the dynamical Hamiltonian through the additional control fields. \begin{figure*}[t] \includegraphics[width=0.9\textwidth]{IsingcConstrained.png} \caption{Optimisation of the \emph{constrained} annealing protocol for the Ising model for $N=5$ spins with a maximum amplitude limit on each term in the Hamiltonian of Eq.~\eqref{eq:h0_ising} of $10J$. (a) Shows a comparison between BPO (blue diamonds) and COLD (red circles) which both give much lower fidelities than in the unconstrained case in Fig.~\ref{fig:IsingUnconstrained}, although COLD persists in giving better results. In (b) the comparison is between CRAB (green diamonds) and COLD-CRAB (purple circles) which show orders of magnitude better fidelities than those in (a), with COLD-CRAB eking out higher fidelities at short driving times. The plotted best results are obtained from 200 optimisations for each method.}\label{fig:IsingcConstrained} \end{figure*} To confirm this, we plot the maximum amplitudes of both the first- and second-order adiabatic gauge potentials in Fig.~\ref{fig:MaxAmp}, where Fig.~\ref{fig:MaxAmp}(a) shows the case of no optimisation and Fig.~\ref{fig:MaxAmp}(b) the case of applying COLD. We can see that without COLD the second-order $(\sigma^x_j\sigma^y_{j+1} + \sigma^y_j\sigma^x_{j+1})$ corrections to the LCD are far larger than the first-order, resulting in the small final state fidelities when only first-order LCD is implemented. In the case of COLD, this relationship reverses and the first-order LCD terms dominate the dynamics. 
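To make the role of the additional control field concrete, the sketch below (again Python/NumPy and purely illustrative) evaluates $\boldsymbol{\beta}(t)$ of Eq.~\eqref{eq:optimsable_1} and the resulting COLD coefficient $\alpha(\lambda,\boldsymbol{\beta})$ quoted above. The single amplitude $\beta^1$ is a hypothetical value standing in for the optimiser output, the ramp is the same assumed stand-in for Eq.~\eqref{eq:Scalingfunc} as before, and $\dot{\lambda}$ and $\dot{\boldsymbol{\beta}}$ are obtained by finite differences.

\begin{verbatim}
import numpy as np

# Illustrative evaluation of the control field beta(t) (Eq. optimsable_1) and
# the COLD coefficient alpha(lambda, beta). beta_k is a hypothetical amplitude
# standing in for the value found by the Powell/CRAB optimisation.
N, J, Z0, Xf = 5, 1.0, 0.02, 10.0
tau, Nk = 1e-2, 1
beta_k = np.array([0.5])                 # hypothetical optimised amplitude(s)
ks_cold = np.arange(1, Nk + 1)           # integer mode indices for COLD
rng = np.random.default_rng(1)
ks_crab = ks_cold * (1 + rng.uniform(-0.5, 0.5, Nk))  # CRAB-randomised indices

def lam(t):
    """Assumed smooth ramp standing in for Eq. (Scalingfunc)."""
    return np.sin(0.5 * np.pi * np.sin(0.5 * np.pi * t / tau)**2)**2

def ddt(f, t, eps=1e-9):
    """Central finite difference used for lambda-dot and beta-dot."""
    return (f(t + eps) - f(t - eps)) / (2 * eps)

def beta(t, ks=ks_cold):
    """Additional control field of Eq. (optimsable_1), omega_k = 2 pi k."""
    return sum(bk * np.sin(2 * np.pi * k * t / tau) for bk, k in zip(beta_k, ks))

def alpha_cold(t, ks=ks_cold):
    """COLD first-order coefficient alpha(lambda, beta) quoted in the text."""
    l, dl = lam(t), ddt(lam, t)
    b, db = beta(t, ks), ddt(lambda s: beta(s, ks), t)
    return 0.5 * Xf * ((Z0 + b) - l * db / dl) / ((Z0 + b)**2 + (l * Xf)**2 + 2 * J**2)

for t in np.linspace(0.1 * tau, 0.9 * tau, 5):  # avoid lambda-dot = 0 endpoints
    print(f"t/tau = {t/tau:.2f}  beta = {beta(t):+.3f}  alpha = {alpha_cold(t):+.4e}")
\end{verbatim}

For COLD-CRAB one evaluates the same expressions with the randomised mode indices \texttt{ks\_crab}.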
The results in Fig.~\ref{fig:MaxAmp} are further evidence that the implementation of COLD through the minimisation of the second-order corrections discussed in Sec.~\ref{sec:TwoSpin} could be fruitful in more complex and/or larger systems, where the dynamics cannot be calculated exactly. We find that the results of BPO and COLD at short driving times are stable against increasing system size, as shown for $\tau = 10^{-2}J^{-1}$ in Fig.~\ref{fig:ScalingN}(a), with only a small decrease in final state fidelity for larger systems with COLD. Similarly, increasing the number of optimisation coefficients $N_k$ results in little improvement in the values obtained at short times for this example, as shown in Fig.~\ref{fig:ScalingN}(b). It is possible that in more complex systems, more optimisation coefficients will be needed to gain a larger advantage. We also note that by increasing the number of coefficients, we are increasing the complexity of the cost function landscape to be explored by the minimisation procedure, hence leading to slightly worse final fidelities. This can mean that alternative approaches to the Powell minimisation used so far, \@e.g.~that of CRAB, could be better suited to probing the cost function for high $N_k$. We also note that this lack of improvement in the results is likely the consequence of the form of the control field given by Eq.~\eqref{eq:optimsable_1} rather than due to a failure of the optimiser in the face of a complex parameter space. We find that the parameter space is relatively smooth in the case of $N_k = 1,2,3$ and a better solution for this form of control field simply does not exist. We now consider the combined method of COLD-CRAB for this annealing example as shown in Fig.~\ref{fig:IsingUnconstrained}(b). We point out that with our application of CRAB in this scenario we are not enforcing $\boldsymbol{\beta}$ to be zero at the start and end of the dynamics, allowing for there to be a tuning of the $z$-field offset. This is consistent between CRAB and COLD-CRAB and therefore does not influence our comparison of the two. First, it is important to note that CRAB alone results in an overall speedup of the dynamics for a high final state fidelity $1-F\sim 10^{-3}$ at long time-scales. However, CRAB still suffers from getting stuck in the initial state at fast driving times and the final state fidelity again tends to zero. This is not the case for COLD-CRAB, which converges to large final state fidelities $1-F \sim 10^{-3}$ at short driving times $\tau \leq 10^{-1}J^{-1}$. Note that the final state fidelities reached by COLD and COLD-CRAB differ only marginally at longer times, but at short times COLD-CRAB performs considerably better. Further improvement could be gained by combining COLD with more advanced versions of CRAB or other optimal control methods. As shown in Fig.~\ref{fig:MaxAmp}, the amplitude of the driving required to achieve the fidelities discussed so far scales with the driving time. Practical scenarios will necessarily place limits both on achievable driving times and the maximum amplitude of any term that is being driven. However, the scaling of the drivings shown does not mean that everything diverges in the limit of $\tau \rightarrow 0$.
To see this we can first write the Schr\"odinger equation for COLD as \begin{equation} i \hbar d_t \ket{\psi}=\left(H_\beta+\dot\lambda \mathcal{A}_\lambda\right) \ket{\psi}. \end{equation} Dividing through by $\dot{\lambda}$ gives \begin{equation} i \hbar d_\lambda \ket{\psi}=\left(\frac{H_\beta}{\dot \lambda}+\mathcal{A}_\lambda\right) \ket{\psi}. \end{equation} In the limit $\tau \rightarrow 0$ we have $\dot\lambda\to \infty$, so the Hamiltonian term vanishes, or in other words, we turn off the Hamiltonian. We then only drive the system in the $\tau \to 0$ limit with the COLD or LCD driving term: \begin{equation} \label{eq:OnlyA} i \hbar d_\lambda \ket{\psi}=\mathcal{A}_\lambda \ket{\psi}. \end{equation} In this limit then $\lambda$ plays the role of time, and this could then be implemented in a practical scenario in finite time as it corresponds to some manipulation of the couplings in the system. This renormalised time cannot then be infinitesimally short if the couplings are bounded but we have shown that the protocol does not diverge as $\tau \rightarrow 0$. In the case of a spin chain, evolving under Eq.~\eqref{eq:OnlyA} is effectively to first order in LCD implementing independent single spin rotations along the chain, and COLD can be easily applied. If it is not possible to switch off the Hamiltonian as discussed above then as an alternative we can implement COLD with experimental constraints accounted for directly in the optimal control minimisation. We consider an extreme example of constraints to show that even in this scenario COLD can provide an advantage and corresponding speed-up. In the constrained case the annealing protocol remains that of Hamiltonian~\eqref{eq:h0_ising} but we choose to introduce a bound of $X_f$ on the maximum amplitude of all drivings. This makes it so that no optimal control or LCD term can go beyond the original amplitude of the $x$-field drive. We show the final state fidelities achieved for the constrained example in Fig.~\ref{fig:IsingcConstrained}. As can be seen in Fig.~\ref{fig:IsingcConstrained}(a), COLD provides a substantial improvement beyond what is achievable with BPO. BPO manages $F < 0.5$ for $\tau < 1J^{-1}$, but COLD can reach final state fidelities $F\sim 0.9$ for $\tau < 1J^{-1}$. The real improvement, however, comes with the application of CRAB and COLD-CRAB. CRAB already improves the fidelities substantially, and would allow for a speed-up in the annealing protocol, but with COLD-CRAB the final state fidelities are even better, with $F\sim 0.99$ achievable when approaching $\tau\sim 0.1J^{-1}$. Signs of the onset of this convergence for COLD-CRAB are seen in Fig.~\ref{fig:IsingcConstrained}(b) before the maximum amplitude required becomes too large and the short-time results tend towards zero fidelity, with the state again stuck in its initial configuration. With this example and the discussion on implementation via turning off the Hamiltonian, we have shown that COLD is capable of delivering improvements beyond other schemes even for practical problems with strict and rather extreme constraints. \section{Transport in a Synthetic Lattice}\label{sec:lattice} \begin{figure*}[t] \includegraphics[width=0.98\textwidth]{synthetic.png} \caption{Optimisation of state transfer in a synthetic lattice.
In (a) we compare the fidelities obtained via the bare ARP protocol (pink dashed line) and first-order LCD previously implemented in Ref.~\cite{meier_counterdiabatic_2020} (purple dash-dot line) to BPO (blue diamonds) and the COLD method (red circles). (c) Maximum amplitude of the tunneling term at each driving time for LCD (green diamonds) as given by Eq.~\eqref{eq:tunneling} as well as COLD (red triangles) which includes additional control parameters as shown in Eq.~\eqref{eq:tunneling_opt} and BPO (blue triangles) which omits the modifications due to CD but retains the control terms $\boldsymbol{\beta}$. In both (a) and (c) we simulate $N = 7$ lattice sites and use $N_k = 1$ parameter for optimisation of BPO and COLD. (b) Scaling of fidelities with increasing number of lattice sites (where $N_k = 1$) for both COLD (red circles) and BPO (blue diamonds) noting that the latter performs very poorly for $N > 9$. (d) does the same for the number of parameters while keeping $N=7$, with the trend indicating that increasing $N_k$ does not lead to better fidelities in either the BPO or COLD case. Note that both (b) and (d) are simulated for driving time $\tau = 0.5 J^{-1}$ and the best fidelities are obtained across 500 optimisations.}\label{fig:Synthetic} \end{figure*} The efficient transfer of states between opposite ends of a lattice could have future applications in the settings of quantum computation and simulation due to its promise of efficient transport of information \cite{lang2017}. This objective is often tackled in the setting of ultracold atoms in optical lattices. While the problem can be tuned to be a single-particle system and the analytical solutions of the corresponding instantaneous Schr\"odinger equation are known \cite{Hatsugai1993,Hugel2014} even for a finite system \cite{Duncan2018exact}, efficient evolution for state transfer is not straight-forward. This is due to the fact that the majority of the states are delocalised across the lattice, meaning that the $\ket{\psi}\bra{\psi}$ terms of the CD Hamiltonian of Eq.~\eqref{eq:counterdiabatic} are global in reach. It is normal to consider this system in the tight-binding limit where the implementation of global terms is not straightforward. Such terms can be generated via the interactions of the atoms with cavity modes \cite{landig2016,keller2017} or from dipolar interactions \cite{baranov2002,menotti2008,trefzger2011}. However, it would be ambitious to expect this control to be general enough to implement the CD Hamiltonian of the exact solutions. This is one of the reasons that LCD has been pursued in this setting. Recently, LCD has been successfully applied to improve an adiabatic rapid passage (ARP) protocol for population transfer across a synthetic lattice \cite{meier_counterdiabatic_2020}. In this realisation, population transfer was achieved in a synthetic tight-binding lattice of laser coupled atomic momentum states. We will consider the same problem as in Ref.~\cite{meier_counterdiabatic_2020} but with the improvement that can be gained by COLD. This system is described by the Hamiltonian \begin{align}\label{eq:Hlattice} \begin{split} H_0(t) &= - \sum_n J_n(t)(c_n^{\dag}c_{n+1} + H.c.) \\ &+ \sum_n V_n(t) c_n^{\dag}c_n, \end{split} \end{align} where $J_n(t)$ is the time-dependent tunnelling that describes the nearest-neighbour coupling, $V_n(t)$ is the on-site energy offset with respect to neighbouring sites and $c_n^{\dag}$($c_{n}$) is the creation(annihilation) operator on a given synthetic lattice site. 
In the ARP protocol, the population gets moved from one end of the lattice to the other by linearly ramping the lattice from a positive tilt to a negative tilt via \begin{align}\label{eq:t_and_v} J_n(t) = J_0(1.1 - \lambda) = J_0\Big(0.1 + \frac{t}{\tau}\Big), \\ V_n(t) = n V_0 2 (\lambda - 1/2) = nV_0\Big(1 - \frac{2t}{\tau}\Big), \end{align} where $V_0 = 4J_0$ is the initial site energy slope and $J_0$ is the characteristic tunnelling scale of the lattice. The scaling function in this case is given by \begin{equation} \lambda(t) = 1-\frac{t}{\tau}. \end{equation} In order to implement LCD as shown in Ref.~\cite{meier_counterdiabatic_2020}, the first-order LCD can be accounted for by taking \begin{align}\label{eq:tunneling} J_n(t) \rightarrow J_{n, \mathrm{CD}}(t) e^{-i\phi_{n, \mathrm{CD}}(t)}, \end{align} \noindent where \begin{align}\label{eq:t_phi_cd} J_{n, \mathrm{CD}}(t) = \sqrt{J_n(t)^2 + (\alpha_n(t)/\tau)^2}, \\ \phi_{n, \mathrm{CD}}(t) = \arctan\left(-\frac{J_n(t)\tau}{\alpha_n(t)}\right), \end{align} \noindent and $\alpha_n(t)$ are the CD coefficients, which can be found by solving a set of linear equations \begin{align} \begin{split} &-3(J_n J_{n+1})\alpha_{n+1} + (J_{n-1}^2 + 4J^2_n + J_{n+1}^2)\alpha_n \\ &- 3(J_n J_{n-1})\alpha_{n-1} + (V_{n+1} - V_n)^2 \alpha_n \\ &= -\partial_{\lambda}J_n (V_{n+1} - V_{n}). \end{split} \end{align} In order to implement COLD we include additional terms to the tunnelling of the lattice \begin{align}\label{eq:tunneling_opt} J_n(t) \rightarrow J_n(t, \boldsymbol{\beta}) = J_n(t) + \boldsymbol{\beta}(t), \end{align} which can then be incorporated into the forms of both $J_{n, CD}(t)$ and $\phi_{n, CD}(t)$. We again want the additional control terms to go to zero around the problem boundaries and a natural choice is the same as in the Ising spin chain example in Eq.~\eqref{eq:optimsable_1}. The parameters $\boldsymbol{\beta}$ are optimised as before by minimising with respect to the fidelity of the final state, where the population has been fully transferred to the opposite lattice site. We first consider a system size of $N=7$ sites, which was successfully experimentally probed in Ref.~\cite{meier_counterdiabatic_2020}, where final state fidelities of $0.75$ were achieved for $\tau = 1$ms with a final tunnelling strength of $J/\hbar = 1/2\pi$~kHz (equivalent to $\tau \sim 1 J^{-1}$ in our units). We initially confirm the breakdown of ARP in this setting for fast times, and the success of the LCD protocol at short times, as shown in Fig.~\ref{fig:Synthetic}(a) and found in Ref.~\cite{meier_counterdiabatic_2020}. Implementing BPO on its own manages to enhance the achievable fidelities at intermediate times of $\tau > 0.03 J^{-1}$. However, eventually, as observed in all scenarios in this work, BPO becomes stuck in the initial state at fast times, and the fidelity goes to zero. Implementing the newly introduced COLD protocol achieves an order of magnitude improvement in the fidelity over LCD. This is also plotted in Fig.~\ref{fig:Synthetic}(a) alongside previous results of ARP and first-order LCD. One concern could be that COLD is achieving this improvement by simply pumping power into the tunnelling term, but as we can see in Fig.~\ref{fig:Synthetic}(c) the maximum amplitude of the tunnelling term tracks that of LCD. A key issue for experiments is the maximum amplitude achievable by a driving term, and with this result we can stipulate that COLD is likely to be feasible in the same regimes as LCD in this synthetic lattice system.
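A minimal sketch of how these expressions can be assembled numerically is given below (Python/NumPy, illustrative only). It builds the ramps of Eq.~\eqref{eq:t_and_v}, solves the quoted linear system for the coefficients $\alpha_n$, and forms the modified tunnelling amplitudes and phases of Eqs.~\eqref{eq:tunneling} and \eqref{eq:t_phi_cd}. The treatment of the chain ends (couplings outside the chain set to zero), the use of \texttt{arctan2} to keep the phase well defined when $\alpha_n$ passes through zero, and the constant value used for the COLD control field $\boldsymbol{\beta}$ are choices made for the illustration rather than details taken from the text.

\begin{verbatim}
import numpy as np

# Illustrative first-order LCD construction for the synthetic-lattice ARP ramp.
# Boundary handling and the fixed beta value below are assumptions of this sketch.
N, J0, V0, tau = 7, 1.0, 4.0, 0.5

def lam(t):
    return 1.0 - t / tau                      # scaling function of this protocol

def couplings(t, beta=0.0):
    """Tunnellings J_n and site energies V_n (Eq. t_and_v), with the optional
    COLD modification J_n -> J_n + beta(t) of Eq. (tunneling_opt)."""
    l = lam(t)
    J = np.full(N - 1, J0 * (1.1 - l) + beta)
    V = np.arange(N) * V0 * 2.0 * (l - 0.5)
    return J, V

def lcd_alpha(J, V, dJ_dlam=-J0):
    """Solve the quoted set of linear equations for the CD coefficients alpha_n."""
    nb = N - 1
    M, rhs = np.zeros((nb, nb)), np.zeros(nb)
    for n in range(nb):
        Jm = J[n - 1] if n > 0 else 0.0       # no bond beyond the chain ends
        Jp = J[n + 1] if n < nb - 1 else 0.0
        dV = V[n + 1] - V[n]
        M[n, n] = Jm**2 + 4.0 * J[n]**2 + Jp**2 + dV**2
        if n > 0:
            M[n, n - 1] = -3.0 * J[n] * Jm
        if n < nb - 1:
            M[n, n + 1] = -3.0 * J[n] * Jp
        rhs[n] = -dJ_dlam * dV
    return np.linalg.solve(M, rhs)

def cd_drive(t, beta=0.0):
    """Modified amplitude and phase of Eqs. (tunneling) and (t_phi_cd)."""
    J, V = couplings(t, beta)
    alpha = lcd_alpha(J, V)
    J_cd = np.sqrt(J**2 + (alpha / tau)**2)
    phi_cd = np.arctan2(-J * tau, alpha)      # branch stays well defined at alpha = 0
    return J_cd, phi_cd

J_cd, phi_cd = cd_drive(0.25 * tau)           # a quarter of the way through the ramp
print("max |J_n,CD| =", J_cd.max(), "  max |phi_n,CD| =", np.abs(phi_cd).max())
\end{verbatim}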
There is a single outlier at intermediate times, indicated by the single point peaking in maximum amplitude in Fig.~\ref{fig:Synthetic}(c); this is the exception to the rule, where the optimisation has found a marginally higher fidelity (see the offset point in Fig.~\ref{fig:Synthetic}(a)) by pumping in power. A large concern for state transfer techniques is the robustness of a protocol with respect to an increasing system size. We show the best achievable fidelities with increasing system size for both BPO and COLD in Fig.~\ref{fig:Synthetic}(b). While both protocols show a decreasing fidelity with system size as is to be expected, once again COLD does not suffer from getting stuck in the initial state. This is shown by the BPO fidelities dropping towards zero for large systems in Fig.~\ref{fig:Synthetic}(b), and the mechanism is the same as for the short driving times in Fig.~\ref{fig:Synthetic}(a). Another concern could be that BPO will beat COLD if enough parameters are allowed for the optimisation, i.e. if we increase $N_k$ enough. We observed no evidence of this for the Ising model example and we again do not observe this in this synthetic lattice example, as is shown in Fig.~\ref{fig:Synthetic}(d). Small improvements are made in the fidelities achieved with BPO and COLD for larger $N_k$ but this is not substantial. \section{Discussion and outlook} \label{sec:conclusions} We have introduced a new hybrid approach combining quantum optimal control and shortcuts to adiabaticity: COLD. Inspired by the successes of LCD, where diabatic transitions are suppressed and locality conditions can be met, COLD improves on its methodology by combining it with quantum optimal control. The natural way to enhance the performance of LCD is by introducing higher order CD terms, but these are often non-local and difficult to engineer in experiments. COLD circumvents this by allowing for additional control fields that extend the family of dynamical Hamiltonians which can be explored. In this way, our method may find the best possible path where the effect of lower-order LCD is most relevant and higher order corrections are suppressed. COLD has a clear potential in efficiently speeding up adiabatic evolution in various settings. We demonstrate this numerically via several example protocols which indicate improvements beyond a classical optimisation approach (BPO) as well as LCD of different orders. Our work shows that COLD reduces the strength of higher order LCD corrections, and that it performs well for increasing system sizes. We have shown that COLD can be implemented in the limit of fast driving by a `switching off' of the original dynamical Hamiltonian. For scenarios where removing the Hamiltonian is not possible, we have shown that an alternative way to implement COLD is to use a bounded optimisation where amplitudes are restricted. We find that both the COLD and COLD-CRAB protocols perform extremely well in this setting. COLD will be most beneficial when the LCD is only realisable to a certain order but the higher order corrections are large. This means the diabatic transitions are not being sufficiently suppressed by the choice of LCD and COLD can be used to find the dynamical Hamiltonian for which the required order of LCD term dominates. Note, that this goes the other way too, with COLD not providing substantial improvements when the chosen lower order LCD is small across the path. This can be thought of as being the case in two limits.
First is the adiabatic limit, for which any CD correction is small and COLD will tend towards the adiabatic result. Second, the low-order LCD terms can be small compared to the driving as the exact CD would be correcting transitions due to interactions at longer ranges. In this scenario, the order of LCD being implemented with COLD needs to be increased, so that the CD term is accounting for the longer range terms. While we have only utilised COLD with first-order LCD in this work, it can be applied with higher-order LCD terms. We found this to be the case when we tried to apply COLD to the protocols for \@e.g.~Greenberger-Horne-Zeilinger state generation in arrays of Rydberg atoms in Ref.~\cite{omran_generation_2019}. COLD with first- or second-order LCD could not improve upon the results of BPO or CRAB. This was due to both the first- and second-order corrections being smaller than the dynamical terms of the original Hamiltonian by two orders of magnitude. It is perhaps not surprising when we consider that the Hamiltonian of this protocol includes long-range couplings. In this case, higher-order corrections would need to be implemented with COLD, and finding methods for executing these now highly non-local terms may be a future path of research. COLD may also be applied to more complex systems where exact dynamics are not possible, \@e.g. due to an excessively large Hilbert space. This may be achieved by variationally minimising the integrals of the coefficients for the higher order corrections to the LCD. This was briefly sketched out in the example of two spin annealing, and we have shown that COLD functions by minimising the strength of the higher order LCD corrections along a given path. A further option is to combine COLD with one of a large variety of numerical optimal control methods, as we have done for the example of CRAB. We have shown a substantial improvement for state preparation in the Ising model that can be obtained from the COLD-CRAB combination - particularly in the constrained case. Both fusions of COLD with advanced optimal control methods as well as the minimisation of higher order LCD term amplitudes for complex systems are two potential extensions which could prove fruitful with further study. \begin{acknowledgements} Work at the University of Strathclyde was supported by the EPSRC Quantum Technologies Hub for Quantum Computing and Simulation (EP/T001062/1), and the European Union’s Horizon 2020 research and innovation program under grant agreement No. 817482 PASQuanS. A.P. acknowledges support from NSF under Grant DMR-2103658 and by the AFOSR under Grants No. FA9550-16-1-0334 and FA9550-21-1-0342. \end{acknowledgements}
\section{Introduction} \label{sec_introduction} The Epoch of Reionisation (EoR) marks the second major phase transition in the Universe. With the emergence of the first galaxies, ultraviolet (UV) radiation gradually ionises the neutral hydrogen (\HI) in the intergalactic medium (IGM) until the Universe is reionised by $z\simeq5.3$ \citep{Keating2020, Zhu2021, Bosman2021}. However, as only the brighter galaxies during the EoR are observed to date, key questions detailing the reionisation process remain outstanding: Did the few bright and more massive or the numerous faint and low-mass galaxies contribute more to reionisation? Feedback processes, such as heating by supernovae (SN) and photoionisation, suppress star formation in low-mass galaxies \citep{Gnedin2014, ocvirk2016, Ocvirk2018, Hutter2021a}, and reduce the contribution of very low-mass galaxies to reionisation. An even more critical quantity that regulates the ionising radiation (with energies $E>13.6$~eV) escaping from galaxies and driving the reionisation of the IGM is the fraction of ionising photons $f_\mathrm{esc}$ that escape from galaxies into the IGM \citep{Hutter2021b}. While the presence of \HI in the IGM during the EoR impedes direct measurements of $f_\mathrm{esc}$, different theoretical models and simulations have investigated the physical processes determining and dependencies of $f_\mathrm{esc}$ \citep[e.g.][]{Ferrara2013, Wise2014, Kimm2014, Kimm2019}. Cosmological radiation hydrodynamical simulations suggest that $f_\mathrm{esc}$ decreases towards deeper gravitational potential \citep[e.g.][]{Wise2014, Kimm2014, Kimm2017, Kimm2019, Xu2016, anderson2017, lewis2020}. High-resolution simulations of the ISM indicate that $f_\mathrm{esc}$ is dominated by the escape from star-forming clouds. The ionising radiation of massive stars and their explosions as SN ionise, heat and destroy the star-forming clouds clearing the way for the ionising radiation to escape \citep{Howard2018, Kim2019, He2020, Kimm2021}. The complex dependency of $f_\mathrm{esc}$ on the underlying gravitational potential, the gas distribution and stellar populations in the ISM leaves marks not only in the radiation emitted by galaxies but also in the ionisation topology, the time and spatial distribution of the ionised regions around galaxies. Current and forthcoming observations of galaxies and the ionisation state of the IGM have the potential to constrain galactic properties, such as $f_\mathrm{esc}$, and the reionisation process. On the one hand, detecting the 21cm signal from \HI in the IGM with forthcoming large radio interferometers (e.g. Square Kilometre Array) will measure the ionisation topology, which provides constraints on the dependence of $f_\mathrm{esc}$ on galaxy mass \citep{kim2013b, Seiler2019, Hutter2020}. 
On the other hand, being extremely sensitive to the attenuation by \HI in the IGM, the observable Lyman-$\alpha$ (Ly$\alpha$) radiation at $1216$\AA~ from high-redshift galaxies has gained popularity in probing reionisation for the following reason: a $z\gtrsim6$ galaxy only exhibits detectable Ly$\alpha$ emission when (i) it is surrounded by an ionised region that is large enough to allow a sufficient fraction of its emerging Ly$\alpha$ line to traverse the IGM, or (ii) it is gas-rich enough (corresponding to a high \HI column density) such that the red part of the Ly$\alpha$ line emerging from the galaxy is redshifted out of absorption, or (iii) it has strong outflows that also redshift the emerging Ly$\alpha$ line out of absorption, or it is a combination of all three. The first criterion suggests that more massive galaxies able to retain more gas might be the most likely to show observable Ly$\alpha$ emission during the EoR: their higher rates of forming stars emitting ionising photons lead to an increased production of Ly$\alpha$ radiation in the ISM and the growth of large ionised regions around them. The latter is accelerated by their ionised regions merging earlier with those of the surrounding lower mass objects attracted by their deeper gravitational potentials \citep{Chardin2012, Furlanetto2016, Chen2019}. As reionisation progresses and the ionised regions grow, increasingly lower mass galaxies become visible as Ly$\alpha$ emitters (LAEs), which leads not only to a higher fraction of galaxies showing Ly$\alpha$ emission but also to a reduced clustering of LAEs \citep{mcquinn2007, jensen2013, Hutter2015, Sobacchi2015}. This picture is increasingly supported by observations of $z>6$ LAEs. Not only does the fraction of Lyman Break Galaxies (LBGs) showing Ly$\alpha$ emission rise from $z\simeq8$ to $z\simeq6$ \citep{Schenker2014, Pentericci2014, Pentericci2018, Fuller2020}, but also the majority of Ly$\alpha$ emission at $z\gtrsim6.5$ is detected in galaxies with a bright UV continuum \citep{Cuby2003, Willott2013, Oesch2015, Sobral2015, Zitrin2015, Roberts-Borsani2016, Matthee2017, Songaila2018, Hashimoto2018, Taylor2020, Endsley2022, Endsley_Stark2022}. Moreover, the close proximity of UV-bright LAEs suggests that LAEs are located in over-dense regions \citep{Vanzella2011, Castellano2016, Castellano2018, Jung2020, Tilvi2020, Hu2021, Endsley_Stark2022} that exhibit the first and largest ionised regions during the EoR. This hypothesis is also in line with the observed double-peaked Ly$\alpha$ profiles in $z\gtrsim6.5$ galaxies \citep{Songaila2018, Hu2016, Matthee2018, Meyer2021}, indicating that the ionised regions surrounding them are so large that even the part bluewards of the Ly$\alpha$ resonance redshifts out of resonance. Current theoretical predictions of the large-scale LAE distribution confirm this picture, suggesting that the LAEs we see during the EoR are more massive galaxies naturally located in over-dense regions \citep[c.f.][]{Dayal2011, jensen2013, Hutter2014, Mesinger2015, Weinberger2018, Qin2022}. Yet, all these LAE models effectively assume a constant $f_\mathrm{esc}$ value across the entire galaxy population at a given redshift. This assumption remains highly uncertain as $f_\mathrm{esc}$ is very sensitive to the ISM and the circumgalactic medium (CGM) of galaxies, which in turn depend on the underlying gravitational potential of a galaxy.
However, capturing this dependence is essential, since $f_\mathrm{esc}$ defines the critical processes that shape the Ly$\alpha$ luminosities observed from galaxies. An $f_\mathrm{esc}$ varying with galactic properties and the underlying gravitational potential might alter the galaxy population seen as LAEs for the following reasons: Firstly, within a galaxy, most Ly$\alpha$ radiation is produced by recombining hydrogen atoms and scales with the number of \HI ionising photons absorbed within the galaxy ($\propto 1-f_\mathrm{esc}$). Secondly, a fraction of these Ly$\alpha$ photons undergoes only a few scattering events when escaping through the same low-density tunnels that facilitate the escape of \HI ionising photons. In contrast, the other fraction that traverses optically thick clouds upon its escape is scattered and absorbed by hydrogen and dust, respectively \citep[see, e.g.][]{Verhamme2015, Dijkstra2016, Kimm2019, Kakiichi2021}. These different escape mechanisms result not only in $f_\mathrm{esc}$ posing a lower limit to the fraction of Ly$\alpha$ photons escaping from a galaxy but also in $f_\mathrm{esc}$ determining the Ly$\alpha$ line profile that emerges from a galaxy. Detailed observations of low-redshift galaxies have increasingly supported the $f_\mathrm{esc}$-sensitivity of these Ly$\alpha$ properties \citep{Verhamme2017, Jaskot2019, Gazagnes2020}. Thirdly, $f_\mathrm{esc}$ shapes the IGM ionisation topology by determining the number of ionising photons available to ionise the IGM surrounding a galaxy. While a higher $f_\mathrm{esc}$ value enlarges the ionised region surrounding a galaxy and enhances the transmission of Ly$\alpha$ radiation through the IGM \citep{Dayal2011, Hutter2014}, the corresponding Ly$\alpha$ line emerging from a galaxy will be more peaked around the Ly$\alpha$ resonance and raise the absorption by \HI in the IGM. Given this complex $f_\mathrm{esc}$-dependency of the observed Ly$\alpha$ luminosity, it remains unclear whether different dependencies of $f_\mathrm{esc}$ with galaxy properties (e.g. increasing or decreasing with rising halo mass) would (i) identify the same galaxies as LAEs (exceeding a threshold Ly$\alpha$ luminosity) and/or (ii) lead to a different large-scale spatial distribution of the LAEs' Ly$\alpha$ luminosities. In other words, which of these $f_\mathrm{esc}$-dependent Ly$\alpha$ processes dominates the observed Ly$\alpha$ luminosities? For example, is the $f_\mathrm{esc}$-dependency of the intrinsic Ly$\alpha$ luminosity dominant, such that we obtain a weaker clustering of LAEs when the $f_\mathrm{esc}$ value decreases with rising halo mass? Or do they compensate each other once we reproduce the observed Ly$\alpha$ luminosity functions (Ly$\alpha$ LFs)? To address these questions, we use our {\sc astraeus} framework that models galaxy evolution and reionisation self-consistently \citep{Hutter2021a, Ucci2022}, and simulate different reionisation scenarios that gauge the physically plausible range of $f_\mathrm{esc}$ dependencies, i.e. $f_\mathrm{esc}$ decreasing and increasing with rising halo mass. Moreover, we parameterise results from numerical Ly$\alpha$ radiative transfer (RT) simulations of clumpy media \citep{Gronke2017} and build an analytic model for the fraction of Ly$\alpha$ photons escaping and the corresponding Ly$\alpha$ line profile emerging from high-redshift galaxies.
Importantly, we explore three different Ly$\alpha$ line profile models, including (i) a Gaussian profile around the Ly$\alpha$ resonance where the Ly$\alpha$ escape fraction is directly related to the dust attenuation of the UV continuum \citep[used in previous LAE models outlined in][]{Dayal2011, Hutter2014}, (ii) a Ly$\alpha$ line profile emerging from a shell of outflowing dusty gas clumps, which we model by using the different Ly$\alpha$ escape regimes identified in \citet{Gronke2017}, and (iii) a Ly$\alpha$ line profile emerging from a shell of outflowing gas clumps with a fraction $f_\mathrm{esc}$ of the solid angle interspersed by gas-free tunnels. The latter two give rise to various combinations of a central peak around the Ly$\alpha$ resonance (Ly$\alpha$ photons hardly scatter in an optically thin medium) and two peaks in the red and blue wings (Ly$\alpha$ photons are scattered in an optically thick medium). By deriving the observed Ly$\alpha$ luminosities of all simulated galaxies for all combinations of reionisation scenarios and Ly$\alpha$ line models, we address the following questions: Which $f_\mathrm{esc}$-dependent Ly$\alpha$ process, i.e. the intrinsic production, escape, or transmission of Ly$\alpha$ radiation through the IGM, dominates the observed Ly$\alpha$ luminosity? Can the observed Ly$\alpha$ luminosities of galaxies inform us on their emerging Ly$\alpha$ line profile? Given that the ionisation topology depends sensitively on the assumed dependency of $f_\mathrm{esc}$ with halo mass, are the same or different galaxies identified as LAEs and do they differ in the spatial distribution of their Ly$\alpha$ luminosities? This paper is organised as follows. In Section \ref{sec_model} we briefly describe the {\sc astraeus} model, its implementation of dust and the different reionisation simulations. In Section \ref{sec_modelling_LAEs} we introduce the different Ly$\alpha$ line profile models and their corresponding attenuation by dust. We then (Section \ref{sec_number_and_properties_LAEs}) discuss how the Ly$\alpha$ line profiles depend on halo mass in our different reionisation scenarios, how free model parameters, such as the ISM clumpiness or size of the dusty gas clumps, need to be adjusted to fit the observed Ly$\alpha$ LFs, and how the galaxy properties determining the observed Ly$\alpha$ luminosities depend on the halo mass of a galaxy. In Section \ref{sec_spatial_distribution_LAEs} we identify the location of LAEs in the large-scale density and ionisation structure and assess whether the spatial distribution of LAEs differs for different $f_\mathrm{esc}$-dependencies on halo mass/ionisation topologies. Finally, we briefly discuss which Lyman Break Galaxies are preferentially identified as LAEs (Section \ref{sec_LAE_LBG_relation}) and conclude in Section \ref{sec_conclusions}. In this paper we assume a $\Lambda$CDM Universe with cosmological parameter values of $\Omega_\Lambda=0.69$, $\Omega_m=0.31$, $\Omega_b=0.048$, $H_0=100h=67.8$~km~s$^{-1}$Mpc$^{-1}$, $n_s=0.96$ and $\sigma_8=0.83$, and a Salpeter initial mass function \citep[IMF;][]{salpeter1955} between $0.1\msun$ and $100\msun$. \section{The model and simulations} \label{sec_model} In this paper, we use the {\sc astraeus} framework. This framework couples a semi-analytic galaxy evolution model (an enhanced version of {\sc delphi}; \citealt{Dayal2014}) with a semi-numerical reionisation scheme ({\sc cifog}; \citealt{Hutter2018a}) and runs the resulting model on the outputs of a dark matter (DM) only N-body simulation.
In this Section, we provide a brief description of the physical processes implemented in {\sc astraeus} \citep[for more details, see][]{Hutter2021a} and introduce the different reionisation simulations. \subsection{N-body simulation} \label{subsec_Nbody_simulation} As part of the Multidark simulation project, the underlying DM N-body simulation ({\sc very small multidark planck; vsmdpl}) has been run with the {\sc gadget-2 tree+pm} code \citep{springel2005}. In a box with a side length of $160h^{-1}$Mpc, it follows the trajectories of $3840^3$ DM particles. Each DM particle has a mass of $6\times10^6 h^{-1}\msun$. For a total of $150$ snapshots ranging from $z=25$ to $z=0$, the phase-space {\sc rockstar} halo finder \citep{behroozi2013_rs} has been used to identify all halos and subhalos down to $20$ particles or a minimum halo mass of $1.24 \times 10^8h^{-1}\msun$. To obtain the local horizontal merger trees (sorted on a redshift-by-redshift basis within a tree) for galaxies at $z=4.5$ that {\sc astraeus} requires as input, we have used the pipeline-internal {\sc cutnresort} scheme to cut and resort the vertical merger trees (sorted on a tree-branch by tree-branch basis within a tree) generated by {\sc consistent trees} \citep{behroozi2013_trees}. For the first $74$ snapshots that range from $z=25$ to $z=4.5$, we have generated the DM density fields by mapping the DM particles onto $2048^3$ grids and re-sampling these to $512^3$ grids used as input for the {\sc astraeus} pipeline. \subsection{Galaxy evolution} \label{subsec_galaxy_evolution} {\sc astraeus} tracks key processes of early galaxy formation and reionisation. At each time step and for each galaxy, it tracks the amount of gas that is accreted, the gas and stellar mass merging, star formation and associated feedback from SNII and metal enrichment, as well as the large-scale reionisation process and its associated feedback on the gas content of early galaxies. \subsubsection{Gas and stars} \label{subsubsec_gas_and_stars} In the beginning, when a galaxy starts forming stars in a halo with mass $M_h$, it has a gas mass of $M_g^i(z)=f_g (\Omega_b/\Omega_m) M_h(z)$, with $f_g$ being the gas fraction not evaporated by reionisation, i.e. $f_g=1$ and $f_g<1$ if the galaxy forms in a neutral or an ionised region, respectively. In subsequent time steps a galaxy gains gas from its progenitors ($M_g^\mathrm{mer}(z)$) and smooth accretion ($M_g^\mathrm{acc}$), while its total gas mass never exceeds the limit given by reionisation feedback: $M_g^i = \min\left( M_g^\mathrm{mer}(z) + M_g^\mathrm{acc}(z), f_g (\Omega_b/\Omega_m) M_h \right)$ with $M_g^\mathrm{acc}=M_h(z) - \sum_{p=1}^\mathrm{N_p} M_{h,p}(z + \Delta z)$ and $M_g^\mathrm{mer}(z)=\sum_{p=1}^\mathrm{N_p} M_{h,p} (z + \Delta z)$ where $N_p$ is the galaxy's number of progenitors and $M_{h,p}$ the halo mass of each progenitor. At each time step, a fraction of the merged and accreted (initial) gas mass is transformed into stellar mass, $M_\star^\mathrm{new}(z)=(f_\star^\mathrm{eff}/\Delta t) M_g^i(z)$.\footnote{We note that this definition has been altered compared to the first version of {\sc astraeus} in \citet{Hutter2021a}.} Here $f_\star^\mathrm{eff}$ represents the fraction of gas that forms stars over a time span $\Delta t$ and is limited by the minimum amount of stars that need to form to eject all gas from the galaxy, $f_\star^\mathrm{ej}$, and an upper limit, $f_\star$.
$f_\star^\mathrm{eff}$ depends on the gravitational potential: more massive galaxies form stars at the constant rate $f_\star$, while low-mass galaxies form stars at the limited rate $f_\star^\mathrm{ej}$ due to SN and radiative feedback. While we account for radiative feedback from reionisation by modifying the initial gas mass reservoir with the factor $f_g$, $f_\star^\mathrm{eff}$ incorporates the suppression of star formation in low-mass halos as gas is heated and ejected by SNII explosions. Our model incorporates a delayed SN feedback scheme, i.e. at each time step the effective star formation efficiency accounts for the SNII energy released from stars formed in the current and previous time steps, following the mass-dependent stellar lifetimes \citep{padovani1993}. In contrast to \citet{Hutter2021a}, we have updated our model and do not assume stars to form in bursts to calculate the number of SNII exploding within a time step, but instead assume $M_\star^\mathrm{new}(z)$ to form at a constant star formation rate over the entire time step (see Appendix \ref{app_delayed_non-bursty_SNscheme} for a detailed calculation). The star formation efficiency in the SN feedback-limited regime is given by $f_\star^\mathrm{ej}(z) = \frac{v_c^2}{v_c^2 + f_w E_{51} \nu_z} \left[ 1 - \frac{f_w E_{51} \sum_j \nu_j M_{\star,j}^\mathrm{new}(z_j)}{M_\mathrm{g}^i(z)~ v_c^2} \right]$, with $v_c$ being the rotational velocity of the halo, $E_{51}$ the energy released by a SNII, $f_w$ the fraction of SNII energy injected into the winds driving gas outflows, $M_{\star,j}^\mathrm{new}(z_j)$ the stellar mass formed during previous time steps $j$, and $\nu_j$ the fraction of stellar mass formed in previous time step $j$ that explodes in the current time step given the assumed IMF. {\sc astraeus} incorporates multiple models for radiative feedback from reionisation, ranging from a weak and time-delayed ({\it Weak Heating}) to a strong instantaneous feedback ({\it Jeans mass}). In this work, we use the intermediate and time-delayed {\it Photoionisation} model, where the characteristic mass defining the gas fraction not evaporated by reionisation grows on a dynamical timescale to the respective Jeans mass \citep[for a detailed description see][]{Hutter2021a}. We list the {\sc astraeus} model parameters and their assumed values in Table \ref{table_model_params}. $f_\star$ and $f_w$ have been adjusted to reproduce the observed UV LFs, stellar mass functions, global star formation rate density, and global stellar mass density at $z=10-5$. \begin{table} \centering \caption{{\sc astraeus} model parameters and chosen values in this work.} \label{table_model_params} \begin{tabular*}{\columnwidth}{ccc} \hline\hline Parameter & Value or reference & Description\\ \hline $f_\star$ & $0.025$ & Maximum star-formation efficiency \\ $f_w$ & $0.2$ & SN coupling efficiency \\ - & Photoionisation & Radiative feedback model \\ IMF & \citet{salpeter1955} & For stellar evolution, enrichment, SED \\ SED & \textsc{Starburst99} & Ionising SED model \\ \hline \end{tabular*} \end{table} \subsubsection{Metals and dust} \label{subsubsec_metals_and_dust} The current {\sc astraeus} model also incorporates the metal enrichment by stellar winds, SNII and SNIa explosions \citep[for a detailed description see][]{Ucci2022}. At each time step, we assume that the smoothly accreted gas has the average metallicity of the gas in the IGM, $Z_\mathrm{IGM}$. Metals are produced through stellar winds, SNII and SNIa explosions.
The amount of newly forming metals depends on the number of massive stars exploding as SN in the current time step according to \citet{padovani1993}, \citet{yates2013} and \citet{maoz2012}. For the corresponding stellar metal yields, {\sc astraeus} uses the latest yield tables from \citet{Kobayashi2020b}. We assume that gas and metals are perfectly mixed. Thus, the metals ejected from the galaxy are proportional to the ejected gas mass and the metallicity of the gas in the galaxy. This ejected metal mass contributes to $Z_\mathrm{IGM}$. In this work, we have extended the {\sc astraeus} model \citep{Hutter2021a, Ucci2022} to follow the formation, growth, destruction, astration and ejection of dust in each galaxy \citep[c.f.][for details]{Dayal2022}. We note that we consider dust to be part of our metal reservoir (i.e. $M_\mathrm{dust}\leq M_\mathrm{m}$). At each time step, {\sc astraeus} computes the evolution of the dust mass $M_\mathrm{dust}$ in a galaxy by solving the following differential equation \begin{eqnarray} \frac{\mathrm{d}M_\mathrm{dust}}{\mathrm{d}t} &=& \dot{M}_\mathrm{dust}^\mathrm{prod} + \dot{M}_\mathrm{dust}^\mathrm{grow} - \dot{M}_\mathrm{dust}^\mathrm{dest} - \dot{M}_\mathrm{dust}^\mathrm{astr} - \dot{M}_\mathrm{dust}^\mathrm{ej}. \label{eq_dust} \end{eqnarray} The first term on the right hand side (RHS) of Eqn. \ref{eq_dust} denotes the production of dust in SNII and AGB stars through condensation of metals in stellar ejecta \begin{eqnarray} \dot{M}_\mathrm{dust}^\mathrm{prod} &=& y_\mathrm{SNII} \gamma_\mathrm{SN} + \dot{M}_\mathrm{dust}^\mathrm{AGB}, \end{eqnarray} with $y_\mathrm{SNII}=0.45\msun$ being the dust mass formed per SNII, \begin{eqnarray} \gamma_\mathrm{SN}(t) &=& \int_{8\msun}^{40\msun} \mathrm{SFR}(t - \tau_m) \phi(m) \mathrm{d}m \end{eqnarray} the number of SNII events, \begin{eqnarray} \dot{M}_\mathrm{dust}^\mathrm{AGB}(t) &=& \int_{0.85\msun}^{50\msun} y_\mathrm{AGB}(m) \mathrm{SFR}(t - \tau_m) \phi(m) \mathrm{d}m \end{eqnarray} the contribution from AGB stars and $y_\mathrm{AGB}$ the dust yields from AGB stars. In agreement with \citet{Ucci2022}, we adopt the latest yield tables from \citet{Kobayashi2020b} for $y_\mathrm{AGB}$. The second term on the RHS of Eqn. \ref{eq_dust} describes the dust grain growth through the accretion of heavy elements in dense molecular clouds in the ISM, \begin{eqnarray} \dot{M}_\mathrm{dust}^\mathrm{grow} &=& \left( Z' - \frac{M_\mathrm{dust}}{M_\mathrm{g}^i} \right) f_\mathrm{cold~gas} \frac{M_\mathrm{dust}}{\tau_\mathrm{gg,0} Z_\odot} \end{eqnarray} where $Z'$ is the metallicity after accretion and star formation, $M_\mathrm{dust}$ is the dust mass, $f_\mathrm{cold~gas}$ the fraction of cold and molecular gas, and $\tau_\mathrm{gg}=\tau_\mathrm{gg,0} / Z$ the accretion timescale adopted from \citet{Asano2013} \citep[see also][]{Triani2020}. We assume $f_\mathrm{cold~gas}=0.5$ and $\tau_\mathrm{gg,0}=30$~Myr. The third term in Eqn. \ref{eq_dust} describes the destruction of dust by SN blastwaves, for which we adopt the analytic description outlined in \citet{mckee1989} \begin{eqnarray} \dot{M}_\mathrm{dust}^\mathrm{dest} &=& \left( 1 - f_\mathrm{cold~gas} \right) \frac{M_\mathrm{dust}}{M_\mathrm{g}^i}\ \gamma_\mathrm{SN} \epsilon\ M_\mathrm{SN, bw}, \end{eqnarray} with $\epsilon$ being the efficiency of dust destruction in a SN-shocked ISM and $M_\mathrm{SN, bw}$ the mass accelerated to $100$~km~s$^{-1}$ by the SN blast wave.
In line with \citet{mckee1989} and \citet{lisenfeld_ferrara1998} we adopt $\epsilon=0.03$ and $M_\mathrm{SN, bw}=6.8\times10^3\msun$. Finally, Eqn. \ref{eq_dust} also accounts for the destruction of dust by astration as new stars form from the metal-enriched gas, \begin{eqnarray} \dot{M}_\mathrm{dust}^\mathrm{astr} &=& Z^\mathrm{i}\ \frac{M_\star^\mathrm{new}}{\Delta t}, \end{eqnarray} and the ejection of metals through winds powered by the energy injected by SN, \begin{eqnarray} \dot{M}_\mathrm{dust}^\mathrm{ej} &=& Z'\ \frac{M_\mathrm{g}^\mathrm{ej}}{\Delta t}. \end{eqnarray} The parameter values quoted ($y_\mathrm{SNII}$, $\tau_\mathrm{gg,0}$, $\epsilon$, $M_\mathrm{SN,bw}$) reasonably reproduce the observed UV LFs \citep[UV LFs data points include][]{Atek2015, Atek2018, Bouwens2015, Bouwens2016, Bouwens2017, Bowler2014, Bowler2015, Calvi2016, Castellano2010a, Castellano2010b, Finkelstein2015, Ishigaki2018, Livermore2017, McLeod2015, McLeod2016, McLure2009, McLure2013, Oesch2013, Oesch2018, Ouchi2009, Schenker2013, Schmidt2014, Tilvi2013, vanderBurg2010, Willott2013, Zheng2012} when the UV is attenuated by dust as follows: From the dust mass, $M_d$, we obtain the total optical depth to UV continuum photons as \citep[see e.g.][]{Dayal2011} \begin{eqnarray} \tau_\mathrm{UV,c} &=& \frac{3 \Sigma_d}{4as}, \end{eqnarray} with $\Sigma_d = M_d / (\pi r_d^2)$ being the dust surface mass density, $r_d$ the dust distribution radius, and $a=0.03~\mu$m and $s=2.25$~g~cm$^{-3}$ the radius and material density of graphite/carbonaceous grains \citep{todini-ferrara2001}. Since we assume that dust and gas are perfectly mixed, we equate the dust distribution radius, $r_d$, with the radius of the gas, $r_g=4.5 \lambda r_\mathrm{vir} \left[ (1+z)/6 \right]^{1.8}$. Here $\lambda$ is the spin parameter of the simulated halo, $r_\mathrm{vir}$ the virial radius, and the third factor accounts for the redshift evolution of the compactness of galaxies and ensures that the observed UV LFs at $z=5-10$ are well reproduced. For a slab-like geometry, the escape fraction of UV continuum photons of a galaxy is then given by \begin{eqnarray} f_\mathrm{esc}^\mathrm{c} &=& \frac{1-\exp(-\tau_\mathrm{UV,c})}{\tau_\mathrm{UV,c}}, \end{eqnarray} and its observed UV luminosity by \begin{eqnarray} L_\mathrm{c}^\mathrm{obs} &=& f_\mathrm{esc}^\mathrm{c} L_\mathrm{c}, \end{eqnarray} with the intrinsic UV luminosity, $L_\mathrm{c}$, being computed as outlined in Section 2.2.4 in \citet{Hutter2021a}. \subsection{Reionisation} \label{subsec_reionisation} At each time step {\sc astraeus} follows the time and spatial evolution of the ionised regions in the IGM. For this purpose, it derives the number of ionising photons produced in each galaxy, $\dot{Q}$, by convolving the galaxy's star formation rate history with the spectra of a metal-poor ($Z=0.05$Z$_\odot$) stellar population. Spectra have been obtained from the stellar population synthesis model {\sc starburst99} \citep{Leitherer1999}. Again we assume that stars form continuously over a time step. The number of ionising photons that contribute to the ionisation of the IGM is then given by \begin{eqnarray} \dot{N}_\mathrm{ion} &=& f_\mathrm{esc} \dot{Q}, \end{eqnarray} where $f_\mathrm{esc}$ is the fraction of ionising photons that escape from the galaxy into the IGM.
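As a brief illustration of the attenuation chain described above, the sketch below (Python, cgs units) evaluates $\tau_\mathrm{UV,c}$, $f_\mathrm{esc}^\mathrm{c}$ and the observed UV luminosity for a single galaxy, together with $\dot{N}_\mathrm{ion}$. The dust mass, dust radius, intrinsic UV luminosity and ionising emissivity are hypothetical inputs (in the full model they come from the {\sc astraeus} pipeline itself), and the $f_\mathrm{esc}$ value is the constant one adopted in one of the reionisation scenarios described in Section \ref{subsec_simulations}.

\begin{verbatim}
import numpy as np

# Illustrative evaluation of the dust attenuation of the UV continuum and of
# N_dot_ion = f_esc * Q_dot for a single (hypothetical) galaxy, in cgs units.
M_SUN_G = 1.989e33            # g
KPC_CM = 3.086e21             # cm
A_GRAIN = 0.03e-4             # grain radius a = 0.03 micron, in cm
S_GRAIN = 2.25                # graphite material density s, g cm^-3

def tau_uv(M_dust_msun, r_d_kpc):
    """Optical depth to UV continuum photons, tau_UV,c = 3 Sigma_d / (4 a s)."""
    sigma_d = (M_dust_msun * M_SUN_G) / (np.pi * (r_d_kpc * KPC_CM)**2)
    return 3.0 * sigma_d / (4.0 * A_GRAIN * S_GRAIN)

def fesc_uv(tau):
    """Slab escape fraction of UV continuum photons, (1 - exp(-tau)) / tau."""
    return (1.0 - np.exp(-tau)) / tau if tau > 1e-8 else 1.0

# hypothetical inputs; in ASTRAEUS these come from the galaxy evolution model
M_dust, r_d = 1.0e5, 0.5                 # dust mass [Msun], dust radius [kpc]
L_uv_intr, Q_dot = 1.0e28, 1.0e53        # erg s^-1 Hz^-1, ionising photons s^-1
f_esc = 0.16                             # constant escape fraction (MHCONST)

tau = tau_uv(M_dust, r_d)
L_uv_obs = fesc_uv(tau) * L_uv_intr      # observed UV luminosity
N_dot_ion = f_esc * Q_dot                # ionising photons reaching the IGM

print(f"tau_UV,c = {tau:.2f}  f_esc^c = {fesc_uv(tau):.3f}  "
      f"L_c^obs = {L_uv_obs:.2e}  Ndot_ion = {N_dot_ion:.2e}")
\end{verbatim}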
From the resulting ionising emissivity and gas density distributions {\sc astraeus} derives the spatial distribution of the ionised regions by comparing the cumulative number of ionising photons with the number of absorption events \citep[see {\sc cifog},][, for details]{Hutter2018a}. Within ionised regions, it also derives the photoionisation rate and residual \HI fraction in each grid cell. The ionisation and photoionisation fields obtained allow us then to determine on the fly whether the environment of a galaxy has been reionised and account for the corresponding radiative feedback by computing the gas mass the galaxy can hold on to ($f_g M_g^i$). \subsection{Simulations} \label{subsec_simulations} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{fesc_halomass_relations.png} \caption{The ionising escape fraction $f_\mathrm{esc}$ for the three models, decreasing (solid orange line), being constant (dash dotted magenta line) and increasing (dotted blue line) with halo mass $M_h$.} \label{fig_fesc} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{hist_ion_all_models.png} \caption{Ratio of the mass- and volume-averaged neutral hydrogen fraction (top panel) and volume averaged neutral hydrogen fraction (bottom panel) as a function of redshift. In each panel, we show results for our three $f_\mathrm{esc}$ models: decreasing (solid orange line), being constant (dash dotted magenta line) and increasing (dotted blue line) with halo mass $M_h$. In the lower panel, grey points indicate observational constraints from: GRB optical afterglow spectrum analyses \citep[light triangles;][]{Totani2006, Totani2014}, quasar sightlines \citep[Medium squares;][]{Fan2006}, Lyman-$\alpha$ LFs \citep[dark circles]{Konno2018}, \citep[dark squares;][]{Kashikawa2011}, \citep[dark diamonds][]{Ouchi2010}, \citep[dark pentagons][]{Ota2010} and \citep[dark triangles][]{Malhotra2004}, Lyman-$\alpha$ emitter clustering \citep[dark plus signs;][]{Ouchi2010} and the Lyman-$\alpha$ emitting galaxy fraction \citep[dark crosses;][]{Pentericci2011, Schenker2012, Ono2012, Treu2012, Caruana2012, Caruana2014, Pentericci2014}.} \label{fig_hist_ion} \end{figure} In the following we consider three different reionisation scenarios that explore the physically plausible space of the ionising escape fraction $f_\mathrm{esc}$ (c.f. Fig. \ref{fig_fesc}): \begin{enumerate} \item {\sc mhdec}: $f_\mathrm{esc}$ decreases with rising halo mass of a galaxy (red solid line) \begin{eqnarray} f_\mathrm{esc} &=& f_\mathrm{esc,low} \left( \frac{f_\mathrm{esc,high}}{f_\mathrm{esc,low}} \right)^{\frac{\log_{10} (M_h / M_{h,\mathrm{low}}) }{\log_{10} (M_{h,\mathrm{high}} / M_{h,\mathrm{low}} )}} \label{eq_fesc} \end{eqnarray} with $f_\mathrm{esc,low}=0.55$, $f_\mathrm{esc,high}=0.05$, $M_{h,\mathrm{low}}=2\times10^8h^{-1}\msun$ and $M_{h,\mathrm{high}}=10^{10}h^{-1}\msun$. \item {\sc mhconst}: $f_\mathrm{esc}=0.16$ for each galaxy (magenta dash-dotted line). \item {\sc mhinc}: $f_\mathrm{esc}$ increases with rising halo mass of a galaxy (blue dotted line) following Eqn. \ref{eq_fesc} with $f_\mathrm{esc,low}=0.08$, $f_\mathrm{esc,high}=0.4$, $M_{h,\mathrm{low}}=10^9h^{-1}\msun$ and $M_{h,\mathrm{high}}=10^{11}h^{-1}\msun$. \end{enumerate} These three $f_\mathrm{esc}$ prescriptions have been adjusted to reproduce the electron optical depth measured by Planck \citep{planck2018} and fit the observational constraints from LAEs, quasar absorption spectra and gamma ray bursts (as depicted in the lower panel of Fig. 
\ref{fig_hist_ion}). In addition, for {\sc mhinc} the maximum $f_\mathrm{esc}$ value of more massive galaxies is also limited by the observed Ly$\alpha$ LFs. Despite having very similar electron optical depths, these three $f_\mathrm{esc}$ prescriptions lead to different ionisation histories and topologies (see Fig. \ref{fig_hist_ion} and \ref{fig_XHImaps_with_LAEs}). As $f_\mathrm{esc}$ decreases with rising halo mass, reionisation is dominated by the low-mass galaxies ($M_h\lesssim10^{10}\msun$), leading to on average smaller ionised regions and lower photoionisation rates. Since these low-mass galaxies appear earlier, reionisation begins earlier (see solid red line in Fig. \ref{fig_hist_ion}); however, as shown in \citet{Hutter2021a} for the {\it Photoionisation} model their overall star formation rate decreases around $z\simeq7$, resulting in the Universe being reionised at a later time and exhibiting a higher average residual \HI fraction in ionised regions. In contrast, as $f_\mathrm{esc}$ increases with rising halo mass, more massive galaxies ($M_h\gtrsim10^{10}\msun$) drive reionisation. On average, ionised regions are larger and more clustered around more massive galaxies, and photoionisation rates within these ionised regions are higher. Reionisation begins later with the appearance of more massive galaxies and ends earlier as the abundance of these massive galaxies increases. \section{Modelling Ly$\alpha$ emitters} \label{sec_modelling_LAEs} In this Section, we introduce the different models for the emergent Ly$\alpha$ line profiles (Section \ref{subsec_Lya_line_models}) and fractions of Ly$\alpha$ radiation escaping from a galaxy (Section \ref{subsec_dust_attenuation}), describe the attenuation of Ly$\alpha$ radiation by \HI in the IGM, and the derivation of the observed Ly$\alpha$ luminosity of a galaxy (Section \ref{subsec_IGM_attenuation}). We summarise the combinations of emerging Ly$\alpha$ line profile and dust attenuation models investigated in this paper in Section \ref{subsubsec_emerging_Lya_profile_models}. \subsection{Emerging Ly$\alpha$ line profiles} \label{subsec_Lya_line_models} We investigate three Ly$\alpha$ line profiles $J(x)$: (1) a thermally Doppler-broadened Gaussian centred at the Ly$\alpha$ resonance; (2) a single, double or triple-peaked profile that depends on the clumpiness and \HI column density of the gas in a galaxy; (3) a single, double or triple-peaked profile that depends both on the ionising escape fraction $f_\mathrm{esc}$ and the clumpiness and \HI column density of the gas in a galaxy. While the first model represents a simple assumption used in previous works \citep[e.g.][]{Dayal2011, Hutter2014}, the latter two models are inspired by observations and detailed Ly$\alpha$ radiative transfer simulations \citep[e.g.][]{Dijkstra2016, Gronke2017}. The Ly$\alpha$ line emerging from a galaxy is given by the intrinsic Ly$\alpha$ luminosity, $L_\alpha^\mathrm{intr}= \frac{2}{3}Q(1-f_\mathrm{esc})~h\nu_\alpha$, the escape fraction Ly$\alpha$ photons from the galaxy, $f_\mathrm{esc}^\mathrm{Ly\alpha}$, and the line profile $J(x)$. \begin{eqnarray} L_\alpha^\mathrm{ISM}(\nu) &=& L_\alpha^\mathrm{intr} f_\mathrm{esc}^\mathrm{Ly\alpha} J(x) \end{eqnarray} In the remainder of this Section, we detail our different models for the Ly$\alpha$ line profiles and escape fractions. 
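For concreteness, the bookkeeping of the above relation can be illustrated with a minimal Python sketch; it is purely illustrative (not part of the {\sc astraeus} pipeline), and all numerical values below are arbitrary placeholders rather than simulation outputs.
\begin{verbatim}
import numpy as np

# Illustrative sketch only; all values are arbitrary placeholders.
Q = 1e53                 # ionising photon rate [1/s]
f_esc = 0.1              # ionising escape fraction
f_esc_lya = 0.3          # Ly-alpha escape fraction
h_nu_alpha = 1.634e-11   # energy of a Ly-alpha photon [erg]

# Intrinsic Ly-alpha luminosity: (2/3) Q (1 - f_esc) h nu_alpha
L_intr = 2.0 / 3.0 * Q * (1.0 - f_esc) * h_nu_alpha

# Any normalised line profile J(x); here a unit Gaussian as a stand-in
x = np.linspace(-40.0, 40.0, 2001)
J = np.exp(-x**2) / np.sqrt(np.pi)

L_ISM = L_intr * f_esc_lya * J          # emergent spectral profile
print(L_intr, np.trapz(L_ISM, x))       # integral = f_esc_lya * L_intr
\end{verbatim}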
\subsubsection{Central Gaussian}
\label{subsubsec_Lya_line_model_gaussian}
This model assumes that the emission sites of Ly$\alpha$ radiation, the hydrogen atoms within a galaxy, move at velocities that reflect the galaxy's rotation. The corresponding Doppler-broadened Ly$\alpha$ line profile is then given by
\begin{eqnarray}
J_\mathrm{centre}(x) &=& \frac{1}{\sqrt{\pi}} \frac{\sigma_\mathrm{th}}{\sigma_r} \exp \left[ -x^2 \frac{\sigma_\mathrm{th}^2}{\sigma_r^2} \right], \label{eq_Jcentre}
\end{eqnarray}
where we have expressed the frequency deviation from the Ly$\alpha$ resonance $\nu_\alpha$ in terms of the thermal line broadening $\sigma_\mathrm{th}=(v_\mathrm{th}/c)\nu_\alpha$ with $v_\mathrm{th}=\sqrt{2k_B T/m_\mathrm{H}}$, yielding $x=\frac{\nu - \nu_\alpha}{\sigma_\mathrm{th}}$. $k_B$ is the Boltzmann constant, $m_\mathrm{H}$ the mass of a hydrogen atom and $T$ the temperature of the \HI gas. Since the $\sigma_\mathrm{th}$-dependence of $x$ cancels any dependency of $J_\mathrm{centre}(\nu)$ on $\sigma_\mathrm{th}$, the assumed gas temperature has no effect on the emerging Ly$\alpha$ line profile (we use $T=10^4$~K in Fig. \ref{fig_profiles}). $\sigma_r\simeq(v_r/c)\nu_\alpha$ describes the Doppler broadening of the line due to the rotation of the galaxy. The rotation velocity of the galaxy $v_r$ is closely linked to the halo rotational velocity $v_c= (3\pi G H_0)^{1/3} \Omega_m^{1/6} (1+z)^{1/2} M_h^{1/3}$, ranging between $v_r=v_c$ and $v_r=2v_c$ \citep{mo1998, cole2000}. We assume $v_r=1.5 v_c$.
\subsubsection{Single, double or triple-peaked in a clumpy/homogeneous medium}
\label{subsubsec_Lya_line_model_clumpy}
This model describes the Ly$\alpha$ line profile emerging from a clumpy medium. It implements the regimes and characteristic escape frequencies identified in \citet{Gronke2017}. We consider a slab with a thickness of $2B$ and a total optical depth of $2\tau_0$. The source is located at the slab's midplane and injects photons with a frequency $x_i$ close to the Ly$\alpha$ resonance $x=0$. If the slab medium is homogeneous, \citet{neufeld1990} derived the emergent Ly$\alpha$ profile to be
\begin{eqnarray}
J_\mathrm{slab}(T,\tau_0, x, x_i) &=& 4 \pi \frac{\sqrt{6}}{24} \frac{x^2}{a(T)~ \tau_0} \frac{1}{\cosh\left( \sqrt{\frac{\pi^4}{54}} \frac{|x^3 - x_i^3|}{a(T)~\tau_0}\right)}, \label{eq_Jslab}
\end{eqnarray}
with $a(T)=\frac{A_\alpha}{4\pi \sigma_\mathrm{th}(T)}$ and $\int_{-\infty}^{\infty} J_\mathrm{slab}(x,x_i)\ \mathrm{d}x = 1$. $A_\alpha$ is the Einstein coefficient for the spontaneous emission of Ly$\alpha$ photons. For $x_i=0$, the emerging Ly$\alpha$ spectrum peaks at $x_p= \pm \left(k a \tau_0/\sqrt{\pi} \right)^{1/3}$. However, in our model, we assume the gas in a galaxy not to be static but outflowing at a constant velocity $v$. This translates effectively into a Doppler shift of the injection frequency from $x_i=0$ to $x_i=\frac{v}{v_\mathrm{th}}$. In the following, we revisit the regimes for Ly$\alpha$ escape in a clumpy medium that have been identified in \citet{Gronke2017} and extend them to an ``outflowing'' slab (or injection frequency $x_i\neq0$). The clumpy medium is characterised by the total optical depth of the clumps and the average number of clumps each Ly$\alpha$ photon escaping the slab scatters with.
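As an illustration of Eqns. \ref{eq_Jcentre} and \ref{eq_Jslab}, the following short Python sketch evaluates both profiles numerically; it is not part of the {\sc astraeus} pipeline, and the chosen $a(T)\tau_0$ and $x_i$ values are arbitrary.
\begin{verbatim}
import numpy as np

def J_centre(x, sigma_ratio=1.0):
    # Central Gaussian profile, with sigma_ratio = sigma_th / sigma_r
    return sigma_ratio / np.sqrt(np.pi) * np.exp(-(x * sigma_ratio)**2)

def J_slab(x, a_tau0, x_i=0.0):
    # Neufeld-type slab profile; a_tau0 denotes the product a(T) * tau_0
    pref = 4.0 * np.pi * np.sqrt(6.0) / 24.0 * x**2 / a_tau0
    return pref / np.cosh(np.sqrt(np.pi**4 / 54.0)
                          * np.abs(x**3 - x_i**3) / a_tau0)

x = np.linspace(-60.0, 60.0, 6001)
print(np.trapz(J_centre(x), x))                     # ~= 1
print(np.trapz(J_slab(x, a_tau0=1e3, x_i=5.0), x))  # ~= 1
# For x_i = 0 the slab profile peaks at +/- (k a tau_0 / sqrt(pi))**(1/3)
\end{verbatim}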
For a slab consisting of clumps, each having an optical depth $\tau_\mathrm{0,cl}$ at the line centre, Ly$\alpha$ photons escaping the slab will encounter on average $f_c$ clumps and experience a total optical depth of $\tau_0=\frac{4}{3} f_c \tau_\mathrm{0,cl}$\footnote{The factor $4/3$ arises from the mean path length through a sphere.} at the line centre. The emerging Ly$\alpha$ line profile depends sensitively on the total and clump optical depths at line centre, $\tau_0$ and $\tau_\mathrm{0,cl}$, respectively, and on the number of clumps the Ly$\alpha$ photons scatter with. \citet{Gronke2017} identified the following regimes:
\begin{itemize}
\item {\it Free-streaming regime:} The clumpy medium is optically thin ($\tau_0<1$), and Ly$\alpha$ photons can stream through. The emerging line profile peaks around $x=0$.
\item {\it Porous regime:} The clumps are optically thick to Ly$\alpha$ photons ($\tau_\mathrm{0,cl}>1$), but only a fraction $1-\exp(-f_c)$ of the Ly$\alpha$ photons scatter with a clump. The emerging line profile is again peaked around $x=0$.
\item {\it Random walk regime:} The clumps are optically thick to Ly$\alpha$ ($\tau_\mathrm{0,cl}>1$), and each Ly$\alpha$ photon encounters $N_\mathrm{sct,rw}\propto f_c^2$ scattering events \citep{Hansen_Oh2006}. However, the number of scattering events is too low for the Ly$\alpha$ photons to scatter in frequency space far enough into the wings to escape through excursion. Hence, the emerging line profile also peaks around $x=0$.
\item {\it Homogeneous regime:} The clumps are optically thin ($\tau_\mathrm{0,cl}\leq1$) and Ly$\alpha$ photons scatter $\sim\tau_0$ times ($N_\mathrm{sct,exc}\propto f_c$) and escape via excursion: they follow a random walk in space {\it and} frequency and escape as they are scattered into the wings where the clumps become optically thin. The emerging line profile is double-peaked, with the two peaks located at $x_p \simeq \pm\left(k a \tau_0/\sqrt{\pi}\right)^{1/3}$ for an injection frequency $x_i=0$ \citep{Adams1975}, and at $x_{p,1} \simeq \pm \left[ \left(\frac{k a \tau_0}{\sqrt{\pi}} \right)^{k} + x_i^{3k} \right]^{1/3k}$ and $x_{p,2} \simeq \pm \left(\frac{k a \tau_0}{\sqrt{\pi}}\right)^{1/3} \left[ 1 - \frac{k}{2\pi} \frac{x_i^3}{x_i^3 + k a \tau_0/\sqrt{\pi}} \right]$ for an injection frequency $x=\pm x_i$ with $x_i\geq0$.\footnote{The expressions for $x_{p,1}$ and $x_{p,2}$ have been identified as sufficient numerical fits to different $\tau_0$ and $x_i$ values.}
\end{itemize}
\citet{Gronke2017} derived the boundary criteria between these regimes for a static clumpy medium. We advance these criteria to a clumpy slab moving with a constant velocity $v$ away from the source (mimicking outflows), or equivalently an injection frequency $x_i \neq 0$. To derive the critical number of clumps, $4/3 f_c$, required for each regime, we first consider the time a Ly$\alpha$ photon takes to traverse the slab and the distance it covers in doing so.
{\it Excursion:} As Ly$\alpha$ photons traverse or escape the slab, they scatter with \HI many times. This alters their direction and frequency $x$, and they essentially perform a random walk. However, as the Ly$\alpha$ cross section is higher close to the line centre, most scatterings will occur close to the line centre and remain spatially close. Only once the Ly$\alpha$ photons are scattered into the wings of the Ly$\alpha$ absorption profile do their mean free paths become large enough to allow them to escape the slab \citep{Adams1975}.
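For reference, the peak-position fits quoted above for the homogeneous regime can be evaluated directly; a minimal, purely illustrative Python sketch (adopting the geometrical factor $k=\sqrt{3}$ introduced below) reads:
\begin{verbatim}
import numpy as np

K = np.sqrt(3.0)   # geometrical factor k (see below)

def peak_positions(a_tau0, x_i=0.0, k=K):
    # Fitting formulae for x_p1 and x_p2 quoted in the text
    # (positive branch only; the peaks sit at +/- these values)
    base = k * a_tau0 / np.sqrt(np.pi)
    x_p1 = (base**k + x_i**(3.0 * k))**(1.0 / (3.0 * k))
    x_p2 = base**(1.0 / 3.0) * (1.0 - k / (2.0 * np.pi)
                                * x_i**3 / (x_i**3 + base))
    return x_p1, x_p2

# For x_i = 0 both expressions reduce to (k a tau_0 / sqrt(pi))**(1/3)
print(peak_positions(1e3, x_i=0.0))
print(peak_positions(1e3, x_i=8.0))
\end{verbatim}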
The series of these so-called wing scatterings that allow Ly$\alpha$ photons to escape are referred to as excursion. We can estimate the mean displacement and time spent in such an excursion event: a Ly$\alpha$ photon with frequency $x$ will scatter on average $N_\mathrm{sct,exc}\sim x^2$ times before it has traversed a slab of thickness $B$. Its average mean free path is then $\lambda_\mathrm{mfp,exc}(x) = B \sigma_0/ (k\tau_0\sigma_\mathrm{HI}(x)) = B / (k \tau_0 H(a,x))$ using the wing approximation of the Ly$\alpha$ cross section and $k$ being a geometrical factor determined in \citet{Adams1975} and amounting to $\sqrt{3}$. This and the random walk nature of the Ly$\alpha$ escape imply an average displacement of \begin{eqnarray} d_\mathrm{exc} &=& \sqrt{N_\mathrm{sct,exc}} \lambda_\mathrm{mfp,exc}(x) = \frac{\sqrt{N_\mathrm{sct,exc}}B}{k\tau_0 H_v(a,x)} = \frac{x B}{k\tau_0 H_v(a,x)} \nonumber \\ \end{eqnarray} and time spent in the excursion of \begin{eqnarray} t_\mathrm{exc} &=& N_\mathrm{sct,exc} \frac{\lambda_\mathrm{mfp,exc}(x)}{c} = \frac{N_\mathrm{sct,exc} B}{c k\tau_0 H_v(a,x)} = \frac{x^2 B}{c k\tau_0 H_v(a,x)}, \nonumber \\ \end{eqnarray} with $H_v(a,x)=\frac{a}{\sqrt{\pi}(x)^2}$ being the effective line absorption profile in the wings. {\it Random Walk:} As the clumps become optically thick, the Ly$\alpha$ photons do not escape the slab via excursion anymore but by random walking: the number of scattering events is smaller than required for excursion and scale with the square of the number of clumps, $N_\mathrm{sct,rw}\propto f_c^2$ \citep{Hansen_Oh2006}. With the mean free path given by the average clump separation $\lambda_\mathrm{mfp,rw}=kB/f_c$, the average displacement and time are then \begin{eqnarray} d_\mathrm{rw} &=& \sqrt{N_\mathrm{sct,rw}} \lambda_\mathrm{mfp,rw} = \frac{\sqrt{N_\mathrm{sct,rw}} k B}{f_c} = k B \end{eqnarray} and \begin{eqnarray} t_\mathrm{rw} &=& \sqrt{N_\mathrm{sct,rw}} \frac{\lambda_\mathrm{mfp,rw}}{c} = \frac{\sqrt{N_\mathrm{sct,rw}} k B}{c f_c} = \frac{k B f_c}{c}. \end{eqnarray} \begin{enumerate} \item {\it Division between random-walk and homogeneous regime in optically thick medium:} For a given total optical depth at the line centre, $\tau_0$, we can derive the critical number of clumps along a line of sight that marks the transition from the random (clumps are optically thick) to the homogeneous regime (clumps become optically thick at the excursion frequency). We estimate this transition to arise when both regimes contribute equally to the flux of escaping Ly$\alpha$ photons. \begin{eqnarray} \frac{F_\mathrm{rw}}{F_\mathrm{exc}} &=& \frac{t_\mathrm{exc}}{t_\mathrm{rw}} = \frac{x^2}{k^2 \tau_0 H(x) f_c} = \frac{\sqrt{\pi} x^4}{k^2 a \tau_0 f_c} = 1 \label{eq_FrwDivFexc} \end{eqnarray} With $\tau_0=4/3 f_c \tau_\mathrm{0,cl}$, the critical number of clumps for Ly$\alpha$ photons escaping at frequency $x$ yields then as \begin{eqnarray} f_{c,\mathrm{crit}} &=& \frac{\sqrt{3}\pi^{1/4}}{2k} \frac{x^2}{\sqrt{a \tau_\mathrm{0,cl}}}. 
\end{eqnarray}
As long as the wings remain optically thick, the majority of Ly$\alpha$ photons (with injection frequency $x_i$) will escape at
\begin{eqnarray}
x_\mathrm{esc} &\simeq& \begin{cases} \left( \frac{k a \tau_0}{\sqrt{\pi}} \right)^{1/3} & \mathrm{for}\ x_i \leq \left(\frac{k a \tau_0}{\sqrt{\pi}} \right)^{1/3}\\ x_i & \mathrm{otherwise},\\ \end{cases} \label{eq_xesc}
\end{eqnarray}
leading to
\begin{eqnarray}
f_{c\mathrm{,crit}} &=& \begin{cases} \frac{2}{\sqrt{3}k\pi^{1/4}} \sqrt{a \tau_\mathrm{0,cl}} & \mathrm{for}\ x_i^2 \leq \frac{4 a \tau_\mathrm{0,cl}}{3 \sqrt{\pi}}\\ \frac{\sqrt{3}\pi^{1/4}}{2k} \frac{x_i^2}{\sqrt{a \tau_\mathrm{0,cl}}} & \mathrm{otherwise}.\\ \end{cases} \label{eq_fccrit}
\end{eqnarray}
This $f_{c\mathrm{,crit}}$ value marks the transition from the random walk to the homogeneous regime, in which photons escape via excursion. Its value depends strongly on the typical escape frequency of Ly$\alpha$ photons, $x_\mathrm{esc}$, which in turn determines the dependency of $f_{c\mathrm{,crit}}$ on the clump optical depth $\tau_\mathrm{0,cl}$ and the injection frequency $x_i$. We can understand these dependencies as follows: for injection frequencies close to $x=0$, i.e. $x_i^2 \leq \frac{4 a \tau_\mathrm{0,cl}}{3 \sqrt{\pi}}$, optically thicker clumps require a higher escape frequency $x_\mathrm{esc}$ and thus a higher total optical depth. This can only be achieved by interacting with more clumps (higher $f_{c,\mathrm{crit}}$). However, for $x_i^2 > \frac{4 a \tau_\mathrm{0,cl}}{3 \sqrt{\pi}}$, raising the injection frequency causes the clumps to be optically thinner at the escape frequency. Consequently, Ly$\alpha$ photons need to scatter more often (higher $f_{c,\mathrm{crit}}$) to reach the escape frequency. However, this rise in the number of scattering events is ``normalised'' to the escape frequency of the static solution ($x_i=0$): the larger the optical depth, the smaller the difference between $x_\mathrm{esc}=x_i$ and $x_\mathrm{esc}=\left(k a \tau_0/\sqrt{\pi} \right)^{1/3}$, and hence the fewer ``extra'' scattering events are required. Because the transition described by $f_{c,\mathrm{crit}}$ is not sharp, we model the Ly$\alpha$ line profile emerging from the moving slab by superposing the Ly$\alpha$ radiation escaping in the homogeneous regime (using $J_\mathrm{slab}$ in Eqn. \ref{eq_Jslab}) and in the random walk regime (using $J_\mathrm{centre}$ in Eqn. \ref{eq_Jcentre} and assuming $\sigma_r = \sigma_\mathrm{th}$),
\begin{eqnarray}
J_\mathrm{rh}(\tau_0, x, x_i) &=& (1 - f_\mathrm{rw}) J_\mathrm{slab}(T, \tau_0, x, x_i) + f_\mathrm{rw} J_\mathrm{centre}(T, x) \label{eq_Jrh} \nonumber \\
\end{eqnarray}
We derive the corresponding ratio $f_\mathrm{rw}$ by assuming that the Ly$\alpha$ flux escapes predominantly where the Ly$\alpha$ profiles peak,
\begin{eqnarray}
f_\mathrm{rw} &=& \frac{F_\mathrm{rw} / F_\mathrm{exc}}{F_\mathrm{rw} / F_\mathrm{exc} + \frac{J_\mathrm{centre}(0)}{2(J_\mathrm{slab}(\tau_0,x_{p,1},x_i) + J_\mathrm{slab}(\tau_0,x_{p,2},x_i))}}. \label{eq_frh}
\end{eqnarray}
We note that this description reproduces the Ly$\alpha$ line profiles for static clumps, fixed $\tau$ values, and varying $f_c$ values in \citet{Gronke2017}.
\item {\it Division between porous and homogeneous regime in optically thin medium:} As the medium becomes optically thinner, Ly$\alpha$ photons that scatter into the wings can escape the slab before completing their excursion, i.e. $x_\mathrm{esc}=x_\mathrm{max}=\max(x_\star, x_i)$.
Here the transition from an optically thin medium to an optically thick medium is described by $k a \tau_0=\sqrt{\pi} x_\mathrm{max}^3$,\footnote{While the wings become optically thin for $k\tau(x_\mathrm{max})\geq1$ corresponding to $k a \tau_0=\sqrt{\pi} x_\mathrm{max}^2$, we adapt this criterion to be consistent with a continuous function for $x_\mathrm{esc}$.} with $x_\star$ being the frequency where the Ly$\alpha$ absorption profile transitions from the Gaussian core to the Lorentzian wings. While the slab is optically thin at $x_\mathrm{max}$, depending on whether the clumps are optically thin or thick at $x=0$, the escape of Ly$\alpha$ photons is described by the homogeneous and porous regime, respectively. Again we estimate the transition to arise when both regimes contribute equally to the flux of escaping Ly$\alpha$ photons. We note that if clumps are optically thin at line centre ($\tau_\mathrm{0,cl}<1$), not every clump encounter leads to a scattering event; this reduces the time to escape $t_\mathrm{exc}$ by a factor $1-e^{-\tau_\mathrm{0,cl}}$. \begin{eqnarray} \frac{F_\mathrm{por}}{F_\mathrm{hom}} &=& \frac{t_\mathrm{exc}}{t_\mathrm{rw}~(1-e^{-\tau_\mathrm{0,cl}})} = \frac{\sqrt{\pi} x_\mathrm{max}^4}{k^2 a \tau_0 f_c (1-e^{-\tau_\mathrm{0,cl}})} = 1 \end{eqnarray} We yield the critical number of clumps that mark the transition from the porous to the homogeneous regime as \begin{eqnarray} f_{c\mathrm{,crit}} &=& \frac{x_\mathrm{max}}{k (1-e^{-\tau_\mathrm{0,cl}})}. \end{eqnarray} The emerging Ly$\alpha$ line profile accounts again for Ly$\alpha$ photons escaping in homogeneous ($J_\mathrm{slab}$, see Eqn. \ref{eq_Jslab}) and porous regime ($J_\mathrm{centre}$, see Eqn. \ref{eq_Jcentre}). \begin{eqnarray} J_\mathrm{ph}(\tau_0, x, x_i) &=& (1 - f_\mathrm{por}) J_\mathrm{slab}(T, \tau_0, x, x_\mathrm{max}) + f_\mathrm{por} J_\mathrm{centre}(T, x) \label{eq_Jph} \nonumber \\ \end{eqnarray} However, to ensure that $J_\mathrm{slab}$ peaks at $x_\mathrm{max}$, we use $J_\mathrm{slab}(T, \tau_0, x, x_i)$ as written in Eqn. \ref{eq_Jslab} for $|x_i|>x_\star$ and replace $a\tau_0$ by $\sqrt{\pi}x_\star^3$ in Eqn. \ref{eq_Jslab} otherwise. The ratio between the two different escape regimes is then again given by assuming that most Ly$\alpha$ photons escape at the peak frequencies, \begin{eqnarray} f_\mathrm{por} &=& \frac{F_\mathrm{por} / F_\mathrm{hom}}{F_\mathrm{por} / F_\mathrm{hom} + \frac{J_\mathrm{centre}(0)}{2(J_\mathrm{slab}(\tau_0,x_{p,1},x_\mathrm{max}) + J_\mathrm{slab}(\tau_0,x_{p,2},x_\mathrm{max}))}}. \nonumber \\ \label{eq_fph} \end{eqnarray} \end{enumerate} To derive the Ly$\alpha$ line profile emerging from a simulated galaxy, we obtain the injection frequency $x_i$ and optical depth at the Ly$\alpha$ line centre $\tau_0$ from the galaxy's initial gas mass $M_g^i$ and SN energy $E_\mathrm{SN}$ as follows: \paragraph*{Outflow velocity:} \label{par_outflow_velocity} The injection frequency $x_i$ is the velocity $v$ of the outflowing gas in terms of the thermal velocity $v_\mathrm{th}$. 
We derive the outflow velocity $v$ of the gas from the SN energy injected into the gas residing in the galaxy with a total gas mass $M_g$ as
\begin{eqnarray}
v &=& \left( \frac{2 E_\mathrm{SN}}{M_g} \right)^{1/2} = \left( \frac{2 M_g^\mathrm{ej} v_c^2}{M_g^\mathrm{i}} \right)^{1/2} \simeq v_c \left( \frac{2 f_\star^\mathrm{eff}}{f_\star^\mathrm{ej}} \right)^{1/2}\\
v_c &=& \sqrt{\frac{G M_\mathrm{vir}}{r_\mathrm{vir}}},
\end{eqnarray}
with $f_\star^\mathrm{eff}$ being the effective star formation efficiency, and $f_\star^\mathrm{ej}$ the star formation efficiency required to eject all gas, as defined for the delayed SN feedback scheme in \citet{Hutter2021a}. This results in outflow velocities of $60$~km/s and $143$~km/s for $10^9\msun$ and $10^{11}\msun$ halos, respectively. We note that $v$ is linked to the escape velocity $v_e$ of the halo as $\frac{v}{v_e} = \frac{v}{\sqrt{2}v_c} = \sqrt{\frac{M_g^\mathrm{i}}{M_g^\mathrm{ej}}}$.
\paragraph*{Optical depth and number of clumps encountered:} The optical depth at the Ly$\alpha$ line centre is given by
\begin{eqnarray}
\tau_0 &=& \frac{4}{3} f_c \tau_\mathrm{0,cl} = N_\mathrm{HI} \sigma_\mathrm{HI}.
\end{eqnarray}
$\tau_\mathrm{0,cl}$ is a free parameter in our model and reflects the optical depth of a cloud in the ISM. Its order of magnitude can be estimated from the median mass ($M_\mathrm{cl}$) and size ($r_\mathrm{cl}$) of molecular clouds ($M_\mathrm{cl}\simeq 10^5\msun$ and $r_\mathrm{cl}=20$~pc) as
\begin{eqnarray}
\tau_\mathrm{0,cl} &=& \sigma_\mathrm{HI} \frac{M_\mathrm{cl}}{r_\mathrm{cl}}.
\end{eqnarray}
\paragraph*{\HI column density:} \label{par_HI_column_density} We derive the neutral hydrogen column density $N_\mathrm{HI}$ from the initial gas mass, $M_g^\mathrm{i}$, as
\begin{eqnarray}
N_\mathrm{HI} &=& \xi \frac{3 M_\mathrm{HI}}{4\pi r_g^2 m_\mathrm{H}} = \xi \frac{3 X_c (1-Y) M_g^\mathrm{i}}{4\pi\ (4.5\lambda r_\mathrm{vir})^2 m_\mathrm{H}} = \xi \frac{3 f_m M_\mathrm{vir}}{81\pi \lambda^2 r_\mathrm{vir}^2 m_\mathrm{H}} \nonumber \\
&=& \xi \left( \frac{9\pi^2 H_0^2 \Omega_m}{G} \right)^{2/3} (1+z)^2 M_\mathrm{vir}^{1/3} \frac{3 f_m}{81\pi \lambda^2 m_\mathrm{H}}.
\end{eqnarray}
Here $r_g$ describes the gas radius, for which we assume $r_g= 4.5 \lambda r_\mathrm{vir}$. $X_c$ and $Y$ are the cold gas and helium mass fractions, respectively. Gas accretion and SN feedback processes determine the relation between the initial gas mass and the halo mass, $f_m$, which typically ranges from $\sim10^{-3}$ for low-mass galaxies to $\sim 10^{-1}$ for more massive galaxies. $\xi$ is a geometrical correction factor that depends on $\tau_0$ and the dust optical depth at the Ly$\alpha$ resonance $\tau_d$. Its maximum value is $0.35$, and we describe its derivation and dependencies in Appendix \ref{app_correction_factor}. For the cosmological parameters assumed in this paper, we obtain
\begin{eqnarray}
N_\mathrm{HI} = 6.5\times 10^{17} \mathrm{cm}^{-2}\ (1+z)^2 \frac{\xi f_m}{\lambda^2} \left( \frac{M_\mathrm{vir}}{10^8 M_\odot} \right)^{1/3}.
\end{eqnarray}
\subsubsection{Ionising escape fraction dependent in a clumpy/homogeneous medium}
\label{subsubsec_Lya_line_model_porous}
For this Ly$\alpha$ line profile model, we assume a geometry similar to the so-called picket fence model \citep{Heckman2011}. Here a fraction $f_\mathrm{esc}$ of the ionising radiation escapes through low-density channels, while the remaining ionising photons are absorbed by the dense shell.
Correspondingly, the Ly$\alpha$ photons escaping through the channels scatter only a few times, while those escaping through the shell encounter many scattering events. The former gives rise to a single-peaked Ly$\alpha$ line centred around $x=0$, while the latter creates a broader double-peaked Ly$\alpha$ line (assuming a homogeneous slab model with peaks at $x_p$). In detail, we assume that Ly$\alpha$ photons escaping through the low-density channels encounter a total optical depth $\tau_\mathrm{channel}$, leading to the fraction of ionising photons escaping through these channels being given by \begin{eqnarray} f_\mathrm{esc}^\mathrm{channel} &=& \exp\left( - \tau_\mathrm{channel}^\mathrm{LyC} \right) = \exp\left ( - \tau_\mathrm{channel} \frac{\sigma_\mathrm{HI}^\mathrm{LyC}}{\sigma_\mathrm{HI}} \right). \end{eqnarray} Here $\sigma_\mathrm{HI}$ and $\sigma_\mathrm{HI}^\mathrm{LyC}$ are the absorption cross sections of neutral hydrogen for Ly$\alpha$ and ionising photons, respectively. As $f_\mathrm{esc}^\mathrm{channel}$ is very likely to be lower than unity, the fraction of the solid angle covered by channels \begin{eqnarray} f_\mathrm{channel} &=& \frac{f_\mathrm{esc}}{f_\mathrm{esc}^\mathrm{channel}} \end{eqnarray} will exceed $f_\mathrm{esc}$. From the Ly$\alpha$ channel optical depth, $\tau_\mathrm{channel}$, and the gas mass divided into gas located in channels and shells, we derive the optical depth of Ly$\alpha$ photons traversing the shell as \begin{eqnarray} \tau_\mathrm{shell} &=& \frac{\tau_0 - f_\mathrm{channel} \tau_\mathrm{channel}}{1 - f_\mathrm{channel}}, \label{eq_tau_shell} \end{eqnarray} with the total optical depth $\tau_0$ being derived as described in Section \ref{par_HI_column_density}. We assume the gas in the shell to be outflowing while the gas in the channels is at rest.\footnote{To compute the outflow velocity $x_i$, we use the entire gas mass instead of the gas sitting in the shell. Thus, if gas resides in channels, we underestimate $x_i$ slightly.} For a given clump optical depth, $\tau_\mathrm{0,cl}$, we determine the Ly$\alpha$ escape regime for the dense shell and channels, and derive the corresponding fractions of Ly$\alpha$ radiation that escapes without significant scattering, $f_\mathrm{shell}$ and $f_\mathrm{channel}$ respectively, using Eqns \ref{eq_frh} and \ref{eq_fph}. The Ly$\alpha$ line emerging from the galaxy contains Ly$\alpha$ photons that traverse the dense shell \begin{eqnarray} J_\mathrm{shell}(T, \tau_\mathrm{shell}, x, x_i) &=& f_\mathrm{shell} J_\mathrm{centre}(T, x) \\ && + \left( 1-f_\mathrm{shell} \right) J_\mathrm{slab}(T, \tau_\mathrm{shell}, x, x_i) \nonumber \end{eqnarray} and escape through the channels \begin{eqnarray} J_\mathrm{channel}(T, \tau_\mathrm{channel}, x, x_i) &=& f_\mathrm{channel} J_\mathrm{centre}(T, x) \\ && + \left( 1-f_\mathrm{channel} \right) J_\mathrm{slab}(T, \tau_\mathrm{channel}, x, x_i). \nonumber \end{eqnarray} It is given by \begin{eqnarray} J(\tau, x, x_i) &=& f_\mathrm{channel}\ J_\mathrm{channel}(T, \tau_\mathrm{channel}, x, x_i) \\ && +\ \left(1 - f_\mathrm{channel}\right)\ J_\mathrm{shell}(T, \tau_\mathrm{shell}, x, x_i), \nonumber \end{eqnarray} with the outflow velocity $x_i$ being derived as outlined in Section \ref{par_outflow_velocity}. However, in the following, we consider the extreme case of $\tau_\mathrm{channel}=0$ (resulting in $f_\mathrm{channel}=1$). \subsection{Dust attenuation} \label{subsec_dust_attenuation} We employ two different dust models. 
The first one links the Ly$\alpha$ escape fraction to the escape fraction of UV continuum photons, $f_\mathrm{esc}^\mathrm{c}$. The second one is more complex. It assumes a clumpy medium where the attenuation of Ly$\alpha$ by dust follows different relations in the regimes identified in \citet{Gronke2017}. Both models assume a slab-like geometry and we describe their details in the following. \subsubsection{Simple attenuation model} \label{subsubsec_dust_simple_model} In this model, we assume that (i) dust and gas are perfectly mixed, (ii) the dust distribution is slab-like, and (iii) the dust attenuation of Ly$\alpha$ photons is proportional to the dust attenuation of UV continuum photons. The escape fraction of Ly$\alpha$ photons, $f_\mathrm{esc}^\mathrm{Ly\alpha}$, is then directly related to the escape of UV continuum photons, $f_\mathrm{esc}^\mathrm{c}$, derived in Section \ref{subsubsec_metals_and_dust}. \begin{eqnarray} f_\mathrm{esc}^\mathrm{Ly\alpha} &=& p\ f_\mathrm{esc}^\mathrm{c} \end{eqnarray} We use $p$ as a free parameter to obtain the observed Ly$\alpha$ luminosity functions at $z=6.6-7.3$. \subsubsection{Refined attenuation model} \label{subsubsec_dust_refined_model} This model assumes that dust and gas are perfectly mixed and distributed in clumps. The dust attenuation of Ly$\alpha$ photons depends on the total optical depth of the dust, $\tau_\mathrm{d,total}$, the optical depth of a clump, $\tau_\mathrm{d,cl}$, and the number of clumps, $f_c$, encountered along the sightline from the midplane to the surface of the slab. We derive its value by estimating the dust absorption cross section. Following \citet{Galliano2022} and assuming the radius and density of graphite/carbonaceous grains (see Section \ref{subsubsec_metals_and_dust}), we assume $\kappa_\mathrm{abs} \simeq \frac{Q_\mathrm{abs}}{a s} \simeq 2\times10^5$~cm$^{2}/$g with $Q_\mathrm{abs}\simeq1$ being the absorption efficiency. \footnote{We note that this is in rough agreement with the dust extinction cross sections of the Small and Large Magellanic clouds $\kappa_\mathrm{ext} = \sigma_\mathrm{d} / m_\mathrm{H} = \sigma_\mathrm{d,ref} \frac{M_z}{M_\mathrm{d}\ Z_\mathrm{ref} m_\mathrm{H}} \simeq 4\times 10^5$~cm$^{2}/$g, with the extinction efficiency $Q_\mathrm{ext}=Q_\mathrm{abs}+Q_\mathrm{sca}$ being given by the similar sized absorption ($Q_\mathrm{abs}$) and scattering efficiencies ($Q_\mathrm{sca}$) at Ly$\alpha$, a dust-to-metal mass ratio $M_\mathrm{d}/M_Z\simeq0.25$, $\sigma_\mathrm{ref}\simeq4\times10^{-22}$cm$^2$ and $Z_\mathrm{ref}\simeq0.0025$ for SMC and $\sigma_\mathrm{ref}\simeq7\times10^{-22}$cm$^2$ and $Z_\mathrm{ref}\simeq0.005$ for LMC \citep[for further explanations see][]{Laursen2010}.} \begin{eqnarray} \tau_\mathrm{d,total} &=& \frac{4}{3} f_c \tau_\mathrm{d,cl} = \xi\ \frac{3}{4\pi} \frac{M_\mathrm{d}}{r_\mathrm{d}^2} \kappa = \frac{M_\mathrm{d}}{M_\mathrm{HI}} \frac{\kappa_\mathrm{abs} m_\mathrm{H}}{\sigma_\mathrm{HI}} \tau_0 \end{eqnarray} The resulting estimates for $\tau_\mathrm{d,total}$ and $\tau_\mathrm{d,cl}$ allow us to compute the Ly$\alpha$ escape fractions in the different escape regimes as follows. \paragraph{Free-streaming regime:} \label{par_dust_freestreaming} In an optically thin slab ($\tau_0<1$), the Ly$\alpha$ photons stream through $\sim f_c$ clumps. 
On their way, they are attenuated by the dust in the clumps; hence, the total dust optical depth, $\tau_\mathrm{d,total}$, determines the Ly$\alpha$ escape fraction as
\begin{eqnarray}
f_\mathrm{esc}^\mathrm{Ly\alpha, fs} &=& \exp\left(-\tau_\mathrm{d, total}\right) = \exp\left(-\frac{4}{3} f_c \tau_\mathrm{d,cl}\right)
\end{eqnarray}
We note that in this regime, the number of clumps along the sightline $f_c$ and the clump optical depth $\tau_\mathrm{0,cl}$ are degenerate.
\paragraph{Random walk regime:} \label{par_dust_randomwalk} In the random walk regime, both the slab and the individual clumps are optically thick ($\tau_\mathrm{0,cl}\geq1$). As a result, Ly$\alpha$ photons escape mostly by being scattered by the clumps, and their escape fraction is determined by the number of clumps encountered along their random walk, $N_\mathrm{cl}(f_c)$, and the absorption probability per clump interaction $\epsilon$. According to \citet{Hansen_Oh2006}, it is then given by
\begin{eqnarray}
f_\mathrm{esc}^\mathrm{Ly\alpha, rw} &=& f_\mathrm{HO06} = \frac{1}{\cosh(\sqrt{2 N_\mathrm{cl}(f_c)\ \epsilon})}
\end{eqnarray}
We assume $N_\mathrm{cl}(f_c)\simeq \frac{3}{2}f_c^2 + 2 f_c$ as found in \citet{Gronke2017}. The scaling of $N_\mathrm{cl}$ with $f_c$ also agrees with the findings in \citet{Hansen_Oh2006}; the prefactors vary slightly due to different geometries of the scattering surface. However, since $\epsilon$ is sensitive to how deeply the photons penetrate the clump, it depends non-trivially on the clump optical depth and movement. For simplicity, we assume $\epsilon=1-\exp(-\tau_\mathrm{d,cl})$, which has been shown to be in agreement with numerical simulations for small $\tau_\mathrm{d,cl}$ values \citep{Gronke2017}. We note that in this regime the Ly$\alpha$ escape fraction depends solely on the number of clumps encountered, $f_c$, and hence $f_c$ and $\tau_\mathrm{d,cl}$ are not degenerate as in the free-streaming or homogeneous regime.
\paragraph{Homogeneous regime:} \label{par_dust_homogeneous} In the homogeneous regime, the slab is optically thick ($\tau_0\geq1$), while the individual clumps are optically thin ($\tau_\mathrm{0,cl}<1$). During their initial random walk, the Ly$\alpha$ photons scatter with $N_\mathrm{cl}(f_{c,\mathrm{crit}})$ clumps before they diffuse into the wings and escape by free-streaming through $f_c$ clumps. The resulting Ly$\alpha$ escape fraction,
\begin{eqnarray}
f_\mathrm{esc}^\mathrm{Ly\alpha, hom} &=& f_\mathrm{HO06}(f_{c,\mathrm{crit}}) \exp(-\tau_\mathrm{d, total}),
\end{eqnarray}
depends on $f_{c,\mathrm{crit}}$ and $\tau_\mathrm{d,total}$, with $f_{c,\mathrm{crit}}$ being determined by $\tau_0$ and $\tau_\mathrm{0,cl}$.
\begin{eqnarray}
f_{c,\mathrm{crit}} = \begin{cases} \frac{2}{\sqrt{3}k \pi^{1/4}} \sqrt{a \tau_\mathrm{0,cl}} & \mathrm{for}\ ka\tau_0 \geq \sqrt{\pi} x_\mathrm{max}^3\ \mathrm{and}\ x_i^2\leq \frac{4 a \tau_\mathrm{0,cl}}{3\sqrt{\pi}} \\ \frac{\sqrt{3} \pi^{1/4}}{2k} \frac{x_i^2}{\sqrt{a \tau_\mathrm{0,cl}}} & \mathrm{for}\ ka\tau_0 \geq \sqrt{\pi} x_\mathrm{max}^3\ \mathrm{and}\ x_i^2 > \frac{4 a \tau_\mathrm{0,cl}}{3\sqrt{\pi}} \\ \frac{x_\mathrm{max}}{k \left(1-e^{-\tau_\mathrm{0,cl}}\right)} & \mathrm{for}\ ka\tau_0 < \sqrt{\pi} x_\mathrm{max}^3 \end{cases}
\end{eqnarray}
\paragraph{Porous regime:} \label{par_dust_porous} In the porous regime, the individual clumps are optically thick ($\tau_\mathrm{0,cl}\geq1$), but only a fraction $1-e^{-f_c}$ of the Ly$\alpha$ photons will encounter a clump along their sightlines.
The remaining Ly$\alpha$ photons do not interact with any clumps and are thus not attenuated by dust as they escape the slab.\footnote{We note that our expression is here a lower limit of $f_\mathrm{esc}^\mathrm{Ly\alpha, por}$, as we assume the Ly$\alpha$ radiation interacting with clumps to experience attenuation as if it streamed through the clumps. It might be more appropriate to consider these Ly$\alpha$ photons to be absorbed as in the random walk regime, $f_\mathrm{esc}^\mathrm{Ly\alpha, por} = e^{-f_c} + \left[1 - e^{-f_c}\right] \frac{1}{\cosh(\sqrt{2 N_\mathrm{cl}(f_c)\ \epsilon})}$; however, in practice, galaxies in the porous regime contain little, if any, dust.}
\begin{eqnarray}
f_\mathrm{esc}^\mathrm{Ly\alpha, por} &=& e^{-f_c} + \left[1 - e^{-f_c}\right] \exp\left(-\frac{4}{3} f_c \tau_\mathrm{d,cl}\right)
\end{eqnarray}
\subsubsection{Emerging Ly$\alpha$ line profile models}
\label{subsubsec_emerging_Lya_profile_models}
We briefly summarise the combinations of Ly$\alpha$ line and dust attenuation models that we will investigate in this paper.
\paragraph*{Gaussian:} The Ly$\alpha$ line profile emerging from a galaxy is given by the central Gaussian Ly$\alpha$ line profile (Section \ref{subsubsec_Lya_line_model_gaussian}). To account for the attenuation by dust, we apply the Ly$\alpha$ escape fraction, $f_\mathrm{esc}^\mathrm{Ly\alpha}$, derived in our simple dust model (Section \ref{subsubsec_dust_simple_model}) to all frequencies $x$.
\paragraph*{Clumpy:} This model assumes an outflowing shell of dusty gas clumps, in which gas and dust are perfectly mixed. It combines the Ly$\alpha$ line model described in Section \ref{subsubsec_Lya_line_model_clumpy} with the refined dust model depicted in Section \ref{subsubsec_dust_refined_model}. The gas in the galaxies is assumed to have a temperature of $T=10^4$~K.\footnote{We have chosen $T=10^4$~K for simplicity. If we were to assume the virial temperature ($T_\mathrm{vir}$), the double-peak line profile would narrow as $T_\mathrm{vir}$ increases.} In contrast to the {\it Gaussian} model, we dust-attenuate the Ly$\alpha$ line of each escape regime (homogeneous, random, porous) by its corresponding escape fraction $f_\mathrm{esc}^\mathrm{Ly\alpha}$. The emerging Ly$\alpha$ line profile is then the superposition of the line profiles of all relevant escape regimes,
\begin{equation}
L_\alpha^\mathrm{ISM}(x) = f_\mathrm{esc, slab}^\mathrm{Ly\alpha} (1-f) J_\mathrm{slab}(x) + f_\mathrm{esc, centre}^\mathrm{Ly\alpha} f J_\mathrm{centre}(x),
\end{equation}
with $f_\mathrm{esc, slab}^\mathrm{Ly\alpha}=f_\mathrm{esc, hom}^\mathrm{Ly\alpha}$, and $(f, f_\mathrm{esc, centre}^\mathrm{Ly\alpha})$ given by $(1, f_\mathrm{esc, fs}^\mathrm{Ly\alpha})$, $(f_\mathrm{rw}, f_\mathrm{esc, rw}^\mathrm{Ly\alpha})$ or $(f_\mathrm{por}, f_\mathrm{esc, por}^\mathrm{Ly\alpha})$ depending on the total and clump optical depths $\tau_0$ and $\tau_\mathrm{0,cl}$.
\paragraph*{Porous:} This model is very similar to the {\it Clumpy} model. However, it considers the outflowing shell of clumps to be pierced by gas- and dust-free channels through which a fraction $f_\mathrm{esc}$ of the Ly$\alpha$ photons escape without scattering. It combines the Ly$\alpha$ line model described in Section \ref{subsubsec_Lya_line_model_porous}, assuming $\tau_\mathrm{channel}^\mathrm{LyC}=0$, with the refined dust model depicted in Section \ref{subsubsec_dust_refined_model}.
Again we assume the gas in the galaxy to be heated to a temperature of $T=10^4$~K, and the Ly$\alpha$ line of each escape regime (homogeneous, random, porous) to be dust-attenuated by its corresponding escape fraction $f_\mathrm{esc}^\mathrm{Ly\alpha}$. The emerging Ly$\alpha$ line profile is again a superposition of the Ly$\alpha$ photons escaping through the channels and the clumpy shell,
\begin{eqnarray}
L_\alpha^\mathrm{ISM}(x) &=& f_\mathrm{esc}~ J_\mathrm{channel}(x)\ +\ (1- f_\mathrm{esc})~ J_\mathrm{shell}(x) \\
J_\mathrm{channel}(x) &=& J_\mathrm{centre}(x)\\
J_\mathrm{shell}(x) &=& f_\mathrm{esc, slab}^\mathrm{Ly\alpha}~ (1-f)~ J_\mathrm{slab}(x) \ +\ f_\mathrm{esc, centre}^\mathrm{Ly\alpha}~ f~ J_\mathrm{centre}(x) \nonumber \\
\end{eqnarray}
with $f_\mathrm{esc, slab}^\mathrm{Ly\alpha}=f_\mathrm{esc, hom}^\mathrm{Ly\alpha}$, and $(f, f_\mathrm{esc, centre}^\mathrm{Ly\alpha})$ given by $(1, f_\mathrm{esc, fs}^\mathrm{Ly\alpha})$, $(f_\mathrm{rw}, f_\mathrm{esc, rw}^\mathrm{Ly\alpha})$ or $(f_\mathrm{por}, f_\mathrm{esc, por}^\mathrm{Ly\alpha})$ depending on the total and clump optical depths $\tau_0$ and $\tau_\mathrm{0,cl}$. We note that $\tau_\mathrm{shell}$ exceeds the $\tau_0$ value in the {\it Clumpy} model when $f_\mathrm{esc}>0$ (see Eqn. \ref{eq_tau_shell}), as the same amount of gas and dust is distributed over a smaller solid angle.
\subsection{IGM attenuation}
\label{subsec_IGM_attenuation}
The Ly$\alpha$ radiation escaping from a galaxy is attenuated by the \HI it encounters along the line of sight from the location of emission ($r_\mathrm{em}$) to the location of absorption ($r_\mathrm{obs}$). Expressing the frequency $\nu$ of a photon in terms of its rest-frame velocity $x = v/b = (\nu_\alpha/\nu - 1) c/b$ relative to the Ly$\alpha$ line centre, the transmitted fraction of radiation at frequency $x$ is given by
\begin{eqnarray}
T_{\alpha,x}(x) &=& \exp \left[ -\tau_\alpha(x) \right] \\
\tau_\alpha(x) &=& \int_{r_\mathrm{em}}^{r_\mathrm{obs}} \sigma_0~ \phi(x + x_\mathrm{p}(r))\ n_\mathrm{HI}(r)\ \mathrm{d}r. \label{eq_IGMtransmission_tau_alpha}
\end{eqnarray}
Here $\tau_\alpha$ describes the optical depth to Ly$\alpha$, while $n_\mathrm{HI}(r)$ and $v_\mathrm{p}(r)= b x_\mathrm{p}(r)$ denote the \HI density and peculiar velocity (in the rest-frame of the emitted Ly$\alpha$ radiation) at a physical distance $r$ from the emitter, respectively. $\sigma_0$ is the absorption cross section, given in the cgs system by
\begin{eqnarray}
\sigma_0 &=& \frac{\pi e^2 f}{m_e c} = \frac{3 \lambda_\alpha^2 A_{21}}{8\pi},
\end{eqnarray}
where $f=0.4162$ is the oscillator strength, $e$ the electron charge, $m_e$ the electron mass, $\lambda_\alpha=1216$~\AA\ the wavelength of a photon at the Ly$\alpha$ line centre, and $A_{21}=6.265\times10^8$s$^{-1}$ the Einstein coefficient for spontaneous emission of Ly$\alpha$ photons. $\phi(x)$ denotes the Ly$\alpha$ absorption profile and is given by a Voigt profile consisting of a Gaussian core
\begin{eqnarray}
\phi_\mathrm{Gauss}(x) &=& \frac{\lambda_\alpha}{\sqrt{\pi} b} ~\exp\left( - x^2 \right) \\
b &=& \sqrt{\frac{2 k_B T_\mathrm{IGM}}{m_H}},
\end{eqnarray}
and Lorentzian damping wings
\begin{eqnarray}
\phi_\mathrm{Lorentz}(x) &=& \frac{A_{21} \lambda_\alpha^2}{4\pi^2 (x b)^2 + A_{21}^2 \lambda_\alpha^2}.
\end{eqnarray}
Here $b$ is the Doppler parameter, $T_\mathrm{IGM}$ the temperature of the IGM, $k_B$ the Boltzmann constant, and $m_\mathrm{H}$ the mass of a hydrogen atom.
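As a quick numerical cross-check of the two expressions for $\sigma_0$, the following sketch (using approximate cgs constants; purely illustrative) evaluates both:
\begin{verbatim}
import numpy as np

# Approximate cgs constants
e, m_e, c = 4.803e-10, 9.109e-28, 2.998e10
lam_alpha, A21, f_osc = 1.216e-5, 6.265e8, 0.4162

sigma_0_osc = np.pi * e**2 * f_osc / (m_e * c)     # pi e^2 f / (m_e c)
sigma_0_A21 = 3.0 * lam_alpha**2 * A21 / (8.0 * np.pi)
print(sigma_0_osc, sigma_0_A21)   # both ~1.1e-2 cm^2 Hz
\end{verbatim}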
While natural line broadening is unimportant in regions of low \HI density and the profile can be approximated by the Gaussian core, the absorption in the Lorentzian damping wings is non-negligible in regions of high \HI density. In practice, we mimic the Voigt profile by assuming the Gaussian core profile $\phi(x)=\phi_\mathrm{Gauss}(x)$ for $|x|<x_\star$ and the Lorentzian wing profile $\phi(x)=\phi_\mathrm{Lorentz}(x)$ otherwise. Fitting to numerical results yields the transition frequency as
\begin{eqnarray}
x_\star &=& 0.54 \log_{10}(b)
\end{eqnarray}
for temperatures between $T=0.01$~K and $10^8$~K. Our calculations of $T_\alpha$ include the Hubble flow and peculiar velocities $v_\mathrm{p}$: outflows (inflows) of gas from a galaxy that correspond to positive (negative) $v_\mathrm{p}$ values will redshift (blueshift) the Ly$\alpha$ photons and lead to an increase (decrease) in $T_\alpha$. For each galaxy in a simulation snapshot we derive $T_\alpha$ along all directions of the major axes (i.e. along and against the $x$, $y$ and $z$ axes). By stepping through the simulation box, which is divided into $512$ cells per side (each cell having a size of $461$~ckpc), we derive the $n_\mathrm{HI}(r)$ and $v_\mathrm{p}(r)$ profiles from the {\sc astraeus} ionisation and {\sc vsmdpl} density and velocity grids. For any galaxy, we start the profiles at the galaxy position $r_\mathrm{em}=0$ and end them once the highest frequency $x_\mathrm{max} = v_\mathrm{max}/b=40$ tracked in our Ly$\alpha$ line profiles has redshifted out of absorption at $r\simeq v_\mathrm{max}/ [H_0 \Omega_m^{1/2}(1+z)^{1/2}] \simeq 13.6/(1+z)^{1/2}$~cMpc. We assume $T_\mathrm{IGM}=10^4$~K in ionised and $T_\mathrm{IGM}=10^2$~K in neutral regions. Since the Ly$\alpha$ line redshifts out of resonance very quickly (the light travel time for distance $r$ at $z=7$ is less than $2$~Myr, shorter than the simulation time steps), a single simulation snapshot suffices for computing the $T_\alpha$ values of the galaxies in that snapshot. We also assume periodic boundary conditions when computing $T_\alpha$. Finally, we derive the observed, i.e. dust and IGM attenuated, Ly$\alpha$ luminosity and line profile along each major axis (resulting in six lines of sight) as
\begin{eqnarray}
L_{\alpha,x}(x) &=& L_\alpha^\mathrm{ISM}(x)\ T_{\alpha,x}(x) = L_\alpha^\mathrm{intr}\ f_\mathrm{esc}^\mathrm{Ly\alpha}\ J(x)\ T_{\alpha,x}(x),
\end{eqnarray}
where $f_\mathrm{esc}^\mathrm{Ly\alpha}$ and $J(x)$ are the respective Ly$\alpha$ escape fraction and line profile for one of the models outlined in Section \ref{subsubsec_emerging_Lya_profile_models}. The total observed Ly$\alpha$ luminosity $L_\alpha$ is obtained by integrating $L_{\alpha,x}(x)$ over the frequency $x$, and the total fraction of Ly$\alpha$ radiation transmitted through the IGM, $T_\alpha$, by weighting $T_{\alpha,x}(x)$ with the normalised line profile $J(x)$.
\begin{eqnarray}
L_\alpha &=& \int_{-\infty}^{\infty} L_{\alpha,x}(x)\ \mathrm{d}x \\
T_\alpha &=& \int_{-\infty}^{\infty} J(x)\ T_{\alpha,x}(x)\ \mathrm{d}x
\end{eqnarray}
In the following, we use all lines of sight as independent probes when line-of-sight-sensitive Ly$\alpha$ quantities are analysed. We derive the observed Ly$\alpha$ luminosities ($L_\alpha$) for all galaxies at $z=20$, $15$, $12$, $10$, $9$, $8$, $7.3$, $7$ and $6.6$ for any combination of emerging Ly$\alpha$ line model ({\it Gaussian}, {\it Clumpy}, {\it Porous}) and reionisation scenario ({\sc mhdec}, {\sc mhconst}, {\sc mhinc}).
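To make the sightline procedure explicit, a schematic Python sketch is given below; it is not the actual pipeline code, and the grid values (cell size, \HI densities, velocities) are arbitrary placeholders.
\begin{verbatim}
import numpy as np

kB, mH = 1.381e-16, 1.673e-24            # cgs
lam_alpha, A21 = 1.216e-5, 6.265e8
sigma_0 = 3.0 * lam_alpha**2 * A21 / (8.0 * np.pi)

def phi(x, b):
    # Gaussian core / Lorentzian wing approximation, switching at x_star
    x_star = 0.54 * np.log10(b)
    gauss = lam_alpha / (np.sqrt(np.pi) * b) * np.exp(-x**2)
    lorentz = A21 * lam_alpha**2 / (4.0 * np.pi**2 * (x * b)**2
                                    + A21**2 * lam_alpha**2)
    return np.where(np.abs(x) < x_star, gauss, lorentz)

def transmission(x, n_HI, x_los, dr, T_IGM=1e4):
    # T_alpha,x(x) along one sightline: n_HI per cell [1/cm^3],
    # x_los = (Hubble flow + peculiar velocity) per cell in units of b,
    # dr = cell length [cm]
    b = np.sqrt(2.0 * kB * T_IGM / mH)
    tau = np.zeros_like(x)
    for n, xp in zip(n_HI, x_los):
        tau += sigma_0 * phi(x + xp, b) * n * dr
    return np.exp(-tau)

# Toy sightline with 50 cells of residual HI (placeholder values)
x = np.linspace(-40.0, 40.0, 801)
T = transmission(x, n_HI=np.full(50, 1e-9),
                 x_los=np.linspace(0.0, 10.0, 50), dr=2e23)
\end{verbatim}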
Free model parameters ($p$ for the {\it Gaussian} model, $\tau_\mathrm{0,cl}$ for the {\it Clumpy} and {\it Porous} models) have been chosen to visually best-fit the observed Ly$\alpha$ LFs at $z\simeq6.7$, $7.0$ and $7.3$ (see Tab. \ref{tab_Lya_line_and_ionisation_topology_models}). For simplicity and better comparison we assume in all models the gas in galaxies to have the temperature of photo-ionised gas, $T=10^4$~K. Moreover, we note that since the {\sc mhconst} scenario represents an intermediate case and provides no further insights, we limit our discussion to the {\sc mhdec} and {\sc mhinc} scenarios in the remainder of this paper. \begin{table} \centering \begin{tabular}{c|c|c|c|c} Parameter & Scenario & {\it Gaussian} & {\it Clumpy} & {\it Porous} \\ \hline $\tau_\mathrm{0,cl}$ & {\sc mhdec} & - & $8.5\times10^5$ & $1.7\times10^6$ \\ $\tau_\mathrm{0,cl}$ & {\sc mhconst} & - & $8\times10^5$ & $1.6\times10^6$ \\ $\tau_\mathrm{0,cl}$ & {\sc mhinc} & - & $5\times10^5$ & $1.5\times10^6$ \\ $p$ & {\sc mhdec} & 0.8 & - & - \\ $p$ & {\sc mhconst} & 0.9 & - & - \\ $p$ & {\sc mhinc} & 1.1 & - & - \\ $T$ & all & $10^4$~K & $10^4$~K & $10^4$~K \\ \end{tabular} \caption{Parameters for our three different Ly$\alpha$ line profile models} \label{tab_Lya_line_and_ionisation_topology_models} \end{table} \section{Numbers and properties of Ly$\alpha$ emitting galaxies} \label{sec_number_and_properties_LAEs} In this Section, we aim to identify which physical process -- the intrinsic Ly$\alpha$ production ($L_\alpha^\mathrm{intr}$), the absorption by dust within the galaxies ($f_\mathrm{esc}^\mathrm{Ly\alpha}$), or the scattering by \HI in the IGM ($T_\alpha$) -- dominates the observed Ly$\alpha$ emission. To this end, we analyse (i) how the IGM attenuation profile $T_\alpha(x)$ depends on galaxy mass and the $f_\mathrm{esc}$-sensitive ionisation topology, (ii) how the Ly$\alpha$ line profiles emerging from a galaxy depend on the density and velocity distributions of gas and dust within a galaxy and $f_\mathrm{esc}$, and how much it affects the fraction of Ly$\alpha$ radiation that is transmitted through the IGM, and (iii) to which degree the $f_\mathrm{esc}$ dependency of $L_\alpha^\mathrm{intr}$, $f_\mathrm{esc}^\mathrm{Ly\alpha}$, and $T_\alpha$ leave characteristic imprints in the Ly$\alpha$ luminosity functions and the population emitting visible Ly$\alpha$ emission. \begin{figure*} \centering \includegraphics[width=0.99\textwidth]{hist1Dprofile_Lyaline_Freq_Mvir_zall_allModels.png} \caption{Intrinsic (top) and observed (bottom) Ly$\alpha$ line profile and IGM transmission (centre) at $z=8.0$, $7.3$, $7.0$, $6.6$ for a homogeneous static gas shell (left), a clumpy outflowing gas shell (centre), and a clumpy outflowing gas shell with holes through which Ly$\alpha$ radiation escapes without scattering. Solid (dashed dotted) lines show results for the reionisation scenario where $f_\mathrm{esc}$ decreases (increases) with halo mass $M_h$.} \label{fig_profiles} \end{figure*} \subsection{The transmission through the IGM} \label{subsec_discussion_IGM_transmission} We start by discussing the frequency-dependent IGM transmission $T_{\alpha,x}$ shown in the top row of Fig. \ref{fig_profiles}. These profiles depend solely on the underlying ionisation topology and density distribution of the IGM. 
From the different panels depicting the average $T_{\alpha,x}$ in different halo mass bins of width $\Delta\log_{10}M_h=0.125$, we see that all $T_{\alpha,x}$ profiles follow a common trend: $T_{\alpha,x}$ decreases towards higher frequencies with an stronger decline around the Ly$\alpha$ resonance ($x=0$). Photons bluewards the Ly$\alpha$ resonance redshift into the Ly$\alpha$ resonance as they propagate through the IGM and have the largest likelihood to be absorbed by the \HI present. Photons redwards the Ly$\alpha$ resonance are also redshifted, but their likelihood of being absorbed by \HI decreases significantly as their energy drops. In each panel in the top row of Fig. \ref{fig_profiles} we show $T_{\alpha,x}$ for the two reionisation scenarios {\sc mhdec} (yellow/orange/brown lines) and {\sc mhinc} (blue lines) and redshifts $z=8.0$, $7.3$, $7.0$, $6.6$ (bright to dark lines as redshift decreases). In general, i.e. for both reionisation scenarios and all halo masses, $T_{\alpha,x}$ increases as the ionised regions grow around galaxies and the IGM is increasingly ionised (bright to dark lines): firstly, a larger ionised region shifts not only the point of strongest Ly$\alpha$ absorption to higher frequencies $x$ but also reduces the absorption in the damping wings of the Ly$\alpha$ absorption profile. Secondly, lower \HI fractions in ionised regions diminish the number of \HI atoms absorbing Ly$\alpha$ photons. These two mechanisms shape $T_{\alpha,x}$ redwards and bluewards the Ly$\alpha$ resonance. As Ly$\alpha$ photons travel through the IGM and redshift, photons emitted at frequencies $x\gtrsim0$ see the Gaussian core of the Ly$\alpha$ absorption profile $\phi(x)$ and are absorbed by \HI abundances as low as $\chi_\mathrm{HI}\gtrsim10^{-4}$; thus they are sensitive to the residual \HI fraction in ionised regions. Correspondingly, we see in Fig. \ref{fig_profiles} that $T_{\alpha,x}$ increases for $x\gtrsim0$ with decreasing redshift as the photoionisation rate around galaxies increases and lowers the residual \HI fraction in ionised regions. However, photons emitted at frequencies $x\lesssim 0$ are absorbed by the damping wings of the Ly$\alpha$ absorption profile $\phi(x)$. Since the Ly$\alpha$ absorption cross section is lower in the damping wings, the abundance of \HI needs to be significantly higher for Ly$\alpha$ photons to be absorbed; thus, as the sizes of ionised regions decrease, photons emitted at these frequencies are increasingly absorbed by the neutral regions located beyond the ionised regions around the emission sites. For this reason, we find $T_{\alpha,x}$ for $x\lesssim0$ to increase as the sizes of the ionised regions around galaxies rise with increasing halo mass and decreasing redshift. The rising sizes of ionised regions also become manifest in the shift of the frequency at which $T_{\alpha,x}$ has a value of $0.5$ to higher frequencies. Its dependence on the size of the ionised regions around galaxies makes $T_{\alpha,x}$ a tracer of the ionisation topology: our two extreme reionisation scenarios where $f_\mathrm{esc}$ either increases ({\sc mhinc}, blue dotted lines) or decreases ({\sc mhdec}, yellow to brown solid lines) with rising halo mass $M_h$ exhibit very different ionisation topologies (see Fig. \ref{fig_XHImaps_with_LAEs}). These differences are imprinted in $T_{\alpha,x}$ as follows. 
Firstly, since in the {\sc mhinc} scenario the higher $f_\mathrm{esc}$ values of more massive galaxies ($M_h\gtrsim10^{10}\msun$) raise the photoionisation rate within ionised regions (leading to lower $\chi_\mathrm{HI}$ values, also seen in Fig. \ref{fig_hist_ion} at $z\lesssim6$), the corresponding $T_{\alpha,x}$ values are higher bluewards the Ly$\alpha$ resonance than in the {\sc mhdec} scenario. Moreover, in the {\sc mhinc} scenario, reionisation proceeds faster, leading to the Universe being more ionised at $z<7$, and the bias of the ionising emissivity towards more massive galaxies grows with time, raising the photoionisation rate in the ionised regions. Both effects contribute to the relative increase in $T_{\alpha,x}$ from {\sc mhdec} to {\sc mhinc} to rise towards lower redshifts bluewards the Ly$\alpha$ resonance. Secondly, as the size of the ionised regions around galaxies is imprinted in $T_{\alpha,x}$ redwards the Ly$\alpha$ resonance, {\sc mhinc} shows lower (higher) $T_{\alpha,x}$ values at $z\gtrsim7$ ($z\lesssim7$) than the {\sc mhdec} scenario for galaxies with $M_h<10^{11}\msun$: At $z\gtrsim7$, ionised regions become increasingly smaller towards lower mass halos ($M_h\lesssim10^{9.5}\msun$) and higher redshifts as the corresponding $f_\mathrm{esc}$ values and global ionisation fraction decrease. However, at $z\lesssim7$, this trend reverses as the ionised regions become large enough for the red wing of the Ly$\alpha$ to be redshifted out of the absorption resonance of the Gaussian core. Towards more massive halos and higher global ionisation fractions, $T_{\alpha,x}$ becomes sensitive to the residual \HI fraction in ionised regions (c.f. $T_{\alpha,x}$ in {\sc mhinc} (light blue dotted line) exceeds $T_{\alpha,x}$ in {\sc mhdec} (yellow solid line) at $z=6.6$). It is interesting to note that the respective $T_{\alpha,x}$ values are very similar in both reionisation scenarios, despite the $f_\mathrm{esc}$ values of more massive halos ($M_h>10^{10}\msun$) differing by about one order of magnitude or more. \begin{figure*} \centering \includegraphics[width=0.99\textwidth]{Lya_LFs_allModels.png} \caption{Observed Ly$\alpha$ luminosity functions at $z=20$, $15$, $12$, $10$, $9$, $8$, $7.3$, $7$, $6.6$ for a homogeneous static gas shell (left), a clumpy outflowing gas shell (centre), and a clumpy outflowing gas shell with holes through which Ly$\alpha$ radiation escapes without scattering. Solid (dashed dotted) lines show results for the reionisation scenario where $f_\mathrm{esc}$ decreases (increases) with halo mass $M_h$.} \label{fig_LyaLF} \end{figure*} \subsection{The Ly$\alpha$ line profiles and luminosity functions} The Ly$\alpha$ line profile emerging from a galaxy represents a quantity that (i) is shaped by the density and velocity distribution of gas and dust within the galaxy and (ii) affects which fraction of the Ly$\alpha$ radiation escaping from a galaxy is transmitted through the IGM. In this Section, for our three models of the emerging Ly$\alpha$ line profiles, we discuss the following: (i) How do the assumed gas and dust distributions affect the attenuation of Ly$\alpha$ by dust in a galaxy and the emerging Ly$\alpha$ line profile? (ii) How does the Ly$\alpha$ line profile affect the Ly$\alpha$ transmission through the IGM? 
And, since the luminosity function of the intrinsic Ly$\alpha$ luminosity ($L_\alpha^\mathrm{intr}$) will be steeper for the scenario where $f_\mathrm{esc}$ increases ({\sc mhinc}) than when it decreases ({\sc mhdec}) with rising halo mass, (iii) which characteristics are required for the Ly$\alpha$ line profiles of the simulated galaxy population to reproduce the observed Ly$\alpha$ luminosity functions (Ly$\alpha$ LFs)?
\subsubsection{The Gaussian model}
The {\it Gaussian} line model centres the Ly$\alpha$ line at the Ly$\alpha$ resonance. The second row in Fig. \ref{fig_profiles} shows that its width increases as the rotational velocity of a galaxy increases with rising halo mass. Both the increase in the line width and the size of the ionised region surrounding the galaxy lead to higher IGM transmission values of Ly$\alpha$ radiation as galaxies become more massive (c.f. third row in Fig. \ref{fig_profiles}). At the same time, the fraction of Ly$\alpha$ photons that escape from the galaxies drops as the abundance of dust increases. We use the ratio between the Ly$\alpha$ and UV continuum escape fractions to adjust the Ly$\alpha$ luminosities emerging from the galaxies and fit the observed Ly$\alpha$ LFs in each of our reionisation scenarios. In the {\sc mhinc} scenario the more massive galaxies -- which dominate the observed Ly$\alpha$ LF -- have higher $f_\mathrm{esc}$ values than in the {\sc mhdec} scenario; to compensate for the correspondingly lower $L_\alpha^\mathrm{intr}$ values (and steeper slope of the intrinsic Ly$\alpha$ LF), we need a higher $f_\mathrm{esc}^\mathrm{Ly\alpha}/f_\mathrm{esc}^\mathrm{UV}$ ratio ($1.1$) than in the {\sc mhdec} scenario ($0.8$). Despite this compensation, the slopes of the observed Ly$\alpha$ LFs at $z\lesssim8$ (c.f. left panel in Fig. \ref{fig_LyaLF}) are still steeper for the {\sc mhinc} than for the {\sc mhdec} scenario.
\subsubsection{The Clumpy model}
In the {\it Clumpy} model, the clumpiness of the gas in the outflowing shell and the attenuation by dust grains in these clumps determine the shape of the Ly$\alpha$ line profile. We note that, in the following, clumpiness describes the degree to which the gas in the dusty shell is concentrated into clumps, i.e. a higher clumpiness corresponds to fewer (and hence optically thicker) clumps and thus a higher ratio between the clump ($\tau_\mathrm{0,cl}$) and total line optical depth ($\tau_0$). We find the following characteristic trends for the Ly$\alpha$ line profile: Firstly, the clumpier the gas in the shell is, the more Ly$\alpha$ radiation escapes around the Ly$\alpha$ resonance (profile showing a central peak), and the fewer Ly$\alpha$ photons escape through excursion or via the wings (double-peak profile). Secondly, when assuming the same clump size for all galaxies -- as we do in this paper -- the gas clumpiness decreases as galaxies become more massive and contain more gas. Thus, from low-mass to more massive galaxies, we find the Ly$\alpha$ line profile to shift from a central peak to a double-peak profile (see the fourth row in Fig. \ref{fig_profiles} from left to right), reflecting the transition from the porous or random walk regime to the homogeneous regime (see Section \ref{subsubsec_Lya_line_model_clumpy}). This transition also goes hand in hand with an increased transmission through the IGM, which we can see when comparing the Ly$\alpha$ profiles emerging from galaxies (fourth row) with those after having traversed the IGM (fifth row in Fig. \ref{fig_profiles}).
The Ly$\alpha$ luminosity at $x=0$ decreases by $\sim0.4-0.8$ orders of magnitude for all halo masses (from $10^{41.2}$erg~s$^{-1}$ to $10^{40.8}$erg~s$^{-1}$ for $M_h\simeq10^{11}\msun$ and from $10^{39.6}$erg~s$^{-1}$ to $10^{39.0}$erg~s$^{-1}$ for $M_h\simeq10^{9}\msun$ for e.g. {\sc mhdec} model), while the peak Ly$\alpha$ luminosity of the red wing decreases only about $\sim 0-0.3$ orders of magnitude at all halo masses. While the blue wing is similarly or more attenuated than the central peak in the IGM, the total fraction of Ly$\alpha$ radiation transmitted through the IGM for a fully-double peaked profile exceeds that of profiles with a central peak component. Furthermore, as the galaxies' gravitational potentials flatten with decreasing redshift, $\tau_0$ decreases and leads to (i) a narrower double-peak profile (following the dependence of the peak position on $\tau_0^{1/3}$) and (ii) a stronger central peak (the gas becomes clumpier as the ratio $\tau_\mathrm{0,cl}/\tau_0$ increases). A change in the clumpiness of the outflowing gas and dust shell (or clump optical depth $\tau_\mathrm{0,cl}$ and $\tau_\mathrm{d,cl}$) goes not only in hand with a change in the Ly$\alpha$ profile affecting $T_\alpha$ but also an altered attenuation of the escaping Ly$\alpha$ radiation by dust. Thus, adjusting the clump optical depth allows us to enhance and reduce the Ly$\alpha$ luminosities and reproduce the observed Ly$\alpha$ LFs: As we increase the size of the clumps, i.e. increase $\tau_\mathrm{0,cl}$, Ly$\alpha$ photons will scatter with fewer clumps, leading to (i) a higher fraction $f_\mathrm{esc}^\mathrm{Ly\alpha}$ escaping, and (ii) a higher fraction escaping at the Ly$\alpha$ resonance, which again leads to stronger attenuation by \HI in the IGM. However, effectively there exists a limit in enhancing Ly$\alpha$ emission from galaxies by decreasing $\tau_\mathrm{0,cl}$. Once the emerging Ly$\alpha$ profile is fully double-peaked, the attenuation by \HI in the IGM can not be decreased any further (by changing the injected Ly$\alpha$ line profile), and a further enhancement of observable Ly$\alpha$ emission could be solely due to the attenuation by dust. But the latter is not possible, as in the homogeneous escape regime and for sufficiently low $\tau_\mathrm{0,cl}$ values such that the escape frequency $x_\mathrm{esc}$ follows the injection frequency $x_i$ (c.f. Eqns \ref{eq_xesc} and \ref{eq_fccrit}), the attenuation by dust increases with decreasing clump size (see Section \ref{subsubsec_dust_refined_model}). With the observed Ly$\alpha$ LFs being dominated by the more massive galaxies ($M_h\gtrsim10^{10}\msun$, as we will discuss in the next Section), we find that the Ly$\alpha$ luminosities do not increase further for $\tau_\mathrm{0,cl}\lesssim10^5$. Fortunately, all our reionisation scenarios can fit the observed Ly$\alpha$ LFs for $\tau_\mathrm{0,cl}>10^5$ (see centre panel in Fig. \ref{fig_LyaLF}): as the intrinsic Ly$\alpha$ LFs is lower at the bright end in the {\sc mhinc} scenario, a lower $\tau_\mathrm{0,cl}$ value ($5\times10^5$) is required than for the {\sc mhdec} scenario ($8.5\times10^5$). Nevertheless, the slopes of the resulting observed Ly$\alpha$ LFs at $z\lesssim8$ keep the trends of the intrinsic Ly$\alpha$ LFs, with the bright ends of the Ly$\alpha$ LFs being steeper in the {\sc mhinc} than in the {\sc mhdec} scenario. \subsubsection{The Porous model} The {\it Porous} model represents a refinement of the {\it Clumpy} model. 
It adds gas-free channels through which Ly$\alpha$ and ionising photons escape freely. This explains why, to first order, we find the trends in the last two rows of Fig. \ref{fig_profiles} to be similar to those in the fourth and fifth rows. A lower clumpiness of gas and dust in the outflowing shell induces a stronger prevalence of the double-peak component in the Ly$\alpha$ line profile emerging from a galaxy, enhancing the IGM transmission $T_\alpha$ and the absorption by dust within the galaxy, and causing the corresponding Ly$\alpha$ LFs to shift to lower values. On the other hand, the {\it Porous} model differs from the {\it Clumpy} model substantially, as $f_\mathrm{esc}$ determines the minimum fraction of Ly$\alpha$ radiation that escapes at the Ly$\alpha$ resonance and contributes to the central peak in our modelling. Hence, as long as $\tau_\mathrm{0,cl}$ remains above the $\tau_\mathrm{0,cl}$ value that leads to the same fraction of Ly$\alpha$ escaping in the central peak as given by $f_\mathrm{esc}$ (referred to as $\tau_\mathrm{0,cl}^\mathrm{crit}$ in the following), the {\it Porous} model inherits the trend of the {\it Clumpy} model. As $\tau_\mathrm{0,cl}$ drops below $\tau_\mathrm{0,cl}^\mathrm{crit}$, a further decrease in $\tau_\mathrm{0,cl}$ has no effect on the Ly$\alpha$ line profile emerging from a galaxy, and the observed Ly$\alpha$ LFs remain ``fixed''. The resulting upper limit of $f_\mathrm{esc}^\mathrm{Ly\alpha}$ is essential, as together with $L_\alpha^\mathrm{intr}$ it provides an upper limit to $f_\mathrm{esc}$ values that fit the observed Ly$\alpha$ LFs. We find this upper limit to be about $f_\mathrm{esc}\sim0.5$ in our {\sc astraeus} model. Due to the opposing dependencies of $f_\mathrm{esc}$ on halo mass in the two reionisation scenarios, the Ly$\alpha$ profiles in the {\it Porous} model show the largest differences between the {\sc mhdec} and {\sc mhinc} scenarios among our three Ly$\alpha$ line profile models. While the double-peak component is more prominent in the most massive galaxies ($M_h\simeq10^{11}\msun$) in the {\sc mhdec} scenario, the central peak is stronger in the {\sc mhinc} scenario. To fit the observed Ly$\alpha$ LFs, we find that both reionisation scenarios require a clumpier gas and dust distribution than in the {\it Clumpy} model, i.e. a (higher) $\tau_\mathrm{0,cl}$ value of $1.5-1.7\times10^6$. These increased $\tau_\mathrm{0,cl}$ values enhance the corresponding $f_\mathrm{c,crit}$ values and thus the dust attenuation in the homogeneous regime that gives rise to the double-peak components, and they counteract the increased escape close to the Ly$\alpha$ resonance. This model-integrated correlation between $f_\mathrm{esc}^\mathrm{Ly\alpha}$ and $f_\mathrm{esc}$ counteracts the trend of flattening (steepening) the slope of the intrinsic Ly$\alpha$ LFs due to $f_\mathrm{esc}$ decreasing (increasing) with rising halo mass: If $f_\mathrm{esc}$ is low (high), more (less) Ly$\alpha$ radiation is subject to dust attenuation. This model feature explains why the observed Ly$\alpha$ LFs of the {\sc mhdec} ({\sc mhinc}) simulation are steeper (shallower) than in the {\it Clumpy} model. \begin{figure*} \centering \includegraphics[width=0.99\textwidth]{2Dhistogram_numDens_medianLyaProperties-Mvir_zall.png} \caption{Median values of the indicated galactic properties (lines) and their $\sim1.3\sigma$ uncertainties (shaded regions) as a function of halo mass $M_h$ at $z=8.0$, $7.3$, $7.0$, $6.6$ for a homogeneous static gas shell.
Solid (dashed dotted) lines show results for the reionisation scenario where $f_\mathrm{esc}$ decreases (increases) with halo mass $M_h$.} \label{fig_histograms} \end{figure*} As the dust composition and absorption cross sections remain uncertain during the EoR, we note that a lower (higher) dust absorption cross section $\kappa_\mathrm{abs}$ could still reproduce the observed Ly$\alpha$ LFs in our {\it Clumpy} and {\it Porous} models by raising (decreasing) the clump optical depth $\tau_\mathrm{0,cl}$. However, this would go along with an enhanced (reduced) double-peak and reduced (enhanced) central-peak component in the average Ly$\alpha$ line profile emerging from galaxies. Finally, we briefly comment on how our emerging and IGM-attenuated Ly$\alpha$ profiles compare to those obtained from radiative hydrodynamical simulations of clouds and small cosmological volumes ($\sim10^3$cMpc$^3$). While the {\it Clumpy} and {\it Porous} models reproduce the double- and triple-peak profiles and their dependence on $N_\mathrm{HI}$ and $f_\mathrm{esc}$ found in cloud simulations \citep{Kakiichi2021, Kimm2019, Kimm2021} by construction, their Ly$\alpha$ line profiles differ from those obtained from the {\sc sphinx} simulation \citep{Garel2021}. In {\sc sphinx} the median angle-averaged Ly$\alpha$ line profile has been found to be less double-peaked towards brighter galaxies, with the blue peak being seemingly increasingly suppressed. This is the opposite of the trend we find. The discrepancy arises from the different assumed or simulated ISM structures: While our LAE models assume an idealised scenario of same-sized outflowing dusty gas clumps, the {\sc sphinx} simulation follows the formation of star-forming clouds within galaxies. With rising galaxy mass, we expect the simulated {\sc sphinx} galaxies to contain a higher number of star-forming clouds with various velocity and size distributions. A single or very few star-forming clouds -- as found in low-mass galaxies -- will give rise to a double-peaked Ly$\alpha$ line profile. Adding the profiles of multiple/many star-forming clouds at different velocities will give rise to increasingly complex Ly$\alpha$ line profiles as galaxies become more massive. Adjusting our current Ly$\alpha$ line profile models to the complex structure of the ISM will be the subject of future work. \subsection{The dependence of Ly$\alpha$ properties on halo mass} In this Section, we provide a more detailed discussion of how the intrinsic Ly$\alpha$ luminosity ($L_\alpha^\mathrm{intr}$), the Ly$\alpha$ escape fraction, the Ly$\alpha$ transmission through the IGM, the observed Ly$\alpha$ luminosity, and the Ly$\alpha$ equivalent width depend on halo mass and evolve with redshift for the different reionisation scenarios. To this end, we show these quantities as a function of halo mass for both reionisation scenarios ({\sc mhdec}: yellow/orange/brown lines; {\sc mhinc}: blue lines) and redshifts $z\simeq8$, $7.3$, $7$ and $6.6$ in Fig. \ref{fig_histograms} and list the corresponding average \HI fractions in Table \ref{table_XHI_reionisation_scenarios}. Solid and dot-dashed lines in Fig. \ref{fig_histograms} depict the median value for galaxies in the given halo mass bin, and shaded regions indicate the range spanned by $68\%$ of the values. For line-of-sight-dependent Ly$\alpha$ properties ($T_\alpha$, $L_\alpha$, EW$_\alpha$), we include all $6$ lines of sight.
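As an illustration of how the binned statistics shown in Fig. \ref{fig_histograms} can be obtained, the following minimal Python sketch (using \texttt{numpy} and purely hypothetical mock arrays rather than the actual {\sc astraeus} output) computes the median and the 16th--84th percentile range, i.e. the central $68\%$ of the values, per halo-mass bin.

\begin{verbatim}
import numpy as np

def binned_median_and_68(log_mh, values, bin_edges):
    """Median and central 68% range of a galaxy property per halo-mass bin."""
    med, p16, p84 = [], [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        sel = (log_mh >= lo) & (log_mh < hi)
        if sel.any():
            q16, q50, q84 = np.percentile(values[sel], [16, 50, 84])
        else:
            q16 = q50 = q84 = np.nan   # empty bin
        med.append(q50); p16.append(q16); p84.append(q84)
    return np.array(med), np.array(p16), np.array(p84)

# Hypothetical usage with mock data (one entry per galaxy and line of sight):
rng = np.random.default_rng(1)
log_mh = rng.uniform(8.5, 11.5, size=60000)               # log10(M_h / Msun)
t_alpha = np.clip(rng.normal(0.1 * (log_mh - 8.5), 0.1), 0.0, 1.0)
med, lo68, hi68 = binned_median_and_68(log_mh, t_alpha,
                                       np.arange(8.5, 11.6, 0.25))
\end{verbatim}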
\begin{table} \centering \begin{tabular}{|c|c|c|c|} $z$ & $\langle\chi_\mathrm{HI}\rangle^\mathrm{MHINC}$ & $\langle\chi_\mathrm{HI}\rangle^\mathrm{CONST}$ & $\langle\chi_\mathrm{HI}\rangle^\mathrm{MHDEC}$\\ \hline 8.0 & 0.84 & 0.80 & 0.71 \\ 7.3 & 0.69 & 0.66 & 0.59 \\ 7.0 & 0.52 & 0.52 & 0.49 \\ 6.6 & 0.23 & 0.29 & 0.34 \\ \end{tabular} \caption{The evolution of the global \HI fractions of the IGM for our reionisation scenarios.} \label{table_XHI_reionisation_scenarios} \end{table} \paragraph*{Intrinsic Ly$\alpha$ luminosity $L_\alpha^\mathrm{intr}$:} As the most recent star formation dominates the production of ionising photons within galaxies, we find $L_\alpha^\mathrm{intr}$ to follow the SFR-$M_h$ relation \citep[for a detailed discussion, see][]{Hutter2021a}. While the range of SFR values is broad for low-mass halos ($M_h\lesssim10^{9.5}\msun$) where SN feedback drives stochastic star formation, the SFR-$M_h$ relation becomes tighter towards more massive galaxies as SN feedback ejects an increasingly smaller fraction of gas from the galaxy. Being mainly produced by recombining hydrogen atoms within a galaxy, the intrinsic Ly$\alpha$ radiation correlates with the escape fraction of ionising photons as $1-f_\mathrm{esc}$. As we can see from the first row in Fig. \ref{fig_histograms}, this dependency on $f_\mathrm{esc}$ leads to higher (lower) Ly$\alpha$ luminosities for more massive galaxies, lower (higher) Ly$\alpha$ luminosities for low-mass galaxies, and thus a shallower (steeper) intrinsic Ly$\alpha$ LF in the {\sc mhdec} ({\sc mhinc}) scenario. \paragraph*{Ly$\alpha$ escape fraction $f_\mathrm{esc}^\mathrm{Ly\alpha}$:} As the dust content in galaxies increases with their mass, we find $f_\mathrm{esc}^\mathrm{Ly\alpha}$ to decrease with rising halo mass at all redshifts and for all Ly$\alpha$ line models. However, the different assumed distributions of dust and their resulting attenuation of Ly$\alpha$ radiation lead to differences in the details of this global trend: Firstly, the {\it Gaussian} model shows a steeper decline in $f_\mathrm{esc}^\mathrm{Ly\alpha}$ for galaxies with $M_h\gtrsim10^{10.5}\msun$ than the {\it Clumpy} and {\it Porous} models. Secondly, $f_\mathrm{esc}^\mathrm{Ly\alpha}$ is always higher in the {\sc mhinc} than in the {\sc mhdec} scenario. This is necessary to reproduce the observed Ly$\alpha$ LFs by compensating for the lower intrinsic Ly$\alpha$ luminosities with a clumpier gas-dust distribution in the {\sc mhinc} scenario. In the case of the {\it Clumpy} model, it also highlights how a decrease in the clump optical depth by a factor $\sim 2$ can increase $f_\mathrm{esc}^\mathrm{Ly\alpha}$ by reducing the fraction of Ly$\alpha$ photons escaping in the homogeneous regime (i.e. a decrease in $f_\mathrm{c,crit}$ and $\tau_\mathrm{0,cl}$ leads to a reduced number of clump encounters $N_\mathrm{cl}$ and a reduced clump albedo $\epsilon$). Thirdly, for the {\sc mhdec} ({\sc mhinc}) scenario, the $f_\mathrm{esc}^\mathrm{Ly\alpha}$ values are higher (lower) in the {\it Porous} model than in the {\it Clumpy} model for $M_h\lesssim10^{10}\msun$. The reason for this difference is as follows. In both scenarios the higher $\tau_\mathrm{0,cl}$ values in the {\it Porous} model increase the dust attenuation of Ly$\alpha$ escaping in the homogeneous regime. But only a fraction $1-f_\mathrm{esc}$ of the Ly$\alpha$ photons is subject to dust attenuation.
This unattenuated escape of Ly$\alpha$ radiation imprints the mass-dependency of $f_\mathrm{esc}$ in the $f_\mathrm{esc}^\mathrm{Ly\alpha}$ values. However, for galaxies with $M_h\gtrsim10^{10}\msun$, this imprint (the $f_\mathrm{esc}^\mathrm{Ly\alpha}$ enhancement in the {\it Porous} model) is only visible in the {\sc mhinc} scenario where the $f_\mathrm{esc}$ values are sufficiently large ($>0.1$); in the {\sc mhdec} scenario the $f_\mathrm{esc}$ values are too small. \paragraph*{Ly$\alpha$ IGM transmission $T_\alpha$:} As outlined in Section \ref{subsec_discussion_IGM_transmission}, the surrounding ionised region (in particular its size and residual \HI fraction) and the Ly$\alpha$ line profile emerging from a galaxy determine how much of a galaxy's escaping Ly$\alpha$ radiation is transmitted through the IGM. For more massive galaxies with $M_h\gtrsim10^{10}\msun$, $T_\alpha$ is mainly shaped by the Ly$\alpha$ profile. This is because the ionised regions surrounding them are sufficiently large -- due to their enhanced ionising emissivity and their clustered neighbourhood -- for the Ly$\alpha$ radiation to redshift out of absorption. Hence, at these high halo masses, any trends in $T_\alpha$ reflect the ratio between the Ly$\alpha$ radiation escaping around the Ly$\alpha$ resonance and escaping through the wings: the more Ly$\alpha$ escapes in the central peak, the lower the $T_\alpha$ value. Indeed, as can be seen in Fig. \ref{fig_histograms}, the {\it Gaussian} model, which concentrates the emerging Ly$\alpha$ radiation around the Ly$\alpha$ resonance, shows the lowest median $T_\alpha$ values at $M_h\gtrsim10^{10}\msun$ among all Ly$\alpha$ line profile models. In the {\it Clumpy} model, where the fraction of Ly$\alpha$ escaping through the wings increases with rising halo mass, we find the median $T_\alpha$ value to increase accordingly. This effect is more evident for the {\sc mhinc} scenario as it transitions from a Ly$\alpha$ line profile with a dominating central peak at $M_h\simeq10^{11}\msun$ to one with a prevailing double peak component. The {\it Porous} model also confirms that $T_\alpha$ is highly sensitive to the Ly$\alpha$ line profile. In the {\sc mhinc} scenario, the double peak component is weaker and increases less with halo mass, leading to slightly lower $T_\alpha$ values than in the {\it Clumpy} model for $M_h\simeq10^{10-11}\msun$ and $T_\alpha$ hardly changing with halo mass. In the {\sc mhdec} scenario, we see the same effect but to a lower degree. However, for less massive galaxies ($M_h\lesssim10^{10}\msun$), $T_\alpha$ is more sensitive to the properties of their surrounding ionised regions. Since the ionised regions around less massive galaxies can differ significantly depending on their environment and phase in their stochastic star formation cycle (see \citet{Hutter2021b} and \citet{Legrand2022b} for environment dependence), their $T_\alpha$ values span an extensive range from as low as effectively zero to as high as $\simeq70\%$. Nevertheless, the median $T_\alpha$ value shows a definite trend. It increases with rising halo mass for all models and at all stages of reionisation. With increasing halo mass, galaxies are surrounded by larger ionised regions as they form more stars emitting ionising photons and are more likely to be located in clustered regions that are reionised earlier. The larger the surrounding ionised regions are, the higher the transmission of Ly$\alpha$ radiation through the IGM.
We can see this relationship when comparing the median $T_\alpha$ values of the {\sc mhdec} and {\sc mhinc} simulations. In the {\sc mhdec} scenario low-mass galaxies are surrounded by larger ionised regions at $z\gtrsim7$ than in the {\sc mhinc} scenario, causing their corresponding $T_\alpha$ values to be raised (c.f. orange/brown solid lines vs dark blue/blue lines in the third row of Fig. \ref{fig_histograms}). At $z\lesssim7$, however, reionisation progresses faster and the photoionisation rate in clustered ionised regions yields a lower residual \HI fraction in the {\sc mhinc} simulation, both leading to a higher median $T_\alpha$ value for the {\sc mhinc} than {\sc mhdec} scenario at $z\simeq6.6$. Finally, we briefly discuss how the Ly$\alpha$ line profile emerging from a galaxy affects $T_\alpha$ for less massive galaxies. From Fig. \ref{fig_histograms} we see that the $T_\alpha$ values differ between our three different Ly$\alpha$ line profile models: While at all stages of reionisation the $T_\alpha$ values for $M_h\lesssim10^{10}\msun$ are very similar in the {\it Porous} and {\it Clumpy} model, the {\it Porous} model shows lower $T_\alpha$ values for $M_h\gtrsim10^{10}\msun$ at $z\lesssim7$ than the {\it Clumpy} model in the {\sc mhinc} scenario. This drop goes hand in hand with the increased central peak component in these more massive galaxies (c.f. Fig. \ref{fig_profiles} and the previous Section). The median $T_\alpha$ values of the {\it Gaussian} model always lie below those of the {\it Clumpy} and {\it Porous} models; a larger fraction of Ly$\alpha$ radiation escapes closer to the Ly$\alpha$ resonance and is thus subject to stronger attenuation by the IGM. \paragraph*{Variance of the IGM transmission along different lines of sight:} To investigate how strongly the transmission of Ly$\alpha$ radiation through the IGM depends on the direction, we show the standard deviation of the $T_\alpha$ values over the $6$ lines of sight aligned with the major axes in relation to the corresponding mean value, $\sigma_{T_\alpha} / \langle T_\alpha \rangle = \sqrt{\langle T_\alpha^2\rangle - \langle T_\alpha\rangle^2}/ \langle T_\alpha \rangle$, in the fourth row of Fig. \ref{fig_histograms} (a minimal numerical sketch of this statistic is given below). At all redshifts and for all models, $\sigma_{T_\alpha} / \langle T_\alpha \rangle$ decreases with rising halo mass and decreasing redshift for the following reason. As galaxies grow in mass, they produce more ionising photons that can ionise larger regions around them and are also more likely to be located in more strongly clustered ionised regions, both enhancing and homogenising the Ly$\alpha$ transmission through the IGM along different lines of sight. However, we note that part of the decrease of $\sigma_{T_\alpha} / \langle T_\alpha \rangle$ with decreasing redshift is also due to $\langle T_\alpha \rangle$ rising. Since it is hard to disentangle these two effects, we will focus on relative differences between the different reionisation scenarios and Ly$\alpha$ line profile models. Firstly, the more the emerging Ly$\alpha$ line profile is concentrated around the Ly$\alpha$ resonance, the more sensitive is $T_\alpha$ to the varying \HI abundance around a galaxy, and the larger is the variance across different lines of sight (c.f. the higher $\sigma_{T_\alpha} / \langle T_\alpha \rangle$ values in the {\it Gaussian} compared to the other two models, and in the {\it Porous} compared to the {\it Clumpy} model for $M_h\gtrsim10^{10.5}\msun$ when the central peak component dominates).
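As a brief aside, the line-of-sight statistic defined above is straightforward to evaluate numerically. The following minimal Python sketch (operating on a hypothetical mock transmission array rather than the actual simulation output) computes $\sigma_{T_\alpha}/\langle T_\alpha\rangle$ per galaxy from its six sightline transmissions.

\begin{verbatim}
import numpy as np

def los_transmission_scatter(T_alpha_los):
    """Relative line-of-sight scatter of the IGM transmission.

    T_alpha_los: array of shape (N_gal, 6) holding the Ly-alpha IGM
    transmission of each galaxy along the six lines of sight.
    Returns sigma(T_alpha)/<T_alpha> per galaxy.
    """
    mean = T_alpha_los.mean(axis=1)
    # population standard deviation: sqrt(<T^2> - <T>^2)
    sigma = np.sqrt((T_alpha_los**2).mean(axis=1) - mean**2)
    return sigma / mean

# Hypothetical usage with mock transmissions for five galaxies:
rng = np.random.default_rng(0)
T_mock = np.clip(rng.normal(0.4, 0.1, size=(5, 6)), 0.01, 1.0)
print(los_transmission_scatter(T_mock))
\end{verbatim}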
Secondly, we focus on Ly$\alpha$ line profiles more sensitive to the environmental \HI abundance of a galaxy ({\it Gaussian} model). When accounting for the fact that the $\langle T_\alpha \rangle$ values are lower in the {\sc mhinc} than in the {\sc mhdec} scenario at $z\simeq7$ (see median $T_\alpha$ values in the third row of Fig. \ref{fig_histograms}), we can deduce that the variance of $T_\alpha$ across different lines of sight is higher in the {\sc mhdec} than in the {\sc mhinc} scenario. Indeed, in the {\sc mhinc} scenario, the ionised regions are more spherical and less filamentary, which results in more ``homogeneous'' $T_\alpha$ values. \paragraph*{Observed Ly$\alpha$ luminosity $L_\alpha$:} For any model and reionisation scenario, the trend of $L_\alpha$ with rising halo mass depends on the respective trends of $L_\alpha^\mathrm{intr}$, $f_\mathrm{esc}^\mathrm{Ly\alpha}$, and $T_\alpha$. Since less massive galaxies ($M_h\lesssim10^{10}\msun$) are surrounded by smaller ionised regions, their low $T_\alpha$ values strongly suppress and shape their emerging Ly$\alpha$ radiation. In contrast, the $T_\alpha$ values of more massive galaxies ($M_h\gtrsim10^{10}\msun$) show only weak trends with halo mass and similar values throughout reionisation. For this reason, the trends of their $L_\alpha$ values with halo mass are predominantly shaped by the corresponding trends of $L_\alpha^\mathrm{intr}$ and $f_\mathrm{esc}^\mathrm{Ly\alpha}$. However, for model parameters that reproduce the observed Ly$\alpha$ LFs, a relative increase (decrease) of $L_\alpha^\mathrm{intr}$ towards higher halo masses, such as in the {\sc mhinc} ({\sc mhdec}) scenario, is compensated for by an $f_\mathrm{esc}^\mathrm{Ly\alpha}$ that decreases more (less) strongly with halo mass. Nevertheless, the resulting relation between $L_\alpha$ and halo mass does not significantly change. It shows that only more massive galaxies, where SN and radiative feedback do not considerably suppress star formation, exhibit observable Ly$\alpha$ emission of $L_\alpha\gtrsim10^{41}$erg~s$^{-1}$. \paragraph*{Observed Ly$\alpha$ equivalent width EW$_\alpha$:} We compute the Ly$\alpha$ equivalent width EW$_\alpha$ from $L_\alpha$ and the observed UV continuum luminosity at $1500$\AA~ ($L_c$). The trend of the median EW$_\alpha$ with halo mass follows that of $L_\alpha$, with median EW$_\alpha$ values ranging from $\sim20$\AA~ for galaxies in $M_h\simeq10^{10}\msun$ halos to $\sim50$\AA~ for galaxies in $M_h\simeq10^{11.3}\msun$ halos. More massive galaxies with a strongly attenuated UV continuum -- the fraction of these galaxies increases towards higher halo masses due to the higher abundance of dust -- and high $L_\alpha$ values show EW$_\alpha$ values up to $\sim300$\AA~ (and very few up to $\sim1000$\AA) in the {\it Clumpy} and {\it Porous} models. However, these high EW$_\alpha$ values are not present in the {\it Gaussian} model for the following reason: in this model, the escape of Ly$\alpha$ and UV continuum radiation differs just by a constant factor, whereas in the {\it Clumpy} and {\it Porous} models the dust attenuation of Ly$\alpha$ and UV continuum photons within a galaxy is not linked solely via the dust mass. \paragraph*{} In summary, we find that only more massive galaxies ($M_h\gtrsim10^{10}\msun$), where star formation is not substantially suppressed by SN and radiative feedback from reionisation, show significant Ly$\alpha$ emission of $L_\alpha\gtrsim10^{41}$erg~s$^{-1}$.
This limitation of observable Ly$\alpha$ emission to more massive galaxies allows the $f_\mathrm{esc}$-dependency of the intrinsic Ly$\alpha$ luminosity to be compensated by a weaker or stronger attenuation of Ly$\alpha$ by dust within a galaxy. If less massive galaxies were visible in Ly$\alpha$, they would break this degeneracy as they would not contain enough dust to sufficiently attenuate the Ly$\alpha$ radiation in all scenarios. \section{The spatial distribution of Ly$\alpha$ emitting galaxies} \label{sec_spatial_distribution_LAEs} \begin{figure*} \centering \includegraphics[width=0.99\textwidth]{fescMHDECfescMHINC_LyaProfileFESC_tauClump1.7e6_LyaLum1.e42_dustGRONKE_clumpCrossSec2.e5_clumpAlbedo-1._XHI_z256.png} \caption{Neutral hydrogen fraction fields at $z=8.0$ (left), $z=7.0$ (centre), and $z=6.6$ (right) for the {\sc mhdec} (top) and {\sc mhinc} models (bottom). We show a $1.6h^{-1}$cMpc-thick (5 cells) slice through the centre of the simulation box. The blue color scale depicts the volume-averaged value of the neutral fraction in each cell. Red stars show the location of LAEs, with their sizes and colour scale encoding the observed Ly$\alpha$ luminosity along the $z$-direction.} \label{fig_XHImaps_with_LAEs} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.99\textwidth]{FESC_2Dhistogram_numDens_Lya_obs_XHI-DENS.png} \caption{2D probability distribution in $\chi_\mathrm{HI}$ and overdensity for all simulation cells (grey) and galaxies with $L_\alpha\geq10^{42}$erg~s$^{-1}$ (green), $L_\alpha\geq10^{42.5}$erg~s$^{-1}$ (blue), and $L_\alpha\geq10^{43}$erg~s$^{-1}$ (red) in the {\it Porous} model. The top (bottom) row shows results for the reionisation scenario where $f_\mathrm{esc}$ decreases (increases) with halo mass $M_h$.} \label{fig_Lya_obs_XHI-DENS} \end{figure*} In this Section, we analyse where galaxies with observable Ly$\alpha$ emission are located in the large-scale structure and how their environment and Ly$\alpha$ luminosity distributions differ in our reionisation scenarios ({\sc mhdec} and {\sc mhinc}). For this purpose, we discuss the environment of Ly$\alpha$ emitting galaxies in terms of their large-scale spatial distribution (Fig. \ref{fig_XHImaps_with_LAEs}), their surrounding over-density ($1+\delta$) and \HI fraction ($\chi_\mathrm{HI}$) (Fig. \ref{fig_Lya_obs_XHI-DENS}), and their 3D autocorrelation functions (Fig. \ref{fig_3Dcorrfuncs_FESC}). As we obtain very similar results for our three Ly$\alpha$ line profile models, we use the {\it Porous} model as a representative case. \subsection{The environment} Before detailing the location of Ly$\alpha$ emitting galaxies in the large-scale matter distribution, we briefly discuss the ionisation structure of the IGM using Fig. \ref{fig_XHImaps_with_LAEs} and \ref{fig_Lya_obs_XHI-DENS}. Fig. \ref{fig_XHImaps_with_LAEs} shows the ionisation fields at $z=8$, $7$ and $6.6$ for the {\sc mhdec} (top) and {\sc mhinc} scenarios (bottom). As can be seen in this Figure, if $f_\mathrm{esc}$ decreases with halo mass ({\sc mhdec} scenario), reionisation is not only more extended but also ionised regions are on average smaller, more closely follow the large-scale density distribution and thus have less bubble-like shapes than if $f_\mathrm{esc}$ increases with halo mass ({\sc mhinc} scenario). The grey contours in Fig.
\ref{fig_Lya_obs_XHI-DENS}, showing the two-dimensional probability density distribution of the \HI fraction ($\chi_\mathrm{HI}$) and over-density of the IGM at $z=8$, $7.3$, $7$ and $6.7$ (derived from all cells of the $512^3$ ionisation and density grids output by {\sc astraeus}), complement the picture. These contours indicate that, as reionisation progresses (from right to left), not only does an increasing fraction of the volume become ionised but also the $\chi_\mathrm{HI}$ values in ionised regions decrease (e.g. from $\chi_\mathrm{HI}\simeq10^{-4}$ ($10^{-4.3}$) in moderately over-dense regions with $\log_{10}(1+\delta)\simeq1$ at $z=8$ to $\chi_\mathrm{HI}\simeq10^{-4.7}$ ($10^{-5.2}$) at $z=6.7$ for the {\sc mhdec} ({\sc mhinc}) scenario). The latter is because, as galaxies grow in mass with decreasing redshift, their emission of ionising photons increases, leading to a rise of the photoionisation rates within ionised regions and thus lower $\chi_\mathrm{HI}$ values. Moreover, at the same time, as the photoionisation rate within ionised regions becomes increasingly homogeneous, the enhanced number of recombinations in denser regions \citep[for the detailed modelling description see][]{Hutter2018a} leads to a positive correlation between the \HI fraction and density in ionised regions. However, the exact value of the photoionisation rate within ionised regions and its spatial distribution depend strongly on the ionising emissivities escaping from the galaxies into the IGM. If less clustered low-mass galaxies drive reionisation -- as in the {\sc mhdec} scenario (top row in Fig. \ref{fig_Lya_obs_XHI-DENS}) -- the resulting photoionisation rate is more homogeneous and lower than if the more strongly clustered massive galaxies are the main drivers of reionisation (c.f. {\sc mhinc} scenario in the bottom row of Fig. \ref{fig_Lya_obs_XHI-DENS}). The difference in the photoionisation rate's magnitude explains the shift of the $\chi_\mathrm{HI}$ values by an order of magnitude to lower values in under-dense to moderately over-dense regions ($\log_{10}(1+\delta)\lesssim1.2$) when going from the {\sc mhdec} to the {\sc mhinc} scenario. In contrast, the more inhomogeneous distribution of the photoionisation rate's values enhances this drop in $\chi_\mathrm{HI}$ in over-dense regions where the most massive galaxies are located. As we can see from the red stars in Fig. \ref{fig_XHImaps_with_LAEs} and the coloured contours in Fig. \ref{fig_Lya_obs_XHI-DENS}, Ly$\alpha$ emitting galaxies always lie in ionised regions. Although these galaxies trace the ionisation topology, their populations hardly differ for our two opposing reionisation scenarios. This absence of a significant difference is due to their massive nature \citep[see also e.g.][]{Kusakabe2018}: hence, all Ly$\alpha$ emitting galaxies lie in over-dense regions, with the ones brighter in Ly$\alpha$ located in denser regions (c.f. green to blue to red contours). The latter trend is mainly because more massive galaxies, which exhibit higher star formation rates and produce more ionising and Ly$\alpha$ radiation, are located in denser regions. \begin{figure*} \centering \includegraphics[width=0.99\textwidth]{LyaProfileFESC_3Dcorrfuncs.png} \caption{Top panels: 3D correlation function of galaxies that exceed an observed Ly$\alpha$ luminosity of $L_\alpha>10^{42}$erg~s$^{-1}$ (left), $L_\alpha>10^{42.5}$erg~s$^{-1}$ (centre) and $L_\alpha>10^{43}$erg~s$^{-1}$ (right) at $z=10$, $9$, $8$, $7.3$, $7$, $6.6$ for the {\it Porous} model.
Solid (dashed dotted) lines show results for the reionisation scenario where $f_\mathrm{esc}$ decreases (increases) with halo mass $M_h$ and assumes $\tau_\mathrm{0,cl}=4\times10^5$ ($2\times10^5$). The grey to black lines indicate the corresponding LBG ($M_\mathrm{UV}<-17$) 3D correlation functions from $z=10$ to $6.6$. Bottom panels: Ratio between the 3D LAE correlation functions of the {\sc mhinc} and the {\sc mhdec} scenario at fixed redshifts.} \label{fig_3Dcorrfuncs_FESC} \end{figure*} \subsection{The clustering} In this Section, we address the question of whether the Ly$\alpha$ luminosity-dependent distribution of LAEs could differ for reionisation scenarios with opposing trends of $f_\mathrm{esc}$ with halo mass. For this purpose, we analyse the 3D autocorrelation function for LAE samples with different minimum Ly$\alpha$ luminosities (Fig. \ref{fig_3Dcorrfuncs_FESC}). We define a galaxy to be an LAE if it has an observed Ly$\alpha$ luminosity of $L_\alpha\geq10^{42}$erg~s$^{-1}$. Before we discuss the differences between our opposing $f_\mathrm{esc}$ descriptions, we give a brief overview of the global trends and their origins. Firstly, as predicted by hierarchical structure formation, all autocorrelation functions in Fig. \ref{fig_3Dcorrfuncs_FESC} decrease from small to large scales, implying stronger clustering of galaxies on small scales than on large scales. Secondly, the dropping amplitude of the LAE autocorrelation functions with decreasing redshift (from ochre to blue lines) reflects the growth and increasing ionisation of ionised regions. Thirdly, since the $L_\alpha$ value of a galaxy is strongly correlated with its halo mass in our galaxy evolution model, selecting galaxies with increasingly brighter Ly$\alpha$ luminosities (left to right in Fig. \ref{fig_3Dcorrfuncs_FESC}) corresponds to selecting more massive galaxies. The latter explains the increasing amplitude and stronger clustering. Comparing the correlation functions of the $L_\alpha$-selected galaxies with those of LBGs (galaxies with $M_\mathrm{UV}<-17$) shows that the Ly$\alpha$-selected galaxies are more massive than our LBGs (solid grey lines). It also shows that the decrease in the clustering of LAEs is partially due to galaxies of a given mass becoming a less biased tracer of the underlying density field as the density of the Universe drops with decreasing redshift. Comparing the autocorrelation functions of our two opposing $f_\mathrm{esc}$ descriptions, we find that the {\sc mhinc} scenario (dash-dotted lines) has higher autocorrelation amplitudes than the {\sc mhdec} scenario (solid lines) throughout reionisation and for all minimum Ly$\alpha$ luminosities studied. This difference decreases towards larger scales. The reason for these higher amplitudes is twofold: On the one hand, the {\sc mhinc} scenario has a lower global average ionisation fraction at $z\gtrsim7$ than the {\sc mhdec} scenario (see Fig. \ref{fig_hist_ion}). Its ionised regions are located around more biased tracers of matter, i.e. more massive galaxies, leading to a stronger clustering. While the scenarios' difference in $\langle\chi_\mathrm{HI}\rangle$ reaches its maximum of $\sim0.13$ around $z\simeq8$, the difference in the autocorrelation amplitudes rises further towards even higher redshifts. This is because, with increasing redshift, galaxies of the same mass become more biased tracers of the underlying matter distribution.
Thus, the same difference in $\langle \chi_\mathrm{HI}\rangle$ at higher $\langle \chi_\mathrm{HI}\rangle$ values leads to a larger difference in the clustering of LAEs, since the Ly$\alpha$ luminosity of a galaxy correlates strongly with its halo mass. We note that selecting LAEs with a higher minimum Ly$\alpha$ luminosity also corresponds to selecting more biased tracers and yields higher correlation amplitudes (c.f. different panels in Fig. \ref{fig_3Dcorrfuncs_FESC}). On the other hand, during the early stages of reionisation, ionised regions grow preferentially around the most biased tracers of the underlying matter field (most massive galaxies) in the {\sc mhinc} scenario. Thus, we would expect that, at the same $\langle\chi_\mathrm{HI}\rangle$ value, LAEs in this scenario are more clustered than LAEs in the {\sc mhdec} scenario where $f_\mathrm{esc}$ decreasing with rising halo mass counteracts the biased growth of ionised regions. Indeed, at $z\lesssim7$, the correlation amplitude in the {\sc mhinc} scenario is higher than or similar to that in the {\sc mhdec} scenario, although the Universe is similarly or more ionised in the former, respectively. This difference becomes more apparent as we consider higher minimum Ly$\alpha$ luminosities of $L_\alpha>10^{42.5}$erg~s$^{-1}$. It is driven by the higher photoionisation rates in the ionised regions around massive galaxies. We conclude that, since LAEs coincide with the most massive galaxies located in dense and ionised regions, their clustering is primarily a tracer of the global ionisation state of the IGM. While the exact ionisation topology at fixed $\langle\chi_\mathrm{HI}\rangle$ values has only a secondary effect on the clustering of LAEs during the second half of reionisation, the spatial distribution of LAEs provides a relatively robust tool to map the detailed ionisation history at early times. \section{The relation of LAEs to LBGs} \label{sec_LAE_LBG_relation} \begin{figure*} \centering \includegraphics[width=0.99\textwidth]{2Dhistogram_numDens_median_LAE-LBG-relation-Lc_obs_zall.png} \caption{The two top rows depict the fraction of LBGs showing Ly$\alpha$ emission with $L_\alpha\geq10^{42}$erg~s$^{-1}$ and $\mathrm{EW_\alpha}$ exceeding the value marked in the panels for the different Ly$\alpha$ line profile models, as labelled. The bottom row shows the corresponding medians of the $\mathrm{EW_\alpha}$ values (lines) and their $\sim1.3\sigma$ uncertainties (shaded regions). Solid (dashed dotted) lines show results for the reionisation scenario where $f_\mathrm{esc}$ decreases (increases) with halo mass $M_h$.} \label{fig_LAE_LBG} \end{figure*} In this Section, we address the question of what defines whether an LBG shows Ly$\alpha$ emission and why the fraction of LBGs with observable Ly$\alpha$ emission changes as the observed UV continuum luminosity (at $1500$\AA) or the minimum Ly$\alpha$ equivalent width rises. For this purpose, we show both the fraction of LBGs with a Ly$\alpha$ equivalent width of at least EW$_\alpha\geq25$\AA~ (top row) and EW$_\alpha\geq50$\AA~ (central row) and the median EW$_\alpha$ value (bottom row) as a function of the UV continuum luminosity in Fig. \ref{fig_LAE_LBG}. For our three different Ly$\alpha$ line profile models, we find the median EW$_\alpha$ to exhibit similar values of $\sim5-30$\AA~ at all redshifts shown. Furthermore, the EW$_\alpha$ values extend to lower values as galaxies become UV fainter.
As galaxies become less massive, this spread in EW$_\alpha$ values reflects the increasingly broad range of star formation rates extending to lower values, which traces back to the larger variety of mass assembly histories that increasingly include progenitors whose star formation was strongly suppressed by SN feedback. The $M_\mathrm{UV}$-dependency of the fraction of LBGs with Ly$\alpha$ emission ($f_\mathrm{LAE}$) also reflects this shift towards lower EW$_\alpha$ values (c.f. top and central row of Fig. \ref{fig_LAE_LBG}): firstly, $f_\mathrm{LAE}$ decreases towards lower UV luminosities, and secondly, this decrease is stronger for lower than for higher EW$_\alpha$ cuts. These trends imply that UV bright galaxies are more likely to show higher EW$_\alpha$ values for all our Ly$\alpha$ line profile models and reionisation scenarios. For example, while only $<10\%$ of galaxies with $M_\mathrm{UV}\simeq-18$ exceed EW$_\alpha>25$\AA, $>40\%$ of galaxies with $M_\mathrm{UV}\lesssim-20$ exceed EW$_\alpha>25$\AA~ and $>5\%$ even EW$_\alpha>50$\AA. Moreover, both the slight rise of the EW$_\alpha$ and $f_\mathrm{LAE}$ values with decreasing redshift and their variation among our different reionisation scenarios can be attributed to the increasing fraction of Ly$\alpha$ radiation that is transmitted through the IGM as the Universe becomes more ionised (see $T_\alpha$ in Fig. \ref{fig_histograms}). For example, at a given redshift $z>7$ ($z<7$), the EW$_\alpha$ and $f_\mathrm{LAE}$ values are higher (lower) in the {\sc mhdec} than in the {\sc mhinc} scenario, which is due to a more (less) ionised IGM. Similarly, the lower EW$_\alpha$ values reached in the {\it Gaussian} model for UV fainter galaxies are due to the stronger absorption of Ly$\alpha$ radiation by \HI in the IGM. Finally, we note that since in the {\it Clumpy} and {\it Porous} models the attenuation of the UV continuum and Ly$\alpha$ by dust do not necessarily correlate with each other (as e.g. parts of Ly$\alpha$ can escape via random walk), a few galaxies that are attenuated strongly in the UV but less in Ly$\alpha$ show high EW$_\alpha$ values of $\sim1000$\AA. Thus, the main driver of these high EW$_\alpha$ values is the dust attenuation of the UV continuum assumed in our models. Comparing our fraction of LBGs showing Ly$\alpha$ emission with those obtained in observations \citep[e.g.][]{Schenker2012, Schenker2014, Caruana2014, Pentericci2014, Pentericci2018, Mason2019}, we find that (i) the observed trend of $f_\mathrm{LAE}$ decreasing towards higher UV luminosity agrees roughly with our results for EW$_\alpha>50$\AA~ but not for EW$_\alpha>25$\AA, and (ii) our $f_\mathrm{LAE}$ values are higher than those inferred from observations (again more so for EW$_\alpha>25$\AA~ than EW$_\alpha>50$\AA). These discrepancies hint at either our model predicting too high Ly$\alpha$ or too low UV luminosities (particularly for more massive galaxies), despite reproducing the observed Ly$\alpha$ and UV LFs, or observations missing bright LAEs. Interestingly, we find that the fractions of LBGs with high EW$_\alpha$ values, $f_\mathrm{LAE}(\mathrm{EW}_\alpha>100$\AA$)\simeq1-12\%$ and $f_\mathrm{LAE}(\mathrm{EW}_\alpha>240$\AA$)\simeq1-2\%$ in the {\it Clumpy} and {\it Porous} models, are in rough agreement with the results from deep MUSE observations at $z=3-6$ that consider only LAEs with detected UV continuum \citep{Kerutt2022}.
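To make the selection behind $f_\mathrm{LAE}$ explicit, the following minimal Python sketch (a hypothetical helper, not part of the {\sc astraeus} pipeline) computes the fraction of LBGs per $M_\mathrm{UV}$ bin that count as LAEs, mirroring the selection described above, i.e. a Ly$\alpha$ luminosity threshold of $10^{42}$erg~s$^{-1}$ combined with an equivalent-width cut.

\begin{verbatim}
import numpy as np

def lae_fraction(m_uv, ew_alpha, l_alpha, m_uv_edges,
                 ew_cut=25.0, l_alpha_cut=1e42):
    """Fraction of LBGs per M_UV bin that qualify as LAEs.

    m_uv       : observed UV magnitudes of the LBG sample
    ew_alpha   : observed Ly-alpha equivalent widths [Angstrom]
    l_alpha    : observed Ly-alpha luminosities [erg/s]
    m_uv_edges : bin edges in M_UV (brighter = more negative)
    """
    is_lae = (ew_alpha >= ew_cut) & (l_alpha >= l_alpha_cut)
    frac = np.full(len(m_uv_edges) - 1, np.nan)
    for i, (lo, hi) in enumerate(zip(m_uv_edges[:-1], m_uv_edges[1:])):
        in_bin = (m_uv >= lo) & (m_uv < hi)
        if in_bin.sum() > 0:
            frac[i] = is_lae[in_bin].mean()
    return frac
\end{verbatim}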
A higher abundance of high EW$_\alpha$ values has been found in various high-redshift LAE observations \citep[e.g.][]{Shibuya2018, Malhotra_Rhoads2002, shimasaku2006}. Nevertheless, our $f_\mathrm{LAE}$ values agree roughly with the results from radiative hydrodynamical simulations post-processed with Ly$\alpha$ radiative transfer, such as {\sc sphinx} \citep[c.f. Fig. B1 in][]{Garel2021}. \section{Conclusions} \label{sec_conclusions} We apply our new framework for LAEs to different reionisation scenarios, and analyse how the escape fraction of \HI ionising photons, $f_\mathrm{esc}$, and its dependence on halo mass affect the luminosity-dependent number and spatial distributions of LAEs. Besides $f_\mathrm{esc}$ affecting the IGM ionisation topology and the strength of the Ly$\alpha$ line produced in the ISM, its sensitivity to the density and velocity structure of ISM gas and dust has been found to correlate with the Ly$\alpha$ line profile emerging from a galaxy and the fraction of Ly$\alpha$ radiation escaping into the IGM. Notably, the emerging Ly$\alpha$ line profile reflects the attenuation by dust in the ISM and can also change the fraction of Ly$\alpha$ radiation that traverses the IGM unattenuated by \HI. For this reason, we build an analytical model for Ly$\alpha$ line profiles that emerge from a Ly$\alpha$ source surrounded by an outflowing shell of dusty gas clumps interspersed with low-density channels. Our model reproduces the numerical radiative transfer results of a shell with dust gas clumps of different sizes as presented in \citet{Gronke2017}. By coupling this model to {\sc astraeus}, a semi-numerical model coupling galaxy evolution and reionisation self-consistently, we derive the Ly$\alpha$ line profiles emerging from the simulated galaxy population and explore the resulting large-scale distribution of LAEs for different dependencies of $f_\mathrm{esc}$ on halo mass (decreasing, constant, increasing) and Ly$\alpha$ line profiles (Gaussian profile, outflowing shell of dusty clumps interspersed with low-density channels or not). For this parameter space, we analyse the resultant ionisation topologies, the dependencies of Ly$\alpha$ line profiles and Ly$\alpha$ properties on halo mass, and the location of galaxies with observable Ly$\alpha$ emission in the large-scale structure. Our main results are the following: \begin{enumerate} \item For an outflowing shell consisting of clumps of the same size, the Ly$\alpha$ line profile emerging from a galaxy develops from a central peak at the Ly$\alpha$ resonance to a double peak profile as it becomes more massive. Adding low-density channels results in either an enhancement or weakening of this trend as $f_\mathrm{esc}$ decreases or increases with rising halo mass. \item In all reionisation scenarios and Ly$\alpha$ line profile models, LAEs are more massive galaxies with $M_h\gtrsim10^{10}\msun$. These galaxies exhibit continuous star formation and are biased tracers of the underlying mass density distribution. Both allow efficient transmission of the Ly$\alpha$ line through the IGM by facilitating the build-up of ionised regions around them. In contrast, less massive galaxies are surrounded by smaller ionised regions, which results in their Ly$\alpha$ radiation being significantly attenuated by \HI in the IGM. \item As LAEs are more massive galaxies and the most biased tracers of the underlying mass density distribution, they are located in the densest and most highly ionised regions. 
This finding holds for any inside-out reionisation scenario where dense regions containing massive galaxies are ionised before under-dense voids \citep[see also][]{Hutter2014, Hutter2017}. In such scenarios, the spatial distribution of LAEs is primarily sensitive to the global ionisation fraction and only in second-order to the ionisation topology or the trend of $f_\mathrm{esc}$ with halo mass. \item As the observable Ly$\alpha$ LFs are composed of the Ly$\alpha$ emission from more massive galaxies, a decrease in their intrinsic Ly$\alpha$ luminosities (Ly$\alpha$ produced in the ISM) due to higher $f_\mathrm{esc}$ values can be compensated by reducing the attenuation by dust in the ISM \citep[echoing the degeneracy found in][]{Hutter2014}. However, if $f_\mathrm{esc}$ exceeds $\sim0.5$ for the most massive galaxies ($M_h\gtrsim10^{11}\msun$), their intrinsic Ly$\alpha$ luminosity is too low to reproduce the observed Ly$\alpha$ LFs \citep[see also][]{Hutter2014}. \end{enumerate} All combinations of our reionisation scenarios and Ly$\alpha$ line profile models result in Ly$\alpha$ and UV luminosities in reasonable agreement with observational constraints. However, although two of the three Ly$\alpha$ line profile models investigated use parameterisations of numerical Ly$\alpha$ radiative transfer simulation results, they represent idealised scenarios where the gas in each galaxy is distributed in clumps of the same mass and moves outwards at the same velocity. In reality, the density and velocity distributions of gas and dust in the ISM are more complex: Firstly, the dusty gas clumps will have different masses, with a distribution close to that of a scale-free one at the massive end. Such a mass distribution would result in more massive galaxies having larger clumps than less massive galaxies, which again would lead to a homogenisation of their Ly$\alpha$ line profiles where more massive (less massive) galaxies have an enhanced (weakened) central peak component and a weakened (enhanced) double-peak component. This change in the Ly$\alpha$ line profiles would result in the Ly$\alpha$ radiation being less (more) attenuated by dust in the ISM and traversing the IGM more (less) efficiently. Secondly, the low-density channels might not be gas-free or fully ionised, giving rise to a narrower double-peak profile instead of the central peak profile used in this work. Additionally, the gas may exhibit a turbulent velocity structure that could broaden the double-peak component. Both partially neutral low-density channels and an inhomogeneous velocity structure are likely to enhance the transmission of Ly$\alpha$ through the IGM. Thirdly, the attenuation of Ly$\alpha$ radiation by dust in the ISM depends on the distribution of dust in clumps. While our model assumes that gas and dust are perfectly mixed (resulting in an absorption probability per clump of $\epsilon\propto 1-\exp(-\tau_\mathrm{d,cl})$), a scenario where dust condensates in the centre surrounded by a shell of hydrogen gas would lower the absorption probability per clump and enhance the escape fraction of Ly$\alpha$ photons from a galaxy. While our Ly$\alpha$ line profile models are limited by the simplified structure assumed for the ISM, they cover the extreme cases of a central peak profile and a double-peak profile for more massive galaxies. Since these more massive galaxies drive the observed Ly$\alpha$ LFs, models accounting for a more detailed ISM structure are likely to lie in between and not change our key findings. 
In particular, the insensitivity of the spatial distribution of LAEs to any dependence of $f_\mathrm{esc}$ with halo mass suggests that LAEs alone can not help to constrain any gradual dependence of $f_\mathrm{esc}$ with galactic properties. Any dependency introduced in the intrinsic Ly$\alpha$ luminosity we can compensate by the opposed trend of the Ly$\alpha$ escape fraction, achieved by changing the ISM gas and dust distribution. This insensitivity to $f_\mathrm{esc}$ dependencies makes LAEs relatively robust tracers of the underlying density field that we can use to pin down the ionisation topology. Constraining $f_\mathrm{esc}$ during the EoR will require a combination of ionisation topology measurements through the \HI 21cm signal and measurements of other emission lines. \section*{Acknowledgements} We thank Max Gronke and Peter Laursen for useful discussions. AH, GY, LL, PD and SG acknowledge support from the European Research Council's starting grant ERC StG-717001 (``DELPHI"). AH, MT, PD also acknowledge support from the NWO grant 016.VIDI.189.162 (``ODIN") and the European Commission's and University of Groningen's CO-FUND Rosalind Franklin program. PD thanks the Institute for Advanced Study (IAS) Princeton, where a part of this work was carried out, for their generous hospitality and support through the Bershadsky Fund. GY acknowledges financial support from MICIU/FEDER under project grant PGC2018-094975-C21. We thank Peter Behroozi for creating and providing the {\sc rockstar} merger trees of the {\sc vsmdpl} and {\sc esmdpl} simulations. The authors wish to thank V. Springel for allowing us to use the L-Gadget2 code to run the different Multidark simulation boxes, including the {\sc vsmdpl} and {\sc esmdpl} used in this work. The {\sc vsmdpl} and {\sc esmdpl} simulations have been performed at LRZ Munich within the project pr87yi. The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this project by providing computing time on the GCS Supercomputer SUPERMUC-NG at Leibniz Supercomputing Centre (www.lrz.de). The CosmoSim database (\url{www.cosmosim.org}) provides access to the simulations and the Rockstar data. The database is a service by the Leibniz Institute for Astrophysics Potsdam (AIP). The Cosmic Dawn Center (DAWN) is funded by the Danish National Research Foundation under grant No. 140. This research made use of \texttt{matplotlib}, a Python library for publication quality graphics \citep{hunter2007}; and the Python library \texttt{numpy} \citep{numpy}. \section*{Data Availability} The source code of the semi-numerical galaxy evolution and reionisation model within the {\sc astraeus} framework and the employed analysis scripts are available on GitHub (\url{https://github.com/annehutter/astraeus}). The underlying N-body DM simulation, the {\sc astraeus} simulations and derived data in this research will be shared on reasonable request to the corresponding author. \bibliographystyle{mnras}
\section*{Introduction} Materials crystallizing in the perovskite structure attract attention in many fields due to their high tunability and the resulting, broad spectrum of physical and chemical properties \cite{hwang2017perovskites,zhu2014perovskite,ha2011adaptive}. Most applications of these materials rely on their surfaces and interfaces. The ternary chemical composition of perovskites allows for many different surface terminations \cite{erdman2003surface, kienzle2011vacant} with dramatically different chemical and physical behaviors. It is mostly assumed, however, that perovskites prepared by wet chemical techniques are bulk-terminated (1$\times$1) \cite{kawasaki1994atomic}. Theoretical modeling of catalytic processes \cite{Suntivich2011, Norskov2011} and physical phenomena \cite{santander-syro2011two, meevasana2011creation} generally assume this pristine surface structure. The interfaces between two perovskites preserve the bulk crystal structure and the properties of the junction are strongly influenced by the local defect chemistry. Several emerging applications also profit from the bulk termination, such as employing ferroelectricity for optimizing or promoting catalytic reactions \cite{Kakekhani2015ACS, Kakekhani2016}. Surprisingly, the available atomic-scale information on bulk-terminated cubic perovskites is quite limited. A prototypical example is SrTiO$_3$: The most common SrTiO$_3$(001) surface preparation method is wet-chemical treatment. Etching with buffered-HF \cite{kawasaki1994atomic} preferentially removes the SrO layer and leaves the surface fully covered with flat, TiO$_2$-terminated terraces \cite{koster1998quasi, koster1998influence}. This method was re-examined several times \cite{deak2006ordering,castell2002scanning,silly2006srtio3,baniecki2008photoemission,herger2007surfacePRL}; as was shown by x-ray photoelectron spectroscopy (XPS) it can induce an unintentional substitution of oxygen with F \cite{chambers2012unintentional}. This can be avoided by etching the surface with HCl--HNO$_3$ \cite{kareev2008atomic} or non-acidic solvents \cite{chambers2012unintentional,connell2012preparation, gerhold2014stoichiometry}. Etched surfaces must undergo annealing to at least 600\,$^\circ$C after introduction into vacuum to remove adsorbates. Up to this temperature, perfectly flat, TiO$_2$-terminated surfaces usually display a (1$\times$1) pattern in low-energy electron diffraction (LEED) or reflection high energy diffraction (RHEED) \cite{castell2002scanning,di2012observation}, while a series of surface reconstructions appear after annealing at higher temperatures \cite{herger2007surfacePRL,herger2007surface,silly2006srtio3,dagdeviren2016surface}. Some reconstructions have been atomically resolved using scanning tunneling microscopy (STM) including the (1$\times$2), (2$\times$2), \textit{c}(4$\times$2), \textit{c}(4$\times$4), and \textit{c}(6$\times$2) \cite{castell2002scanning,erdman2003surface,lanier2007atomic,gerhold2014stoichiometry}, and several more on surfaces that were sputtered and annealed in ultrahigh vacuum (UHV) \cite{deak2006ordering}. So far, the structural characterization of SrTiO$_3$(001) and other cubic perovskites was mostly based on diffraction techniques. A simple (1$\times$1) diffraction pattern could also stem from the bulk underneath a disordered layer, however; a true proof of a crystalline top surface requires atomically resolved imaging. 
For perovskite oxides this has been demonstrated only recently \cite{sokolovic2019incipient,setvin2018polarity} using non-contact atomic force microscopy (nc-AFM, AFM hereafter) \cite{morita2015noncontact}. This technique provides clear, atomically resolved images of the (1$\times$1) termination, whereas STM shows no atomic corrugation \cite{sokolovic2019incipient}. It should be noted that the cleaving process that produces such unequivocally crystalline (1$\times$1) terminated SrTiO$_3$(001) surfaces relies on incipient ferroelectricity, and that the induced polarity necessarily results in a sizable density of charged point defects \cite{sokolovic2019incipient}. Such cleaved surfaces can have micron-sized domains with exclusively TiO$_2$ and SrO termination, and each contains 14$\pm$2\% Sr adatoms and Sr vacancies, respectively. One could expect that thermal annealing heals such intrinsic point defects. Instead, this work shows that the as-cleaved SrTiO$_3$(001)-(1$\times$1) surface is unstable. Raising the temperature results in the lateral migration of the point defects, and an overlayer without long-range order forms above 400\,$^\circ$C. Having established that AFM is capable of providing a clear picture of ordered perovskite surfaces, it was applied to SrTiO$_3$(001) prepared by wet etching. These samples show no signs of an ordered surface in AFM, however, raising doubts that the commonly observed (1$\times$1) diffraction pattern indicates a crystalline, unreconstructed surface. \begin{figure}[h!] \begin{center} \includegraphics[width=1.0\columnwidth,clip=true]{Fig4.pdf} \end{center} \caption{ Cut-and-polished TiO$_2$-terminated SrTiO$_3$(001) surface prepared by \textit{ex-situ} wet-chemical treatment. (Top to bottom) LEED, large- and small-area empty-states STM, and AFM images after annealing at (a) 400\,$^\circ$C in UHV, (b) 500\,$^\circ$C and (c) 700\,$^\circ$C in 1$\times$10$^{-6}$\,mbar O$_2$. (d) XPS core-level spectra of the main elements at different annealing conditions, obtained with Al$K\alpha$ line in grazing emission. }\label{fig4} \end{figure} SrTiO$_3$ samples doped with 0.7\,at.\% Nb were used. For cleaving, samples were held in a cleaving device \cite{sokolovic2019incipient} constructed from Mo. After insertion into a UHV chamber with a base pressure below 2$\times$10$^{-10}$ mbar, the device was thoroughly degassed and cooled down to room temperature (RT) prior to cleaving. Annealing was performed \textit{via} a resistive-heating wire below the sample mount. The precision of the reported annealing temperatures is estimated as $\pm$30\,$^\circ$C due to the limited thermal conductivity of the sample mount. Annealing times ranged from 30 to 60 min. Scanning--probe measurements were performed in UHV with a base pressure below 2$\times$10$^{-11}$~mbar on a ScientaOmicron low-temperature STM/AFM at either $T$=77.7\,K or $T$=4.8\,K, using qPlus cantilevers \cite{giessibl2013sensor} with a separate wire for the tunneling current \cite{Majzik2012} and a differential preamplifier \cite{GiessiblPreamp}. Etched tungsten tips were glued on the tuning fork and cleaned by self-sputtering in Ar atmosphere \cite{setvin2012ultrasharp} prior to the experiment. The resonance frequency of the qPlus cantilevers ranged from 25 to 80\,kHz, with Q factors of $\approx$ 50000. The STM images were obtained by applying a positive bias to the sample. 
All presented AFM images were acquired with an oxygen-terminated tip \cite{SokolovicPNAS2020}, $i.e.$, surface cations appear attractive (bright) and anions repulsive (dark) \cite{Yurtsever2012}. Figure\,\hyperref[fig1]{1} shows XPS and LEED results of SrTiO$_3$(001) prepared by \emph{in-situ} cleaving \cite{sokolovic2019incipient} and after annealing at increasingly higher temperatures for 45 min each, up to 500\,$^\circ$C. In XPS the surfaces are free of contaminants such as carbon. The core-level spectra of the constituents do not change, indicating that the overall surface stoichiometry is preserved. Despite the reducing character of the UHV environment, the elements retain their oxidation state. The as-cleaved surface exhibits a clear (1$\times$1) LEED pattern. The periodicity does not change upon the various annealing steps. On the other hand, the sharpness of the LEED spots and the background intensity vary: The sharpest diffraction pattern was observed after annealing at 200\,$^\circ$C, while further annealing degraded the pattern somewhat. A distinct (1$\times$1) LEED pattern is also reported throughout the literature for polished SrTiO$_3$(001) surfaces annealed at temperatures above 600\,$^\circ$C, necessary for degassing after introduction into UHV. Above 850\,$^\circ$C, spots originating from \textit{c}(4$\times$2) \cite{erdman2002structure,erdman2003srtio3} or (1$\times$2) \cite{castell2002scanning} reconstructions start to appear. The surfaces of cleaved samples have SrO and TiO$_2$ domains that span 10 to 100 $\mu$m \cite{sokolovic2019incipient}. This is below the resolution of either of the techniques applied in Fig.\,\hyperref[fig1]{1}. The STM/AFM results in Figs.\,\hyperref[fig2]{2} and \hyperref[fig3]{3} show the temperature evolution of the two terminations separately. Figure\,\hyperref[fig2]{2} focuses on the SrO termination. STM images illustrate the large-scale surface morphology, while smaller-size AFM images provide details of the atomic structure. After cleaving at room temperature the surface is atomically flat \cite{sokolovic2019incipient}. The SrO (1$\times$1) surface is covered with the specific concentration of 14$\pm$2\% of Sr vacancies, V$_{\mathrm{Sr}}^{2-}$, apparent as black, `missing' atoms in Fig.\,\hyperref[fig2]{2(b)} \cite{sokolovic2019incipient}. The corresponding STM image shows a corrugation as high as a full unit cell ($\approx$0.4\,nm), which is a purely electronic effect originating from the band bending induced by the charged Sr vacancies. Annealing at 250\,$^\circ$C results in the formation of pits at the flat SrO terraces, seen in both STM and AFM. Their surroundings remain unreconstructed, with fewer V$_{\mathrm{Sr}}^{2-}$ defects. The intrinsic V$_{\mathrm{Sr}}^{2-}$ agglomerate into larger, half-unit-cell-deep pits that expose the underlying TiO$_2$ termination. No oxygen atoms are visible within the pits; they can be considered aggregates of Schottky-type defects \cite{walsh2015self}, formed by Sr vacancy diffusion and the formation of O vacancies \cite{walsh2011strontium}. STM images of the SrO domains annealed to 250\,$^\circ$C and 330\,$^\circ$C show significantly smaller corrugation compared to the as-cleaved surface, consistent with a reduced band bending. Such surfaces also show the LEED pattern [Fig.\,\hyperref[fig1]{1(b)}] with the lowest background intensity. Further annealing results in the lateral growth of the pits, preferentially along the [100] and [010] directions. 
The STM images in Fig.\,\hyperref[fig2]{2(a)} appear considerably rougher (up to 3 layers). Annealing the SrO termination at 500\,$^\circ$C results in a loss of the (1$\times$1) ordering in AFM. Instead, images locally show a short-range (2$\times$2) periodicity, while LEED retains the (1$\times$1) symmetry. The surface corrugation deduced from the AFM images is less than half a unit cell. STM images exhibit larger apparent height differences, but this can be partially attributed to electronic effects, when domains with different electronic properties form. At this stage of annealing, it is no longer possible to distinguish the previously SrO- and TiO$_2$-terminated areas: The entire cleaved SrTiO$_3$(001) surface shows the same morphology in STM and AFM images as in the rightmost panels of Figs.\,\hyperref[fig2]{2(a)} and \hyperref[fig2]{2(b)}, respectively. The (1$\times$1) LEED pattern of this surface is attributed to diffraction from the subsurface layers, while the disordered surface layer results in the increased background intensity. The evolution of the TiO$_2$ termination with temperature is shown in Fig.\,\hyperref[fig3]{3}. After cleaving, the TiO$_2$ termination hosts Sr$_{\mathrm{ad}}^{2+}$ adatoms [bright dots in Fig.\,\hyperref[fig3]{3(b)}], complementary to the V$_{\mathrm{Sr}}^{2-}$ vacancies at the SrO termination. In STM, the two terminations appear very similar. Annealing above 250\,$^\circ$C results in the formation of small, disconnected islands. The clustering of Sr$_{\mathrm{ad}}^{2+}$ adatoms requires the presence of O$^{2-}$ to compensate their electric charge, and indeed AFM images show the presence of anions [dark dots in Fig.\,\hyperref[fig3]{3(b)}]. These SrO islands show tiny areas with a \textit{c}(2$\times$2) SrO structure on top of the TiO$_2$ termination. The intrinsic excess of Sr adatoms at the as-cleaved TiO$_2$ termination constitutes an ideal seed for the crystal growth of the next perovskite SrTiO$_3$ layer, provided the temperature is sufficiently high for the diffusion of the adatoms and of oxygen to complete the SrO stoichiometry. The \textit{c}(2$\times$2)-like areas spread with increasing temperature up to 430\,$^\circ$C. Islands grow more connected, while still not covering the entire surface. The maximum coverage of this SrO superstructure over the TiO$_2$ termination is limited by the initial 0.14~ML coverage of the Sr adatoms. When arranged in a \textit{c}(2$\times$2) superstructure, these adatoms can cover 28\% of the surface area, close to the maximum coverage observed in AFM. After annealing at 500\,$^\circ$C the previous TiO$_2$ termination appears disordered and becomes indistinguishable from what was the SrO termination [rightmost panels of Figs.\,\hyperref[fig2]{2(a)} and \hyperref[fig2]{2(b)}]. Further annealing of the mixed-termination morphology at 600\,$^\circ$C [rightmost panel of Fig.\,\hyperref[fig3]{3(a)}] does not improve the surface roughness, but instead increases the width of the pits and islands. The two opposite terminations of the as-cleaved SrTiO$_3$(001)-(1$\times$1) surface experience a complementary evolution with annealing. The pit/island creation mechanism is induced by the presence of the intrinsic, polarity-compensating point defects, $i.e.$, Sr vacancies and adatoms \cite{sokolovic2019incipient}. Migration of these charged V$_{\mathrm{Sr}}^{2-}$ and Sr$_{\mathrm{ad}}^{2+}$ point defects is activated at temperatures as low as 200\,$^\circ$C. 
Moreover, when the surface is additionally enriched with Sr adatoms \textit{via} evaporation, the adatoms are mobile at both terminations and start to aggregate at temperatures as low as 150\,$^{\circ}$C, as shown in the Supplemental Material (SM). The disappearance/appearance of O likely originates from an exchange with the subsurface region, because lateral diffusion across the micrometers-wide domains is unlikely \cite{riva2019pushing,riva2019epitaxial}. The temperature range for the migration of vacancies is slightly lower than reported in the literature for SrTiO$_3$ \cite{de2015oxygen, riva2018influence}, but can be rationalized by the presence of electric fields related to the charged defects. Since the intrinsic point defects dominate the thermal behavior of the cleaved crystals, annealing excursions were also conducted on SrTiO$_3$ samples prepared by a wet chemical treatment that exposes only the TiO$_2$ termination. Full details are laid out in the SM. Two cut-and-polished SrTiO$_3$(001) crystals, again with 0.7\,at.\% Nb doping, were cleaned \textit{ex situ} and boiled in ultra-pure water to etch away the soluble SrO termination. One sample was baked in air at 950\,$^\circ$C prior to introduction to UHV (to create large, flat terraces) and turned out to be contaminated with carbon, alkali, and alkaline earth metals; see the SM. The second sample was introduced to UHV directly after wet cleaning and was contaminated with carbon alone. Figure\,\hyperref[fig4]{4} shows the temperature evolution of the latter sample. After annealing at mild temperatures, LEED shows a distinct (1$\times$1) pattern, and large-area STM images show flat terraces. In AFM, however, clumps are visible, likely due to contamination. A substantial C\,1s signal in XPS [Fig.\,\hyperref[fig4]{4(d)}] indicates that the contaminants are carbon-based organics. Annealing at 500\,$^\circ$C--650\,$^\circ$C in a 1$\times$10$^{-7}$--1$\times$10$^{-6}$\,mbar O$_2$ back-pressure for 1--2 h gradually removed the C, but constant-height AFM still shows a surface covered by undetermined hillocks, with no hints of the underlying substrate. A significant reduction of C occurred only after annealing at 700\,$^\circ$C in 1$\times$10$^{-7}$\,mbar O$_2$ for 2 h. This treatment resulted in a reconstructed surface with a $\left(\sqrt{\mathrm{13}}\times \sqrt{\mathrm{13}}\right)\!\!R\mathrm{33.7}^\circ$ superstructure \cite{kienzle2011vacant,ohsawa2016negligible}, clearly visible in LEED and in the constant-height AFM image in Fig.\,\hyperref[fig4]{4(c)}. In summary, the search for an SrTiO$_3$(001)-(1$\times$1) surface that can be considered `pristine' --- crystalline and well-ordered, with a negligible amount of defects and contaminants --- has not been successful so far. As-cleaved surfaces come closest, but necessarily contain both SrO- and TiO$_2$-terminated domains (albeit of considerable size) and charged point defects. The temperature-induced transformation to an ill-defined, disordered top layer indicates that the (1$\times$1) has a high surface energy and is only metastable. The techniques generally applied to judge surface quality (electron diffraction, XPS, large-area STM or ambient AFM) give results that would be consistent with a perfect (1$\times$1) termination, but are contradicted by atomically resolved nc-AFM. The same is true for the etched, TiO$_2$-terminated surfaces that are used to a great extent as substrates in the growth of heteroepitaxial oxide films. 
While such surfaces can display a flat morphology with a high-quality diffraction pattern, nc-AFM shows no signs of an unreconstructed surface. The temperature required for removing carbon lies above the stability region of the bulk-terminated surface. The (001) surfaces of SrTiO$_3$ and other perovskites continue to be of great interest, both for probing their intrinsic properties and as substrates for heteroepitaxy. As the results presented here show, the assumption of an atomically clean, crystalline bulk termination might not be warranted. This should be considered in the proper interpretation of experimental data. This work was supported by the Austrian Science Fund (FWF) projects Wittgenstein Prize (Z-250), Solids4Fun (F-1234) and SuPer (P32148-N36). J.X. acknowledges the support from the National Natural Science Foundation of China (91634106), the China Scholarship Council and Chongqing University. M.Se. acknowledges support from the Czech Science Foundation GACR 20-21727X and GAUK Primus/20/SCI/009.
\section{Introduction} Replicating human movements and behaviour in humanoid robots is a formidable challenge with many exciting applications, ranging from health care (e.g. artificial limbs \cite{alshamsi2016}) to space exploration \cite{nasa2015}. Since the manual engineering of controllers for such tasks is extremely difficult, machine learning and specifically reinforcement learning have received much attention in this area. A recent NIPS competition \cite{kidzinski2018learningtorun} focused on the creation of a simulated humanoid running robot in a continuous and high-dimensional environment. The top entries to the competition employed state-of-the-art deep reinforcement learning techniques \cite{jaskowski2018rltorunfast,kidzinski2018l2rsolutions}, which resulted in strong, but not optimal, performance. In this paper, we demonstrate that videos showing human running behaviour can be used to significantly improve the learning performance. In our approach, we use the coordinates of specific body parts (e.g. the foot) to define a potential function that is fed into potential-based reward shaping \cite{ng1999policy}. To create a strong baseline for the evaluation of our approach, we combined selected RL techniques of the top ten competition entries with further optimizations to create a running agent that displays a significantly faster learning rate than the top entry. We then added the reward shaping from video data sampled from various YouTube videos, which resulted in a running agent that reached twice the running speed of our baseline within 12 hours of training. Since potential-based reward shaping has the nice theoretical property of not changing the optimal policy \cite{ng1999policy}, data taken from sub-optimal running behaviour does not prevent the RL agent from overcoming the sub-optimalities and producing humanoid running that outperforms the data source. We demonstrate this theoretical property empirically by sampling limb positions from a slower-running agent and show how our approach generates a running robot that, after a relatively short time of training, starts to run faster than the original agent. Overall, the main contribution of our work is to demonstrate how data extracted from videos of human movements can be used to significantly speed up the reinforcement learning of humanoid robots. While our work focuses on the training of humanoid running behaviour, the proposed techniques can easily be applied to any other form of humanoid movement. \section{Background} \subsection{Reinforcement Learning} Reinforcement learning is a paradigm which allows agents to learn by reward and punishment from interactions with the environment \cite{sutton1984temporal}. The numeric feedback received from the environment is used to improve the agent's actions. The majority of work in the area of reinforcement learning applies a Markov Decision Process (MDP) as the mathematical model \cite{puterman2014markov}. An MDP is a tuple $(S, A, T, R)$, where $S$ is the state space, $A$ is the action space, $T(s,a,s') = Pr(s'|s,a)$ is the probability that action $a$ in state $s$ will lead to state $s'$, and $R(s, a, s')$ is the immediate reward $r$ received when action $a$ taken in state $s$ results in a transition to state $s'$. The problem of solving an MDP is to find a policy (i.e., a mapping from states to actions) which maximises the accumulated reward. 
When the environment dynamics (transition probabilities and reward function) are available, this task can be solved using policy iteration \cite{bertsekas1995dynamic}. When the environment dynamics are not available, as with most real problem domains, policy iteration cannot be used. However, the concept of an iterative approach remains the backbone of the majority of reinforcement learning algorithms. These algorithms apply so-called temporal-difference updates to propagate information about values of states and/or state-action pairs, $Q(s, a)$ [20]. These updates are based on the difference of two temporally different estimates of a particular state or state-action value. The Q-learning algorithm is such a method [21]. After each transition, $(s, a) \rightarrow (s', r)$, in the environment, it updates state-action values by the formula: \begin{equation} \label{eq:qlearn} Q(s,a) \leftarrow Q(s,a) + \alpha[r + \gamma\max_{a'} Q(s',a') - Q(s,a)] \end{equation} where $\alpha$ is the learning rate and $\gamma$ is the discount factor. It modifies the value of taking action $a$ in state $s$, when after executing this action the environment returned reward $r$ and moved to a new state $s'$. \subsection{Potential-Based Reward Shaping} The idea of reward shaping is to provide an additional reward representative of prior knowledge to reduce the number of suboptimal actions made and so reduce the time needed to learn \cite{ng1999policy, randlov1998learning}. This concept can be represented by the following formula for the Q-learning algorithm: \begin{equation} \label{eq:shaping} Q(s,a) \leftarrow Q(s,a)+\alpha[r+F(s,s')+\gamma\max_{a'} Q(s',a') - Q(s,a)] \end{equation} where $F(s,s')$ is the general form of any state-based shaping reward. Even though reward shaping has been powerful in many experiments, it quickly became apparent that, when used improperly, it can change the optimal policy \cite{randlov1998learning}. To deal with such problems, potential-based reward shaping was proposed \cite{ng1999policy} as the difference of some potential function $\Phi$ defined over a source state $s$ and a destination state $s'$: $F(s,s')=\gamma \Phi(s') - \Phi(s)$, where $\gamma$ must be the same discount factor as used in the agent's update rule (see Equation \ref{eq:qlearn}). Ng et al. \cite{ng1999policy} proved that potential-based reward shaping, defined according to Equation \ref{eq:shaping}, guarantees learning a policy which is equivalent to the one learned without reward shaping in both infinite- and finite-horizon MDPs. Wiewiora \cite{wiewiora2003potential} later proved that an agent learning with potential-based reward shaping and no knowledge-based Q-table initialization will behave identically to an agent without reward shaping when the latter agent's value function is initialized with the same potential function. These proofs, and all subsequent proofs regarding potential-based reward shaping including those presented in this paper, require actions to be selected by an advantage-based policy \cite{wiewiora2003potential}. Advantage-based policies select actions based on their relative differences in value and not their exact values. Common examples include greedy, $\epsilon$-greedy and Boltzmann softmax. \subsection{Deep-RL} In Deep RL, the Q value function is represented as a multi-layer neural network \cite{Goodfellow-et-al-2016}. Deep RL algorithms have been shown to perform strongly on RL tasks which had previously been infeasible to tackle. 
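Before turning to the deep RL setting, the following minimal tabular sketch illustrates where the shaping term of Equation \ref{eq:shaping} enters the Q-learning update of Equation \ref{eq:qlearn}. It is illustrative only: the toy potential, the function names and the $\epsilon$-greedy policy are not part of our implementation, which uses DDPG as described below.

\begin{verbatim}
import random
from collections import defaultdict

# Minimal tabular Q-learning with potential-based reward shaping.
# Illustrative sketch; the experiments in this paper use DDPG, not a Q-table.

alpha, gamma = 0.08, 0.9          # learning rate and discount factor
Q = defaultdict(float)            # Q-table: (state, action) -> value

def phi(state):
    """Toy potential; in this paper the potential is derived from video data."""
    return 0.0

def shaped_update(s, a, r, s_next, actions):
    """One Q-learning update with F(s, s') = gamma*phi(s') - phi(s) added."""
    F = gamma * phi(s_next) - phi(s)
    td_target = r + F + gamma * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (td_target - Q[(s, a)])

def epsilon_greedy(s, actions, eps=0.1):
    """Advantage-based action selection, as required by the shaping proofs."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])
\end{verbatim}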
Over recent years, a number of algorithms and optimizations have been proposed, and we have chosen to apply the Deep Deterministic Policy Gradient (DDPG) algorithm for our application domain \cite{lillicrap2015continuous}. DDPG has been shown to be effective in continuous action domains where classic reinforcement learning methods struggled. Specifically, in the DDPG algorithm two neural networks are used: $\mu(s)$ is a network (the {\it actor}) that returns the action vector whose components are the values of the corresponding control signals, and $Q^w(s, a)$ is a second neural network (the {\it critic}) that returns the $Q$ value, i.e. the value estimate of action $a$ in state $s$. \begin{equation} \label{eq:policy5} \begin{aligned} \nabla_\theta J(\pi_\theta) & =\int_{S}\rho^\pi(s)\int_{A}\nabla_\theta\pi_\theta(a|s)Q^w(s,a)\, da\, ds \\ &=\mathbb{E}_{s\sim \rho^\pi,a\sim\pi_\theta}[\nabla_\theta \log \pi_\theta (a|s)Q^w(s,a)] \end{aligned} \end{equation} where $\theta$ is the parameter vector of the probabilistic policy and $\rho^\pi(s)$ is the probability of reaching state $s$ with policy $\pi$. For a more complete description of DDPG, see \cite{lillicrap2015continuous}. \subsection{Reinforcement Learning from Demonstration} Human expert demonstrations have been shown to improve the learning speed and accuracy of RL agents on a wide range of tasks. Most work in this area (e.g. \cite{suay2016learning, brys2015reinforcement}) focused on the use of state-action recordings as demonstrations. This is infeasible in the case of video data, where only state information is available and the demonstrated actions are not explicitly provided and often cannot be derived either (as is the case with running). More recently, various methods for state-only demonstrations have been proposed (e.g. \cite{peng2018deepmimic, liu2017imitation}). However, all of these methods target the imitation of demonstrations. Our work employs potential-based reward shaping, which uses the demonstrations to speed up the learning, but is also able to overcome sub-optimalities in the demonstration rather than purely imitating it. \section{Simulation Environment} \label{sec:domain} \begin{figure} \centering \includegraphics[width=0.40\textwidth]{img/opensim3.jpg} \caption{A screenshot of the "Learning to Run" simulation environment} \label{img:environment} \end{figure} The simulation environment has been provided by the "Learning to Run" competition and is based on the OpenSim environment employing the Simbody physics engine. The environment simulates a three-dimensional race course with small obstacles, along which a humanoid robot with 6 joints (ankle, knee, and hip on two legs) and corresponding muscles is running (see Figure \ref{img:environment}). The actions of the running humanoid robot are excitation values applied to the muscles implemented in the robot model. The next state of the environment is computed by the physics engine based on the resulting muscle activations, forces, velocities and positions of the joints. 
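A minimal interaction loop with this environment looks as follows. This is a sketch that assumes the gym-style reset/step interface of the competition's osim-rl package and an action vector of 18 muscle excitations in $[0,1]$; the random excitations merely stand in for the output of the DDPG actor.

\begin{verbatim}
import numpy as np
from osim.env import RunEnv   # "Learning to Run" competition software (assumed API)

env = RunEnv(visualize=False)
observation = env.reset(difficulty=0)   # 41-dimensional state vector

total_reward = 0.0
for step in range(1000):
    # Random excitation vector standing in for the actor's output.
    action = np.random.uniform(0.0, 1.0, size=18)
    observation, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        break
\end{verbatim}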
The OpenSim \cite{kidzinski2018learningtorun} model environment represents the robot state using a vector of 41 features: \begin{itemize} \item position of the pelvis (rotation, x, y) \item velocity of the pelvis (rotation, x, y) \item rotation of each ankle, knee and hip (6 values) \item angular velocity of each ankle, knee and hip (6 values) \item position of the center of mass (2 values) \item velocity of the center of mass (2 values) \item positions of head, pelvis, torso, left and right toes, left and right talus (14 values) \item strength of left and right psoas (a muscle at the lower spine) \item next obstacle: x distance from the pelvis, y position of the center relative to the ground, radius. \end{itemize} The reward of an agent is provided at each simulation step and is the distance covered in the run minus the muscle strain, as computed by the simulation environment. \section{Baseline Agent} When designing the baseline agent, we combined selected techniques from the top 10 competition entries \cite{kidzinski2018l2rsolutions} with further optimizations. In this section, we summarize the most beneficial techniques used, all of which are taken from various contributions published in \cite{kidzinski2018l2rsolutions}. In all experimental results presented in the remainder of this paper, the experiments have been repeated 5 times, and the graphs show the standard error from the mean. The RL parameter choice was $\alpha = 0.08$ and $\gamma = 0.9$, which have been determined experimentally. \subsection{State representation} The original state representation provided by the competition software contained 41 features, described in Section \ref{sec:domain}. In our state representation we added 71 features, including: \begin{itemize} \item Two-dimensional coordinates of key body positions relative to the pelvis at the center point (0,0). \item Two-dimensional velocity and acceleration vectors for key body points. \end{itemize} The new state representation allowed us to significantly speed up the learning process, as seen in Figure \ref{img:centerandfeatures}. \begin{figure} \includegraphics[width=0.47\textwidth]{img/modified.jpg} \caption{Significant learning speed increase after adding velocity and acceleration features and centering the coordinate system at the pelvis position} \label{img:centerandfeatures} \end{figure} \subsection{Additional training experience} After running a simulation episode, we trained the RL agent with additional mirrored data, which represents the agent's experience during the episode reflected along the $xy$ plane. This adds valuable training data for the value estimator (i.e. the critic), since the task is symmetrical. Figure \ref{img:mirrored} shows the resulting performance improvement. \begin{figure} \includegraphics[width=0.47\textwidth]{img/mirrored.jpg} \caption{Speeding up the learning process by adding mirrored data} \label{img:mirrored} \end{figure} \subsection{Repeating the chosen action} Each time the running agent chooses an action, this action is repeated three times. Because we employed an actor-critic method, this reduced the number of computations needed to generate the next action during an episode by a factor of three. The resulting performance gain can be seen in Figure \ref{img:flipaction}. 
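A wrapper implementing this action repetition could look as follows. The sketch assumes a gym-style step interface; the repeat factor and all names are illustrative rather than taken from our code.

\begin{verbatim}
class ActionRepeat:
    """Repeat each chosen action a fixed number of times (sketch,
    gym-style step/reset interface assumed)."""

    def __init__(self, env, repeats=3):
        self.env = env
        self.repeats = repeats

    def reset(self, **kwargs):
        return self.env.reset(**kwargs)

    def step(self, action):
        total_reward, done, info = 0.0, False, {}
        for _ in range(self.repeats):
            observation, reward, done, info = self.env.step(action)
            total_reward += reward
            if done:
                break
        return observation, total_reward, done, info
\end{verbatim}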
\begin{figure} \includegraphics[width=0.47\textwidth]{img/flip.jpg} \caption{Performance gains when repeating each action three times} \label{img:flipaction} \end{figure} \subsection{Reducing state resolution} In this optimization step, all the state representation data was changed from {\it double} to {\it float}. This resulted in a speed-up of the computations and a somewhat smaller state space, while also reducing the precision of the state representation. Figure \ref{img:doublefloat} shows the resulting performance increase. \begin{figure} \includegraphics[width=0.47\textwidth]{img/Speedup.jpg} \caption{Switching from double to float} \label{img:doublefloat} \end{figure} \subsection{Neural network topology} After applying all of the techniques above, we compared 5 different network architectures by arbitrarily varying the number of layers and neurons per layer. The results are presented in Figure \ref{img:architectures}. For our baseline agent we chose the best performing architecture, using 5 layers with 128 neurons each. \begin{figure} \includegraphics[width=0.47\textwidth]{img/6.jpg} \caption{Various neural network topologies: LxN denotes L layers with N neurons per layer} \label{img:architectures} \end{figure} \section{Reward Shaping from Video Data} After designing our baseline, we added potential-based reward shaping from video data taken from arbitrary YouTube videos depicting running humans and human-like characters. In this section we describe how the potential function was generated. \subsection{Potential function} The overall potential function is defined as the sum of potential functions for every body part: pelvis, two knees and two feet. Following the potential-based reward shaping approach, an additional reward is given to the agent on each simulation step, corresponding to the difference in potentials of the source and target state. We considered the following three different potential functions for each body part (knee and foot) in our research, all of them based on the inverse of the distance between the respective body part coordinate in the video-generated data and that of the humanoid robot. The three potential functions represent three different inverse distance functions: \begin{itemize} \item PF1: $\frac{1}{dx + dy}$ \item PF2: $\frac{1}{\sqrt{dx^2 + dy^2}} $ \item PF3: $\frac{1}{dx^2 + dy^2}$ \end{itemize} where $dx$ ($dy$) is the absolute difference between the x (y) coordinate of the respective body part taken from the video data and the x (y) coordinate of the body part of the humanoid robot. 
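A sketch of how such a potential and the resulting shaping reward can be computed is given below, using PF3 as an example. The dictionary keys, the small constant guarding against division by zero, and the omission of the pelvis term (its coordinates relative to itself are always zero) are assumptions of this sketch rather than details of our exact implementation.

\begin{verbatim}
GAMMA = 0.9
# Pelvis omitted in this sketch: all coordinates are taken relative to it.
BODY_PARTS = ("left_knee", "right_knee", "left_foot", "right_foot")
EPS = 1e-6  # guards against division by zero when positions coincide

def pf3(dx, dy):
    """Inverse squared distance (PF3)."""
    return 1.0 / (dx ** 2 + dy ** 2 + EPS)

def potential(robot_pose, video_pose):
    """Sum of per-body-part potentials; each pose maps body part -> (x, y)."""
    total = 0.0
    for part in BODY_PARTS:
        rx, ry = robot_pose[part]
        vx, vy = video_pose[part]
        total += pf3(abs(rx - vx), abs(ry - vy))
    return total

def shaping_reward(phi_prev, phi_next):
    """Potential-based shaping term F(s, s') = gamma * phi(s') - phi(s)."""
    return GAMMA * phi_next - phi_prev
\end{verbatim}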
\subsection{Data collection} For our potential function we have used the following three sources of video data: \begin{itemize} \item A video of a cartoon character running (see Figure \ref{img:cartoon} for a screenshot) \item A video of a running character in a computer game (see Figure \ref{img:videogame} for a screenshot) \item A video of a running human (see Figure \ref{img:runninghuman} for a screenshot) \end{itemize} \begin{figure} \centering \includegraphics[width=0.3\textwidth]{img/cartoon.jpg} \caption{Screenshot from a video depicting a cartoon character running (taken from http://y2u.be/2y6aVz0Acx0)} \label{img:cartoon} \end{figure} \begin{figure} \centering \includegraphics[width=0.3\textwidth]{img/videogame.jpg} \caption{Screenshot from a video depicting a computer game character running (taken from http://y2u.be/YbYOsE7JyXs)} \label{img:videogame} \end{figure} \begin{figure} \centering \includegraphics[width=0.3\textwidth]{img/runninghuman.jpg} \caption{Screenshot from a video depicting a human running (taken from http://y2u.be/5mVgThl-yMU)} \label{img:runninghuman} \end{figure} Each of the sources was used to define a potential function. The performance of the resulting potential functions in RL is compared in Figure \ref{img:var_potential}. Note that the learning curves were not trained until final convergence, which would require much more time and, based on the theoretical properties of potential-based reward shaping, would ultimately reach the same performance. In each source we recorded the positions of the two knees and the two feet relative to the pelvis as two-dimensional coordinates. The recording frequency was four positions per half step. The resulting coordinates were normalized according to the OpenSim simulation. While in our work the extraction of the coordinates was done manually, algorithms to accurately extract body part positions in images with a clear view of the body do exist (e.g. \cite{toshev2014deeppose,guler2018densepose}), and we intend to use these in future work. \subsection{Selecting the data source and the potential function} We first compared the performance of the potential functions based on the three videos and the inverse distance measure PF2. The results for this experiment are shown in Figure \ref{img:differentvideo}, and demonstrate that the human video is the best data source for the reward shaping. \begin{figure} \includegraphics[width=0.47\textwidth]{img/differentvideo.jpg} \caption{Comparing the three videos as a data source for the potential function based on PF2} \label{img:differentvideo} \end{figure} After selecting the running human video as the data source, we compared the three different potential functions as depicted in Figure \ref{img:var_potential}. The results show that PF3 performs best. \begin{figure} \includegraphics[width=0.47\textwidth]{img/pot_functions.JPG} \caption{Comparing three potential functions} \label{img:var_potential} \end{figure} \section{Evaluation of Video-based Reward Shaping} Figure \ref{img:baseline_vs_shaping} shows the comparison of our chosen reward shaping approach (PF3) to the RL baseline. The results show that the reward shaping speeds up the learning significantly, reaching double the running speed at 12 hours of training. The end result after 24 hours of training still shows a significant advantage of the reward shaping approach. It is also worth noting that the demonstration video shows a running human who is using his arms, while the simulation model does not include arms. 
\begin{figure} \includegraphics[width=0.47\textwidth]{img/rewardshaping.jpg} \caption{Performance comparison between the baseline and the reward shaping approach} \label{img:baseline_vs_shaping} \end{figure} An important advantage of potential-based reward shaping is the theoretical guarantee that the shaping will not change the optimal policy. In order to demonstrate this advantage in our context, we used a weak running robot generated by the baseline RL agent after 12 hours of training as a sub-optimal data source for the potential function. Clearly, this agent is not running optimally, and its feet and knees will not be in optimal positions most of the time. We then trained our RL agent with the reward shaping generated from these sub-optimal coordinates (using PF3), and compared the performance to the weak runner. The results are shown in Figure \ref{img:suboptimal}, and demonstrate that the RL agent is able to overcome the suboptimal performance of the data source. In fact, after 20 hours of training, the performance is more than double that of the suboptimal running agent. Also, note that the suboptimal shaping did not hurt the learning performance significantly. After 12 hours of training, the shaped agent performs comparably to the baseline agent at the same point in training. \begin{figure} \includegraphics[width=0.47\textwidth]{img/sim.jpg} \caption{Performance of the reward shaping approach with suboptimal data. The dotted vertical line represents 12 hours of training (the training time of the shaping source).} \label{img:suboptimal} \end{figure} \section{Conclusions} In this paper, we presented a method to use videos of human and human-like running to shape the reward of an RL agent learning to run. Our results demonstrate that a significant improvement in learning speed can be achieved by our proposed method, as compared to a strong baseline which we designed combining selected techniques of the top ten entries to the "Learning to Run" competition at NIPS 2017. In future work, we intend to employ automated body pose extraction methods such as the one presented in \cite{guler2018densepose} and widen our investigation to other humanoid movements apart from running, e.g. jumping.
We demonstrate this theoretical property empirically by sampling limb positions from a slower-running agent and show how our approach generates a running robot, that after a relatively short time of training, starts to run faster than the original agent. Overall, the main contribution of our work is to demonstrate how data extracted from videos of human movements can be used to significantly speed up the reinforcement learning of humanoid robots. While our work focuses on the training of humanoid running behaviour, the proposed techniques can easily be applied to any other form of humanoid movements. \section{Background} \subsection{Reinforcement Learning} Reinforcement learning is a paradigm which allows agents to learn by reward and punishment from interactions with the environment \cite{sutton1984temporal}. The numeric feedback received from the environment is used to improve the agent's actions. The majority of work in the area of reinforcement learning applies a Markov Decision Process (MDP) as a mathematical model \cite{puterman2014markov}. An MDP is a tuple $\big(S, A, T, R)$, where $S$ is the state space, A is the action space, $T(s,a,s') = Pr(s'|s,a)$ is the probability that action a in state s will lead to state $s'$, and $R(s, a, s')$ is the immediate reward $r$ received when action $a$ taken in state $s$ results in a transition to state $s'$. The problem of solving an MDP is to find a policy (i.e., mapping from states to actions) which maximises the accumulated reward. When the environment dynamics (transition probabilities and reward function) are available, this task can be solved using policy iteration \cite{bertsekas1995dynamic}. When the environment dynamics are not available, as with most real problem domains, policy iteration cannot be used. However, the concept of an iterative approach remains the backbone of the majority of reinforcement learning algorithms. These algorithms apply so called temporal-difference updates to propagate information about values of states and/or state-action pairs, $Q(s, a)$ [20]. These updates are based on the difference of the two temporally different estimates of a particular state or state-action value. The Q-learning algorithm is such a method [21]. After each transition, $(s, a) \rightarrow (s', r)$, in the environment, it updates state-action values by the formula: \begin{equation} \label{eq:qlearn} Q(s,a) \leftarrow Q(s,a) + \alpha[r + \gamma\max Q(s',a') - Q(s,a)] \end{equation} where $\alpha$ is the rate of learning and $\gamma$ is the discount factor. It modifies the value of taking action $a$ in state $s$, when after executing this action the environment returned reward $r$, and moved to a new state $s'$. \subsection{Potential Based reward shaping} The idea of reward shaping is to provide an additional reward representative of prior knowledge to reduce the number of suboptimal actions made and so reduce the time needed to learn \cite{ng1999policy, randlov1998learning}. This concept can be represented by the following formula for the Q-learning algorithm: \begin{equation} \label{eq:shaping} Q(s,a) \leftarrow Q(s,a)+\alpha[r+F(s,s')+\gamma\max Q(s',a') - Q(s,a)] \end{equation} where $F(s,s')$ is the general form of any state-based shaping reward. Even though reward shaping has been powerful in many experiments it quickly became apparent that, when used improperly, it can change the optimal policy \cite{randlov1998learning}. 
To deal with such problems, potential-based reward shaping was proposed \cite{ng1999policy} as the difference of some potential function $\Phi$ defined over a source s and a destination state $s':F(s,s')=\gamma \Phi(s') - \Phi(s)$ where $\gamma$ must be the same discount factor as used in the agent's update rule (see Equation \ref{eq:qlearn}). Ng et al. \cite{ng1999policy} proved that potential-based reward shaping, defined according to Equation \ref{eq:shaping}, guarantees learning a policy which is equivalent to the one learned without reward shaping in both infinite and finite horizon MDPs. Wiewiora \cite{wiewiora2003potential} later proved that an agent learning with potential-based reward shaping and no knowledge-based Q-table initialization will behave identically to an agent without reward shaping when the latter agent's value function is initialized with the same potential function. These proofs, and all subsequent proofs regarding potential-based reward shaping including those presented in this paper, require actions to be selected by an advantage-based policy \cite{wiewiora2003potential}. Advantage-based policies select actions based on their relative differences in value and not their exact value. Common examples include greedy, $\epsilon$-greedy and Boltzmann softmax. \subsection{Deep-RL} In Deep RL, the Q value function is represented as a multi-layer neural network \cite{Goodfellow-et-al-2016}. Deep RL algorithms have been shown to perform strongly on RL tasks which have been infeasible to tackle before. Over recent years, a number of algorithms and optimizations have been proposed, and we have chosen to apply the Deep Deterministic Policy Gradient (DDPG) algorithm for our application domain \cite{lillicrap2015continuous}. DDPG has been shown to be effective in continuous action domains where classic reinforcement learning methods struggled. Specifically, in the DDPG algorithm two neural networks are used: $\mu(S)$ is a network (the {\it actor}) that returns the action vector whose components are the values of the corresponding control signals. $Q^w(s, a)$ is a second neural network (the {\it critic}, that returns the $Q$ value, i.e. the value estimate of the action of $a$ in state $s$. \begin{equation} \label{eq:policy5} \begin{aligned} \nabla_\theta J(\pi_\theta) & =\int_{S}\rho^\pi(s)\int_{A}\nabla_\theta\pi_\theta(a|s)Q^w(s,a) da ds \\ &=\mathbb{E}_{s\sim \rho^\pi,a\sim\pi_\theta}[\nabla_\theta \log \pi_\theta (a|s)Q^w(s,a)] \end{aligned} \end{equation} where $\theta$ is the parameter vector of the probabilistic policy and $\rho^\pi(s)$ is the probability of reaching state $s$ with policy $\pi$. For a more complete description of DDPG, see \cite{lillicrap2015continuous}. \subsection{Reinforcement Learning from Demonstration} Human expert demonstrations have been demonstrated to improve the learning speed and accuracy of RL agent on a wide range of tasks. Most work in this area (e.g. \cite{suay2016learning, brys2015reinforcement}) focused on the use of state-action recordings as demonstration. This is infeasible in the case of video data, where only state information is available and the demonstration actions are not explicitly provided and often can not be derived either (as is the case with running). More recently, various methods for state-only demonstrations have been proposed (e.g. \cite{peng2018deepmimic, liu2017imitation}). However, all of these methods target the imitation of demonstrations. 
Our work is employing potential-based reward shaping which uses the demonstrations to speed up the learning, but is also able to overcome any sub-optimalities in the demonstration rather than purely imitating them. \section{Simulation Environment} \label{sec:domain} \begin{figure} \centering \includegraphics[width=0.40\textwidth]{img/opensim3.jpg} \caption{A screenshot of the "Learning to Run" simulation environment} \label{img:environment} \end{figure} The simulation environment has been provided by the "Learning to Run" competition and is based on the OpenSim environment employing the Simbody physics engine. The environment simulates a three-dimensional race course with small obstacles, along which a humanoid robot with 6 joints (ankle, knee, and hip on two legs) and corresponding muscles is running (see Figure \ref{img:environment}). The actions of the running humanoid robot are excitation values applied to the muscles implemented in the robot model. The next state of the environment is computed by the physics engine based on the resulting muscle activations, forces, velocities and positions of the joints. The OpenSim \cite{kidzinski2018learningtorun} model environment represents the robot state using a vector of 41 features: \begin{itemize} \item position of the pelvis (rotation, x, y) \item velocity of the pelvis (rotation, x, y) \item rotation of each ankle, knee and hip (6 values) \item angular velocity of each ankle, knee and hip (6 values) \item position of the center of mass (2 values) \item velocity of the center of mass (2 values) \item positions of head, pelvis, torso, left and right toes, left and right talus (14 values) \item strength of left and right psoas (a muscle at the lower spine) \item next obstacle: x distance from the pelvis, y position of the center relative to the the ground, radius. \end{itemize} The reward of an agent is provided at each simulation step and is the distance covered in the run minus the muscle strain as computed by the simulation environment. \section{Baseline Agent} When designing the baseline agent, we combined selected techniques from the top 10 competition entries \cite{kidzinski2018l2rsolutions} with further optimizations. In this section, we summarize the most beneficial techniques used, all of which are taken from various contributions published in \cite{kidzinski2018l2rsolutions}. In all exerimental results presented in the remainder of this paper, the experiments have been repeated 5 times, and the graphs show the standard error from the mean. The RL parameter choice was $\alpha = 0.08$ and $\gamma = 0.9$, which have been determined experimentally. \subsection{State representation} The original state representation provided by the competition software contained 41 features, described in Section \ref{sec:domain}. In our state representation we added 71 features, including: \begin{itemize} \item Two-dimensional coordinates of key body positions relative to the pelvis at the center point (0,0). \item Two-dimensional velocity and acceleration vectors for key body points. \end{itemize} The new state representation allowed us to significantly speed up the learning process as seen on Figure \ref{img:centerandfeatures}. 
\begin{figure} \includegraphics[width=0.47\textwidth]{img/modified.jpg} \caption{Significant learning speed increase after adding velocity and acceleration features and centering the coordinate system at the pelvis position} \label{img:centerandfeatures} \end{figure} \subsection{Additional training experience} After running a simulation episode, we trained the RL agent with additional mirrored data, which represented the agent's experience during the episode reflected along the $xy$ plane. This adds valuable training data for the value estimator (i.e. the critic), since the task is symmetrical. Figure \ref{img:mirrored} shows the resulting performance improvement. \begin{figure} \includegraphics[width=0.47\textwidth]{img/mirrored.jpg} \caption{Speeding up the learning process by adding mirrored data} \label{img:mirrored} \end{figure} \subsection{Repeating the chosen action} Each time the running agent chooses an action, this action is repeated three times. Because we employed an actor-critic method, this reduced the number of computations needed to generate the next action during an episode by a factor of three. The resulting performance gain can be seen in Figure \ref{img:flipaction}. \begin{figure} \includegraphics[width=0.47\textwidth]{img/flip.jpg} \caption{Performance gains when repeating each action three times} \label{img:flipaction} \end{figure} \subsection{Reducing state resolution} In this optimization step, all the state representation data was changed from {\it double} to {\it float}. This resulted in a speed-up of the computations and a somewhat smaller state space, while also reducing the precision of the state representation. Figure \ref{img:doublefloat} shows the resulting performance increase. \begin{figure} \includegraphics[width=0.47\textwidth]{img/Speedup.jpg} \caption{Switching from double to float} \label{img:doublefloat} \end{figure} \subsection{Neural network topology} After applying all of the techniques above, we compared 5 different network architectures by arbitrarily varying the number of layers and neurons per layer. The results are presented in Figure \ref{img:architectures}. For our baseline agent we chose the best performing architecture, using 5 layers with 128 neurons each. \begin{figure} \includegraphics[width=0.47\textwidth]{img/6.jpg} \caption{Various neural network topologies: LxN denotes L layers with N neurons per layer} \label{img:architectures} \end{figure} \section{Reward Shaping from Video Data} After designing our baseline, we added potential-based reward shaping from video data taken from arbitrary YouTube videos depicting running humans and human-like characters. In this section we describe how the potential function was generated. \subsection{Potential function} The overall potential function is defined as the sum of potential functions for every body part: pelvis, two knees and two feet. Following the potential-based reward shaping approach, an additional reward is given to the agent on each simulation step, corresponding to the change in potentials of the source and target state. We considered the following three different potential functions for each body part (knee and foot) in our research, all of them based on the inverse of the distance between the respective body part coordinates in the video-generated data and those of the humanoid robot.
The three potential functions represent three different inverse distance functions: \begin{itemize} \item PF1: $\frac{1}{dx + dy}$ \item PF2: $\frac{1}{\sqrt{dx^2 + dy^2}} $ \item PF3: $\frac{1}{dx^2 + dy^2}$ \end{itemize} where $dx$ ($dy$) is the absolute difference between the x (y) coordinate of the respective body part taken from the video data and the x (y) coordinate of the body part of the humanoid robot. \subsection{Data collection} For our potential function we have used the following three sources of video data: \begin{itemize} \item A video of a cartoon character running (see Figure \ref{img:cartoon} for a screenshot) \item A video of a running character in a computer game (see Figure \ref{img:videogame} for a screenshot) \item A video of a running human (see Figure \ref{img:runninghuman} for a screenshot) \end{itemize} \begin{figure} \centering \includegraphics[width=0.3\textwidth]{img/cartoon.jpg} \caption{Screenshot from a video depicting a cartoon character running (taken from http://y2u.be/2y6aVz0Acx0)} \label{img:cartoon} \end{figure} \begin{figure} \centering \includegraphics[width=0.3\textwidth]{img/videogame.jpg} \caption{Screenshot from a video depicting a computer game character running (taken from http://y2u.be/YbYOsE7JyXs)} \label{img:videogame} \end{figure} \begin{figure} \centering \includegraphics[width=0.3\textwidth]{img/runninghuman.jpg} \caption{Screenshot from a video depicting a human running (taken from http://y2u.be/5mVgThl-yMU)} \label{img:runninghuman} \end{figure} Each of the sources was used to define a potential function. The performance of the resulting potential functions in RL is compared in Figure \ref{img:var_potential}. Note that the agents were not trained until final convergence, which would require much more time and, based on the theoretical properties of potential-based reward shaping, would ultimately reach the same performance. In each source we recorded the positions of the two knees and the two feet relative to the pelvis as two-dimensional coordinates. The recording frequency was four positions per half step. The resulting coordinates were normalized to match the scale of the OpenSim simulation. While in our work the extraction of the coordinates was done manually, algorithms to accurately extract body part positions in images with a clear view of the body do exist (e.g. \cite{toshev2014deeppose,guler2018densepose}), and we intend to use these in future work. \subsection{Selecting the data source and the potential function} We first compared the performance of the potential functions based on the three videos and the inverse distance measure PF2. The results for this experiment are shown in Figure \ref{img:differentvideo}, and demonstrate that the human video is the best data source for the reward shaping. \begin{figure} \includegraphics[width=0.47\textwidth]{img/differentvideo.jpg} \caption{Comparing the three videos as a data source for the potential function based on PF2} \label{img:differentvideo} \end{figure} After selecting the running human video as the data source, we compared the three different potential functions as depicted in Figure \ref{img:var_potential}. The results show that PF3 performs best. \begin{figure} \includegraphics[width=0.47\textwidth]{img/pot_functions.JPG} \caption{Comparing three potential functions} \label{img:var_potential} \end{figure}
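A minimal sketch of how the overall potential can be computed from the video-derived coordinates, assuming PF3 and pelvis-relative coordinates for each tracked body part, is given below; the dictionary layout and the small epsilon guarding against division by zero are illustrative assumptions.
\begin{verbatim}
# Sketch of the overall potential using PF3 (illustrative only).
EPS = 1e-6  # guards against division by zero

def potential(robot_xy, video_xy):
    # robot_xy, video_xy: dicts mapping body part -> (x, y),
    # both relative to the pelvis.
    phi = 0.0
    for part, (vx, vy) in video_xy.items():
        rx, ry = robot_xy[part]
        dx, dy = abs(rx - vx), abs(ry - vy)
        phi += 1.0 / (dx ** 2 + dy ** 2 + EPS)   # PF3 term
    return phi
\end{verbatim}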
\section{Evaluation of Video-based Reward Shaping} Figure \ref{img:baseline_vs_shaping} shows the comparison of our chosen reward shaping approach (PF3) to the RL baseline. The results show that the reward shaping speeds up the learning significantly, reaching double the running speed at 12 hours of training. The end result after 24 hours of training still shows a significant advantage of the reward shaping approach. It is also worth noting that the demonstration video is of a running human who is using his arms, while the simulation model does not include arms. \begin{figure} \includegraphics[width=0.47\textwidth]{img/rewardshaping.jpg} \caption{Performance comparison between the baseline and the reward shaping approach} \label{img:baseline_vs_shaping} \end{figure} An important advantage of potential-based reward shaping is the theoretical guarantee that the shaping will not change the optimal policy. In order to demonstrate this advantage in our context, we used a weak running robot generated by the baseline RL agent after 12 hours of training as a sub-optimal data source for the potential function. Clearly, the resulting agent is not running optimally, and its feet and knees will not be in optimal positions most of the time. We then trained our RL agent with the reward shaping generated from these sub-optimal coordinates (using PF3), and compared the performance to the weak runner. The results are shown in Figure \ref{img:suboptimal}, and demonstrate that the RL agent is able to overcome the suboptimal performance of the data source. In fact, after 20 hours of training, the performance is more than double that of the suboptimal running agent. Also, note that the suboptimal shaping did not hurt the learning performance significantly: after 12 hours of training the shaped agent performs comparably to the baseline agent with 12 hours of training. \begin{figure} \includegraphics[width=0.47\textwidth]{img/sim.jpg} \caption{Performance of the reward shaping approach with suboptimal data. The dotted vertical line represents 12 hours of training (the training time of the shaping source).} \label{img:suboptimal} \end{figure} \section{Conclusions} In this paper, we presented a method to use videos of human and human-like running to shape the reward of an RL agent learning to run. Our results demonstrate that a significant improvement in learning speed can be achieved by our proposed method, as compared to a strong baseline which we designed by combining selected techniques of the top ten entries to the "Learning to Run" competition at NIPS 2017. In future work, we intend to employ automated body pose extraction methods such as the one presented in \cite{guler2018densepose} and widen our investigation to other humanoid movements apart from running, e.g. jumping.
\section{Introduction} The solar corona is filled with dynamic magnetized plasma, and hosts a variety of disturbances/waves that can be driven by motions of the ubiquitous plasma. One of the most spectacular waves is the extreme ultraviolet (EUV) wave, which appears as a traveling coronal disturbance emanating from its erupting source region in the solar atmosphere. Hints about the existence of EUV waves were initially deduced from indirect evidence of sympathetic solar activities, type II radio bursts, and H$\alpha$ Moreton waves. Later, EUV waves were finally detected directly with the Extreme Ultraviolet Imaging Telescope (EIT; Delaboudini{\`e}re et al. 1995) on-board the SOHO spacecraft (Moses et al. 1997; Thompson et al. 1998; Warmuth 2015) in 1997. EUV waves can provide potential diagnostics of the coronal magnetic field strengths and can be used to estimate coronal plasma parameters that are hard to observe directly (Ballai 2007; Kwon et al. 2013). Although there is ongoing debate about the physical nature of these waves, an EUV wave is generally interpreted either as a true wave or as a pseudo wave. The interpretations as true waves include linear/nonlinear fast-mode magnetohydrodynamic (MHD) waves, slow-mode waves, and soliton-like waves (Ofman \& Thompson 2002; Wills-Davey et al. 2007, Wang et al. 2009). On the other hand, the pseudo-wave interpretations include, e.g., magnetic field-line stretching, Joule heating in current shells, and continuous small-scale reconnections (Chen et al. 2002; Delaboudini{\`e}re et al. 2008; Attrill et al. 2007). Benefiting from high-quality observations from the Solar Terrestrial Relations Observatory (STEREO; Kaiser et al. 2008) and the Solar Dynamics Observatory (SDO; Pesnell et al. 2012), EUV waves have been best interpreted as a bimodal composition of an outer fast-mode MHD wave and an inner non-wave component of coronal mass ejections (CMEs) in a hybrid model (Liu et al. 2010, Chen \& Wu 2011, Downs et al. 2012, Liu \& Ofman 2014, Mei et al. 2020). More details about the nature of EUV waves can be found in recent reviews (Gallagher \& Long 2011, Patsourakos \& Vourlidas 2012; Liu \& Ofman 2014, Warmuth 2015, Chen 2016, Long et al. 2017, Shen et al. 2020). It is widely recognized by now that EUV waves are associated with a variety of energetic eruptions (e.g. CMEs and flares), and small-scale EUV waves are closely associated with small-scale ejections (e.g. jets and eruptions of mini-filaments) (Zheng et al. 2012a, b). In addition, it is suggested that the formation of an EUV wave strongly depends on the rapid expansion of overlying coronal loops ahead of an erupting core (Zheng et al. 2019, 2020). To date, each single eruption has only been observed to trigger a single EUV wave. In this study, we provide new observational evidence for two scenarios of twin EUV waves (TEWs) in a single eruption. TEWs in the first scenario were separately associated with a filament eruption and its precursor jet, while those in the second scenario were successively related to another filament eruption. Hence, we refer to these cases as "fraternal TEWs" and "identical TEWs", respectively. In the following text, the term "EUV wave" is simply abbreviated as ``wave''.
\section{Observations} For the first scenario, the filament eruption occurred beyond the northwest limb on 2010 August 18, and involved two EUV waves, a C4.5 flare, and a partial halo CME ({\url{https://cdaw.gsfc.nasa.gov/CME\_list/UNIVERSAL/2010\_08/univ2010\_08.html}}) with a linear speed of 1471 km s$^{-1}$. The source region, located at the mixed polarities of the National Oceanic and Atmospheric Administration (NOAA) Active Regions (ARs) 11093 and 11099, was confirmed by the ARs on the solar disk on August 14 in Helioseismic and Magnetic Imager (HMI; Scherrer et al. 2012) magnetograms. For the second scenario, the filament eruption occurred at AR 11228 near the northeast limb on 2011 June 1. The eruption involved two EUV waves and a C2.6 flare. During the eruption, there was a slow CME (\url{https://cdaw.gsfc.nasa.gov/CME\_list/UNIVERSAL/2011\_06/univ2011\_06.html}) with a linear speed of 259 km s$^{-1}$, which was better seen in the view of COR1 (\url{https://cor1.gsfc.nasa.gov/catalog}). However, the CME originated from another eruption in AR 11227 in the southern hemisphere. Besides the view provided by SDO, the two scenarios were also captured by the STEREO spacecraft A and B, respectively. The positions of the STEREO spacecraft (A and B) and SDO (Earth) for the two cases are shown in Figure 1. We used the EUV observations from the Atmospheric Imaging Assembly (AIA; Lemen et al. 2012) on-board SDO and from the Extreme Ultraviolet Imager (EUVI; Howard et al. 2008) on STEREO. The AIA instrument provides seven EUV wavelengths that cover a wide range of temperatures, and its images have a field of view (FOV) of 1.5 $R_\odot$ with a pixel resolution of 0.6$"$ and a cadence of 12 s. The EUVI images have a pixel resolution of 1.58$"$, and their cadences are 2.5 minutes for both the 195~{\AA} and 304~{\AA} passbands. In addition, the filaments and the associated jet observed in the first scenario were also recorded in H$\alpha$ filtergrams from the Solar Magnetic Activity Research Telescope (SMART; UeNo et al. 2004) with the Solar Dynamics Doppler Imager (Ichimoto et al. 2017) at Hida Observatory. The H$\alpha$ images have a cadence of 1 minute and a pixel size of $\sim$1$"$. To better display the faint wavefronts in the first scenario, the EUV images are processed with an intensity-normalization method, in which the value at each pixel is normalized by the intensities at the same pixel across a set of the original images. For the diffuse wavefronts in the second scenario, the EUV images are differenced against a fixed pre-event image and against the preceding image at a fixed time gap. The evolutions of the EUV waves and the associated erupting cores and loops are analysed with the time-slice approach (Liu et al. 2010), in which a time-distance plot is constructed by stacking slices along a selected direction for a set of images. Nine slices (S0-S8) are chosen, and their start points are indicated by triangles. In the first scenario, S0 is in the original ejection direction of the jet, S1 is in the vertical northward direction, S2 is an arc at the height of 0.1 $R_\odot$ over the northwest limb, S3 is in the lateral expansion direction of the north end of L1, S4 is an arc path at the height of 0.1 $R_\odot$ over the southwest limb, and S5 is in the direction of $20^\circ$ counted clockwise from the south in the perspective of EUVI-A.
In the second scenario, S6 is in the direction of the erupting filament, and S7-S8 are in the vertical southward directions in the views of AIA and EUVI-B, respectively. The speeds are obtained by a linear fit (with the {\it linfit.pro} function), assuming a measurement uncertainty of 4 pixels ($\sim1.74$ Mm) for the selected points. On the other hand, the coronal magnetic field lines are also extrapolated with {\it pfss\_trace\_field.pro} and {\it pfss\_draw\_field.pro}, parts of the PFSS package of SolarSoftWare (see the template routine {\it pfss\_sample1.pro}). \section{Results} \subsection{Fraternal TEWs} The fraternal TEWs reported here occurred on 2010 August 18, and the pre-eruption evolution is shown in SMART H$\alpha$, AIA 304~{\AA} and EUVI-A 304~{\AA} (Figure 2). Half an hour prior to the eruption ($\sim$04:41 UT), two small filaments (white arrows in panels (a)-(b)) appeared suddenly and rose slowly from the source region behind the northwest limb. The two small filaments were clearly disconnected and had different chiralities one hour earlier in the disk view of EUVI-A (white arrows and black dotted lines in panel (c)). Interestingly, a jet (marked by cyan arrows) and a new long filament (green arrows) formed during the filament ascent, and the jet clearly originated from the junction (see the blue arrow) of the two small filaments in the disk view of EUVI-A (panels (d)-(f)). It is likely that the jet and the newly-formed long filament resulted from the collision and subsequent magnetic reconnection of the two small filaments. Immediately, this newly-formed long filament lifted southwestwards in a non-radial direction (green arrows) (panels (g)-(h)). On the other hand, the jet moved northwards (cyan arrows in bottom panels), and had an initial speed of $\sim$207 km s$^{-1}$ (panel (h1)) along S0 (panel (h)). The measured jet speed is highly supersonic and is consistent with the Alfv\'en speed (an estimated value of $\sim$210 km s$^{-1}$, assuming an electron number density of $10^{8}$ cm$^{-3}$ and a radial coronal magnetic field component of 1 G at a height of 0.06 $R_\odot$). Ahead of the ejecting jet, a faint wave (W1) formed and is visualised in intensity-normalized images of AIA 211~{\AA} and EUVI-A 195~{\AA} (Figure 3 and Movie S1). W1 mainly propagated northeastwards in the lower corona, and was much clearer in the disk view of EUVI-A, due to the projection effect (red arrows in panels (a)-(b)). A propagating loop-like dimming structure (the yellow arrow) is visualised in difference images of AIA 193~{\AA} (panel (c)), which is an indicator of the expansion of a group of overlying coronal loops (L1). In the disk view, the dimmings around the north end of L1 follow the faint wavefront (white and red arrows in panel (d)), which reflects a close relation between the propagating W1 and the expanding L1. In the time-distance plot (panel (e)) along S1, the expansion speed of L1 is estimated to be a supersonic $\sim$167 km s$^{-1}$ (dotted lines with asterisks), and there is also a close temporal and spatial relationship between the jet and the onset of the L1 expansion (the blue asterisk and the green arrow). In the time-distance plots of intensity-normalized images along S2 and S3 (panels (f)-(g)), it is shown that W1 started at $\sim$04:58 UT (blue asterisks) with a nearly constant speed of $\sim$230 km s$^{-1}$, and the wavefront in EUVI-A 195~{\AA} is stronger than its counterpart in AIA 211~{\AA}.
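As a quick sanity check, the Alfv\'en speed quoted above can be reproduced with a few lines of Python, assuming CGS units and a pure hydrogen plasma (the mean molecular weight of unity is our own simplifying assumption):
\begin{verbatim}
import math

B = 1.0            # radial field strength, G
n_e = 1e8          # electron number density, cm^-3
m_p = 1.6726e-24   # proton mass, g
rho = n_e * m_p    # mass density, g cm^-3 (pure hydrogen)

v_A = B / math.sqrt(4.0 * math.pi * rho)   # Alfven speed, cm s^-1
print(v_A / 1e5)   # ~218 km s^-1, close to the ~210 km s^-1 estimate
\end{verbatim}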
While W1 propagated northwards, the newly-formed long filament kept rising and eventually erupted (Figure 4). The filament (green arrows) lifted southwestwards and quickly vanished in AIA 171~{\AA} (panels (a)-(d)). However, a hot channel appeared in AIA 131 and 94~{\AA} (white arrows in panels (e)-(f)), which is an indicator of a high-temperature flux rope (Zhang et al. 2012). Meanwhile, a current sheet containing plasma blobs formed beneath the flux rope (black arrows in panels (d)-(f)), which was studied in detail by Takasao et al. (2012). The legs of the erupting flux rope (white arrows) remained anchored in the solar surface, and a post-eruption cusp structure (cyan arrows) appeared in the eruption center (panels (h)-(i)). Surprisingly, the flux rope eruption was followed by another wave (W2) that mainly travelled southwards (see Figure 5 and Movie S2). W2 was much stronger than W1, which is apparent in intensity-normalized images in AIA 211~{\AA} and EUVI-A 195~{\AA}. W2 was closely associated with the expansion of another group of coronal loops (L2; cyan arrows in top panels). W2 propagated southwards along the limb, as seen in the AIA view, and displayed an arc shape in the disk view (yellow and red arrows in panels (c)-(d)). In the time-distance plots along S4 and S5 (panels (e)-(f)), W2 shows a nearly constant velocity of $\sim$390 km s$^{-1}$ (yellow and red lines with asterisks) that is highly supersonic and likely super-Alfv\'enic, and it set off at $\sim$05:20 UT (blue asterisks), closely following the flux rope eruption. \subsection{Identical TEWs} The identical TEWs occurred on 2011 June 1, and the related eruption is shown in Figure 6. The eruption region (the dotted box) comprises a central parasitic negative polarity (N1), surrounding positive polarities that form a circular neutral line, and a sunspot with negative polarity (N2) located at the west boundary (panel (a)). It is apparent that the filament consisted of two branches (white arrows) that lay along the circular neutral line and were rooted at the central N1 and the strong positive polarity (P1) in the north (panels (b)-(c)). Likely due to some disturbances from P1, the filament erupted as a jet (red arrows in panels (d)-(f)). It is noteworthy that the escaping jet was guided by a bundle of coronal loops (L3; the yellow arrow) that connected P1 and N2, which is best seen in AIA 94~{\AA} (panel (d)). After the eruption, a cusp structure (the orange arrow) appeared connecting P1 and the remote N2 over the flare loops (the green arrow), as a result of the explosion of the overlying L3 (panel (g)). Interestingly, the eruption failed and left a bunch of filament threads rooted at P1 (red arrows in panels (h)-(i)), which indicates confinement by a higher group of coronal loops. Following the eruption, this taller group of confining loops (L4; dashed curves) is apparent in the running difference images, and interestingly, two waves successively formed within a short period (see Figure 7 and Movie S3). The first wave (W3) emerged along the expansion direction of L3, and the initially narrow wavefront then became diffuse (red arrows in panels (a)-(d)). Following the expansion of L4, the second wave (W4) formed at the south flank of the expanding L4, $\sim$150 Mm behind W3 (the red and cyan arrows in panel (d)). The front of W4 developed into an arc shape some minutes later (the cyan arrow in panel (f)).
Note that a strong wave (yellow arrows) came from the south, and interacted with the southwards-propagating W3-W4 (panels (e)-(f)). From the intensity-normalized time-distance plot in AIA~193~{\AA} along S6 (panel (g)), the speed of the erupting filament is estimated to be $\sim$250 km s$^{-1}$, again a highly supersonic value, after it extricated itself from the L3 confinement; the filament then quickly began to fall back (the pink arrow) at $\sim$02:50 UT (the blue vertical line), likely due to the restriction by L4. In the running-difference time-distance plot in AIA~193~{\AA} along S7 (panel (h)), W3 and W4 started at $\sim$02:44 and $\sim$02:49 UT (blue asterisks), respectively, the speed of W3 ($\sim$495 km s$^{-1}$), likely both supersonic and super-Alfv\'enic, was higher than that of W4 ($\sim$370 km s$^{-1}$), and the two waves encountered the strong wave from the south (the yellow arrow). Due to the lower cadence, the wavefronts of W3-W4 consist of some discrete brightenings (red and cyan arrows) in the running-difference time-distance plot along S8 (panel (i)). \section{Conclusions and Discussion} The observational results presented above are evidence for two distinct scenarios of TEWs that successively occurred in a single eruption within a short interval ($\sim$22 minutes and $\sim$5 minutes, respectively), and the TEWs were confirmed from two different viewpoints by SDO and STEREO. W1-W4 had linear speeds in the range of $\sim$230-500 km s$^{-1}$, which are highly supersonic and likely super-Alfv\'enic, giving hints about their nature and being consistent with the interpretation as fast-mode MHD waves. The two scenarios show two different formation situations of TEWs. In the first scenario, W1 and W2 were separately associated with two erupting portions (the precursor jet and the first filament eruption) that simultaneously formed during the magnetic reconnection process between two small filaments. In the second scenario, both W3 and W4 were associated with the same filament eruption in the form of a blowout jet (Moore et al. 2010, Li et al. 2018). Therefore, we refer to the two scenarios as "fraternal TEWs" and "identical TEWs", respectively. How did these TEWs form? Firstly, we superimposed the magnetic field lines extrapolated with the potential field source surface (PFSS; Schrijver \& De Rosa 2003) model on the intensity-normalized images for the fraternal TEWs, and on difference images for the identical TEWs (see Figure 8). For the fraternal TEWs (top panels), W1 and W2 (yellow and red arrows) appear at the flanks of the yellow and red loops at different times. For the identical TEWs (bottom panels), the loop-like dimmings (the white arrow) and remote dimmings (purple arrows) probably indicate the explosion of the orange loops and the expansion of the cyan loops. Hence, the yellow, red, orange, and cyan extrapolated loops correspond to the observed L1, L2, L3, and L4, respectively. Secondly, we proposed a generation sketch for the two distinct scenarios of TEWs (Figure 9), in which the selected important extrapolated coronal loops (yellow, red, orange, and cyan lines) are superimposed on the extrapolated magnetogram (top panels) and the HMI magnetogram (bottom panels). For the fraternal TEWs (top panels), two small filaments (brown and pink lines) slowly rose and interacted with each other (the yellow star symbol), which simultaneously produced a jet and a long filament.
The powerful jet moved northwards (the dashed arrow) and punched the northern overlying yellow L1, and the rapid lateral expansion of L1 gave birth to W1 (the cyan shade). On the other hand, the newly-formed long filament (the purple rope) erupted and forced the southern overlying red L2 to expand suddenly, and W2 formed (the blue shade) at the east flank of the expanding L2. For the identical TEWs (bottom panels), the filament eruption initially pushed the lower overlying orange L3, and W3 formed (the yellow shade). After the explosion of L3, the erupting filament hit the higher overlying cyan L4, and W4 formed (the green shade) at the south flank of the expanding L4. Therefore, we suggested that the four waves were driven by the rapid expansions of overlying loops (Figures 10 and 11 in the Appendices). TEWs in a single eruption are two fast-mode MHD waves, which is different from the two components (the fast-mode wave and the coronal reconfiguration signature) in the hybrid model (see e.g. Liu et al. 2014, Long et al. 2017). On the other hand, there exist reports of multiple-wave phenomena, e.g. the quasi-periodic fast-propagating (QFP) waves and the homologous waves. QFP waves propagate as wave trains at the local Alfv\'{e}n speed along open field lines, and it is believed that QFP waves are excited by repetitive flaring energy releases (Liu et al. 2011, 2012). Meanwhile, homologous waves occur successively from the same place within a few hours or days, but they are generated by a series of distinct eruptions from the same region (Kienreich et al. 2011, Zheng et al. 2012b). In this study, we report TEWs triggered by the rapid expansion of overlying loops, with both waves closely associated with a single eruption. Hence, TEWs are also intrinsically different from the QFP waves and homologous waves. Why are TEWs rare, though they have the same physical nature and driving mechanism as a single wave? It is likely because TEWs easily mix together. If two groups of coronal loops lie close to each other or expand in similar directions, the two generated waves will mix and become hardly distinguishable from each other. If a wave-related strong eruption invokes a series of groups of coronal loops, then a series of wave segments will compose a large arc-like or circular wavefront. Hence, we further suggest that some visible wavefronts consist of a series of indistinguishable wave segments that are triggered by a series of groups of coronal loops. The discovery of TEWs likely indicates that more TEWs can be detected in the future. We point out that the key to the formation of TEWs is that, in a single eruption, the expansion is divided into the directions of two separate groups of coronal loops. Plasma motions are ubiquitous in the solar corona, which consists of building blocks of coronal loops; hence, EUV waves should be prevalent in the solar corona. However, the number of detectable EUV waves is far smaller than expected, possibly because most weak and small EUV waves are submerged by the surrounding ubiquitous coronal loops, or because some waves are at lower temperatures than the current filters can detect. \begin{acknowledgments} SDO is a mission of NASA's Living With a Star Program. We gratefully acknowledge the usage of data from the SDO, STEREO, and from the ground-based SMART project. This work is supported by grants of NSFC 11790303 and 12073016. \end{acknowledgments}
\section{Introduction} In recent years, many computer vision (CV) researchers have made efforts to design CV-oriented vision Transformers that surpass the performance of convolutional neural networks (CNNs). Due to a high capability in modeling long-range dependencies, vision Transformers achieve prominent results in diversified vision tasks, such as image classification \cite{ViT,Deit,TNT,Swin,Twins,CSWin}, semantic segmentation \cite{Segformer,zheng2021rethinking,wang2021max} and object detection \cite{DETR,zhu2020deformable,dai2021up}. However, the powerful performance usually comes at the cost of heavy computational complexity. \begin{figure} \centering \includegraphics[height=4cm]{throughput_latency} \caption{Comparison of throughput and latency on ImageNet-1K classification. The throughput and the latency are tested based on the PyTorch framework with a V100 GPU and the TensorRT framework with a T4 GPU, respectively. } \label{fig:throughput_latency} \end{figure} ViT \cite{ViT} first introduces the Transformer to image recognition tasks. It splits the whole image into patches and feeds each patch as a token into the Transformer. However, the patch-based Transformer is hard to deploy due to the computationally inefficient full-attention mechanism. To relieve this problem, Swin \cite{Swin} proposes the window-based self-attention to limit the computation of self-attention to non-overlapping sub-windows. Obviously, the window-based self-attention helps to reduce the complexity to a great extent, but the shifted-window operator for building connections among the windows brings difficulty for ONNX or TensorRT deployment. Twins \cite{Twins} takes advantage of the window-based self-attention and the spatial reduction attention from PVT \cite{PVT_v1} and proposes the spatially separable self-attention. Although Twins is deployment-friendly and achieves outstanding performance, its computational complexity is hardly reduced. CSWin \cite{CSWin} shows state-of-the-art performance via the novel cross-shaped window self-attention, but its throughput is low. Albeit with varying degrees of progress, most of the recent successes of these famous vision Transformers are accompanied by huge resource demands. To overcome the aforementioned issues, we propose an efficient Transformer backbone, called {\bf Separable Vision Transformer (SepViT)}, which captures both local and global dependencies in a sequential order. A key design element of SepViT is its {\bf depthwise separable self-attention} module, as shown in Fig. \ref{fig:SepViT}. Inspired by the depthwise separable convolution in MobileNets \cite{MobileNet_v1,MobileNet_v2,Mobilenet_v3}, we re-design the self-attention module and propose the depthwise separable self-attention, which consists of a depthwise self-attention and a pointwise self-attention that correspond to the depthwise and pointwise convolutions in MobileNets, respectively. The depthwise self-attention is used to capture the local features within each window, while the pointwise self-attention builds connections among windows, which notably improves the expressive power. Moreover, to get a global representation of each local window, we develop a novel {\bf window token embedding}, which can model the attention relationship among windows with negligible cost. Furthermore, we also extend the idea of grouped convolution from AlexNet \cite{AlexNet} to our depthwise separable self-attention and present the {\bf grouped self-attention} to further improve the performance.
To demonstrate the effectiveness of SepViT, we conduct a series of experiments on some typical vision tasks, including ImageNet-1K \cite{ImageNet-1K} classification, ADE20K \cite{ADE20K} semantic segmentation and COCO \cite{COCO} object detection and instance segmentation. The experimental results show that SepViT can achieve a better trade-off between performance and latency than other competitive vision Transformers \cite{PVT_v1,Swin,Twins,CSWin}. As shown in Fig. \ref{fig:throughput_latency}, SepViT achieves better accuracy under the same latency constraint and costs less inference time than the methods with the same accuracy. Furthermore, SepViT can be conveniently applied and deployed since it only contains universal operators (e.g., transpose and matrix multiplication). To sum up, the contributions of our work can be summarized as follows: \begin{enumerate} \item We propose the Separable Vision Transformer (SepViT) with a depthwise separable self-attention. It achieves local information communication within the windows and global information exchange among the windows in a single Transformer block. \item We propose the window token embedding to learn a global feature representation of each window, which helps SepViT to model the attention relationship among windows with negligible computational cost. \item We extend the depthwise separable self-attention to the grouped self-attention in SepViT. It can capture more contextual concepts across multiple windows and achieve better performance. \end{enumerate} \section{Related work} \subsection{Vision Transformer} In the computer vision field, CNNs have been dominant for decades due to their advantage of spatial inductive biases. Later, in order to model the global dependencies of pixels, ViT \cite{ViT} introduces the Transformer to computer vision for the first time and achieves excellent performance on the image classification task. In quick succession, a series of vision Transformers have been produced based on ViT. DeiT \cite{Deit} introduces the knowledge distillation scheme and proposes the data-efficient image Transformer. T2T-ViT \cite{T2T} progressively structurizes the image into tokens by recursively aggregating neighboring tokens into one token. TNT \cite{TNT} proposes the inner and outer Transformers to model the relationship of the word embeddings and the sentence embeddings, respectively. CPVT \cite{CPVT} proposes the conditional position encoding, which is conditioned on the local neighborhood of input tokens and is adaptable to arbitrary input sizes. Recently, PVT \cite{PVT_v1} and Swin \cite{Swin} concurrently propose hierarchical architectures which are friendly for dense prediction tasks, such as object detection, semantic and instance segmentation. Meanwhile, Swin \cite{Swin} as a pioneer proposes the window-based self-attention to compute attention within local windows. Soon after, Twins \cite{Twins} and CSWin \cite{CSWin} propose the spatially separable self-attention and the cross-shaped window self-attention, respectively, based on the hierarchical architecture. On the other hand, some researchers incorporate the spatial inductive biases of CNNs into the Transformer. CoaT \cite{CoaT}, CVT \cite{CvT} and LeViT \cite{LeViT} introduce convolutions before or after self-attention and obtain pleasing results.
Regarding the design of lightweight Transformers, Mobile-Former \cite{Mobile-Former} and MobileViT \cite{MobileViT} combine Transformer blocks with the inverted bottleneck blocks of MobileNet-V2 \cite{MobileNet_v2} in parallel and in series, respectively. Besides, another direction of research \cite{so2019evolved,Autoformer,Glit,Bossnas} is to automatically search the structural details of the Transformer with neural architecture search \cite{zoph2016neural,Bignas} technology. \subsection{Lightweight Convolutions} Many lightweight and mobile-friendly convolutions have been proposed for mobile vision tasks. Among these, grouped convolution was first proposed in AlexNet \cite{AlexNet}, which groups the feature maps and conducts distributed training. The most representative work on mobile-friendly convolutions is the MobileNet family \cite{MobileNet_v1,MobileNet_v2,Mobilenet_v3} with its depthwise separable convolution. The depthwise separable convolution contains a depthwise convolution for spatial information communication and a pointwise convolution for information exchange across the channels. As time goes on, plenty of variants based on the aforementioned works have been developed, such as \cite{ShuffleNet,ShuffleNet_v2,EfficientNet,GhostNet}. In our work, we adapt the idea of depthwise separable convolution to the Transformer, aiming to reduce the Transformer's computational complexity without sacrificing performance. \section{Methodology: SepViT} In this section, we first illustrate the design overview of SepViT, and then discuss some key modules within the SepViT block. Finally, we provide the architecture specifications and variants with different FLOPs. \subsection{Overview} As illustrated in Fig. \ref{fig:SepViT}, SepViT follows the widely-used hierarchical architecture \cite{PVT_v1,Swin,Twins,CSWin} and the window-based self-attention \cite{Swin}. Besides, SepViT also employs the conditional position encoding (CPE) from \cite{CPVT,Twins}. For each stage, there is an overlapping patch merging layer for feature map downsampling, followed by a series of SepViT blocks. The spatial resolution is progressively reduced by 32$\times$ with stride 4 or 2 per stage, and the channel dimension is doubled stage by stage. It is worth noting that both local contextual concepts and global abstraction can be captured in a single SepViT block, while other works \cite{Swin,Twins} need two successive blocks to accomplish this local-global modeling. \begin{figure}[th] \centering \includegraphics[width=12cm]{SepViT} \caption{Separable Vision Transformer (SepViT). The top row is the overall hierarchical architecture of SepViT. The bottom row is the SepViT block and the detailed visualization of our depthwise separable self-attention and the window token embedding scheme.} \label{fig:SepViT} \end{figure} In the SepViT block, local information communication within each window is achieved by the depthwise self-attention (DWA), and global information exchange among the windows is performed via the pointwise self-attention (PWA). \subsection{Depthwise Separable Self-Attention} \subsubsection{Depthwise Self-Attention (DWA).} Similar to some pioneering works \cite{Swin,Twins}, SepViT is built on top of the window-based self-attention scheme. Firstly, we perform a window partition on the input feature map. Each window can be seen as an input channel of the feature map, while different windows contain diverse information.
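For clarity, a minimal sketch of such a window partition is given below, assuming the feature map is laid out as $B \times H \times W \times C$ and that the window size $M$ divides both $H$ and $W$; only plain reshape and permute operators are used.
\begin{verbatim}
import torch

def window_partition(x: torch.Tensor, M: int) -> torch.Tensor:
    # x: (B, H, W, C) feature map; M: window size dividing H and W
    B, H, W, C = x.shape
    x = x.view(B, H // M, M, W // M, M, C)
    x = x.permute(0, 1, 3, 2, 4, 5).contiguous()      # (B, H/M, W/M, M, M, C)
    return x.view(B, (H // M) * (W // M), M * M, C)   # (B, N windows, M*M tokens, C)
\end{verbatim}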
Different from previous works, we create a window token for each window, which serves as a global representation and is used to model the attention relationship in the following pointwise self-attention module. Then, a depthwise self-attention (DWA) is performed on all the pixel tokens within each window as well as its corresponding window token. This window-wise operation is quite similar to a depthwise convolution layer in MobileNets, aiming to fuse the spatial information within each channel. The implementation of DWA can be summarized as follows: \begin{equation} \label{eqution:DWA} \text{DWA}(z)= \text{Attention}(z \cdot W_Q, z \cdot W_K, z \cdot W_V) \; \end{equation} where $z$ denotes the feature tokens, consisting of the pixel tokens and the window token. $W_Q$, $W_K$, and $W_V$ denote the three Linear layers for query, key and value computation in a regular self-attention. Attention denotes a standard self-attention operator that works on local windows. \subsubsection{Window Token Embedding.} A straightforward solution to model the attention relationship among windows is to employ all pixel tokens. However, it would bring huge computational costs and make the whole model very complicated. To better establish the attention relationship among windows, we present a window token embedding scheme, which leverages a single token to encapsulate the core information of each sub-window. This window token can be initialized either as a fixed zero vector or as a learnable vector initialized to zero. While passing through DWA, the window token interacts with the pixel tokens in its window and thus learns a global representation of this window. Thanks to the effective window token, we can model the attention relationship among windows with negligible computational cost. \subsubsection{Pointwise Self-Attention (PWA).} The famous pointwise convolution in MobileNets is utilized to fuse the information from different channels. In our work, we imitate the pointwise convolution to develop the pointwise self-attention (PWA) module to establish connections among windows. PWA is mainly used to fuse the information across windows and obtain a final representation of the input feature map. More specifically, we extract the feature maps and window tokens from the output of DWA. Then, the window tokens are used to model the attention relationship among windows and generate the attention map after a LayerNormalization (LN) layer and a Gelu activation function. Meanwhile, we directly treat the feature maps as the value branch of PWA without any extra operation. With the attention map and the feature maps, which are in the form of windows, we perform an attention computation among the windows for global information exchange. Formally, the implementation of PWA can be depicted as follows: \begin{equation} \text{PWA}(z, wt) = \text{Attention}(\text{Gelu}(\text{LN}(wt)) \cdot W_Q, \text{Gelu}(\text{LN}(wt)) \cdot W_K, z) \; \end{equation} where $wt$ denotes the window token. Here, Attention is a standard self-attention operator but works on all of the windows of $z$. \subsubsection{Complexity Analysis.} \label{sec:complexity} Given an input feature with size $H \times W \times C$, the computational complexity of the multi-head self-attention (MSA) is $4HWC^2 + 2H^2W^2C$ in the global Transformer block of ViT \cite{ViT}.
The complexity of the MSA in a window-based Transformer with window size $M \times M$ (usually, $M$ is a common factor of $H$ and $W$, so the number of windows is $N=\frac{HW}{M^2}$) can be decreased to $4HWC^2 + 2M^2HWC$ in Swin \cite{Swin}. As for the depthwise separable self-attention in SepViT, the complexity consists of two parts, DWA and PWA.

{\bf DWA.} Building on top of a window-based self-attention, DWA shares a similar computational cost with it. Additionally, the introduction of window tokens causes an extra cost, but it is negligible compared to the overall cost of DWA. The complexity of DWA can be calculated as follows:
\begin{equation} \varOmega(\text{DWA}) = 3HWC^2 + 3NC^2 + 2N(M^2+1)^2C \; \end{equation}
where $3NC^2$ is the extra cost of encoding window tokens in Linear layers. $2N(M^2+1)^2C$ indicates the matrix multiplications involved in the self-attention for $N$ windows, where $M^2+1$ represents the $M^2$ pixel tokens in a window and its corresponding window token. Since the number of sub-windows $N$ is usually small, the extra cost caused by window tokens can be ignored.

{\bf PWA.} Since the window token summarizes the global information of a local window, the proposed PWA helps to efficiently perform information exchange among windows at the window level instead of the pixel level. Concretely, the complexity of PWA is given as follows:
\begin{equation} \varOmega(\text{PWA}) = HWC^2 + 2NC^2 + N^2C + NHWC \; \end{equation}
where $2NC^2$ represents the small cost of computing the query and key from $N$ window tokens, while the value branch is cost-free. $N^2C$ indicates the computation of generating the attention map with $N$ window tokens at the window level, which greatly reduces the computational cost of PWA. Finally, $NHWC$ indicates the matrix multiplication between the attention map and the feature maps.

\subsection{Grouped Self-Attention}
Motivated by the excellent performance of grouped convolution in visual recognition \cite{AlexNet}, we extend our depthwise separable self-attention with the idea of grouping and propose the grouped self-attention. As shown in Fig. \ref{fig:grouped_attn}, we splice several neighboring sub-windows to form a larger window, which is similar to dividing the windows into groups and conducting depthwise self-attention communication inside a group of windows. In this way, the grouped self-attention can capture long-range visual dependencies across multiple windows. The grouped self-attention incurs a certain extra cost compared with the depthwise separable self-attention but yields better performance. Ultimately, we apply the block with grouped self-attention to SepViT and run it alternately with the depthwise separable self-attention block in the later stages of the network.
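To make the comparison above concrete, the following Python sketch simply evaluates the complexity expressions of window-based MSA, DWA and PWA at one illustrative feature-map size (the sizes chosen here are assumptions for illustration, not values taken from the tables in this paper):

\begin{verbatim}
def window_msa_cost(H, W, C, M):
    # Window-based MSA in Swin: 4HWC^2 + 2M^2HWC
    return 4 * H * W * C**2 + 2 * M**2 * H * W * C

def dwa_cost(H, W, C, M):
    # DWA: 3HWC^2 (QKV) + 3NC^2 (window tokens) + 2N(M^2+1)^2 C (attention)
    N = (H * W) // (M * M)
    return 3 * H * W * C**2 + 3 * N * C**2 + 2 * N * (M**2 + 1)**2 * C

def pwa_cost(H, W, C, M):
    # PWA: HWC^2 (value) + 2NC^2 (Q/K from window tokens) + N^2 C + NHWC
    N = (H * W) // (M * M)
    return H * W * C**2 + 2 * N * C**2 + N**2 * C + N * H * W * C

H = W = 28; C = 192; M = 7   # illustrative stage-like setting (assumed)
print(window_msa_cost(H, W, C, M))                   # one window-based MSA
print(dwa_cost(H, W, C, M) + pwa_cost(H, W, C, M))   # DWA + PWA
\end{verbatim}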
\begin{figure} \centering\ \includegraphics[width=10cm]{grouped_attn} \caption{A macro view of the similarities and differences between the depthwise separable self-attention and the grouped self-attention.} \label{fig:grouped_attn} \end{figure} \subsection{SepViT Block} To summarize, our SepViT block can be formulated as follows: \begin{align} \label{eqution:SepViT} & \tilde{z}^l = \text{Concat}(z^{l-1}, wt) \; \\ & \ddot{z}^l = \text{DWA}(\text{LN}(\tilde{z}^l)) \; \\ & \dot{z}^l, \dot{wt} = \text{Slice}(\ddot{z}^l) \; \\ & \hat{z}^l = \text{PWA}(\dot{z}^l, \dot{wt}) + z^{l-1} \; \\ & z^l = \text{MLP}(\text{LN}(\hat{z}^l)) + \hat{z}^l \; \end{align} where $\ddot{z}^l$, $\hat{z}^l$ and $z^{l}$ denote the outputs of the DWA, PWA and SepViT block $l$, respectively. $\dot{z}^l$ and $\dot{wt}$ are feature maps and the learned window tokens. Concat represents the concatenation operation while Slice represents the slice operation. {\bf Comparison of Complexity.} We compare the complexity of our proposed SepViT block with two other SOTA blocks (Swin \cite{Swin}, Twins \cite{Twins}). As we stated before, the information interaction within and among windows is completed in a single SepViT block, while Swin \cite{Swin} and Twins \cite{Twins} require two successive blocks. As shown in Fig. \ref{fig:complexity}, we can observe that the SepViT block only costs about half the MACs of its competitors in each stage of the network. The reason lies in two aspects: i) SepViT block is more lightweight; ii) SepViT block removes many redundant layers, e.g., there is only one MLP layer and two LN layers in a single SepViT block while there are double MLP and LN layers in two successive Swin or Twins blocks. \begin{figure} \centering \includegraphics[height=4cm]{flops} \caption{Complexity comparison of an information interaction within and among windows in a single SepViT block with those two-block pattern works in each stage.} \label{fig:complexity} \end{figure} \begin{table} \centering \caption{Detailed configurations of SepViT variants in different stages.} \label{tab:configs} \resizebox{0.9\textwidth}{!}{ \begin{tabular}{@{}c|c|c|c|c@{}} \toprule Configuration & SepViT-Lite & SepViT-T & SepViT-S & SepViT-B \\ \midrule Blocks & [1, 2, 6, 2] & [1, 2, 6, 2] & [1, 2, 14, 2] & [1, 2, 14, 2] \\ \midrule Channels & [32, 64, 128, 256] & [96, 192, 384, 768] & [96, 192, 384, 768] & [128, 256, 512, 1024] \\ \midrule Heads & [1, 2, 4, 8] & [3, 6, 12, 24] & [3, 6, 12, 24] & [4, 8 ,16, 32] \\ \midrule Block Type & \multicolumn{4}{c}{[DSSA, DSSA\&GSA, DSSA\&GSA, DSSA]} \\ \bottomrule \end{tabular}} \end{table} \subsection{Architecture Configurations} To provide a fair comparison with other vision Transformers, we propose the SepViT-T (Tiny), SepViT-S (Small) and SepViT-B (Base) variants. Moreover, we also design the SepViT-Lite variant with a very light model size. The specific configurations of SepViT variants are shown in Table \ref{tab:configs}, the block depth of SepViT is smaller in some stages than the competitors since SepViT is more efficient. DSSA and GSA denote the blocks with depthwise separable self-attention and grouped self-attention, respectively. Additionally, the expansion ratio of each MLP layer is set as 4, the window sizes are 7$\times$7 and 14$\times$14 for DSSA and GSA in all SepViT variants. 
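For concreteness, we also provide a PyTorch-style sketch of the block formulation above (Concat, DWA, Slice, PWA, MLP). It is an illustrative re-implementation under simplifying assumptions (single attention head, fixed-zero window tokens, and the window partition performed outside the block), not the released code of SepViT:

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class SepViTBlockSketch(nn.Module):
    # Input x: (B, N, M*M, C), i.e., N windows of M*M pixel tokens each.
    def __init__(self, dim):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.qkv = nn.Linear(dim, 3 * dim)       # DWA projections W_Q, W_K, W_V
        self.wt_norm = nn.LayerNorm(dim)         # LN on window tokens for PWA
        self.q_pwa = nn.Linear(dim, dim)
        self.k_pwa = nn.Linear(dim, dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))
        self.scale = dim ** -0.5

    def forward(self, x):
        B, N, M2, C = x.shape
        wt = x.new_zeros(B, N, 1, C)             # fixed-zero window tokens
        z = torch.cat([x, wt], dim=2)            # Concat with window tokens
        q, k, v = self.qkv(self.norm1(z)).chunk(3, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.scale
        z = attn.softmax(dim=-1) @ v             # DWA inside each window
        feat, wt = z[:, :, :M2], z[:, :, M2:]    # Slice feature maps / tokens
        wt = F.gelu(self.wt_norm(wt)).squeeze(2)                # (B, N, C)
        attn = (self.q_pwa(wt) @ self.k_pwa(wt).transpose(-2, -1)) * self.scale
        feat = torch.einsum('bnm,bmpc->bnpc', attn.softmax(dim=-1), feat)
        h = feat + x                             # PWA + residual
        return self.mlp(self.norm2(h)) + h       # MLP + residual

# e.g., SepViTBlockSketch(96)(torch.randn(2, 16, 49, 96))  # 16 windows of 7x7
\end{verbatim}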
\section{Experimental Results} \subsection{ImageNet-1K Classification} \subsubsection{Settings.} We carry out the image classification experiment on the ImageNet-1K \cite{ImageNet-1K}, which contains about 1.28M training images and 50K validation images from 1K categories. For a fair comparison, we follow the training settings of the recent vision Transformer \cite{Twins}. Concretely, all of the SepViT variants are trained for 300 epochs on 8 V100 GPUs with a total batch size of 1024. The resolution of the input image is resized to 224 $\times$ 224. We adopt the AdamW \cite{AdamW} as the optimizer with weight decay 0.1 for SepViT-B and 0.05 for SepViT-S/T. The learning rate is gradually decayed based on the cosine strategy with the initialization of 0.001. We use a linear warm-up strategy with 20 epochs for SepViT-B and 5 epochs for SepViT-S/T. Besides, we have also employed the increasing stochastic depth augmentation \cite{Stochasticdepth} with the maximum drop-path rate of 0.2, 0.3, 0.5 for our SepViT-T/S/B models. \begin{table}[ht] \centering \caption{Comparison of different state-of-the-art methods on ImageNet-1K classification. Throughput and latency are tested based on the PyTorch framework with a V100 GPU (batchsize=192) and TensorRT framework with a T4 GPU (batchsize=8).} \label{tab:ImageNet_1K} \scalebox{0.7}{ \begin{tabular}{c|cccc|c} \toprule \multirow{2}{*}{Method} & Param & FLOPs & Throughput & Latency & Top-1 Acc \\ & (M) & (G) & (Images/s) & (ms) & (\%) \\ \midrule \multicolumn{6}{c}{ConvNet} \\ \midrule RegNetY-4G\cite{RegNet} & 21.0 & 4.0 & 1064 & - & 80.0 \\ RegNetY-8G\cite{RegNet} & 39.0 & 8.0 & 548 & - & 81.7 \\ RegNetY-16G\cite{RegNet} & 84.0 & 16.0 & 305 & - & 82.9 \\ \midrule \multicolumn{6}{c}{Transformer} \\ \midrule DeiT-Small/16\cite{Deit} & 22.0 & 4.6 & 406 & - & 79.9 \\ T2T-ViT-14\cite{T2T} & 22.0 & 5.2 & - & - & 81.5 \\ TNT-S\cite{TNT} & 23.8 & 5.2 & - & - & 81.3 \\ CoaT-Lite-Small\cite{CoaT} & 20.0 & 4.0 & - & - & 81.9 \\ CvT-13\cite{CvT} & 20.0 & 4.5 & - & - & 81.6 \\ PVT-Small\cite{PVT_v1} & 24.5 & 3.8 & 794 & 20.7 & 79.8 \\ Swin-T\cite{Swin} & 29.0 & 4.5 & 704 & - & 81.3 \\ Twins-PCPVT-S\cite{Twins} & 24.1 & 3.8 & 778 & 20.2 & 81.7 \\ Twins-SVT-S\cite{Twins} & 24.0 & 2.9 & 979 & 18.8 & 81.7 \\ CSWin-T\cite{CSWin} & 23.0 & 4.3 & 627 & 30.2 & 82.7 \\ PVT-v2-B2\cite{PVT_v2} & 25.4 & 4.0 & 664 & 37.8 & 82.0 \\ {\textbf{SepViT-T}} & 31.2 & 4.5 & 738 & 23.7 & 82.5 \\ \midrule T2T-ViT-19\cite{T2T} & 39.2 & 8.9 & - & - & 81.9 \\ CoaT-Lite-Medium\cite{CoaT} & 45.0 & 9.8 & - & - & 83.6 \\ CvT-21\cite{CvT} & 32.0 & 7.1 & - & - & 82.5 \\ PVT-Medium\cite{PVT_v1} & 44.2 & 6.7 & 511 & 30.0 & 81.2 \\ Swin-S\cite{Swin} & 50.0 & 8.7 & 412 & - & 83.0 \\ Twins-PCPVT-B\cite{Twins} & 43.8 & 6.7 & 502 & 29.1 & 82.7 \\ Twins-SVT-B\cite{Twins} & 56.0 & 8.6 & 433 & 38.9 & 83.2 \\ CSWin-S\cite{CSWin} & 35.0 & 6.9 & 390 & 52.0 & 83.6 \\ PVT-v2-B3\cite{PVT_v2} & 45.2 & 6.9 & 443 & 54.1 & 83.2 \\ \textbf{SepViT-S} & 46.6 & 7.5 & 476 & 34.5 & 83.5 \\ \midrule Deit-Base/16\cite{Deit} & 86.6 & 17.6 & 273 & - & 81.8 \\ T2T-ViT-24\cite{T2T} & 64.1 & 14.1 & - & - & 82.3 \\ TNT-B\cite{TNT} & 66.0 & 14.1 & - & - & 82.8 \\ PVT-Large\cite{PVT_v1} & 61.4 & 9.8 & 357 & 43.2 & 81.7 \\ Swin-B\cite{Swin} & 88.0 & 15.4 & 255 & - & 83.3 \\ Twins-PCPVT-L\cite{Twins} & 60.9 & 9.8 & 339 & 39.8 & 83.1 \\ Twins-SVT-L\cite{Twins} & 99.2 & 15.1 & 271 & 48.3 & 83.7 \\ CSWin-B\cite{CSWin} & 78.0 & 15.0 & 216 & 76.1 & 84.2 \\ PVT-v2-B4\cite{PVT_v2} & 62.6 & 10.1 & 298 & 75.5 & 83.6 \\ PVT-v2-B5\cite{PVT_v2} & 
82.0 & 11.8 & 285 & 77.5 & 83.8 \\ \textbf{SepViT-B} & 82.3 & 13.1 & 308 & 46.0 & 84.0 \\ \bottomrule \end{tabular} } \end{table} \subsubsection{Results.} As shown in Table \ref{tab:ImageNet_1K}, compared to the latest state-of-the-art methods, our SepViT achieves the best trade-off between accuracy and latency. To be more specific, our SepViT-B and SepViT-S achieve 84.0\% and 83.5\% top-1 accuracy, surpassing Swin by 0.7\% and 0.5\% with about 14\% fewer FLOPs. And for the tiny variant, our SepViT-T outperforms the Swin-T by 1.2\% with the same FLOPs of 4.5G. In terms of latency, compared to the recent methods (e.g., PVT-v2 \cite{PVT_v2} and CSWin \cite{CSWin}) with similar performance, both of our small and base models cost about 40\% less inference time. Moreover, our SepViT shows very promising performance by comparison with the CNNs counterparts. \subsection{ADE20K Semantic Segmentation} \begin{table} \centering \caption{Comparison of different backbones on ADE20K semantic segmentation task. FLOPs are measured with the input size of 512$\times$2048.} \label{tab:ADE20K} \resizebox{0.9\textwidth}{!}{ \begin{tabular}{c|ccc|ccc} \toprule \multirow{2.5}{*}{Backbone} & \multicolumn{3}{c|}{Semantic FPN 80k} & \multicolumn{3}{c}{UperNet 160k} \\ \cmidrule(l){2-7} & Param(M) & FLOPs(G) & mIoU(\%) & Param(M) & FLOPs(G) & mIoU/MS mIoU(\%) \\ \midrule ResNet50\cite{ResNet} & 28.5 & 183 & 36.7 & - & - & -/- \\ PVT-Small\cite{PVT_v1} & 28.2 & 161 & 39.8 & - & - & -/- \\ Swin-T\cite{Swin} & 31.9 & 182 & 41.5 & 59.9 & 945 & 44.5/45.8 \\ Twins-PCPVT-S\cite{Twins} & 28.4 & 162 & 44.3 & 54.6 & 919 & 46.2/47.5 \\ Twins-SVT-S\cite{Twins} & 28.3 & 144 & 43.2 & 54.4 & 901 & 46.2/47.1 \\ \textbf{SepViT-T} & 38.8 & 181 & \textbf{44.3} & 66.8 & 940 & \textbf{46.9}/\textbf{47.7} \\ \midrule ResNet101\cite{ResNet} & 47.5 & 260 & 38.8 & 86.0 & 1092 & -/44.9 \\ PVT-Medium\cite{PVT_v1} & 48.0 & 219 & 41.6 & - & - & -/- \\ Swin-S\cite{Swin} & 53.2 & 274 & 45.2 & 81.3 & 1038 & 47.6/\textbf{49.5} \\ Twins-PCPVT-B\cite{Twins} & 48.1 & 220 & 44.9 & 74.3 & 977 & 47.1/48.4 \\ Twins-SVT-B\cite{Twins} & 60.4 & 261 & 45.3 & 88.5 & 1020 & 47.7/48.9 \\ \textbf{SepViT-S} & 55.4 & 244 & \textbf{46.1} & 83.4 & 1003 & \textbf{48.1}/49.2 \\ \midrule ResNeXt101-64$\times$4d\cite{ResNeXt} & 86.4 & - & 40.2 & - & - & -/- \\ PVT-Large\cite{PVT_v1} & 65.1 & 283 & 42.1 & - & - & -/- \\ Swin-B\cite{Swin} & 91.2 & 422 & 46.0 & 121.0 & 1188 & 48.1/49.7 \\ Twins-PCPVT-L\cite{Twins} & 65.3 & 283 & 46.4 & 91.5 & 1041 & 48.6/49.8 \\ Twins-SVT-L\cite{Twins} & 103.7 & 404 & 46.7 & 133.0 & 1164 & 48.8/49.7 \\ \textbf{SepViT-B} & 94.7 & 367 & \textbf{47.3} & 124.8 & 1128 & \textbf{49.1}/\textbf{50.4} \\ \bottomrule \end{tabular} } \end{table} \subsubsection{Settings.} To further verify the capacity of our SepViT, we conduct the semantic segmentation experiment on ADE20K \cite{ADE20K}, which contains about 20K training images and 2K validation images from 150 categories. To make fair comparisons, we also follow the training conventions of the previous vision Transformers \cite{PVT_v1,Swin,Twins} on the Semantic FPN \cite{Semantic_FPN} and UperNet \cite{UperNet} frameworks. All of our models are pre-trained on the ImageNet-1k and then finetuned on ADE20K with the input size of 512$\times$512. For the Semantic FPN framework, we adopt the AdamW optimizer with both the learning rate and weight decay being 0.0001. 
Then we train the whole networks for 80K iterations with the total batch size of 16 based on the stochastic depth of 0.2, 0.3, and 0.4 for SepViT-T/S/B. For the training and testing on the UperNet framework, we train the models for 160K iterations with the stochastic depth of 0.3, 0.3, and 0.5. AdamW optimizer is used as well but with the learning rate $6\times10^{-5}$, total batch size 16 and weight decay 0.01 for SepViT-T/S and 0.03 for SepViT-B. Then we test the mIoU based on both single-scale and multi-scale (MS) where the scale goes from 0.5 to 1.75 with an interval of 0.25. \subsubsection{Results.} In Table \ref{tab:ADE20K}, we make a comparison with the recent vision Transformer and CNN backbones. Based on the Semantic FPN framework, SepViT-T, SepViT-S and SepViT-B surpass Swin's variants by 2.8\%, 0.9\% and 1.3\% mIoU with about 1G, 30G and 55G fewer FLOPs, respectively. Meanwhile, our SepViT shows great advantage over CNNs (e.g., ResNet\cite{ResNet}). By contrast to Swin on the UperNet framework, our models achieve 2.4\%, 0.5\% and 1.0\% higher mIoU with fewer FLOPs in terms of single-scale testing. Extensive experiments reveal that our SepViT shows great potential on segmentation tasks. \subsection{COCO Object Detection and Instance Segmentation} \begin{table}[th] \caption{Comparison of different backbones on RetinaNet-based object detection task. FLOPs are measured with the input size of $800 \times 1280$.} \label{tab:RetinaNet_COCO} \centering \resizebox{0.9\textwidth}{!}{ \begin{tabular}{c|cc|cccccc|cccccc} \toprule \multirow{2}{*}{Backbone} & Param & FLOPs & \multicolumn{6}{c|}{RetinaNet 1$\times$} & \multicolumn{6}{c}{RetinaNet 3$\times$ + MS} \\ \cline{4-15} & (M) & (G) & AP & AP$_{50}$ & AP$_{75}$ & AP$_S$ & AP$_M$ & AP$_L$ & AP & AP$_{50}$ & AP$_{75}$ & AP$_S$ & AP$_M$ & AP$_L$ \\ \midrule ResNet50\cite{ResNet} & 37.7 & 239 & 36.3 & 55.3 & 38.6 & 19.3 & 40.0 & 48.8 & 39.0 & 58.4 & 41.8 & 22.4 & 42.8 & 51.6 \\ PVT-Small\cite{PVT_v1} & 34.2 & 226 & 40.4 & 61.3 & 43.0 & 25.0 & 42.9 & 55.7 & 42.2 & 62.7 & 45.0 & 26.2 & 45.2 & 57.2 \\ Swin-T\cite{Swin} & 38.5 & 245 & 41.5 & 62.1 & 44.2 & 25.1 & 44.9 & 55.5 & 43.9 & 64.8 & 47.1 & 28.4 & 47.2 & 57.8 \\ Twins-PCPVT-S\cite{Twins} & 34.4 & 226 & 43.0 & 64.1 & 46.0 & 27.5 & 46.3 & 57.3 & 45.2 & 66.5 & 48.6 & 30.0 & 48.8 & 58.9 \\ Twins-SVT-S\cite{Twins} & 34.4 & 210 & 43.0 & 64.2 & \textbf{46.3} & 28.0 & 46.4 & 57.5 & 45.6 & 67.1 & 48.6 & 29.8 & 49.3 & 60.0 \\ \textbf{SepViT-T} & 45.4 & 243 & \textbf{43.9} & \textbf{65.1} & 46.2 & \textbf{28.4} & \textbf{47.3} & \textbf{58.5} &\textbf{ 46.2} & \textbf{67.7} & \textbf{49.4 } & \textbf{30.3 } & \textbf{49.8 } & \textbf{60.7} \\ \midrule ResNet101\cite{ResNet} & 58.0 & 315 & 38.5 & 57.8 & 41.2 & 21.4 & 42.6 & 51.1 & 40.9 & 60.1 & 44.0 & 23.7 & 45.0 & 53.8 \\ PVT-Medium\cite{PVT_v1} & 53.9 & 283 & 41.9 & 63.1 & 44.3 & 25.0 & 44.9 & 57.6 & 43.2 & 63.8 & 46.1 & 27.3 & 46.3 & 58.9 \\ Swin-S\cite{Swin} & 59.8 & 335 & 44.5 & 65.7 & 47.5 & 27.4 & 48.0 & 59.9 & 46.3 & 67.4 & 49.8 & 31.1 & 50.3 & 60.9 \\ Twins-PCPVT-S\cite{Twins} & 54.1 & 283 & 44.3 & 65.6 & 47.3 & 27.9 & 47.9 & 59.6 & 46.4 & 67.7 & 49.8 & 31.3 & 50.2 & 61.4 \\ Twins-SVT-B\cite{Twins} & 67.0 & 326 & 45.3 & 66.7 & 48.1 & 28.5 & 48.9 & 60.6 & 46.9 & 68.0 & 50.2 & 31.7 & 50.3 & 61.8 \\ \textbf{SepViT-S} & 61.9 & 302 &\textbf{ 45.5 } &\textbf{ 66.8} &\textbf{ 48.3} & \textbf{28.9} & \textbf{49.4 } & \textbf{60.8 } & \textbf{47.5} & \textbf{68.9} & \textbf{50.9} & \textbf{32.4} & \textbf{51.1} &\textbf{ 62.5} \\ \bottomrule 
\end{tabular} } \end{table} \subsubsection{Settings.} Next, we evaluate SepViT on the objection detection and instance segmentation task \cite{COCO} based the RetinaNet \cite{RetinaNet} and Mask R-CNN \cite{Mask_RCNN} frameworks with COCO2017 \cite{COCO}. Specifically, all of our models are pre-trained on ImageNet-1K and then finetuned following the settings of the previous works \cite{PVT_v1,Swin,Twins}. As for the 12 epochs (1$\times$) experiment, both the RetinaNet-based and the Mask R-CNN-based models use the AdamW optimizer with the weight decay 0.001 for SepViT-T and 0.0001 for SepViT-S. And they are trained with the total batch size of 16 based on the stochastic depth of 0.2 and 0.3 for SepViT-T/S. \begin{table}[ht] \centering \caption{Comparison of different backbones on Mask R-CNN-based object detection and instance segmentation tasks. FLOPs are measured with the inpus size of $800 \times 1280$. The superscript $b$ and $m$ denote the box detection and mask instance segmentation.} \label{tab:MaskRCNN_COCO} \resizebox{0.9\textwidth}{!}{ \begin{tabular}{c|cc|cccccc|cccccc} \toprule \multirow{2}{*}{Backbone} & Param & FLOPs & \multicolumn{6}{c|}{Mask R-CNN 1$\times$} & \multicolumn{6}{c}{Mask R-CNN 3$\times$ + MS} \\ \cline{4-15} & (M) & (G) & AP$^b$ & AP$_{50}^b$ & AP$_{75}^b$ & AP$^m$ & AP$_{50}^m$ & AP$_{75}^m$ & AP$^b$ & AP$_{50}^b$ & AP$_{75}^b$ & AP$^m$ & AP$_{50}^m$ & AP$_{75}^m$ \\ \midrule ResNet50\cite{ResNet} & 44.2 & 260 & 38.0 & 58.6 & 41.4 & 34.4 & 55.1 & 36.7 & 41.0 & 61.7 & 44.9 & 37.1 & 58.4 & 40.1 \\ PVT-Small\cite{PVT_v1} & 44.1 & 245 & 40.4 & 62.9 & 43.8 & 37.8 & 60.1 & 40.3 & 43.0 & 65.3 & 46.9 & 39.9 & 62.5 & 42.8 \\ Swin-T\cite{Swin} & 47.8 & 264 & 42.2 & 64.4 & 46.2 & 39.1 & 64.6 & 42.0 & 46.0 & 68.2 & 50.2 & 41.6 & 65.1 & 44.8 \\ Twins-PCPVT-S\cite{Twins} & 44.0 & 245 & 42.9 & 65.8 & 47.1 & 40.0 & 62.7 & 42.9 & 46.8 & 69.3 & 51.8 & 42.6 & 66.3 & 46.0 \\ Twins-SVT-S\cite{Twins} & 44.0 & 228 & 43.4 & 66.0 & 47.3 & 40.3 & 63.2 & 43.4 & 46.8 & 69.2 & 51.2 & 42.6 & 66.3 & 45.8 \\ \textbf{SepViT-T} & 54.7 & 261 & \textbf{44.4} & \textbf{67.1 } & \textbf{48.3} &\textbf{41.1} & \textbf{64.1} & \textbf{43.9} & \textbf{47.5} & \textbf{70.0} & \textbf{52.3} & \textbf{43.2} & \textbf{67.1} & \textbf{46.3} \\ \midrule ResNet101\cite{ResNet} & 63.2 & 336 & 40.4 & 61.1 & 44.2 & 36.4 & 57.7 & 38.8 & 42.8 & 63.2 & 47.1 & 38.5 & 60.1 & 41.3 \\ ResNeXt101-32$\times$4d\cite{ResNeXt} & 63.0 & 340 & 41.9 & 62.5 & 45.9 & 37.5 & 59.4 & 40.2 & 44.0 & 64.4 & 48.0 & 39.2 & 61.4 & 41.9 \\ PVT-Medium\cite{PVT_v1} & 63.9 & 302 & 42.0 & 64.4 & 45.6 & 39.0 & 61.6 & 42.1 & 44.2 & 66.0 & 48.2 & 40.5 & 63.1 & 43.5 \\ Swin-S\cite{Swin} & 69.1 & 354 & 44.8 & 66.6 & 48.9 & 40.9 & 63.4 & 44.2 & 48.5 & 70.2 & 53.5 & 43.3 & 67.3 & 46.6 \\ Twins-PCPVT-B\cite{Twins} & 64.0 & 302 & 44.6 & 66.7 & 48.9 & 40.9 & 63.8 & 44.2 & 47.9 & 70.1 & 52.5 & 43.2 & 67.2 & 46.3 \\ Twins-SVT-B\cite{Twins} & 76.3 & 340 & 45.2 & 67.6 & 49.3 & 41.5 & 64.5 & 44.8 & 48.0 & 69.5 & 52.7 & 43.0 & 66.8 & 46.6 \\ \textbf{SepViT-S} & 71.3 & 321 &\textbf{ 46.3} &\textbf{ 68.0} & \textbf{49.8 } & \textbf{42.3} & \textbf{65.8} & \textbf{45.3} & \textbf{48.7 } & \textbf{70.5} &\textbf{ 53.7} &\textbf{ 43.9} &\textbf{ 67.7 } & \textbf{47.1} \\ \bottomrule \end{tabular} } \end{table} During the training, there are 500 iterations for warm-up and the learning rate will decline by 10$\times$ at epochs 8 and 11. 
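As a reference, the warm-up and step decay just described can be written as a small helper (a minimal sketch; the base learning rate, the epoch indexing convention and the surrounding training loop are assumptions):

\begin{verbatim}
def detection_lr(step, epoch, base_lr, warmup_iters=500,
                 milestones=(8, 11), gamma=0.1):
    # Linear warm-up over the first 500 iterations,
    # then 10x learning-rate drops at epochs 8 and 11.
    lr = base_lr * min(1.0, (step + 1) / warmup_iters)
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# Usage sketch: update the optimizer before each iteration.
# for group in optimizer.param_groups:
#     group["lr"] = detection_lr(global_step, epoch, base_lr=1e-4)
\end{verbatim}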
For the 36 epochs (3$\times$) experiment with multi-scale (MS) training, models are trained with the resized images such that the shorter side ranges from 480 to 800 and the longer side is at most 1333. Moreover, most of all the other settings are the same as the 1$\times$ except that the stochastic depth is 0.3 for SepViT-T, the weight decay becomes 0.05 and 0.1 for SepViT-T/S, and the decay epochs are 27 and 33. \subsubsection{Results.} Table \ref{tab:RetinaNet_COCO} reports object detection results using the RetinaNet framework. It indicates that our SepViT can achieve competitive performance, compared with the recent vision Transformers and CNNs. For the 1$\times$ schedule, our SepViT-T and SepViT-S surpass the Swin by 2.4 AP and 1.0 AP with fewer FLOPs. In particular, our SepViT variants achieve a state-of-the-art performance with 46.2 AP and 47.5 AP in the 3$\times$ experiment. Table \ref{tab:MaskRCNN_COCO} shows the evaluation result with Mask R-CNN framework. We can see that our SepViT-T outperforms Swin-T by 2.2 box AP, 2.0 mask AP with 1$\times$ schedule, and 1.5 box AP, 1.6 mask AP with 3$\times$ schedule. For the SepViT-S variant, it achieves a similar performance gain while saving a certain amount of computation overhead. \begin{table}[t] \centering \caption{Ablation studies of the key components in our SepViT. LWT means initializing the window tokens with learnable vectors.} \label{tab:Ablation1} \resizebox{0.85\textwidth}{!}{ \begin{tabular}{@{}c|ccc|ccc|c@{}} \toprule Model & DSSA & GSA & LWT & Param(M) & FLOPs(G) & Throughput(Images/s)) & Top-1 Acc(\(\% \)) \\ \midrule Swin-T+CPVT \cite{Twins} & & & & 28.0 & 4.4 & 704 & 81.2 \\ SepViT-T$\dagger$ & $\surd$ & & & 29.3 & 4.3 & 755 & 81.7 \\ \midrule \multirow{3}{*}{SepViT-T} & $\surd$ & & & 32.1 & 4.4 & 746 & 82.0 \\ & $\surd$ & $\surd$ & & 31.2 & 4.5 & 738 & 82.3 \\ & $\surd$ & $\surd$ & $\surd$ & 31.2 & 4.5 & 738 & \textbf{82.5} \\ \bottomrule \end{tabular} } \end{table} \subsection{Ablation Study} To better demonstrate the significance of each key component, including depthwise separable self-attention, grouped self-attention and the novel window token embedding scheme in SepViT, we conduct a series of ablation experiments on ImageNet-1K classification with SepViT-T variant. \subsubsection{Efficient Components.} As mentioned above, SepViT adopts the conditional position encoding (CPE) \cite{CPVT} and the overlapping patch embedding (OPE) \cite{ViT}. Therefore, we take the Swin-T+CPVT reported in \cite{Twins} as the baseline and we produce the SepViT-T$\dagger$ with CPE but without OPE to eliminate the influence of other factors. As shown in Table \ref{tab:Ablation1} where each component is added in turn to verify their benefits, our SepViT-T$\dagger$ simply equipped with the depthwise separable self-attention block (DSSA) outperforms Swin+CPVT by 0.5\% and it is much faster than Swin with the throughput of 755 images/s. Meanwhile, our SepViT-T with CPE, OPE and DSSA achieves 82.0\% top-1 accuracy. After employing grouped self-attention block (GSA) and DSSA alternately in the second and third stages, we gain an accuracy improvement of 0.3\%. \subsubsection{Window Token Embedding.} We further study whether it makes a difference if the window token is initialized with a fixed zero vector or a learnable vector. In contrast to the fixed zero initialization scheme, the learnable window token helps our SepViT-T to boost the performance to 82.5\%, as shown in the last row of Table \ref{tab:Ablation1}. 
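The two initialization choices compared in this ablation can be sketched as follows (module and attribute names are assumptions for illustration): the fixed scheme registers a zero vector that is never updated, while the learnable window token (LWT) is a zero-initialized parameter updated during training.

\begin{verbatim}
import torch
import torch.nn as nn

class WindowToken(nn.Module):
    def __init__(self, dim, learnable=True):
        super().__init__()
        if learnable:
            # LWT: learnable vector initialized with zeros
            self.token = nn.Parameter(torch.zeros(1, 1, 1, dim))
        else:
            # fixed zero vector, excluded from gradient updates
            self.register_buffer('token', torch.zeros(1, 1, 1, dim))

    def forward(self, x):        # x: (B, N, M*M, C)
        B, N = x.shape[:2]
        return self.token.expand(B, N, 1, -1)   # one token per window
\end{verbatim}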
\begin{table}[th] \centering \caption{Comparison of different approaches of getting the global representation of each window in SepViT.} \label{tab:Ablation2} \resizebox{0.75\textwidth}{!}{ \begin{tabular}{@{}c|ccc|c@{}} \toprule Method & Param(M) & FLOPs(G) & Throughput(Images/s)) & Top-1 Acc(\(\% \)) \\ \midrule Win\_Tokens & 31.2 & 4.5 & 738 & \textbf{82.5} \\ Avg\_Pooling & 31.2 & 4.5 & 743 & 82.1 \\ Dw\_Conv & 31.3 & 4.5 & 735 & 82.3 \\ \bottomrule \end{tabular} } \end{table} Moreover, to verify the effectiveness of learning the global representation of each window with our window token embedding scheme (Win\_Tokens), we further study some other methods that directly get the global representations from the output feature maps of DWA, such as average pooling (Avg\_Pooling) and depthwise convolution (DW\_Conv). As the results illustrated in Table \ref{tab:Ablation2}, our window token embedding scheme achieves the best performance among these approaches. Meanwhile, the comparison of parameters and FLOPs between Win\_Token and Avg\_Pooling methods demonstrates that our window token embedding scheme brings negligible computational cost. \begin{table}[t] \centering \caption{Comparison of lite models on ImageNet-1K classification.} \label{tab:Ablation3} \resizebox{0.6\textwidth}{!}{ \begin{tabular}{c|cc|c} \toprule Method & Param(M) & FLOPs(G) & Top-1 Acc(\(\% \)) \\ \midrule MobileNet-V2 \cite{MobileNet_v2} & 3.4 & 0.3 & 71.8 \\ ResNet18 \cite{ResNet} & 11.1 & 1.8 & 69.8 \\ PVTv2-B0 \cite{PVT_v2} & 3.4 & 0.6 & 70.5 \\ {\bf SepViT-Lite} & 3.7 & 0.6 & \textbf{72.3} \\ \bottomrule \end{tabular} } \end{table} \subsubsection{Comparison with Lite Models.} To further explore the potential of SepViT, we scale down SepViT to a lite model size (SepViT-Lite). As we can observe in Table \ref{tab:Ablation3}, SepViT-Lite obtains an excellent top-1 accuracy of 72.3\%, outperforming its counterparts with similar model sizes. \section{Conclusion} In this paper, we have presented an efficient Separable Vision Transformer, dubbed SepViT, which consists of three core designs. Firstly, depthwise separable self-attention enables SepViT to achieve information interaction within and among the windows in a single block. Next, the window token embedding scheme helps SepViT to model the attention relationship among windows with negligible computational cost. Thirdly, grouped self-attention enables SepViT to capture long-range visual dependencies across multiple windows for better performance. Experimental results on various vision tasks verify that SepViT achieves a better trade-off between performance and latency. \bibliographystyle{splncs04}
\section{Introduction}
Solid-state superconducting circuits study the interaction between artificial atoms and quantized electromagnetic fields in the microwave frequency domain \cite{RevModPhys93/025005}. This architecture has emerged as one of the leading platforms for realizing quantum computation and simulation \cite{Arute2019,PhysRevLett127/180501}. Commonly, superconducting qubits are used for encoding quantum information, with superconducting resonators acting as data buses. However, the efficient generation, manipulation, and transmission of nontrivial quantum states in a linear resonator are also crucial to different kinds of quantum information tasks \cite{Gu2017}. The quantum states of a harmonic oscillator are extraordinarily rich, but are hard to access due to the infinite ladder of equally spaced energy levels. This difficulty can be overcome by interposing a nonlinear artificial atom, and various quantum states in a resonator can be synthesized via the deliberate use of classical control signals, such as Fock states \cite{Hofheinz2008} and Schr\"{o}dinger cat states \cite{Ofek2016}. In recent years, the Josephson-photonics circuit of a dc voltage-biased Josephson junction in series with a microwave resonator has emerged as an alternative tool for efficient on-chip generation of coherent microwave photons \cite{M1,M2,M3,M4,M5,M6,M7,M8,M9,M10}. It relies on the exceptionally strong nonlinearity of light-charge interaction, and eliminates the need for any microwave drives \cite{D1,D2,D3,D4,Kubala2020}. The Josephson junction acts as a highly nonlinear driving element; that is, the inelastic tunneling of Cooper pairs across the junction can convert energy from an easily controlled bias voltage source into different numbers of photons in a microwave cavity \cite{PhysRevLett247001,PhysRevLett247002}. By modulating the charge tunneling effect via a specifically tailored electromagnetic environment, nonclassical microwave light can be generated \cite{non1,non2,non3,non4,non5,Dambach2019}. The resulting quantum electrodynamics of this simple circuit has been demonstrated to realize Josephson junction lasers \cite{L1,Cassidy2017,L2}, single-photon sources \cite{S0,S1,S2,S3}, multi-photon sources \cite{mul1,mul2,mul3}, and near quantum-limited amplifiers \cite{Jebari2018}. In parallel, a significant development in this field is to connect Josephson junctions with multiple resonators for various quantum technological applications, such as the implementations of entangled quantum microwaves \cite{Dambach2017} and microwave single-photon detectors \cite{SD}. The simplest case is one voltage-biased junction in series with two cavities of incommensurate frequencies. When the bias voltage is tuned to match the energy required to simultaneously produce one photon in each cavity for a single Cooper pair traversing the circuit, the system can be effectively reduced to a nondegenerate parametric amplifier \cite{F1,F2,F3}. It allows the continuous emission of correlated photon pairs. Recent experiments have observed the amplitude squeezing \cite{squeeze} and entanglement \cite{entangle} of these output microwave beams. In particular, when the cavities possess an impedance of 4.1 k$\Omega$, there is no matrix element for a transition between the one- and two-photon states \cite{PhysRevLett247002,S2}. So, the two cavities can be regarded as two-level systems, leading to an antibunched photon-pair source \cite{F1}.
However, the fabrication of coplanar waveguide resonators with such high impedances is highly challenging, as the standard cavity designs only yield characteristic impedances of the order of 100 $\Omega$. To go beyond this limitation, we propose a more practical way to realize photon-pair blockade by regulating the anticorrelated behavior of the charge transport via a two-level charge qubit. In our scheme, we study the Josephson-photonics device of a voltage-biased Josephson junction in series with a charge qubit and two nondegenerate microwave resonators. The nonlinear qubit-resonator coupling can be sculpted via the phase difference across the junction. For each tunneling Cooper pair, the suitably set bias voltage enables the excitation of the charge qubit and the creation of one photon in each resonator concurrently. Since the charge qubit is an ideal anharmonic element with two quantum energy levels, anticorrelations of the tunneling Cooper pairs can be created, preventing simultaneous tunneling events. Combined with the dissipation, we show that the two resonators can release their energies in the form of antibunched photon pairs in a controllable manner. Compared with the previous work \cite{F1}, the present one constitutes a significant step forward; that is, the photon-pair source can be achieved with the standard coplanar waveguide resonator designs, eliminating the need for ultrahigh cavity impedance that is not accessible in current experiments. Our work offers an appealing method for generating a bright nonclassical source of antibunched pairs of two strongly correlated photons, required at the core of quantum computing and quantum communication protocols \cite{P0,P1,P2,P3}.

\begin{figure}[tbh] \centering \includegraphics[width=\linewidth]{fig1.eps} \caption{Schematic diagram of the proposed experimental setup that consists of a dc-SQUID coupled to a charge qubit and two nondegenerate $LC$ resonators. A dc bias voltage $V$ is applied across all the elements, and a flow of Cooper pairs across the circuit can pump both the qubit and the two $LC$ resonators. In addition, each resonator is coupled to a transmission line with the coupling strength $\kappa$, which is used for collecting the emitted photons.} \label{fig.1} \end{figure}

\section{Model}
As shown in Figure 1, we investigate the Josephson-photonics circuit of a voltage-biased dc superconducting quantum interference device (dc-SQUID) in series with a charge qubit and two nondegenerate $LC$ resonators. We focus on the situation where the bias voltage $V$ is smaller than the gap voltage, so that no quasi-particle excitation can be produced in the superconducting electrodes. So, the quantum transport of Cooper pairs through the circuit will supply energy to both the charge qubit and the two $LC$ resonators. The model Hamiltonian describing the entire setup takes the form (see Appendix)
\begin{eqnarray}\label{1}
H_{T} &=&E_{c}(n_{q}-n_{g})^{2}-E_{Jq}\cos \phi _{q}+\sum_{j=1,2}[\frac{q_{j}^{2}}{2C_{j}}+(\frac{\hbar }{2e})^{2}\frac{\phi _{j}^{2}}{2L_{j}}] \nonumber \\
&&\quad -E_{J}\cos \phi _{J}-2en_{J}(V-V_{q}-V_{R1}-V_{R2}).
\end{eqnarray}
The first two terms denote the charge-qubit part, the third term describes the two $LC$ resonators, and the last two terms represent the dc-SQUID part, where $V_{q}=-\hbar\dot{\phi}_{q}/2e$ is the voltage drop at the qubit, and $V_{Rj}=-\hbar\dot{\phi}_{j}/2e$ is the voltage drop at the $j$-th resonator.
To exclude the Cooper pair number $n_{J}$, we perform a unitary transformation $U(t)=\exp [i(\omega _{J}t+\phi _{q}+\phi _{1}+\phi _{2})n_{J}]$ on the full Hamiltonian, where $\omega _{J}=2eV/\hbar $ is the Josephson frequency. Then, we can obtain
\begin{eqnarray}
\tilde{H}_{T} &=&U^{\dag }(t)H_{T}U(t)+i\hbar \frac{dU^{\dag }(t)}{dt}U(t) \nonumber \\
&=&E_{c}(\tilde{n}_{q}-n_{g})^{2}-E_{Jq}\cos \eta _{q}+\sum_{j=1,2}[\frac{\tilde{q}_{j}^{2}}{2C_{j}}+(\frac{\hbar }{2e})^{2}\frac{\phi _{j}^{2}}{2L_{j}}] \nonumber \\
&&\quad -E_{J}\cos (\omega _{J}t+\phi _{q}+\phi _{1}+\phi _{2}),
\end{eqnarray}
where $\tilde{n}_{q}=n_{q}+n_{J}$ and $\tilde{q}_{j}=q_{j}+2en_{J}$ are the transformed number and charge operators, arising from the charge fluctuations associated with the flow of Cooper pairs through the dc-SQUID. As described by the last term in $\tilde{H}_{T}$, the nonlinear coupling between the charge qubit and the two cavities is established via the phase difference across the junctions of the dc-SQUID. When the charge qubit is operated at the degeneracy point with $n_{g}=1/2$, we can quantize the excitations in the two resonators and the qubit, and the Hamiltonian of the whole circuit yields (hereafter we let $\hbar =1$)
\begin{equation}
H=\frac{1}{2}\delta \sigma _{z}+\sum_{j=1,2}\omega _{j}a_{j}^{\dag }a_{j}-E_{J}\cos [\omega _{J}t+\phi _{q}+\sum_{j=1,2}2\lambda _{j}(a_{j}^{\dag }+a_{j})].
\end{equation}
Note that the charge qubit has been reduced to a two-level system containing an excited state $|e\rangle $ and a ground state $|g\rangle $ under the basis of charge states \cite{Bouchiat1998,Makhlin1999,Nakamura1999,Jos}, i.e., $\sigma _{z}=|e\rangle \langle e|-|g\rangle \langle g|$ is the Pauli matrix, and $\delta =E_{Jq}$ is the energy splitting. $a_{j}^{\dag }$ ($a_{j}$) is the photon creation (annihilation) operator of the $j$th resonator ($[a_{j},a_{j}^{\dag }]=1$), and $\omega _{j}=1/\sqrt{L_{j}C_{j}}$ is the corresponding resonance frequency. The parameter $\lambda _{j}=\sqrt{\pi Z_{j}/R_{K}}$ quantifies the amplitude of the cavity's zero-point displacement, and characterizes the coupling between the oscillator and the tunnel junction ($Z_{j}=\sqrt{L_{j}/C_{j}}$ is the characteristic impedance, and $R_{K}=h/e^{2}$ is the resistance quantum).

\section{Photon-pair blockade}
\subsection{Anticorrelations of the charge transport}
In this section, we will illustrate the procedure for the realization of photon-pair blockade in the aforementioned two-mode superconducting circuit. The central idea is to create anticorrelations of the charge transport, which give rise to the desired antibunching of the photon pairs leaking out of the two microwave resonators. To this end, we start by deriving the effective Hamiltonian of the system, which helps to uncover the mechanism of our scheme. In the interaction picture with respect to the frame rotating with $\exp (-iH_{0}t)$, where $H_{0}=\frac{1}{2}\delta \sigma _{z}+\omega _{1}a_{1}^{\dag }a_{1}+\omega _{2}a_{2}^{\dag }a_{2}$ is the free Hamiltonian, we can obtain
\begin{equation}
H_{I}=-\frac{E_{J}}{4}e^{i\omega _{J}t}\left( \sigma _{+}e^{i\delta t}-\sigma _{-}e^{-i\delta t}-\sigma _{z}\right) D\left[ \alpha _{1}(t)\right] \times D\left[ \alpha _{2}(t)\right] +h.c.,
\end{equation}
where we have exploited the formula $e^{i\phi _{q}}=(\sigma _{+}-\sigma _{-}-\sigma _{z})/2$, and $\sigma _{+}=|e\rangle \langle g|$, $\sigma _{-}=|g\rangle \langle e|$ are the spin-ladder operators.
In addition, $D\left[ \alpha _{j}(t)\right] $ is the cavity displacement operator with the time-dependent amplitude $\alpha _{j}(t)$, which is defined in terms of the photon creation and annihilation operators as
\begin{equation}
D\left[ \alpha _{j}(t)\right] =e^{[\alpha _{j}(t)a_{j}^{\dag }-\alpha _{j}^{\ast }(t)a_{j}]},\quad \alpha _{j}(t)=2i\lambda _{j}e^{i\omega _{j}t}.
\end{equation}
It is now clearly seen that the tunneling of Cooper pairs through a voltage-biased Josephson junction can not only displace the cavity fields, but also flip the qubit state. To formulate the desired coupling, we express $D\left[ \alpha _{j}(t)\right] $ directly in the Fock-state basis \cite{basis}
\begin{equation}
D\left[ \alpha _{j}(t)\right] =\sum_{n=0}^{\infty }(\sum\limits_{l=0}^{\infty }\beta _{n}^{n+l}(\lambda _{j})|n+l\rangle \langle n|e^{il\omega _{j}t}+\sum\limits_{l^{\prime }=1}^{\infty }\beta _{n}^{n+l^{\prime }}(\lambda _{j})|n\rangle \langle n+l^{\prime }|e^{-il^{\prime }\omega _{j}t})
\end{equation}
with
\begin{equation}
\beta _{n}^{n+l}(\lambda _{j})=\sqrt{\frac{n!}{(n+l)!}}\left( 2i\lambda _{j}\right) ^{l}e^{-2\lambda _{j}^{2}}L_{n}^{(l)}(4\lambda _{j}^{2}).
\end{equation}
Here $\beta _{n}^{n+l}(\lambda _{j})$ is a generalized Franck-Condon factor that describes an $l$-photon transition rate, and $L_{n}^{(l)}(4\lambda _{j}^{2})$ is a Laguerre polynomial. As a further step, we tune the bias voltage $V$ to meet the resonance condition $\omega _{J}=\delta +\omega _{1}+\omega _{2}$, i.e., the energy provided by the voltage source upon the transfer of a Cooper pair matches the sum of the excitation energy of the qubit and the photon energies of the two oscillators. In this case, we can retain the resonant terms but discard the fast oscillating terms under the rotating-wave approximation, provided that the condition $\delta ,\omega _{j},|\omega _{1}-\omega _{2}|\gg \frac{E_{J}}{4}|\beta _{n}^{n+l}(\lambda _{1})\beta _{m}^{m+l^{\prime }}(\lambda _{2})|$ is satisfied. Thus, with all the other possibilities strongly suppressed, the effective Hamiltonian is derived as
\begin{equation}
H_{eff}=\sum\limits_{n,m=0}^{\infty }H_{n,m}
\end{equation}
with
\begin{equation}
H_{n,m}=g_{eff}^{n,m}|n,m,g\rangle \langle n+1,m+1,e|+h.c.,
\end{equation}
where $g_{eff}^{n,m}=\frac{E_{J}}{4}\beta _{n}^{n+1}(\lambda _{1})\beta _{m}^{m+1}(\lambda _{2})$ is the effective coupling strength, and $|n,m,g\rangle $ is the tensor product $|n\rangle \otimes |m\rangle \otimes |g\rangle $.

\begin{figure}[th] \centering \includegraphics[width=9cm]{fig2.eps} \caption{(Color online) Time evolution of the state populations $P_{00g}$ and $P_{11e}$. The blue solid and red dashed curves are simulated with the original Hamiltonian $H_{\mathrm{I}}$, while the black dotted and green dash-dotted curves are achieved with the effective Hamiltonian $H_{\mathrm{eff}}$. The relevant parameters are chosen as $\protect\omega _{1}/2\protect\pi =9$ GHz, $\protect\omega _{2}/2\protect\pi =7$ GHz, $\protect\delta /2\protect\pi =5$ GHz, $E_{J}/2\protect\pi =0.5$ GHz, and $\protect\lambda _{1}=\protect\lambda _{2}=0.2$. } \label{fig.2} \end{figure}

The effective Hamiltonian $H_{eff}$ elucidates the process of energy conversion from the Josephson frequency to qubit excitation and photon production in the two resonators. Note that each subunit $H_{n,m}$ can induce a coherent quantum transition from the state $|n,m,g\rangle $ to $|n+1,m+1,e\rangle $; that is, each Cooper pair can tunnel to simultaneously populate the qubit and add one photon in each cavity.
Since the charge qubit is a two-level system, its excitation will greatly inhibit the whole system's transitions to higher occupations. Thus, the anticorrelations of the charge transport are created: a former Cooper-pair tunneling event acts back onto the next one; that is, a second Cooper pair cannot pass through the circuit until the qubit state is flipped. This is the key point to induce antibunching in the photon emission. If the system is initially prepared in the ground state $|0,0,g\rangle$, the Hamiltonian $H_{0,0}$ will dominate the dynamics, enabling a Rabi oscillation $|0,0,g\rangle \longleftrightarrow |1,1,e\rangle $. Obviously, the absorption of one photon in each cavity is accompanied by an excitation of the charge qubit. Since the charge qubit is treated as a two-level system, its excitation will inhibit further photon absorption, which is the mechanism involved in photon blockade \cite{PhysRevLett.118.133604,PhysRevA.97.013851}. Consequently, the photon-pair blockade can be realized in this two-cavity system, which offers a source of antibunched pairs of two strongly correlated photons. It is also pointed out here that the charge qubit has finite nonlinearity in practice, and many other higher excited states exist. So, the excitation of these states is accompanied by higher photon-number excitations, which will degrade the photon blockade. However, the charge qubit is operated in the regime in which the charging energy $E_{c}$ is much larger than the Josephson coupling energy $E_{Jq}$ \cite{Bouchiat1998,Makhlin1999,Nakamura1999,Jos}. The third energy level has an eigenfrequency of about $6E_{c}$, which is far greater than the transition frequency $E_{Jq}$ of the lowest two energy levels. So, the higher-order qubit excitations are greatly inhibited and have a negligible effect on the photon blockade. Compared with the previous proposal \cite{F1}, our scheme can be implemented with the standard coplanar waveguide resonator designs, greatly lowering the requirement for ultrahigh cavity impedance that is not accessible in current experiments. To check the validity of the approximation, we now investigate the dynamics of the system by numerically solving the Schr\"{o}dinger equation with both the full Hamiltonian $H_{I}$ and the effective Hamiltonian $H_{eff}$. With the system initialized in the ground state $|0,0,g\rangle$, the time evolution of the populations $P_{00g}$ of the state $|0,0,g\rangle $ and $P_{11e}$ of the state $|1,1,e\rangle $ is shown in Figure 2. The perfect Rabi oscillations $|0,0,g\rangle \longleftrightarrow |1,1,e\rangle $ are observed with both Hamiltonians $H_{I}$ and $H_{eff}$, implying that our approximation is valid.

\subsection{Antibunched photon pair emission}
As a trigger of quantum emission of antibunched photon pairs, dissipation has to be taken into account. When the system-environment coupling is considered in the Born-Markov approximation, the time evolution of the density matrix $\rho $ of the whole system is governed by the master equation
\begin{equation}
\frac{d\rho }{dt}=-i[H_{eff},\rho ]+\sum_{j=1,2}\frac{\kappa }{2}D[a_{j}]\rho +\frac{\gamma }{2}D[\sigma _{-}]\rho ,
\end{equation}
where $D[o]\rho =2o\rho o^{\dag }-o^{\dag }o\rho -\rho o^{\dag }o$ is the standard Lindblad operator for a given operator $o$, and $\kappa $ ($\gamma $) denotes the energy damping rate of the cavities (qubit).
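As a numerical illustration of this master equation, the following QuTiP sketch keeps only the dominant coupling of the $g(|0,0,g\rangle \langle 1,1,e|+h.c.)$ type together with the three dissipators above; the coupling strength and damping rates used below are illustrative assumptions rather than values fitted to the device.

\begin{verbatim}
import numpy as np
import qutip as qt

Nf = 3                                   # Fock-space truncation per cavity
a1 = qt.tensor(qt.destroy(Nf), qt.qeye(Nf), qt.qeye(2))
a2 = qt.tensor(qt.qeye(Nf), qt.destroy(Nf), qt.qeye(2))
sm = qt.tensor(qt.qeye(Nf), qt.qeye(Nf), qt.destroy(2))   # qubit sigma_-

g, kappa, gamma = 0.05, 0.10, 0.01       # assumed rates (angular units)
H = g * (a1.dag() * a2.dag() * sm.dag() + a1 * a2 * sm)   # |0,0,g> <-> |1,1,e>

# (kappa/2) D[o] with D[o]rho = 2 o rho o^dag - ... equals the standard
# Lindblad dissipator with rate kappa, hence collapse operators sqrt(rate)*o.
c_ops = [np.sqrt(kappa) * a1, np.sqrt(kappa) * a2, np.sqrt(gamma) * sm]

psi0 = qt.tensor(qt.basis(Nf, 0), qt.basis(Nf, 0), qt.basis(2, 0))  # |0,0,g>
tlist = np.linspace(0, 200, 400)
res = qt.mesolve(H, psi0, tlist, c_ops, e_ops=[a1.dag()*a1, a2.dag()*a2])

rho_ss = qt.steadystate(H, c_ops)        # steady state of the circuit
n1 = qt.expect(a1.dag() * a1, rho_ss)    # mean photon number of cavity 1
g2_11 = qt.expect(a1.dag() * a1.dag() * a1 * a1, rho_ss) / n1**2
\end{verbatim}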
In the presence of dissipation, a pair of strongly correlated photons will be transferred out of the two cavities for each tunneling Cooper pair. We now detail the underlying principle of the fundamental dynamics of this photon-pair emission below.

\begin{figure}[tbh] \centering \includegraphics[width=14.5cm]{fig3.eps} \caption{(Color online) Steady-state photon correlation functions versus $\protect\kappa \protect\tau $: [a], [b] for the standard second-order correlation functions $g_{11}^{(2)}(\protect\tau )$, $g_{22}^{(2)}(\protect\tau )$; [c] for the cross-correlation function $g_{12}^{(2)}(\protect\tau )$; [d] for the generalized second-order correlation function $g_{12,12}^{(2)}(\protect\tau )$. The cavity damping rate is chosen as $\protect\kappa /2\protect\pi =0.1$ GHz, and the other parameters are chosen to be the same as those in Figure 2.} \label{fig.3} \end{figure}

Specifically, we prepare the system initially in the ground state, and the tunneling of a Cooper pair will draw energy quanta from the bias voltage, inducing the coherent transition $|0,0,g\rangle \longrightarrow |1,1,e\rangle $. Due to the energy upper limit of the two-level charge qubit, the system populated in the state $|1,1,e\rangle $ cannot be excited to higher energy levels. This indicates that a second Cooper pair cannot pass through the circuit until the spontaneous emission of the charge qubit. So, to guarantee the desired antibunching, it is crucial to meet the condition $\kappa \gg \gamma $, i.e., the coherence time of the charge qubit is much longer than that of the cavities. In this situation, the two cavities will take the lead in emitting two correlated photons within the lifetime $1/\kappa $, stemming from the spontaneous emission of the $|1,1,e\rangle $ state via the photonic dissipation. Then, the wavefunction of the system collapses into the state $|0,0,e\rangle $ but without Rabi flopping. Only after a quantum jump $|0,0,e\rangle \longrightarrow |0,0,g\rangle $ of the qubit state, within the relatively longer lifetime $1/\gamma $, can a second Cooper pair tunnel to restart the coherent transition $|0,0,g\rangle \longrightarrow |1,1,e\rangle $ for the next emission of a photon pair. This is the mechanism for generating antibunched photon pairs. On the other hand, it is worth noting here that we also cannot make $\gamma$ arbitrarily small. For $\gamma\rightarrow0$, the system seems to behave as a completely antibunched photon-pair source. However, it will take a very long time to flip the state of the charge qubit $|0,0,e\rangle \longrightarrow |0,0,g\rangle$ and reconstruct the state $|1,1,e\rangle $ for the two cavities to emit the next correlated photon pair. This will result in an extremely low emission rate. Therefore, there is a tradeoff between the emission rate and the nonclassical property of the radiation field, which can be balanced by tuning the ratio $\gamma/\kappa$.

\begin{figure}[tbh] \centering \includegraphics[width=10cm]{fig4.eps} \caption{(Color online) The photon-pair emission rate $S$ versus $\protect\gamma/\protect\kappa$. The relevant parameters are chosen to be the same as those in Figure 3.} \label{fig.4} \end{figure}

To describe the quantum statistics of the photon emission, we further study the following time-delay correlation functions
\begin{equation}
g_{pq}^{(2)}(\tau )=\frac{\langle a_{p}^{\dag }(0)a_{q}^{\dag }(\tau )a_{q}(\tau )a_{p}(0)\rangle }{\langle (a_{p}^{\dag }a_{p})(0)\rangle \langle (a_{q}^{\dag }a_{q})(\tau )\rangle }
\end{equation}
with $p=1,2$ and $q=1,2$.
For $p=q$, $g_{pq}^{(2)}(\tau )$ is just the standard second-order correlation function that quantifies the correlation of the photons emitted by a single cavity. For $p\neq q$, it represents the cross-correlation between the photons emitted by different cavities. Besides, we also introduce the generalized second-order correlation function
\begin{equation}
g_{12,12}^{(2)}(\tau )=\frac{\langle a_{1}^{\dag }(0)a_{2}^{\dag }(0)a_{1}^{\dag }(\tau )a_{2}^{\dag }(\tau )a_{1}(\tau )a_{2}(\tau )a_{1}(0)a_{2}(0)\rangle }{\langle (a_{1}^{\dag }a_{2}^{\dag }a_{1}a_{2})(0)\rangle \langle (a_{1}^{\dag }a_{2}^{\dag }a_{1}a_{2})(\tau )\rangle },
\end{equation}
where the joint two-photon emission event by the two cavities is considered as a single entity \cite{Munoz2014,PhysRevA103/053710}. Here, $g_{12,12}^{(2)}(\tau )$ can capture the fundamental dynamics of photon-pair emission, and characterize the quantum statistics of photon pairs. In Figure 3, we plot the different steady-state correlation functions by numerically solving the master equation (10). For $\kappa\gg\gamma$, the zero-delay second-order correlation functions $g_{11}^{(2)}(0)\rightarrow0$ and $g_{22}^{(2)}(0)\rightarrow0$ are observed in Figures 3(a) and 3(b), exhibiting distinct antibunching effects. So, each cavity behaves as an excellent single-photon emitter. As seen in Figure 3(c), the zero-delay cross-correlation function yields $g_{12}^{(2)}(0)\gg 1$. This indicates that a pair of strongly correlated photons are emitted simultaneously by the two cavities. Moreover, the generalized zero-delay second-order correlation function $g_{12,12}^{(2)}(0)\ll1$ in Figure 3(d) manifests clearly that the two cavities release their energies in the form of antibunched photon pairs. In addition, as expected, the changes of the different zero-delay correlation functions indicate that our device approaches a perfect photon-pair emitter as $\gamma$ decreases. Finally, we investigate the emission rate of our photon-pair source. It is defined as
\begin{equation}
S=\kappa \overline{n},
\end{equation}
where $\overline{n}$ is the average photon number of the two cavities. The emission rate $S$ versus $\gamma /\kappa $ is displayed in Figure 4. Under the premise of $\kappa \gg \gamma $, we can observe that a tunable emission rate can be achieved, i.e., $S$ gradually increases with the increase of $\gamma $. Hence, the emission rate can be experimentally controlled by changing the distance between the cavity and the transmission line to adjust the value of $\kappa $ \cite{S2}. With the currently available parameters $\omega _{1}/2\pi =9$ GHz, $\omega _{2}/2\pi =7$ GHz, $\delta /2\pi =5$ GHz, $E_{J}/2\pi =0.5$ GHz, $\lambda _{1}=\lambda _{2}=0.2$, we can obtain an emission rate of the order of MHz.

\section{Conclusion}
In conclusion, we have proposed a practical approach to generate antibunched photon pairs in a Josephson-photonics circuit of a dc voltage-biased Josephson junction coupled to both a superconducting charge qubit and two nondegenerate microwave cavities. Under an appropriate bias voltage, each Cooper pair can tunnel inelastically to cause the excitation of the charge qubit and the creation of one photon in each cavity. We demonstrate that the charge transport can be controlled via the two-level charge qubit, preventing simultaneous Cooper pair tunneling events. As a result, the photon-pair blockade can be realized, i.e., the presence of a qubit excitation will impede further photon absorption.
Together with the photonic dissipation, the two cavities can emit antibunched pairs of two strongly correlated photons with a tunable emission rate. Moreover, the generation of such a nonclassical microwave source is compatible with current experimental architectures, and may stimulate a variety of applications in the field of quantum information science. \section*{acknowledgments} This work was supported by the National Natural Science Foundation of China (Grant Nos. 11704306 and 12074307), the Fundamental Research Funds for the Central Universities (Grant No. 11913291000022), and the Doctoral Scientific Research Foundation of Hubei University of Automotive Technology (Grant No. BK202113).
\section{Introduction}
\label{sec:intro}
\begin{figure}[t] \centering {\includegraphics[width=1.0\linewidth]{intro_tight.pdf}} \caption{ The comparisons of model outputs (logits) and Kullback–Leibler (KL) distance between two networks that are trained from scratch. Analysis is conducted on the CIFAR100-LT dataset with an Imbalanced Factor (IF) of 100. The logits are visualized on the basis of a randomly selected example, and the KL distance is computed based on the whole test set, after which the average results of each category are counted and reported. Although the two networks have the same network structure and training settings, their predictions differ largely from each other, especially in tail classes. Best viewed in color. } \label{fig_intro} \end{figure}

In recent years, deep neural networks have achieved resounding success in various visual tasks, e.g., face analysis~\cite{wan2020multi,zhao2003face}, action and gesture recognition~\cite{Zhou_Li_Wan_2021,mitra2007gesture}. Despite the advances in deep technologies and computing capability, the huge success also highly depends on large well-designed datasets with a roughly balanced distribution, such as ImageNet~\cite{deng2009imagenet}, MS COCO~\cite{lin2014microsoft} and Places~\cite{zhou2017places}. This differs notably from real-world datasets, which usually exhibit long-tailed data distributions~\cite{wang2017learning,liu2019large} where a few head classes occupy most of the data while many tail classes have only a few samples. In such scenarios, the model is easily dominated by those few head classes, whereas low accuracy rates are usually achieved for many other tail classes. Undoubtedly, the long-tailed characteristic challenges deep visual recognition, and also immensely hinders the practical use of deep models. In long-tailed visual recognition, several works focus on designing class re-balancing strategies~\cite{tan2017efficient,tan2019attention,he2009learning,cui2019class,huang2016learning,wang2017learning} and decoupled learning~\cite{kang2019decoupling,cao2019learning}. More recent efforts aim to improve long-tailed learning by using multiple experts~\cite{xiang2020learning,wang2020long,li2020overcoming,cai2021ace,zhang2021test}. The multi-expert algorithms follow a straightforward idea of complementary learning, which means that different experts focus on different aspects and each of them benefits from the specialization in the dominating part. For example, LFME~\cite{xiang2020learning} formulates a network with three experts and forces each expert to learn samples from one of the head, middle and tail classes. Previous multi-expert methods~\cite{wang2020long,xiang2020learning,cai2021ace}, however, only force each expert to learn the knowledge in a specific area, and there is a lack of cooperation among them.
\iffalse In long-tailed visual recognition, most early works focus on designing the class re-balancing strategies~\cite{he2009learning,buda2018systematic,cui2019class,huang2016learning,ren2018learning,wang2017learning} to balance the contributions of training samples for all classes. Although class re-balancing methods can obtain some accuracy improvements, they are often confronted with risk of overfitting due to the distortion in data distribution or high sensitivity of hyper-parameters. Later, some researchers~\cite{kang2019decoupling,cao2019learning} propose to decouple the network training into representation learning and classifier learning.
The decoupled learning can address the overfitting problem via a two-stage learning manner, but the imbalanced data distribution issues have not been well handled especially in the representation learning stage. More recent efforts aim to improve the long-tailed learning by using multiple experts~\cite{xiang2020learning,wang2020long,li2020overcoming,cai2021ace,zhang2021test}. The multi-expert algorithms follow a straightforward idea of complementary learning, which means that different experts focus on different aspects and each of them benefits from the specialization in the dominating part. For example, LFME~\cite{xiang2020learning} formulates a network with three experts and it forces each expert learn samples from one of head, middle and tail classes. Multi-expert framework usually can achieve satisfactory recognition accuracy on long-tailed recognition since each of those experts is well learned on its own field. \fi

Our motivation is inspired by a simple experiment as shown in Fig.~\ref{fig_intro}, where the predictions of different networks vary considerably, particularly in tail classes, even if they have the same network structure and the same training settings. This signifies the great uncertainty in the learning process. One reliable solution to alleviate the uncertainty is collaborative learning through multiple experts, namely, each expert can be a teacher to others and can also be a student that learns additional knowledge from others. Grounded in this, we propose a Nested Collaborative Learning (NCL) for long-tailed visual recognition. NCL contains two main components, namely Nested Individual Learning (NIL) and Nested Balanced Online Distillation (NBOD), the former of which aims to enhance the discriminative capability of each network, and the latter collaboratively transfers the knowledge among any two experts. Both NIL and NBOD are performed in a nested way, where the NIL or NBOD conducts the supervised learning or distillation from a full perspective on all categories, and also implements it from a partial perspective of focusing on some important categories. Moreover, we propose a Hard Category Mining (HCM) to select the hard categories as the important categories, in which a hard category is defined as a category that is not the ground-truth category but has a high predicted score and easily leads to misclassification. The learning manners from different perspectives are nested, related and complementary, which facilitates thorough representation learning. Furthermore, inspired by self-supervised learning~\cite{he2020momentum}, we further employ an additional moving average model for each expert to conduct self-supervision, which enhances the feature learning in an unsupervised manner. In the proposed NCL, each expert is collaboratively learned with others, where knowledge transfer between any two experts is allowed. NCL promotes each expert model to achieve better performance, even comparable to an ensemble's. Thus, even if a single expert is used, it can be competent for prediction. Our contributions can be summarized as follows:
\iffalse Inspired by this, the online knowledge distillation is employed for cooperation. Moreover, we further formulate a balanced form for the distillation for long-tailed learning. In addition to the collaborative learning, the individual learning is also important for each expert.
Besides the common supervised loss, e.g., Cross Entropy (CE) loss or Balance Softmax Cross Entropy (BSCE) loss~\cite{ren2020balanced}, we propose a Hard Category Focus (HCF) to improve network's discriminative capability. It differs from traditional losses (e.g., CE loss) that focus on all categories, and it only pays attention to some important categories that are most difficult to distinguish. Based on the HCF, we further formulate the online knowledge distillation as Dual Balanced Online Distillation (DBOD), which distils the knowledge from dual levels, i.e., full category level and partial category level, to complementary and comprehensively conduct the collaborative learning. Finally, inspired by self-supervised learning~\cite{he2020momentum}, we further employ an additional moving average model for each expert to conduct self-supervision, which enhances features learning in a unsupervised manner. The goal of our multi-expert framework is to strengthen the capability of each expert via collaborative Learning. In this way, both a single network and an ensemble can be employed for evaluation, and the single model is promoted to achieve comparable performance to an ensemble's. \fi \iffalse Considering the advantages of multi-expert strategy, our method is also constructed following this scheme. Previous multi-expert methods~\cite{wang2020long,xiang2020learning,cai2021ace} force each expert to only learn the knowledge from a specific area, and there is a lack of interactions and guidance from each other. Different from those methods, our method employs a Multi-Expert Collaborative Learning (MECL) among experts and encourages them to learn from each other. The proposed MECL consists of following parts. The first part is called individual supervised learning, which employs the individual supervised loss (e.g., Cross Entropy (CE) loss or Balance Softmax Cross Entropy (BSCE) loss~\cite{ren2020balanced} ) to learn individual features for each expert. To let the learning be more effective, we further propose a Hard Category Focus (HCF), which facilitates the network to pay attention to the most difficult categories to distinguish. The proposed HCF is highly complementary with the traditional supervised loss, with one oriented to all categories and the other one to only partial categories. To collaboratively learn multiple experts, we propose a Dual Balanced Online Distillation (DBOD) to allow each expert model distillate knowledge from others. The proposed DBOD is different from previous distillation methods~\cite{zhang2018deep,guo2020online} in following aspects: 1) our DBOD is with balanced configuration targeted for long-tailed learning; 2) the distillation is conducted on dual levels, i.e., full category level and partial category level, to complementary and comprehensively exploit discriminative features. Moreover, inspired by self-supervised learning~\cite{he2020momentum}, we further employ an addition moving average model for each expert to conduct self-supervision, which enhances features learning in a unsupervised manner. The goal of the proposed MECL is to improve the discriminative capability of each expert model via the collaborative Learning, which promotes each model to achieve comparable performance to an ensemble's. In this way, only one model is enough for forward prediction. \fi \iffalse The method is motivated by some interesting experiments as shown in Fig.~\ref{}. 
The experiments show that the predictions of different networks varies greatly although they are trained with same settings. More interestingly, the performance on tail classes is dramatically improved when employing a simple ensemble of them, which shows the complementary representations among those models. Inspired with Deep Mutual Learning (DML)~\cite{zhang2018deep}, we propose an collaborative learning based on online knowledge distillation which enables experts to gain extra knowledge from each other. Moreover, ************ \fi \begin{itemize} \setlength{\itemsep}{1.0pt} \item We propose a Nested Collaborative Learning (NCL) to collaboratively learn multiple experts concurrently, which allows each expert model to learn extra knowledge from others. \item We propose a Nested Individual Learning (NIL) and Nested Balanced Online Distillation (NBOD) to conduct the learning from both a full perspective on all categories and a partial perspective of focusing on hard categories. \item We propose a Hard Category Mining (HCM) to greatly reduce the confusion with hard negative categories. \item The proposed method gains significant performance over the state-of-the-art on five popular datasets including CIFAR-10/100-LT, Places-LT, ImageNet-LT and iNaturalist 2018. \end{itemize} \section{Related Work} \label{sec:related} \textbf{Long-tailed visual recognition.} To alleviate the long-tailed class imbalance, lots of studies ~\cite{zhou2020bbn,cao2020domain,xiang2020learning,ren2020balanced,wang2020long,ye2020identifying,liu2020memory} are conducted in recent years. The existing methods for long-tailed visual recognition can be roughly divided into three categories: class re-balancing~\cite{he2009learning,buda2018systematic,cui2019class,huang2016learning,ren2018learning,wang2017learning}, multi-stage training~\cite{kang2019decoupling,cao2019learning} and multi-expert methods~\cite{xiang2020learning,wang2020long,li2020overcoming,cai2021ace,zhang2021test}. Class re-balancing, which aims to re-balance the contribution of each class during training, is a classic and widely used method for long-tailed learning. More specifically, class re-balancing consists of data re-sampling~\cite{chawla2002smote,kang2019decoupling}, loss re-weighting~\cite{lin2017focal,wang2021seesaw,ren2020balanced,tan2020equalization}. Class re-balancing improves the overall performance but usually at the sacrifice of the accuracy on head classes. Multi-stage training methods divide the training process into several stages. For example, Kang et al.~\cite{kang2019decoupling} decouple the training procedure into representation learning and classifier learning. Li et al.~\cite{li2021self} propose a multi-stage training strategy constructed on basis of knowledge distillation. Besides, some other works~\cite{menon2020long,zhang2021distribution} tend to improve performance via a post-process of shifting model logits. However, multi-stage training methods may rely on heuristic design. More recently, multi-expert frameworks receive increasing concern, e.g., LFME~\cite{xiang2020learning}, BBN~\cite{zhou2020bbn}, RIDE~\cite{wang2020long}, TADE~\cite{zhang2021test} and ACE~\cite{cai2021ace}. Multi-expert methods indeed improve the recognition accuracy for long-tailed learning, but those methods still need to be further exploited. For example, most current multi-expert methods employ different models to learn knowledge from different aspects, while the mutual supervision among them is deficient. 
Moreover, they often employ an ensemble of experts to produce predictions, which increases the complexity of the inference phase.
\textbf{Knowledge distillation.} Knowledge distillation is a prevalent technique for knowledge transfer. Early methods~\cite{hinton2015distilling,passalis2018learning} often adopt an offline learning strategy, where the distillation follows a teacher-student scheme~\cite{hinton2015distilling,furlanello2018born} that transfers knowledge from a large teacher model to a small student model. However, the teacher normally has to be a complex high-capacity model, and the training process may be cumbersome and time-consuming. In recent years, knowledge distillation has been extended to an online setting~\cite{zhang2018deep,guo2020online,chen2020online,dvornik2019diversity}, where the whole distillation is conducted in a one-phase, end-to-end training scheme. For example, in Deep Mutual Learning~\cite{zhang2018deep}, any model can act as a student and distil knowledge from all the other models. Guo et al.~\cite{guo2020online} propose to use an ensemble of soft logits to guide the learning. Zhu et al.~\cite{lan2018knowledge} propose a multi-branch architecture that treats each branch as a student to further reduce the computational cost. Online distillation is an efficient way to learn multiple models collaboratively, and it facilitates knowledge transfer among them.
\textbf{Contrastive learning.} Many contrastive methods~\cite{he2020momentum,chen2020improved,chen2020simple,grill2020bootstrap} are built on the task of instance discrimination. For example, Wu et al.~\cite{wu2018unsupervised} propose noise contrastive estimation to compare instances based on a memory bank that stores representations. Representation learning for long-tailed distributions has also been explored~\cite{kang2020exploring}. More recently, Momentum Contrast (MoCo)~\cite{he2020momentum} was proposed to produce the compared representations with a moving-averaged encoder. To enhance the discriminative ability, contrastive learning often compares each sample with many negative samples. SimCLR~\cite{chen2020simple} achieves this by using a large batch size. Later, Chen et al.~\cite{chen2020improved} propose an improved method named MoCo v2, which achieves promising performance without using a large batch size for training. Considering these advantages, our self-supervision is also constructed on this structure.
\begin{figure*}[t]
\centering
{\includegraphics[width=0.9\linewidth]{framework.pdf}}
\caption{ An illustration of our proposed NCL containing three experts. NIL enhances the discriminative ability of a single expert, and NBOD allows knowledge transfer among multiple experts. NIL conducts the supervised learning from both a full and a partial view, which focus on all categories and on some hard categories, respectively. Similarly, NBOD also conducts the knowledge distillation from both a full and a partial view. The contrastive loss is calculated by using an extra momentum encoder and MLP layers, which can be removed in evaluation. Probabilities employed in NIL and NBOD are balanced according to the data distribution. }
\label{fig_framework}
\end{figure*}
\section{Methodology}
\label{sec:method}
The proposed NCL aims to learn multiple experts collaboratively and concurrently, as shown in Fig.~\ref{fig_framework}.
In the following, we first introduce the preliminaries, and then present Hard Category Mining (HCM), Nested Individual Learning (NIL), Nested Balanced Online Distillation (NBOD) and the self-supervision part. Finally, we show the overall loss that aggregates them.
\subsection{Preliminaries}
We denote the training set with $n$ samples as ${ {\mathcal D}} = \{ {\bf x}_i, y_i \}$, where ${\bf x}_i$ indicates the $i$-th image sample and $y_i$ denotes the corresponding label. Assume a total of $K$ experts are employed and the $k$-th expert model is parameterized with ${\bm \theta}_k$. Given image ${\bf x}_i$, the predicted probability of class $j$ in the $k$-th expert is computed as:
\begin{equation}\label{ce_prob} {\tilde {\bf p}}_j({\bf x}_i;{\bm \theta}_k) = \frac{exp({ {z}_{ij}^k }) }{\sum_{l=1}^C exp({{z}_{il}^k }) } \end{equation}
where $z_{ij}^k$ is the $k$-th expert model's class-$j$ output and $C$ is the number of classes. This is a widely used way to compute the predicted probability, and losses such as the Cross Entropy (CE) loss are computed based on it. However, it does not take the data distribution into account, and is therefore not suitable for long-tailed visual recognition, where a naively learned model based on ${\tilde {\bf p}}({\bf x}_i;{\bm \theta}_k)$ would be largely dominated by head classes. Therefore, some researchers~\cite{ren2020balanced} proposed to compute the predicted probability of class $j$ in a balanced way:
\begin{equation}\label{bsce_prob} {\bf p}_{j} ({\bf x}_i;{\bm \theta}_k) = \frac{n_{j} exp({ {z}_{ij}^k }) }{\sum_{l=1}^C n_l exp({{z}_{il}^k }) } \end{equation}
where $n_j$ is the total number of samples of class $j$. In this way, the contributions of tail classes are strengthened while the contributions of head classes are suppressed. Based on such balanced probabilities, Ren et al.~\cite{ren2020balanced} further proposed the Balanced Softmax Cross Entropy (BSCE) loss to alleviate the long-tailed class imbalance in model training. However, the BSCE loss alone is not enough, since the uncertainty in training still cannot be eliminated.
\subsection{Hard Category Mining}
In representation learning, one well-known and effective strategy to boost performance is Hard Example Mining (HEM)~\cite{hermans2017defense}. HEM selects hard samples for training while discarding easy samples. However, directly applying HEM to long-tailed visual recognition may distort the data distribution and make it even more skewed. Differing from HEM, we propose a milder method named Hard Category Mining (HCM), which selects hard categories rather than samples for training and explicitly improves the ability to distinguish a sample from its hard categories. In HCM, a hard category is a category that is not the ground-truth category but receives a high predicted score. Therefore, the hard categories can be selected by comparing the values of the model's outputs. Specifically, among the $C$ categories in total, suppose $C_{hard}$ categories are selected to focus on. For the sample ${\bf x}_i$ and expert $k$, the corresponding set ${\bm \Psi}_i^k$ containing the outputs of the selected categories is denoted as:
\begin{equation}\label{hard_select} {\bm \Psi}_i^k = TopHard\{ z_{ij}^k | j \neq y_i \} \cup \{z_{iy_i}^k \} \end{equation}
where $TopHard$ means selecting the $C_{hard}$ categories with the largest outputs.
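To make these definitions concrete, the following is a minimal PyTorch-style sketch of the balanced probability in Eq.~\ref{bsce_prob} and the hard-category selection in Eq.~\ref{hard_select}. It is only an illustration written in our notation: the tensor shapes, the \texttt{class\_counts} argument and the function names are assumptions for exposition, not part of any released implementation.
\begin{verbatim}
import torch

def balanced_probs(logits, class_counts):
    # Balanced probability: p_j = n_j * exp(z_j) / sum_l n_l * exp(z_l),
    # i.e. a softmax over logits shifted by log(n_j).
    # logits: (B, C); class_counts: float tensor of shape (C,).
    return torch.softmax(logits + torch.log(class_counts), dim=-1)

def hard_category_mask(logits, labels, num_hard):
    # Hard-category selection: keep the ground-truth class plus the
    # num_hard non-target classes with the largest logits.
    masked = logits.clone()
    masked.scatter_(1, labels.unsqueeze(1), float('-inf'))  # hide ground truth
    topk = masked.topk(num_hard, dim=1).indices
    mask = torch.zeros_like(logits, dtype=torch.bool)
    mask.scatter_(1, topk, True)
    mask.scatter_(1, labels.unsqueeze(1), True)              # add ground truth back
    return mask
\end{verbatim}
The balanced probabilities over the selected categories, formalized next, can then be obtained by applying the same count-shifted softmax to the masked logits only.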
In order to adapt to long-tailed learning better, we computed the probabilities of the selected categories in a balanced way, which is shown as: \begin{equation}\label{hard_p} {\bf p}^* ({\bf x}_i;{\bm \theta}_k) = \{ \frac{n_j exp( { {z}_{ij}^k }) }{\sum_{z_{il}^k \in {\bm \Psi}_i^k } n_l exp({{z}_{il}^k }) } | {z}_{ij}^k \in {\bm \Psi}_i^k \} \end{equation} \subsection{Nested Individual Learning} The individual supervised learning on each expert is also an important component in our NCL, which ensures that each network can achieve the strong discrimination ability. To learn thoroughly, we proposed a Nested Individual Learning (NIL) to perform the supervision in a nested way. Besides the supervision on all categories for a global and robust learning, we also force the network to focus on some important categories selected by HCM, which enhance model's meticulous distinguishing ability. The supervision on all categories is trivial and constructed on BSCE loss. Since our framework is constructed on multiple experts, the supervision is applied to each expert and the loss on all categories over all experts is the sum of the loss of each expert: \begin{equation}\label{nil_all} L_{nil}^{all} = -\textstyle{\sum\nolimits_k} log( { {\bf p}}_{y_i} ({\bf x}_i;{\bm {\bm \theta} }_k) ) \end{equation} For the supervision on hard categories, it also can be obtained in a similar way. Mathematically, it can be represented as: \begin{equation}\label{nil_hard} L_{nil}^{hard} = -\textstyle{\sum\nolimits_k} log( { {\bf p}^*}_{y_i} ({\bf x}_i;{\bm {\bm \theta} }_k) ) \end{equation} In the proposed NIL, the two nested supervisions are employed together to achieve a comprehensive learning, and the summed loss is written as: \begin{equation}\label{loss_dil} L_{nil} = L_{nil}^{all} + L_{nil}^{hard} \end{equation} \iffalse \subsection{Overview} The proposed Multi-Expert Collaborative Learning (MECL) collaboratively and concurrently learn multiple experts to address uncertain predictions in the long-tailed setting, which allows each expert to learn extra knowledge from other experts. An illustration of our MECL is shown in Fig.~\ref{fig_framework}. In MECL, we propose a Dual Balanced Online Distillation (DBOD) for online knowledge transferring. The proposed DBOD differs from previous works~\cite{chen2020online,guo2020online,zhang2018deep} from two aspects as following. First, DBOD distills the knowledge on the re-balanced probability distribution rather than the original probability distribution, which learns knowledge in a balanced way. Second, the knowledge distillation is performed on dual levels. Besides giving a full perspective on all categories, we also add an extra distillation supervision on some important categories from a partial perspective. The important categories are defined as hard categories, that is, categories that are easy to cause misclassification. To achieve this, we proposed a Hard categories Focus (HCF) to focus on hard categories for training. In addition to the cooperation among multiple experts, the individual learning on each expert is also important. Similar to DBOD, the individual learning is also conducted from two aspects, where one is the traditional supervised loss on all categories, like Cross-Entropy (CE) or Balanced Softmax Cross-Entropy (BSCE) loss, and the other is the loss function based on HCF to focus on hard categories. Furthermore, we employ a self-supervision to further enhance the feature learning. 
In the following, we shall first introduce CE and BSCE loss, and then present HCF, DBOD and the employed self-supervision. Finally, we present a training scheme of how to aggregate them together. \fi \iffalse The proposed MECL consists of four parts. The first part is called individual supervised learning, where each expert would be individually trained in supervised manner with a classification loss. In the first part, the supervised loss is performed over all categories, where some of them may be easy categories that contribute very little in the learning. Therefore, we propose the second part called Hard Category Focus (HCF), which draws the attention of the model to the most difficult categories to distinguish. In the third part, we propose a Dual Balanced Online Distillation (DBOD) to encourage each expert model collaboratively distillate knowledge from other models. As shown in Fig~\ref{***}, DBOD is performed on dual levels, i.e., full category level and partial category level, which comprehensively transfers the knowledge to each other. The last part is to employ the self-supervision to enhance the feature learning. In the following, we first introduce aforementioned four components, and then present the overall loss of how to aggregate them together. \subsection{CE and BSCE losses} We denote the training set with $n$ samples as ${ {\mathcal D}} = \{ {\bf x}_i, y_i \}$, where ${\bf x}_i$ indicates the $i^{th}$ image sample and $y_i$ denotes the corresponding label. Assume a total of $K$ experts are employed and the $k^{th}$ expert model is parameterized with ${\bm \theta}_k$. Each model is first trained in a supervised manner to learn their own features. Generally, a most common loss, namely Cross-Entropy (CE) loss, is employed as the loss function, which can be formulated as: \begin{equation}\label{ce_loss} L_{ce}^k = - log( {\tilde {\bf p}}_{y_i} ({\bf x}_i;{\bm {\bm \theta} }_k) ) \end{equation} where ${\tilde {\bf p}}_j({\bf x}_i;{\bm \theta}_k) = \frac{exp({ {z}_{ik} }) }{\sum_{j=1}^C exp({{z}_{ij} }) }$ represents the probability of classifying the image ${\bf x}_i$ to class $j$, $z_{ij}$ is the model's class-$j$ output and $C$ is the number of classes. The CE loss can deal well with the classification problem with a balanced data distribution. However, it is not suitable for long-tailed visual recognition, where a immaturely learned model would be largely dominated by head classes. To alleviate the effect of long-tailed distribution, we re-balance the classifying probability $p_{y_i} ({\bf x}_i;{\bm \theta}_k)$ as following: \begin{equation}\label{bsce_p} {\bf p}_{y_i} ({\bf x}_i;{\bm \theta}_k) = \frac{n_{y_i} exp({ {z}_{iy_i} }) }{\sum_{j=1}^C n_j exp({{z}_{ij} }) } \end{equation} where $n_j$ is the total number of samples of class $j$. The CE loss with the re-balanced probabilities is called Balanced Softmax Cross-Entropy (BSCE) loss~\cite{ren2020balanced}, which is denoted as $L_{bsce}^k = - log( { {\bf p}}_{y_i} ({\bf x}_i;{\bm \theta}_k) )$. Specifically, contributions of tail classes are strengthened while contributions of head classes are suppressed. To the end, the loss to train all expert models is the sum of all individual losses: \begin{equation}\label{bsce_all} L_{bsce} = -\textstyle{\sum\nolimits_k} L_{bsce}^k \end{equation} \subsection{Hard Category Focus} In representation learning, one well-known and effective strategy to boost performance is Hard Example Mining (HEM)~\cite{hermans2017defense}. 
HEM selects hard samples for training while discarding easy samples, which holds that easy samples contribute very little and are even detrimental to features learning. HEM may also be of value to long-tailed visual recognition. However, HEM of discarding easy samples may distort the data distribution and make it more skewed in long-tailed learning. Differing from HEM, we propose a more amicable method named Hard Category Focus (HCF) to exclusively select hard categories for training, which explicitly improves the ability of distinguishing between hard categories. In HCF, the hard category means the category that is not the ground-truth category but with a high predicted score. Therefore, the hard categories can be selected by comparing values of model's outputs. Specifically, we have $C$ categories in total and suppose $C_{hard}$ categories are selected to focus on. For the sample ${\bf x}_i$, the corresponding set ${\bm \Psi}_i$ of containing the outputs of selected categories is denoted as: \begin{equation}\label{hard_select} {\bm \Psi}_i = TopHard\{ z_{ij} | j \neq y_i \} \cup \{z_{iy_i} \} \end{equation} where $TopHard$ means selecting $C_{hard}$ examples with largest values. Then, the classification loss like BSCE also can be employed to compute the loss over the selected set ${\bm \Psi}_i$ , which facilitates the model to distinguish the input sample from those confusing categories. Here we take HCF with the BSCE loss as an example. Then, the probabilities over the selected categories are re-computed as: \begin{equation}\label{hard_p} {\bf p}^* ({\bf x}_i;{\bm \theta}_k) = \{ \frac{n_l exp( { {z}_{il} }) }{\sum_{z_{ij} \in {\bm \Psi}_i } n_j exp({{z}_{ij} }) } | {z}_{il} \in {\bm \Psi}_i \} \end{equation} Specifically, the BSCE loss over ${\bf p}^*$ on all experts can be represented as: \begin{equation}\label{bsce_p} L_{hcf} = -\textstyle{\sum\nolimits_k} log( { {\bf p}}_{y_i}^* ({\bf x}_i;{\bm \theta}_k) ) \end{equation} Our HCF also can apply to the classical CE loss, which can be extended to other classification tasks. \fi \subsection{Nested Balanced Online Distillation} To collaboratively learn multiple experts from each other, online distillation is employed to allow each model to learn extra knowledge from others. Previous methods~\cite{zhang2018deep,guo2020online} consider the distillation from a full perspective of all categories, which aims to capture global and robust knowledge. Different from previous methods, we propose a Nested Balanced Online Distillation (NBOD), where the distillation is conducted not only on all categories, but also on some hard categories that are mined by HCM, which facilitates the network to capture meticulous distinguishing ability. According to previous works~\cite{zhang2018deep,guo2020online}, the Kullback Leibler (KL) divergence is employed to perform the knowledge distillation. The distillation on all categories can be formulated as: \begin{equation}\label{eq_dis_all} L_{dis}^{all} = \frac{1}{K(K-1)} {\sum_{k}^{K} \sum_{q \neq k}^{K} } KL( {\bf p} ({\bf x}_i;{\bm \theta}_k) || {\bf p} ({\bf x}_i;{\bm \theta}_q) ) \end{equation} As we can see, the distillation is conducted among any two experts. Note here that we use balanced distributions instead of original distributions to compute KL distance, which aims to eliminate the distribution bias under the long-tailed setting. And this is also one aspect of how we distinguish from other distillation methods. 
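As an illustration of the term above, the following sketch computes the pairwise balanced distillation of Eq.~\ref{eq_dis_all}, assuming that each expert provides a tensor of logits; the function and variable names, and the choice not to detach the target distribution, are illustrative assumptions rather than details fixed by the paper.
\begin{verbatim}
import torch
import torch.nn.functional as F

def nbod_all(expert_logits, class_counts):
    # Average KL(p_k || p_q) over all ordered expert pairs (k, q),
    # computed on balanced probabilities (logits shifted by log n_j).
    K = len(expert_logits)
    shifted = [z + torch.log(class_counts) for z in expert_logits]
    loss = 0.0
    for k in range(K):
        for q in range(K):
            if q == k:
                continue
            p_k = F.softmax(shifted[k], dim=-1)          # target distribution
            log_p_q = F.log_softmax(shifted[q], dim=-1)  # student distribution
            loss = loss + F.kl_div(log_p_q, p_k, reduction='batchmean')
    return loss / (K * (K - 1))
\end{verbatim}
The hard-category term introduced next can be computed analogously by restricting the logits to the categories selected by HCM before the softmax.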
Moreover, all experts employ the same hard categories for distillation, and we randomly select an expert as an anchor to generate hard categories for all experts. Similarly, the distillation on hard categories also can be formulated as: \begin{equation}\label{eq_dis_hard} L_{dis}^{hard} = \frac{1}{K(K-1)} {\sum_{k}^{K} \sum_{q \neq k}^{K} } KL( {\bf p}^* ({\bf x}_i;{\bm \theta}_k) || {\bf p}^* ({\bf x}_i;{\bm \theta}_q) ) \end{equation} The nested distillation on both all categories and hard categories are learned together, which is formulated as: \begin{equation}\label{eq_dis} L_{dis} = L_{dis}^{all} + L_{dis}^{hard} \end{equation} \iffalse The loss $L_{kl}({\bm \theta}_k,{\bm \theta}_q)$ and $L_{kl}^*({\bm \theta}_k,{\bm \theta}_q)$ are optimized concurrently to distill the knowledge from the perspective of both all categories and hard categories. Moreover, our framework consists of multiple experts, and the knowledge transferring between any two experts would be considered thoroughly. Very natural, the proposed DBOD among all experts can be computed as: \begin{equation}\label{mutual_all} L_{dis} = \frac{1}{K(K-1)} \textstyle{\sum_{k}^{K} \sum_{q \neq k}^{K} } \big( L_{kl} ({\bm \theta}_k,{\bm \theta}_q) + L_{kl}^*({\bm \theta}_k,{\bm \theta}_q) \big) \end{equation} \fi \subsection{Feature Enhancement via Self-Supervision} Self-supervised learning aims to improve feature representations via an unsupervised manner. Following previous works~\cite{he2020momentum,chen2020improved}, we adopt the instance discrimination as the self-supervised proxy task, in which each image is regarded as a distinct class. We leverage an additional temporary average model so as to conduct self-supervised learning, and its parameters are updated following a momentum-based moving average scheme~\cite{he2020momentum,chen2020improved} as shown in Fig.~\ref{fig_framework}. The employed self-supervision is also a part of our NCL, which cooperatively learns an expert model and its moving average model to capture better features. Take the self-supervision for expert $k$ as an example. Let ${\bf v}_i^k$ denote the normalized embedding of the $i^{th}$ image in the original expert model, and ${\bf {\tilde v} }_i^k$ denote the normalized embedding of its copy image with different augmentations in the temporally average model. Besides, a dynamic queue $\mathcal{Q}^k$ is employed to collect historical features. The samples in the queue are progressively replaced with the samples in current batch enqueued and the samples in oldest batch dequeued. Assume that the queue $\mathcal{Q}^k$ has a size of $N$ and $N$ can be set to be much larger than the typical batch size, which provides a rich set of negative samples and thus obtains better feature representations. The goal of instance discrimination task is to increase the similarity of features of the same image while reduce the similarity of the features of two different images. We achieve this by using a contrastive learning loss, which is computed as: \begin{small} \begin{equation}\label{contrastive_loss} L_{con}^k = -log( \frac{exp( {{\bf v}_i^k}^T {\bf {\tilde v}}_i^k /\tau )} { exp( { {\bf v}_i^k }^T {\bf {\tilde v}}_i^k /\tau) + \sum_{{\bf {\tilde v}}_j^k \in { \mathcal{Q}^k}} exp( { {\bf v}_i^k }^T {\bf {\tilde v}}_j^k /\tau) } ) \end{equation} \end{small} where $\tau$ is a temperature hyper-parameter. Similar to Eq.~\ref{nil_all} and Eq.~\ref{nil_hard}, the self-supervised loss over all experts can be represented as $L_{con} = \sum\nolimits_k L_{con}^k$. 
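For completeness, a minimal sketch of the per-expert contrastive term in Eq.~\ref{contrastive_loss} is given below, assuming $\ell_2$-normalized embeddings and a queue of negative features; the temperature value and all names are illustrative assumptions, and the momentum update of the temporally averaged model as well as the queue maintenance are omitted.
\begin{verbatim}
import torch
import torch.nn.functional as F

def contrastive_loss(v, v_tilde, queue, tau=0.2):
    # v:       (B, D) normalized embeddings from the online expert
    # v_tilde: (B, D) normalized embeddings of the augmented copies
    #          from the temporally averaged model
    # queue:   (N, D) normalized embeddings of past samples (negatives)
    pos = (v * v_tilde).sum(dim=1, keepdim=True) / tau   # (B, 1) positive logit
    neg = v @ queue.t() / tau                            # (B, N) negative logits
    logits = torch.cat([pos, neg], dim=1)                # positive is class 0
    labels = torch.zeros(v.size(0), dtype=torch.long, device=v.device)
    return F.cross_entropy(logits, labels)
\end{verbatim}
The cross entropy with target index $0$ equals the negative logarithm in Eq.~\ref{contrastive_loss}.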
\subsection{Model Training}
The overall loss in our proposed NCL consists of three parts: the loss $L_{nil}$ of our NIL for learning each expert individually, the loss $L_{dis}$ of our NBOD for cooperation among multiple experts, and the loss $L_{con}$ of self-supervision. The overall loss $L$ is formulated as:
\begin{equation}\label{mcl_loss} L = L_{nil} + L_{con} + \lambda L_{dis} \end{equation}
where $\lambda$ denotes the loss weight that balances the contribution of the cooperation among multiple experts. $L_{nil}$ and $L_{con}$ act within a single expert, and for generality we simply set both of their weights to 1.
\section{Experiments}
\subsection{Datasets and Protocols}
We conduct experiments on five widely used datasets, including CIFAR10-LT~\cite{cui2019class}, CIFAR100-LT~\cite{cui2019class}, ImageNet-LT~\cite{liu2019large}, Places-LT~\cite{zhou2017places}, and iNaturalist 2018~\cite{van2018inaturalist}.
\textbf{CIFAR10-LT and CIFAR100-LT}~\cite{cui2019class} are created from the original balanced CIFAR datasets~\cite{krizhevsky2009learning}. Specifically, the degree of data imbalance is controlled by an Imbalance Factor (IF), defined as the number of samples in the most frequent category divided by that in the least frequent category. Imbalance factors of 100 and 50 are employed for these two datasets.
\textbf{ImageNet-LT}~\cite{liu2019large} is sampled from the popular ImageNet dataset~\cite{deng2009imagenet} under a long-tailed setting following the Pareto distribution with power value $\alpha=6$. ImageNet-LT contains 115.8K images from 1,000 categories.
\textbf{Places-LT} is created from the large-scale dataset Places~\cite{zhou2017places}. This dataset contains 184.5K images from 365 categories.
\textbf{iNaturalist 2018}~\cite{van2018inaturalist} is the largest dataset for long-tailed visual recognition. iNaturalist 2018 contains 437.5K images from 8,142 categories, and it is extremely imbalanced with an imbalance factor of 512. Following previous works~\cite{cui2019class,kang2019decoupling}, the top-1 accuracy is employed for evaluation. Moreover, for the iNaturalist 2018 dataset, we follow~\cite{cai2021ace,kang2019decoupling} and divide the classes into many (more than 100 images), medium (20 $\sim$ 100 images) and few (less than 20 images) splits, and further report the results on each split.
\subsection{Implementation Details}
For CIFAR10/100-LT, following~\cite{cao2019learning, zhang2021bag}, we adopt ResNet-32~\cite{he2016deep} as the backbone network and a linear classifier for all the experiments. We utilize ResNet-50~\cite{he2016deep} and ResNeXt-50~\cite{xie2017aggregated} as backbone networks for ImageNet-LT, ResNet-50 for iNaturalist 2018 and a pretrained ResNet-152 for Places-LT, following~\cite{liu2019large,kang2019decoupling,cui2021parametric}. Following~\cite{zhang2021distribution}, a cosine classifier is utilized for these models. Due to the use of the self-supervision component, we adopt the same training strategy as PaCo~\cite{cui2021parametric}, i.e., training all models for 400 epochs except those on Places-LT, which are trained for 30 epochs. In addition, for a fair comparison, following~\cite{cui2021parametric}, RandAugment~\cite{cubuk2020randaugment} is also used for all the experiments except Places-LT. The influence of RandAugment will be discussed in detail in Sec.~\ref{Component_Analysis}. These models are trained on 8 NVIDIA Tesla V100 GPUs. The ratio $\beta = C_{hard} / C$ in HCM is set to 0.3.
And the ratio of Nested Balanced Online Distillation loss $\lambda$, which plays its part among networks, is set to 0.6. The influence of $\beta$ and $\lambda$ will be discussed in detail in Sec.~\ref{Component_Analysis}. \setlength{\tabcolsep}{5pt} \begin{table}[t] \centering \resizebox{\linewidth}{!}{ \begin{tabular}{l|c|cc|cc} \toprule[1pt] \multirow{2}{*}{Method} & \multirow{2}{*}{Ref.} & \multicolumn{2}{c|}{ CIFAR100-LT } & \multicolumn{2}{c}{ CIFAR10-LT } \\ \cline{3-6} & & 100 & 50 & 100 & 50 \\ \hline CB Focal loss~\cite{cui2019class} & CVPR'19 & 38.7 & 46.2 & 74.6 & 79.3\\ LDAM+DRW~\cite{cao2019learning} & NeurIPS'19 & 42.0 & 45.1 & 77.0 & 79.3\\ LDAM+DAP~\cite{jamal2020rethinking} & CVPR'20 & 44.1 & 49.2 & 80.0 & 82.2\\ BBN~\cite{zhou2020bbn} & CVPR'20 & 39.4 & 47.0 & 79.8 & 82.2\\ LFME~\cite{xiang2020learning} & ECCV'20 & 42.3 & -- & -- & --\\ CAM~\cite{zhang2021bag} & AAAI'21 & 47.8 & 51.7 & 80.0 & 83.6\\ Logit Adj.~\cite{menon2020long} & ICLR'21 & 43.9 & -- & 77.7 & --\\ RIDE~\cite{wang2020long} & ICLR'21 & 49.1 & -- & -- & --\\ LDAM+M2m~\cite{kim2020m2m}& CVPR'21 & 43.5 & -- & 79.1 & --\\ MiSLAS~\cite{zhong2021improving} & CVPR'21 & 47.0 & 52.3 & 82.1 & 85.7 \\ LADE~\cite{hong2021disentangling} & CVPR'21 & 45.4 & 50.5 & -- & --\\ Hybrid-SC~\cite{wang2021contrastive} & CVPR'21 & 46.7 & 51.9 & 81.4 & 85.4 \\ DiVE~\cite{he2021distilling} & ICCV'21 & 45.4 & 51.3 & -- & -- \\ SSD~\cite{li2021self_iccv} & ICCV'21 & 46.0 & 50.5 & -- & --\\ ACE~\cite{cai2021ace}& ICCV'21 & 49.6 & 51.9 & 81.4 & 84.9\\ PaCo~\cite{cui2021parametric} & ICCV'21 & 52.0 & 56.0 & -- & --\\ \hline BSCE (baseline) & -- & 50.6 & 55.0 & 84.0 & 85.8 \\ Ours (single) & -- & \textbf{53.3} & \textbf{56.8} & \textbf{84.7} & \textbf{86.8} \\ Ours (ensemble) & -- & \textbf{54.2} & \textbf{58.2} & \textbf{85.5} & \textbf{87.3} \\ \bottomrule[1pt] \end{tabular}}\caption{Comparisons on CIFAR100-LT and CIFAR10-LT datasets with the IF of 100 and 50. 
} \label{results_cifar} \end{table} \iffalse \setlength{\tabcolsep}{8pt} \begin{table*}[t] \centering \resizebox{0.9\linewidth}{!}{ \begin{tabular}{l|c|c|ccc|c|c|ccc|c} \toprule[1pt] \multirow{2}{*}{Method} & \multirow{2}{*}{Ref.} & \multicolumn{5}{c|}{ ResNet-50 } & \multicolumn{5}{c}{ ResNeXt-50 } \\ \cline{3-12} & & GFlops & Many & Medium & Few & All & GFlops & Many & Medium & Few & All\\ \hline BBN~\cite{zhou2020bbn} & CVPR'20 & --& --& --& --& 48.3 & --& --& --& --& 49.3\\ NCM~\cite{kang2019decoupling} & ICLR'20 & -- & 53.1 & 42.3 & 26.5 & 44.3 & -- & 56.6 & 45.3 & 28.1 & 47.3\\ cRT~\cite{kang2019decoupling} & ICLR'20 & 4.11 (1.0x) & 58.8 & 44.0 & 26.1 & 47.3 & 4.26 (1.0x) & 61.8 & 46.2 & 27.4 & 49.6\\ $\tau$-norm~\cite{kang2019decoupling} & ICLR'20 & 4.11 (1.0x) & 56.6 & 44.2 & 27.4 & 46.7 & 4.26 (1.0x) & 59.1 & 46.9 & 30.7 & 49.4\\ LWS~\cite{kang2019decoupling} & ICLR'20 & 4.11 (1.0x) & 57.1 & 45.2 & 29.3 & 47.7 & 4.26 (1.0x) & 60.2 & 47.2 & 30.3 & 49.9\\ RIDE~\cite{wang2020long} & ICLR'21 & 5.15 (1.3x) & 66.2 & 52.3 & 36.5 & 55.4 & 5.19 (1.2x) & 68.2 & 53.8 & 36.0 & 56.8\\ DisAlign~\cite{zhang2021distribution} & CVPR'21 & -- & 61.3 & 52.2 & 31.4 & 52.9 & --& --& --& --& --\\ DiVE~\cite{he2021distilling} & ICCV'21 & -- & 64.06 & 50.41 & 31.46 & 53.10 & --& --& --& --& --\\ SSD~\cite{li2021self_iccv} & ICCV'21 & --& --& --& --& --& -- & 66.8 & 53.1 & 35.4 & 56.0\\ ACE~\cite{cai2021ace} & ICCV'21 & --& --& --& --& 54.7 & --& --& --& --& 56.6\\ PaCo~\cite{cui2021parametric} & ICCV'21 & -- & 65.0 & 55.7 & 38.2 & 57.0 & -- & 67.5 & 56.9 & 36.7 & 58.2\\ \hline Ours (single) & -- &&&&&\textbf{57.4}& \\ Ours (ensemble) & -- &&&&&\textbf{58.34}& \\ \bottomrule[1pt] \end{tabular} } \caption{Comparisons on ImageNet-LT dataset. } \label{results_imagenet} \end{table*} \fi \setlength{\tabcolsep}{6pt} \begin{table}[t] \centering \resizebox{1.0\linewidth}{!}{ \begin{tabular}{l|c|cc|c} \toprule[1pt] \multirow{2}{*}{Method} & \multirow{2}{*}{Ref.} & \multicolumn{2}{c|}{ImageNet-LT} & Places-LT \\ \cline{3-5} & & Res50 & ResX50 & Res152\\ \hline OLTR~\cite{liu2019large} & CVPR'19 & -- & -- & 35.9\\ BBN~\cite{zhou2020bbn} & CVPR'20 & 48.3 & 49.3 & --\\ NCM~\cite{kang2019decoupling} & ICLR'20 & 44.3 & 47.3 & 36.4\\ cRT~\cite{kang2019decoupling} & ICLR'20 & 47.3 & 49.6 & 36.7\\ $\tau$-norm~\cite{kang2019decoupling} & ICLR'20 & 46.7 & 49.4 & 37.9\\ LWS~\cite{kang2019decoupling} & ICLR'20 & 47.7 & 49.9 & 37.6\\ BSCE~\cite{ren2020balanced} & NeurIPS'20 & -- & -- & 38.7 \\ RIDE~\cite{wang2020long} & ICLR'21 & 55.4 & 56.8 & --\\ DisAlign~\cite{zhang2021distribution} & CVPR'21 & 52.9 & -- & --\\ DiVE~\cite{he2021distilling} & ICCV'21 & 53.1 & -- & --\\ SSD~\cite{li2021self_iccv} & ICCV'21 &--& 56.0 & --\\ ACE~\cite{cai2021ace} & ICCV'21 & 54.7 & 56.6 & --\\ PaCo~\cite{cui2021parametric} & ICCV'21 & 57.0 & 58.2 & 41.2\\ \hline BSCE (baseline) & -- &53.9&53.6&40.2\\ Ours (single) & -- &\textbf{57.4}& \textbf{58.4} & \textbf{41.5}\\ Ours (ensemble) & -- &\textbf{59.5}& \textbf{60.5}& \textbf{41.8}\\ \bottomrule[1pt] \end{tabular} } \caption{Comparisons on ImageNet-LT and Places-LT datasets. 
} \label{results_imagenet} \end{table} \setlength{\tabcolsep}{5pt} \begin{table}[t] \centering \resizebox{0.95\linewidth}{!}{ \begin{tabular}{l|c|ccc|c} \toprule[1pt] \multirow{2}{*}{Method} & \multirow{2}{*}{Ref.} & \multicolumn{4}{c}{ iNaturalist 2018} \\ \cline{3-6} & & Many & Medium & Few & All \\ \hline OLTR~\cite{liu2019large} & CVPR'19 & 59.0 & 64.1 & 64.9 & 63.9\\ BBN~\cite{zhou2020bbn} & CVPR'20 & 49.4 & 70.8 & 65.3 & 66.3\\ DAP~\cite{jamal2020rethinking}& CVPR'20 & -- & -- & -- & 67.6\\ NCM~\cite{kang2019decoupling} & ICLR'20 & \\ cRT~\cite{kang2019decoupling} & ICLR'20 & 69.0 & 66.0 & 63.2 & 65.2\\ $\tau$-norm~\cite{kang2019decoupling} & ICLR'20 & 65.6 & 65.3 & 65.9 & 65.6\\ LWS~\cite{kang2019decoupling} & ICLR'20 & 65.0 & 66.3 & 65.5 & 65.9\\ LDAM+DRW~\cite{cao2019learning} & NeurIPS'19 & -- & -- & -- & 68.0\\ Logit Adj.~\cite{menon2020long} & ICLR'21 & -- & -- & -- & 66.4\\ CAM~\cite{zhang2021bag} & AAAI'21 & -- & -- & -- & 70.9 \\ RIDE~\cite{wang2020long} & ICLR'21 & 70.9 & 72.4 & 73.1 & 72.6\\ SSD~\cite{li2021self_iccv} & ICCV'21 & \\ ACE~\cite{cai2021ace} & ICCV'21 & -- & -- & -- & 72.9\\ PaCo~\cite{cui2021parametric} & ICCV'21 & -- & -- & -- & 73.2\\ \hline BSCE (baseline) & -- &67.5&72.0&71.5&71.6 \\ Ours (single) & -- &\textbf{72.0}&\textbf{74.9}&\textbf{73.8}&\textbf{74.2} \\ Ours(ensemble) & -- &\textbf{72.7}&\textbf{75.6}&\textbf{74.5}&\textbf{74.9} \\ \bottomrule[1pt] \end{tabular} } \caption{Comparisons on iNaturalist 2018 dataset with ResNet-50. } \label{results_inatu} \end{table} \subsection{Comparisons to Prior Arts} We compare the proposed method NCL with previous state-of-the-art methods, like LWS~\cite{kang2019decoupling}, ACE~\cite{cai2021ace} and so on. Our NCL is constructed based on three experts and both the performance of a single expert and an ensemble of multiple experts are reported. Besides NCL, we also report the baseline results of a network with using BSCE loss for comparisons. Comparisons on CIFAR10/100-LT are shown in Table~\ref{results_cifar}, comparisons on ImageNet-LT and Places-LT are shown in Table~\ref{results_imagenet}, and comparisons on iNaturalist2018 are shown in Table~\ref{results_inatu}. Our proposed method achieves the state-of-the-art performance on all datasets whether using a single expert or an ensemble of all experts. For only using a single expert for evaluation, our NCL outperforms previous methods on CIFAR10-LT, CIFAR100-LT, ImageNet-LT, Places-LT and iNaturalist2018 with accuracies of 84.7\% (IF of 100), 53.3\% (IF of 100), 57.4\% (with ResNet-50), 41.5\% and 74.2\%, respectively. When further using an ensemble for evaluation, the performance on CIFAR10-LT, CIFAR100-LT, ImageNet-LT, Places-LT and iNaturalist2018 can be further improved to 85.5\% (IF of 100), 54.2\% (IF of 100), 59.5\% (with ResNet-50), 41.8\% and 74.9\%, respectively. More results on many, medium and few splits are listed in \textbf{Supplementary Material}. Some previous multi-expert methods were constructed based on a multi-branch network with higher complexity. For example, RIDE~\cite{wang2020long} with 4 experts brings 0.4 times more computation than the original single network. However, our method of only using a single expert for evaluation won't bring any extra computation but still outperforms them. Besides, despite that some previous methods employ a multi-stage training~\cite{li2021self_iccv,kang2019decoupling} or a post-processing~\cite{menon2020long,zhang2021distribution} to further improve the performance, our method still outperforms them. 
The significant gains over the state-of-the-art show the effectiveness of our proposed NCL.
\subsection{Component Analysis}
\label{Component_Analysis}
\textbf{Influence of the ratio of hard categories.} The ratio of selected hard categories is defined as $\beta = C_{hard} / C$. Experiments on our NIL model are conducted within the range of $\beta$ from 0 to 1, as shown in Fig.~\ref{fig_hcf} (a). The highest performance is achieved when setting $\beta$ to 0.3. Setting $\beta$ to too small or too large a value brings limited gains, due to under- or over-exploration of the hard categories.
\textbf{Effect of loss weight.} To find an appropriate value for $\lambda$, experiments on the proposed NCL with a series of $\lambda$ values are conducted, as shown in Fig.~\ref{fig_hcf} (b). $\lambda$ controls the contribution of knowledge distillation among multiple experts in the total loss. The best performance is achieved when $\lambda=0.6$, which shows that a balance is reached between single-network training and knowledge transfer among experts.
\begin{figure}[t]
\centering
{\includegraphics[width=1.0\linewidth]{HCF_b.pdf}}
\vspace{-0.6cm}
\caption{ Parameter analysis of (a) the ratio $\beta$ and (b) the loss weight $\lambda$ on the CIFAR100-LT dataset with an IF of 100. }
\label{fig_hcf}
\end{figure}
\textbf{Impact of the number of experts.} As shown in Fig.~\ref{fig_multi_expert}, experiments using different numbers of experts are conducted. The ensemble performance improves steadily as the number of experts increases, while the performance of a single expert is already greatly improved with only a small number of expert networks, e.g., three experts. Therefore, three experts are mostly employed in our multi-expert framework as a balance between complexity and performance.
\textbf{Single expert vs. multi-expert.} Our method is essentially a multi-expert framework, so the comparison between using a single expert and an ensemble of experts is of particular interest. As shown in Fig.~\ref{fig_multi_expert}, as the number of experts increases, the advantage of the ensemble over a single expert also tends to grow, which demonstrates the power of ensemble learning. For the main goal of our proposed NCL, however, the improvement of a single expert is already impressive with three experts.
\begin{figure}[!t]
\centering
{\includegraphics[width=0.8\linewidth]{expert_num_ana.pdf}}
\vspace{-0.3cm}
\caption{ Comparisons of using different numbers of experts on CIFAR100-LT with an IF of 100. We report the performance of both a single network and an ensemble. Specifically, the performance of a single network is reported as the average accuracy over all experts, and the ensemble performance is computed based on the averaged logits over all experts. }
\label{fig_multi_expert}
\end{figure}
\textbf{Influence of data augmentations.} Data augmentation is a common tool to improve performance. For example, previous works use Mixup~\cite{zhang2021bag,cai2021ace,zhong2021improving,zhang2018mixup} and RandAugment~\cite{cubuk2020randaugment} to obtain richer feature representations. Our method follows PaCo~\cite{cui2021parametric} in employing RandAugment~\cite{cubuk2020randaugment} for the experiments. As shown in Table~\ref{auto_aug_analysis}, the performance is improved by about 3\% to 5\% when RandAugment is employed for training. However, our high performance does not depend entirely on RandAugment.
When dropping RandAugment, our ensemble model still reaches an accuracy of 49.22\%, which is comparable to the current state-of-the-art methods.
\setlength{\tabcolsep}{8pt}
\begin{table}[t]
\centering
\resizebox{0.8\linewidth}{!}{
\begin{tabular}{c|c|c} \toprule[1pt] Method & w/o RandAug & w/ RandAug\\ \hline CE & 41.88 & 44.79 \\ BSCE & 45.88 & 50.60 \\ BSCE+NCL & 47.93 & 53.31 \\ BSCE+NCL$^\dagger$ & 49.22 & 54.42 \\ \bottomrule[1pt] \end{tabular}
}
\caption{Comparisons of training the network with ('w/') and without ('w/o') RandAugment. Experiments are conducted on the CIFAR100-LT dataset with an IF of 100. $^\dagger$ indicates that the ensemble performance is reported. }
\label{auto_aug_analysis}
\end{table}
\textbf{Ablation studies on all components.} We perform detailed ablation studies for our NCL on the CIFAR100-LT dataset, as shown in Table~\ref{analysis_ablation}. To conduct a comprehensive analysis, we evaluate the proposed components, including Self-Supervision ('SS' for short), NIL, NBOD and the ensemble, under two baseline settings using the CE and BSCE losses. Furthermore, for a more detailed analysis, we split NBOD into two parts, namely BOD$_{all}$ and BOD$_{hard}$. Taking the BSCE setting as an example, SS and NIL improve the performance by 0.82\% and 0.64\%, respectively, and employing NBOD further improves the performance from 51.24\% to 53.19\%. When an ensemble is employed for evaluation, the accuracy is further improved and reaches its highest value. For the CE baseline setting, similar improvements are achieved for SS, NIL, NBOD and the ensemble. Generally, benefiting from the label distribution shift, the BSCE loss achieves better performance than the CE loss. Steady performance improvements are achieved for all components under both baseline settings, which shows the effectiveness of the proposed NCL.
\setlength{\tabcolsep}{3pt}
\begin{table}[t]
\centering
\resizebox{0.95\linewidth}{!}{
\begin{tabular}{ccccc|c|c} \toprule[1pt] NIL & SS & BOD$_{all}$ & BOD$_{hard}$ & Ensemble & Acc.@CE & Acc.@BSCE\\ \hline & & & & &44.79 &50.60 \\ \checkmark & & & & &48.18 &51.24 \\ &\checkmark & & & &46.05 &51.42 \\ \checkmark & &\checkmark & & &48.81 &52.64 \\ \checkmark & &\checkmark &\checkmark & &49.34 &53.19 \\ \checkmark &\checkmark &\checkmark &\checkmark & &49.89 &53.31 \\ \checkmark &\checkmark &\checkmark &\checkmark & \checkmark &51.04 &54.42 \\ \bottomrule[1pt] \end{tabular}
}
\vspace{-0.3cm}
\caption{Ablation studies on the CIFAR100-LT dataset with an IF of 100. 'SS' indicates self-supervision. 'BOD$_{all}$' and 'BOD$_{hard}$' represent the balanced online distillation on all categories and on hard categories only, respectively. NBOD denotes the setting in which both 'BOD$_{all}$' and 'BOD$_{hard}$' are employed. Experiments are conducted on the framework containing three experts. }
\label{analysis_ablation}
\end{table}
\begin{figure}[t]
\centering
{\includegraphics[width=1.0\linewidth]{hcfn_top10.pdf}}
\vspace{-0.8cm}
\caption{ (a) The distribution of the largest softmax probability of the hardest negative category. (b) The average KL distance between two models' output probabilities on the test set. Analysis is conducted on CIFAR100-LT with an IF of 100. Best viewed in color. }
\label{fig_top10}
\end{figure}
\subsection{Discussion and Further Analysis}
\iffalse
\textbf{Distribution of hard categories.} We count the distribution of top-10 hard categories of few split during training as shown in Fig.~\ref{fig_top10} (a).
At the initial stage of the network training (e.g., the first ten epoches), the network is dominated by the head classes, which often confuses the tail classes and results into misclassification. With employing our method to train the network, the frequency that the head class become a difficult category for few split is reduced (see the distribution in the final ten epoches). This shows that the proposed HCF helps to reduce the long-tailed bias and pull the sample away from the most difficult categories.
\fi
\textbf{Score distribution of the hardest negative category.} Deep models normally confuse the target sample with the hardest negative category. Here we visualize the score distribution for the baseline method ('BSCE') and our method ('BSCE+NCL'), as shown in Fig.~\ref{fig_top10} (a). The higher the score of the hardest negative category, the more likely it is to produce false recognition. The scores of our proposed method are mainly concentrated in the range of 0--0.2, while the scores of the baseline model are distributed over the whole interval (including large values). This shows that our NCL can considerably reduce the confusion with the hardest negative category.
\textbf{KL distance before and after collaborative learning.} As shown in Fig.~\ref{fig_top10} (b), when networks are trained with our NCL, the KL distance between them is greatly reduced, which shows that the uncertainty in predictions is effectively alleviated. Besides, the KL distance is more balanced than that of BSCE and CE, which indicates that collaborative learning helps to reduce the long-tailed bias.
\textbf{NBOD without balancing the probability.} As shown in Fig.~\ref{balance_offdis} (a), when the balanced probability in NBOD is removed (denoted as 'NOD'), the performance of both the single expert and the ensemble declines by about 1\%, which demonstrates the importance of employing the balanced probability for distillation in long-tailed learning.
\textbf{Offline distillation vs. NBOD.} To further verify the effectiveness of our NBOD, we employ offline distillation for comparison. The offline distillation (denoted as 'NIL+OffDis') first trains three teacher networks with NIL individually, and then produces the teacher labels by averaging the outputs of the three teacher models. The comparisons are shown in Fig.~\ref{balance_offdis} (b). Although NIL+OffDis gains some improvement via offline distillation, its performance is still 1.5\% worse than that of NIL+NBOD. This shows that the collaborative learning of our NBOD can acquire more knowledge than offline distillation.
\iffalse
\setlength{\tabcolsep}{6pt} \begin{table}[t] \centering \resizebox{0.8\linewidth}{!}{ \begin{tabular}{c|ccccc|c} \toprule[1pt] balanced probability & Single expert & Ensemble \\ \hline & 52.1 & 53.4 \\ \checkmark & 53.3 & 54.4 \\ \bottomrule[1pt] \end{tabular} } \caption{ Comparisons of with or without using the balanced probability in NBOD. Experiments are conducted on CIFAR100-LT dataset with an IF of 100. } \label{analysis_re-balancing} \end{table}
\fi
\begin{figure}[t]
\centering
{\includegraphics[width=0.8\linewidth]{balance_ana.pdf}}
\vspace{-0.3cm}
\caption{ (a) Comparisons of using NOD or NBOD for distillation. (b) Comparisons of using offline distillation or our NBOD. Analysis is conducted on CIFAR100-LT with an IF of 100. }
\label{balance_offdis}
\end{figure}
\section{Conclusions}
In this work, we have proposed Nested Collaborative Learning (NCL) to collaboratively learn multiple experts.
Two core components, i.e., NIL and NBOD, are proposed for the individual learning of a single expert and for knowledge transfer among multiple experts. Both NIL and NBOD perform the learning from both a full perspective and a partial perspective, in a nested way. Moreover, we have proposed HCM to capture hard categories for thorough learning. Extensive experiments have verified the superiority of our method.
\textbf{Limitations and Broader impacts.} One limitation is that more GPU memory and computing power are needed to train our NCL with multiple experts. Fortunately, a single expert is sufficient to achieve promising performance at inference time. Moreover, the proposed method improves the accuracy and fairness of the classifier, which helps to put visual models into practical use. To some extent, it allows large datasets to be collected without enforced class-balancing preprocessing, which improves efficiency and effectiveness. Negative impacts can still occur in misuse scenarios, e.g., identifying minorities for malicious purposes. Therefore, the appropriateness of the purpose for which long-tailed classification technology is used should be ensured with care.
\section*{Acknowledgements}
This work was supported by the National Key Research and Development Plan under Grant 2020YFC2003901, the External cooperation key project of Chinese Academy Sciences 173211KYSB20200002, the Chinese National Natural Science Foundation Projects 61876179 and 61961160704, the Science and Technology Development Fund of Macau Project 0070/2020/AMJ, and Open Research Projects of Zhejiang Lab No. 2021KH0AB07, and the InnoHK program.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
The core inverse and the dual core inverse of a matrix were introduced in \cite{BT}. These generalized inverses have been studied by several authors; in particular, they have been extended to rings with involution (\hspace{-1pt}\cite{rdd}) and to Hilbert space operators (\hspace{-1pt}\cite{rdd2}). It is worth noticing that the inverses under consideration are closely related to the group inverse and the Moore-Penrose inverse; to learn more results concerning these notions, see for example \cite{BT, rdd, rdd2, 14}. So far, the properties of the (dual) core inverse that have been studied are mainly of an algebraic nature, and the setting has essentially been that of rings with involution. The objective of the present article is to study the continuity and the differentiability of these inverses in the context of $C^*$-algebras. In fact, in section 3, after having recalled several preliminary results in section 2, the continuity of the core inverse and of the dual core inverse will be studied. Two main characterizations will be presented. The first one relates the continuity of the aforementioned notions to the continuity of the group inverse and of the Moore-Penrose inverse. The second characterization uses the notion of the gap between subspaces; a similar approach has been used to study the continuity of the Drazin inverse and of the Moore-Penrose inverse, see for example \cite{V, V2, kr1} and \cite[Chapter 4]{dr}. In section 4, results regarding the continuity of the (dual) core inverse of Hilbert space operators and matrices will be presented. Finally, in section 5, the differentiability of the generalized inverses under consideration will be studied. Furthermore, some results concerning the continuity and the differentiability of the group inverse and the Moore-Penrose inverse will also be proved. It is worth mentioning that the core inverse and the dual core inverse are two particular cases of the $(b, c)$-inverse (\hspace{-1pt}\cite{D}), see \cite[Theorem 4.4]{rdd}. Therefore, the representations and other results presented in \cite[Section 7]{b2} can be applied to these generalized inverses.
\section{Preliminary Definitions}
\noindent Since properties of $C^*$-algebra elements will be studied in what follows, although the main notions considered in this article can be given in the context of rings with involution, all the definitions will be presented in the frame of $C^*$-algebras. From now on, $\aa$ will denote a unital $C^*$-algebra with unity $\uno$. In addition, $\aa^{-1}$ will stand for the set of all invertible elements in $\aa$. Given $a\in\aa$, the {\em image ideals} and the {\em null ideals} defined by $a\in\aa$ are the following sets:
\begin{align*}
& &&a \aa = \{ ax: x \in \aa \},& &\aa a = \{ xa: x \in \aa \},&\\
& &&a^\circ = \{ x \in \aa: ax=0 \},& &{}^\circ a = \{ x \in \aa: xa=0 \}.&&\\
\end{align*}
\noindent Recall that $a\in\aa$ is said to be \it regular, \rm if there exists $b\in \aa$ such that $a=aba$. In addition, $b\in\aa$ is said to be \it an outer inverse of $a\in\aa$, \rm if $b=bab$. The notion of invertible element has been generalized or extended in several ways. One of the most important notions of generalized inverse is the Moore-Penrose inverse.
An element $a\in\aa$ is said to be \it Moore-Penrose invertible, \rm if there is $x\in\aa$ such that the following equations hold: \begin{align*} & &&axa=a,& &xax=x,& &(ax)^*=ax,& &(xa)^* = xa.&\\ \end{align*} \noindent It is well known that if such an $x$ exists, then it is unique, and in this case $x$, the Moore-Penrose inverse of $a$, will be denoted by $a^\dag$. Moreover, the subset of $\aa$ composed of all the Moore-Penrose invertible elements of $\aa$ will be denoted by $\aa^\dagger$. It is worth noticing that according to \cite[Theorem 6]{hm1}, a necessary and sufficient condition for $a\in \aa^\dagger$ is that $a\in\aa$ is regular, which in turn is equivalent to $a\aa$ is closed (\hspace{-1pt}\cite[Theorem 8]{hm1}). Moreover, if $a\in\aa^\dagger$, then it is not difficult to prove that $a^\dag \aa=a^* \aa$ and $ \aa a^\dag=\aa a^*$. To learn more properties of the Moore-Penrose inverse in the frame of $C^*$-algebras, see \cite{hm1, hm, P, mb, rs, dr}.\par Another generalized inverse which will be central for the purpose of this article is the group inverse. An element $a \in \aa$ is said to be {\em group invertible}, if there is $x \in \aa$ such that \begin{align*} & & &axa=a,& &xax=x,& &ax=xa.& \end{align*} It can be easily proved that if such $x$ exists, then it is unique. The group inverse is customarily denoted by $a^\#$. The subset of $\aa$ composed by all the group invertible elements in $\aa$ will be denoted by $\aa^\#$. Next follows one of the main notions of this article (see \cite[Definition~2.3]{rdd}, see also \cite{BT} for the original definition in the context of matrices). \begin{df} \label{df1}Given a unital $C^*$-algebra $\aa$, an element $a \in \aa$ will be said to be core invertible, if there exists $x \in \aa$ such that the following equalities hold: $$ axa=a, \qquad x \aa = a \aa, \qquad \aa x = \aa a^*. $$ \end{df} According to \cite[Theorem 2.14]{rdd}, if such an element $x$ exists, then it is unique. This element will be said to be the {\em core inverse} of $a \in \aa$ and it will be denoted by $\core{a}$. In addition, the set of all core invertible elements of $\aa$ will be denoted by $\core{\aa}$. Recall that according to \cite[Theorem 2.14]{rdd}, when $\core{a}$ exists ($a\in\aa$), it is an outer inverse of $a$, i.e., $\core{a} a \core{a} = \core{a}$. Moreover, in \cite[Theorem 2.14]{rdd}, the authors characterized the core invertibility in terms of equalities. This characterization was improved in \cite[Theorem~3.1]{14}. Specifically, $a \in \aa$ is core invertible if and only if there exists $x \in \aa$ such that \begin{align*} \label{prop_core} & &&a x^2 = x,& &x a^2 = a,& &(a x)^* = a x.&\\ \end{align*} Furthermore, if such $x$ exists, then $x = \core{a}$. Another generalized inverse, which is related with the core inverse, was defined in \cite{rdd}. \begin{df}\label{df2} Given $\aa$ a unital $C^*$-algebra, an element $a \in \aa$ is said to be dual core invertible, if there is $x \in \aa$ such that $axa=a$, $x\aa=a^* \aa$, and $\aa x = \aa a$. \end{df} As for the core inverse, it can be proved that this $x$ is unique, when it exists; thus it will be denoted by $\dcore{a}$ and $\dcore{\aa}$ will stand for the set of all dual core invertible elements of $\aa$. Note that $a \in \aa$ is core invertible if and only if $a^*$ is dual core invertible and in this case, $\dcore{(a^*)}=(\core{a})^*$. In addition, according to \cite[Theorem 2.15]{rdd}, when $a\in\dcore{\aa}$, $\dcore{a}$ is an outer inverse of $a$, i.e., $\dcore{a}=\dcore{a} a\dcore{a}$. 
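Although the setting of this article is a general $C^*$-algebra, the matrix case may help to fix ideas. The following small numerical sketch (Python/NumPy; the concrete matrices are arbitrary choices and the script is not taken from the cited references) constructs a group invertible matrix, forms the product $a^\# a a^\dag$, which is the expression of the core inverse recalled below from \cite[Theorem 2.19]{rdd}, and checks the characterization $ax^2=x$, $xa^2=a$, $(ax)^*=ax$ of \cite[Theorem~3.1]{14}.
\begin{verbatim}
import numpy as np

# Build a group invertible A = S diag(C, 0) S^{-1} with C invertible,
# so that A^# = S diag(C^{-1}, 0) S^{-1} by construction.
S = np.array([[1., 2., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])
C = np.array([[2., 1.],
              [0., 3.]])
D  = np.zeros((3, 3)); D[:2, :2] = C
Di = np.zeros((3, 3)); Di[:2, :2] = np.linalg.inv(C)
A       = S @ D  @ np.linalg.inv(S)
A_group = S @ Di @ np.linalg.inv(S)       # the group inverse A^#
A_mp    = np.linalg.pinv(A)               # the Moore-Penrose inverse A^+

core = A_group @ A @ A_mp                 # candidate core inverse A^# A A^+

# Characterization: a x^2 = x, x a^2 = a, (a x)^* = a x.
assert np.allclose(A @ core @ core, core)
assert np.allclose(core @ A @ A, A)
assert np.allclose((A @ core).conj().T, A @ core)
\end{verbatim}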
Observe that according to Definition \ref{df1} (respectively Definition \ref{df2}), if $a\in\aa$ is core invertible (respectively dual core invertible), then it is regular, and hence $a$ is Moore-Penrose invertible (\hspace{-1pt}\cite[Theorem 6]{hm1}). Moreover, if $a\in \core{\aa}\cup\dcore{\aa}$, then $a$ is group invertible (\hspace{-1pt}\cite[Remark 2.16]{rdd}). Furthermore, according to \cite[Theorem 2.19]{rdd}, if $a\in\core{\aa}$, then the following equalities hold: \begin{align*} &a^\#=(\core{a})^2a,& &a^\#=\core{a}a\dcore{a},& &a^\dag=\dcore{a}a\core{a},& &\core{a}=a^\# a a^\dag,& &\dcore{a}=a^\dag a a^\#.&\\ \end{align*} \noindent To learn more on the properties of the core and dual core inverse, see \cite{BT, rdd, 14}. On the other hand, $\xx$ will stand for a Banach space and $\ll(\xx)$ for the algebra of all operators defined on and with values in $\xx$. When $A \in \ll(\xx)$, the range and the null space of $A$ will be denoted by $\rr(A)$ and $\nn(A)$, respectively. When $\dim \xx < \infty$ and $A \in \ll(\xx)$, the dimension of $\rr(A)$ will be denoted by $\rk(A)$. Evidently, if $A\in\ce_n$, the set of complex $n \times n$ matrices, then, considering $A$ as an element of $\ll(\ce^n)$, the rank of the complex matrix $A$ coincides with the previously defined $\rk(A)$; consequently, the same notation will be used for both notions. One of the most studied generalized inverses is the outer inverse with prescribed range and null space. This generalized inverse will be introduced in the Banach space setting. Let $\xx$ be a Banach space and consider $A \in \ll(\xx)$ and $\ttt$, $\sss$ two closed subspaces in $\xx$. If there exists an operator $B \in \ll(\xx)$ such that $BAB=B$, $\nn(B)=\sss$, and $\rr(B)=\ttt$, then such $B$ is unique (\hspace{-1pt}\cite[Theorem 1.1.10]{dr}). In this case, $B$ will be said to be the {\em outer inverse of $A$ with prescribed range $\ttt$ and null space $\sss$}, and it will be denoted by $A_{\ttt,\sss}^{(2)}$. To prove several results of this article, the definition of the gap between two subspaces needs to be recalled. Let $\xx$ be a Banach space and consider $\mm$ and $\nn$ two closed subspaces in $\xx$. If $\mm=0$, then set $\delta(\mm,\nn)=0$, otherwise set \begin{align*} & & &\delta(\mm,\nn) = \sup \{ \ds (x, \nn) : x \in \mm, \| x \| = 1 \},&\\ \end{align*} where $\ds(x,\nn)=\inf \{ \| x - y \| : y \in \nn \}$. The {\em gap between the closed subspaces $\mm$ and $\nn$} is \begin{align*} & &&\gap{\mm}{\nn} = \max \{ \delta(\mm,\nn), \delta(\nn,\mm) \}.&\\ \end{align*} See \cite{dr,dx,kato} for a deeper insight into this concept. Another notion needed to study the continuity of the (dual) core inverse is the following. Let $p$ and $q$ be self-adjoint idempotents in a $C^*$-algebra $\aa$. The {\em maximal angle} between $p$ and $q$ is the number $\psi(p, q) \in [0, \pi/2]$ such that $\| p - q \| = \sin \psi(p, q)$; see \cite[Definition 2.3]{cbl2}. In what follows, given $x\in\aa^\dag$, $\psi_x$ will stand for the maximal angle between $xx^\dag$ and $x^\dag x$, i.e., $\psi_x=\psi (xx^\dag, x^\dag x)$.

\section{Continuity of the (dual) core inverse} First, a preliminary result needs to be presented. \begin{theorem}\label{theo310} Let $\aa$ be a unital $C^*$-algebra and consider $a\in\aa$. The following statements are equivalent. \begin{enumerate}[{\rm (i)}] \item $a$ is core invertible. \item $a$ is dual core invertible. \item $a^*$ is core invertible. \item $a^*$ is dual core invertible. \item $a$ is group invertible and Moore-Penrose invertible. \end{enumerate} \noindent In particular, $\core{\aa}=\dcore{\aa}=\aa^\#\cap\aa^\dag=\aa^\#$.
\end{theorem} \begin{proof} The equivalence between statements (i) and (iv) and between statements (ii) and (iii) can be derived from Definition \ref{df1} and Definition \ref{df2}. Note that to conclude the proof, it is enough to prove the last statement of the theorem. In fact, this statement implies that statements (i) and (ii) are equivalent. According to \cite[Remark 2.16]{rdd}, $\core{\aa}\cup\dcore{\aa}\subseteq \aa^\#$. Moreover, according to \cite[Theorem 6]{hm1}, $\core{\aa}\cup\dcore{\aa}\subseteq \aa^\dag$. Therefore, according to \cite[Remark 2.16]{rdd}, \begin{align*} &\core{\aa}\subseteq \aa^\#\cap \aa^\dag=\core{\aa}\cap \dcore{\aa}\subseteq \core{\aa},& &\dcore{\aa}\subseteq \aa^\#\cap \aa^\dag=\core{\aa}\cap \dcore{\aa}\subseteq \dcore{\aa}.&\\ \end{align*} \noindent Finally, according to \cite[Theorem 6]{hm1}, $ \aa^\#\cap \aa^\dag= \aa^\#$. \end{proof} Note that under the same conditions as in Theorem \ref{theo310}, as for the group inverse and the Moore-Penrose inverse, $(\core{\aa})^*=\core{\aa}$ and $(\dcore{\aa})^*=\dcore{\aa}$, where if $X\subseteq \aa$ is a set, $X^*$ stands for the following set: $X^*=\{x^*\colon x\in X\}$. However, in contrast to the case of the group inverse and the Moore-Penrose inverse (when $a^\#$ (respectively $a^\dag$) exists, $(a^*)^\#=(a^\#)^*$ (respectively $(a^*)^\dag=(a^\dag)^*$), $a\in\aa$), recall that to obtain the core inverse (respectively the dual core inverse) of $a^*$ it is necessary to consider the dual core (respectively the core) inverse of $a$: \begin{align*} & &&\core{(a^*)}=(\dcore{a})^*,& &\dcore{(a^*)}=(\core{a})^*.&\\ \end{align*} To prove the first characterization of this section some preparation is needed. \begin{lemma}\label{lema2} Let $\aa$ be a unital $C^*$-algebra and consider $a\in\aa$. \begin{enumerate}[{\rm (i)}] \item If $a \in \core{\aa}$, then $aa^\dag \core{a} = \core{a}$. \item If $a \in \core{\aa}$, then $(aa^\dag + a^\dag a - \uno) \core{a} = a^\dag$. \item Suppose that $a\in\aa$ is regular. The element $aa^\dag + a^\dag a -\uno$ is invertible if and only if $a$ is core invertible. Moreover, in this case, $(aa^\dag + a^\dag a -\uno)^{-1} = \core{a}a + (\core{a}a)^* - \uno$. \item If $a\in\dcore{\aa}$, then $a^\#=a(\dcore{a})^2$. \end{enumerate} \end{lemma} \begin{proof} Recall that according to \cite[Theorem 6]{hm1}, if $a\in \core{\aa}$, then $a^\dag$ exists.\par The proof of statement (i) can be derived from the fact that $\core{a} \in a \aa$. To prove statement (ii), recall that according to \cite[Theorem 2.19 (v)]{rdd}, $\core{a} = a^\# a a^\dag$. Therefore, $$(aa^\dag + a^\dag a - \uno) \core{a} = aa^\dag \core{a} + a^\dag a \core{a}-\core{a}= a^\dag a \core{a} = a^\dag a a^\# a a^\dag = a^\dag.$$ Now statement (iii) will be proved. Note that according to \cite[Theorem 6]{hm1}, $a^\dag$ exists. Recall that according to \cite[Theorem 2.3]{bc}, $aa^\dag + a^\dag a -\uno\in\aa^{-1}$ is equivalent to $a\in\aa^\#$. Thus, according to Theorem \ref{theo310}, a necessary and sufficient condition for $aa^\dag + a^\dag a -\uno\in\aa^{-1}$ is that $a\in\core{\aa}$. Next the formula of the inverse of $aa^\dag + a^\dag a -\uno$ will be proved. Recall that according to \cite[Theorem~3.1]{14}, $ \core{a} a aa^\dag= aa^\dag$.
\begin{equation*} \begin{split} [ \core{a} a & + (\core{a}a)^* - \uno ] [ aa^\dag + a^\dag a - \uno ] \\ & = \left[ \core{a} a + (\core{a}a)^* - \uno \right] aa^\dag + \left[ \core{a} a + (\core{a}a)^* - \uno \right] a^\dag a - \left[ \core{a} a + (\core{a}a)^* - \uno \right] \\ & = aa^\dag + (\core{a}a)^*(aa^\dag)^*-aa^\dag + \core{a}a + (\core{a}a)^*(a^\dag a)^*-a^\dag a - \core{a}a - (\core{a} a)^* + \uno \\ & = (aa^\dag \core{a}a)^* + (a^\dag a \core{a}a)^*-a^\dag a-(\core{a}a)^* +\uno \\ & = (\core{a}a)^* + (a^\dag a)^*-a^\dag a-(\core{a}a)^* +\uno \\ & = \uno. \end{split} \end{equation*} Since $aa^\dag + a^\dag a - \uno$ is invertible, $(aa^\dag + a^\dag a - \uno)^{-1} = \core{a} a + (\core{a}a)^* - \uno$. To prove statement (iv), recall that according to Theorem \ref{theo310}, $a^*\in\core{\aa}$. In addition, according to the paragraph between Theorem \ref{theo310} and the present Lemma, $\dcore{a}=(\core{(a^*)})^*$. However, according to \cite[Theorem 2.19]{rdd}, $(a^*)^\#=(\core{(a^*)})^2a^*$. Thus, $$ a^\#=a((\core{(a^*)})^*)^2=a(\dcore{a})^2. $$ \end{proof} Note that Lemma \ref{lema2} also holds in the more general context of a ring with involution, provided that the element under consideration is Moore-Penrose invertible. In the next theorem the continuity of the (dual) core inverse will be characterized. It is worth noticing that $a\in\aa$ will not be assumed to be core invertible, dual core invertible, group invertible or Moore-Penrose invertible. Note also that the following well-known result will be used in the proof of the theorem: given $\aa$ a unital Banach algebra, $b\in\aa$ and $\suc{b_n}\subset\aa^{-1}$ a sequence such that $\suc{b_n}$ converges to $b$, if $\suc{b_n^{-1}}$ is a bounded sequence, then $b$ is invertible and the sequence $\suc{b_n^{-1}}$ converges to $b^{-1}$. \begin{theorem}\label{thm320} Let $\aa$ be a unital $C^*$-algebra and consider $a \in \aa$. Let $\suc{a_n}\subset \core{\aa}=\dcore{\aa}$ be such that $\suc{a_n}$ converges to $a$. The following statements are equivalent. \begin{enumerate}[{\rm (i)}] \item The element $a\in\core{\aa}$ and $\suc{\core{a}_n}$ converges to $\core{a}$. \item The element $a\in\dcore{\aa}$ and $\suc{\dcore{a_n}}$ converges to $\dcore{a}$. \item The element $a\in\aa^\#$ and $\suc{a_n^\#}$ converges to $a^\#$. \item The element $a\in\core{\aa}$ and $\suc{\core{a_n}}$ is a bounded sequence. \item The element $a\in\dcore{\aa}$ and $\suc{\dcore{a_n}}$ is a bounded sequence. \item The element $a\in\aa^\dag$, $\suc{a_n^\dag}$ converges to $a^\dag$, and $\suc{\core{a_n}a_n}$ is a bounded sequence. \item The element $a\in\aa^\dag$, $\suc{a_n^\dag}$ converges to $a^\dag$, and $\suc{a_n\dcore{a_n}}$ is a bounded sequence. \item The element $a\in\aa^\dag$, $\suc{a_n^\dag}$ converges to $a^\dag$, and there exists $\psi\in[0, \frac{\pi}{2})$ such that $\psi_n=\psi_{a_n}\le \psi$ for all $n\in\ene$. \end{enumerate} \end{theorem} \begin{proof} Note that according to Theorem \ref{theo310}, $\suc{a_n}\subset \core{\aa}\cap\dcore{\aa}\cap\aa^\#\cap\aa^\dag$. First the equivalence between statements (i) and (iii) will be proved. Suppose that statement (i) holds. Then according to Theorem \ref{theo310}, $a\in\aa^\#$. In addition, $\suc{\core{a}_n a_n}$ converges to $\core{a}a$. However, according to \cite[Remark 2.17]{rdd}, $a^\#a=\core{a}a$, and for each $n\in\ene$, $a_n^\#a_n=\core{a}_na_n$. Consequently, $\suc{a_n^\#a_n}$ converges to $a^\#a$, which according to \cite[Theorem 2.4]{kr1}, implies that $\suc{a_n^\#}$ converges to $a^\#$.
Suppose that statement (iii) holds. Note that according to \cite[Theorem 6]{hm1}, $a\in\aa^\dag$. In particular, according to Theorem \ref{theo310}, $a\in \core{\aa}$. Moreover, according to \cite[Corollary 2.1 (ii)]{bc} and \cite[Equation (2.1)]{cbl2}, $$ \| a_n^\dag \| = \| (a_n a_n^\dag +a_n^\dag a_n - \uno) a_n^\# (a_n a_n^\dag +a_n^\dag a_n - \uno) \| \leq \|a_n a_n^\dag +a_n^\dag a_n - \uno\|^2 \| a_n^\#\| \leq \| a_n^\# \|. $$ \noindent Consequently, $\suc{a_n^\dag}$ is a bounded sequence. Now two cases need to be considered. If $a=0$, then $\core{a}=0$. However, according to \cite[Theorem 2.19]{rdd}, $\core{a_n}=a_n^\#a_na_n^\dag$. Since $\suc{a_n}$ converges to $0$ and $\suc{a_n^\#}$ and $\suc{a_n^\dag}$ are bounded sequences, $\suc{\core{a_n}}$ converges to $0$. Now suppose that $a\neq 0$. Since $\suc{a_n}$ converges to $a$, there exists an $n_0\in\ene$ such that $a_n\neq 0$, $n\ge n_0$. Without loss of generality, it is possible to assume that $\suc{a_n}\subset\core{\aa}\setminus\{ 0\}$. Thus, according to \cite[Theorem 1.6]{kol}, $\suc{a_n^\dag}$ converges to $a^\dag$. However, again according to \cite[Theorem 2.19]{rdd}, $\core{a}=a^\#aa^\dag$ and for each $n\in\ene$, $\core{a}_n=a_n^\# a_n a_n^\dag$. Therefore, $\suc{\core{a}_n}$ converges to $\core{a}$. To prove the equivalence between statements (ii) and (iii), apply a similar argument to the one used to prove the equivalence between statements (i) and (iii). In particular, use the following identities, which hold for $b\in\dcore{\aa}$: ($\alpha$) $b^\# b=b\dcore{b}$ (\hspace{-1pt}\cite[Remark 2.17]{rdd}); ($\beta$) $\dcore{b}=b^\dag bb^\#$ (\hspace{-1pt}\cite[Theorem 2.19]{rdd}). It is evident that statement (i) implies statement (iv). Now suppose that statement (iv) holds. It will be proved that statement (iii) holds. According to Theorem \ref{theo310}, $a\in\aa^\#$. In addition, according to \cite[Theorem 2.19]{rdd}, for each $n\in\ene$, $a_n^\#=(\core{a_n})^2a_n$. In particular, $\suc{a_n^\#}$ is a bounded sequence. Consequently, according to \cite[Theorem 2.4]{kr1}, $\suc{a_n^\#}$ converges to $a^\#$; equivalently, statement (iii) holds. The equivalence between statements (ii) and (v) can be proved by applying a similar argument to the one used to prove the equivalence between statements (i) and (iv), using in particular Lemma \ref{lema2} (iv). Next it will be proved that statement (iv) implies statement (vi). Suppose then that statement (iv) holds. Then, $\suc{\core{a_n}a_n}$ is a bounded sequence. In addition, according to Theorem \ref{theo310}, $a\in\aa^\dag$. Now two cases need to be considered. Suppose first that $a=0$. Since statements (iv) and (v) are equivalent, $\suc{\dcore{a_n}}$ is a bounded sequence. According to \cite[Theorem 2.19]{rdd}, for each $n\in\ene$, $a_n^\dag=\dcore{a_n}a_n\core{a_n}$. Since $\suc{\core{a_n}}$ and $\suc{\dcore{a_n}}$ are bounded sequences, $\suc{a_n^\dag}$ converges to $0=a^\dag$. If $a\neq 0$, then, as in the proof that statement (iii) implies statement (i), it is possible to assume that $\suc{a_n}\subset\aa\setminus\{ 0\}$. According to Lemma \ref{lema2} (ii), $$ \parallel a_n^\dag\parallel\le\parallel a_na_n^\dag +a_n^\dag a_n-\uno\parallel\parallel\core{a_n}\parallel\le 3\parallel\core{a_n}\parallel. $$ \noindent In particular, $\suc{a_n^\dag}$ is a bounded sequence. However, according to \cite[Theorem 1.6]{kol}, $\suc{a_n^\dag}$ converges to $a^\dag$. Suppose that statement (vi) holds. It will be proved that statement (vi) implies statement (iv).
According to \cite[Theorem 2.19]{rdd}, for each $n\in\ene$, $\core{a_n}=\core{a_n}a_na_n^\dag$. Since $\suc{a_n^\dag}$ and $\suc{\core{a_n}a_n}$ are bounded sequences, $\suc{\core{a_n}}$ is a bounded sequence. Note that according to Lemma \ref{lema2} (iii), for each $n\in\ene$, $b_n=a_na_n^\dag +a_n^\dag a_n-\uno$ is invertible and $b_n^{-1}=\core{a_n}a_n+(\core{a_n}a_n)^*-\uno$. In addition, the sequence $\suc{b_n^{-1}}$ is bounded. In fact, according to \cite[Lemma 2.3]{kr}, $\parallel b_n^{-1}\parallel=\parallel\core{a_n}a_n\parallel$. Now, since $\suc{b_n}$ converges to $b=aa^\dag +a^\dag a-\uno$, the element $b$ is invertible, which in view of Lemma \ref{lema2} (iii), is equivalent to $a\in\core{\aa}$. According to \cite[Theorem 2.19]{rdd}, for each $n\in\ene$, $\core{a_n}a_n=a_n\dcore{a_n}$. Thus, statement (vii) is an equivalent formulation of statement (vi). Finally, statements (vi) and (viii) will be proved to be equivalent. In fact, note that if $a_n=0$, then $\psi_n=0$. In addition, according to Lemma \ref{lema2} (iii), $a_na_n^\dag+a_n^\dag a_n-\uno$ is invertible, and when $a_n\neq0$, according to Lemma \ref{lema2} (iii), \cite[Theorem 2.4 (iii)]{cbl2} and \cite[Lemma 2.3]{kr}, $$ \frac{1}{\cos \psi_n}=\parallel (a_na_n^\dag +a_n^\dag a_n-\uno)^{-1}\parallel= \parallel \core{a_n}a_n+(\core{a_n}a_n)^*-\uno\parallel= \parallel \core{a_n}a_n\parallel. $$ \noindent In particular, $\suc{\core{a_n}a_n}$ is bounded if and only if there exists $\psi\in[0, \frac{\pi}{2})$ such that $\psi_n\le\psi$ for all $n\in\ene$. \end{proof} Theorem \ref{thm320} shows that the continuity of the group inverse and of the Moore-Penrose inverse are central for the continuity of the core inverse and the dual core inverse. To learn more on the continuity of the group inverse and the Moore-Penrose inverse, see for example \cite{cbl, cbl2, kr1, V, rs} and \cite{hm, kol, mb, rs, V2}, respectively, see also \cite[Chapter 4]{dr}. Observe that the conditions in statement (vi) of Theorem \ref{thm320}, ($\alpha$) $a \in \aa^\dag$, $a_n^\dag \to a^\dagger$, and ($\beta$) $\{ \core{a_n} a_n \}$ is a bounded sequence, are independent from each other, as the following two examples show. \begin{example}\label{example1} {\rm Consider $\ce$ as a $C^*$-algebra. Let $a_n = 1/n$ and $a=0$. It is evident that $a_n \to a$, $a_n^\dag = n$, and $\suc{ a_n^\dag }$ does not converge to $a^\dag =0$. However, it should be clear that $\core{a}_n=n$. Therefore, $\core{a}_n a_n = 1$, and thus, $\suc{ \core{a_n} a_n }$ is a bounded sequence. } \end{example} \begin{example}\label{ejemplo2} {\rm Consider the set of $2 \times 2$ complex matrices as a $C^*$-algebra. Take the conjugate transpose of the matrix as the involution on this matrix. Let $\suc{\psi_n}$ be a sequence in $(0,\pi/2)$ such that $\psi_n \to \pi/2$ and let \begin{align*} & &&A_n = \left[ \begin{array}{cc} \cos \psi_n & \sin \psi_n \\ 0 & 0 \end{array} \right],& &A = \left[ \begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array} \right]. 
&\\ \end{align*} It is simple to prove that \begin{align*} &A_n^\dag = \left[ \begin{array}{cc} \cos \psi_n & 0 \\ \sin \psi_n & 0 \end{array} \right], & &A^\dag = \left[ \begin{array}{cc} 0 & 0 \\ 1 & 0 \end{array} \right],& &\core{A}_n = \left[ \begin{array}{cc} 1/\cos \psi_n & 0 \\ 0 & 0 \end{array} \right].& \end{align*} Therefore, $\suc{A_n^\dag}$ converges to $A^\dag$ and \begin{align*} & &&\core{A}_n A_n = \left[ \begin{array}{cc} 1 & \tan \psi_n \\ 0 & 0 \end{array} \right],&\\ \end{align*} which shows that $\suc{ \core{A}_n A_n }$ is not bounded. Note also that $\suc{\core{A_n}}$ is not a convergent sequence. } \end{example} \indent Observe also that if $\aa$ is a unital $C^*$-algebra and $\suc{a_n}\subset\core{\aa}$ is such that $\suc{a_n}$ converges to $a\in\aa$, Example \ref{example1} also shows that the condition that $\suc{\core{a_n}a_n}$ is a convergent sequence does not imply that $\suc{\core{a_n}}$ is convergent. \indent It is worth noticing that Example \ref{ejemplo2} also proves that $\core{\aa}=\dcore{\aa}$ is not in general a closed set. In fact, using the same notation as in Example \ref{ejemplo2}, $\suc{A_n}\subset\core{\aa}$, $\suc{A_n}$ converges to $A$ but $A\notin\core{\aa}$ ($A^2=0$, $\rk(A^2)=0\neq 1=\rk(A)$, i.e., $A$ is not group invertible). \indent Next an extension of \cite[Theorem 2.7]{cbl2} will be derived from Theorem \ref{thm320}. \begin{corollary}\label{cor3900} Let $\aa$ be a unital $C^*$-algebra and consider $a\in\aa$. Suppose that the sequence $\suc{a_n}\subset\aa^\#$ is such that $\suc{a_n}$ converges to $a$. Then, the following statements are equivalent. \begin{enumerate}[{\rm (i)}] \item The element $a\in\aa^\#$ and $\suc{a_n^\#}$ converges to $a^\#$. \item The sequence $\suc{a_n^\#}$ is bounded. \item The element $a\in\aa^\dag$, $\suc{a_n^\dag}$ converges to $a^\dag$, and the sequence $\suc{a_n^\# a_n}$ is bounded. \item The element $a\in\aa^\dag$, $\suc{a_n^\dag}$ converges to $a^\dag$, and there exists $\psi\in[0, \frac{\pi}{2})$ such that $\psi_n=\psi_{a_n}\le \psi$ for all $n\in\ene$. \end{enumerate} \end{corollary} \begin{proof} Statement (ii) is a consequence of statement (i). Suppose that statement (ii) holds. Then, $\suc{a_n^\# a_n}$ is a bounded sequence. To prove that $a\in\aa^\dag$ and $\suc{a_n^\dag}$ converges to $a^\dag$, proceed as in the corresponding part of the proof of \cite[Theorem 2.7]{cbl2} (see statement (ii) implies statement (iii) in \cite[Theorem 2.7]{cbl2}). Suppose that statement (iii) holds. First note that if $a_n=0$, then $\psi_n=0$. In addition, according to \cite[Theorem 2.5]{cbl2}, if $a_n\neq 0$, then $$ \|a_na_n^\# \|=\frac{1}{\cos \psi_{a_n}}. $$ \noindent Therefore, the sequence $\suc{a_n^\# a_n}$ is bounded if and only if there exists $\psi\in[0, \frac{\pi}{2})$ such that $\psi_n=\psi_{a_n}\le \psi$ for all $n\in\ene$. To prove that statement (iv) implies statement (i), apply Theorem \ref{thm320} (equivalence between statements (iii) and (viii)). \end{proof} In Theorem \ref{thm320} and Corollary \ref{cor3900} the general case has been presented for the sake of completeness. However, the case $a=0$ is special and deserves to be studied. Recall that given a unital $C^*$-algebra $\aa$, if $\suc{a_n}\subset\aa^{-1}$ is such that $\suc{a_n}$ converges to 0, then the sequence $\suc{a_n^{-1}}$ is unbounded. Next the case of a sequence $\suc{a_n}\subset\aa^\#=\core{\aa}=\dcore{\aa}\subseteq \aa^\dag$ that converges to $0$ will be studied. First, the Moore-Penrose inverse will be considered.
\begin{remark}\label{rem3950}\rm Let $\aa$ be a unital $C^*$-algebra and consider $a\in\aa^\dag$ and $\suc{a_n}\subset\aa^\dag$ such that $\suc{a_n}$ converges to $a$. Recall that according to \cite[Theorem 1.6]{kol}, the following statements are equivalent. \begin{enumerate}[{\rm (i)}] \item The sequence $\suc{a_n^\dag}$ converges to $a^\dag$. \item The sequence $\suc{a_n a_n^\dag}$ converges to $aa^\dag$. \item The sequence $\suc{a_n^\dag a_n}$ converges to $a^\dag a$. \item The sequence $\suc{a_n^\dag}$ is bounded. \hspace*{\dimexpr\linewidth-\textwidth\relax}{\noindent Now when $a=0$, according to \cite[Theorem 1.3]{kol}, the following equivalence holds:} \item A necessary and sufficient condition for $\suc{a_n^\dag}$ to converge to 0 is that the sequence $\suc{a_n^\dag}$ is bounded. \hspace*{\dimexpr\linewidth-\textwidth\relax} \begin{minipage}[t]{\textwidth} \noindent However, concerning the convergence of $\suc{a_n a_n^\dag}$, note that given $n\in\ene$, since $a_n a_n^\dag$ is a self-adjoint idempotent, if $\parallel a_n a_n^\dag\parallel < 1$, then $a_n a_n^\dag =0$, which implies that $a_n=0$; \hskip.1truecm a similar result can be derived for the convergence of $\suc{a_n^\dag a_n}$. Consequently, the following statements are equivalent. \end{minipage} \item The sequence $\suc{a_n^\dag}$ converges to 0. \item There exists $n_0\in\ene$ such that for $n\ge n_0$, $a_n=0$. \hspace*{\dimexpr\linewidth-\textwidth\relax}{\noindent Therefore, according to statements (v)-(vii), given $\suc{a_n}\subset\aa^\dag$ such that $\suc{a_n}$ converges to $0$, there are only two possibilities.} \item There exists $n_0\in\ene$ such that for $n\ge n_0$, $a_n=0$; or \item the sequence $\suc{a_n^\dag}$ is unbounded. \end{enumerate} \end{remark} In the following proposition, sequences of group invertible or (dual) core invertible elements that converge to 0 will be studied. \begin{proposition}\label{prop3960} Let $\aa$ be a unital $C^*$-algebra and consider a sequence $\suc{a_n}\subset \aa^\#=\core{\aa}=\dcore{\aa}$ such that $\suc{a_n}$ converges to 0. The following statements are equivalent. \begin{enumerate}[{\rm (i)}] \item The sequence $\suc{a_n^\#}$ converges to 0. \item The sequence $\suc{\core{a_n}}$ converges to 0. \item The sequence $\suc{\dcore{a_n}}$ converges to 0. \item The sequence $\suc{a_n^\#}$ is bounded. \item There exists $n_0\in\ene$ such that for $n\ge n_0$, $a_n=0$. \hspace*{\dimexpr\linewidth-\textwidth\relax}{\noindent In addition, there exist only two possibilities for the sequence $\suc{a_n}$.} \item There exists $n_0\in\ene$ such that for $n\ge n_0$, $a_n=0$; or \item the sequence $\suc{a_n^\#}$ is unbounded. \hspace*{\dimexpr\linewidth-\textwidth\relax}{\noindent Moreover, statement {\rm (vii)} is equivalent to the following two statements.} \item the sequence $\suc{\core{a_n}}$ is unbounded. \item the sequence $\suc{\dcore{a_n}}$ is unbounded. \end{enumerate} \end{proposition} \begin{proof} According to Theorem \ref{thm320}, statements (i)-(iii) are equivalent. It is evident that statement (i) implies statement (iv). Suppose that statement (iv) holds. According to \cite[Theorem 6]{hm1}, $\suc{a_n}\subset \aa^\#\subset \aa^\dag$. In addition, according to \cite[Corollary 2.1 (ii)]{bc}, $$ \parallel a_n^\dag\parallel\le \parallel a_n^\#\parallel \parallel a_na_n^\dag+a_n^\dag a_n-\uno\parallel^2\le 9 \parallel a_n^\#\parallel. $$ \noindent In particular, the sequence $\suc{a_n^\dag}$ is bounded. Thus, according to Remark \ref{rem3950} (v), $\suc{a_n^\dag}$ converges to 0.
However, according to Remark \ref{rem3950} (vi)-(vii), statement (v) holds. It is evident that statement (v) implies statement (i). Statements (vi) and (vii) can be derived from what has been proved. According to Theorem \ref{thm320}, statements (vii)-(ix) are equivalent. \end{proof} To prove the second characterization of this section some preparation is needed. \begin{remark}\label{nota2}\rm Let $\aa$ be a unital $C^*$-algebra and consider $a \in \core{\aa}=\dcore{\aa}$. If $L_a\colon \aa\to \aa$ and $R_a\colon \aa\to \aa$ are the left and the right multiplication operators defined by $a$, i.e., for $x\in\aa$, $L_a(x)=ax$, $R_a(x)=xa$, respectively, then according to \cite[Theorem 2.14]{rdd}, \begin{align*} & &&L_{\core{a}}L_a L_{\core{a}}=L_{\core{a}},& &R_{\core{a}}R_a R_{\core{a}}=R_{\core{a}}.&&\\ \end{align*} Note also that according to Definition \ref{df1}, \begin{align*} & &&\rr(L_{\core{a}})=a\aa,& &\nn(L_{\core{a}})=(a^*)^\circ.&\\ & &&\rr(R_{\core{a}})=\aa a^*,& &\nn(R_{\core{a}})={}^\circ a.&\\ \end{align*} Therefore, $L_{\core{a}} = (L_a)^{(2)}_{a \aa, (a^*)^\circ}$ and $R_{\core{a}} = (R_a)^{(2)}_{ \aa a^*, {}^\circ a}$. \noindent In addition, since $L_{a \core{a}}=L_a L_{\core{a}}$, $R_{a \core{a}}=R_{\core{a}}R_a\in\ll (\aa)$ are idempotents, observe that according to Definition \ref{df1} and \cite[Theorem 2.14]{rdd}, \begin{align*} & &&\rr(L_{a \core{a}})=a \aa,& &\rr(R_{a \core{a}})=\aa a^*,&\\ & &&\nn(L_{a \core{a}})=(a^*)^\circ ,& &\nn(R_{a \core{a}})={}^\circ a.\\ \end{align*} Similar arguments prove the following facts: $L_{\dcore{a}} = (L_a)^{(2)}_{a^* \aa, a^\circ}$, $R_{\dcore{a}} = (R_a)^{(2)}_{ \aa a, {}^\circ(a^*)}$ and \begin{align*} & &&\rr(L_{\dcore{a}a})=a^* \aa,& &\rr(R_{\dcore{a}a})=\aa a,&\\ & &&\nn(L_{\dcore{a}a})=a^\circ ,& &\nn(R_{\dcore{a}a})={}^\circ (a^*).\\ \end{align*} \end{remark} Next follows the second characterization of the continuity of the (dual) core inverse. In this case, the notion of the gap between subspaces will be used. \begin{theorem} \label{th1} Let $\aa$ be a unital $C^*$-algebra and consider $a \in \core{\aa}=\dcore{\aa}$, $a\neq 0$. Consider a sequence $\suc{a_n} \subset \core{\aa}=\dcore{\aa}$ such that $\suc{a_n}$ converges to $a$. The following statements are equivalent. \begin{enumerate}[{\rm (i)}] \item $\suc{\core{a}_n}$ converges to $\core{a}$. \item $\suc{a_n \core{a}_n}$ converges to $a \core{a}$. \item $\suc{\gap{a_n \aa}{a\aa}}$ and $\suc{\gap{(a_n^*)^\circ}{(a^*)^\circ}}$ converge to $0$. \item $\suc{\gap{\aa a_n^*}{\aa a^*}}$ and $\suc{\gap{{}^\circ a_n}{{}^\circ a}}$ converge to $0$. \item $\suc{\dcore{a_n}}$ converges to $\dcore{a}$. \item $\suc{\dcore{a_n} a_n}$ converges to $\dcore{a} a$. \item $\suc{\gap{a_n^* \aa}{a^*\aa}}$ and $\suc{\gap{a_n^\circ}{a^\circ}}$ converge to $0$. \item $\suc{\gap{\aa a_n}{\aa a}}$ and $\suc{\gap{{}^\circ (a_n^*)}{{}^\circ (a^*)}}$ converge to $0$. \end{enumerate} \end{theorem} \begin{proof} It is evident that statement (i) implies statement (ii). Suppose that statement (ii) holds. According to Remark \ref{nota2}, $a \aa = \rr(L_{a \core{a}})$, $(a^*)^\circ =\nn(L_{a \core{a}})$, $a_n \aa = \rr(L_{a_n \core{a}_n})$ and $(a_n^*)^\circ =\nn(L_{a_n \core{a}_n})$ ($n \in \ene$). However, according to \cite[Lemma 3.3]{kr1}, statement (iii) holds. Suppose that statement (iii) holds. 
Recall that according to Remark~\ref{nota2}, \begin{align*} & & &L_{\core{a}} = (L_a)^{(2)}_{a \aa, (a^*)^\circ},& &L_{\core{a}_n} = (L_{a_n})^{(2)}_{a_n \aa, (a_n^*)^\circ},&\\ \end{align*} \noindent for each $n \in \ene$. Let $\kappa = \| L_a \| \| L_{\core{a}} \| = \| a \| \| \core{a} \|$ and consider $n_0 \in \ene$ such that for all $n \geq n_0$, \begin{align*} & &&r_n =\gap {\nn \left( (L_{a_n})^{(2)}_{a_n \aa, (a_n^*)^\circ}\right)} {\nn \left( (L_a)^{(2)}_{a \aa, (a^*)^\circ} \right)} =\gap{(a_n^*)^\circ}{(a^*)^\circ}< \frac{1}{3+\kappa},&\\ & &&s_n = \gap {\rr \left( (L_{a_n})^{(2)}_{a_n \aa, (a_n^*)^\circ}\right)} {\rr \left( (L_a)^{(2)}_{a \aa, (a^*)^\circ} \right)} =\gap{a_n \aa}{a \aa} < \frac{1}{(1+\kappa)^2},&\\ \end{align*} and \begin{align*} & &&t_n = \| L_{\core{a}} \| \| L_a - L_{a_n} \| = \| \core{a} \| \| a - a_n \| < \frac{2 \kappa}{(1+\kappa)(4+\kappa)}.&\\ \end{align*} Thus, according to \cite[Theorem~3.5]{dx}, \begin{align*} & &&\left\| \core{a}_n - \core{a} \right\| = \left\| L_{\core{a}_n} - L_{\core{a}} \right\| \leq \frac {(1+\kappa)(s_n+r_n)+(1+r_n)t_n} {1-(1+\kappa)s_n-\kappa r_n - (1+r_n)t_n}\| \core{a} \|, &\\ \end{align*} which implies statement (i). Statements (i), (ii) and (iv) are equivalent. To prove this fact, apply a similar argument to the one used to prove the equivalence among statements (i), (ii) and (iii), using in particular $R_{\core{a}}=(R_a)^{(2)}_{\aa a^*, {}^\circ a}$, $R_{\core{a}_n}=(R_{a_n})^{(2)}_{\aa a_n^*, {}^\circ a_n}$, $R_{a\core{a}}$ and $R_{a_n\core{a}_n}$ instead of the respective left multiplication operators (Remark~\ref{nota2}, $n\in\ene$). Statements (i) and (v) are equivalent (Theorem \ref{thm320}). To prove the equivalence among statements (v)-(viii), apply a similar argument to the one used to prove that statements (i)-(iv) are equivalent, using in particular Remark~\ref{nota2} and \cite[Theorem~3.5]{dx}. \end{proof} Next, some bounds for $\| \core{a}_n - \core{a} \|$ will be proved, when $\suc{a_n}\subset\aa$ converges to $a\in\aa$ in a $C^*$-algebra $\aa$. Beforehand, a technical lemma is presented. \begin{lemma}\label{lema4} Let $\aa$ be a unital $C^*$-algebra and let $a,b \in \core{\aa}=\dcore{\aa}$. Then \begin{align*} &{\rm (i)}&&\core{b}-\core{a} = \core{b}b(b^\dag -a^\dag)(\uno-a \core{a}) + \core{b}(a-b)\core{a} + (\uno - \core{b}b)(b-a)a^\dag\core{a}. &\\ &{\rm (ii)}& &\dcore{b}-\dcore{a} = (\uno- \dcore{a}a) (b^\dag -a^\dag)b\dcore{b} + \dcore{a}(a-b)\dcore{b}+ \dcore{a}a^\dag(b-a)(\uno - b\dcore{b}). \end{align*} \end{lemma} \begin{proof} To prove statement (i), recall that since $a$ and $b$ are core invertible, $a$ and $b$ are Moore-Penrose invertible (Theorem \ref{theo310}). In addition, according to \cite[Theorem 3.1]{14}, $b=\core{b}b^2$. Thus, according to Lemma \ref{lema2} (i), \begin{align*} & &&(\uno - \core{b}b)(b-a)a^\dag\core{a} = -(\uno - \core{b}b)aa^\dag\core{a} = -(\uno - \core{b}b)\core{a} = \core{b}b\core{a}-\core{a}.&\\ \end{align*} Now, according to \cite[Theorem 2.19]{rdd}, $\core{b}=\core{b}bb^\dag$. In addition, $a^*a\core{a} = a^* (a \core{a})^* = (a\core{a}a)^* = a^*$, i.e., $a^*(\uno-a \core{a})=0$. Moreover, since $a^\dag = a^\dag a a^\dagger = a^\dag (aa^\dag)^* = a^\dag (a^\dagger)^* a^*$, $a^\dag (\uno-a\core{a})=0$.
Therefore, \begin{align*} & &&\core{b}b(b^\dag -a^\dag)(\uno-a \core{a}) = \core{b}bb^\dag (\uno-a \core{a}) = \core{b}(\uno-a \core{a}) = \core{b}-\core{b}a\core{a}.&\\ \end{align*} As a result, \begin{align*} \core{b}-\core{a} &= \core{b} - \core{b}a\core{a} + \core{b}a\core{a} - \core{b}b \core{a} + \core{b}b \core{a} - \core{a} \\ &= \core{b}b(b^\dag -a^\dag)(\uno-a \core{a}) + \core{b}(a-b)\core{a} + (\uno - \core{b}b)(b-a)a^\dag\core{a}.\\ \end{align*} To prove statement (ii), use that $\dcore{x}=(\core{(x^*)})^*$ ($x\in\core{\aa}=\dcore{\aa}$), and apply statement (i). \end{proof} Next the aforementioned bounds will be given. \begin{theorem} \label{th_des} Let $\aa$ be a unital $C^*$-algebra and consider $a\in\core{\aa}=\dcore{\aa}$. The following statements hold.\par \begin{enumerate}[{\rm (i)}] \item If $b\in\core{\aa}=\dcore{\aa}$, $b\neq 0$, then \begin{align*} & & &\| \core{b} - \core{a} \| \leq \frac{\| b^\dag -a^\dag \|}{\cos \psi_b} + \left[ \| \core{b} \| + \frac{\| a^\dag\|}{\cos \psi_b} \right] \| \core{a} \|\| a-b\|.&\\ \end{align*} \item In addition, \begin{align*} & &&\| \dcore{b} - \dcore{a} \| \leq \frac{\| b^\dag -a^\dag \|}{\cos \psi_b} + \left[ \| \dcore{b} \| + \frac{\| a^\dag\|}{\cos \psi_b} \right] \| \dcore{a} \|\| a-b\|.&\\ \end{align*} \item If also $a\neq 0$, then \begin{align*} & &&\max\left\{\| \core{b} - \core{a} \|, \| \dcore{b} - \dcore{a} \|\right\} \leq \frac{\| b^\dag -a^\dag \|}{\cos \psi_b} + \frac{\| a^\dag \| \left(\| b^\dag \| + \| a^\dag \| \right)}{\cos \psi_a \cos \psi_b} \| a-b \|.&\\ \end{align*} \item In particular, if $a\in\core{\aa}=\dcore{\aa}$, $a\neq 0$, and $\suc{a_n}\subset \core{\aa}=\dcore{\aa}$, $a_n\neq 0$ for all $n\in\ene$, then \begin{align*} & &&\max\left\{\| \core{a_n} - \core{a} \|, \| \dcore{a_n} - \dcore{a} \|\right\} \leq \frac{\| a_n^\dag -a^\dag \|}{\cos \psi_n} + \frac{\| a^\dag \| \left(\| a_n^\dag \| + \| a^\dag \| \right)}{\cos \psi_a \cos \psi_n} \| a-a_n \|,&\\ \end{align*} \noindent where $\psi_n=\psi_{a_n}$. \end{enumerate} \end{theorem} \begin{proof} To prove statement (i), note that according to \cite[Lemma 2.3]{kr}, Lemma \ref{lema2} (iii) and \cite[Theorem 2.4 (iii)]{cbl2}, $$ \| \uno- \core{b}b \| = \| \core{b} b \| = \| (\core{b}b) + (\core{b} b)^* - \uno\| = \| (b b^\dag + b^\dag b - \uno)^{-1} \| = \frac{1}{\cos \psi_b}. $$ Observe that $\uno-a \core{a}$ is a self-adjoint idempotent; hence $\| \uno - a \core{a} \| \le 1$ and, according to Lemma \ref{lema4}, \begin{align*} \| \core{b} -\core{a} \| &\leq \| \core{b}b \| \| b^\dag -a^\dag \| \| \uno-a \core{a}\| + \left[ \| \core{b} \| \| \core{a} \| + \| \uno - \core{b}b \| \| a^\dag\core{a} \| \right] \| a-b \| \\ & \le \frac{\| b^\dag -a^\dag \|}{\cos \psi_b} + \left[ \| \core{b} \| \| \core{a} \| + \frac{\| a^\dag\core{a} \|}{\cos \psi_b} \right] \| a-b \| \\ & \le \frac{\| b^\dag -a^\dag \|}{\cos \psi_b} + \left[ \| \core{b} \| + \frac{\| a^\dag \|}{\cos \psi_b} \right] \| \core{a} \|\| a-b \|. \\ \end{align*} Statement (ii) can be derived from statement (i). In fact, recall that given $x\in\core{\aa}=\dcore{\aa}$, $\dcore{x}=(\core{(x^*)})^*$. Moreover, if $x\in \aa^\dag\setminus\{ 0\}$, then note that $\psi_{x^*}=\psi_{x^\dag}=\psi_x$. Now apply statement (i) to $a^*$ and $b^*$. \indent Now observe that if $a=0$ in statement (i), then $\| \core{b}\|\le\frac{\|b^\dag\|}{\cos \psi_b}$. Thus, if $a\neq 0$, then $\| \core{a}\|\le\frac{\|a^\dag\|}{\cos \psi_a}$. To prove statement (iii) for the core inverse, apply these inequalities to statement (i).
To prove statement (iii) for the dual core inverse, proceed as in the proof of statement (ii). \indent Statement (iv) can be derived from statement (iii). \end{proof} \begin{remark} \rm As it was used in the proof of Theorem \ref{th_des}, given $a\in\core{\aa}=\dcore{\aa}$, $a\neq 0$, Theorem \ref{th_des} (i) (respectively Theorem \ref{th_des} (ii)) gives a relationship between the norm of $\core{a}$ (respectively of $\dcore{a}$) and the norm of $a^\dag$: $\| \core{a}\|\le\frac{\|a^\dag\|}{\cos \psi_a}$ (respectively $\| \dcore{a}\|\le\frac{\|a^\dag\|}{\cos \psi_a}$). \noindent Moreover, under the same hypotheses as in Theorem \ref{thm320}, when $a\neq 0$, Theorem \ref{th_des} (iv) gives an estimate of the convergence of $\suc{\core{a_n}}$ and $\suc{\dcore{a_n}}$ to $\core{a}$ and $\dcore{a}$, respectively. \end{remark}

\section{Continuity of (dual) core invertible Hilbert space operators} Let $\hh$ be a Hilbert space and consider $A\in\ll (\hh)$. The definition of core invertible Hilbert space operators was given in \cite[Definition 3.2]{rdd2}. In fact, $A\in\ll (\hh)$ is said to be core invertible, if there exists $X\in\ll (\hh)$ such that $$ A=AXA,\hskip.3truecm \rr(X)=\rr(A),\hskip.3truecm \nn (X)=\nn (A^*). $$ \noindent Thus, when $A\in\ll (\hh)$, two definitions of the core inverse of $A$ have been given: as an element of the $C^*$-algebra $\ll (\hh)$ and as a Hilbert space operator. However, as the following proposition shows, both definitions coincide in the Hilbert space context. \begin{proposition}\label{proposition410} Let $\hh$ be a Hilbert space and consider $A\in\ll (\hh )$. The following statements are equivalent.\par \begin{enumerate}[{\rm (i)}] \item The core inverse of $A$ exists. \item There exists an operator $X\in \ll (\hh )$ such that $AXA=A$, $\rr(X)=\rr(A)$ and $\nn(X)=\nn(A^*)$. \end{enumerate} \noindent Moreover, in this case $X=\core{A}=A^{(2)}_{\rr(A), \nn(A^*)}$. \end{proposition} \begin{proof} Suppose that $\core{A}$ exists. Then, $A=A\core{A}A$ and there are operators $S$, $T$, $U$, $V\in\ll (\hh )$ such that $$ \core{A}=AS, \hskip.2truecm A=\core{A}T, \hskip.2truecm \core{A}=UA^*, \hskip.2truecm A^*=V\core{A}. $$ \noindent In particular, $\rr(\core{A})=\rr(A)$ and $\nn(\core{A})=\nn(A^*)$.\par \indent Now suppose that statement (ii) holds. Then, there exists $X\in\ll (\hh )$ such that $\rr(X)=\rr(A)$. According to \cite[Theorem 1]{Do}, there are $L$, $K\in\ll (\hh )$ such that $A=XL$ and $X=AK$. In particular, $X\ll (\hh )= A\ll (\hh )$. In addition, since $A$ is regular, $\rr (A)$ is closed; thus, $\rr(X)$ is closed, which is equivalent to the fact that $X$ is regular. Now since $A^*$ is regular, according to \cite[Remark 6]{b}, there exist operators $M$, $N\in \ll (\hh )$ such that $X=MA^*$ and $A^*=NX$. In particular, $\ll (\hh )X= \ll (\hh )A^*$. Since $A=AXA$ and the core inverse is unique, when it exists (\hspace{-1pt}\cite[Theorem 2.14]{rdd}), $X=\core{A}$. Finally, since, again according to \cite[Theorem 2.14]{rdd}, $\core{A}$ is an outer inverse of $A$, it follows from what has been proved that $\core{A}=A^{(2)}_{\rr(A), \nn(A^*)}$. \end{proof} \indent As for the core inverse case, a definition of dual core invertible Hilbert space operators was given in \cite[Definition 3.3]{rdd2}. In the following proposition the equivalence between Definition \ref{df2} and \cite[Definition 3.3]{rdd2} will be considered. \begin{proposition}\label{proposition420} Let $\hh$ be a Hilbert space and consider $A\in\ll (\hh )$.
The following statements are equivalent.\par \begin{enumerate}[{\rm (i)}] \item The dual core inverse of $A$ exists. \item There exists an operator $X\in \ll (\hh )$ such that $AXA=A$, $\rr(X)=\rr(A^*)$ and $\nn(X)=\nn(A)$. \end{enumerate} \noindent Moreover, in this case $X=\dcore{A}=A^{(2)}_{\rr(A^*), \nn(A)}$. \end{proposition} \begin{proof} Apply a similar argument to the one used in Proposition \ref{proposition410}. \end{proof} \indent Note that the relationship between the (dual) core inverse and the outer inverse with prescribed range and null space for the case of square complex matrices was studied in \cite[Theorem 1.5]{r} (apply \cite[Theorem 4.4]{rdd}). \indent Next the continuity of the (dual) core inverse will be characterized using the gap between subspaces. The next theorem is the Hilbert space version of Theorem \ref{th1}. \begin{theorem}\label{thm430}Let $\hh$ be a Hilbert space and consider $A\in \ll (\hh)$, $A\neq 0$, such that $A$ is (dual) core invertible. Suppose that there exists a sequence of operators $\suc{A_n}\subset \ll (\hh )$ such that for each $n\in\ene$, $A_n$ is (dual) core invertible and $\suc{A_n}$ converges to $A$. Then, the following statements are equivalent. \begin{enumerate}[{\rm (i)}] \item The sequence $\suc{\core{A_n}}$ converges to $\core{A}$. \item The sequence $\suc{\dcore{A_n}}$ converges to $\dcore{A}$. \item The sequence $\suc{\core{A_n}A_n}$ converges to $\core{A}A$. \item The sequence $\suc{A_n\dcore{A_n}}$ converges to $A\dcore{A}$. \item The sequence $\suc{\gap{\rr(\core{A_n})}{\rr(\core{A})}}$ converges to 0. \item The sequence $\suc{\gap{\rr(A_n)}{\rr(A)}}$ converges to 0. \item The sequence $\suc{\gap{\nn (\core{A_n})} {\nn (\core{A})}}$ converges to 0. \item The sequence $\suc{\gap{\nn (A_n^*)}{\nn (A^*)}}$ converges to $0$. \item The sequence $\suc{\gap{\rr(\dcore{A_n})}{\rr(\dcore{A})}}$ converges to 0. \item The sequence $\suc{\gap{\rr(A_n^*)}{\rr(A^*)}}$ converges to 0. \item The sequence $\suc{\gap{\nn (\dcore{A_n})} {\nn (\dcore{A})}}$ converges to 0. \item The sequence $\suc{\gap{\nn (A_n)}{\nn (A)}}$ converges to $0$. \end{enumerate} \end{theorem} \begin{proof} First of all recall that $\core{\ll (\hh)}=\dcore{\ll (\hh)}$ (Theorem \ref{theo310}).\par Statements (i)-(iv) are equivalent (Theorem \ref{th1}). According to \cite[Lemma 3.3]{kr1}, statement (iii) implies statement (v), and according to Proposition \ref{proposition410} and \cite[Chapter 4, Section 2, Subsection 3, Theorem 2.9]{kato}, statements (v)-(viii) are equivalent. \indent Now suppose that statement (vi) holds. Thus, according to what has been proved, the sequences $\suc{\gap{\rr(A_n)}{\rr(A)}}$ and $\suc{\gap{\nn(A_n^*)}{\nn(A^*)}}$ converge to $0$ (recall that according to \cite[Chapter 4, Section 2, Subsection 3, Theorem 2.9]{kato}, $\gap{\rr(A_n)}{\rr(A)}=\gap{\nn(A_n^*)}{\nn(A^*)}$, $n\in \ene$).
In addition, according to Proposition \ref{proposition410}, for each $n\in\ene$, \begin{align*} & &&\core{A}_n=(A_n)^{(2)}_{\rr(A_n), \nn(A_n^*)},& &\core{A}=A^{(2)}_{\rr(A), \nn(A^*)}.& \end{align*} \noindent Let $\kappa=\parallel A\parallel\parallel \core{A}\parallel$ and consider $n_0\in\ene$ such that for all $n\ge n_0$, \begin{align*} w_n=&\gap{\nn((A_n)^{(2)}_{\rr(A_n), \nn(A_n^*)})}{\nn(A^{(2)}_{\rr(A), \nn(A^*)})}=\gap{\nn(A_n^*)}{\nn(A^*)}\\ &=\gap{\rr(A_n)}{\rr(A)}=\gap{\rr((A_n)^{(2)}_{\rr(A_n), \nn(A_n^*)})}{\rr(A^{(2)}_{\rr(A), \nn(A^*)})}<\frac{1}{(3+\kappa)^2},\\ &z_n=\parallel \core{A}\parallel\parallel A-A_n\parallel<\frac{2\kappa}{(1+\kappa)(4+\kappa)}.\\ \end{align*} \noindent Since $ \frac{1}{(3+\kappa)^2}\le\min\{\frac{1}{3+\kappa}, \frac{1}{(1+\kappa)^2}\}$, according to \cite[Theorem 3.5]{dx}, $$ \parallel \core{A}_n-\core{A}\parallel \le\frac{2(1+\kappa)w_n+(1+w_n)z_n}{1-(1+2\kappa) w_n-(1+w_n)z_n}\parallel \core{A}\parallel, $$ which implies statement (i). Now, according to \cite[Lemma 3.3]{kr1}, statement (iv) implies statement (xi), and according to Proposition \ref{proposition420} and \cite[Chapter 4, Section 2, Subsection 3, Theorem 2.9]{kato}, statements (ix)-(xii) are equivalent. Suppose that statement (x) holds. Since then statement (xii) also holds, to prove that statement (ii) holds, it is enough to apply an argument similar to the one used to prove that statement (vi) implies statement (i), interchanging in particular $A$ with $A^*$, $A_n$ with $A_n^*$, $\core{A}$ with $\dcore{A}$, $\core{A_n}$ with $\dcore{A_n}$, $(A_n)^{(2)}_{\rr(A_n), \nn(A_n^*)}$ with $(A_n)^{(2)}_{\rr(A_n^*), \nn(A_n)}$, $A^{(2)}_{\rr(A), \nn(A^*)}$ with $A^{(2)}_{\rr(A^*), \nn(A)}$, and $\kappa$ with $\kappa'=\parallel A\parallel \parallel \dcore{A}\parallel$. \end{proof} \indent Next the continuity of the (dual) core inverse will be studied in a particular case. To this end, two results from \cite{rs} need to be extended first. \begin{proposition}\label{pro540} Let $\xx$ be a Banach space and consider $A\in \ll (\xx)$ such that $A$ is group invertible and the codimension of $\rr(A)$ is finite. Suppose that there exists a sequence of operators $\suc{A_n}\subset \ll (\xx )$ such that for each $n\in\ene$, $A_n$ is group invertible and $\suc{A_n}$ converges to $A$. Then the following statements are equivalent. \begin{enumerate}[{\rm (i)}] \item The sequence $\suc{A^\#_n}$ converges to $A^\#$.\par \item For all sufficiently large $n\in\ene$, $\hbox{\rm codim }\rr(A_n)=\hbox{\rm codim }\rr(A)$. \end{enumerate} \end{proposition} \begin{proof} Recall that $A\in\ll (\xx)$ is group invertible if and only if $A^*\in \ll(\xx^*)$ is group invertible. A similar statement holds for each $A_n\in\ll (\xx)$ ($n\in\ene$). In addition, $\dim \nn(A^*)$ is finite and $\suc{A^*_n}\subset \ll (\xx^* )$ converges to $A^*$. Thus, according to \cite[Theorem 3]{rs}, statement (i) is equivalent to the fact that for all sufficiently large $n\in\ene$, $\dim \nn(A^*_n)=\dim \nn(A^*)$, which in turn is equivalent to statement (ii). \end{proof} \begin{proposition}\label{pro550} Let $\hh$ be a Hilbert space and consider $A\in \ll (\hh)$ such that $A$ is Moore-Penrose invertible and the codimension of $\rr(A)$ is finite. Suppose that there exists a sequence of operators $\suc{A_n}\subset \ll (\hh )$ such that for each $n\in\mathbb{N}$, $A_n$ is Moore-Penrose invertible and $\suc{A_n}$ converges to $A$. Then the following statements are equivalent.
\begin{enumerate}[{\rm (i)}] \item The sequence $\suc{A^\dag_n}$ converges to $A^\dag$. \item For all sufficiently large $n\in\ene$, $\hbox{\rm codim }\rr(A_n)=\hbox{\rm codim }\rr(A)$. \end{enumerate} \end{proposition} \begin{proof} Apply a similar argument to the one in the proof of Proposition \ref{pro540}, using in particular \cite[Corollary 10]{rs} instead of \cite[Theorem 3]{rs}. \end{proof} \begin{corollary}\label{cor560}Let $\hh$ be a Hilbert space and consider $A\in \ll (\hh)$ such that $A$ is group invertible and either the codimension of $\rr(A)$ is finite or $\dim \nn(A)$ is finite. Suppose that there exists a sequence of operators $\suc{A_n}\subset \ll (\hh )$ such that for each $n\in\mathbb{N}$, $A_n$ is group invertible and $\suc{A_n}$ converges to $A$. Then, the following statements are equivalent. \begin{enumerate}[{\rm (i)}] \item The sequence $\suc{A^\#_n}$ converges to $A^\#$.\par \item The sequence $\suc{A^\dag_n}$ converges to $A^\dag$. \end{enumerate} \end{corollary} \begin{proof} Recall that if an operator $S\in\ll (\hh)$ is group invertible, then $S$ is Moore-Penrose invertible (\hspace{-1pt}\cite[Theorem 6]{hm1}). To conclude the proof apply, when $\dim\nn(A)$ is finite, \cite[Theorem 3]{rs} and \cite[Corollary 10]{rs}, and when the codimension of $\rr(A)$ is finite, Proposition \ref{pro540} and Proposition \ref{pro550}. \end{proof} \indent Now a characterization of the continuity of the (dual) core inverse for a particular case of Hilbert spaces operators will be presented. \begin{theorem} \label{thm570}Let $\hh$ be a Hilbert space and consider $A\in \ll (\hh)$ such that $A$ is (dual) core invertible and either the codimension of $\rr(A)$ is finite or $\dim \nn(A)$ is finite. Suppose that there exists a sequence of operators $\suc{A_n}\subset \ll (\hh )$ such that for each $n\in\ene$, $A_n$ is (dual) core invertible and $\suc{A_n}$ converges to $A$. The following statements are equivalent. \begin{enumerate}[{\rm (i)}] \item The sequence $\suc{\core{A}_n}$ converges to $\core{A}$. \item The sequence $\suc{\dcore{A_n}}$ converges to $\dcore{A}$. \item The sequence $\suc{A^\dag_n}$ converges to $A^\dag$. \hspace*{\dimexpr\linewidth-\textwidth\relax}{\noindent When $\dim \nn (A)$ is finite, statements {\rm (i)}-{\rm (iii)} are equivalent to the following statement. } \item For all sufficiently large $n\in\ene$, $\dim \nn(A_n)=\dim \nn(A)$. \hspace*{\dimexpr\linewidth-\textwidth\relax}{\noindent When $\hbox{\rm codim } \rr(A)$ is finite, statements {\rm (i)}-{\rm (iii)} are equivalent to the following statement. } \item For all sufficiently large $n\in\ene$, $\hbox{\rm codim } \rr(A_n)=\hbox{\rm codim } \rr(A)$. \end{enumerate} \end{theorem} \begin{proof} Apply Theorem \ref{thm320}, Corollary \ref{cor560}, \cite[Theorem 3]{rs} and Proposition \ref{pro540}. For the case $A=0$, apply Remark \ref{rem3950} and Proposition \ref{prop3960}. \end{proof} \indent Now the finite dimensional case will be derived from Theorem \ref{thm570}. It is worth noticing that the following corollary also provides a different proof of a well-known result concerning the continuity of the Moore-Penrose inverse in the matricial setting, see \cite[Theorem 5.2]{S}. \begin{corollary}\label{cor580} Let $A \in \ce_m$ be a (dual) core invertible matrix. Suppose that there exists a sequence $\suc{A_n} \subset \ce_m$ of (dual) core invertible matrices such that $\suc{A_n}$ converges to $A$. The following statements are equivalent.
\begin{enumerate}[{\rm (i)}] \item The sequence $\suc{\core{A}_n}$ converges to $\core{A}$. \item The sequence $\suc{\dcore{A_n}}$ converges to $\dcore{A}$. \item The sequence $\suc{A_n^\dag}$ converges to $A^\dag$. \item There exists $n_0 \in \ene$ such that $\rk(A_n) = \rk(A)$, for $n \geq n_0$. \end{enumerate} \end{corollary} \begin{proof} Apply Theorem \ref{thm570}. \end{proof}

\section{Differentiability of the (dual) core inverse} \noindent To prove the main results of this section, some preparation is needed. Let $U\subseteq \ere$ be an open set and consider ${\bold a}\colon U\to\aa$ a function such that ${\bold a}(U)\subseteq\core{\aa}$. Since according to Theorem \ref{theo310}, $\core{\aa}=\dcore{\aa}=\aa^\#\subset\aa^\dag$, it is possible to consider the functions $$ \core{{\bold a}},\, \dcore{{\bold a}},\, {\bold a}^\#,\, {\bold a}^\dag\colon U\to\aa, $$ \noindent which are defined as follows. Given $u\in U$, \begin{align*} & &&\core{{\bold a}}(u)=\core{({\bold a}(u))},&&\dcore{{\bold a}}(u)=\dcore{({\bold a}(u))},&\\ & &&{\bold a}^\#(u)=({\bold a}(u))^\#,&&{\bold a}^\dag (u)= ({\bold a}(u))^\dag.&\\ \end{align*} Since in this section functions instead of sequences will be considered and the notion of continuity will be central in the results concerning differentiability, Theorem \ref{thm320} will be reformulated for functions. \begin{theorem}\label{thm530} Let $\aa$ be a unital $C^*$-algebra and consider $U\subseteq \ere$ an open set and a function ${\bold a}\colon U\to\aa$ such that ${\bold a}(U)\subseteq\core{\aa}$ and ${\bold a}$ is continuous at $t_0\in U$. The following statements are equivalent. \begin{enumerate}[{\rm (i)}] \item The element ${\bold a}(t_0)\in\core{\aa}$ and the function $\core{{\bold a}}$ is continuous at $t_0$. \item The element ${\bold a}(t_0)\in\dcore{\aa}$ and the function $\dcore{{\bold a}}$ is continuous at $t_0$. \item The element ${\bold a}(t_0)\in\aa^\#$ and the function ${\bold a}^\#$ is continuous at $t_0$. \item The element ${\bold a}(t_0)\in\core{\aa}$ and there exists an open set $V\subseteq U$ such that $t_0\in V$ and the function $\core{{\bold a}}$ is bounded on $V$. \item The element ${\bold a}(t_0)\in\dcore{\aa}$ and there exists an open set $W\subseteq U$ such that $t_0\in W$ and the function $\dcore{{\bold a}}$ is bounded on $W$. \item The element ${\bold a}(t_0)\in\aa^\dag$, the function ${\bold a}^\dag$ is continuous at $t_0$, and there exists an open set $I\subseteq U$ such that $t_0\in I$ and the function $\core{{\bold a}}{\bold a}$ is bounded on $I$. \item The element ${\bold a}(t_0)\in\aa^\dag$, the function ${\bold a}^\dag$ is continuous at $t_0$, and there exists an open set $J\subseteq U$ such that $t_0\in J$ and the function ${\bold a}\dcore{{\bold a}}$ is bounded on $J$. \item The element ${\bold a}(t_0)\in\aa^\dag$, the function ${\bold a}^\dag$ is continuous at $t_0$, and there exist an open set $Z\subseteq U$ such that $t_0\in Z$ and $\psi\in[0, \frac{\pi}{2})$ such that when ${\bold a}(t)\neq 0$ ($t\in Z$), $\psi_t=\psi_{{\bold a}(t)}\le \psi$. \end{enumerate} \end{theorem} \begin{proof} Apply Theorem \ref{thm320}.
\end{proof} \begin{remark}\label{rem591}\rm Note that under the same hypotheses of Theorem \ref{thm530}, when ${\bold a}(t_0)=0$, the continuity of the function $\core{{\bold a}}$ (respectively $\dcore{{\bold a}}$, ${\bold a}^\#$, ${\bold a}^\dag$) at $t_0$ is equivalent to the following condition: there exists an open set $K\subseteq U$ such that $t_0\in K$ and ${\bold a}(t)=0$ for all $t\in K$ (Remark \ref{rem3950}, Proposition \ref{prop3960}). \end{remark} \indent To study the differentiability of the (dual) core inverse, the differentiability of the Moore-Penrose inverse needs to be considered first. \begin{remark}\label{rem540} \rm Let $\aa$ be a unital $C^*$-algebra and consider an open set $U\subseteq\ere$ and a function ${\bold a} \colon U\to \aa$ such that ${\bold a}(U)\subset \aa^\dag$ and such that ${\bold a}$ is differentiable at some $t_0\in U$. Thus, a necessary and sufficient condition for ${\bold a}^\dag$ to be differentiable at $t_0$ is that ${\bold a}^\dag$ is continuous at $t_0$. In fact, if ${\bold a}(t_0)\neq 0$, there is an open set $V\subseteq U$ such that $t_0\in V$ and ${\bold a}(t)\neq 0$ for $t\in V$, and then according to \cite[Theorem 2.1]{kol}, this equivalence holds. On the other hand, if ${\bold a}(t_0)=0$, according to Remark \ref{rem3950} (vi)-(vii), the function ${\bold a}^\dag$ is continuous at $t_0$ if and only if there exists an open set $W$ such that $t_0\in W$ and ${\bold a}(t)=0$ for $t\in W$, which implies that ${\bold a}^\dag$ is differentiable at $t_0$. As a result, in \cite[Theorem 2.1]{kol} it is not necessary to assume that ${\bold a}(t)\neq 0$ for $t$ in a neighbourhood of $t_0$. \end{remark} \indent In the following theorem the differentiability of the (dual) core inverse will be studied. Note that the following notation will be used. Given a unital $C^*$-algebra $\aa$, if $U\subseteq \ere$ is an open set and ${\bold b}\colon U\to\aa$ is a function, then ${\bold b}^*\colon U\to\aa$ will denote the function ${\bold b}^*(t)=({\bold b}(t))^*$ ($t\in U$). In addition, if ${\bold b}\colon U\to\aa$ is differentiable at $t_0\in U$, then ${\bold b}'(t_0)$ will stand for the derivative of ${\bold b}$ at $t_0$. \begin{theorem}\label{thm510} Let $\aa$ be a unital $C^*$-algebra and consider $U\subseteq \ere$ an open set and a function ${\bold a}\colon U\to\aa$ that is differentiable at $t_0\in U$ and such that ${\bold a}(U)\subset \core{\aa}=\dcore{\aa}=\aa^\#$. The following statements are equivalent. \begin{enumerate}[{\rm (i)}] \item The function $\core{{\bold a}}$ is continuous at $t_0$. \item The function $\core{{\bold a}}$ is differentiable at $t_0$. \item The function $\dcore{{\bold a}}$ is differentiable at $t_0$. \item The function ${\bold a}^\#$ is differentiable at $t_0$.
\hspace*{\dimexpr\linewidth-\textwidth\relax}{\noindent Furthermore, the following formulas hold.} \item \begin{align*} (\core{{\bold a}})'(t_0)&=\core{{\bold a}}(t_0){\bold a}(t_0)({\bold a}^\dag)'(t_0)(\uno-{\bold a}(t_0)\core{{\bold a}}(t_0))- \core{{\bold a}}(t_0){\bold a}'(t_0)\core{{\bold a}}(t_0)\\ &+(\uno-\core{{\bold a}}(t_0){\bold a}(t_0)){\bold a}'(t_0){\bold a}^\dag(t_0)\core{{\bold a}}(t_0).\\ \end{align*} \item \begin{align*} (\dcore{{\bold a}})'(t_0)&=(\uno-\dcore{{\bold a}}(t_0){\bold a}(t_0))({\bold a}^\dag)'(t_0){\bold a}(t_0)\dcore{{\bold a}}(t_0) - \dcore{{\bold a}}(t_0){\bold a}'(t_0)\dcore{{\bold a}}(t_0)\\ &+\dcore{{\bold a}}(t_0){\bold a}^\dag(t_0){\bold a}'(t_0)(\uno-{\bold a}(t_0)\dcore{{\bold a}}(t_0)).\\ \end{align*} \item \begin{align*} ({\bold a}^\#)'(t_0)&=(\core{{\bold a}})'(t_0)\core{{\bold a}}(t_0){\bold a}(t_0)+ \core{{\bold a}}(t_0)(\core{{\bold a}})'(t_0){\bold a}(t_0)+ (\core{{\bold a}}(t_0))^2{\bold a}'(t_0)\\ &={\bold a}'(t_0)(\dcore{{\bold a}}(t_0))^2 +{\bold a}(t_0)(\dcore{{\bold a}})'(t_0)\dcore{{\bold a}}(t_0)+{\bold a}(t_0)\dcore{{\bold a}}(t_0)(\dcore{{\bold a}})'(t_0)\\ &= (\core{{\bold a}})'(t_0){\bold a}(t_0)\dcore{{\bold a}}(t_0)+\core{{\bold a}}(t_0){\bold a}'(t_0)\dcore{{\bold a}}(t_0)+\core{{\bold a}}(t_0){\bold a}(t_0)(\dcore{{\bold a}})'(t_0). \end{align*} \end{enumerate} \end{theorem} \begin{proof} According to Lemma \ref{lema4}, \begin{align*} \core{{\bold a}}(t)-\core{{\bold a}}(t_0)&= \core{{\bold a}}(t){\bold a}(t)({\bold a}^\dag(t)-{\bold a}^\dag (t_0))(\uno-{\bold a}(t_0)\core{{\bold a}}(t_0))\\ &+\core{{\bold a}}(t)({\bold a}(t_0)-{\bold a}(t))\core{{\bold a}}(t_0) + (\uno-\core{{\bold a}}(t){\bold a}(t))({\bold a}(t)-{\bold a}(t_0)){\bold a}^\dag(t_0)\core{{\bold a}}(t_0).\\ \end{align*} Now suppose that statement (i) holds. According to Theorem \ref{thm530}, the function ${\bold a}^\dag$ is continuous at $t_0$, and according to \cite[Theorem 2.1]{kol} and Remark \ref{rem540}, the function ${\bold a}^\dag$ is differentiable at $t_0$. Thus, $$\frac{\core{{\bold a}}(t){\bold a}(t)({\bold a}^\dag (t)-{\bold a}^\dag (t_0))(\uno-{\bold a}(t_0)\core{{\bold a}}(t_0))}{t-t_0}$$ \noindent converges to $\core{{\bold a}}(t_0){\bold a}(t_0)({\bold a}^\dag)'(t_0)(\uno-{\bold a}(t_0)\core{{\bold a}}(t_0))$. In addition, $$ \frac{\core{{\bold a}}(t)({\bold a}(t_0)-{\bold a}(t))\core{{\bold a}}(t_0)}{t-t_0} $$ \noindent converges to $- \core{{\bold a}}(t_0){\bold a}'(t_0)\core{{\bold a}}(t_0)$, and $$ \frac{(\uno-\core{{\bold a}}(t){\bold a}(t))({\bold a}(t)-{\bold a}(t_0)){\bold a}^\dag(t_0)\core{{\bold a}}(t_0)}{t-t_0} $$ \noindent converges to $(\uno-\core{{\bold a}}(t_0){\bold a}(t_0)){\bold a}'(t_0){\bold a}^\dag(t_0)\core{{\bold a}}(t_0)$. Consequently, statements (ii) and (v) hold. It is evident that statement (ii) implies statement (i). Now observe that the function ${\bold a}^*\colon U\to\aa$ is differentiable at $t_0$ and ${\bold a}^*(U)\subset \core{\aa}$ (Theorem \ref{theo310}). Suppose that statement (i) holds. According to the identity $\core{({\bold a}^*)}(t)=(\dcore{{\bold a}})^*(t)$ and Theorem \ref{thm530}, the function $\core{({\bold a}^*)}\colon U\to \aa$ is continuous at $t_0$. Thus, according to what has been proved, the function $\core{({\bold a}^*)}\colon U\to \aa$ is differentiable at $t_0$. Therefore, the function $\dcore{{\bold a}}\colon U\to \aa$ is differentiable at $t_0$. Consequently, statement (iii) holds. Furthermore, since $(\dcore{{\bold a}})'(t_0)=\bigl((\core{({\bold a}^*)})'(t_0)\bigr)^*$, to prove statement (vi), apply statement (v).
On the other hand, if statement (iii) holds, then the function $\dcore{{\bold a}}$ is continuous at $t_0$. According to Theorem \ref{thm530}, statement (i) holds. Suppose that statement (i) holds. According to \cite[Theorem 2.19]{rdd} and Lemma \ref{lema2} (iv), the following identities hold: $$ {\bold a}^\#=(\core{{\bold a}})^2{\bold a}={\bold a}(\dcore{{\bold a}})^2=\core{{\bold a}}{\bold a}\dcore{{\bold a}}. $$ Therefore, according to what has been proved, the function ${\bold a}^\#$ is differentiable at $t_0$. Furthermore, from these identities statement (vii) can be derived. On the other hand, according to Theorem \ref{thm530}, statement (iv) implies statement (i). \end{proof} \begin{remark}\label{rem570}\rm Under the same hypothesis of Theorem \ref{thm510}, the following facts should be noted.\par \noindent (i). When ${\bold a}(t_0)=0$, according to Remark \ref{rem591}, $$ (\core{{\bold a}})'(t_0)=(\dcore{{\bold a}})'(t_0)=({\bold a}^\#)'(t_0)=({\bold a}^\dag)' (t_0)=0. $$ \noindent (ii). Recall that in \cite[Theorem 2.1]{kol}, a formula concerning the derivative of the function ${\bold a}^\dag$ at $t_0$ was given.\par \noindent (iii). Note that according to Theorem \ref{thm530} and Theorem \ref{thm510}, a necessary and sufficient condition for the function $\dcore{{\bold a}}$ (respectively ${\bold a}^\#$) to be differentiable at $t_0$ is that $\dcore{{\bold a}}$ (respectively ${\bold a}^\#$) is continuous at $t_0$. In fact, the continuity of one of the functions $\core{{\bold a}}$, $\dcore{{\bold a}}$ and ${\bold a}^\#$ at a point $t_0$ is equivalent to the continuity and the differentiability of the three functions under consideration at $t_0$ (Theorem \ref{thm530} and Theorem \ref{thm510}).\par \noindent (iv). According to \cite[Theorem 2.19]{rdd}, \begin{align*} & &&{\bold a}^\dag=\dcore{{\bold a}}{\bold a}\core{{\bold a}},& &\core{{\bold a}}= {\bold a}^\#{\bold a}{\bold a}^\dag,& &\dcore{{\bold a}}={\bold a}^\dag {\bold a}{\bold a}^\#.&\\ \end{align*} \noindent Thus, the derivative of ${\bold a}^\dag$, $\core{{\bold a}}$ and $\dcore{{\bold a}}$ at $t_0$ can also be computed as follows: \begin{align*} ({\bold a}^\dag)'(t_0)&= (\dcore{{\bold a}})'(t_0){\bold a}(t_0)\core{{\bold a}}(t_0)+ \dcore{{\bold a}}(t_0){\bold a}'(t_0)\core{{\bold a}}(t_0) +\dcore{{\bold a}}(t_0){\bold a}(t_0)(\core{{\bold a}})'(t_0).\\ (\core{{\bold a}})'(t_0)&=({\bold a}^\#)'(t_0){\bold a}(t_0){\bold a}^\dag(t_0)+ {\bold a}^\#(t_0){\bold a}'(t_0){\bold a}^\dag(t_0)+{\bold a}^\#(t_0){\bold a}(t_0)({\bold a}^\dag)'(t_0).\\ (\dcore{{\bold a}})'(t_0)&=({\bold a}^\dag)'(t_0){\bold a}(t_0){\bold a}^\#(t_0)+{\bold a}^\dag(t_0){\bold a}'(t_0){\bold a}^\#(t_0)+ {\bold a}^\dag(t_0){\bold a}(t_0) ({\bold a}^\#)'(t_0).\\ \end{align*} \end{remark}
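\indent The results of this section can be illustrated with a simple concrete family of matrices. Consider $\aa=\ce_2$ and the function ${\bold a}\colon (0,\infty)\to\ce_2$ defined by $$ {\bold a}(t) = \left[ \begin{array}{cc} t & 1 \\ 0 & 0 \end{array} \right]. $$ \noindent A direct verification of the identities ${\bold a}(t)x^2=x$, $x{\bold a}(t)^2={\bold a}(t)$ and $({\bold a}(t)x)^*={\bold a}(t)x$ shows that, for every $t>0$, $$ \core{{\bold a}}(t) = \left[ \begin{array}{cc} 1/t & 0 \\ 0 & 0 \end{array} \right], \qquad {\bold a}^\dag(t) = \frac{1}{t^2+1}\left[ \begin{array}{cc} t & 0 \\ 1 & 0 \end{array} \right]. $$ \noindent In particular, the functions $\core{{\bold a}}$ and ${\bold a}^\dag$ are differentiable on $(0,\infty)$, with $(\core{{\bold a}})'(t)=\left[ \begin{array}{cc} -1/t^2 & 0 \\ 0 & 0 \end{array} \right]$, in accordance with Theorem \ref{thm510}. Note also that $\core{{\bold a}}(t)$ is unbounded as $t$ tends to $0$, which is consistent with the fact that the limit matrix $\left[ \begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array} \right]$, i.e., the matrix $A$ of Example \ref{ejemplo2}, is not core invertible.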
\section{Introduction} In the past decade, optical lattice clocks \cite{Ludlow2015,Katori2011} have made dramatic progress in accuracy and stability, surpassing their microwave counterparts to have the lowest fractional uncertainty of any frequency standard to date. Ensuring the continuation of this progress demands that the environmental perturbations affecting their accuracy are characterized to increasingly precise levels. Motivated by this challenge, we report on a new method using highly excited Rydberg states to provide an \textit{in situ} measurement of the DC electric field. Uncharacterized electric fields can severely impact the accuracy of an atomic clock. For the $5s^2$~$^1$S$_0-5s5p$~$^3$P$_0$ clock transition in strontium, an electric field of 570 V/m yields a DC Stark shift of 1 Hz \cite{Middelmann2012}, or $2\times10^{-15}$ in fractional units, some three orders of magnitude above the lowest estimated total inaccuracy of a strontium optical lattice clock \cite{Nicholson2015}. Where dielectric surfaces are close to the atoms, shifts as large as 1$\times 10^{-13}$ have been observed \cite{Lodewyck2012}. While steps can be taken to reduce the residual electric field seen by the reference atoms, such as Faraday shielding \cite{Beloy2014}, or UV discharge of dielectric surfaces \cite{Pollack2010}, a characterization of the remaining field is necessary. This is typically done by direct spectroscopy of the clock transition with an externally applied electric field. With no residual electric field present, the quadratic nature of the perturbation implies the resulting induced frequency shift should be unchanged if the polarity of the applied field is reversed while its magnitude is held fixed \cite{Matveev2011}. However, this method relies on the ability to apply sufficiently large and stable electric fields at the position of the atoms to induce a shift large enough to be quickly resolved during operation of the clock. For metallic vacuum chambers with minimal dielectric openings and no internal electrodes, producing such a shift is problematic. Furthermore, the applied field can charge dielectric materials such as the vacuum viewports \cite{Abel2011}, resulting in a time dependence of the effective applied field. We circumvent these challenges by performing \emph{in-situ} electrometry using electromagnetically induced transparency (EIT) spectroscopy \cite{Fleischhauer2005} to measure the quadratic Stark shift of the Sr $5s75d$~$^1D_2$~$m_J = 0,\pm1,\pm2$ Rydberg states. Rydberg states of alkaline earth atoms are of growing interest for applications in quantum information \cite{Daley2008} and many-body physics \cite{Mukherjee2011}, motivating their study by several groups \cite{Millen2010,DeSalvo2016}. The low-frequency polarizability scales with principal quantum number $n$ as $n^7$, making Rydberg states well suited for AC \cite{Sedlacek2012,Holloway2014,Fan2015} and DC \cite{Osterwalder1999, Thiele2015, Doughty1984} electrometry, with EIT spectroscopy being a particularly convenient measurement technique \cite{Abel2011, Mohapatra2007, Mohapatra2008,Tauschinsky2010}. The polarizability of our chosen Rydberg state is eight orders of magnitude larger than that of the clock transition, which reduces the required spectroscopic resolution from sub-Hz, as needed when using the clock transition, to MHz when using Rydberg states to achieve the desired level of inaccuracy. 
It has also been proposed to use Rydberg states to measure ambient black-body radiation \cite{Ovsiannikov2011} which is responsible for the leading systematic uncertainty in many current Sr lattice clocks \cite{LeTargat2013, Falke2014, Poli2014, Takamoto2005}. Using this spectroscopic method, we reduce the fractional uncertainty of the DC Stark shift of the clock transition to $2\times10^{-20}$. Furthermore, the formation of Rydberg states in a system designed for operation as an atomic clock opens the possibility to investigate proposals to use long range Rydberg interactions to generate squeezed states which exhibit reduced quantum projection noise \cite{Gil2014}. \section{Theory: Single electron model and Stark maps } \label{theory} Alongside their large polarizability, another key advantage of Rydberg states for precision electrometry is that their Stark map - the variation of energy levels with the applied electric field - may be calculated to a very high degree of accuracy \cite{Zimmerman1979}. Even in divalent atoms such as strontium, where inter-electronic Coulomb interactions lead to perturbations of the Rydberg states \cite{Gallagher1994}, it can be shown that accurate wavefunctions \cite{Vaillant2012,Vaillant2014,Ye2012}, and Stark maps \cite{Millen2011,Lochead2013,Hiller2014} can be obtained without recourse to the complex atomic structure calculations required for the clock states \cite{Safronova2013}. \begin{figure}[h] \centering \includegraphics[width=0.45\textwidth]{figure_theoryV2.pdf} \caption{\label{fig:fig1} (a) Predicted Stark shift of the $5s47d$~$^1$D$_2$ $m_J$ = 0 state compared to experimental data (black dots) without any adjustable parameters. (b) Predicted Stark shift of the three $\vert m_J \vert$ components of the $5s75d$~$^1$D$_2$ state. The shaded area represents the uncertainty arising from the experimental determination of the zero-field energy (see text).} \end{figure} This simplification occurs because for Rydberg states, the effect of interelectronic interactions occurs primarily through the existence of spatially compact doubly-excited perturber states that overlap in energy with the Rydberg manifold. However, since the static polarizability is dominated by the long-range character of the wavefunction \cite{Vaillant2015}, these states do not significantly alter the Stark maps. In previous work we have shown that an effective one-electron treatment that neglects inter-electronic effects gives Stark maps that are in agreement with measurements for high-lying strontium Rydberg states \cite{Millen2011}. Here we develop this approach to calculate Stark maps with the well-characterized uncertainty necessary to constrain the instability due to the electric field. The method is based on analytic expressions for the wavefunctions and dipole matrix elements generated using the Coulomb approximation \cite{Zimmerman1979}. The wavefunction is parametrized by a quantum defect, which is obtained by fitting to the experimentally measured zero-field energies. To obtain the Stark map, Rydberg states within a range of $[n-3,n+5]$ from the target states and with $l \in [0,15]$ are included in the Stark Hamiltonian, which is then diagonalised numerically for each value of the field. At low electric field, the Stark shift of non-degenerate states is approximately quadratic, and a fit of the form $\Delta_\mathrm{E} = \frac{1}{2}\alpha_0 E^2$ yields the static polarizability $\alpha_0$. 
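As an illustration of this last step (ours, not part of the original analysis), the short sketch below generates Stark shifts by diagonalising a toy two-level Hamiltonian and then extracts $\alpha_0$ from a fit of the form $\Delta_\mathrm{E} = \frac{1}{2}\alpha_0 E^2$; all numerical values are placeholders rather than the strontium quantum-defect data used in the text.
\begin{verbatim}
import numpy as np

# Toy two-level "Stark Hamiltonian": the target state is coupled by the field
# to a single neighbouring state an energy "gap" away.  All numbers are
# placeholders, chosen only to illustrate the diagonalise-then-fit procedure.
gap, dip = 10.0, 0.3          # zero-field gap and dipole coupling (arbitrary units)

fields = np.linspace(0.0, 0.5, 26)
shifts = np.array([np.linalg.eigvalsh(np.array([[0.0, dip * E],
                                                [dip * E, gap]]))[0]
                   for E in fields])      # Stark-shifted energy of the target state

# Low-field fit Delta_E = 1/2 * alpha0 * E^2, i.e. a linear fit in E^2
alpha0 = 2.0 * np.polyfit(fields**2, shifts, 1)[0]
print(alpha0, -2.0 * dip**2 / gap)        # fitted vs. perturbation-theory value
\end{verbatim}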
In Fig.\ref{fig:fig1}(a) we show an example Stark map compared to experimental measurements taken in an atomic beam apparatus with a well-defined electrode geometry \cite{Hanley2017}. The data and the model are in quantitative agreement without any adjustable parameters. The predicted Stark map for the higher-lying state used to constrain the field in the lattice clock is shown in Fig.\ref{fig:fig1}(b). In both cases, the shaded band indicates the theoretical uncertainty in the Stark map. By far the dominant contribution to this uncertainty is the experimental uncertainty in the zero-field energies used to calculate the wavefunctions. In strontium, the current state-of-the-art absolute frequency measurements of Rydberg energy levels have an uncertainty of $\pm30$ MHz for $S$ and $D$ states \cite{Beigang1982}, with measurements on the other series having much greater errors \cite{Rubbmark1978}. The zero-field energies for each series and the corresponding errors are obtained by fitting these experimental data with the Rydberg-Ritz formula \cite{Vaillant2012}. The shaded region corresponds to the extremal cases where the 1-sigma errors on each series are combined to give the extremal overall polarizabilities. Using ultracold atoms and frequency comb technology, it was recently shown that absolute Rydberg spectroscopy with 10 kHz uncertainty is possible \cite{Kliese2016}, opening the way to a significant improvement in the Stark map uncertainty. \section{ Experimental Approach } We follow the standard approach for producing cold strontium samples \cite{katori1999, Loftus2004, Sorrentino2006}. We operate a blue magneto-optical trap (MOT) on the 461 nm transition for 650 ms followed by a broadband and single-frequency red MOT for 100 ms and 150 ms, respectively, which results in a sample of approximately $10^5$ $^{88}$Sr atoms at a temperature of around 1 $\upmu$K. Details of our apparatus can be found in \cite{Hill2014} and \cite{Hill2016}. Before implementing the EIT probe pulse, the cloud of atoms is released from the red MOT for 5 ms, giving time for the magnetic field to settle to the desired bias value, and for the atoms to expand ballistically to a lower density, which was observed to improve the signal-to-noise ratio. For the implementation of the EIT spectroscopy, counter-propagating beams, one resonant with the $5s^2$~$^1$S$_0-5s5p$~$^1$P$_1$ transition at 461~nm and the other with tunable frequency at 413 nm, excite atoms to a chosen 5$snd$~$^1$D$_2$ Rydberg state. The resulting EIT signal is measured using `lock-in' detection of the 461~nm probe beam absorption via modulation of the 413 nm pump beam intensity by an optical chopper. A typical absorption measurement showing the modulated EIT signal induced by the pump beam is shown in the top of Fig.\ref{fig:timetrace}. The probe beam is derived from a commercial frequency-doubled diode laser system. Its power and waist, as defined by the 1/$e^2$ radius of the intensity profile, are 800~fW and 120 $\upmu$m, respectively. At this power and atomic number, the probe beam absorption is between $20 - 40\%$. A home-built extended-cavity diode laser (ECDL) provides 8 mW of pump light which is focused to an 80 $\upmu$m waist at the atoms' position. The bottom inset in Fig.\ref{fig:timetrace} shows typical spectra taken at zero magnetic field with and without the applied external electric field. The long-term frequency stability of the pump and probe beam is maintained to within 10 kHz by locking to a transfer cavity referenced to the `clock' laser. 
In the case of the probe beam, the sub-harmonic at 922 nm is directly locked to the cavity. To stabilize the 413~nm pump laser frequency, a commercial Ti-sapphire laser, which is typically used to form the magic wavelength lattice at 813~nm, is first tuned to 826 nm and locked to the transfer cavity. This light is then frequency doubled by a LBO crystal \footnote{Purchased from Eksma Optics, Optolita UAB Mokslininku str. 11 LT-08412 Vilnius Lithuania} to produce 20 $\upmu$W as needed to generate a beat-note with the pump beam. The beat-note signal is mixed with a direct digital synthesiser (DDS) and the intermediate frequency is stabilized using a delay line offset lock scheme \cite{Schunemann1999} via fast feedback to the diode current and slow feedback to the ECDL piezo. The DDS provides the necessary tunability needed for scanning the pump laser frequency. In order to spectroscopically resolve the Zeeman sub-levels of the Rydberg state, an external magnetic field between 100-300 $\upmu$T is applied orthogonal to the propagation direction of the pump and probe beams. In this low field regime, the Zeeman splitting of the intermediate $5s5p$~$^1$P$_1$ state is negligible compared to its natural linewidth. The probe beam is linearly polarised orthogonal to the quantization axis to enable a balanced access to all the $m_J$ levels within the Rydberg manifold given the fixed polarisation of the pump light. Any background residual magnetic field is nulled using electron-shelving spectroscopy on the narrow-linewidth $5s^2$~$^1$S$_0-5s5p$~$^3$P$_1$ transition at 689 nm \cite{Akatsuka2008}. We resolve a Doppler-broadened linewidth of 40 kHz which constrains any residual field to below 2 $\upmu$T. To test the sensitivity of our method, an external plate electrode located directly opposite a radial DN40 viewport is used to apply a DC electric field in order to induce a Stark shift of the Rydberg states. Shielding from the metal vacuum chamber greatly attenuates the applied field at the atoms' position, meaning several kV potentials are needed to induce a substantial Stark shift. Such large potentials have the unfortunate effect of charging the dielectric viewport resulting in an exponential decay of the applied electric field strength at the atoms' position as inferred from a reduction in the Stark shift with time. To ameliorate this effect, we interleave measurements with opposite field polarity to avoid charging any external surfaces. \begin{figure}[h] \centering \includegraphics[width=0.47\textwidth]{EIT_time_trace_and_spectra_inkscape_mod.pdf} \caption{\label{fig:timetrace} \textit{top:} Modulation of the pump beam intensity (green) and its corresponding modulated EIT induced on the probe absorption signal (blue). The pump beam is resonant with the $n=53$ Rydberg state and both the external magnetic and electric fields set to zero. \textit{bottom:} An EIT spectrum is obtained by scanning the pump beam frequency and repeating such an absorption measurement. Example spectra are shown for zero applied magnetic field, with the applied electric field on (grey) and off (blue). } \end{figure} \section{ Rydberg Electrometry using EIT Spectroscopy } As we do not have a method to directly measure the pump and probe frequencies, we have instead developed a method for measuring the applied electric fields that is based on the relative splitting of spectral lines, rather than the absolute detuning. 
In the absence of an electric field, the Zeeman sublevels split symmetrically with applied magnetic field \textbf{\textsl{B}}. However, in the presence of an electric field \textbf{\textsl{E}}, the Stark shift will result in an asymmetry in the spectrum, since the Stark shift depends on $|m_\mathrm{J}|$. Example spectra with and without an applied electric field are shown in Fig.\ref{fig:Efit}. In order to extract the line centers, five Fano profiles, corresponding to the five possible $m_J$ transitions, are fitted to each spectrum. The asymmetric effect of an applied electric field on the observed Zeeman shift of the $m_\mathrm{J}$ levels is clearly visible in Fig.\ref{fig:Efit}. To obtain the electric field from the spectroscopic data, the relative line positions are compared to a calculation of the combined Zeeman and Stark shift of each level. In the general case, the magnetic and electric field vectors are separated by an angle $\beta$, requiring transformation to a common basis. Choosing to work in the $\ket{J, m}$ basis defined by the magnetic field quantization axis, the matrix elements of the Zeeman Hamiltonian are given by \begin{equation} \bra{J, m_1}H_B\ket{J,m_2}=-m_1\upmu B \delta_{m_1m_2} \end{equation} \noindent where $\upmu$, the magnitude of the magnetic dipole moment, is the Bohr magneton for a singlet state and $\delta$ is the Kronecker delta function. The Stark Hamiltonian, with eigenenergies $\Delta_E(m_J, E)$ that are computed as outlined in section \ref{theory}, is rotated by an angle $\beta$ by applying the appropriate Wigner D-matrix, $d^J_{m,m'}(\beta)$ for $J$ = 2. The matrix elements of this transformed Hamiltonian are given by \begin{equation} \bra{J, m_1}H_E\ket{J,m_2}= \sum_{m'} d^J_{m_1,m'}(\beta)d^J_{m_2,m'}(\beta)\Delta_E(m', E) \end{equation} \noindent Finally, the theoretical splitting is computed by diagonalizing the Hamiltonian $H = H_{\textrm{E}} + H_{\textrm{B}}$. Using this approach, we fit the experimentally observed energy splitting by varying the electric field strength \textit{E} and its angle $\beta$ relative to the applied magnetic field in the model (a minimal numerical sketch of this construction is given below). As the Stark shift is quadratic, our method only determines $\beta$ modulo $\pi$. The only other fitting parameter is an overall two-photon detuning from the zero-field resonance as we have no measure of the absolute frequency of the 413~nm laser. Fig.\ref{fig:Efit} shows the relative energy splitting for various magnetic fields with and without an applied electric field. An external electrode set to 2 kV, the maximum allowed by the high voltage supply, generated the applied electric field. From a fit to the Zeeman splitting, the electric field at the position of the atoms is estimated to be $5.75\pm 0.11(\textrm{stat})\pm 0.16 (\textrm{sys}) \textrm{V} \textrm{m}^{-1}$. The fitting procedure also returned a value of $\beta = 0.47(1)\pi$ that is consistent with the axial magnetic field and radially applied electric field. An electric field of such magnitude would result in a fractional frequency shift of the clock transition equal to $2\times10^{-19}$. Since this is the largest field we can apply, it would have taken approximately a year of continuous operation to resolve this frequency shift given our fractional frequency instability, highlighting the utility of this method when applying large external fields is not possible. Next, the external field was switched off and the procedure was repeated. 
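A minimal numerical sketch of this construction (ours, written only to illustrate the two matrix elements above) is given below; it builds the Wigner rotation as $d^J(\beta)=e^{-i\beta J_y}$ for $J=2$ and diagonalises $H = H_{\textrm{E}} + H_{\textrm{B}}$, with the Stark eigenenergies $\Delta_E(m_J,E)$ replaced by placeholder quadratic shifts standing in for the Stark-map calculation of section \ref{theory}.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

J = 2
m = np.arange(-J, J + 1)                  # basis ordering m = -2, ..., +2

# Angular-momentum matrices in the |J, m> basis (hbar = 1)
Jp = np.diag(np.sqrt(J * (J + 1) - m[:-1] * (m[:-1] + 1.0)), -1)  # <m+1|J+|m>
Jy = (Jp - Jp.T) / 2.0j

def wigner_d(beta):
    # Wigner small-d matrix d^J_{m,m'}(beta) = <J,m| exp(-i beta Jy) |J,m'>
    return np.real(expm(-1.0j * beta * Jy))

def hamiltonian(B, E, beta, mu, stark_shift):
    # <m1|H_B|m2> = -m1 mu B delta_{m1,m2}
    H_B = -mu * B * np.diag(m.astype(float))
    # <m1|H_E|m2> = sum_m' d_{m1,m'} d_{m2,m'} Delta_E(m', E)
    d = wigner_d(beta)
    H_E = d @ np.diag([stark_shift(mi, E) for mi in m]) @ d.T
    return H_B + H_E

# Placeholder quadratic, |m_J|-dependent Stark shifts (stand-ins for the Stark map)
alpha = {0: -1.0, 1: -0.8, 2: -0.4}      # illustrative polarizabilities only
stark = lambda mi, E: 0.5 * alpha[abs(mi)] * E**2

levels = np.linalg.eigvalsh(hamiltonian(B=1.0, E=0.5, beta=0.47 * np.pi,
                                        mu=1.0, stark_shift=stark))
print(levels)   # the five perturbed m_J levels used to fit the measured splittings
\end{verbatim}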
A fit to the resulting splitting revealed a residual electric field of $1.52^{+0.62(\textrm{stat})}_{-0.22}$$^{+0.05(\textrm{sys})}_{-0.03} \textrm{V} \textrm{m}^{-1}$, most likely due to patch potentials on the surrounding chamber. The uncertainty for this electric field value comprises both the statistical error resulting from the fitting procedure and the systematic error arising from the uncertainty in the Stark map for $75^1$D$_2$. The quoted statistical error corresponds to a $68\%$ confidence interval as determined by the fitting procedure. A weak correlation observed between the uncertainty in $\beta$ and the electric field is taken into account in this estimate \cite{Press1988}; see appendix \ref{fitting} for further details. The systematic error on the electric field value due to the uncertainty in the Rydberg polarizability was calculated by repeating the fitting procedure with revised Stark maps offset from the theoretically predicted value by $\pm\sigma$. Translating this electric field and corresponding uncertainty to the DC Stark shift of the $^1$S$_0$-$^3$P$_0$ clock transition results in a fractional frequency shift of $-1.6^{+0.4}_{-1.6}\times10^{-20}$. The fractional uncertainty of the differential polarizability of the clock states is negligible compared to that of the electric field and therefore has been ignored in the quoted uncertainty. \begin{figure}[h] \centering \includegraphics[width=0.48\textwidth]{Peak_Location_vs_E_field_with_spectra.pdf} \caption{Inset $a$ shows the Zeeman splitting without an applied electric field for the Rydberg state n = 75 (the error bars are smaller than the symbols). The line centers are extracted from the EIT spectra, an example of which is shown in inset $c$ for the highest field case. Even without an applied field, a fit to the splitting reveals a slight asymmetry resulting from the tensor nature of the Stark shift, consistent with a residual electric field of 1.52~$\textrm{V} \textrm{m}^{-1}$. For the applied field case, shown in insets $(b)$ and $(d)$, the Stark shift is clearly visible and the fitting procedure returns an electric field of 5.75$~\textrm{V} \textrm{m}^{-1}$.} \label{fig:Efit} \end{figure} \section{Conclusion} In conclusion, we believe that Rydberg electrometry constitutes a valuable technique for controlling systematic errors in optical lattice clocks. DC Stark shifts can in principle be separated from other systematic uncertainties using measurements on the clock transition alone, but this is time-consuming and requires the application of well-characterized external electric fields. In contrast, Rydberg states selectively enhance the spectroscopic sensitivity to stray electric fields by several orders of magnitude. The high spectroscopic resolution provided by EIT thus enables rapid quantitative measurements of the stray electric field. On the practical side, all that is required is a single additional laser to provide the pump beam, and existing lattice clock setups need not be modified to include electrodes. The constraint on the clock uncertainty that we obtained is compatible with the accuracy of the current generation of lattice clocks, and improved spectroscopy of the relevant Rydberg levels would see this reduced to negligible levels. Lastly, we note that the combination of Rydberg states and optical lattice clocks could also be applied to measurements of blackbody-induced systematic errors, and the creation of non-classical states. 
\section{Acknowledgments} The authors would like to thank Marco Schioppo for careful reading of our manuscript and Elizabeth Bridge for useful discussion regarding the feasibility of strontium Rydberg atoms for electrometry. This work was done under the auspices of UK NMO program. WB would like to acknowledge the support from the Marie Curie Initial Training Network FACT. PH and MPAJ acknowledge C Vaillant for useful discussions and the early version of the Stark map code. The experimental Stark map in Fig 1 was measured by R Hanley. They also acknowledge financial support from EPSRC grant EP/J007021/, and EU grants FP7-ICT-2013-612862-HAIRS, H2020- FETPROACT- 2014-640378-RYSQ and H2020-MSCA-IF-2014-660028.
\subsection{1. Introduction} The applications of gamma-rays are ubiquitous in daily life, ranging from container security initiatives\cite{lun2008institutional} and gamma-knife surgery\cite{ganz2012gamma} to nuclear medical imaging\cite{eisen1999cdte} and food storage\cite{LADO2002433}. In the vastness of the universe, photons ranging from several MeV to tens of TeV\cite{lamb1997point,abdo2010fermi,aharonian2004crab} result from a variety of processes, such as energetic cosmic rays\cite{kulsrud1969effect,hunter1997egret}, luminous pulsars\cite{romani1996gamma} and gamma-ray bursts\cite{gamma_ray_burst_RMP,gamma_ray_burst_theories}. Gamma-ray bursts were first reported from the Vela satellite data\cite{klebesadel1973observations} and quickly confirmed by data from the Soviet satellites\cite{mazets1974cosmic}. The ability of cosmic sources to emit such intense gamma-rays indicates that investigating this extreme environment is a promising route to new physics that is inaccessible in earth-bound laboratories. An alternative way of generating violent gamma-ray emission in the laboratory is through the interaction of petawatt ($10^{15}$W) lasers with plasmas. Several multi-PW laser facilities, such as the Extreme Light Infrastructure (ELI)\cite{ELI} and the Exawatt Center for Extreme Light Studies (XCELS)\cite{XCELS}, are expected to operate at intensities beyond 10$^{23}$W/cm$^{2}$ in the next few years. At $\sim$10$^{23}$W/cm$^{2}$, various theoretical schemes have been put forward for multi-MeV photon sources with total conversion efficiencies of tens of percent, such as reinjected electron synchrotron radiation\cite{Brady2012PRL}, skin-depth emission\cite{Ridgers2012PRL}, radiation-reaction-facilitated gamma-ray emission\cite{Nakamura2012PRL} and the sandwich target design\cite{Stark2016PRL}. Nevertheless, none of them is able to extend the photon energy up to several GeV, which is highly desirable for laboratory astrophysics\cite{RevModPhys.78.755,RRD_bulanov2015}. Recently, by exploiting the interplay between pair cascades\cite{bell2008possibility} and anomalous radiative trapping\cite{gonoskov2014anomalous}, an ultrabright GeV photon source was shown to be achievable in laser-dipole waves\cite{gonoskov2017ultrabright}. However, the dipole wave configuration\cite{gonoskov2012dipole,gonoskov2013probing,gonoskov2014anomalous} requires multiple beams focused symmetrically onto a tiny spot, which is still an experimental challenge. Here we report an alternative all-optical scheme to realize brilliant GeV gamma-ray emission by irradiating a single multi-PW circularly polarized (CP) pulse on a compound target in the QED regime. This all-optical backscatter configuration has already been demonstrated experimentally at lower intensities\cite{phuoc2012all,chen2013mev,powers2014quasi,sarri2014ultrahigh,khrennikov2015tunable}. \subsection{2. Theoretical model for coupling effect} In the realm of nonlinear QED, an electron can emit a large fraction of its kinetic energy in the form of high-energy photons $\gamma_{ph}$ as a result of absorbing a number $n$ of laser photons $\gamma_l$, $e^- + n\gamma_l \rightarrow e^- + \gamma_{ph}$. 
The invariant parameters $\eta=(e\hbar/m_e^3c^4)|F_{\mu\nu}p^\nu|=E_{RF}/E_{Sch}$ and $\chi=(e\hbar^2/2m_e^3c^4)|F_{\mu\nu}k^\nu|$ characterize the discrete photon emission process, where $e$ the electron charge, $m_e$ the electron rest mass, $\hbar$ the Planck constant, $c$ the light velocity in vacuum, $F_{\mu\nu}$ the field tensor and $p^\nu$ ($k^\nu$) the electron's (photon's) four-momentum. $E_{RF}$ denotes the electric field in the electron's rest frame and $E_{Sch}=m_e^2c^3/e\hbar\approx1.3\times10^{18}Vm^{-1}$ is the characteristic field of Schwinger limit\cite{schwinger1951gauge}. When $\eta\lesssim1$:(1) The radiation process should be described by probabilistic quantum emission rather than continuous one. (2) The corresponding quantum weaken correction for radiation is inevitable\cite{kirk2009pair,QED_domian_di2012}. When an electron beam co-propagates with the laser pulse, the electric force offset by the magnetic field effect results in $\eta\approx0$, which is undesired for high-energy photon emission\cite{RRD_bulanov2015,QED_domian_di2012}. However, if the laser counter-propagates with the electron beam, it leads to an enhancement as $\eta\approx 2\gamma E_L/E_{Sch}$, where $\gamma$ is the relativistic Lorentz factor of electron and $E_L$ is the polarized laser field. This colliding configuration can not only lower down the threshold of QED cascade from seed electrons\cite{grismayer2016laser}, but also facilitate the generation of $\gamma$-ray explosion\cite{nerush2011laser,gong2016high} and pair plasma\cite{bell2008possibility,zhu2016dense,jirka2016electron,chang2015generation}. \begin{figure*}[tbp] \includegraphics[keepaspectratio=true,height=60mm]{fig1.jpg} \caption{Scheme of the ultra-brilliant GeV gamma-ray source with helical structure. (a) and (b) show light being reflected before and after respectively.} \label{fig_schematic} \end{figure*} To exploit the counter-propagating configuration, in this letter, a CP femtosecond pulse was irradiated on a compound target (in Fig.\ref{fig_schematic}) consisted of a near-critical-density (NCD) plasma slab and a solid foil. Here the solid foil plays the role as a plasma mirror\cite{phuoc2012all,chen2013mev,powers2014quasi,sarri2014ultrahigh,khrennikov2015tunable} to spontaneously reflect the driven light to trigger the subsequent Compton backscattering. Generally, when a CP pulse of 10$^{19-21}$W/cm$^{2}$ propagates in the NCD target, the ionized electrons can be transversely expelled from central area to form a plasma channel\cite{pukhov1999channel,pukhov2002strong}. Some injected electron can experience a direct laser acceleration process and a collimated energetic electron bunch can be produced when its oscillation frequency in the channel field is close to the light frequency witnessed by the electron\cite{liu2013generating,hu2015dense,arefiev2012parametric}. However, under the higher intensity of $\sim$10$^{23}$W/cm$^{2}$, the injected electrons are mostly expelled from the central region and a hollow channel is merely filled with laser radiation\cite{ji2014radiation}. More interestingly, a great amount of electrons will be trapped back into the channel if radiation reaction (RR) is taken into account\cite{ji2014radiation,RR2016Chang}, where transverse ponderomotive force is properly balanced by the radiation recoil. 
It should be noted that the interaction between the laser and the NCD plasma is very complicated: the filamentation instability\cite{honda2000collective}, the hosing instability\cite{huang2017relativistic} or non-optimal laser-plasma matching\cite{mourou2002design} can disrupt the laser propagation and the shape of the channel. Here a relatively large spot radius and a small plasma density are adopted to avoid these detrimental influences and guarantee a stable channel. To understand the underlying mechanism of the RR impact on this scheme, a single-electron model is utilized to describe the interaction with the laser transverse field $E_L$ and the self-generated fields in the plasma channel. Based on previous work\cite{pukhov2002strong,liu2015quasimonoenergetic,hu2015dense}, the self-generated fields in the channel include the radial electrostatic field $\mathbf{E}_{Sr}=k_Er\hat{e}_r$, the longitudinal electric field $\mathbf{E}_{S\parallel}$ and the quasistatic azimuthal magnetic field $\mathbf{B}_{S\theta}=-k_Br\hat{e}_\theta$, where $k_E$ and $k_B$ can be regarded as constants related to the plasma density. The time derivative of the ponderomotive phase $\psi$ can be written as \begin{equation} \begin{aligned}\label{eq1} \frac{d\psi}{dt}=\omega_{\beta}-\omega_L=\sqrt{\frac{e}{\gamma m_e}(v_\parallel \langle k_B\rangle+\langle k_E\rangle)}-(1-v_\parallel/v_{ph})\omega_0. \end{aligned} \end{equation} Here $\omega_{\beta}=\sqrt{e(v_\parallel k_B+k_E)/(\gamma m_e)}$ is the electron betatron frequency and $v_\parallel$ ($v_\perp$) the electron longitudinal (transverse) velocity. $\omega_L=(1-v_\parallel/v_{ph})\omega_0$ is the Doppler-shifted laser frequency witnessed by the electron, where $\omega_0$ is the laser frequency and $v_{ph}=c/\sqrt{1-\omega_p^2/(\gamma\omega_0^2)}$ is the laser phase velocity\cite{decker1995group}. $\omega_p$ is the plasma frequency. Here $\psi$ is the relative phase between the electron rotation and the periodic laser field. The time derivative of the electron Lorentz factor is expressed as \begin{equation} \begin{aligned}\label{eq2} \frac{d\gamma}{dt}=\frac{-e\mathbf{E}\cdot\mathbf{v}-\mathbf{f_{rad}}\cdot\mathbf{v}}{m_ec^2}=-\frac{e( v_\perp E_L \cos\psi+v_\parallel \langle E_\parallel\rangle)}{m_ec^2}-\epsilon_{rad}\omega_0\beta^2a_s^2\eta^2G(\eta). \end{aligned} \end{equation} Here $E_L$ is the light electric field amplitude. Since the stochasticity of photon emission is difficult to capture in a simple formula, the discontinuous nature of the emission is neglected in the single-electron model and the quantum-corrected RR force $\textbf{f}_{rad}\approx-G(\eta)\epsilon_{rad}m_ec\omega_0\vec{\beta}a_{s}^2\eta^2$ is used in Eq.(\ref{eq2}) to qualitatively analyze the RR influence, where $G(\eta)\approx(1+12\eta+31\eta^2+3.7\eta^3)^{-4/9}$ is the quantum weakening factor\cite{kirk2009pair}. The impact of the discrete stochasticity of RR is beyond the scope of this manuscript and is left for future work. $\epsilon_{rad}=4\pi r_e/3\lambda_0$ is a dimensionless ratio, where $r_e=e^2/m_ec^2\approx2.8\times10^{-15}m$ is the classical electron radius and $\lambda_0$ is the laser wavelength. $\vec{\beta}=\vec{v}/c$ is the normalized electron velocity and $a_{s}=eE_{Sch}/m_e\omega_0c$ is the normalized Schwinger field. The parameters in the above equations depend on time, generally in a complicated way, so their average values, denoted by $\langle\ \rangle$, are used. 
From Eqs.(1)-(2), it can be found that the phase space ($\psi$,$\gamma$) has a fixed point\cite{jordan2007nonlinear,hirsch2012differential} at ($\psi_0$,$\gamma_0$) = ($\cos^{-1}\frac{-\epsilon_{rad}\omega_0\beta^2a_s^2\eta^2G(\eta)m_ec^2-e v_\parallel \langle E_\parallel\rangle}{e v_\perp E_L},\frac{e(v_\parallel \langle k_B\rangle+\langle k_E\rangle)}{m_e(1-v_\parallel/v_{ph})^2\omega_0^2}$). To determine the system dynamic property from Eqs.(1)-(2) in ($\psi$,$\gamma$) space, the perturbation expansion nearby ($\psi_0$,$\gamma_0$) of Eqs.(1)-(2) was made and quadratic terms were dropped to approach the characteristic Jacobian matrix $\mathbf{Ja}$\cite{jordan2007nonlinear,hirsch2012differential}: \begin{equation} \begin{aligned} \mathbf{Ja}\approx\begin{pmatrix} 0 & -\frac{1}{2}\sqrt{\frac{e}{\gamma^3m_e}(v_\parallel \langle k_B\rangle+\langle k_E\rangle)} \\ ev_\perp E_Lsin\psi & -\epsilon_{rad}m_ec^2\omega_0\beta^2a_{s}^2\frac{\partial G(\eta)\eta^2}{\partial\gamma} \end{pmatrix}_{\psi_0,\gamma_0}. \end{aligned} \label{eq3} \end{equation} Without RR effect, the trace and determinant of Jacobian matrix are tr(Ja)$=$0 and det(Ja)$>$0 when the right lower RR term is canceled, which manifests that ($\psi_0$,$\gamma_0$) is a center without any source or sink property\cite{jordan2007nonlinear,hirsch2012differential}. On the contrary, with RR effect included, at fixed point tr(Ja)$<$0 and det(Ja)$>$0 indicates that its behaviour converts from center to spiral sink attractor\cite{gonoskov2014anomalous,ESIRKEPOV20152044,gong2016radiation,kirk2016radiative}. The sink attractor emerging illustrates a large fraction of the radiation trapped electrons tends to possess the same relative phase $\psi_0$ with respect to laser electric field and the helical density structure is an intrinsic rotary manner of the electric field of CP laser. Due to electron moving in the same direction as the pulse, the electric field $E_L$ counteracts the force from laser magnetic field $B_L$ leading to $\eta\approx\gamma|\mathbf{E}_L+\mathbf{v}\times\mathbf{B}_L|/E_{Sch}\approx0$ and tr(Ja)$\sim$0. Notwithstanding, the strong self-generated magnetic field $B_{s\theta}\approx n_eR/(2\varepsilon_0c)$ (here $\varepsilon_0$ the permittivity of vacuum, $n_e$ the RR trapped electron density and $R$ the channel radius) approaching the order of driven laser field\cite{Stark2016PRL} gives $\eta\approx\gamma|\mathbf{E}_L+\mathbf{v}\times(\mathbf{B}_L+\mathbf{B}_{s\theta})|/E_{Sch}\approx\frac{\gamma B_{s\theta}}{E_{Sch}}$, which results in tr(Ja)$\approx-2\epsilon_{rad}\beta^2e^2B_{s\theta}^2/m_e\omega_0<$0 and enables the attractor effect on achieving such a helical electron bunch (HEB). The nearby electrons are attracted to possess the identical Lorentz factor $\gamma_0$=$e(v_\parallel \langle k_B\rangle+\langle k_E\rangle)/m_e(1-v_\parallel/v_{ph})^2\omega_0^2$. The total angular momentum (AM) along the longitudinal x-axis, i.e. $L=yp_z-zp_y$, acquired by the HEB can also be estimated as \begin{equation} {\centering \ L \approx -\int\Sigma_{i}er_\perp E_Lcos\psi_idt \ \ \ i=1,2,3... } \label{eq4} \end{equation} here r$_{\perp}$ is the electron transverse radius and the index i refers to the i-th electron. From Eq.(4) we can see that the laser could transfer its spin angular momentum (SAM) to HEB only when most of electrons possess the same ponderomotive phase $\psi_i$, otherwise ensemble average leads to $\sum_{i}cos\psi_i\approx$0. 
Therefore, coupling effects among RR trapping, self-generated magnetic field and spiral attractor in phase space, enhance the net AM gain and realize the HEB. Eventually the discrete photon emission\cite{ritus1985quantum,neitz2013stochasticity,blackburn2014quantum} is triggered through the inverse Compton scattering (ICS) between the HEB and reflected light, where prolific high-energy photons inheriting a large fraction of electrons' energy and AM are generated. \begin{figure}[tbp] \includegraphics[keepaspectratio=true,height=80mm]{fig2.jpg} \caption{Distribution of electron density in $\psi$-$\gamma$ space at t=50T$_0$ with RR (a) and without RR (b), respectively. (c) Normalized amplitude of $\mathbf{B}_{s\theta}$ averaged over the channel in the plane z=0 at time t=30,45,60 T$_0$, where solid (dash) line denotes the circumstance with (without) RR. (d) presents the number of the electrons inside the channel and total AM of electrons in x direction as a function of the interaction time $t$, where solid (dash) corresponds to the case with (without) RR.} \label{fig_attractor} \end{figure} \subsection{3. Particle-in-cell (PIC) simulation results} The feasibility and robustness of this scheme are demonstrated by using the self-consistent three dimension PIC code EPOCH\cite{arber2015contemporary}. A Monte Carlo probabilistic model\cite{duclous2010monte,ridgers2014modelling} has been successfully implemented, which is based on QED corrected synchrotron cross sections and coupled with the subsequent reduction of the electron momentum. Each particle is assigned an optical depth ($\tau$) at which it emits according to $P=1-e^{-\tau}$, where $P\in$[0,1] is chosen at random to consider the quantum correction in the emission processes as well as the straggling. The rates of photon production, $d\tau_\gamma/dt=(\sqrt{3}\alpha_fc\eta)/(\lambda_c\gamma)\int_{0}^{\eta/2}d\chi F(\eta,\chi)/\chi$, are then solved until the optical depth is reached, when the emission event occurs\cite{duclous2010monte}. Here, $\alpha_{fc}$ is the fine structure constant, $\lambda_c=\hbar/(m_ec)\approx3.9\times10^{-13}m$ is the Compton wavelength and $F(\eta,\chi)$ is the quantum synchrotron spectrum\cite{duclous2010monte}. The incident 1.2$\times$10$^{23}$W/cm$^{2}$ CP pulse propagates along X direction with a profile of $a$=$a_0e^{-(t-t_0)^4/\tau_0^4}e^{-(y^2+z^2)/r_0^2}sin(\omega_0t)$, where $\tau_0$=$5T_0$ denotes the intensity with a full width at half maximum (FWHM) of 25.6fs (T$_0$$\approx$3.3fs is the laser period) and $a_0$=$eE_L/m_e\omega_0c$$\approx$$300$ is the normalized amplitude of the laser field. $r_0$=$5\lambda_0$ is the spot size ($\lambda_0$=1.0$\mu m$). The simulation box is 80$\lambda_0 \times$ 40$\lambda_0 \times$ 40$\lambda_0$ in X $\times$ Y $\times$ Z direction, which has been uniformly divided into 3200 $\times$ 800 $\times$ 800 cells. A hydrogen slab with initial density of $n_e=2n_c$ locates between 10$\lambda_0$ to 60$\lambda_0$ and aluminum foil of $n_e=700n_c$ is placed from 60$\lambda_0$ to 80$\lambda_0$, where $n_c=m_e\omega_0^2/4\pi e^2$ is critical density\cite{gibbon2004short}. The hydrogen slab and aluminum foil contain 4 and 16 macroparticles per cell (for both species), respectively. For reference, there is no obvious difference in our results when we double the number of macroparticle per cell. 
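To make the emission algorithm described above concrete, the following sketch (ours, not the EPOCH implementation) shows the optical-depth bookkeeping for a single macro-particle: a target depth $\tau_{em}=-\ln(1-P)$ is drawn with $P$ uniform in $[0,1]$, the rate $d\tau_\gamma/dt$ is accumulated along the trajectory, and an emission event is registered when the accumulated depth reaches the target. The rate function and the local values of $\eta$ and $\gamma$ are placeholders for the full quantum synchrotron expression and the PIC field interpolation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def dtau_dt(eta):
    # Placeholder photon-production rate; EPOCH evaluates the full quantum
    # synchrotron integral d(tau_gamma)/dt ~ alpha_f c eta / (lambda_c gamma) * (...)
    return 2.0e-4 * eta                       # illustrative scaling only

def draw_target_depth():
    # Optical depth at which the next emission occurs, from P = 1 - exp(-tau)
    return -np.log(1.0 - rng.uniform())

dt, n_steps = 1.0, 200000                     # arbitrary time units
tau_target, tau_acc = draw_target_depth(), 0.0
emission_times = []
for step in range(n_steps):
    eta = 0.2                                 # placeholder for the local value of eta
    tau_acc += dtau_dt(eta) * dt              # accumulate optical depth along the orbit
    if tau_acc >= tau_target:                 # emission event: emit a photon, reset depth
        emission_times.append(step * dt)
        tau_target, tau_acc = draw_target_depth(), 0.0

print(len(emission_times), "emission events")
\end{verbatim}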
\begin{figure*}[tbp] \includegraphics[keepaspectratio=true,height=100mm]{fig_insert.png} \caption{(a) and (b) correspond to the distributions of electron density $n_e$ for the case without and with RR, where the absolute value of laser electric field $|E_y|$ is also shown in grey with a transparency of 60\%. The distributions of longitudinal field $E_x$ generated in the plasma channel are shown in (c)(d) as well.} \label{fig_insert} \end{figure*} The electron density distributions in $\gamma-\psi$ space at t=50T$_0$ for the cases with and without RR are presented in Fig.\ref{fig_attractor}(a) and (b). The Lorentz factor at the fixed point obtained from Eqs.(1)-(2) is $\gamma_0=\frac{e(v_\parallel\langle k_B\rangle+\langle k_E\rangle)}{m_e(1-v_{\parallel}/v_{ph})^2\omega_0^2}\approx\frac{(v_\parallel/c)(n_e/n_c)}{2[1-v_\parallel/c\sqrt{1-n_e/(a_0n_c)}]^2}$, where $\langle k_B\rangle\approx\frac{en_e}{2\epsilon_0}$, $\omega_0=\sqrt{\frac{n_ce^2}{\epsilon_0m_e}}$ and $v_{ph}\simeq\frac{c}{\sqrt{1-n_e/(a_0n_c)}}$ are taken into account and $\langle k_E\rangle$ is neglected since the transverse static electric field is relatively weak compared with the self-generated magnetic field. Substituting $n_e=2n_c$, $a_0=300$ and $v_\parallel=0.9863c$ (from simulation parameters and results) into the above equation leads to $\gamma_0=3416$. Considering $\epsilon_{rad}=1.18\times10^{-8}(\frac{1\mu m}{\lambda_0})$, $\beta\approx1$, $a_s\approx4.1\times10^5$, $\eta\approx\frac{\gamma_0B_{s\theta}}{a_s}\approx0.165$, $G(\eta)\approx1$, $\langle E_\parallel\rangle\approx0.015E_L$ and $v_\perp=0.165c$, the relative phase is deduced as $\psi_0=\arccos\frac{-\epsilon_{rad}\omega_0\beta^2a_s^2\eta^2G(\eta)m_ec^2-ev_\parallel\langle E_\parallel\rangle}{ev_\perp E_L}\approx2.24$. When the RR force is switched on, most electrons possess a relative phase $\psi$=2.3 in Fig.\ref{fig_attractor}(b), which is in good agreement with the theoretically derived attractor point ($\psi_0, \gamma_0$)=(2.24, 3416) (see the numerical cross-check below). Since neither RR trapping nor the emergence of the attractor occurs without RR, the electron number density in Fig.\ref{fig_attractor}(a) is relatively small compared to the RR case and does not show the attractor-modulated distribution. The self-generated azimuthal magnetic field B$_{s\theta}$ averaged over the channel cross plane z=0 is plotted in Fig.\ref{fig_attractor}(c) with maximum $\approx$0.6MT (normalized value equals 60$m_e\omega_0/e\approx$0.2B$_L$, where B$_L$ is the laser magnetic amplitude) at t=65T$_0$, which demonstrates that the RR recoil enhances the $B_{s\theta}$ generation owing to the larger trapped electron current along the longitudinal axis. This kind of self-generated magnetic field in the channel can not only enhance the gamma photon emission\cite{Stark2016PRL}, but also help accelerate ions at the rear surface of the target\cite{bulanov2015helium}, which has already been verified experimentally at lower laser intensity with a shock-compressed gas target\cite{helle2016laser}. The temporal evolution of the electron number inside the plasma channel and their total AM $L=\sum_i y_ip_{zi}-z_ip_{yi}$ are recorded in Fig.\ref{fig_attractor}(d) for both cases. It is found that RR not only boosts the electron accumulation inside the channel but also facilitates the AM transfer to the HEB, which is in good agreement with the theoretical prediction of Eq.(4). The RR force prevents electrons from being expelled transversely, resulting in an increase of the trapped electron charge from 172 nano-Coulombs (nC) to 291 nC at t=65T$_0$. 
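As a back-of-the-envelope cross-check of these numbers (ours, not part of the simulation), the fixed-point estimate can be evaluated directly from the quoted parameters; the same few lines also evaluate the counter-propagating parameter $\eta\approx2\gamma E_L/E_{Sch}=2\gamma a_0/a_s$ relevant for the backscattering stage discussed below.
\begin{verbatim}
from math import sqrt

a0 = 300.0                    # normalized laser amplitude
ne_over_nc = 2.0              # NCD slab density in units of n_c
v_over_c = 0.9863             # longitudinal velocity from the simulation
a_s = 4.1e5                   # normalized Schwinger field e*E_Sch/(m_e*omega_0*c)

# gamma_0 = (v/c)(n_e/n_c) / (2 [1 - (v/c) sqrt(1 - n_e/(a0 n_c))]^2)
gamma0 = (v_over_c * ne_over_nc) / (2.0 * (1.0 - v_over_c * sqrt(1.0 - ne_over_nc / a0)) ** 2)
print(round(gamma0))             # ~3416, as quoted in the text

# counter-propagating (backscattering) stage: eta ~ 2 gamma E_L/E_Sch = 2 gamma a0/a_s
print(2.0 * gamma0 * a0 / a_s)   # ~5, i.e. well inside the quantum regime eta >~ 1
\end{verbatim}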
The enhancement of the electron current strengthens B$_{s\theta}$, which gives a positive feedback on the emergence of the spiral attractor in phase space and effectively favors the transfer of angular momentum from the laser's SAM to the HEB's AM. The electron density distributions for the cases with and without RR are shown in Fig.3(a) and (b). Here the emergence of the helical spatial structure depends on the RR impact, consistent with the attractor facilitating an electron density modulation at the same frequency as the laser electric field in Eq.(\ref{eq3}). When RR is switched off, a ball of electrons is injected into the tail of the plasma channel and can be accelerated by the longitudinal electric field $E_x$. The distributions of $E_x$ are plotted in Fig.\ref{fig_insert}(c) and (d) for the cases with and without RR. Since the number of electrons in the channel is much higher with RR than without, the shielding effect weakens the accelerating field in the RR case compared to the no-RR one. \begin{figure*}[tbp] \includegraphics[keepaspectratio=true,height=65mm]{fig3.jpg} \caption{(a) Volume distribution of the photon energy density, where only photons with energy higher than 10MeV are recorded to reduce the computational cost. (b)(c) Final photon angular-spectral distributions for energies higher than 1GeV and 100MeV, respectively.} \label{fig_photon} \end{figure*} Since the ponderomotive force of a CP pulse avoids the longitudinal oscillation at twice the optical frequency\cite{gibbon2004short}, the plasma in the second layer is not violently heated and the driving light is substantially reflected. In the colliding configuration, the parameter $\eta\approx2\gamma E_L/E_{Sch}\gtrsim1$ indicates that discrete incoherent photon emission\cite{QED_domian_di2012} is a more appropriate description than the coherent electromagnetic wave radiation derived from the Li\'{e}nard-Wiechert retarded potential\cite{jackson1999classical}. The volume snapshot of the photon energy density at t=70T$_0$ is exhibited in Fig.\ref{fig_photon}(a), where the photon beam inherits the helical spatial structure and the transverse size of the source is about 1.5$\mu$m. The gamma-ray flash duration is $\sim$16fs, roughly half of the laser duration, because the driving pulse and the trapped electrons completely overlap inside the channel. The angular-spectral distribution calculated by accumulating the forward photons at t=70T$_0$ over the entire simulation region is shown in Figs.\ref{fig_photon}(b) and (c). Most of the energetic photons are highly collimated and predominantly located within an emission polar angle $\phi\leq$15$^\circ$ ($\phi\leq$30$^\circ$) for energies higher than 1GeV (100MeV). In a 0.1\% bandwidth (BW) around 1GeV we have 1.05$\times$10$^8$ photons, implying a brightness of 1.7$\times$10$^{23}$ photons/s/mm$^2$/mrad$^2$/0.1\%BW for the GeV gamma-ray emission. The corresponding source brilliances at 100 MeV and 10 MeV are 2.3$\times$10$^{24}$ and 1.5$\times$10$^{25}$ photons/s/mm$^2$/mrad$^2$/0.1\%BW, respectively. The comparison among different photon sources is illustrated in Fig.\ref{fig_brill}. Our ICS scheme predominantly aims at high brilliance around GeV. The dipole wave field can achieve the brightest gamma photon emission with 9$\times$10$^{24}$ photons/s/mm$^2$/mrad$^2$/0.1\%BW at GeV\cite{gonoskov2017ultrabright}, but the dipole wave needs to be realized through symmetrically colliding multiple pulses, which is still a challenge in experiment. 
In contrast, our scheme, in which a single laser pulse is shot onto a double-layer target, is the most efficient of these methods\cite{Brady2012PRL,Ridgers2012PRL,Nakamura2012PRL,Stark2016PRL} for generating a brilliant GeV gamma-ray source and is more experimentally accessible. \begin{figure*}[tbp] \includegraphics[keepaspectratio=true,height=60mm]{fig_brill.png} \caption{Comparison of the peak brilliance of our proposed ICS source with other existing photon sources, e.g., synchrotron, XFEL and dipole-cascade\cite{gonoskov2017ultrabright} sources.} \label{fig_brill} \end{figure*} \subsection{4. Discussion and conclusion} In Fig.\ref{fig_discussion}(a), the photon spectrum, which decays exponentially and covers the energy range from 1MeV to several GeV with a cutoff energy of 2.9GeV, is presented together with the electron spectra before (t=60T$_0$) and after (t=70T$_0$) the ICS process. The nonlinear QED regime predicts that most photons are emitted with an energy $h\nu_{ph}\approx$0.44$\eta\gamma m_ec^2$\cite{bell2008possibility,kirk2009pair}, which carries a large fraction of the electron's kinetic energy. The number of high-energy electrons is drastically curtailed, with the cutoff energy declining from 3.9GeV to 2.5GeV, and simultaneously most of their energy is converted to gamma photons. The temporal evolutions of the particle energies are illustrated in Fig.\ref{fig_discussion}(b), where 14.5\%, 4.2\% and 0.108\% of the total laser energy is transferred into gamma-ray photons with energies above 1MeV, 100MeV and 1GeV, respectively. For energies above 100MeV and 1GeV, the photons are emitted almost exclusively by the ICS process during 65T$_0<$t$<$70T$_0$. Based on the power radiated by a single electron, $P_{rad}=(4\pi\alpha_fm_ec^3/3\lambda_c)\eta^2G(\eta)$\cite{duclous2010monte,ridgers2014modelling}, the instantaneous radiation power of this regime can be estimated as \begin{equation} {\centering P_{rad}\approx\left\{ \begin{aligned} N_e\frac{4\pi\alpha_fm_ec^3}{3\lambda_c}(\frac{\gamma B_{s\theta}}{E_{Sch}})^2G(\frac{\gamma B_{s\theta}}{E_{Sch}}) &\ \ \ & t<t_{ref}, \\ N_e\frac{4\pi\alpha_fm_ec^3}{3\lambda_c}(\frac{2\gamma E_L}{E_{Sch}})^2G(\frac{2\gamma E_L}{E_{Sch}}) &\ \ \ & t\geq t_{ref}. \end{aligned} \right. } \label{eq5} \end{equation} Here t$_{ref}$=65T$_0$ is the time at which the light is reflected, and $\eta$ is approximated by $\gamma B_{s\theta}/E_{Sch}$ at $t<t_{ref}$ and by $2\gamma E_L/E_{Sch}$ at $t\geq t_{ref}$. The length of the NCD plasma, $l=50\mu m$, is much shorter than the laser depletion length $L_{depletion}\approx c\tau_0a_0n_c/n_e$=750$\mu m$\cite{pukhov2002strong,lu2007generating}; as a result, a large part of the laser energy is reflected and backscatters off the electron bunch. In addition, when the laser propagates in the NCD plasma, both its intensity and its spot size change due to self-focusing, self-modulation, etc. The radius of the self-generated channel is defined by the balance of the ponderomotive and charge-separation fields. Here, we choose the laser spot size to be almost the same as the radius of this channel, so the laser transverse size does not change significantly during the propagation in the near-critical plasma and we assume it to be constant in the estimate of Eq.(5). Eq.(5) predicts radiation powers of P$_{rad}\approx$0.63PW (t$<$t$_{ref}$) and P$_{rad}\approx$19.2PW (t$\geq$t$_{ref}$), which qualitatively agree with the simulation results in Fig.\ref{fig_discussion}(b), implying that the nonlinear QED ICS-based gamma-ray source has a power of the same order as the incident infrared laser. 
\begin{figure}[tbp] \includegraphics[keepaspectratio=true,height=80mm]{fig4.jpg} \caption{(a) The energy spectra of electrons at t=60,70T$_0$ and photons at t=70T$_0$. (b) The laser energy conversion to the electrons (black), protons (green) and gamma-ray photons ($>$1MeV in solid blue, $>$100MeV in dash blue and $>$1GeV in solid red). The photons with energy greater than 1GeV, rendered in red, correspond to the right (red) axis. The orange solid line plots the theoretical radiation prediction from eq.(4). (c) Temporal evolution of the total AM of electrons, protons and photons ($>$1MeV). (d) The laser energy conversion to $\gamma$-photons with different plasma densities. Here the value for $\gamma_{ph}>$1GeV is multiplied by 20 and the horizontal (density) axis is on a logarithmic scale.} \label{fig_discussion} \end{figure} The transfer of axial AM from the laser to the particles is plotted in Fig.\ref{fig_discussion}(c). The oscillation of the electron and proton AM is due to the charged particles interacting with the laser electromagnetic field. The opposite signs of their electric charges cause opposite oscillation directions for electrons and protons. Since the spiral attractor results in a fixed relative phase between the electron velocity and the laser electric field, the overall AM of the electrons rises gradually before backscattering with the reflected pulse. However, photons do not interact with the laser field and their AM grows only moderately before the ICS. The photons are predominantly emitted from electrons modulated by the spiral attractor during 65T$_0<$t$<$70T$_0$, so that a sharp photon AM increase and a pronounced electron AM drop occur during the ICS process. In terms of quantum mechanics, the angular momentum carried by a photon of a CP laser is $\sigma\hbar$ with $\sigma=\pm1$ for the spin degree of freedom. The total angular momentum absorbed from the laser is approximately expressed as $L_{l}=\delta\frac{W_{l}}{\hbar\omega_0}\sigma\hbar$=1.70$\times$10$^{-12}\delta$ kg$\ast$m$^2$/s, where $W_{l}$ is the total laser energy and $\delta$ is the absorption fraction. During the ICS process, AM is transferred more efficiently from the electrons to the gamma-rays, and eventually the AM of the photons reaches 8.2$\times$10$^{-14}$ kg$\ast$m$^2$/s, 4.8\% of the total laser SAM. In addition, a parameter scan has been carried out in Fig.\ref{fig_discussion}(d) to investigate the energy conversion efficiency for a wide density range 0.2-20n$_c$ of the first-layer plasma with a thickness of 50$\mu$m, and we find that there is an optimal condition $n_e\sim n_c$ for realizing the twisted GeV gamma-ray emission. For relatively rarefied plasma (n$_e$=0.2n$_c$) too few helical electrons are trapped, so the gamma-ray production is deficient, while for relatively dense plasma (n$_e\gtrsim$10n$_c$) the driving laser tends to deplete most of its energy in the first slab, leaving no remnant to trigger the ICS process. In conclusion, we have shown how an ultra-intense and ultra-bright GeV gamma-ray flash can be achieved by irradiating a prospective 1.2$\times$10$^{23}$W/cm$^{2}$ laser on a compound plasma target in the nonlinear QED regime. The initial energetic HEB results from the coupling effects among RR trapping, the self-generated magnetic field and the emergence of the spiral attractor in $\gamma$-$\psi$ space. The helical gamma-ray flash inherits a considerable fraction of the AM and energy of the parent electron beam through Compton backscattering between the HEB and the reflected driving pulse. 
The final photon source, with an unprecedented power of 20 PW and brightness of 1.7$\times$10$^{23}$ photons/s/mm$^2$/mrad$^2$/0.1\%BW (at 1 GeV), might enable significant developments in applications in particle physics and laboratory astrophysics. Our scheme is also feasible in the laboratory, where cluster jets\cite{fukuda2009energy} or nano-tube foams\cite{ma2007directly} can be utilized for NCD plasma generation and a solid foil acts as a plasma mirror to reflect the laser. Such parameters of the gamma-ray source will be achievable with the next generation of multi-PW laser facilities. \section*{Acknowledgements} The work has been supported by the National Basic Research Program of China (Grant No.2013CBA01502), NSFC (Grant Nos.11535001) and National Grand Instrument Project (2012YQ030142). The PIC code Epoch was in part funded by the UK EPSRC grants EP/G054950/1, EP/G056803/1, EP/G055165/1 and EP/M022463/1. J.Q. Yu thanks the projects (2016M600007, 2017T100009) funded by the China Postdoctoral Science Foundation. Our simulations were carried out at the Max Planck Computing and Data Facility and the Shanghai Super Computation Center. The author Z.Gong acknowledges fruitful discussions with Prof. S.V. Bulanov and H.X. Chang.
\section*{Abstract} {\bf We propose using smeared boundary states $e^{-\tau H}|\cal B\rangle$ as variational approximations to the ground state of a conformal field theory deformed by relevant bulk operators. This is motivated by recent studies of quantum quenches in CFTs and of the entanglement spectrum in massive theories. It gives a simple criterion for choosing which boundary state should correspond to which combination of bulk operators, and leads to a rudimentary phase diagram of the theory in the vicinity of the RG fixed point corresponding to the CFT, as well as rigorous upper bounds on the universal amplitude of the free energy. In the case of the 2d minimal models explicit formulae are available. As a side result we show that the matrix elements of bulk operators between smeared Ishibashi states are simply given by the fusion rules of the CFT. } \vspace{10pt} \noindent\rule{\textwidth}{1pt} \tableofcontents\thispagestyle{fancy} \noindent\rule{\textwidth}{1pt} \vspace{10pt} \section{Introduction}\label{sec1} Conformal field theories (CFTs) are supposed to correspond to the non-trivial renormalization group (RG) fixed points of relativistic quantum field theories (QFTs). Such theories typically contain a number of scaling operators of dimension $\Delta<d$ (where $d$ is the space-time dimension), which, if added to the action, are relevant and drive the theory to what is, generically, a trivial fixed point. The points along this trajectory then correspond to a massive QFT. In general there is a multiplicity of such basins of attraction of the RG flows, but enumerating them and determining which combinations of relevant operators lead to which basins, and therefore to what kind of massive QFT, requires non-perturbative methods. This problem is equivalent to mapping out the phase diagram in the vicinity of the critical point corresponding to the CFT. Another way of characterizing these massive theories is through the analysis of the possible boundary states of the CFT. Imagine the scenario in which the relevant operators are switched on in only a half-space, say $x_0<0$. This will then appear as some boundary condition on the CFT in $x_0>0$. However, the boundary conditions themselves undergo RG flows, with fixed points corresponding to so-called conformal boundary conditions. Therefore on scales $\sim M^{-1}$, where $M$ is the mass scale of the perturbed theory, the correlations near the boundary should be those of a conformal boundary condition, deformed by \em irrelevant \em boundary operators. A similar question is raised through recent work on the spectrum of the entanglement hamiltonian in massive QFTs \cite{LR}. If the theory is defined in ${\bf R}^D$ and is in its ground state, and we study the entanglement between the degrees of freedom in the half-space $A: x_1>0$ and its complement, then the entanglement, or modular, hamiltonian $K_A=-(1/2\pi)\log\rho_A$, where $\rho_A$ is the reduced density matrix of $A$, takes the form \cite{HSK1,HSK2} \be K_A=\int_{x_1>0}x_1T_{00}(x)d^Dx\,, \ee which is nothing but the generator of rotations in euclidean space, or of boosts in lorentzian signature. In 1+1 dimensions we may consider a conformal transformation $z=x_1+ix_0=\epsilon e^w$ which sends the euclidean $z$-plane, punctured at the origin by a disc of radius $\epsilon$ representing the UV cutoff, to a semi-infinite cylinder of circumference $2\pi$. $K_A$ is then simply the generator of translations around this cylinder. 
However, if the QFT corresponds to a perturbed CFT, it is not conformally invariant, but rather the couplings transform as \be \lambda\to\lambda\,\epsilon^{2-\Delta}e^{(2-\Delta){\rm Re}\,w}\,, \ee where $\Delta<2$ is the scaling dimension of the perturbing operator. Thus the dimensionless coupling $g=\lambda\epsilon^{2-\Delta}$ is effectively switched on over a length scale $O(1)$ near ${\rm Re}\,w\sim\log(1/g)$. If we are interested in the low-lying spectrum of $K_A$, corresponding to the R\'enyi entropies ${\rm Tr}\,\rho_A^n$ with $n\gg1$, the effective circumference of the cylinder is $2\pi n$ and we are then in a similar situation to the above, where the massive theory for ${\rm Re}\,w>\log(1/g)$ acts as an effective boundary condition on the CFT in ${\rm Re}\,w<\log(1/g)$. As concluded in \cite{LR}, the low-lying spectrum of $K_A$ should therefore be that of the (boundary) CFT, with an appropriate boundary condition depending on the bulk perturbation. The same question arises in the context of quantum quenches \cite{CCQQ}. In this case we are interested in the real time evolution of an initial state $|\Psi_0\rangle$ under a hamiltonian $H$ of which it is not an eigenstate. An example is the case where $H=H_{CFT}$ and $|\Psi_0\rangle$ is the ground state of the massive perturbed CFT. This is a difficult problem, and in \cite{CCQQ,JCQQ} the step was taken of replacing this ground state by a conformal boundary state perturbed by irrelevant operators.\footnote{In \cite{CCQQ} only the smeared states of the form (\ref{smeared}) were considered, which happen to lead to subsystem thermalization, while in \cite{JCQQ} it was argued that more general states should lead to a generalized Gibbs ensemble.} This allows the explicit computation of the imaginary time evolution and the continuation to real time, which would be very difficult for the exact ground state of the massive theory. Thus an important problem in all these cases is to determine to which conformal boundary condition a particular combination of bulk operators should correspond. For simple examples this is apparent by physical inspection. For example, the CFT corresponding to the critical Ising model has two relevant operators, coupling to the magnetic field $h$ and the deviation $t$ of the reduced temperature from its critical value. There are three stable RG fixed points at $(h=0,t\to+\infty)$ and $h\to\pm\infty$, respectively the sinks for the disordered and the two ordered phases, and corresponding to the three conformal boundary conditions when the Ising spins are respectively free and fixed, either up or down. One way to make this identification is to think of the boundary condition as defining a state $|\cal B\rangle$ when the theory is quantized on a time slice $x_0=$ constant. In that language we may regard the perturbed CFT as described by a hamiltonian operator \be \hat H=\hat H_{CFT}+\sum_j\lambda_j\int\hat\Phi_j(x)d^{d-1}x\,, \ee where the $\{\hat\Phi_j\}$ are relevant operators. We then ask which $|\cal B\rangle$, suitably deformed by boundary irrelevant operators, is closest in some sense to the ground state of $\hat H$ at strong coupling. Conformal boundary states by themselves contain no scale, and therefore cannot be good candidates for the ground state of $\hat H$. Indeed, they must have infinite energy compared to this state. In known examples in 2d (and, for example, for free theories in higher dimensions) they are also non-normalizable.
We must therefore deform them by irrelevant boundary operators in order to give them a scale. The simplest such operator is the stress tensor $\hat T_{00}$, which has scaling dimension $d$ and therefore boundary RG eigenvalue $(d-1)-d=-1$. Since its space integral is the CFT hamiltonian, including only this operator is tantamount to considering boundary states `smeared' by evolution in imaginary time: \be\label{smeared} e^{-\tau \hat H_{CFT}}|\cal B\rangle\,, \ee where $\tau>0$ is a parameter with the dimensions of length. Such states have finite energy and correlation length $\propto\tau^{-1}$, and also finite norm. Such smeared boundary states may be thought of as a continuum version of matrix product states (MPS). Indeed, a lattice discretization of the euclidean path integral, illustrated in Fig.~\ref{MPS}, suggests that such states correspond to matrices with internal dimension $\sim N^{\tau/\delta\tau}$, where $N$ is the number of states on each lattice edge and $\delta\tau$ is the time step. However, unlike discrete MPS states, the smeared boundary state (\ref{smeared}) automatically has the correct short-distance behavior of the CFT. \begin{figure}[h]\label{MPS} \centering \includegraphics[width=0.9\textwidth]{smeared} \caption{Path integral for smeared boundary state (left) and its lattice discretization (right) as a matrix product state. On the right, each vertical column of lattice sites represents a matrix. The horizontal lines represent contractions between these in the internal space, and the vertical dangling bonds label the physical degrees of freedom.} \end{figure} From this point of view it is therefore natural to regard (\ref{smeared}) as a variational \em ansatz\em, with $\tau$ and the choice of boundary state $|\cal B\rangle$ as variational parameters. In this paper we explore this idea further and show that this program can be carried through explicitly for the $A_m$ (diagonal) series of unitary minimal 2d CFTs. It should be extendable to the other non-diagonal minimal models, and in principle to other rational 2d CFTs, and indeed to higher dimensional theories if enough information is available about the CFT. More specifically, given a set of physical conformal boundary states $\{|a\rangle\}$ (whose definition is recalled in Sec.~\ref{sec2}), we take as a variational ground state \be |\{\alpha_a\},\{\tau_a\}\rangle=\sum_a\alpha_a\,e^{-\tau_a\hat H_{CFT}}|a\rangle\,, \ee and compute the variational energy per unit volume \be \lim_{L\to\infty}\frac1{L^D}\frac{\langle\{\alpha_a\},\{\tau_a\}|\hat H_{CFT}+\sum_j\lambda_j\int\hat\Phi_j(x)d^{d-1}x|\{\alpha_a\},\{\tau_a\}\rangle}{\langle\{\alpha_a\},\{\tau_a\}|\{\alpha_a\},\{\tau_a\}\rangle}\,, \ee minimizing this with respect to the $\{\alpha_a\}$ and $\{\tau_a\}$. An important general consequence of the analysis is that, in the limit $L\to\infty$, the minimizing states are always purely physical, that is, all but one of the $\alpha_a$ vanish. This is because both $H_{CFT}$ and the perturbing operators turn out to be diagonal in this basis. This is reassuring, as in principle the minimizers could be non-physical linear combinations of these, for example the Ishibashi states in 1+1 dimensions.
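As a purely illustrative aside: since, as just noted, both $\hat H_{CFT}$ and the perturbing operators are diagonal in the physical basis, the minimization reduces to a one-dimensional search over $\tau_a$ for each candidate boundary state $a$, followed by a comparison of the resulting minima. The following Python fragment is a minimal sketch of this procedure, assuming the diagonal trial energy (\ref{Ea}) derived in Sec.~\ref{sec2}; the amplitudes, couplings and dimensions appearing in it are placeholder values for illustration, not data of any particular CFT.
\begin{verbatim}
# Sketch only: minimise the diagonal trial energy
#   E_a(tau) = D*sigma_a/(2 tau)^(D+1) + sum_j lambda_j A_a^j/(2 tau)^Delta_j
# over tau for each candidate boundary state a, then compare the minima.
# All numerical inputs below are illustrative placeholders, not CFT data.
import numpy as np
from scipy.optimize import minimize_scalar

D = 1                                    # spatial dimension (d = D + 1)
states = {"a1": (1.0, {"phi": +0.7}),    # label: (sigma_a, {op: A_a^j})
          "a2": (1.0, {"phi": -0.7})}
couplings = {"phi": 0.1}                 # lambda_j (placeholder)
dimensions = {"phi": 0.5}                # Delta_j < D + 1 (relevant)

def energy(tau, sigma, amps):
    e = D * sigma / (2.0 * tau) ** (D + 1)
    for j, lam in couplings.items():
        e += lam * amps[j] / (2.0 * tau) ** dimensions[j]
    return e

best = None
for label, (sigma, amps) in states.items():
    res = minimize_scalar(energy, bounds=(1e-3, 1e3),
                          args=(sigma, amps), method="bounded")
    if best is None or res.fun < best[2]:
        best = (label, res.x, res.fun)
print("variational ground state:", best)
\end{verbatim}
In this toy example only the state whose one-point amplitude has the opposite sign to the coupling develops a negative minimum at finite $\tau$, in line with the general discussion in Sec.~\ref{sec2}.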
Specializing now to the case of 1+1 dimensions, for the minimal models, the precise values of these diagonal matrix elements are related to the elements of the modular $S$-matrix of the CFT, and, with these in hand, it is straightforward, for a fixed set of couplings $\{\lambda_j\}$ to determine which values of $\tau_a$ and $a$ minimize the variational energy, and thus to map out a rudimentary phase diagram of the theory in the vicinity of the CFT. It then turns out that although this approach yields correct results in some aspects, for example in determining which combination of bulk couplings $\{\lambda_j\}$ best matches a given boundary state $a$, it is not capable of reproducing some of the finer details of the phase boundaries between different states $a$. In this approximation these are always first-order, and `massless' RG flows to other non-trivial CFTs are not properly accounted for. This can be seen as a limitation of the particular trial state, which could be remedied by including other operators acting on the boundary state, but at the cost of the loss of analytic tractability. However, an amusing side result of the analysis is that matrix elements of primary bulk operators $\hat\Phi_j$ between Ishibashi states $\langle\langle i|$, $|k\rangle\rangle$ (which are boundary states within a single Virasoro module) are simply proportional to the fusion rule coefficients: \be \langle\langle i|e^{-\tau H}\,\hat\Phi_j\,e^{-\tau H}|k\rangle\rangle\propto N_{ijk}\,. \ee This is a consequence of the Verlinde formula \cite{Verlinde}, and to our knowledge has not been previously observed. Although this matrix element should be proportional to the OPE coefficient $c_{ijk}$ which governs the matrix element $\langle i|\hat\Phi_j|k\rangle$ between highest weight states, it is rather surprising that the contributions of all the descendent states should conspire to give the integer-valued fusion rule coefficient. The outline of this paper is as follows. In Sec.~\ref{sec2} we set up the formalism and prove some general results. In Sec.~\ref{sec3} we apply this to the case of the diagonal minimal models, with the $A_3$ and $A_4$ cases as specific examples, and finally in Sec.~\ref{sec4} give a summary and some further remarks. After this paper was completed, I was made aware of Ref.~\cite{Kon}, in which similar ideas are explored. However that paper is based on comparing ratios of overlaps between different boundary states and numerical approximations to the exact ground state of the deformed theory, rather than the variational method adopted here. The overlap method is shown to work well for the case of the perturbed Ising model, but is computationally more intensive. \section{General formalism.}\label{sec2} As described in the Introduction, we consider a $d$ $(=D+1)$-dimensional CFT perturbed by its bulk primary operators $\{\Phi_j\}$ with coupling constants $\{\lambda_j\}$, so the hamiltonian is: \be\label{Hp} \hat H=\hat H_{CFT}+\sum_j\lambda_j\int\hat\Phi_j(x)d^{D}x\,. \ee The theory is quantized on a spatial torus of volume $L^{D}$, where $L$ is much larger than any other scale in the theory. We assume for simplicity that the $\{\Phi_j\}$ are all scalars and that they have their CFT normalization \be\label{norm} \langle\hat\Phi_j(x)\hat\Phi_j(0)\rangle_{CFT}=|x|^{-2\Delta_j}\,, \ee where $\Delta_j$ is the scaling dimension of $\Phi_j$. 
Although we are interested in relevant perturbations with $\Delta_j<d$, these will in general lead to a finite number of primitive UV divergences up to some finite order in the couplings (as for a super-renormalizable deformation of a free theory), in particular in the ground state energy which we are trying to approximate. These divergences may be subtracted by adding a finite number of counter-terms to $\hat H$ determined by the OPEs of the $\{\Phi_j\}$. We assume this has been done. For example, if $2\Delta_j\geq d$ there is a UV divergence in the ground state energy at $O(\lambda_j^2)$. This is subtracted by a term in $\hat H$ proportional to the unit operator. This does not affect the variational procedure in general. The case $2\Delta_j=d$ is special and leads to a logarithmic anomaly in the energy. This will be discussed for the 2d Ising model in Sec.~\ref{seclog}. Conformal boundary states $|\cal B\rangle$ are defined by the condition \be\label{T0k} \hat T_{0k}(x)|{\cal B}\rangle=0\,,\quad(k=1,\ldots,D)\,, \ee where $\hat T_{ij}$ is the energy-momentum tensor of the CFT. That is, they are annihilated by the momentum density operator, and so are invariant under local time reparametrizations. (For boundaries with a space-like normal there is no energy flow across the boundary.) Although in higher dimensions these states, and their classification, are poorly understood except for free or weakly coupled CFTs, in 2d much more is known \cite{JC89,PZ}. The Hilbert space is acted on by two copies $({\cal V}\otimes\overline{\cal V})$ of the Virasoro algebra, generated by \be \hat L_n=\frac L{2\pi}\int e^{2\pi nix/L}\,\hat T(x)dx\,,\quad\hat{\overline L}_n=\frac L{2\pi}\int e^{-2\pi nix/L}\,\hat{\overline T}(x)dx\,, \ee where, as usual, $T\equiv T_{zz}=-T_{00}+T_{11}-2iT_{01}$ and $\overline T\equiv T_{\bar z\bar z}=-T_{00}+T_{11}+2iT_{01}$, in euclidean signature. It is spanned by states $|i,N\rangle\otimes\overline{|i',N'\rangle}$, where $N$ labels the states of a module of $\cal V$ with highest weight state labelled by $i$, and similarly for $\overline{\cal V}$. For CFTs with central charge $c\geq1$, this is a Virasoro module, while for the minimal models with $c<1$ it is a Kac module with the null states projected out. The condition (\ref{T0k}) then corresponds to \be \big(\hat T(x)-\hat{\overline T}(x)\big)|{\cal B}\rangle=0\,. \ee In terms of the Virasoro generators this becomes \be \big(\hat L_n-\hat{\overline L}_{-n}\big)|{\cal B}\rangle=0\,, \ee whose solution is the span of the Ishibashi states \be\label{Ishi} |i\rangle\rangle=\sum_N|i,N\rangle\otimes\overline{|i,N\rangle}\,, \ee where the sum is over all the orthonormalized states in the module. However, the Ishibashi states are not physical, in the sense that, when they are chosen as boundary states on the opposite edges $x_0=\pm\tau$ of an annulus, the partition function $Z={\rm Tr}\,e^{-L\hat H'}$ evaluated in terms of the generator $\hat H'$ of translations around the annulus does not have the form of a sum over eigenstates with non-negative integer coefficients, as it must if periodic spatial boundary conditions are imposed. For the diagonal minimal $A_m$ models, the physical states which do have this property are linear combinations of the Ishibashi states \be\label{phys} |a\rangle=\sum_i\frac{S^i_a}{(S^i_0)^{1/2}}\,|i\rangle\rangle\,, \ee where $S^i_k$ is the matrix by which the Virasoro characters transform under modular transformations.
The multiplicities of the eigenstates $j$ of $\hat H'$ which do propagate are then given by the fusion rule coefficients $N^j_{ab}$. In particular the vacuum state propagates only if $a=b$, that is $N^0_{ab}=\delta_{ab}$. While similar results are available for the non-diagonal minimal models, wider results for general CFTs are not available, which is why we mainly restrict to the $A_m$ models in explicit calculations. In higher dimensions, the boundary states satisfying (\ref{T0k}) also form a linear space, and we assume that the physical states may be identified analogously. Consider the partition function in the slab ${\mathbb T}^D\times\{-\tau,\tau\}$ (where ${\mathbb T}^D$ is a $D$-dimensional torus of volume $L^D$) with boundary states $|a\rangle$, $|b\rangle$ at $x_0=\pm\tau$: \be Z_{ab}=\langle b|e^{-2\tau\hat H_{CFT}}|a\rangle\,, \ee (where $\hat H_{CFT}$ is the generator of translations in $x_0$) and, similarly to the 2d case, demand that when evaluated by quantizing in one of the spatial directions, it has the form of a trace over intermediate states whose energies all scale like $\tau^{-1}$. However this is difficult to implement since this spectrum on the torus is not related to the conformal spectrum for $d>2$. In fact, we shall need only a weaker condition: that physical states $\{a,b,\ldots\}$ should satisfy \be\label{gap} Z_{ab}/(Z_{aa}Z_{bb})^{1/2}=O\big(e^{-{\rm const.}(L/2\tau)^D}\big)\qquad\mbox{for $L/\tau\to\infty$}\,. \ee This may be understood as follows: when the boundary conditions are the same, we expect that \be Z_{aa}\sim e^{\sigma_a(L/2\tau)^D}\,, \ee where the exponent is (minus) the Casimir energy of a system between two identical plates. In this geometry this is always attractive, thus $\sigma_a>0$. It must scale as $L^D$, and, since the boundary conditions and the bulk theory are scale-invariant, also as $\tau_a^{-D}$. The quantity $-\sigma_a L^{D-1}/(2\tau)^D$ is the ground state energy of the generator of translations around one of the spatial cycles of the torus. On the other hand the exponent on the right hand side of (\ref{gap}) is the gap to the lowest-energy state in the sector with $ab$ boundary conditions. It is the interfacial energy from the point of view of $d$-dimensional classical statistical mechanics. Thus the physical boundary states may in principle be determined by diagonalizing the partition functions in the slab in the limit $L\gg\tau$. We assume that this has been done. As discussed in the introduction, we use as variational states for the ground state of the perturbed hamiltonian (\ref{Hp}), the ansatz \be |\{\alpha_a\},\{\tau_a\}\rangle=\sum_a\alpha_a\,e^{-\tau_a\hat H_{CFT}}|a\rangle\,. \ee We first discuss the inner product of these states \be \langle a|e^{-\tau_aH_{CFT}}e^{-\tau_bH_{CFT}}|b\rangle\,. \ee This is the partition function $Z_{ab}$ in a slab of width $\tau_a+\tau_b$ with boundary conditions $a,b$ on opposite faces. As discussed above, for physical boundary states in the limit $L\gg\tau_a+\tau_b$ \be\label{Zab} Z_{ab}\sim\delta_{ab}\, e^{\sigma_a(L/(2\tau_a))^D}\,. \ee The matrix elements of the unperturbed hamiltonian $\hat H_{CFT}$ are, for the same reason, diagonal in this basis as long as $L\gg\tau_{a,b}$, and may be found by differentiating (\ref{Zab}) \be \langle a|\hat H_{CFT}\,e^{-(\tau_a+\tau_b)\hat H_{CFT}}|b\rangle\sim\delta_{ab}\frac{D\sigma_aL^D}{(2\tau_a)^{D+1}} e^{\sigma_a(L/(2\tau_a))^D}\,.
\ee Finally we need the matrix elements of the perturbation \be \langle a|e^{-\tau_a\hat H_{CFT}}\,\hat\Phi_j(x)\,e^{-\tau_b\hat H_{CFT}}|b\rangle\,, \ee which is a one-point function in the slab. Once again, if we evaluate this by inserting a complete set of eigenstates of a generator of translations around the torus, this is dominated by its ground state if $L\gg\tau_{a,b}$, but this contributes only if $a=b$. So the perturbation is also diagonal in the basis of physical states (but not in the Ishibashi basis: see Sec.~\ref{sec33}). When $a=b$ the one-point functions in the mid-plane of the slab have the form \be \langle\Phi_j(x)\rangle=\frac{A_a^j}{(2\tau_a)^{\Delta_j}}\,, \ee where the amplitudes $A_a^j$ are universal given the normalization (\ref{norm}) of the operator. Since the perturbed hamiltonian is diagonal in the physical basis of variational states, the problem becomes much simpler: for each $a$ we should minimize the variational energy per unit volume \be\label{Ea} E_a=\frac{D\sigma_a}{(2\tau_a)^{D+1}}+\sum_j\lambda_j\frac{A_a^j}{(2\tau_a)^{\Delta_j}}\,, \ee with respect to $\tau_a$, and then choose the $a$ which gives the absolute minimum. Note that having found this minimum $a$ for a particular set of couplings $\{\lambda_j\}$, since $E_a$ transforms multiplicatively under \be \lambda_j\to e^{(D+1-\Delta_j)\ell}\lambda_j\,,\quad \tau_a\to e^{-\ell}\tau_a\,,\quad E_a\to e^{(D+1)\ell}E_a\,, \ee the absolute minimum will occur for the same value of $a$ along an RG trajectory. This is reassuring, since each point on the trajectory should be described by the same massive QFT up to a rescaling of the mass, which is proportional to $1/\tau_a^{\rm min}$. Since the $\{\Phi_j\}$ are relevant, $\Delta_j<D+1$, so that the behavior of $E_a$ as $\tau_a\to0$ (but still $\gg\epsilon$) is dominated by the first term and is positive if $\sigma_a>0$ (which corresponds to the physically reasonable case of an attractive Casimir force). As $\tau_a\to\infty$ it approaches zero, dominated by the term with smallest $\Delta_j$ and $\lambda_j\not=0$. If the sign of this term is negative, this implies that $E_a$ has a negative minimum at some finite value of $\tau_a$. At least for the 2d minimal models we can show that there is always some $a$ for which $\lambda_jA_a^j<0$, so this minimum always exists. \subsection{Trace of the energy-momentum tensor.} We may infer a general result about the trace $\langle\Theta\rangle=\langle T^i_i\rangle$ of the energy-momentum tensor in the perturbed theory as approximated by this method. For a given set of relevant perturbations $\{\lambda_j\}$ this is given by the response of the action to a scale transformation \be \Theta(x)=-\sum_j(D+1-\Delta_j)\lambda_j\Phi_j(x)\,. \ee Differentiating (\ref{Ea}) we see that at the minimum \be \frac{(D+1)D\sigma_a}{(2\tau_a)^{D+1}}+\sum_j\Delta_j\lambda_j\langle\Phi_j\rangle=0\,, \ee and so \be E=-(D+1)^{-1}\sum_j\Delta_j\lambda_j\langle\Phi_j\rangle+\sum_j\lambda_j\langle\Phi_j\rangle=-(D+1)^{-1}\langle\Theta\rangle\,. \ee Once again, this is reassuring, as we expect that in the ground state of a relativistic theory $\langle T_{00}\rangle=-\langle T_{kk}\rangle$ for $k\not=0$, and so \be \langle\Theta\rangle =-\langle T_{00}\rangle+\sum_{k=1}^D\langle T_{kk}\rangle=-(D+1)\langle T_{00}\rangle=-(D+1)E\,. \ee The variational method therefore gives a lower bound on $\langle\Theta\rangle$. \section{2d minimal models}\label{sec3} We now specialize to the case of the 2d $A_m$ minimal models.
In 2d, the Casimir amplitude is $\sigma_a=\pi c/24$, independent of $a$, where $c$ is the central charge. When $a=b$ the expectation values of the one-point functions in a long strip of width $2\tau_a$ may be found by a conformal mapping from the half-plane to have the form \be\label{strip} \langle\Phi_j(x,\tau)\rangle_{\text{strip}} =\frac{\widetilde A_a^j}{\left((2\tau_a/\pi)\sin(\pi\tau/2\tau_a)\right)^{\Delta_j}}\,, \ee where the amplitude governs the behavior of the one-point function in the upper half-plane $y>0$ with boundary condition $a$ on the real axis: \be \langle\Phi_j(y)\rangle_{\text{half-plane}}=\frac{\widetilde A_a^j}{y^{\Delta_j}}\,. \ee In (\ref{strip}) we should set $\tau=\tau_a$, whence we read off that $A_a^j=\pi^{\Delta_j}\widetilde A_a^j$. If the operator $\Phi_j$ has its standard CFT normalization (\ref{norm}), the amplitudes $\widetilde A_a^j$ are universal. In \cite{CL} they were computed in terms of the overlap between the boundary state $|a\rangle$ and the highest weight state $|j\rangle$ corresponding to the primary operator $\Phi_j$: \be \widetilde A^j_a=\frac{\langle j|a\rangle}{\langle 0|a\rangle}\,. \ee This follows by conformally mapping the upper half plane to a semi-infinite cylinder $x>0$ with a boundary condition $a$ at $x=0$, and comparing the result for $x\to\infty$ with the result of inserting a complete set of eigenstates of the generator of translations along the cylinder. For the $A_m$ minimal models, inserting the expression (\ref{phys}) for $|a\rangle$ we then find \cite{CL} \be\label{At} \widetilde A^j_a=\frac{S_a^j}{S_a^0}\left(\frac{S_0^0}{S_0^j}\right)^{1/2}\,. \ee To summarize, the variational energy (\ref{Ea}) in this case is given by \be E_a=\frac{\pi c}{24(2\tau_a)^2}+\sum_j\lambda_j\frac{S_a^j}{S_a^0}\left(\frac{S_0^0}{S_0^j}\right)^{1/2}\frac{\pi^{\Delta_j}}{(2\tau_a)^{\Delta_j}}\,. \ee It is useful to rescale the couplings by positive constants $\tilde\lambda_j=\pi^{\Delta_j}(S_0^0/S^j_0)^{1/2}\lambda_j$ so that this simplifies to \be E_a=\frac{\pi c}{24(2\tau_a)^2}+\sum_{j\not=0}\frac{S_a^j}{S_a^0}\frac{\tilde\lambda_j}{(2\tau_a)^{\Delta_j}}\,. \ee Note that the sum over $j$ excludes $j=0$, which corresponds to adding the unit operator and therefore a constant shift in the energy. There are two general statements which follow from the fact that $S$ is a symmetric orthogonal matrix, and that the elements $S_0^j$ are all positive. First, since all its rows are orthogonal and non-zero it follows that, for $j\not=0$, some of the elements $S_a^j$ are positive and some negative. Therefore, if ${j^*}$ corresponds to the smallest value of $\Delta_j$ such that $\lambda_j\not=0$, and therefore dominates the behavior of $E_a$ as $\tau_a\to\infty$, no matter what the sign of $\lambda_{j^*}$ we may always find at least one $a$ such that $\lambda_{j^*}S_a^{j^*}<0$, and so $E_a$ approaches zero from below. Since $E_a\to+\infty$ as $\tau_a\to0$, this implies that, for these $a$, $E_a$ has a negative minimum at finite $\tau_a$, corresponding to a finite correlation length. This rules out the possibility that this variational ansatz can describe massless flows to another non-trivial CFT. Second, we may ask whether there is a combination of couplings $\{\lambda_j\}$ which will lead to a prescribed $b$ as overall minimum. The answer is affirmative. For, suppose we choose \be\label{combo} \tilde\lambda_j=-g(2\mu)^{\Delta_j-2}\,S^j_b\,, \ee where $g$ is a positive constant and $\mu$ is some fixed scale $>\epsilon$.
Then the second term in (\ref{Ea}) is, when $\tau_a=\mu$, \be -\frac g{S^0_a(2\mu)^2}\sum_{j\not=0}S^j_aS^j_b=-\frac g{S^0_a(2\mu)^2}\left(\delta_{ab}-S_a^0S_b^0\right)\,. \ee Since $0<S^0_a<1$, this is $<0$ if $a=b$ and $>0$ otherwise. Thus, at this scale, the boundary state $b$ will correspond to the lowest trial energy\footnote{This does not rule out the possibility that some other $E_a$ might come lower than this at some other scale, but in practice this does not seem to happen.}. Note that (\ref{combo}) implies including some irrelevant couplings in the mix of deformations. Further results depend on the detailed form of the modular $S$-matrix for the $A_m$ models. In particular, we may ask what happens if a single $\lambda_j$ is non-zero. Depending on whether $\lambda_j>0$ or $<0$, we have to determine which value of $a$ minimizes (maximizes) the ratio $S^j_a/S^0_a$. Label the positions of the bulk operators in the Kac table by $j=(r,s)$, with $1\leq r\leq m-1$ and $1\leq s\leq m$, and $(r,s)$ identified with $(m-r,m+1-s)$. The label $j=0$ corresponds to $(r,s)=(1,1)$. Similarly label the boundary states $a$ by $(\alpha,\beta)$. The $A_m$ minimal series of CFTs is conjectured to be the scaling limit of the critical lattice RSOS $A_m$ models \cite{RSOS1,RSOS2}. These models are defined on a square lattice. At each node $R$ there is an integer-valued height variable $h(R)$ satisfying $1\leq h(R)\leq m$, with the RSOS constraint that $|h(R)-h(R')|=1$ if $R$ and $R'$ are nearest neighbors. The heights may be thought of as living at the nodes of the Dynkin diagram $A_m$, so each configuration is a many-to-one embedding of the diagram into the square lattice. The critical Boltzmann weights of the lattice model are specified in terms of the elements $s_h^1(m)$ of the Perron-Frobenius eigenvector corresponding to the largest eigenvalue of the adjacency matrix of $A_m$. The general eigenvector has the form \be s_h^j(m)\propto\sin\frac{\pi jh}{m+1}\,. \ee The microscopic interpretation of the conformal boundary states (\ref{phys}) for these models has been given in \cite{SB,PB}. The simplest boundary states are when the boundary lies at 45$^{\circ}$ to the principal lattice vectors, and the heights on the boundary are all fixed to the same particular value $h$, say. These have been identified with the conformal boundary conditions with the Kac labels $(\alpha,\beta)=(1,h)$. The second simplest type of microscopic boundary condition is when the boundary heights are fixed to $h$ and on the neighboring diagonal they are fixed to $h+1$. These have been identified with $(\alpha,\beta)=(h,1)$. In \cite{PB} a complete set of microscopic boundary conditions was identified for each Kac label $(\alpha,\beta)$ but these become increasingly complicated. In general the microscopic boundary states corresponding to labels near the center of the Kac table are increasingly disordered. The ratios of elements of the modular $S$-matrix for the diagonal $A_m$ models are \be\label{ratio} \frac{S^j_a}{S^0_a}=\frac{S^{r,s}_{\alpha,\beta}}{S^{1,1}_{\alpha,\beta}} =(-1)^{(r+s)(\alpha+\beta)}\,\frac{\sin\frac{\pi r\alpha}m}{\sin\frac{\pi \alpha}m} \,\frac{\sin\frac{\pi s\beta}{m+1}}{\sin\frac{\pi \beta}{m+1}}=(-1)^{(r+s)(\alpha+\beta)}\,\frac{s_\alpha^r(m-1)}{s_\alpha^1(m-1)}\frac{s_\beta^s(m)}{s_\beta^1(m)}\,. \ee Locating the global maximum and minimum of this expression for general $(r,s)$ is simplified by the fact that, for fixed $(r,s)$, it factorizes into expressions depending only on $\alpha$ and $\beta$ respectively.
Thus we can restrict to the four possible products of the maximum and minimum of each factor, and compare these values. In each factor the numerator is an oscillating function which is modulated by the positive denominator, which itself has minima at $\alpha=1,m-1$ (and $\beta=1,m$), which for the lattice $A_m$ models correspond to the most ordered states. The most relevant bulk operator corresponds to $(r,s)=(2,2)$, when (\ref{ratio}) becomes \be 4\cos\frac{\pi\alpha}m\cos\frac{\pi\beta}{m+1}\,. \ee The extrema of each factor are at $\alpha=1, m-1$ and $\beta=1,m$. Thus for $\lambda_{2,2}>0$ the minimum energy corresponds to $(\alpha,\beta)=(1,m)=(m-1,1)$, and, for $\lambda_{2,2}<0$, $(\alpha,\beta)=(1,1)=(m-1,m)$. These correspond to the most ordered states, at the ends of the Dynkin diagram. This is to be expected as, in the Landau-Ginzburg correspondence, $\Phi_{2,2}$ is the most relevant Z$_2$ symmetry breaking operator. Similarly, the most relevant Z$_2$ even operator is $\Phi_{3,3}$, when (\ref{ratio}) becomes \be (2\cos\frac{2\pi\alpha}m+1)(2\cos\frac{2\pi\beta}{m+1}+1)\,. \ee If $m$ is even, the first factor varies between $2\cos\frac{2\pi}m+1$ at $\alpha=1,m-1$, and $-1$ at $\alpha=\frac12m$, and the second factor varies between $2\cos\frac{2\pi}{m+1}+1$ at $\beta=1,m$, and $2\cos\frac{\pi m}{m+1}+1$ at $\beta=\frac12m, \frac12m+1$. Thus for $\lambda_{3,3}<0$ there are degenerate minima for $(\alpha,\beta)=(1,1)=(m-1,m)$ and $(\alpha,\beta)=(1,m)=(m-1,1)$ (the most ordered states, which break the Z$_2$ symmetry.). On the other hand for $\lambda_{3,3}>0$ we need to compare the quantities \be (-1)(2\cos\frac{2\pi}{m+1}+1)\,, \quad (2\cos\frac{\pi m}{m+1}+1)(2\cos\frac{2\pi}m+1)\,. \ee Numerically, the first is more negative, so the minimum energy in this case corresponds to $\alpha=\frac12m$, $\beta=1,m$. These are Z$_2$-symmetric states. For odd $m$ the same story holds, with $\alpha$ and $\beta$ interchanged. Another interesting special case is $\Phi_{1,3}$. This is a perturbation which, with the correct sign, is supposed to flow to the $A_{m-1}$ minimal CFT, and for the other sign to a state with large degeneracy \cite{RSOS1}. As we have seen, such massless flows cannot be accounted for within this set of trial states. In this case (\ref{ratio}) simplifies to \be 2\cos\frac{2\pi\beta}{m+1}+1\,, \ee independent of $\alpha$. Depending on the sign of the coupling, this picks out the boundary states either with $\beta=1,m$ or with $\beta\approx\frac12m$. In both cases, however, there is an $(m-1)$-fold degeneracy of candidate ground states. This reflects a flow towards a true first-order transition, as expected for one sign of the coupling \cite{RSOS1}, or the best attempt of this approximation to reproduce the critical point of the $A_{m-1}$ model, as expected for the other sign. This is an important check on the effectiveness of our approach. These somewhat cryptic general remarks are best illustrated with some simple examples. \subsection{The Ising model.} This corresponds to $A_3$. The perturbed hamiltonian is \be\label{HIsing} \widehat H=\widehat H_{CFT}+t\int\hat\varepsilon dx+h\int\hat\sigma dx\,, \ee where $\varepsilon=\Phi_{2,1}=\Phi_{1,3}$ and $\sigma=\Phi_{1,2}=\Phi_{2,2}$ are the energy density and magnetization operators respectively. In this case, the bulk operators are $\{\Phi_j\}=(1,\epsilon,\sigma)$, and the boundary states in the same labeling are $(+,-,f)$, corresponding to fixed$(+)$, fixed$(-)$ and free boundary conditions on the Ising spins. 
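As another illustrative aside, before writing out the Ising data explicitly, we note that the extremization over Kac labels described above can be automated directly from (\ref{ratio}). The short Python fragment below is only a sketch (the enumeration does not quotient by the identification $(\alpha,\beta)\sim(m-\alpha,m+1-\beta)$, under which the ratio is invariant); for the operators $\Phi_{2,2}$, $\Phi_{3,3}$ and $\Phi_{1,3}$ it reproduces the extremal labels quoted above.
\begin{verbatim}
# Sketch only: scan the ratio S^{(r,s)}_{(alpha,beta)} / S^{(1,1)}_{(alpha,beta)}
# of eq. (ratio) over the boundary Kac labels of the diagonal A_m model and
# report the labels at which it is minimal / maximal for a given bulk (r,s).
import numpy as np

def ratio(r, s, alpha, beta, m):
    sign = (-1.0) ** ((r + s) * (alpha + beta))
    return (sign
            * np.sin(np.pi * r * alpha / m) / np.sin(np.pi * alpha / m)
            * np.sin(np.pi * s * beta / (m + 1)) / np.sin(np.pi * beta / (m + 1)))

def extremal_labels(r, s, m):
    labels = [(a, b) for a in range(1, m) for b in range(1, m + 1)]
    vals = {(a, b): ratio(r, s, a, b, m) for (a, b) in labels}
    return min(vals, key=vals.get), max(vals, key=vals.get)

# Example: the leading Z2-odd operator (2,2) of the A_4 (tricritical) model;
# this returns (1,4) and (1,1), i.e. the most ordered boundary states,
# up to the Kac identification of labels.
print(extremal_labels(2, 2, m=4))
\end{verbatim}
Which of the two extremal labels is selected then depends on the sign of the corresponding coupling, as discussed above.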
The $S$-matrix in this ordering of the basis is \be S=\left(\begin{array}{ccc}\ffrac12&\ffrac12&\ffrac1{\sqrt2}\\ \ffrac12&\ffrac12&-\ffrac1{\sqrt2}\\ \ffrac1{\sqrt2}&-\ffrac1{\sqrt2}&0\end{array}\right)\,. \ee After rescaling the couplings as above, we find that \begin{eqnarray} E_+&=&\frac\pi{48(2\tau_+)^2}+\frac t{2\tau_+}+\sqrt2\frac h{(2\tau_+)^{1/8}}\,,\label{Eaa}\\ E_-&=&\frac\pi{48(2\tau_-)^2}+\frac t{2\tau_-}-\sqrt2\frac h{(2\tau_-)^{1/8}}\,,\label{E2a}\\ E_f&=&\frac\pi{48(2\tau_f)^2}-\frac t{2\tau_f}\,.\label{Ef} \end{eqnarray} For $h=0$, $t>0$, corresponding to the disordered state, it is clear that the minimizer is $E_f$. For the opposite sign of $t$ with $h>0$, the minimizer is $E_-$, corresponding to negative magnetization (recall the definition of the sign of the couplings in (\ref{HIsing})), and vice versa. As $h\to0$ from either side with $t<0$, we remain in one or the other of these states, corresponding to spontaneous symmetry breaking. However, for $t>0$ and $0<h\ll t^{15/8}$ there is a problem. The minimum of $E_-$ is found by balancing the last two terms in (\ref{E2a}), and therefore occurs when $\tau_-=O((t/h)^{8/7})$ at a value $E_-=-O(h^{8/7}/t^{1/7})$. On the other hand the minimum of $E_f=-O(t^2)$ is much lower in this limit. This would suggest, incorrectly, that the magnetization is zero in the ground state. As we increase the ratio $h/t^{15/8}$, eventually these levels cross, but there is no reason for $\tau_-$ and $\tau_f$ to be equal at this point. This appears to be an inherent problem of using a variational ansatz which is not sufficiently complex. It could presumably be overcome by using a trial state of the form \be e^{-\tau\hat H_{CFT}}\,e^{-h_s\int\hat\sigma_s dx}\,|f\rangle\,, \ee where $\hat\sigma_s$ is the boundary magnetization coupling to a boundary magnetic field $h_s$, at the cost of loss of analytic tractability. \subsubsection{Logarithmic anomaly.}\label{seclog} When $h=0$ it follows from (\ref{Eaa},\ref{E2a},\ref{Ef}) that the minimum energy scales like $t^2$. Yet it has been known since Onsager that the correct behavior is $t^2\log t$. The origin of this logarithmic anomaly is a cancellation between the scaling term $t^{2/(2-\Delta_\varepsilon)}$ and the analytic background $\propto t^2$, both of which occur with amplitudes which diverge as $\Delta_\varepsilon\to1$. This may be accounted for within the variational approach by adding a counter-term as before, proportional to the space-time integral of the 2-point function, which will now also have a logarithmic dependence on the IR cutoff $\tau$. Thus, for example, (\ref{Ef}) becomes \be E_f=\frac\pi{48(2\tau_f)^2}-\frac t{2\tau_f}-At^2\log(\tau_f/\epsilon)\,, \ee where $\epsilon$ is the short-distance cutoff and $A$ is a (calculable) $O(1)$ constant. The minimum still occurs at $\tau_f\sim t^{-1}$, but the last term now contributes the desired logarithm at the minimum. \subsection{The tricritical Ising model.} This corresponds to the $A_4$ lattice model with heights $h(R)\in\{1,2,3,4\}$. The RSOS condition means that $h$ is even on even sites and odd on odd sites, or vice versa. In the Landau-Ginzburg picture it corresponds to a scalar field $\phi$ with a $\phi^6$ interaction, and a Z$_2$ symmetry under $\phi\to-\phi$. Note that in the lattice model this Z$_2$ symmetry is implemented by a reflection of the Dynkin diagram \em and \em a sublattice shift. The Kac table with bulk operators labelled by Landau-Ginzburg is shown in Fig.~2. Note that, in this labelling, even $r$ is Z$_2$ odd and odd $r$ is Z$_2$ even.
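As a quantitative footnote to the Ising discussion above, the spurious level crossing between $E_f$ and $E_-$ for $t>0$, $h>0$ can be located numerically from (\ref{E2a}) and (\ref{Ef}); the following fragment is again only an illustrative sketch, not part of the analysis proper.
\begin{verbatim}
# Sketch only: Ising trial energies (E2a) and (Ef) with x = 2*tau and t, h
# the rescaled couplings; for small h/t^(15/8) the free-boundary minimum lies
# (incorrectly) lower, and the two levels cross as h/t^(15/8) is increased.
import numpy as np
from scipy.optimize import minimize_scalar

def e_minus(x, t, h):
    return np.pi / (48.0 * x**2) + t / x - np.sqrt(2.0) * h / x**0.125

def e_free(x, t):
    return np.pi / (48.0 * x**2) - t / x

def emin(f, args):
    return minimize_scalar(f, bounds=(1e-4, 1e6), args=args,
                           method="bounded").fun

t = 1.0
for h in (0.01, 0.1, 1.0, 10.0):
    print(h, emin(e_minus, (t, h)), emin(e_free, (t,)))
\end{verbatim}
We now return to the tricritical model.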
However another model in the same universality class is the spin-1 (Blume-Capel) Ising model, which may be thought of as an Ising model with vacancies. \begin{figure}[h]\label{LG} \centering \includegraphics[width=0.25\textwidth]{A4LG} \caption{Landau-Ginzburg assignment of bulk operators in the $A_4$ Kac table.} \end{figure} \begin{figure}[h]\label{A4} \centering \includegraphics[width=0.4\textwidth]{A4bcv2} \caption{Correspondence between boundary conditions in lattice models and Kac labels of conformal boundary states: in the $A_4$ model according to Ref.~\cite{PB} (upper labels), and in the Blume-Capel model, according to Ref.~\cite{Affleck} (lower labels).} \end{figure} The usually accepted phase diagram and RG flows of the tricritical Ising model near the tricritical fixed point are quite complex. (See for example Fig.~4.2 of \cite{JCbook}.) In the Z$_2$-even sector, turning on the most relevant operator $\Phi_{3,3}\sim\phi^2$ gives flows either to the high-temperature disordered phase, or to the two coexisting low-temperature ordered phases. Turning on the $\Phi_{1,3}\sim\phi^4$ operator gives flows either to a first-order transition between these ordered phases and a disordered phase with vacancies, or to the $A_3$ Ising fixed point. As for the Ising model, turning on the $\Phi_{2,2}\sim\phi$ operator leads to broken-symmetry phases. However, at low temperatures there may be coexistence between two such phases with different densities of vacancies. These persist to finite temperature, giving `wings' in the phase diagram which end in lines of Ising-like transitions. These lines meet in the tricritical point and correspond to flows generated by the non-leading but relevant Z$_2$-odd operator $\Phi_{2,1}\sim\phi^3$. According to Behrend and Pearce \cite{PB}, the labelling of the boundary states in the $A_4$ lattice model is as shown in Fig.~3. On the same diagram we have indicated their interpretation in the Blume-Capel model, due to Chim \cite{Chim} and Affleck \cite{Affleck}, which is perhaps more intuitive. Here $(\pm)$ label totally ordered states, $(0)$ is a vacancy-rich state, and $(0\pm)$ are partially ordered states. $(d)$ is a multicritical point separating these in the boundary RG flows \cite{Affleck}. Note that the $\alpha=2$ states are Z$_2$ even while the Z$_2$ symmetry interchanges $\alpha=1$ and $\alpha=3$ (keeping $\beta$ the same). Let us see how well the variational approach reproduces this picture. According to the earlier analysis, turning on the $\Phi_{2,2}$ operator corresponds to the boundary states at the corners of the Kac table in Fig.~\ref{A4}. These are the most ordered states. Again, turning on the $\Phi_{3,3}$ perturbation corresponds to the boundary states which extremize $\big(2\cos(\pi\alpha/2)+1\big)\big(2\cos(2\pi\beta/5)+1\big)$. This gives $\beta=1,4$, and, depending on the sign of the coupling, either $\alpha=2$ or $\alpha=1,3$. These correspond to the disordered and ordered Ising-like phases, respectively, as expected. Turning on $\Phi_{1,3}$ corresponds to extremizing only the second factor $\big(2\cos(2\pi\beta/5)+1\big)$, and so, depending on the sign of the coupling, gives either $\beta=1,4$, corresponding to coexistence between these Ising-like phases (instead of a second order critical point as it should), or $\beta=2,3$, coexistence between partially ordered phases and a disordered, vacancy-rich phase. Turning on $\Phi_{2,1}$, on the other hand, corresponds to extremizing $(-1)^{\alpha+\beta}\cos(\pi\alpha/4)$.
For one sign of the coupling we get coexistence between the strongly ordered phase $(-)=(1,2,1,2)$ and the partially ordered phase $(0+)=(3,2/4,3,2/4)$, and for the other sign we get coexistence between their Z$_2$ partners. This is once again in general agreement with the wings of the phase diagram, except that the approximation suggests a first-order rather than an Ising-like continuous transition. We conclude that for this model the boundary states roughly reproduce the expected RG flows when a single relevant operator is turned on, with the exception that flows to non-trivial CFTs are approximated by first-order rather than continuous transitions. \subsection{Matrix elements between Ishibashi states and the fusion rules.}\label{sec33} As an aside, we mention a curiosity which follows from the result (\ref{At}) for the matrix element of a bulk primary field between physical states in the limit $L\to\infty$: \be \langle a|e^{-\tau\hat H}\hat\Phi_je^{-\tau\hat H}|b\rangle=\delta_{ab}\left(\frac\pi{2\tau}\right)^{\Delta_j}\, \frac{S_a^j}{S_a^0}\left(\frac{S_0^0}{S_0^j}\right)^{1/2} \ee and the definition of these states in terms of the Ishibashi states (\ref{phys}), which, on inverting becomes: \be |i\rangle\rangle=\sum_aS^i_a(S^i_0)^{1/2}\,|a\rangle\,. \ee Hence \be \langle\langle i|e^{-\tau\hat H}\hat\Phi_je^{-\tau\hat H}|k\rangle\rangle= \left(\frac\pi{2\tau}\right)^{\Delta_j}\left(\frac{S_0^0}{S_0^j}\right)^{1/2}(S^i_0)^{1/2}(S^k_0)^{1/2} \,\sum_a\frac{S_a^iS_a^jS_a^k}{S^0_a}\,. \ee We recognize the sum over $a$ as the Verlinde formula \cite{Verlinde} for the fusion rule coefficient $N_{ijk}$. Taking into account the normalization of the states, we have \be\label{fusion} \frac{\langle\langle i|e^{-\tau\hat H}\hat\Phi_je^{-\tau\hat H}|k\rangle\rangle} {\big(\langle\langle i|e^{-2\tau\hat H}|i\rangle\rangle \langle\langle k|e^{-2\tau\hat H}|k\rangle\rangle\big)^{1/2}} =\left(\frac\pi{2\tau}\right)^{\Delta_j}\left(\frac{S_0^0}{S_0^j}\right)^{1/2}\,\,N_{ijk}\,. \ee Note that the first factor could be absorbed into a redefinition of the normalization of $\hat\Phi_j$. This result is somewhat surprising, and, to our knowledge, has not been noticed before. If we insert the definition (\ref{Ishi}) of the Ishibashi states into the numerator, the leading term for $\tau\gg L$ is proportional to the OPE coefficient $c_{ijk}$, which certainly vanishes whenever $N_{ijk}$ does. The contributions of all the descendent states are all proportional to $c_{ijk}$, with coefficients which could, in principle, be computed from the Virasoro algebra. However, it is remarkable that they all conspire to sum to the integer-valued fusion rule coefficient $N_{ijk}$. If there were an independent way of establishing (\ref{fusion}) this would give an alternative derivation of the Verlinde formula. It would be interesting to see whether this result extends to non-diagonal minimal models and to other rational CFTs. Although it has been derived here only for the Virasoro minimal models, there seems to be no obstacle in principle to its generalization to other rational CFTs, and it then suggests that the 1-point functions of bulk fields between suitable boundary states are determined by purely topological data of the fusion algebra. It also gives a possible way to \em define \em (at least ratios of) the fusion rules for non-rational CFTs. \section{Conclusion.}\label{sec4} We have proposed using smeared boundary states as trial variational states for massive deformations of CFTs. 
This is motivated by the uses of these states in quantum quenches and entanglement studies. In the case of the 2d minimal models we can perform explicit calculations which show this method gives a qualitative picture of the phase diagram in the vicinity of the CFT. Its main failing is that it cannot correctly predict a flow to a non-trivial CFT. In this case it appears to suggest phase coexistence rather than a continuous transition. In addition, the boundaries between different states corresponding to different renormalization group sinks are always first-order transitions. This is a necessary consequence of the variational method. However the method, by its nature, always gives the correct scaling of the energy with the coupling constants. From a numerical point of view it cannot be competitive with earlier methods such as the truncated conformal space approach \cite{TCFT1,TCFT2}, but it is much simpler and moreover gives new insight into the physical relationship between conformal boundary states and ground states of gapped theories. Since it gives a bound on the universal term in the free energy, it would be interesting to make a detailed comparison with exact results available for integrable perturbations \cite{Fateev}. \section*{Acknowledgements} The author thanks V.~Pasquier, H.~Saleur and G.~Vidal for helpful discussions, and A.~Konechny for pointing out Ref.~\cite{Kon}. \paragraph{Funding information.} This work was supported in part by funds from the Simons Foundation, and by the Perimeter Institute for Theoretical Physics. Research at the Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Research and Innovation.
\section{\\Modelled EIG properties} \label{App:ModelledEIG_Properties} The SFH, dust attenuation and stellar mass of the EIGs were estimated by fitting a model to their UV-to-near-IR SEDs as described in section \ref{sec:ResultsModel}. Figure \ref{f:ResGalMC_1D} shows, for each of the modelled EIGs, the marginalized posterior distributions of $Age_{1}$, $\tau^{-1}$, \EBV, $\ifmmode M \else $M$ \fi_{2}$ and {\Mstar}. The extreme $\tau^{-1}$ values where $\left| \tau \right| \ll Age_{1}$ represent scenarios in which the first stellar population was created in a short burst. The low $\tau^{-1}$ values (where $\left| \tau \right| \gg Age_{1}$) represent an almost constant star formation for the first (main) stellar population. \begin{figure*} \begin{centering} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1s-01.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1s-02.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1s-03.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1s-04.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1s-06.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1s-07.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1s-08.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1s-09.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1s-10.pdf} \caption [Modelled one-dimensional marginalized posterior distributions] { Modelled one-dimensional marginalized posterior distributions of $Age_{1}$, $\tau^{-1}$, \EBV, $\ifmmode M \else $M$ \fi_{2}$ and $M_{*}$.
Data for each EIG is plotted in a separate row.\label{f:ResGalMC_1D} } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1s-11.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1s-12.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1s-13.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1s-14.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1a-01.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1a-02.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1a-03.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1a-04.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1a-07.pdf} \contcaption { \label{f:ResGalMC_1D_2} } \end{centering} \end{figure*} \begin{figure*} \begin{centering} % \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_2s-01.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_2s-02.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_2s-05.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_2s-06.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_2s-07.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_2s-08.pdf} % \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_2a-01.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_2a-02.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_2a-03.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_2a-04.pdf} % \contcaption { } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_3s-01.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_3s-02.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_3s-03.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_3s-04.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_3s-05.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_3s-06.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_3s-07.pdf} % \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_3a-01.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_3a-02.pdf} \contcaption { } \end{centering} \end{figure*} \section*{Acknowledgements} {\rem{ALFALFA}} We are grateful to Martha Haynes, Riccardo Giovanelli and the entire ALFALFA team for providing an unequalled HI data set. {\rem{NED}} This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. {\rem{SDSS}} Funding for SDSS-III has been provided by the Alfred P. 
Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/. \rem{this is part of the Official SDSS-III Acknowledgement, which can be found on: http://www.sdss3.org/collaboration/boiler-plate.php} {\rem{GALEX}} This research has made use of observations made with the NASA Galaxy Evolution Explorer. GALEX is operated for NASA by the California Institute of Technology under NASA contract NAS5-98034. \rem{IRAS - used for downloading 2MASS, WISE and Spitzer data} This research has also made use of the NASA/IPAC Infrared Science Archive (IRSA), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. {\rem{2MASS}} This publication makes use of data products from the Two Micron All Sky Survey (2MASS), which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. {\rem{WISE}} This publication makes use of data products from the Wide-field Infrared Survey Explorer (WISE), which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. \section{Discussion and conclusions} \label{s:DisConc} We have found surprising environmental dependencies of the HI content, \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi, and of the morphological type of EIGs (sections \ref{s:Res_MassHisgrms} and \ref{s:Res_Morphology} respectively). It is generally accepted that galaxies in cluster environments typically have atomic gas deficiencies \citep{1998ARA&A..36..189K}, while void galaxies are typically gas-rich \markChange{\citep{2011MNRAS.415.1797C, 2012AJ....144...16K, 2014ApJ...788...29L\rem{section 3.2.1}}}. It is also generally accepted that early-type galaxies are more abundant in clusters than in isolated environments \citep{2009MNRAS.393.1324B}. It was, therefore, expected that a sample of the most isolated galaxies (subsample EIG-1) would be the most gas-rich and would contain the lowest fraction of early-types. However, contrary to these expectations, we have found that EIG-1 galaxies, which lack neighbours with significant {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} content at distances {$<$3\,\Mpch}, tend to have lower {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} compared to EIG-2 galaxies that have such neighbours (the average {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} of EIG-1 galaxies is lower than that of EIG-2 galaxies with $2.5\,\sigma$ confidence). \cite{2014MNRAS.444.3559M} have found a similar {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} environmental dependence, in which their sample of void galaxies\rem{ (less isolated than the EIGs)} showed a tendency for lower {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} compared to their sample of wall galaxies. Similarly unexpected, we have found that the most isolated galaxies (subsample EIG-1) have a higher tendency to be early-types compared to EIG-2 galaxies (with 0.94 confidence). To the best of our knowledge this is the first time where an isolated galaxies' sample shows a higher fraction of early-types compared to a less isolated sample. 
\markChange{These findings do not contradict the results of \cite{2011MNRAS.415.1797C, 2012AJ....144...16K, 2014ApJ...788...29L} and \cite{2009MNRAS.393.1324B} which compared isolated galaxies with galaxies in clusters or with the general population of galaxies. Here we compared between two galaxy populations of extreme isolation levels, and showed that the trends of increased {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} and decreased early-type fraction with the increase of the isolation level reverse at extreme isolation (or when the isolation is tested also with respect to the {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} of possible neighbours).} There is considerable evidence from cosmological simulations that the spins and major axes of haloes are correlated with the direction of the walls or filaments in which they reside \citep[e.g.,][]{2007ApJ...655L...5A, 2009ApJ...706..747Z, 2012MNRAS.427.3320C, 2014MNRAS.443.1274L}. For low-mass haloes ($\ifmmode \mbox{log}\else log\fi \left[ \Mhalo / \left( \Msunh \right) \right] < 13$ according to \citealt{2009ApJ...706..747Z}, or $\ifmmode \mbox{log}\else log\fi \left[ \Mhalo / \left( \Msunh \right) \right] < 12.6$ according to \citealt{2012MNRAS.427.3320C}), the halo spin is more likely to be aligned with the closest filament. This preferred spin direction is probably a result of the direction from which material is accreted to the halo. As shown in \markChange{Figure \ref{f:MhaloFromEIG_I}}, simulation analysis indicates that almost all EIGs reside in haloes that would be considered low-mass in this respect, and are, therefore, expected to have spins that correlate fairly well with the direction of the filaments and walls closest to them. We speculate that this effect may be connected to the low abundance of early-types in the EIG-2 subsample and to its higher average {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} compared to the EIG-1 subsample. Underdense filaments and walls may be the hosts of EIG-2 galaxies. The halo spins induced by their filament or wall environment may significantly reduce their early-type fraction and possibly also increase their {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} (because the gas may spend more time before reaching their centres). If they indeed reside in filaments or walls, they are also expected to have some neighbours with similar tendency for being late-type and containing significant amounts of {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}. We further speculate that a significant fraction of EIG-1 galaxies are not parts of filaments or walls, but rather reside in environments with no preferred direction for accreting material (e.g., at the junction points between filaments so extremely underdense that no galaxies were detected in them). This may increase their probability of being early-types and may also affect them in such a way that they would contain less HI on average (either because there is not much gas in their environment or because the available HI gas is accreted faster to the halo's centre and forms stars more quickly). Further study of the early-type EIGs found in this work may be of interest, since if they indeed reside at junction points between filaments, they may resemble cluster early-types at early stages of their development. 
\vspace{12pt} Both the early-type and late-type EIGs follow the same colour-to-{\Mstar} relation (Fig.~\ref{f:Res_ColorMass}), SFR-to-{\Mstar} `main sequence' relation (Fig.~\ref{f:Res_MainSeq}) and {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}-to-{\Mstar} relation (Fig.~\ref{f:Res_HI_Mstar}), and fit the SFR predictor of eq.~\eqref{e:Res_SFR_predictor} (Fig.~\ref{f:Res_SFR_predictor}). This indicates that the mechanisms and factors governing star formation, colour and the {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}-to-{\Mstar} relation are similar in early-type and late-type EIGs. It further indicates that the morphological type of EIGs is not governed by their {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} content, {\SFR} or colour. EIGs with high {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} content, high SFR or blue colour are not necessarily late-types. \vspace{12pt} Our observations indicate that EIGs typically fit the `main sequence of star forming galaxies' found by \cite{2012ApJ...756..113H}\rem{ for a general sample detected by ALFLFA, SDSS and GALEX}. This indicates that the extreme isolation of the EIGs does not affect their {\SFR}\rem{ or their {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}} considerably compared to field galaxies. This is supported by \cite{2016arXiv160108228B} who found no significant difference in SF between void galaxies of the VGS sample and field galaxies. We have found that EIGs follow a colour-to-{\Mstar} relation, in which EIGs with {\Mstar} smaller than $10^{(10.6 \pm 0.9)}\,\ifmmode \Mass_\sun\else $\Mass_\sun$\fi$ are typically `blue cloud' galaxies irrespective of their morphological type (Figure \ref{f:Res_ColorMass}). Since {\Mstar} of most EIGs is below this threshold, most of the EIGs are blue. A similar result was found by \cite{2009MNRAS.393.1324B} who found that in low density environments low {\Mstar} galaxies are mostly blue, while galaxies with high {\Mstar} are mostly red (irrespective of morphology). This is contrary to what is found in high density environments, where galaxies are mostly red irrespective of their {\Mstar} and morphology. \vspace{24pt} With respect to the `Nature vs.~Nurture' question, which was the primary driver of this work, we conclude the following: It is well known that cluster environments have a strong effect on star formation, colours and morphologies of galaxies. With the exception of these high density environments the {\SFR} is not significantly affected by the environment, i.e.~the `main sequence of star-forming galaxies' holds in a range of environments from walls to the most extremely isolated regions measureable. Outside high density regions, the colours of galaxies are mostly related to their stellar mass, {\Mstar}, and are less affected (if at all) by the environment. We have found that the HI content, {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}, and the morphological type of galaxies do depend on their environment. In the most isolated environments, where no neighbours with significant {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} are present (to a distance of {3\,\Mpch}), galaxies tend more to be early-types and have lower {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}, on average, compared to less isolated environments. We speculate that this might reflect the large scale structure of these extremely isolated regions. 
Late-type and high-{\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} galaxies may be more abundant in underdense filaments and walls, while early-type and lower {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} galaxies may be more abundant at the junctions of filaments so extremely underdense that no galaxies were detected in them. \section{Results} \label{ch:Results} \subsection{Apparent magnitudes and fluxes} Table \ref{T:EIG_ugriz} lists the SDSS apparent magnitudes of the \markChange{39} EIGs measured as described in section \ref{sec:Wise_Photometry}.\footnote{\markChange{Two of the EIGs, 1a-05 and 1a-06, are not in the SDSS footprint and were not imaged in the WO.}} Photometrically calibrated {\U\B\V\R\I} (Bessell) measurements were made for eight of the EIGs. Their apparent magnitudes are listed in Table \ref{T:EIG_ubvri}. \input{Tables/EIG_ugriz} \rem{label = {T:EIG_ugriz}} \input{Tables/EIG_ubvri} \rem{label = {T:EIG_ubvri}} The combined R and {Net-\Halpha} ({\nHa}) images are shown in figures \ref{f:RHacolorImgEIG-1}, \ref{f:RHacolorImgEIG-2} and \ref{f:RHacolorImgEIG-3} (in negative).\footnote{\markChange{The images of EIG 2s-04 are not shown due to a bright foreground star that does not allow to clearly identify it in the image (see details in Appendix \ref{App:EIGdata}).}} Each row of images in the figures relates to a different EIG. The name of the EIG is given on the leftmost image, which shows the combined R image. The second image from the left shows the same R image using a logarithmic scale (log R). The third image from the left shows the combined {\nHa} image. The rightmost image shows the EIG in false colour; R in orange, and {\nHa} in azure (both using a negative linear scale). The upper bar in the rightmost image shows the physical scale calculated using the distance estimate described in section \ref{s:ObsNPrc_AbsMagLum}. The lower bar in the rightmost image shows the angular size scale. Regions of interest within some of the EIGs were measured individually. The polygonal apertures used for these measurements are drawn (along with their names) on the rightmost images of figures \ref{f:RHacolorImgEIG-1}, \ref{f:RHacolorImgEIG-2} and \ref{f:RHacolorImgEIG-3}. Observational results of these regions of interest are described in section \ref{s:rsltsSFR}. Remarks for specific EIGs are listed in Appendix \ref{App:EIGdata}. \begin{figure*} \begin{centering} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-01.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-02.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-03.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-04.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-05.pdf} \caption [R and {\nHa} images (EIG-1)] { \markChange { R and {\nHa} images of the EIG-1 subsample (each EIG in a separate row). The columns from left to right show negative images of: the combined R image (in linear scale), the combined R image in logarithmic scale, the combined {\nHa} image (linear scale), the EIG in false colour; R in orange and {\nHa} in azure (linear scale). 
The rightmost column also includes a physical distance scale, an angular size scale and where applicable the regions of interest measured individually (along with their names).\label{f:RHacolorImgEIG-1} } } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-06.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-07.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-08.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-09.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-10.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-11.pdf} \contcaption { } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-12.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-13.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-14.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1a-03.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1a-04.pdf} \contcaption { } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_2s-01.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_2s-02.pdf} \rem{\includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_2s-04.pdf}} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_2s-05.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_2s-06.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_2s-07.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_2s-08.pdf} \caption [R and {\nHa} images (EIG-2)] { R and {\nHa} images of the EIG-2 subsample (each EIG in a separate row). The columns from left to right show negative images of: the combined R image (in linear scale), the combined R image in logarithmic scale, the combined {\nHa} image (linear scale), the EIG in false colour; R in orange and {\nHa} in azure (linear scale). The rightmost column also includes a physical distance scale, an angular size scale and where applicable the regions of interest measured individually (along with their names).\label{f:RHacolorImgEIG-2} } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_2a-01.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_2a-02.pdf} \contcaption { } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_3s-01.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_3s-02.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_3s-03.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_3s-04.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_3s-05.pdf} \caption [R and {\nHa} images (EIG-3)] { R and {\nHa} images of the EIG-3 subsample (each EIG in a separate row). 
The columns from left to right show negative images of: the combined R image (in linear scale), the combined R image in logarithmic scale, the combined {\nHa} image (linear scale), the EIG in false colour; R in orange and {\nHa} in azure (linear scale). The rightmost column also includes a physical distance scale, an angular size scale and where applicable the regions of interest measured individually (along with their names).\label{f:RHacolorImgEIG-3} } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_3s-06.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_3s-07.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_3a-01.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_3a-02.pdf} \contcaption { } \end{centering} \end{figure*} In Figure \ref{f:RSurBrightR} the R surface brightness, $\mu_{\mbox{\scriptsize R}}$, is plotted as function of the distance from the galactic centre, $r$. The surface brightness was measured on a set of ellipses with different semi-major axes, fitted to each EIG. The $\mu_{\mbox{\scriptsize R}}$ of the innermost $2\,\ifmmode ^{\prime\prime}\else $^{\prime\prime}$\fi$ of each galaxy is not shown to avoid confusion due to point spread function (PSF) effects. $\mu_{\mbox{\scriptsize R}}$ measurements with uncertainty $\geq 0.5\,\magAsecSq$ are also not shown. Two profiles were fitted for each EIG's $\mu_{\mbox{\scriptsize R}}$ measurements, one typical of a late type galaxy disc (blue dashed line) and the other representing an early-type elliptical galaxy (red solid curve). The disc profile has a S\'{e}rsic's index $n=1$, and was fitted to the outskirts of the galaxy (from half of the maximum $r$ shown in the figure and further). The elliptical profile is a de Vaucouleurs relation, i.e. a S\'{e}rsic's index $n=4$. It was fitted to all the measured points shown in the figure. \begin{figure*} \begin{centering} \includegraphics[width=16.3cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_SurBrightR1.pdf} \caption [R surface brightness] { R surface brightness, $\mu_{\mbox{\scriptsize R}}$, as function of the distance from the galactic centre, $r$. The black horizontal bars show the angular scale. The blue dashed lines are linear relations, fitted to the points above half of the maximum shown $r$. The red solid curves show best fits to a de Vaucouleurs formula. \label{f:RSurBrightR} } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=16.3cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_SurBrightR2.pdf} \contcaption { } \end{centering} \end{figure*} \markChange{The EIGs were classified as early-types or late-types by visual inspection of the images of the EIGs, and the $\mu_{\mbox{\scriptsize R}}$ profiles of Figure \ref{f:RSurBrightR}.} Whenever \markChange{the combination} of the images and $\mu_{\mbox{\scriptsize R}}$ profiles did not yield a clear identification, the EIG was classified as `unknown'. \markChange{Six out of 31 EIGs (19 per cent) were classified as `unknown'. We chose to classify such a large fraction as 'unknown' in order to reduce the probability of false identification to a minimum. The morphological types of EIGs 1s-05, 2s-04 and 2s-06 were not classified. 
EIG 1s-05 was not classified because it cannot be identified in optical images, and EIGs 2s-04 and 2s-06 were not classified because of bright foreground stars projected close to them (see details in Appendix \ref{App:EIGdata})}. The morphological classifications are listed in Table \ref{T:Res_Morphology}. \input{Tables/Res_Morphology} \rem{label = {T:Res_Morphology}} Ultraviolet data were downloaded from the GALEX \citep{2005ApJ...619L...1M} GR6/7 data release\footnote{http://galex.stsci.edu/GR6/}. The available apparent magnitudes of EIGs in the GALEX far-ultraviolet band ({\magFUV}) and near-ultraviolet band ({\magNUV}) are listed in Table \ref{T:EIG_GALEX}. \input{Tables/EIG_GALEX} \rem{label = {T:EIG_GALEX}} 2MASS \citep{2006AJ....131.1163S} and WISE \citep{2010AJ....140.1868W} data were downloaded from the NASA/IPAC Infrared Science Archive (IRSA)\footnote{http://irsa.ipac.caltech.edu}. The 2MASS data were taken from the All-Sky data release. Thirteen EIGs were identified in its Extended Source Catalogue. For two of these (EIGs 1a-01 and 2a-02) the quoted {\magJ}, {\magH} and {\magKs} magnitudes were not used since they translate to fluxes suspiciously lower than that of the {\SDSSi} band. The {\magJ}, {\magH} and {\magKs} apparent magnitudes\rem{(wavelengths {1.24\,\um}, {1.66\,\um} and {2.16\,\um} respectively)} of the remaining eleven EIGs are listed in Table \ref{T:EIG_2MASS}. \input{Tables/EIG_2MASS} \rem{label = {T:EIG_2MASS}} WISE data taken from the All-WISE catalogue are listed in Table \ref{T:EIG_StarFormation}. These include apparent magnitudes in the {\WThree}\rem{({12\,\um})} and {\WFour}\rem{({22\,\um})} bands (columns {\magWThree} and {\magWFour} respectively). For some of the EIGs these are measurements through elliptical apertures based on the 2MASS {\Ks} isophotal apertures. For EIGs for which elliptical aperture measurements were not possible, the data are profile-fitting photometry measurements or, for low SNR detections, the 0.95 confidence magnitude lower limits. The WISE profile-fitting photometry measurements are less accurate for extended objects than the elliptical aperture measurements. To estimate their uncertainty, a comparison was made between profile-fitting photometry magnitudes and elliptical aperture based magnitudes for eight EIGs for which both were available. On average, the profile-fitting magnitudes were found to be {$0.08 \pm 0.03$\,\magnitude} ({\WThree}) and {$0.1 \pm 0.1$\,\magnitude} ({\WFour}) lower than the elliptical aperture magnitudes. The standard deviation of the difference between the two measurement methods was found to be {$\sim$0.3\,\magnitude} for {\WThree} and {$\sim$0.4\,\magnitude} for {\WFour}. These standard deviations were added to the estimated uncertainties of the profile-fitting photometry magnitudes. Table \ref{T:EIG_ALFALFA} lists data from the ALFALFA {$\alpha$}.40 catalogue \citep{2011AJ....142..170H}. For each EIG that was detected by ALFALFA, the velocity width of the HI line profile, {\Whalf}, corrected for instrumental broadening but not for disc inclination, is listed. This is followed by the total HI line flux, {\FHI}, the estimated signal-to-noise ratio of the detection, {\SNR}, and the HI mass content, {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}. Finally, the category of the HI detection, Code, is listed. Code 1 refers to a source of SNR and general qualities that make it a reliable detection.
Code 2 refers to a source with $\SNR \lesssim 6.5$ that does not qualify for code 1 but was matched with a counterpart with a consistent optical redshift, and is very likely to be real. \input{Tables/EIG_ALFALFA} \rem{label = {T:EIG_ALFALFA}} \subsection{Star formation rate} \label{s:rsltsSFR} SFRs of the EIGs were calculated using the WO {\Halpha} measurements and the WISE {\WThree} and {\WFour} measurements. First, the {\Halpha} flux and the WISE {\WThree} and {\WFour} apparent magnitudes were corrected for Galactic extinction as described in section \ref{s:ObsNPrc_AbsMagLum} (the corrections for WISE were quite small, up to {0.018\,\magnitude} in W3, and up to {0.013\,\magnitude} in W4). Then, the Galactic-extinction-corrected {\WThree} and {\WFour} magnitudes were converted to fluxes using the procedure described in section IV.4.h.i.1 of \cite{2013wise.rept....1C}\rem{\footnote{http://wise2.ipac.caltech.edu/docs/release/allsky/expsup/sec4\_4h.html\#conv2flux}} for a constant power-law spectrum (the same method as used by \citealt{2014MNRAS.438...97W}). The {\Halpha}, {\WThree} and {\WFour} fluxes were then converted to luminosities ($\ifmmode L\else $L$\fi_{\Halpha, obs}$, $\SpecLum \left( 12\,\um \right)$ and $\SpecLum \left( 22\,\um \right)$ respectively) as described in section \ref{s:ObsNPrc_AbsMagLum}. \cite{2014MNRAS.438...97W} found that the dust-extinction-corrected {\Halpha} luminosity, $\ifmmode L\else $L$\fi_{\Halpha, corr}$, can be accurately derived from the observed {\Halpha} luminosity and either the {\WThree} or {\WFour} band using: \begin{equation} \begin{IEEEeqnarraybox*}{rCl} \ifmmode L\else $L$\fi_{\Halpha, corr} & = & \ifmmode L\else $L$\fi_{\Halpha, obs} + 0.038 \cdot \nu \SpecLum \left( 12\,\um \right) \\ \ifmmode L\else $L$\fi_{\Halpha, corr} & = & \ifmmode L\else $L$\fi_{\Halpha, obs} + 0.035 \cdot \nu \SpecLum \left( 22\,\um \right) \end{IEEEeqnarraybox*} \label{e:SFR_HaWISE_crct} \end{equation} These relations are independent of the metallicity. The relation that uses $\SpecLum \left( 12\,\um \right)$ was found to be slightly more accurate. The corrected {\Halpha} luminosity, $\ifmmode L\else $L$\fi_{\Halpha, corr}$, was calculated using equation \eqref{e:SFR_HaWISE_crct} for both {\WThree} and {\WFour}, and the average of the two results was used. If for one of the bands only a lower limit to the magnitude was given, the result of the other band was used. If both bands had only lower limits to their magnitudes, the nominal SFR was calculated assuming zero IR dust emission \markChange{(i.e., with $\SpecLum \left( 12\,\um \right) = \SpecLum \left( 22\,\um \right) = 0$)} and the effect of the possible IR dust emission was added to the uncertainty of the measurement \markChange{($\ifmmode L\else $L$\fi_{\Halpha, corr}$ was calculated using both options of \eqref{e:SFR_HaWISE_crct} with uncertainties in $\SpecLum$ calculated using the 0.95 confidence lower limit of the {\WThree} and {\WFour} fluxes, and the option with the lower uncertainty was used)}. Finally, the dust-extinction-corrected {\Halpha} luminosity, $\ifmmode L\else $L$\fi_{\Halpha, corr}$, was used to calculate the SFR using the following equation adapted from \cite{2012ARA&A..50..531K}, \cite{2011ApJ...737...67M} and \cite{2011ApJ...741..124H}: \begin{equation} \ifmmode \mbox{log}\else log\fi\, \frac{SFR}{\MsunPerYr} = \ifmmode \mbox{log}\else log\fi \left( \frac{\ifmmode L\else $L$\fi_{\Halpha, corr}}{\mbox{erg\,s}^{-1}} \right) - 41.27 \label{e:SFR_GenCalibKen2009} \end{equation} \vspace{12pt} Table \ref{T:EIG_StarFormation} lists measured star formation properties of the sample galaxies.
For each EIG, the equivalent width, EW, and the {\Halpha} flux, $F_{\Halpha}$, are listed. These are followed by the WISE magnitudes, {\magWThree} and {\magWFour}, used for calculating the {\Halpha} flux that was extinguished within the EIG. Listed last are the fraction of the total {\Halpha} flux extinguished within the EIG, $\mbox{frac}_{{\Halpha}, \mbox{\scriptsize ext}}$, and the SFR. \input{Tables/EIG_StarFormation} \rem{label = {T:EIG_StarFormation}} The {\Halpha} flux and EW were measured for each region of interest within the EIGs (plotted in figures \ref{f:RHacolorImgEIG-1}, \ref{f:RHacolorImgEIG-2} and \ref{f:RHacolorImgEIG-3}). Table \ref{T:EIG_PlgnHaFlux} lists for each region the {\Halpha} flux as a fraction of the EIG's total {\Halpha} flux. Table \ref{T:EIG_PlgnEW} lists the EWs of the regions. The regions of interest were defined with some additional area around the star-forming regions so that all the {\Halpha} flux would be measured. Therefore, the EWs of star-forming regions may be larger than those listed in Table \ref{T:EIG_PlgnEW}. Similarly, the {\Halpha} flux fractions of the star-forming regions may be smaller than those listed in Table \ref{T:EIG_PlgnHaFlux}, since some diffuse {\Halpha} flux from the surrounding area may have been included in the measurement. \input{Tables/EIG_PlgnHaFlux} \rem{label = {T:EIG_PlgnHaFlux}} \input{Tables/EIG_PlgnEW} \rem{label = {T:EIG_PlgnEW}} As can be seen, the {\Halpha} flux fractions of the distinct star-forming regions of EIGs do not account for the entire {\Halpha} flux. In most EIGs the diffuse {\Halpha} is a significant component. In many of the EIGs (e.g., 1s-13) there are no detectable star-forming regions at all. In some of these the diffuse {\Halpha} component is not detectable in the {\nHa} images, even though the total {\Halpha} EW, shown in Table \ref{T:EIG_StarFormation}, is considerable. This is a result of noise in the {\nHa} images, which was reduced in the EW measurements by averaging over the whole galaxy. This noise is much less significant in the {\R} images, because the spectral width of the {\R} filter is more than an order of magnitude larger than the {\nHa} EW of the EIGs for which {\Halpha} emission is not easily detectable in the {\nHa} images. The fact that diffuse {\Halpha} is the dominant component in most EIGs indicates that a possible active galactic nucleus (AGN) contribution to the {\Halpha} flux is insignificant. This is supported by the fact that SDSS did not classify any of the EIGs' measured spectra as containing detectable AGN emission lines. Since the {\Halpha} measurements utilize narrow bands, they include a contribution of the [NII] lines flanking the {\Halpha} line. For the central parts of the 22 EIGs that have SDSS spectra this correction was found to be: $\ifmmode \mbox{log}\else log\fi \left( F_{\Halpha} / F_{\Halpha+[NII]} \right) = -0.07 \pm 0.05$ (with 0.95 confidence level). The correction factor for a whole galaxy is expected to be significantly lower than this value, since central parts of galaxies typically have high metal abundance and high [NII] to {\Halpha} flux ratio compared to those measured for whole galaxies. This difference in the correction factor is expected to be more significant for EIGs in which the diffuse {\Halpha} is the dominant component. In light of this we chose not to correct for these effects.
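To summarize the SFR derivation of this subsection in a concrete form, the following short Python sketch chains together the dust correction of eq.~\eqref{e:SFR_HaWISE_crct} and the calibration of eq.~\eqref{e:SFR_GenCalibKen2009}. It is an illustration only; the function name and the example luminosities are ours and do not correspond to any specific EIG.
\begin{verbatim}
import numpy as np

def sfr_from_halpha_wise(L_Ha_obs, nuL_12um=None, nuL_22um=None):
    """Illustrative SFR estimate from H-alpha and WISE luminosities.

    L_Ha_obs : observed H-alpha luminosity (Galactic-extinction corrected) [erg/s]
    nuL_12um : nu*L_nu at 12 micron (W3) [erg/s], or None if unavailable
    nuL_22um : nu*L_nu at 22 micron (W4) [erg/s], or None if unavailable
    Returns the SFR in Msun/yr.
    """
    # Dust-corrected H-alpha luminosity (eq. e:SFR_HaWISE_crct), averaged
    # over the available WISE bands.
    corrections = []
    if nuL_12um is not None:
        corrections.append(L_Ha_obs + 0.038 * nuL_12um)
    if nuL_22um is not None:
        corrections.append(L_Ha_obs + 0.035 * nuL_22um)
    if not corrections:
        corrections = [L_Ha_obs]  # nominal value assuming zero IR dust emission
    L_Ha_corr = np.mean(corrections)
    # SFR calibration (eq. e:SFR_GenCalibKen2009).
    return 10.0 ** (np.log10(L_Ha_corr) - 41.27)

# Example with made-up luminosities (not an actual EIG measurement):
print(sfr_from_halpha_wise(L_Ha_obs=1.0e41, nuL_12um=5.0e41, nuL_22um=4.0e41))
\end{verbatim}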
\subsection{Model fitting} \label{sec:ResultsModel} The SFH, dust attenuation and stellar mass of the EIGs were estimated by fitting a five-parameter model to their UV-to-near-IR spectral energy distributions (SEDs). The model assumes that the SFH can be described by a first population of stars with an exponentially decreasing or increasing star formation rate (SFR), plus a possible additional recent star-formation component (a second population). The second population is modelled as an instantaneous star formation that occurred $1\,\Myr$ ago and is meant to compensate for a possible recent deviation from an exponential SFH, which may have a significant effect on the emission from the galaxy (especially in the UV and {\Halpha}). The model can also describe a scenario of a constant star formation or a sudden star formation (first population) with or without a recent star formation burst (second population). This model is obviously a simplification of the actual SFH and is limited in the diversity of possible scenarios. However, not much more can be done given the available measured SED points. \vspace{12pt} The five free parameters of the model are: \begin{description} \item[$\ifmmode M \else $M$ \fi_{1}$] - The mass of the first population of stars, formed over the history of the galaxy (integral over time of the SFR of the first population). \item[$Age_{1}$] - The look-back time of the beginning of star formation of the first population. \item[$\tau$] - The e-folding time of the exponentially decreasing (positive $\tau$) or increasing (negative $\tau$) SFH of the first population. $\tau \ll Age_{1}$ indicates a sudden star formation, while $\tau \gg Age_{1}$ indicates an almost constant star formation. \item[\EBV] - The {\BMinV} colour excess that results from dust within the galaxy. \item[$\ifmmode M \else $M$ \fi_{2}$] - The mass of the second population of stars that was created. \item[] \end{description} The model fitting procedure used Bayesian statistical inference with uniform prior probability distributions of $\ifmmode \mbox{log}\else log\fi \left( \ifmmode M \else $M$ \fi_{1} \right)$, $\ifmmode \mbox{log}\else log\fi \left( Age_{1} \right)$, $\tau^{-1}$, {\EBV} and $\ifmmode \mbox{log}\else log\fi \left( \ifmmode M \else $M$ \fi_{2} \right)$. The parameters were restricted to the following ranges: $10^{7}\,\ifmmode \Mass_\sun\else $\Mass_\sun$\fi < \ifmmode M \else $M$ \fi_{1} < 1.6 \cdot 10^{15}\,\ifmmode \Mass_\sun\else $\Mass_\sun$\fi$, \; $10^{8}\,\yr < Age_{1} < $ age of the Universe at the redshift of the galaxy, \; $0 < \EBV < 2$, \; $2.7\,\ifmmode \Mass_\sun\else $\Mass_\sun$\fi < \ifmmode M \else $M$ \fi_{2} < 10^{7}\,\ifmmode \Mass_\sun\else $\Mass_\sun$\fi$. The metallicities of the first and second populations were set to {0.4\,\ifmmode \Metallicity_\sun\else $\Metallicity_\sun$\fi} and {1\,\ifmmode \Metallicity_\sun\else $\Metallicity_\sun$\fi} respectively. The model fitting was performed using the GalMC software \citep{IAU:8669162, 2011ApJ...737...47A}\footnote{http://ctp.citytech.cuny.edu/$\sim$vacquaviva/web/GalMC.html}. GalMC is a Markov Chain Monte Carlo (MCMC) algorithm designed to fit the SEDs of galaxies to infer physical properties such as age, stellar mass, dust reddening, metallicity, redshift, and star formation rate. The Markov chains produced by GalMC were analysed using the GetDist software, a part of the CosmoMC software \citep{2002PhRvD..66j3511L}\footnote{http://cosmologist.info/cosmomc/}.
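As a rough illustration of this parameterization (and not of the actual GalMC implementation), the following Python sketch evaluates the SFR of the first, exponential population as a function of lookback time. The normalization follows from the definition of $\ifmmode M \else $M$ \fi_{1}$ as the time integral of the first population's SFR; the function and variable names are ours.
\begin{verbatim}
import numpy as np

def sfr_first_population(lookback_time, M1, Age1, tau):
    """SFR [Msun/yr] of the exponential (first) population.

    lookback_time : time before the present [yr]
    M1            : total mass formed by the first population [Msun]
    Age1          : lookback time at which star formation began [yr]
    tau           : e-folding time [yr]; tau > 0 gives a decreasing SFR,
                    tau < 0 an increasing SFR
    """
    t_since_start = Age1 - np.asarray(lookback_time, dtype=float)
    # Normalization such that the SFR integrates to M1 over the population's
    # lifetime; the expression holds for both signs of tau.
    norm = M1 / (tau * (1.0 - np.exp(-Age1 / tau)))
    sfr = norm * np.exp(-t_since_start / tau)
    return np.where((t_since_start < 0) | (t_since_start > Age1), 0.0, sfr)

# The second population adds an instantaneous burst of mass M2 at a lookback
# time of 1 Myr, i.e. a delta function on top of this smooth component.
# Illustrative values only: a 10^9 Msun population, 5 Gyr old, tau = 2 Gyr.
print(sfr_first_population(lookback_time=0.0, M1=1e9, Age1=5e9, tau=2e9))
\end{verbatim}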
The stellar emission was calculated using the Charlot \& Bruzual 2007 stellar population synthesis model (Charlot \& Bruzual, private communication, CB07\footnote{http://www.bruzual.org/}) assuming a Salpeter initial mass function. Nebular emission was calculated following \cite{1998ApJ...497..618S} and \cite{2009A&A...502..423S}, as described in section 2.2.4 of \cite{2011ApJ...737...47A}. Dust extinction within the EIG was calculated from the {\EBV} parameter using the \cite{1994ApJ...429..582C}\rem{Calzetti} law with $R'_{V} = 4.05$ \citep{2000ApJ...533..682C}. The emission of the EIGs was also corrected for absorption by neutral hydrogen in the intergalactic medium (IGM) following \cite{1995ApJ...441...18M}. The input to the model-fitting software included the EIG's redshift and the data from measured bands with wavelengths shorter than {3\,\um} (the CB07 model does not correctly estimate the emission beyond the first PAH feature at {$\sim$3\,\um}). Each band measurement was corrected for Galactic extinction and then converted to flux, which was used as input to the GalMC software. The calibrated Bessell U, B, V, R and I magnitudes, measured at the Wise Observatory, were translated to AB magnitudes (and then to flux) using the relations listed in Table 1 of \cite{2007AJ....133..734B}. The 2MASS magnitudes were converted to fluxes using the zero magnitude isophotal monochromatic intensities listed in Table 2 of \cite{2003AJ....126.1090C}. Foreground Galactic extinction was corrected as described in section \ref{s:ObsNPrc_AbsMagLum}. For each EIG four MCMC runs were made, each with a different set of free-parameter values as its starting point. Best-fitting parameters and covariance matrices of these four runs were then used as inputs for continued runs. Each run included 50000 sampled parameter sets. Only one of each EIG's MCMC runs (chains) was used for analysis. This run was chosen based on the speed of convergence of the chain, on its average likelihood, on its multiplicity (number of trial steps before moving to the next location in parameter space), and on how well it covered the parameter space. Chains that probed the parameter space with $Age_{1} < 0.2\,\Gyr$ for a large fraction of their length were disfavoured (if another good chain existed, it was selected instead). Models were not fitted to EIG 1s-05 and EIG 2s-04, because these do not have the necessary SED data. EIG 1s-05 has only {21\,cm} data, and EIG 2s-04 has only {\SDSSg}, {\SDSSr}, {\SDSSi} and {\SDSSz} measurements (due to a bright foreground star close to it). The model that was fitted to EIG 1s-11 did not reproduce its {\Halpha} emission successfully ($\mbox{EW} = 28 \pm 4\,\Angst$). The best-fitting parameters of all MCMC runs of EIG 1s-11 yielded lower EW values. \vspace{12pt} Marginalized posterior distributions (the predicted probability distribution functions, PDFs) of the free parameters and of the calculated total mass of stars, {\Mstar}, considering mass-loss mechanisms, were calculated from the selected chain of each EIG. Figure \ref{f:ResGalMC_1D} in Appendix \ref{App:ModelledEIG_Properties} shows, for each of the modelled EIGs, the marginalized posterior distributions of $Age_{1}$, $\tau^{-1}$, \EBV, $\ifmmode M \else $M$ \fi_{2}$ and {\Mstar}. Table \ref{T:EIG_MStar} lists the {\Mstar} values predicted by the model. \input{Tables/EIG_MStar} \rem{label = {T:EIG_MStar}} Two-dimensional marginalized PDFs of pairs of the model parameters were also analysed.
It was found that in most cases there is some dependence between pairs of the free parameters. The $\ifmmode M \else $M$ \fi_{1}$ and $Age_{1}$ parameters were found to be highly correlated. The $\left( Age_{1}, \ifmmode M \else $M$ \fi_{1} \right)$ two-dimensional marginalized PDFs do not seem to depend on whether they are part of the EIG-1, EIG-2 or EIG-3 subsample, i.e. the $\left( Age_{1}, \ifmmode M \else $M$ \fi_{1} \right)$ space is filled similarly by the \markChange{three subsamples}. \subsection{Dynamic mass} We estimated dynamic mass lower limits for the EIGs using the ALFALFA-measured HI rotation, the elliptical isophotes fitted to the combined {\R} images (described in section \ref{sec:Wise_Photometry}) and the surface brightness measurements, $\mu_{\mbox{\scriptsize R}}$. The calculations were based on the methods described by \cite{2014RvMP...86...47C}. First, we estimated the inclination of the galaxies using eq.~6 of \cite{2014RvMP...86...47C} for the $\mu_{\mbox{\scriptsize R}} = 24\,\magAsecSq$ elliptical isophote: \begin{equation} i \cong \cos ^{-1} \sqrt{\frac{\left( b_{24}/a_{24} \right)^2 - q_0^2}{1 - q_0^2}} \label{e:rsltsInclination} \end{equation} where: \begin{description} \item[$i$] is the estimated inclination of the galaxy; \item[$a_{24}$, $b_{24}$] are the semi-major axis and semi-minor axis (respectively) at $\mu_{\mbox{\scriptsize R}} = 24\,\magAsecSq$; \item[$q_0$] is the axial ratio of a galaxy viewed edge on. \item[] \end{description} The inclination of EIGs classified as early-types was not measured, because their $q_0$ is unknown. For a sample of 13482 spiral galaxies \cite{1992MNRAS.258..404L} found $q_0 \cong 0.2$ by applying statistical techniques to explore triaxial models. \cite{2012MNRAS.425.2741H} found $q_0 \cong 0.13$ for spirals using SDSS data on a sample of 871 edge-on galaxies. Here we adopted $q_0 = 0.17 \pm 0.05$ for the galaxies classified as late-types or `unknown' (see Table \ref{T:Res_Morphology}). We measured $a_{24}$ using the linear fit of figure \ref{f:RSurBrightR}. The semi-minor to semi-major axes ratio, $b_{24}/a_{24}$, was calculated by linear interpolation of its values for the EIG's ellipse isophotes just below and just above $a_{24}$. The speed of rotation of the HI gas, $v_{rot}$, was calculated from the HI velocity width, {\Whalf}, listed in Table \ref{T:EIG_ALFALFA} and the inclination, $i$, using: \begin{equation} v_{rot} = \frac{\Whalf}{2 \cdot \sin i} \label{e:v_rot} \end{equation} The dynamic mass lower limit, $M_{dyn,24}$, was then calculated using: \begin{equation} \ifmmode M \else $M$ \fi_{dyn,24} = \frac{v_{rot}^2 \cdot a_{24}}{G} \end{equation} $M_{dyn,24}$ is a lower limit to the galactic total mass, since the extent of the neutral gas in spiral galaxies can often exceed twice that of the stars \citep{2014RvMP...86...47C}, and the dark matter (DM) halo may extend even further. An additional source of uncertainty in $M_{dyn,24}$ comes from the assumption behind \eqref{e:v_rot} that all of the HI velocity width, {\Whalf}, is due to the rotational velocity, $v_{rot}$. This may somewhat increase the $M_{dyn,24}$ estimate, but probably by much less than it is decreased due to the underestimation of the dark mass diameter. Table \ref{T:EIG_DynamicMass} lists $\ifmmode M \else $M$ \fi_{dyn,24}$ of EIGs along with the values used for its calculation.
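As a simple consistency sketch of the three relations above (eq.~\eqref{e:rsltsInclination}, eq.~\eqref{e:v_rot} and the $\ifmmode M \else $M$ \fi_{dyn,24}$ expression), the following Python snippet chains them together. The numerical inputs are purely illustrative and are not values taken from Table \ref{T:EIG_DynamicMass}.
\begin{verbatim}
import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def dynamic_mass_lower_limit(b_over_a_24, a24_kpc, W50_kms, q0=0.17):
    """Lower limit on the mass enclosed within the mu_R = 24 mag/arcsec^2 isophote.

    b_over_a_24 : apparent axis ratio at the 24 mag/arcsec^2 isophote
    a24_kpc     : semi-major axis of that isophote [kpc]
    W50_kms     : HI velocity width corrected for instrumental broadening [km/s]
    q0          : assumed intrinsic axial ratio of an edge-on disc
    """
    # Inclination from the apparent axial ratio (eq. e:rsltsInclination).
    cos_i = np.sqrt((b_over_a_24**2 - q0**2) / (1.0 - q0**2))
    i = np.arccos(cos_i)
    # Rotation speed from the HI line width (eq. e:v_rot).
    v_rot = W50_kms / (2.0 * np.sin(i))
    # Enclosed mass assuming circular rotation at a24.
    return v_rot**2 * a24_kpc / G

# Illustrative example: b/a = 0.6, a24 = 10 kpc, W50 = 150 km/s.
print('%.2e Msun' % dynamic_mass_lower_limit(0.6, 10.0, 150.0))
\end{verbatim}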
Table \ref{T:EIG_DynamicMass} also lists the ratio of the measured dynamic mass to stellar plus HI mass, $\ifmmode M \else $M$ \fi_{dyn,24} / \left( \Mstar + \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi \right)$. \input{Tables/EIG_DynamicMass} \rem{label = {T:EIG_DynamicMass}} \section{\\EIG Specific Data} \label{App:EIGdata} This appendix contains general notes for some of the EIGs. \subsection*{EIG 1s-05} No optical counterpart could be identified for EIG 1s-05 (an ALFALFA object). In the Wise Observatory images, no {\Halpha} emission was identified around the ALFALFA coordinates. Within one arcminute from the ALFALFA coordinates of EIG 1s-05, all galaxies detected by SDSS have ${\SDSSg} > 21.6$, and none have spectroscopic redshifts. All GALEX detected objects in the same region have ${\magFUV} > 24$ and ${\magNUV} > 21$. EIG 1s-05 may, therefore, be a `dark galaxy' with an extremely high HI to stellar mass ratio and a very low SFR. It may also be an \markChange{ALFALFA} false detection, even though its SNR is 8.1 and it is considered a `code 1' object, i.e. a source of SNR and general qualities that make it a reliable detection \citep{2011AJ....142..170H}. \subsection*{EIG 1s-09} SDSS DR10 shows an edge-on galaxy, SDSS J112157.63+102959.6, {$\sim$13\,\ifmmode ^{\prime\prime}\else $^{\prime\prime}$\fi} east of the centre of EIG 1s-09. The angular size of SDSS J112157.63+102959.6 is similar to that of EIG 1s-09. Its magnitude is ${\SDSSg} = 18.6$, compared to ${\SDSSg} = 16.9$ of EIG 1s-09. The redshift of SDSS J112157.63+102959.6 is unknown. Although there is a possibility that SDSS J112157.63+102959.6 is a close neighbour of EIG 1s-09, this seems unlikely, since tidal tails are visible neither in the SDSS images nor in the images shown in figure \ref{f:RHacolorImgEIG-1} \markChange{(which combine a 40 minute exposure in the {\R} band and a 120 minute exposure in an \Halpha band, both using the WO {1\,meter} telescope)}. \subsection*{EIG 1s-10} SDSS DR10 shows two objects at an angular distance of {$\sim$6\,\ifmmode ^{\prime\prime}\else $^{\prime\prime}$\fi} from the centre of EIG 1s-10. One is north of EIG 1s-10, and is classified as a star by SDSS DR10. The second, classified as a galaxy, is south-west of EIG 1s-10. Neither object has a measured redshift. Although there is a possibility that one or both of these are galaxies merging with EIG 1s-10, this seems unlikely, since tidal tails are visible neither in the SDSS images nor in the images shown in figure \ref{f:RHacolorImgEIG-1} \markChange{(which combine a 100 minute exposure in the {\R} band and a 260 minute exposure in an \Halpha band, both using the WO {1\,meter} telescope)}. \subsection*{EIG 1s-11} The only redshift measurement found for EIG 1s-11 is from \cite{1993A&AS...98..275B}, which quotes \cite{1987ApJS...63..247H}. This is an HI measurement made at the Arecibo Observatory. The HI profile for the galaxy was not published by \cite{1987ApJS...63..247H}. It is possible that the measurement ($4725 \pm 10$\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi) is a result of HI confusion, and that EIG 1s-11 is actually a part of the Virgo cluster. \subsection*{EIG 1s-14} EIG 1s-14 is projected close to a bright foreground star, which prevented SDSS from measuring its spectrum. This also affected the accuracy of the magnitude and {\Halpha} flux measurements made here. Its relative {\Halpha} flux uncertainty was 0.3. Its estimated uncertainty in {\SDSSu}{\SDSSg}{\SDSSr}{\SDSSi}{\SDSSz} was {0.1\,\magnitude}.
\subsection*{EIG 1a-02} SDSS DR10 shows a galaxy, SDSS J005629.17+241913.3, {$\sim$2\,\ifmmode ^\prime\else $^\prime$\fi} west of EIG 1a-02 with unknown redshift. The angular size of SDSS J005629.17+241913.3 is not very different from that of EIG 1a-02. Its magnitude is ${\SDSSg} = 16.6$, compared to ${\SDSSg} = 17.0$ of EIG 1a-02. Although there is a possibility that SDSS J005629.17+241913.3 is a close neighbour of EIG 1a-02, this \markChange{is probably not the case}, since no tidal tails or other signs of interaction are visible in the SDSS images. \markChange{However, since EIG 1a-02 was not imaged using the WO {1\,meter} telescope we cannot be certain of this, as tidal tails and other fine structure features are hard to see or detect in shallow images like those of SDSS.} \subsection*{EIG 1a-04} {\Halpha} images of EIG 1a-04 showed strong star formation in LEDA 213033, a galaxy separated by {107\ifmmode ^{\prime\prime}\else $^{\prime\prime}$\fi} from EIG 1a-04. Since LEDA 213033 has no measured redshift, its distance from EIG 1a-04 is unknown. The fact that it shows emission in the two narrow {\Halpha} filters used for the measurement, indicates that its redshift is $\cz \cong 6000 \pm 1500\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi$. Therefore, the probability that it is less than {300\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi} away from EIG 1a-04 is estimated to be $\sim$0.1. No sign of interaction between EIG 1a-04 and LEDA 213033 was detected. \subsection*{EIG 2s-04} The {\Halpha} flux of EIG 2s-04 could not be measured due to a foreground star ($\SDSSr = 14.30$) at a projected distance of {12\ifmmode ^{\prime\prime}\else $^{\prime\prime}$\fi}. Although EIG 2s-04 has a GALEX measurement (in NUV only) it was not used, since it is contaminated with flux from this foreground star. \subsection*{EIG 2s-06} A foreground star of magnitude $\SDSSr = 15.6$, which is comparable to that of EIG 2s-06, is projected close to the EIG's centre. Its presence interfered with the photometric measurements, somewhat reducing the measured flux. The SDSS automatic photometry of EIG 2s-06 did not produce reliable results; the EIG was identified as two galaxies separated by the foreground star. GALEX measurements for this EIG were not used, since they also are contaminated by the foreground star. \markChange{The morphological type of EIG 2s-06 was not classified because of the foreground star.} \subsection*{EIG 3s-06} This is the only EIG that passes the isolation criterion using the ALFALFA dataset, but had neighbours closer than {3\,\Mpch} in the NED dataset. It was classified as part of subsample EIG-3s, because all of its NED neighbours are more than {2\,\Mpch} away from it. \section{Analysis} \label{ch:Analysis} \subsection{Colours of the EIGs} Colour-mass and colour-colour diagrams of large scale surveys show that galaxies tend to populate two main regions, the `blue cloud' of star-forming galaxies and the `red sequence' of quiescent galaxies, with a small fraction of galaxies in a `green valley' range in between \citep{2001AJ....122.1861S, 2006MNRAS.373..469B, 2014MNRAS.440..889S}. Star-forming main sequence galaxies populate the blue cloud, whether their star formation started recently or a long time ago. When star formation is quenched, galaxies leave the main sequence and their changing colours can be interpreted as a reflection of the quenching process \citep{2014MNRAS.440..889S}. 
Figure \ref{f:Res_ColorMass} shows a {\uMinr} to {\Mstar} colour-mass diagram of the EIGs, with the approximate edges of the `green valley' marked in green bold lines. The {\uMinr} colour was corrected both for Galactic extinction (as described in section \ref{s:ObsNPrc_AbsMagLum}) and for dust within the EIG using the \cite{2000ApJ...533..682C} extinction law (with $R'_{V} = 4.05$). \begin{figure*} \begin{centering} \includegraphics[width=14cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_colorMass.pdf} \caption [Colour to stellar mass diagram] { Colour-to-stellar mass diagram. {\uMinr}, corrected for both Galactic extinction and dust extinction within the EIG, as a function of stellar mass, {\Mstar}. The green thick lines show the limits of the `green valley', based on equations (1) and (2) of \cite{2014MNRAS.440..889S}. The thin blue line shows a linear fit to the EIG data. The dashed blue lines show the $\pm 1\sigma$ deviation from this fit. Filled symbols indicate EIGs classified as early-types. \label{f:Res_ColorMass} } \end{centering} \end{figure*} It is evident from Figure \ref{f:Res_ColorMass} that most EIGs are `blue cloud' galaxies. There are no EIGs in the `red sequence' and only one that is certainly within the `green valley' (EIG 1a-04). Based on a comparison between the measurements and the `green valley' limit shown in Figure \ref{f:Res_ColorMass}, we can conclude with 0.95 confidence that the probability for an EIG to be in the `blue cloud' is $>$0.76. The probability for an EIG to be in the `red sequence' is $<$0.12. A linear relation was fitted to the measured EIG points of Figure \ref{f:Res_ColorMass} (the thin blue line in the figure): $\uMinr_{\mbox{\scriptsize dust corrected}} = 0.52 \cdot \ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) - 3.61$. The expected standard deviation in {\uMinr} around this fit is {0.23\,\magnitude} (marked in the figure by dashed blue lines). The 0.52 slope of this fit is significantly larger than the slope of the `green valley' limits, 0.25 \citep{2014MNRAS.440..889S}. From this it can be concluded that the higher the stellar mass, {\Mstar}, of an EIG, the closer it is to the `red sequence'. EIGs with stellar mass smaller than $10^{(10.6 \pm 0.9)}\,\ifmmode \Mass_\sun\else $\Mass_\sun$\fi$ are typically `blue cloud' galaxies. A similar colour-mass relation probably holds also for less isolated galaxies, as indicated by the results of \cite{2012A&A...540A..47F}, who studied the AMIGA sample. They measured the colour-luminosity correlation in different morphological subtypes, and found that the more massive spirals show redder colours, and that there is little evidence for `green valley' galaxies in the AMIGA sample. \vspace{12pt} Figure \ref{f:Res_NUVu_ur} shows an {\NUVMinu} vs.~{\uMinr} colour-colour diagram of the EIGs, with the approximate limit between the `blue cloud' and the `green valley' marked with a green line (the `blue cloud' is to the left of the line, and the `green valley' is to the right). These colours were corrected for Galactic extinction and dust within the EIG, as in Figure \ref{f:Res_ColorMass}. Other than distinguishing between blue and red galaxies, this diagram is useful for diagnosing how rapidly star formation quenches in `green valley' galaxies. The faster the star formation quenching is, the higher the {\NUVMinu} colour of galaxies would be as their {\uMinr} is gradually increased \citep[Fig.~7]{2014MNRAS.440..889S}.
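As a small aside, the linear colour-mass fit quoted earlier in this subsection can be written as a one-line helper; the following Python snippet is only an illustration of that fit (slope 0.52, intercept $-3.61$, scatter {0.23\,\magnitude}), with names of our choosing.
\begin{verbatim}
def expected_u_minus_r(log_mstar):
    """Dust-corrected u-r expected from the linear fit to the EIG colour-mass relation."""
    slope, intercept, scatter = 0.52, -3.61, 0.23  # fit coefficients and 1-sigma scatter [mag]
    return slope * log_mstar + intercept, scatter

# Example: a galaxy with log(Mstar/Msun) = 9.0 is expected to have u-r = 1.07 +/- 0.23.
print(expected_u_minus_r(9.0))
\end{verbatim}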
\begin{figure*} \begin{centering} \includegraphics[width=14cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_NUVu_ur.pdf} \caption [{\NUVMinu} vs.~{\uMinr} colour-colour diagram of the EIGs] { Dust-corrected {\NUVMinu} vs.~{\uMinr} colour-colour diagram of the EIGs. The green line shows the approximate limit between the `blue cloud' and the `green valley', based on Fig.~7 of \cite{2014MNRAS.440..889S} (the `blue cloud' is to the left of the line, and the `green valley' is to the right). Filled symbols indicate EIGs classified as early-types. \label{f:Res_NUVu_ur} } \end{centering} \end{figure*} The `green valley' galaxy, EIG 1a-04, is significantly redder in both colours, ${\uMinr} = 2.32 \pm 0.08$ and ${\NUVMinu} = 3.5 \pm 0.3$, compared to the other EIGs, as Figure \ref{f:Res_NUVu_ur} shows. A comparison to the simulated SFH scenarios from Fig.~7 of \cite{2014MNRAS.440..889S} shows that EIG 1a-04 fits a scenario of a galaxy that underwent, more than {1\,\Gyr} ago, a rapid star formation quenching (with an e-folding time significantly shorter than {1\,\Gyr}). The model fitted in this work (see Figure \ref{f:ResGalMC_1D}, page \pageref{f:ResGalMC_1D_2}, row 8) matches this scenario. Additional evidence supporting this scenario is that EIG 1a-04 was not detected by ALFALFA. This indicates that it might have lost most of its HI content, which, given its extremely high stellar mass, may once have been very high. It should be noted in this context that EIG 1a-04 is possibly not an extremely isolated galaxy (a possible false positive), as discussed in Appendix \ref{App:EIGdata}. Its {\Halpha} images show that LEDA 213033, a galaxy separated by {$\sim$2\ifmmode ^\prime\else $^\prime$\fi} from it, may be a neighbour less than {300\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi} away. Other EIGs with less than 0.90 confidence of being in the `blue cloud' are 2s-02, 2s-07, and 1s-11 (calculated from data shown in Figure \ref{f:Res_ColorMass}). From Figure \ref{f:Res_NUVu_ur} it is evident that EIG 1s-11 (as well as EIG 2a-01) deviates towards the red section. EIG 2s-02 is somewhat redder in {\uMinr} or bluer in {\NUVMinu} than the bulk. EIG 2s-07 seems to be well within the bulk of EIGs in Figure \ref{f:Res_NUVu_ur}, which indicates that it is a regular `blue cloud' galaxy (the data of Figure \ref{f:Res_ColorMass} indicate a 0.77 probability that it is in the `blue cloud'). \subsection{Comparison to the main sequence of star-forming galaxies} Most star-forming galaxies exhibit a tight, nearly linear correlation between galaxy stellar mass and SFR (on a log-log scale; \citealt{2007ApJ...660L..43N}). This correlation is termed `the main sequence of star-forming galaxies' (or simply `the main sequence'). Up to redshifts \z$\sim$2 the correlation changes only in its normalization \citep{2012ApJ...752...66L, 2014MNRAS.440..889S}. Models of \cite{2010ApJ...718.1001B} and \cite{2013ApJ...772..119L} suggest that the main sequence is a result of an equilibrium between galaxy inflows and outflows. For a specific range of redshift, \z, and stellar mass, \Mstar, the main sequence can be expressed as: \begin{equation} \ifmmode \mbox{log}\else log\fi \left( \frac{\SFR}{\MsunPerYr} \right) = \alpha \cdot \ifmmode \mbox{log}\else log\fi \left( \frac{\Mstar}{\ifmmode \Mass_\sun\else $\Mass_\sun$\fi} \right) + \beta \label{e:Intr_MS} \end{equation} where: \begin{description} \item[$\alpha$, $\beta$] are the free parameters fitted to the observed data. \item[] \end{description} The $\alpha$ and $\beta$ parameters vary somewhat with redshift.
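To make the comparison performed below explicit, the following Python sketch evaluates the piecewise main-sequence relation of \cite{2012ApJ...756..113H}, quoted in eq.~\eqref{e:Intr_MS_ALFALFA} below, and the offset of a galaxy from it. The function names and the example values are ours.
\begin{verbatim}
import numpy as np

def main_sequence_log_sfr(log_mstar):
    """log(SFR / Msun yr^-1) of the main sequence at a given log(Mstar / Msun)."""
    if log_mstar <= 9.5:
        alpha, beta = 0.851, -8.207
    else:
        alpha, beta = 0.241, -2.402
    return alpha * log_mstar + beta

def deviation_from_main_sequence(log_mstar, log_sfr):
    """Offset [dex] of a measured SFR from the main-sequence expectation."""
    return log_sfr - main_sequence_log_sfr(log_mstar)

# Illustrative example: log(Mstar/Msun) = 9.0 and SFR = 0.3 Msun/yr
# give an offset of about +0.03 dex, i.e. essentially on the main sequence.
print(deviation_from_main_sequence(9.0, np.log10(0.3)))
\end{verbatim}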
\cite{2007ApJS..173..267S} and \cite{2012ApJ...756..113H} indicated that below $\ifmmode \mbox{log}\else log\fi \left( \Mstar \right) \sim 9.5$ the slope, $\alpha$, of the main sequence increases. \cite{2012ApJ...756..113H} have studied a sample of local Universe ALFALFA galaxies with SDSS and GALEX photometry and have found the following main sequence relation: \begin{equation} \begin{IEEEeqnarraybox*}{rCl} \alpha & = & \begin{cases} 0.851 & \text{ for\quad} \ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) \leq 9.5 \\ 0.241 & \text{ for\quad} \ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) > 9.5 \end{cases} \\ \vspace{6pt} \\ \beta & = & \begin{cases} -8.207 & \text{ for\quad} \ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) \leq 9.5 \\ -2.402 & \text{ for\quad} \ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) > 9.5 \end{cases} \label{e:Intr_MS_ALFALFA}\rem{based on equation. 8 of \cite{2012ApJ...756..113H}} \end{IEEEeqnarraybox*} \end{equation} \begin{figure*} \begin{centering} \includegraphics[width=13.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_MainSeqSFR.pdf} \includegraphics[width=13.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_MainSeqSSFR.pdf} \caption [EIGs compared to the main sequence of star-forming galaxies] { EIGs compared to the main sequence of star-forming galaxies \markChange{({\SFR} vs. {\Mstar} in the top chart and {\SSFR} vs. {\Mstar} in the bottom chart)}. The solid blue lines are the best fit main sequence found by \cite{2012ApJ...756..113H} and described by \eqref{e:Intr_MS_ALFALFA}. The dashed blue lines indicate deviations of $\pm\,0.5\,\ifmmode \mbox{dex}\else dex\fi$ from the main sequence (a typical $1\,\sigma$ deviation of main sequence fits). Filled symbols indicate EIGs classified as early-types. \label{f:Res_MainSeq} } \end{centering} \end{figure*} Figure \ref{f:Res_MainSeq} shows the current SFR (upper plot) and SSFR (lower plot) of the EIGs as function of stellar mass, \Mstar, compared to the `main sequence' of \cite{2012ApJ...756..113H}. It indicates that, in general, EIGs fit the `main sequence of star-forming galaxies'. A fraction of $0.88^{+0.08}_{-0.16}$ of the EIGs fit the `main sequence' to within $\pm 0.5\,\ifmmode \mbox{dex}\else dex\fi$ (assuming that whether an EIG deviates by more than $\pm 0.5\,\ifmmode \mbox{dex}\else dex\fi$ or not follows a binomial distribution, and using a Wilson score interval with 0.95 confidence level). On average, the SFR of the EIGs is $0.1\,\ifmmode \mbox{dex}\else dex\fi$ lower than the main sequence, with a standard deviation of $0.4\,\ifmmode \mbox{dex}\else dex\fi$. This deviation from the main sequence is similar for all EIG sub-samples (1, 2 and 3). It may indicate a tendency of the EIGs to have slightly lower SFRs compared to main sequence galaxies. However, it could also result from differences between the SFR and {\Mstar} estimation methods used by \cite{2012ApJ...756..113H} and the ones used here. \markChange{\cite{2012ApJ...756..113H} derived stellar masses and SFRs by SED fitting of the seven GALEX and SDSS bands\rem{ (as described in section 4.1 of \citealt{2012AJ....143..133H})}, while in this work the {\Halpha} emission line and 2MASS data were also used when available for the SED fitting. Furthermore, in this work the SED fitting was used only for {\Mstar} estimation. 
The SFR was estimated based on {\Halpha} and WISE fluxes.} The EIGs that deviate by more than $0.5\,\ifmmode \mbox{dex}\else dex\fi$ in {\SFR} from the main sequence are EIG 1s-11 ($-0.7 \pm 0.2\,\ifmmode \mbox{dex}\else dex\fi$), EIG 2s-02 ($-0.6 \pm 0.1\,\ifmmode \mbox{dex}\else dex\fi$), EIG 2s-08 ($1.49 \pm 0.09\,\ifmmode \mbox{dex}\else dex\fi$) and EIG 2a-02 ($-0.73 \pm 0.06\,\ifmmode \mbox{dex}\else dex\fi$). EIG 2s-08 is, therefore, the only EIG known to deviate significantly from the main sequence. It has the highest {\SSFR} of all the measured EIGs, $\ifmmode \mbox{log}\else log\fi \left( \SSFR/ \yr^{-1} \right) = -7.80 \pm 0.09$, as well as the highest {\Halpha} EW ($460 \pm 39\,\Angst$). It also has the lowest stellar mass, $\ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) = 7.28 \pm 0.06$, as well as the lowest model estimated age, $\ifmmode \mbox{log}\else log\fi(Age_{1}/\yr) = 8.4 \pm 0.2$. Based on this we conclude (with 0.95 confidence) that the probability for an EIG to have an SFR that deviates significantly from the main sequence is $<0.16$. For the LOG catalogue of isolated galaxies, \cite{2013AstBu..68..243K} measured {\SSFR} values (as a function of {\Mstar}) that are lower than those measured here for the EIGs and lower than the main sequence as measured by \cite{2012ApJ...756..113H}. This may be a result of their different method of estimating {\Mstar}. \cite{2013AstBu..68..243K} used {\Ks} band measurements assuming a {\Ks} luminosity-to-stellar-mass ratio equal to that of the Sun, as opposed to fitting a model to measurements in several bands as was done here and by \cite{2012ApJ...756..113H}. They also observed that almost all of the LOG galaxies have $\ifmmode \mbox{log}\else log\fi \left( \SSFR / \yr^{-1} \right)$ lower than $-9.4$ \citep[Fig. 4]{2013AstBu..68..243K}. As the lower plot of Figure \ref{f:Res_MainSeq} indicates, the $\ifmmode \mbox{log}\else log\fi \left( \SSFR / \yr^{-1} \right)$ limit we measured for the EIGs is $-8.9$, with the exception of EIG 2s-08, which is above this value. \subsection{Mass histograms} \label{s:Res_MassHisgrms} The stellar mass, {\Mstar}, HI mass, {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}, and dynamic mass, $\ifmmode M \else $M$ \fi_{dyn,24}$, of EIGs were analysed in a fashion similar to the analysis of the dark matter (DM) subhalo mass, {\Mhalo}, of the Millennium II-SW7 simulation (Mill2; \citealt{2013MNRAS.428.1351G}) described in section 3.5 of \markChange{SB16}. Figures \ref{f:Res_MstarHist}, \ref{f:Res_M_HI_Hist} and \ref{f:Res_Mdyn24Hist} show histograms of {\Mstar}, {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} and $\ifmmode M \else $M$ \fi_{dyn,24}$ (respectively) for the EIGs of the Spring and Autumn sky regions. Figure \ref{f:Res_MStarHI_Hist} shows histograms of $\left( \Mstar + \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi \right)$ of all EIGs for which {\Mstar} was estimated. For EIGs not detected by ALFALFA, $\Mstar$ was used as representing $\left( \Mstar + \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi \right)$. Thus, the $\left( \Mstar + \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi \right)$ statistics presented here are expected to be slightly biased towards lower masses and a wider distribution. Similarly, Figure \ref{f:Res_M_HI_Hist} includes only EIGs that were detected by ALFALFA, and is, therefore, expected to be slightly biased towards higher masses and a narrower distribution.
The right-hand side charts of figures \ref{f:Res_MstarHist}, \ref{f:Res_M_HI_Hist}, \ref{f:Res_MStarHI_Hist} and \ref{f:Res_Mdyn24Hist} may be compared to \markChange{Figure \ref{f:MhaloFromEIG_I}, an adaptation of Figure 6 of SB16,} which shows the simulation-based estimates of {\Mhalo} calculated for the combination of subsamples EIG-1 and EIG-2. \begin{figure*} \begin{centering} \includegraphics[width=14cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_MstarHist.pdf} \caption [Stellar mass histograms for EIGs] { Stellar mass, {\Mstar}, histograms for Spring and Autumn EIGs. The left chart shows the histograms for all the EIGs. The right chart shows the histograms of subsamples EIG-1 and EIG-2. \label{f:Res_MstarHist} } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=14cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_M_HI_Hist.pdf} \caption [HI mass histograms for EIGs] { HI mass, {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}, histograms for Spring and Autumn EIGs. The left chart shows the histograms for all the EIGs. The right chart shows the histograms of subsamples EIG-1 and EIG-2. \label{f:Res_M_HI_Hist} } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=14cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_MStarHI_Hist.pdf} \caption [Stellar plus HI mass histograms for EIGs] { Stellar plus HI mass, $\left( \Mstar + \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi \right)$, histograms for Spring and Autumn EIGs. The left chart shows the histograms for all the EIGs. The right chart shows the histograms of subsamples EIG-1 and EIG-2. \label{f:Res_MStarHI_Hist} } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=14cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_Mdyn24Hist.pdf} \caption [Dynamic mass histograms for EIGs] { $\ifmmode M \else $M$ \fi_{dyn,24}$ histograms for Spring and Autumn EIGs. The left chart shows the histograms for all the EIGs. The right chart shows the histograms of subsamples EIG-1 and EIG-2. \label{f:Res_Mdyn24Hist} } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=6.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/HaloMass_for_EIG_II_Paper.pdf} \caption [Halo mass histogram for mock EIGs (Mill2 simulation)] { \markChange{Simulation-based halo mass, {\Mhalo}, histogram calculated for the combination of subsamples EIG-1 and EIG-2. Adapted from Figure 6 of SB16.} \label{f:MhaloFromEIG_I} } \end{centering} \end{figure*} As can be seen in these figures, the distributions of both {\Mstar} and $\left( \Mstar + \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi \right)$ are more scattered than those of {\Mhalo}. For EIGs in the Spring sky region the standard deviation of $\ifmmode \mbox{log}\else log\fi \left[ \Mhalo / \left( \Msunh \right) \right]$ is $\sim$0.3, compared to $\sim$0.6 for $\ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ and $\sim$0.7 for $\ifmmode \mbox{log}\else log\fi \left[ \left( \Mstar + \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi \right) / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right]$. For the Autumn EIGs the standard deviation of $\ifmmode \mbox{log}\else log\fi \left[ \Mhalo / \left( \Msunh \right) \right]$ is $\sim$0.4, compared to $\sim$1.1 for $\ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ and $\sim$0.8 for $\ifmmode \mbox{log}\else log\fi \left[ \left( \Mstar + \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi \right) / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right]$. 
\markChange{Such higher scatter of {\Mstar} compared to {\Mhalo} is expected from the stellar mass to halo mass (SMHM) relation derived from simulations (e.g., Fig. 2 of \citealt{2009ApJ...696..620C},\rem{ Fig. 5 of \citealt{2010ApJ...717..379B},} Fig. 7 of \citealt{2013ApJ...770...57B}, Fig. 2 of \citealt{2015A&A...576L...7D}, Fig. 5 of \citealt{2015ApJ...799..130R} and Fig. 1 of \citealt{2017MNRAS.465.2381M}). The vast majority of EIGs have masses $\ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) < 10.5$ and $\ifmmode \mbox{log}\else log\fi \left[ \Mhalo / \left( \Msunh \right) \right] < 12$ in which the SMHM relation's slope is large, i.e. in which {\Mstar} varies faster than {\Mhalo}.} The distribution of the HI mass, {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}, seems to have a width similar to that of {\Mhalo} (the standard deviation of $\ifmmode \mbox{log}\else log\fi \left( \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ is $\sim$0.3 for the Spring EIGs and $\sim$0.4 for the Autumn EIGs). The distribution of the simulation predicted {\Mhalo} \markChange{(shown in Figure \ref{f:MhaloFromEIG_I})} is very different from the distribution of $\ifmmode M \else $M$ \fi_{dyn,24}$ (shown in Figure \ref{f:Res_Mdyn24Hist}). The average $\ifmmode \mbox{log}\else log\fi \left( \ifmmode M \else $M$ \fi_{dyn,24} / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ is $\sim$10.2 for the Spring and $\sim$9.9 for the Autumn. This is about an order of magnitude lower than the average $\ifmmode \mbox{log}\else log\fi \left[ \Mhalo / \left( \Msunh \right) \right]$ (11.0 for Spring and 11.3 for Autumn). The standard deviation of $\ifmmode \mbox{log}\else log\fi \left( \ifmmode M \else $M$ \fi_{dyn,24} / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ is $\sim$0.3 for the Spring EIGs and $\sim$0.9 for the Autumn EIGs (compared to $\sim$0.3 for the Spring and $\sim$0.4 for the Autumn $\ifmmode \mbox{log}\else log\fi \left[ \Mhalo / \left( \Msunh \right) \right]$). This difference between the simulated distribution of {\Mhalo} and the measured distribution of $\ifmmode M \else $M$ \fi_{dyn,24}$ may be the result of a large discrepancy between $\ifmmode M \else $M$ \fi_{dyn,24}$ and the actual dynamic mass (had it been measured using HI rotation curves), a large discrepancy of the simulation results from reality, or both. The average $\ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ for EIG-1 and EIG-2 is $\sim$8.8 (Spring) and $\sim$9.4 (Autumn). From a comparison to the average $\ifmmode \mbox{log}\else log\fi \left[ \Mhalo / \left( \Msunh \right) \right]$ (11.0 for Spring, and 11.3 for Autumn) we conclude that the stellar masses, {\Mstar}, of EIGs are {$\sim$2.4\,\ifmmode \mbox{dex}\else dex\fi} (Spring) and {$\sim$2.0\,\ifmmode \mbox{dex}\else dex\fi} (Autumn) lower on average than the EIGs' DM masses. This, compared to a {$\sim$0.7\,\ifmmode \mbox{dex}\else dex\fi} difference between the baryonic to DM average densities in the Universe (according to WMAP7). If the dark-to-baryonic matter ratio of isolated galaxies is similar to the Universal average, then the geometric average of the fraction of baryonic mass turned into stars is {$\sim$0.02} (Spring) and {$\sim$0.05} (Autumn). \vspace{12pt} It is interesting to compare the stellar and HI content of galaxies from subsample EIG-1 with those of subsample EIG-2. 
\vspace{12pt} It is interesting to compare the stellar and HI content of galaxies from subsample EIG-1 with those of subsample EIG-2. As described in section \ref{sec:Introduction}, the EIG-1 subsample contains galaxies that passed the isolation criterion using both NED and ALFALFA data. The EIG-2 subsample contains galaxies that passed the criterion using NED data, but did not pass it using ALFALFA data (i.e., they have neighbours closer than {3\,\Mpch} with sufficient HI content to be detected by ALFALFA). Therefore, the distance to the closest ALFALFA neighbour for EIG-1 galaxies is {$>$3\,\Mpch} by definition. For EIG-2 galaxies this distance is in the range {0.66--2.74\,\Mpch} ({0.9--3.9\,\Mpc}; see Table 8 of \citealt{2016MNRAS.456..885S}). Figure \ref{f:Res_EIG-1_2_Hist} shows histograms of {\Mstar} (left charts), and {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} (right charts) comparing subsample EIG-1s with EIG-2s (upper charts) and subsample EIG-1a with EIG-2a (lower charts). As can be seen, the {\Mstar} distribution of EIG-1s is similar to that of EIG-2s, and the distribution of EIG-1a is similar to that of EIG-2a. The average $\ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ of the measured EIG-1s galaxies is $8.8 \pm 0.1$ with a standard deviation of {0.5}. This compares to an average of $8.7 \pm 0.3$ with standard deviation {0.8} for the measured EIG-2s galaxies. The average $\ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ of the measured EIG-1a galaxies is $9.3 \pm 0.6$ with a standard deviation of {1.3}. This compares to an average of $9.6 \pm 0.4$ with standard deviation {0.9} for the measured EIG-2a galaxies. These differences are not statistically significant. \begin{figure*} \begin{centering} \includegraphics[width=14cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_EIG-1_2_Hist.pdf} \caption [Stellar and HI content histograms comparing subsamples EIG-1 and EIG-2] { {\Mstar} (left charts), and {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} (right charts) histograms comparing subsample EIG-1s with EIG-2s (upper charts) and EIG-1a with EIG-2a (lower charts). One EIG-1s galaxy, one EIG-2s galaxy and two EIG-1a galaxies are not included in the left charts, because they do not have {\Mstar} data. Four EIG-1s galaxies, three EIG-2s galaxies and two EIG-1a galaxies are not included in the right charts, because they were not detected by ALFALFA. \label{f:Res_EIG-1_2_Hist} } \end{centering} \end{figure*} In contrast to this, the {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} distributions differ significantly between the EIG-1 and the EIG-2 subsamples. The average $\ifmmode \mbox{log}\else log\fi \left( \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ of the EIG-1s galaxies detected by ALFALFA is $9.13 \pm 0.09$ (with standard deviation {0.3}). This compares to an average of $9.5 \pm 0.1$ (with standard deviation {0.2}) for the ALFALFA-detected EIG-2s galaxies, a $2.3\,\sigma$ difference. The difference in the average $\ifmmode \mbox{log}\else log\fi \left( \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ between the Autumn subsamples is the same as for the Spring subsamples ({$\sim$0.3}), but with lower statistical significance ($1.2\,\sigma$) due to the smaller numbers of measured galaxies. The average $\ifmmode \mbox{log}\else log\fi \left( \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ of the EIG-1a galaxies detected by ALFALFA is $9.1 \pm 0.2$ (with standard deviation {0.4}).
This compares to an average of $9.4 \pm 0.2$ (with standard deviation {0.3}) for the ALFALFA-detected EIG-2a galaxies. Combining the differences measured for the Spring and Autumn gives an expected difference between $\ifmmode \mbox{log}\else log\fi \left( \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ of EIG-2 and EIG-1 of $0.3 \pm 0.1\,\ifmmode \mbox{dex}\else dex\fi$. Therefore, from the data of the ALFALFA-detected galaxies we conclude with $2.5\,\sigma$ significance that EIG-2 galaxies have higher {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}, on average, than EIG-1 galaxies (by a factor of $2.1 \pm 0.6$). The actual average $\ifmmode \mbox{log}\else log\fi \left( \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ of all EIGs of these subsamples is expected to be lower than the above values, since the EIGs not detected by ALFALFA are expected to have low {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} values. However, adding the {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} values of those EIGs is not expected to make the distributions of the EIG-1 and EIG-2 subsamples significantly more similar. We can therefore conclude that extremely isolated galaxies that have neighbours with significant {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} content at distances {$<$3\,\Mpch} tend to have higher {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} compared to extremely isolated galaxies lacking such neighbours. The HI content of galaxies, therefore, seems to be environmentally dependent even in extremely isolated regions. \subsection{Scaling relations} The HI gas content can be combined with the {\SFR}-to-{\Mstar} relation of Figure \ref{f:Res_MainSeq} in an attempt to investigate its connection to the main sequence of star-forming galaxies. This is done by breaking the {\SFR}-to-{\Mstar} relation into a relation between the star formation and the HI gas content (Figure \ref{f:Res_SF_MHI}) and a relation between the HI content and {\Mstar} (Figure \ref{f:Res_HI_Mstar}). Figure \ref{f:Res_SF_MHI} shows {\SFR} and \markChange{star formation efficiency ($\SFE \equiv \SFR / \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi$)} vs.~the HI mass, {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}, of the EIGs. Figure \ref{f:Res_HI_Mstar} shows {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} and $f_{HI} \equiv \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \Mstar$ vs.~the stellar mass, {\Mstar}, of the EIGs. \begin{figure*} \begin{centering} \includegraphics[width=14cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_SFR_MHI.pdf} \includegraphics[width=14cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_SFE_MHI.pdf} \caption [Star formation vs.~HI mass] { Star formation ({\SFR} in the upper chart, and {\SFE} in the lower chart) vs.~HI mass, {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}, of the EIGs. The green thick dashed lines show the fit found by \cite[Fig.~4.b]{2012ApJ...756..113H} for {\SFR} to {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} of star-forming ALFALFA galaxies. Filled symbols indicate EIGs classified as early-types. \label{f:Res_SF_MHI} } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=14cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_M_HI_Mstar.pdf} \includegraphics[width=14cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_f_HI_Mstar.pdf} \caption [HI content vs.~stellar mass] { HI content ({\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} in the upper chart, and $f_{HI}$ in the lower chart) vs.~stellar mass, {\Mstar}, of the EIGs.
The blue solid lines \markChange{show} a linear fit to the EIGs' {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} to {\Mstar} data. The dashed blue lines show the $\pm 1\sigma$ deviation from this fit. The green thick dashed lines show the fit found by \cite[eq.~1]{2012ApJ...756..113H} for {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} to {\Mstar} of star-forming ALFALFA galaxies. Filled symbols indicate EIGs classified as early-types. \label{f:Res_HI_Mstar} } \end{centering} \end{figure*} The average deviation of EIGs from the {\SFR}-to-{\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} fit of \cite{2012ApJ...756..113H} is $-0.04 \pm 0.09\,\ifmmode \mbox{dex}\else dex\fi$ (with standard deviation of {0.5\,\ifmmode \mbox{dex}\else dex\fi}). Despite the EIGs' small range of $\ifmmode \mbox{log}\else log\fi \left( \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ this indicates that the {\SFR}-to-{\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} relation of EIGs is similar to that of the general population of star-forming galaxies. The near-unity slope in the {\SFR}-to-{\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} fit translates to a near-zero slope in {\SFE} to {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} (lower chart of Figure \ref{f:Res_SF_MHI}), implying that the {\SFE} may be (statistically) independent of {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}. To test this hypothesis, the Pearson product-moment correlation coefficient between {\SFE} and {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} of the EIGs was calculated. The resultant correlation coefficient, $-0.14$, is insignificant. If {\SFE} and {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} are not correlated, there is a 0.52 chance of measuring a correlation coefficient at least as extreme as this. Therefore, the EIGs' measured data support the independence of {\SFE} and {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}. The following linear relation was fitted to the measured {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} vs.~{\Mstar} EIG points (marked by blue solid lines in Figure \ref{f:Res_HI_Mstar}): \begin{equation} \ifmmode \mbox{log}\else log\fi \left( \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) \cong 0.34 \cdot \ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) + 6.20 \label{e:Res_MHI_Mstar_fit} \end{equation} The expected standard deviation in $\ifmmode \mbox{log}\else log\fi \left( \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ around this fit is marked in the figure by dashed blue lines ({0.25} on average). The average $\ifmmode \mbox{log}\else log\fi \left( \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ deviation of EIGs from the {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}-to-{\Mstar} fit of \cite{2012ApJ...756..113H} is $-0.16 \pm 0.05$, implying that for a given stellar mass, {\Mstar}, the HI mass, {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}, of EIGs is slightly lower, on average, than that of the general population of star-forming galaxies. However, some of this deviation may be a result of the difference between the {\Mstar} estimation method used by \cite{2012ApJ...756..113H} and the one used here. \cite{2011Ap.....54..445K} analysed the {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}-to-{\Mstar} relation for the 2MIG catalogue of isolated galaxies.
The galaxies of the 2MIG catalogue have a range of {\Mstar} higher than that of the EIG sample (with the bulk in the range $9.5 < \ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) < 11.5$). Most of these higher mass and less isolated galaxies have $\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi < \Mstar$ \citep[Fig. 10]{2011Ap.....54..445K}. This is in agreement with the results shown in Figure \ref{f:Res_HI_Mstar}, where for $\ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) > 9.5$ most galaxies have $\ifmmode \mbox{log}\else log\fi \left( f_{HI} \right) < 0$. For their higher {\Mstar} sample \cite{2011Ap.....54..445K} found a slope ($1.00 \pm 0.04$) in the linear fit of $\ifmmode \mbox{log}\else log\fi \left( \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ as a function of $\ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ which is larger than the slope found here. \vspace{12pt} The following log-log predictor for {\SFR}, using {\Mstar} and {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}, was fitted to the EIGs' data by a partial least-squares regression: \begin{equation} \begin{IEEEeqnarraybox*}{lCl} \ifmmode \mbox{log}\else log\fi \left( \SFR / \MsunPerYr \right) & \cong & 0.580 \cdot \ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) + \\ & & 0.209 \cdot \ifmmode \mbox{log}\else log\fi \left( \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) - 7.95 \end{IEEEeqnarraybox*} \label{e:Res_SFR_predictor} \end{equation} The EIGs' {\SFR}s are shown vs.~this predictor in Figure \ref{f:Res_SFR_predictor}. The expected standard deviation around this predictor is {0.29\,\ifmmode \mbox{dex}\else dex\fi} (shown in the figure as dashed blue lines). It is somewhat lower than the {0.5\,\ifmmode \mbox{dex}\else dex\fi} {\SFR}-to-{\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} standard deviation around the \cite{2012ApJ...756..113H} relation, and the {0.4\,\ifmmode \mbox{dex}\else dex\fi} standard deviation in the {\SFR} to main sequence difference (Figure \ref{f:Res_MainSeq}). Therefore, \eqref{e:Res_SFR_predictor}, which considers both {\Mstar} and {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}, is a more accurate estimate of {\SFR} than predictors based on {\Mstar} or on {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} alone. It applies to EIGs, but is possibly also a good estimate for galaxies in denser regions that have not interacted with neighbours in the last few {\Gyr}. \begin{figure*} \begin{centering} \includegraphics[width=14cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_SFR_predictor.pdf} \caption [{\SFR} predictor] { Star formation rate, {\SFR}, vs.~the {\Mstar} and {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} partial least-squares regression predictor of \eqref{e:Res_SFR_predictor}. The blue solid line shows the one-to-one line. The dashed blue lines show the $\pm 1\sigma$ deviation from the one-to-one line. Filled symbols indicate EIGs classified as early-types. \label{f:Res_SFR_predictor} } \end{centering} \end{figure*}
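As a concrete illustration of how \eqref{e:Res_SFR_predictor} may be applied, the following minimal sketch (ours; the input values are arbitrary illustrative numbers rather than measurements of any specific EIG) simply evaluates the predictor:
\begin{verbatim}
def log_sfr_predictor(log_mstar, log_mhi):
    """Log-log SFR predictor fitted above: inputs are log10(M*/Msun) and
    log10(M_HI/Msun); returns log10(SFR / Msun yr^-1)."""
    return 0.580 * log_mstar + 0.209 * log_mhi - 7.95

# Illustrative input values only:
print(log_sfr_predictor(9.0, 9.2))   # ~ -0.81, i.e. SFR ~ 0.16 Msun/yr
\end{verbatim}
A fit of this form can be reproduced from the tabulated {\Mstar}, {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} and {\SFR} values with any standard partial least-squares implementation.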
\subsection{Morphology} \label{s:Res_Morphology} It is evident from Table \ref{T:Res_Morphology} that EIGs are typically late types (20 were classified as late-types, compared to six unknown and five early-types). Based on the 20 EIGs (65 per cent) that were classified as late-types, we conclude with 0.95 confidence that the probability for an EIG to be a late-type is $\geq$0.47 (using the Wilson score interval for a binomial proportion). Similarly, based on the five EIGs (16 per cent) classified as early-type we conclude with 0.95 confidence that the probability for an EIG to be an early-type is $\geq$0.07. For comparison, \cite{2012AJ....144...16K}\rem{ and \cite{2012PhDT.........5K}} identified three early-type galaxies (5 per cent) of the 60 isolated galaxies of the VGS sample. \cite{2006A&A...449..937S} found in the most isolated sub-sample of AMIGA that\rem{ 82 per cent of the galaxies are late-types (Sa--Sd) while} 14 per cent are early-types (E--S0). \cite{2013AstBu..68..243K} found that 5 per cent of the LOG sample galaxies are early-types (E--S0/a). Of the five early-types, four are part of the EIG-1 subsample (galaxies that passed the isolation criterion using both NED and ALFALFA HI data as described in section \ref{sec:Introduction}). This makes the early-type fraction significantly larger in the EIG-1 subsample (27 per cent) than in the entire EIG sample (16 per cent). The only early-type galaxy that is not part of the EIG-1 subsample, EIG 3s-01, passed the isolation criterion in the ALFALFA $\alpha$.40 dataset (but was slightly short of passing it in the NED dataset). This means that none of the five early-type EIGs have ALFALFA (high HI content) neighbours within {3\,\Mpch}. We can, therefore, conclude (with 0.94 confidence\rem{1-0.5^4~=0.94}) that EIGs lacking high HI content neighbours within {3\,\Mpch} have a higher tendency to be early-types, compared to EIGs that have such neighbours. As discussed in section \ref{s:Res_MassHisgrms}, such EIGs with no high HI content neighbours tend to have a lower HI content compared to ones with high HI content neighbours. This lower HI content may be linked to the higher tendency of EIG-1 galaxies to be early-types. From the EIG-1 subsample classification it can be concluded that for a galaxy that passes the strict isolation criterion of the EIG-1 subsample, the probability of being an early-type is $\geq$0.11 (using the Wilson score interval with 0.95 confidence). \vspace{12pt} Only one of the early-types, EIG 1a-04, is not classified as blue in Figure \ref{f:Res_ColorMass} but is rather a `green valley' galaxy. Three others (1s-02, 1s-12 and 3s-01) are blue, and the colour classification of the last, EIG 3s-01, is unknown. This means that an extremely isolated early-type galaxy has a probability $\geq$0.30 of being blue (with 0.95 confidence). This may be compared to the $0.057 \pm 0.004$ fraction of blue galaxies found by \cite{2009MNRAS.396..818S} in the low-redshift early-type galaxy population\markChange{, and to the $\sim$0.20 fraction of blue galaxies found by \cite{2016A&A...588A..79L} in their sample of isolated early-types}. All five early-type EIGs are within {0.5\,\ifmmode \mbox{dex}\else dex\fi} of the main sequence in Figure \ref{f:Res_MainSeq}. From this we conclude that an extremely isolated early-type galaxy has a probability $\geq$0.57 (with 0.95 confidence) of fitting the main sequence of star-forming galaxies to within {0.5\,\ifmmode \mbox{dex}\else dex\fi}.
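The binomial confidence bounds quoted in this subsection are lower limits of Wilson score intervals; a minimal sketch of such a calculation (ours; it uses the two-sided 95 per cent interval, $z = 1.96$) is:
\begin{verbatim}
import math

def wilson_lower_bound(k, n, z=1.96):
    """Lower limit of the Wilson score interval for k successes in n trials."""
    p = k / n
    centre = p + z**2 / (2.0 * n)
    half_width = z * math.sqrt(p * (1.0 - p) / n + z**2 / (4.0 * n**2))
    return (centre - half_width) / (1.0 + z**2 / n)

# 20 late-types and 5 early-types out of the 31 EIGs in the morphology table:
print(wilson_lower_bound(20, 31))   # ~0.47
print(wilson_lower_bound(5, 31))    # ~0.07
\end{verbatim}
This reproduces the $\geq$0.47 and $\geq$0.07 bounds quoted above.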
\vspace{12pt} A large fraction of the EIGs show asymmetric star formation, and many show strong compact star-forming regions (see Figures \ref{f:RHacolorImgEIG-1}, \ref{f:RHacolorImgEIG-2} and \ref{f:RHacolorImgEIG-3} and Tables \ref{T:EIG_PlgnHaFlux} and \ref{T:EIG_PlgnEW}). This indicates that star formation is a stochastic process that may occur unevenly across a galaxy at any given time, even in the most isolated galaxies. Sources of the randomness of star formation may include uneven `fuelling' of gas and collisions with very small satellites (e.g., with $\Mhalo < 10^{9}\,\Msunh$) that could not be detected around the EIGs studied here. \section{The Sample} \label{sec:Sample} We have chosen the sample of EIGs using a simple isolation criterion: a galaxy is considered an EIG and is included in the sample if it has no known neighbours closer than \markChange{a certain neighbour distance limit in 3D redshift space ({200\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi} or {300\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi} as explained below)} and if its redshift is in the range $2000<\cz<7000\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi$. No magnitude, HI mass or size limit was used in the selection of candidate neighbours. The use of such limits would have somewhat reduced the level of isolation of the sample (especially for the closer EIGs) and therefore was not preferred. Not using such limits, however, complicates somewhat the analysis of the sample's isolation level (described in section 3 of SB16 \markChange{and summarized below}). \markChange{It also causes the sensitivity limits (listed below) and the isolation level to depend on redshift. Higher redshift EIGs are less isolated on average than lower redshift EIGs. For this reason, the redshift of EIGs was limited to {7000\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi}.} One of the unique advantages of the EIG sample we study here is that, apart from the optical redshift data commonly used to estimate environment density, it also utilized HI redshifts from the Arecibo Legacy Fast ALFA (ALFALFA) survey. The ALFALFA survey is a second-generation untargeted extragalactic HI survey initiated in 2005 \citep{2005AJ....130.2598G, 2007AJ....133.2569G, 2007AJ....133.2087S}. This survey utilizes the superior sensitivity and angular resolution of the Arecibo 305\,m radio telescope to conduct the deepest ever census of the local HI Universe. ALFALFA was particularly useful in verifying the isolation of the target galaxies, since by being an HI survey it easily measures redshifts of low surface brightness galaxies (LSBs) and other low-luminosity late-type neighbours that are often difficult to detect optically but abound with HI\rem{\citep{2013JApA...34...19D}}. \markChange{ The ALFALFA dataset we used was the ``$\alpha$.40 HI source catalogue'' \citep[$\alpha$.40;][]{2011AJ....142..170H}. This catalogue covers 40 per cent of the final ALFALFA survey area ($\sim$2800\,\sqDeg) and contains 15855 sources. The sensitivity limit of the ALFALFA dataset is given by eq. (6) and (7) of \cite{2011AJ....142..170H} as a function of the velocity width of the HI line profile, {\Whalf}. For a typical value $\Whalf = 100\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi$ the sensitivity limit of the ALFALFA dataset is {$\sim$0.6\,\Jy\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi}. For the redshift range of the EIG sample this translates to HI mass, $\ifmmode \mbox{log}\else log\fi \left( \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$, sensitivity limit of {$\sim$8.0} (for $\cz = 2000\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi$) to {$\sim$9.1} (for $\cz = 7000\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi$).
} The search criterion was applied to two sky regions, one in the spring sky (Spring) and the other in the autumn sky (Autumn) as described in Table \ref{T:Sample-Regions}. These particular regions were selected since they are covered by the $\alpha$.40 ALFALFA catalogue \citep{2011AJ....142..170H}. Both regions include mainly high Galactic latitudes. The Spring region is almost fully covered by spectroscopic data in SDSS DR10 \citep{2014ApJS..211...17A}. \begin{ctable} [ caption = Sample search regions, doinside = \small, label = T:Sample-Regions ] {@{}cccc@{}} {} { \FL & {\RA} (J2000) & {\Dec} (J2000) & \cz~$\left[ \ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi \right]$ \ML Spring & 7h30m--16h30m & $4\ifmmode ^{\circ}\else $^{\circ}$\fi$--$16\ifmmode ^{\circ}\else $^{\circ}$\fi$ & 2000--7000 \NN Autumn & 22h00m--03h00m & $24\ifmmode ^{\circ}\else $^{\circ}$\fi$--$28\ifmmode ^{\circ}\else $^{\circ}$\fi$ & 2000--7000 \LL } \end{ctable} In addition to ALFALFA, the NASA/IPAC Extragalactic Database\footnote{http://ned.ipac.caltech.edu/} (NED) was also used as a source for coordinates and redshifts in and around the search regions. \markChange{The NED dataset we used includes data downloaded from NED on November 13, 2012 for object types: galaxies, galaxy clusters, galaxy pairs, galaxy triples, galaxy groups, and QSO. The completeness functions derived in section 3.2 of SB16 indicate that the sensitivity limit of the NED dataset in terms of $\SDSSg$ magnitude is $\sim$18.5 for the Spring sky region and $\sim$17 for the Autumn sky region. For the redshift range of the EIG sample this translates to absolute $\SDSSg$ magnitude sensitivity limit of {$\sim$-13.8} (for $\cz = 2000\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi$) to {$\sim$-16.5} (for $\cz = 7000\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi$) for the Spring sky region, and {$\sim$-15.3} to {$\sim$-18.0} for the Autumn sky region. A rough conversion to stellar mass, {\Mstar}, assuming a $\SDSSg$ luminosity-to-mass ratio equal to that of the Sun, gives a $\ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ sensitivity limit of {$\sim$7.6} (for $\cz = 2000\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi$) to {$\sim$8.6} (for $\cz = 7000\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi$) for the Spring sky region, and {$\sim$8.2} to {$\sim$9.2} for the Autumn sky region. } The EIGs studied here were divided into three subsamples: \begin{enumerate} \item[1.] Galaxies that passed the criterion using both NED and ALFALFA data \markChange{with a neighbour distance limit of {300\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi}. This translates to not having any known neighbour within a distance of $3\,\Mpch \cong 4.26\,\Mpc$.} \item[2.] Galaxies that passed the criterion using NED data \markChange{with a neighbour distance limit of {3\,\Mpch}}, but did not pass using ALFALFA data (had neighbours closer than {3\,\Mpch} in the ALFALFA database). \item[3.] Galaxies for which the distance to the closest neighbour in NED's data is {2 -- 3\,\Mpch} (regardless of the distance to the closest neighbour in ALFALFA's data). \item[] \end{enumerate} \markChange{Subsamples 1 and 2 contain all catalogued galaxies that passed their criteria in the studied sky regions. Subsample 3 contains only those galaxies that seemed to be isolated in the various searches performed over the years, but were later found to have neighbours in the range {2 -- 3\,\Mpch} (with the 2012 NED dataset described above)}.
It also contains a galaxy, EIG 3s-06, which was found by searching the ALFALFA data alone, but had neighbours in the range {2 -- 3\,\Mpch} in the NED dataset. The galaxies were named according to their subsample and sky region, using the following format: \begin{equation*} \mbox{EIG BR-XX} \end{equation*} where: \begin{description} \item[B] is the galaxy's subsample (1, 2 or 3, as described above); \item[R] is the sky region: `s' - Spring, `a' - Autumn; \item[XX] is the serial number of the galaxy in the subsample. \item[] \end{description} So, for example, object EIG 3s-06 is the sixth galaxy in subsample 3 of the Spring sky region. The galaxies of the different subsamples are listed in Tables 2 through 7 of SB16. Subsample EIG-1 contains 21 galaxies (14 Spring and 7 Autumn galaxies). Subsample EIG-2 contains 11 galaxies (7 Spring and 4 Autumn galaxies). Subsample EIG-3 contains 9 galaxies (7 Spring and 2 Autumn galaxies). \markChange{In total, the sample contains 41 EIGs.} Notes regarding specific EIGs are listed in Appendix \ref{App:EIGdata}. \vspace{12pt} The use of the ALFALFA unbiased HI data significantly improved the quality of the sample. Out of 32 galaxies that passed the \markChange{3\,\Mpch} criterion using NED data alone, 11 galaxies did not pass the criterion when tested with ALFALFA data. Neighbourhood properties of the sample EIGs were analysed using both observational data and cosmological simulations. The analysis based on observational data is described in detail in section 2.4 of SB16. Tables 8 and 9 of SB16 list properties such as the distance to the closest neighbour and neighbour counts for each sample EIG. A comparison to random galaxies shows that on average the neighbourhood density of EIGs is about one order of magnitude lower than that of field galaxies. Observational neighbourhood data further indicates that EIGs tend to reside close to walls and filaments rather than in centres of voids. Using cosmological simulations, we confirmed that the EIG-1 and EIG-2 subsamples are a subset of galaxies significantly more isolated than the general galaxy population. Apart from the low density regions in which they reside, EIGs are characterized by normal mass haloes, which have evolved gradually with few or no major mergers or major mass-loss events. As a result of their low-density environments, the tidal acceleration exerted on EIGs is typically about one order of magnitude lower than the average tidal acceleration exerted on the general population of galaxies. The level of contamination in the sample, i.e. the fraction of EIGs which are not in extremely isolated environments or which experienced strong interactions in the last {3\,\Gyr}, was found to be {5\%--10\%}. The Spring EIGs seem to be more isolated than the Autumn EIGs. For further details about the analysis using cosmological simulations and its results see section 3 of SB16. \vspace{12pt} For similar purposes, other samples of isolated galaxies were defined and studied in `the Analysis of the interstellar Medium of Isolated GAlaxies' (AMIGA) international project \citep{2007A&A...472..121V, 2013MNRAS.434..325F}\rem{http://amiga.iaa.es}, in the `Two Micron Isolated Galaxy' catalogue (2MIG; \citealt{2010AstBu..65....1K}), in the `Local Orphan Galaxies' catalogue (LOG; \citealt{2011AstBu..66....1K, 2013AstBu..68..243K}), and in the `Void Galaxy Survey' (VGS; \citealt{2012AJ....144...16K}). In section 2.5 of SB16 these were discussed and compared to the EIG sample studied here.
The comparison showed that the EIG sample galaxies are significantly more isolated than the AMIGA, 2MIG and LOG galaxies (in terms of the distance to the closest neighbour) and that the \mbox{EIG-1} galaxies are more isolated than the VGS galaxies. \markChange{Other notable isolated galaxy samples, not analysed in SB16, are the UNAM-KIAS catalogue of isolated galaxies \citep{2010AJ....139.2525H} and the catalogues of isolated galaxies, isolated pairs, and isolated triplets in the local Universe of \cite{2015A&A...578A.110A}.} \section{Introduction} \label{sec:Introduction} The research described here is part of an extensive study of star formation properties and evolution of galaxies in different environments and of various morphological types, conducted in the past few decades \citep[e.g.,][]{1983PhDT.........1B, 1995PhDT........86A, 1998MNRAS.298..920A, 1998ApJ...504..720B, 2001PhDT..Ana_Heller, 2006MNRAS.368..864B, 2008arXiv0806.2722B, 2008MNRAS.390..408Z}. Specifically, we studied galaxies in the most extremely underdense regions of the local Universe. These galaxies are particularly interesting since they evolved with little or no environmental interference in the last few {\Gyr}, and are therefore useful for validating and calibrating galaxy evolution models. Furthermore, when compared to galaxies in denser regions, they illuminate the overall effects of the environment on the evolution of galaxies. It is well-known that extremely dense environments can greatly influence the star formation (SF) in galaxies. Tidal interactions and mergers of galaxies can trigger extreme starbursts with SFR up to $10^{3}\,\MsunPerYr$, while isolated galaxies hardly ever exhibit {SFR $>$20\,\MsunPerYr} \citep{1998ARA&A..36..189K}. Although the effect on SFR may be extreme during mergers in clusters as well as in pairs and loose groups, the effect on SFR averaged over the whole history of a galaxy may be small \citep{2003A&A...405...31B, 2009ApJ...692..556R}. In cluster environments, apart from the higher rate of interactions, ram pressure by the intracluster medium strips the galaxies of their gas and, therefore, reduces SF. It has also been suggested that in some cases the ram pressure might increase SF \citep{1985ApJ...294L..89G}. Galaxies in isolated environments are generally considered to be gas-rich, fainter, bluer, of later type, and exhibit higher specific star formation rates (SSFRs; SFRs per unit stellar mass) than galaxies in average density environments \citep{1980ApJ...236..351D, 1999AJ....118.2561G, 2002A&A...389..405P, 2004ApJ...617...50R, 2005ApJ...624..571R, 2012ApJ...753..166D, 2012AJ....144...16K, 2014MNRAS.438..548M, 2016arXiv160104092M}. Some claim that this is not just an effect of the higher abundance of late-type galaxies, and that the late-type galaxies themselves are fainter in under-dense regions than in average density regions \citep{2004A&A...420..873V, 2005MNRAS.356.1155C, IAU:949432}. \markChange{Numerous other studies also indicate that the properties of galaxies are influenced by their neighbourhood.} \cite{1982ApJ...253..526B} found that the inner regions of isolated galaxies are bluer, compared to `field' galaxies. This was later suggested to be a consequence of intensive formation of massive stars in the nuclei \citep{1982A&A...113..231B}. \cite{2004A&A...420..873V} found that bars are less frequent in isolated galaxies than in perturbed galaxies. 
\cite{2014ApJ...788L..39F} found that bluer pseudo-bulges tend to reside in neighbourhoods with a higher probability of tidal perturbation. They suggest that the environment could be playing a role in rejuvenating pseudo-bulges. \cite{2012MNRAS.424.2574W} found that satellite galaxies around isolated bright primary galaxies are systematically redder than field galaxies of the same stellar mass, except around primaries with $\ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) < 10.8$, where the satellites' colours were similar or even bluer. \vspace{12pt} This work attempts, among other things, to help resolve the question of `Nature vs.~Nurture': does the evolution of galaxies depend only on their content, or do their large-scale environments have a significant evolutionary influence? Some argue that galaxy formation is driven predominantly by the mass of the host DM halo, and is nearly independent of the larger-scale halo environment (e.g., \citealt{2008MNRAS.386.2285C, 2009ApJ...691..633T}). This is supported by their simulation models that produce void galaxies conforming to some observed statistical properties (e.g., colour distribution, luminosity function and nearest neighbour statistics). However, since there are many galaxy properties that most simulations cannot predict (e.g., HI content), and since the halo mass of galaxies cannot be directly measured, this hypothesis is hard to prove or disprove. \vspace{12pt} We have chosen a sample of extremely isolated galaxies (EIGs) from the local Universe based on a simple isolation criterion. The neighbourhood properties of this sample were analysed using both observational data and cosmological simulations. The cosmological simulations were further used to estimate the properties and histories of the dark matter (DM) haloes in which the sample EIGs reside. The sample and its analysis are described in detail in the first paper of this series, \cite{2016MNRAS.456..885S} (SB16), and are summarized here in Section \ref{sec:Sample}. Extensive optical observations of the sample EIGs in broad-band and rest-frame {\Halpha} were performed using the one meter telescope of the Florence and George Wise Observatory\footnote{IAU code 097 - http://wise-obs.tau.ac.il/} (WO). Section \ref{ch:ObsNProcess} describes these observations and their processing. The results of these observations, along with public observational data, were used to measure the current SFRs and to estimate SFHs. These observational results are described in section \ref{ch:Results}. Analysis of these results is presented in section \ref{ch:Analysis}, and the findings are discussed in section \ref{s:DisConc}. \vspace{12pt} Throughout this work, unless indicated otherwise, $\Lambda$ cold dark matter ($\Lambda$CDM) cosmology with the seven-year Wilkinson Microwave Anisotropy Probe data (WMAP7, \citealt{2011ApJS..192...17B}) parameters are used, including the dimensionless Hubble parameter $\h = 0.704$. We adopt here the solar {\SDSSg}-band absolute magnitude of $\AbsMgSun = +5.12$ (according to the Sloan Digital Sky Survey, SDSS, DR7 web site\footnote{www.sdss.org/dr7/algorithms/sdssUBVRITransform.html\\\#vega\_sun\_colors}). \section{Observations and Data Processing} \label{ch:ObsNProcess} \subsection{Instrumentation} Optical observations were performed using the 1\,meter (40\,inch) telescope of the WO.
The telescope was equipped with a $1300\times1340$ back-illuminated Princeton Instruments CCD with pixel size of $0.57 \pm 0.01$ \,arcsec\,pixel$^{-1}$ and an overall field of view of $\sim12.5\,\ifmmode ^\prime\else $^\prime$\fi$. EIGs were imaged using wide band Bessell U, B, V, R and I filters and a set of narrow-band rest-frame {\Halpha} filters for various redshifts. A thorough description of the {\Halpha} filter set is provided in appendix A of \cite{2015PhDT}. \subsection{Observations} \markChange{We observed 34 of the EIGs} in the {\R} and in one or two appropriate {\Halpha} narrow bands. \markChange{Of these EIGs}, those not covered by SDSS (and a few that are) were imaged also in the U, B, V and I bands. At least six dithered exposures were obtained in each filter. Exposures were 20 minutes long for the {\Halpha} and U bands, and 10 minutes long for the B, V, and I bands. Exposures in the R band were 5 minutes long for EIGs that were observed only in R and {\Halpha}, and 10 minutes long for EIGs observed in all bands. Whenever possible, the exposures of the R and {\Halpha} bands were taken in time proximity so that their atmospheric conditions and air-masses would remain similar. This is important for the accurate measurement of the {\Halpha} equivalent width (EW), as described in \cite{2012MNRAS.419.2156S} (S12). Photometric calibrations of the wide bands were performed for EIGs not covered by SDSS, and for a few that are, using \cite{1992AJ....104..340L}\rem{Landolt} standards. Spectrophotometric calibrations of the {\Halpha} band were performed using \cite{1990AJ.....99.1621O} standard stars that have well known spectra, are stable, and have as few features around {\Halpha} as possible. Images were processed using the Image Reduction and Analysis Facility ({\small IRAF}) software\footnote{http://iraf.noao.edu/}. The reduction pipeline included standard bias subtraction, flat-fielding and image alignment. For images taken in the I band, a fringe removal step was added. \subsection{{Net-\Halpha} images} \label{s:ObsNPrc_NetHa} {Net-\Halpha} ({\nHa}) data were derived from the measurements using the recipes described in S12. EW values were derived using eq.~12 and 16 of S12. The {\nHa} fluxes were derived using eq.~7 and 12 of S12 after applying the photometric calibrations described in section 3 of S12. Eq. 
12, 16 and 7 of S12 are shown here for reference: \begin{equation} \begin{IEEEeqnarraybox*}{lCl} \mbox{cps}_{N,line} & \cong & \left(\mbox{cps}_{N} - \frac { \mbox{cps}_{W} }{ \mbox{WNCR} } \right) \\ && \times \left[1 - \frac{1}{\mbox{WNCR}} \cdot \frac{ \mbox{T}_{atm,W}(\lambda_{line}) \: \mbox{T}_{W}(\lambda_{line}) } { \mbox{T}_{atm,N}(\lambda_{line}) \: \mbox{T}_{N}(\lambda_{line}) } \right]^{-1} \end{IEEEeqnarraybox*} \label{e:cps_line_N_solved} \end{equation} \begin{equation} \mbox{EW} \cong \frac { \mbox{cps}_{N,line} } {\mbox{cps}_{N} - \mbox{cps}_{N,line}} \cdot \frac {\int_0^\infty \mbox{T}_{N}(\lambda) \: d\lambda} { \mbox{T}_{N}(\lambda_{line}) } \label{e:EW_solved} \end{equation} \begin{equation} \mbox{F}_{line} \cong \frac { \mbox{cps}_{N,line} } { \mbox{T}_{atm,N}(\lambda_{line}) \: \mbox{T}_{N}(\lambda_{line}) \: \mbox{R}_{\lambda}(\lambda_{line}) } \label{e:F_line} \end{equation} where: \begin{description} \item[$\mbox{cps}_{N}$, $\mbox{cps}_{W}$] are the measured count rates of the narrow-band (N) and wide-band (W) filters (respectively) in instrumental units (typically analogue-to-digital units per second, ADU s$^{-1}$); \item[$\mbox{cps}_{N,line}$] is the line contribution to $\mbox{cps}_{N}$ (see also eq. 3 of S12); \item[WNCR] (wide-to-narrow continuum ratio) is the ratio between the count rate contributed by the continuum in the W band and the count rate contributed by the continuum in the N band (see also eq. 10 of S12); \item[$\mbox{T}_{N}(\lambda)$, $\mbox{T}_{W}(\lambda)$] are the transmittance functions of the N and W bands, respectively; \item[$\mbox{T}_{atm,N}(\lambda)$, $\mbox{T}_{atm,W}(\lambda)$] are the atmospheric transmittance as a function of wavelength, including effects of weather, elevation and airmass of observation, when the N and W bands (respectively) were imaged; \item[$\mbox{R}_{\lambda}$] is the responsivity as a function of wavelength of the rest of the electro-optical system (i.e. the telescope and sensors, excluding the transmittance effect of the filters) typically in ADU~erg$^{-1}$~cm$^{2}$; \item[$\lambda_{line}$] is the central wavelength of the emission line; \item[$\mbox{F}_{line}$] is the emission-line's flux. \item[] \end{description} The WNCR required for \eqref{e:cps_line_N_solved} was estimated using the method of WNCR-to-colour linear fit suggested in section 6 of S12 (sixth paragraph). The process included selecting a reference wavelength band for each EIG, the first band with a good quality image from the following list: {\V}, {\I}, {\B}, SDSS {\SDSSg} and SDSS {\SDSSi}. In the combined images of these {\R} and reference bands foreground stars were identified (using their intensity profiles), and their instrumental colours (reference minus {\R}) were measured along with that of the EIG. All {\nHa} measurements were performed on the individual {\Halpha} images, each paired with an {\R} image taken at the closest time and \markChange{airmass} available. For each such pair, the foreground stars were measured in both the {\R} and {\Halpha} images, and their WNCR values were calculated (the {\R} to {\Halpha} cps ratio). A linear relation between WNCR and the uncalibrated colour was fitted to the results of these foreground stars. The WNCR of the pair of {\Halpha} and {\R} images was then calculated using the fit and the EIG's measured uncalibrated colour. Next, an {\nHa} image was created for each {\Halpha} and {\R} image pair. The sky level, measured around the EIG, was first subtracted from the {\Halpha} and {\R} images.
Then, the images were scaled by their exposure time. Finally, the pixel values of the {\nHa} images were calculated using \eqref{e:cps_line_N_solved}. \subsection{Photometry} \label{sec:Wise_Photometry} Apertures for the photometric measurements of the EIGs were defined as polygons or as elliptical isophotes fitted to the combined R-band images. The polygonal apertures approximately trace the $\R = 26\,\magAsecSq$ isophote of the EIGs, but exclude foreground Galactic stars and galaxies projected close to the EIG. This resulted in some reduction in the measured flux from the EIGs, which was significant only for EIG 2s-06, which has a foreground star of magnitude $\SDSSr = 15.6$ projected close to its centre. Polygonal apertures were also defined for some resolved HII regions and other regions of interest within the EIGs (see figures \ref{f:RHacolorImgEIG-1}, \ref{f:RHacolorImgEIG-2} and \ref{f:RHacolorImgEIG-3} below). Wherever possible, photometric measurements were made on SDSS calibrated images using the same apertures defined for the WO images. The SDSS calibrations were tested by comparing results of seven EIGs that had photometric calibrations performed at the WO as well as SDSS data. The {\magu \magb \magv \magr \magi} magnitudes were converted to {\SDSSu \SDSSg \SDSSr \SDSSi \SDSSz} SDSS magnitudes using the transformation recommended in Table 1 of \cite{2005AJ....130..873J} for all stars with $\R-\I < 1.15$. This is the transformation recommended by SDSS for galaxies.\footnote{http://www.sdss3.org/dr9/algorithms/sdssUBVRITransform.php} On average, the results were similar to the SDSS calibrated magnitudes. The standard deviation of the difference between the WO calibration (converted to {\SDSSu \SDSSg \SDSSr \SDSSi \SDSSz}) and the SDSS calibration was {0.05\,\magnitude} for {\SDSSr} and {\SDSSi}, {0.07\,\magnitude} for {\SDSSg}, and {0.11\,\magnitude} for {\SDSSu} and {\SDSSz}. Where available, SDSS measurements were used to calibrate the {\nHa} flux using the method described in section 3 of S12, in which {\SDSSg}, {\SDSSr} and {\SDSSi} are used to estimate the continuum flux at the rest-frame {\Halpha} wavelength. This continuum flux estimate is then multiplied by the equivalent width to obtain the {\nHa} flux. Thirteen EIGs had both spectrophotometric and SDSS {\nHa} calibrations. We found random deviations between the results of the two calibrations, which the original uncertainty propagation estimates did not predict. These may be attributed to the inaccuracy introduced by estimating the continuum at {\Halpha} using an interpolation of two or three SDSS magnitude measurements (see section 3 of S12). This was compensated for by adding a 0.2 relative uncertainty to the SDSS calibrations. \subsection{Absolute magnitudes and luminosities} \label{s:ObsNPrc_AbsMagLum} To calculate the absolute magnitudes and luminosities, the calibrated apparent magnitudes and fluxes were first corrected for foreground Galactic extinction. The Wise Observatory (\U\V\B\R\I) and SDSS magnitudes were corrected using Galactic extinction NED data based on \cite{2011ApJ...737..103S}. The Galactic extinctions of the {\Halpha} fluxes, $A_{\lambda_\Halpha}$, were estimated using an interpolation between the {\SDSSr} and {\SDSSi} extinctions. The interpolation was linear in $\ln \left( A_{\lambda} \right)$ vs.~{$\lambda$}, since this form fits the $A_{\lambda}$ values of {\SDSSu\SDSSg\SDSSr\SDSSi\SDSSz} and {\U\B\V\R\I} well.
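As an illustration of this interpolation, a minimal sketch follows (ours; the effective wavelengths are indicative values only, and the extinction values in the example are arbitrary numbers rather than measurements):
\begin{verbatim}
import math

def extinction_at_line(A_r, A_i, lam_line, lam_r=6200.0, lam_i=7500.0):
    """Estimate A_lambda at the wavelength of an emission line by
    interpolating linearly in ln(A_lambda) versus lambda between the
    SDSS r- and i-band extinctions (wavelengths in Angstrom)."""
    slope = (math.log(A_i) - math.log(A_r)) / (lam_i - lam_r)
    return math.exp(math.log(A_r) + slope * (lam_line - lam_r))

# Example with arbitrary r- and i-band extinctions, evaluated at H-alpha:
A_halpha = extinction_at_line(A_r=0.10, A_i=0.075, lam_line=6563.0)
\end{verbatim}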
This work utilizes data from the Galaxy Evolution Explorer mission (GALEX; \citealt{2005ApJ...619L...1M}), the Two Micron All Sky Survey (2MASS; \citealt{2006AJ....131.1163S}) and the Wide-field Infrared Survey Explorer (WISE; \citealt{2010AJ....140.1868W}). The GALEX ({\NUV} and {\FUV}) and 2MASS ({\J}, {\H} and {\Ks}) Galactic extinctions were calculated using the $A_{\mbox{\scriptsize \B}}$ and $A_{\mbox{\scriptsize \V}}$ of \cite{2011ApJ...737..103S} and the second column of Table 2 of \cite{2013MNRAS.430.2188Y}, which gives $R_{band} = A_{band} / \left( A_{\mbox{\scriptsize \B}}-A_{\mbox{\scriptsize \V}} \right)$ for each band. The Galactic extinctions of the WISE {\WThree} and {\WFour} bands were estimated using the calculated $A \left( \Ks \right)$ and the values for $A_{\mbox{\scriptsize 12\,\um}} / A_{\mbox{\scriptsize K}}$ and $A_{\mbox{\scriptsize 22\,\um}} / A_{\mbox{\scriptsize K}}$ quoted in column 2 of Table 2 of \cite{2009ApJ...693L..81M}. \vspace{12pt} Distance estimates, required for calculating absolute magnitudes and fluxes, were based on the local velocity field model of \cite{2000ApJ...529..786M}, which includes terms for the influence of the Virgo Cluster, the Great Attractor, and the Shapley Supercluster. As is customary in this field \citep[e.g., ][]{2012A&A...540A..47F, 2011AJ....142..170H, 2012AJ....144...16K}, uncertainties were not estimated for these distances. At the low redshifts of the EIGs ($\z < 0.024$) K-corrections are not significant compared to the uncertainty that they introduce. Therefore, K-corrections were not applied to the measured magnitudes and fluxes. The apparent magnitudes, {\m}, and fluxes, {\F} (after correcting for Galactic extinction), were converted to absolute magnitudes, {\AbsM}, and luminosities, {\L}, using: $ \AbsM = \m - 5 \cdot \ifmmode \mbox{log}\else log\fi \frac{\left( 1 + \z \right) D_m}{10\,\pc} $ and $ \L = 4 \pi D_m^2 \left( 1 + \z \right)^{2} \cdot \F $, where $D_m$ is the distance estimate (comoving transverse distance). \vspace{24pt} \rem{ To conclude, we observed 36 EIGs, produced deep maps of their {\Halpha} content and measured their total {\Halpha} luminosity and equivalent width. These data were not available in the literature or in public databases. We have also produced {\U\B\V\R\I} images and absolute magnitude measurements of some of these EIGs, deeper than available before. } Further details about the observations and data processing can be found in section 5 of \cite{2015PhDT}. \section{\\Modelled EIG properties} \label{App:ModelledEIG_Properties} The SFH, dust attenuation and stellar mass of the EIGs were estimated by fitting a model to their UV-to-near-IR SEDs as described in section \ref{sec:ResultsModel}. Figure \ref{f:ResGalMC_1D} shows, for each of the modelled EIGs, the marginalized posterior distributions of $Age_{1}$, $\tau^{-1}$, \EBV, $\ifmmode M \else $M$ \fi_{2}$ and {\Mstar}. The extreme $\tau^{-1}$ values where $\left| \tau \right| \ll Age_{1}$ represent scenarios in which the first stellar population was created in a short burst. The low $\tau^{-1}$ values (where $\left| \tau \right| \gg Age_{1}$) represent an almost constant star formation rate for the first (main) stellar population.
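For intuition, if the star formation history of the first population is parametrized as an exponential (an illustrative assumption on our part; the actual parametrization is the one referenced in section \ref{sec:ResultsModel}), \begin{equation*} \mbox{SFR}_{1}(t) \propto e^{-t/\tau} , \qquad 0 \le t \le Age_{1} , \end{equation*} then $\left| \tau \right| \ll Age_{1}$ corresponds to a short initial burst, while $\left| \tau \right| \gg Age_{1}$ gives a nearly constant $\mbox{SFR}_{1}$ over the population's lifetime (with negative $\tau$ describing star formation that increases with time).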
\begin{figure*} \begin{centering} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1s-01.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1s-02.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1s-03.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1s-04.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1s-06.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1s-07.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1s-08.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1s-09.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1s-10.pdf} \caption [Modelled one-dimensional marginalized posterior distributions] { Modelled one-dimensional marginalized posterior distributions of $Age_{1}$, $\tau^{-1}$, \EBV, $\ifmmode M \else $M$ \fi_{2}$ and $M_{*}$. Data for each EIG is plotted in a separate row.\label{f:ResGalMC_1D} } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1s-11.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1s-12.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1s-13.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1s-14.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1a-01.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1a-02.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1a-03.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1a-04.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_1a-07.pdf} \contcaption { \label{f:ResGalMC_1D_2} } \end{centering} \end{figure*} \begin{figure*} \begin{centering} % \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_2s-01.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_2s-02.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_2s-05.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_2s-06.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_2s-07.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_2s-08.pdf} % \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_2a-01.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_2a-02.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_2a-03.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_2a-04.pdf} % \contcaption { } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_3s-01.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_3s-02.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_3s-03.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_3s-04.pdf} 
\includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_3s-05.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_3s-06.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_3s-07.pdf} % \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_3a-01.pdf} \includegraphics[width=15.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/GalMC/GalMC_PDFs_EIG_3a-02.pdf} \contcaption { } \end{centering} \end{figure*} \section*{Acknowledgements} {\rem{ALFALFA}} We are grateful to Martha Haynes, Riccardo Giovanelli and the entire ALFALFA team for providing an unequalled HI data set. {\rem{NED}} This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. {\rem{SDSS}} Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/. \rem{this is part of the Official SDSS-III Acknowledgement, which can be found on: http://www.sdss3.org/collaboration/boiler-plate.php} {\rem{GALEX}} This research has made use of observations made with the NASA Galaxy Evolution Explorer. GALEX is operated for NASA by the California Institute of Technology under NASA contract NAS5-98034. \rem{IRAS - used for downloading 2MASS, WISE and Spitzer data} This research has also made use of the NASA/IPAC Infrared Science Archive (IRSA), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. {\rem{2MASS}} This publication makes use of data products from the Two Micron All Sky Survey (2MASS), which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. {\rem{WISE}} This publication makes use of data products from the Wide-field Infrared Survey Explorer (WISE), which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. \section{Discussion and conclusions} \label{s:DisConc} We have found surprising environmental dependencies of the HI content, \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi, and of the morphological type of EIGs (sections \ref{s:Res_MassHisgrms} and \ref{s:Res_Morphology} respectively). It is generally accepted that galaxies in cluster environments typically have atomic gas deficiencies \citep{1998ARA&A..36..189K}, while void galaxies are typically gas-rich \markChange{\citep{2011MNRAS.415.1797C, 2012AJ....144...16K, 2014ApJ...788...29L\rem{section 3.2.1}}}. It is also generally accepted that early-type galaxies are more abundant in clusters than in isolated environments \citep{2009MNRAS.393.1324B}. It was, therefore, expected that a sample of the most isolated galaxies (subsample EIG-1) would be the most gas-rich and would contain the lowest fraction of early-types. 
However, contrary to these expectations, we have found that EIG-1 galaxies, which lack neighbours with significant {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} content at distances {$<$3\,\Mpch}, tend to have lower {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} compared to EIG-2 galaxies that have such neighbours (the average {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} of EIG-1 galaxies is lower than that of EIG-2 galaxies with $2.5\,\sigma$ confidence). \cite{2014MNRAS.444.3559M} have found a similar {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} environmental dependence, in which their sample of void galaxies\rem{ (less isolated than the EIGs)} showed a tendency for lower {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} compared to their sample of wall galaxies. Similarly unexpected, we have found that the most isolated galaxies (subsample EIG-1) have a higher tendency to be early-types compared to EIG-2 galaxies (with 0.94 confidence). To the best of our knowledge, this is the first time that a sample of isolated galaxies shows a higher fraction of early-types than a less isolated sample. \markChange{These findings do not contradict the results of \cite{2011MNRAS.415.1797C, 2012AJ....144...16K, 2014ApJ...788...29L} and \cite{2009MNRAS.393.1324B} which compared isolated galaxies with galaxies in clusters or with the general population of galaxies. Here we compared two galaxy populations of extreme isolation levels, and showed that the trends of increased {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} and decreased early-type fraction with the increase of the isolation level reverse at extreme isolation (or when the isolation is tested also with respect to the {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} of possible neighbours).} There is considerable evidence from cosmological simulations that the spins and major axes of haloes are correlated with the direction of the walls or filaments in which they reside \citep[e.g.,][]{2007ApJ...655L...5A, 2009ApJ...706..747Z, 2012MNRAS.427.3320C, 2014MNRAS.443.1274L}. For low-mass haloes ($\ifmmode \mbox{log}\else log\fi \left[ \Mhalo / \left( \Msunh \right) \right] < 13$ according to \citealt{2009ApJ...706..747Z}, or $\ifmmode \mbox{log}\else log\fi \left[ \Mhalo / \left( \Msunh \right) \right] < 12.6$ according to \citealt{2012MNRAS.427.3320C}), the halo spin is more likely to be aligned with the closest filament. This preferred spin direction is probably a result of the direction from which material is accreted onto the halo. As shown in \markChange{Figure \ref{f:MhaloFromEIG_I}}, simulation analysis indicates that almost all EIGs reside in haloes that would be considered low-mass in this respect, and are, therefore, expected to have spins that correlate fairly well with the direction of the filaments and walls closest to them. We speculate that this effect may be connected to the low abundance of early-types in the EIG-2 subsample and to its higher average {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} compared to the EIG-1 subsample. Underdense filaments and walls may be the hosts of EIG-2 galaxies. The halo spins induced by their filament or wall environment may significantly reduce their early-type fraction and possibly also increase their {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} (because the gas may spend more time before reaching their centres).
If they indeed reside in filaments or walls, they are also expected to have some neighbours with similar tendency for being late-type and containing significant amounts of {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}. We further speculate that a significant fraction of EIG-1 galaxies are not parts of filaments or walls, but rather reside in environments with no preferred direction for accreting material (e.g., at the junction points between filaments so extremely underdense that no galaxies were detected in them). This may increase their probability of being early-types and may also affect them in such a way that they would contain less HI on average (either because there is not much gas in their environment or because the available HI gas is accreted faster to the halo's centre and forms stars more quickly). Further study of the early-type EIGs found in this work may be of interest, since if they indeed reside at junction points between filaments, they may resemble cluster early-types at early stages of their development. \vspace{12pt} Both the early-type and late-type EIGs follow the same colour-to-{\Mstar} relation (Fig.~\ref{f:Res_ColorMass}), SFR-to-{\Mstar} `main sequence' relation (Fig.~\ref{f:Res_MainSeq}) and {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}-to-{\Mstar} relation (Fig.~\ref{f:Res_HI_Mstar}), and fit the SFR predictor of eq.~\eqref{e:Res_SFR_predictor} (Fig.~\ref{f:Res_SFR_predictor}). This indicates that the mechanisms and factors governing star formation, colour and the {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}-to-{\Mstar} relation are similar in early-type and late-type EIGs. It further indicates that the morphological type of EIGs is not governed by their {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} content, {\SFR} or colour. EIGs with high {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} content, high SFR or blue colour are not necessarily late-types. \vspace{12pt} Our observations indicate that EIGs typically fit the `main sequence of star forming galaxies' found by \cite{2012ApJ...756..113H}\rem{ for a general sample detected by ALFLFA, SDSS and GALEX}. This indicates that the extreme isolation of the EIGs does not affect their {\SFR}\rem{ or their {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}} considerably compared to field galaxies. This is supported by \cite{2016arXiv160108228B} who found no significant difference in SF between void galaxies of the VGS sample and field galaxies. We have found that EIGs follow a colour-to-{\Mstar} relation, in which EIGs with {\Mstar} smaller than $10^{(10.6 \pm 0.9)}\,\ifmmode \Mass_\sun\else $\Mass_\sun$\fi$ are typically `blue cloud' galaxies irrespective of their morphological type (Figure \ref{f:Res_ColorMass}). Since {\Mstar} of most EIGs is below this threshold, most of the EIGs are blue. A similar result was found by \cite{2009MNRAS.393.1324B} who found that in low density environments low {\Mstar} galaxies are mostly blue, while galaxies with high {\Mstar} are mostly red (irrespective of morphology). This is contrary to what is found in high density environments, where galaxies are mostly red irrespective of their {\Mstar} and morphology. \vspace{24pt} With respect to the `Nature vs.~Nurture' question, which was the primary driver of this work, we conclude the following: It is well known that cluster environments have a strong effect on star formation, colours and morphologies of galaxies. 
With the exception of these high density environments the {\SFR} is not significantly affected by the environment, i.e.~the `main sequence of star-forming galaxies' holds in a range of environments from walls to the most extremely isolated regions measurable. Outside high density regions, the colours of galaxies are mostly related to their stellar mass, {\Mstar}, and are less affected (if at all) by the environment.
We have found that the HI content, {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}, and the morphological type of galaxies do depend on their environment. In the most isolated environments, where no neighbours with significant {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} are present (to a distance of {3\,\Mpch}), galaxies are more likely to be early-types and have lower {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}, on average, compared to less isolated environments. We speculate that this might reflect the large scale structure of these extremely isolated regions. Late-type and high-{\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} galaxies may be more abundant in underdense filaments and walls, while early-type and lower {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} galaxies may be more abundant at the junctions of filaments so extremely underdense that no galaxies were detected in them.
\section{Results}
\label{ch:Results}
\subsection{Apparent magnitudes and fluxes}
Table \ref{T:EIG_ugriz} lists the SDSS apparent magnitudes of the \markChange{39} EIGs measured as described in section \ref{sec:Wise_Photometry}.\footnote{\markChange{Two of the EIGs, 1a-05 and 1a-06, are not in the SDSS footprint and were not imaged in the WO.}} Photometrically calibrated {\U\B\V\R\I} (Bessell) measurements were made for eight of the EIGs. Their apparent magnitudes are listed in Table \ref{T:EIG_ubvri}.
\input{Tables/EIG_ugriz} \rem{label = {T:EIG_ugriz}}
\input{Tables/EIG_ubvri} \rem{label = {T:EIG_ubvri}}
The combined R and {Net-\Halpha} ({\nHa}) images are shown in figures \ref{f:RHacolorImgEIG-1}, \ref{f:RHacolorImgEIG-2} and \ref{f:RHacolorImgEIG-3} (in negative).\footnote{\markChange{The images of EIG 2s-04 are not shown due to a bright foreground star that does not allow it to be clearly identified in the image (see details in Appendix \ref{App:EIGdata}).}} Each row of images in the figures relates to a different EIG. The name of the EIG is given on the leftmost image, which shows the combined R image. The second image from the left shows the same R image using a logarithmic scale (log R). The third image from the left shows the combined {\nHa} image. The rightmost image shows the EIG in false colour; R in orange, and {\nHa} in azure (both using a negative linear scale). The upper bar in the rightmost image shows the physical scale calculated using the distance estimate described in section \ref{s:ObsNPrc_AbsMagLum}. The lower bar in the rightmost image shows the angular size scale.
Regions of interest within some of the EIGs were measured individually. The polygonal apertures used for these measurements are drawn (along with their names) on the rightmost images of figures \ref{f:RHacolorImgEIG-1}, \ref{f:RHacolorImgEIG-2} and \ref{f:RHacolorImgEIG-3}. Observational results of these regions of interest are described in section \ref{s:rsltsSFR}. Remarks for specific EIGs are listed in Appendix \ref{App:EIGdata}.
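As an illustration of how the flux inside such a polygonal region of interest can be summed, the following minimal Python sketch (a schematic example only, not the measurement code actually used; the function and variable names are ours) builds a pixel mask from the polygon vertices and sums the enclosed counts:
\begin{verbatim}
import numpy as np
from matplotlib.path import Path

def polygon_flux(image, vertices):
    # Sum the pixel values of `image` that fall inside the polygonal
    # aperture defined by `vertices`, a list of (x, y) pixel coordinates.
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    pixels = np.column_stack([xx.ravel(), yy.ravel()])
    mask = Path(vertices).contains_points(pixels).reshape(ny, nx)
    return image[mask].sum()
\end{verbatim}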
\begin{figure*} \begin{centering} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-01.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-02.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-03.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-04.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-05.pdf} \caption [R and {\nHa} images (EIG-1)] { \markChange { R and {\nHa} images of the EIG-1 subsample (each EIG in a separate row). The columns from left to right show negative images of: the combined R image (in linear scale), the combined R image in logarithmic scale, the combined {\nHa} image (linear scale), the EIG in false colour; R in orange and {\nHa} in azure (linear scale). The rightmost column also includes a physical distance scale, an angular size scale and where applicable the regions of interest measured individually (along with their names).\label{f:RHacolorImgEIG-1} } } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-06.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-07.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-08.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-09.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-10.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-11.pdf} \contcaption { } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-12.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-13.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1s-14.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1a-03.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_1a-04.pdf} \contcaption { } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_2s-01.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_2s-02.pdf} \rem{\includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_2s-04.pdf}} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_2s-05.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_2s-06.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_2s-07.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_2s-08.pdf} \caption [R and {\nHa} images (EIG-2)] { R and {\nHa} images of the EIG-2 subsample (each EIG in a separate row). The columns from left to right show negative images of: the combined R image (in linear scale), the combined R image in logarithmic scale, the combined {\nHa} image (linear scale), the EIG in false colour; R in orange and {\nHa} in azure (linear scale). 
The rightmost column also includes a physical distance scale, an angular size scale and where applicable the regions of interest measured individually (along with their names).\label{f:RHacolorImgEIG-2} } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_2a-01.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_2a-02.pdf} \contcaption { } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_3s-01.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_3s-02.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_3s-03.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_3s-04.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_3s-05.pdf} \caption [R and {\nHa} images (EIG-3)] { R and {\nHa} images of the EIG-3 subsample (each EIG in a separate row). The columns from left to right show negative images of: the combined R image (in linear scale), the combined R image in logarithmic scale, the combined {\nHa} image (linear scale), the EIG in false colour; R in orange and {\nHa} in azure (linear scale). The rightmost column also includes a physical distance scale, an angular size scale and where applicable the regions of interest measured individually (along with their names).\label{f:RHacolorImgEIG-3} } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_3s-06.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_3s-07.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_3a-01.pdf} \includegraphics[width=17.0cm,trim=0mm 0mm 0mm 0, clip]{Figs/EIGs/R_NetHa_EIG_3a-02.pdf} \contcaption { } \end{centering} \end{figure*} In Figure \ref{f:RSurBrightR} the R surface brightness, $\mu_{\mbox{\scriptsize R}}$, is plotted as function of the distance from the galactic centre, $r$. The surface brightness was measured on a set of ellipses with different semi-major axes, fitted to each EIG. The $\mu_{\mbox{\scriptsize R}}$ of the innermost $2\,\ifmmode ^{\prime\prime}\else $^{\prime\prime}$\fi$ of each galaxy is not shown to avoid confusion due to point spread function (PSF) effects. $\mu_{\mbox{\scriptsize R}}$ measurements with uncertainty $\geq 0.5\,\magAsecSq$ are also not shown. Two profiles were fitted for each EIG's $\mu_{\mbox{\scriptsize R}}$ measurements, one typical of a late type galaxy disc (blue dashed line) and the other representing an early-type elliptical galaxy (red solid curve). The disc profile has a S\'{e}rsic's index $n=1$, and was fitted to the outskirts of the galaxy (from half of the maximum $r$ shown in the figure and further). The elliptical profile is a de Vaucouleurs relation, i.e. a S\'{e}rsic's index $n=4$. It was fitted to all the measured points shown in the figure. \begin{figure*} \begin{centering} \includegraphics[width=16.3cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_SurBrightR1.pdf} \caption [R surface brightness] { R surface brightness, $\mu_{\mbox{\scriptsize R}}$, as function of the distance from the galactic centre, $r$. The black horizontal bars show the angular scale. The blue dashed lines are linear relations, fitted to the points above half of the maximum shown $r$. 
The red solid curves show best fits to a de Vaucouleurs formula. \label{f:RSurBrightR} } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=16.3cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_SurBrightR2.pdf} \contcaption { } \end{centering} \end{figure*} \markChange{The EIGs were classified as early-types or late-types by visual inspection of the images of the EIGs, and the $\mu_{\mbox{\scriptsize R}}$ profiles of Figure \ref{f:RSurBrightR}.} Whenever \markChange{the combination} of the images and $\mu_{\mbox{\scriptsize R}}$ profiles did not yield a clear identification, the EIG was classified as `unknown'. \markChange{Six out of 31 EIGs (19 per cent) were classified as `unknown'. We chose to classify such a large fraction as 'unknown' in order to reduce the probability of false identification to a minimum. The morphological types of EIGs 1s-05, 2s-04 and 2s-06 were not classified. EIG 1s-05 was not classified because it cannot be identified in optical images, and EIGs 2s-04 and 2s-06 were not classified because of bright foreground stars projected close to them (see details in Appendix \ref{App:EIGdata})}. The morphological classifications are listed in Table \ref{T:Res_Morphology}. \input{Tables/Res_Morphology} \rem{label = {T:Res_Morphology}} Ultraviolet data were downloaded from the GALEX \citep{2005ApJ...619L...1M} GR6/7 data release\footnote{http://galex.stsci.edu/GR6/}. The available apparent magnitudes of EIGs in the GALEX far-ultraviolet band ({\magFUV}) and near-ultraviolet band ({\magNUV}) are listed in Table \ref{T:EIG_GALEX}. \input{Tables/EIG_GALEX} \rem{label = {T:EIG_GALEX}} 2MASS \citep{2006AJ....131.1163S} and WISE \citep{2010AJ....140.1868W} data were downloaded from the NASA/IPAC Infrared Science Archive (IRSA)\footnote{http://irsa.ipac.caltech.edu}. The 2MASS data were taken from the All-Sky data release. Thirteen EIGs were identified in its Extended Source Catalogue. For two of these (EIGs 1a-01 and 2a-02) the quoted {\magJ}, {\magH} and {\magKs} magnitudes were not used since they translate to flux suspiciously lower than that of the {\SDSSi} band. The {\magJ}, {\magH} and {\magKs} apparent magnitudes\rem{(wavelengths {1.24\,\um}, {1.66\,\um} and {2.16\,\um} respectively)} of the remaining eleven EIGs are listed in Table \ref{T:EIG_2MASS}. \input{Tables/EIG_2MASS} \rem{label = {T:EIG_2MASS}} WISE data taken from the All-WISE catalogue are listed in Table \ref{T:EIG_StarFormation}. These include apparent magnitudes in the {\WThree}\rem{({12\,\um})} and {\WFour}\rem{({22\,\um})} bands (columns {\magWThree} and {\magWFour} respectively). For some of the EIGs these are measurements through elliptical apertures based on the 2MASS {\Ks} isophotal apertures. Data for EIGs, for which the elliptical aperture measurements were not possible, are profile-fitting photometry measurements, or for low SNR measurements the 0.95 confidence magnitude lower limits. The WISE profile-fitting photometry measurements are less accurate for extended objects than the elliptical aperture measurements. To estimate their uncertainty, a comparison was made between profile-fitting photometry magnitudes and elliptical aperture based magnitudes for eight EIGs for which both were available. On average, the profile-fitting magnitudes were found to be {$0.08 \pm 0.03$\,\magnitude} ({\WThree}) and {$0.1 \pm 0.1$\,\magnitude} ({\WFour}) lower than the elliptical aperture magnitudes. 
The standard deviation of the difference between the two measurement methods was found to be {$\sim$0.3\,\magnitude} for {\WThree} and {$\sim$0.4\,\magnitude} for {\WFour}. These standard deviations were added to the estimated uncertainties of the profile-fitting photometry magnitudes.
Table \ref{T:EIG_ALFALFA} lists data from the ALFALFA {$\alpha$}.40 catalogue \citep{2011AJ....142..170H}. For each EIG that was detected by ALFALFA the velocity width of the HI line profile, {\Whalf}, corrected for instrumental broadening but not for disc inclination, is listed. This is followed by the total HI line flux, {\FHI}, the estimated signal-to-noise ratio of the detection, {\SNR}, and the HI mass content, {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}. Finally, the category of the HI detection, Code, is listed. Code 1 refers to a source of SNR and general qualities that make it a reliable detection. Code 2 refers to a source with $\SNR \lesssim 6.5$ that does not qualify for code 1 but was matched with a counterpart with a consistent optical redshift, and is very likely to be real.
\input{Tables/EIG_ALFALFA} \rem{label = {T:EIG_ALFALFA}}
\subsection{Star formation rate}
\label{s:rsltsSFR}
SFRs of the EIGs were calculated using the WO {\Halpha} measurements and the WISE {\WThree} and {\WFour} measurements. First, the {\Halpha} flux and the WISE {\WThree} and {\WFour} apparent magnitudes were corrected for Galactic extinction as described in section \ref{s:ObsNPrc_AbsMagLum} (the corrections for WISE were quite small, up to {0.018\,\magnitude} in W3, and up to {0.013\,\magnitude} in W4). Then, the {\WThree} and {\WFour} Galactic corrected magnitudes were converted to fluxes using the procedure described in section IV.4.h.i.1 of \cite{2013wise.rept....1C}\rem{\footnote{http://wise2.ipac.caltech.edu/docs/release/allsky/expsup/sec4\_4h.html\#conv2flux}} for a constant power-law spectrum (same as the method used by \citealt{2014MNRAS.438...97W}). The {\Halpha}, {\WThree} and {\WFour} fluxes were then converted to luminosities ($\ifmmode L\else $L$\fi_{\Halpha, obs}$, $\SpecLum \left( 12\,\um \right)$ and $\SpecLum \left( 22\,\um \right)$ respectively) as described in section \ref{s:ObsNPrc_AbsMagLum}.
\cite{2014MNRAS.438...97W} found that the dust extinction-corrected {\Halpha} luminosity, $\ifmmode L\else $L$\fi_{\Halpha, corr}$, can be accurately derived from the observed {\Halpha} luminosity and either the {\WThree} or {\WFour} band luminosity using:
\begin{equation}
\begin{IEEEeqnarraybox*}{rCl}
\ifmmode L\else $L$\fi_{\Halpha, corr} & = & \ifmmode L\else $L$\fi_{\Halpha, obs} + 0.038 \cdot \nu \SpecLum \left( 12\,\um \right) \\
\ifmmode L\else $L$\fi_{\Halpha, corr} & = & \ifmmode L\else $L$\fi_{\Halpha, obs} + 0.035 \cdot \nu \SpecLum \left( 22\,\um \right)
\end{IEEEeqnarraybox*}
\label{e:SFR_HaWISE_crct}
\end{equation}
These relations are independent of the metallicity. The relation that uses $\SpecLum \left( 12\,\um \right)$ was found to be slightly more accurate. The corrected {\Halpha} luminosity, $\ifmmode L\else $L$\fi_{\Halpha, corr}$, was calculated using equation \eqref{e:SFR_HaWISE_crct} for both {\WThree} and {\WFour}, and the average of the two results was used. If for one of the bands only a lower limit to the magnitude was given, the result of the other band was used.
If both bands had only lower limits to their magnitudes, the nominal SFR was calculated assuming zero IR dust emission \markChange{(i.e., with $\SpecLum \left( 12\,\um \right) = \SpecLum \left( 22\,\um \right) = 0$)} and the effect of the possible IR dust emission was added to the uncertainty of the measurement \markChange{($\ifmmode L\else $L$\fi_{\Halpha, corr}$ was calculated using both options of \eqref{e:SFR_HaWISE_crct} with uncertainties in $\SpecLum$ calculated using the 0.95 confidence lower limit of the {\WThree} and {\WFour} fluxes, and the option with the lower uncertainty was used)}.
Finally, the dust-extinction-corrected {\Halpha} luminosity, $\ifmmode L\else $L$\fi_{\Halpha, corr}$, was used to calculate the SFR using the following equation adapted from \cite{2012ARA&A..50..531K}, \cite{2011ApJ...737...67M} and \cite{2011ApJ...741..124H}:
\begin{equation}
\ifmmode \mbox{log}\else log\fi\, \frac{SFR}{\MsunPerYr} = \ifmmode \mbox{log}\else log\fi \left( \frac{\ifmmode L\else $L$\fi_{\Halpha, corr}}{\mbox{erg}\,\mbox{s}^{-1}} \right) - 41.27
\label{e:SFR_GenCalibKen2009}
\end{equation}
\vspace{12pt}
Table \ref{T:EIG_StarFormation} lists measured star formation properties of the sample galaxies. For each EIG, the equivalent width, EW, and the {\Halpha} flux, $F_{\Halpha}$, are listed. These are followed by the WISE magnitudes, {\magWThree} and {\magWFour}, used for calculating the {\Halpha} flux that was extinguished within the EIG. Listed last are the fraction of the total {\Halpha} flux extinguished within the EIG, $\mbox{frac}_{{\Halpha}, \mbox{\scriptsize ext}}$, and the SFR.
\input{Tables/EIG_StarFormation} \rem{label = {T:EIG_StarFormation}}
The {\Halpha} flux and EW were measured for each region of interest within the EIGs (plotted in figures \ref{f:RHacolorImgEIG-1}, \ref{f:RHacolorImgEIG-2} and \ref{f:RHacolorImgEIG-3}). Table \ref{T:EIG_PlgnHaFlux} lists for each region the {\Halpha} flux as a fraction of the EIG's total {\Halpha} flux. Table \ref{T:EIG_PlgnEW} lists the EWs of the regions. The regions of interest were defined with some additional area around the star-forming regions so that all the {\Halpha} flux would be measured. Therefore, the EWs of star-forming regions may be larger than those listed in Table \ref{T:EIG_PlgnEW}. Similarly, the {\Halpha} flux fractions of the star-forming regions may be smaller than those listed in Table \ref{T:EIG_PlgnHaFlux}, since some diffuse {\Halpha} flux from the surrounding area may have been included in the measurement.
\input{Tables/EIG_PlgnHaFlux} \rem{label = {T:EIG_PlgnHaFlux}}
\input{Tables/EIG_PlgnEW} \rem{label = {T:EIG_PlgnEW}}
As can be seen, the {\Halpha} flux fractions of the distinct star-forming regions of EIGs do not account for the entire {\Halpha} flux. In most EIGs the diffuse {\Halpha} is a significant component. In many of the EIGs (e.g., 1s-13) there are no detectable star-forming regions at all. In some of these the diffuse {\Halpha} component is not detectable in the {\nHa} images, even though the total {\Halpha} EW, shown in Table \ref{T:EIG_StarFormation}, is considerable. This is a result of noise in the {\nHa} images, which was reduced in the EW measurements by averaging over the whole galaxy. This noise is much less significant in the {\R} images, because the spectral width of the {\R} filter is more than an order of magnitude larger than the {\nHa} EW of the EIGs for which {\Halpha} emission is not easily detectable in the {\nHa} images.
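For reference, the SFR calculation of equations \eqref{e:SFR_HaWISE_crct} and \eqref{e:SFR_GenCalibKen2009} can be summarized by the following minimal Python sketch (an illustration only, not the code used to produce Table \ref{T:EIG_StarFormation}; the function name is ours, and all luminosities are assumed to be in $\mbox{erg}\,\mbox{s}^{-1}$):
\begin{verbatim}
import numpy as np

def sfr_from_halpha(L_ha_obs, nuL_12um=None, nuL_22um=None):
    # Dust-correct the observed H-alpha luminosity [erg/s] using the
    # WISE 12 and 22 micron luminosities nu*L_nu [erg/s]; when both
    # bands are available, the two corrections are averaged.  Then
    # apply the log(SFR) = log(L_Ha,corr) - 41.27 calibration.
    corrected = []
    if nuL_12um is not None:
        corrected.append(L_ha_obs + 0.038 * nuL_12um)
    if nuL_22um is not None:
        corrected.append(L_ha_obs + 0.035 * nuL_22um)
    L_ha_corr = np.mean(corrected) if corrected else L_ha_obs
    return 10.0 ** (np.log10(L_ha_corr) - 41.27)   # SFR in Msun/yr
\end{verbatim}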
The fact that diffuse {\Halpha} is the dominant component in most EIGs indicates that a possible active galactic nucleus (AGN) contribution to the {\Halpha} flux is insignificant. This is supported by the fact that SDSS did not classify any of the EIGs' measured spectra as containing detectable AGN emission lines.
Since the {\Halpha} measurements utilize narrow bands, they include a contribution of the [NII] lines flanking the {\Halpha} line. For the central parts of the 22 EIGs that have SDSS spectra, this correction was found to be: $\ifmmode \mbox{log}\else log\fi \left( F_{\Halpha} / F_{\Halpha+[NII]} \right) = -0.07 \pm 0.05$ (with 0.95 confidence level). The correction factor for a whole galaxy is expected to be significantly lower than this value, since central parts of galaxies typically have high metal abundance and high [NII] to {\Halpha} flux ratio compared to those measured for whole galaxies. This difference in the correction factor is expected to be more significant for EIGs in which the diffuse {\Halpha} is the dominant component. In light of this we chose not to correct for these effects.
\subsection{Model fitting}
\label{sec:ResultsModel}
The SFH, dust attenuation and stellar mass of the EIGs were estimated by fitting a five-parameter model to their UV-to-near-IR spectral energy distributions (SEDs). The model assumes that the SFH can be described by a first population of stars with an exponentially decreasing or increasing star formation rate (SFR), and a possible addition of recent star formation (a second population). The second population is modelled as an instantaneous star formation that occurred $1\,\Myr$ ago and is meant to compensate for a possible recent deviation from an exponential SFH, which may have a significant effect on the emission from the galaxy (especially in the UV and {\Halpha}). The model can also describe a scenario of a constant star formation or a sudden star formation (first population) with or without a recent star formation burst (second population). This model is obviously a simplification of the actual SFH and is limited in the diversity of possible scenarios. However, not much more can be done given the available measured SED points.
\vspace{12pt}
The five free parameters of the model are:
\begin{description}
\item[$\ifmmode M \else $M$ \fi_{1}$] - The mass of the first population of stars, created over the history of the galaxy (integral over time of the SFR of the first population).
\item[$Age_{1}$] - The look-back time of the beginning of star formation of the first population.
\item[$\tau$] - The e-folding time of the exponentially decreasing (positive $\tau$) or increasing (negative $\tau$) SFH of the first population. $\tau \ll Age_{1}$ indicates a sudden star formation, while $\tau \gg Age_{1}$ indicates an almost constant star formation.
\item[\EBV] - The {\BMinV} colour excess that results from dust within the galaxy.
\item[$\ifmmode M \else $M$ \fi_{2}$] - The mass of the second population of stars that was created.
\item[]
\end{description}
The model fitting procedure used Bayesian statistical inference with uniform prior probability distributions of $\ifmmode \mbox{log}\else log\fi \left( \ifmmode M \else $M$ \fi_{1} \right)$, $\ifmmode \mbox{log}\else log\fi \left( Age_{1} \right)$, $\tau^{-1}$, {\EBV} and $\ifmmode \mbox{log}\else log\fi \left( \ifmmode M \else $M$ \fi_{2} \right)$.
The parameters were restricted to the following ranges: $10^{7}\,\ifmmode \Mass_\sun\else $\Mass_\sun$\fi < \ifmmode M \else $M$ \fi_{1} < 1.6 \cdot 10^{15}\,\ifmmode \Mass_\sun\else $\Mass_\sun$\fi$, \; $10^{8}\,\yr < Age_{1} < $ age of the Universe at the redshift of the galaxy, \; $0 < \EBV < 2$, \; $2.7\,\ifmmode \Mass_\sun\else $\Mass_\sun$\fi < \ifmmode M \else $M$ \fi_{2} < 10^{7}\,\ifmmode \Mass_\sun\else $\Mass_\sun$\fi$. The metallicities of the first and second populations were set to {0.4\,\ifmmode \Metallicity_\sun\else $\Metallicity_\sun$\fi} and {1\,\ifmmode \Metallicity_\sun\else $\Metallicity_\sun$\fi} respectively.
The model fitting was computed using the GalMC software \citep{IAU:8669162, 2011ApJ...737...47A}\footnote{http://ctp.citytech.cuny.edu/$\sim$vacquaviva/web/GalMC.html}. GalMC is a Markov Chain Monte Carlo (MCMC) algorithm designed to fit the SEDs of galaxies to infer physical properties such as age, stellar mass, dust reddening, metallicity, redshift, and star formation rate. The Markov chains produced by GalMC were analysed using the GetDist software, a part of the CosmoMC software \citep{2002PhRvD..66j3511L}\footnote{http://cosmologist.info/cosmomc/}. The stellar emission was calculated using the Charlot \& Bruzual 2007 stellar population synthesis model (Charlot \& Bruzual, private communication, CB07\footnote{http://www.bruzual.org/}) assuming a Salpeter initial mass function. Nebular emission was calculated following \cite{1998ApJ...497..618S} and \cite{2009A&A...502..423S}, as described in section 2.2.4 of \cite{2011ApJ...737...47A}. Dust extinction within the EIG was calculated from the {\EBV} parameter using the \cite{1994ApJ...429..582C}\rem{Calzetti} law with $R_{V} = 4.05$ \citep{2000ApJ...533..682C}. The emission of the EIGs was also corrected for absorption by neutral hydrogen in the intergalactic medium (IGM) following \cite{1995ApJ...441...18M}.
The input to the model-fitting software included the EIG's redshift and the data from measured bands with wavelengths shorter than {3\,\um} (the CB07 model does not correctly estimate the emission beyond the first PAH feature at {$\sim$3\,\um}). Each band measurement was corrected for Galactic extinction and then converted to flux, which was used as input to the GalMC software. The calibrated Bessell U, B, V, R and I magnitudes, measured at the Wise Observatory, were translated to AB magnitudes (and then to flux) using the relations listed in Table 1 of \cite{2007AJ....133..734B}. The 2MASS magnitudes were converted to fluxes using the zero magnitude isophotal monochromatic intensities listed in Table 2 of \cite{2003AJ....126.1090C}. Foreground Galactic extinction was corrected as described in section \ref{s:ObsNPrc_AbsMagLum}.
For each EIG four MCMC runs were made, each with a different set of free parameter values as its starting point. Best-fitting parameters and covariance matrices of these four runs were then used as inputs for continued runs. Each run included 50000 sampled parameter sets. Only one of each EIG's MCMC runs (chains) was used for analysis. This run was chosen based on the speed of convergence of the chain, on its average likelihood, on its multiplicity (number of trial steps before moving to the next location in parameter space), and on how well it covered the parameter space. Chains that probed the parameter space with $Age_{1} <0.2\,\Gyr$ for a large fraction of their length were disfavoured (if another good chain existed, it was selected instead).
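The assumed SFH parametrization can be illustrated with the following schematic Python sketch (a sketch of the star formation history described above, not of the GalMC fitting code; the instantaneous burst is smeared here over a $1\,\Myr$ bin purely so that it can be plotted):
\begin{verbatim}
import numpy as np

def sfh(t_lookback, M1, Age1, tau, M2, t_burst=1.0e6):
    # SFR [Msun/yr] at lookback time t_lookback [yr]: an exponential
    # first population (e-folding time tau [yr]; tau < 0 gives an
    # increasing SFR) that formed a total mass M1 [Msun] over the last
    # Age1 [yr], plus a burst of mass M2 [Msun] at t_burst.
    t = np.asarray(t_lookback, dtype=float)
    s = Age1 - t                                     # time since SF began
    norm = M1 / (tau * (1.0 - np.exp(-Age1 / tau)))  # SFR integrates to M1
    first = np.where((s >= 0.0) & (t >= 0.0), norm * np.exp(-s / tau), 0.0)
    burst = np.where(np.abs(t - t_burst) < 0.5e6, M2 / 1.0e6, 0.0)
    return first + burst
\end{verbatim}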
Models were not fitted to EIG 1s-05 and EIG 2s-04, because these do not have the necessary SED data. EIG 1s-05 has only {21\,cm} data, and EIG 2s-04 has only {\SDSSg}, {\SDSSr}, {\SDSSi} and {\SDSSz} measurements (due to a bright foreground star close to it). The model that was fitted to EIG 1s-11 did not reproduce its {\Halpha} emission successfully ($\mbox{EW} = 28 \pm 4\,\Angst$). The best-fitting parameters of all MCMC runs of EIG 1s-11 yielded lower EW values.
\vspace{12pt}
Marginalized posterior distributions (the predicted probability distribution functions, PDFs) of the free parameters and of the calculated total mass of stars, {\Mstar}, considering mass-loss mechanisms, were calculated from the selected chain of each EIG. Figure \ref{f:ResGalMC_1D} in Appendix \ref{App:ModelledEIG_Properties} shows, for each of the modelled EIGs, the marginalized posterior distributions of $Age_{1}$, $\tau^{-1}$, \EBV, $\ifmmode M \else $M$ \fi_{2}$ and {\Mstar}. Table \ref{T:EIG_MStar} lists the {\Mstar} values predicted by the model.
\input{Tables/EIG_MStar} \rem{label = {T:EIG_MStar}}
Two-dimensional marginalized PDFs of pairs of the model parameters were also analysed. It was found that in most cases there is some dependence between pairs of the free parameters. The $\ifmmode M \else $M$ \fi_{1}$ and $Age_{1}$ parameters were found to be highly correlated. The $\left( Age_{1}, \ifmmode M \else $M$ \fi_{1} \right)$ two-dimensional marginalized PDFs do not seem to depend on whether they are part of the EIG-1, EIG-2 or EIG-3 subsample, i.e. the $\left( Age_{1}, \ifmmode M \else $M$ \fi_{1} \right)$ space is filled similarly by the \markChange{three subsamples}.
\subsection{Dynamic mass}
We estimated dynamic mass lower limits for the EIGs using the ALFALFA-measured HI rotation, the elliptical isophotes fitted to the combined {\R} images (described in section \ref{sec:Wise_Photometry}) and the surface brightness measurements, $\mu_{\mbox{\scriptsize R}}$. The calculations were based on the methods described by \cite{2014RvMP...86...47C}. First, we estimated the inclination of the galaxies using eq.~6 of \cite{2014RvMP...86...47C} for the $\mu_{\mbox{\scriptsize R}} = 24\,\magAsecSq$ elliptical isophote:
\begin{equation}
i \cong \cos ^{-1} \sqrt{\frac{\left( b_{24}/a_{24} \right)^2 - q_0^2}{1 - q_0^2}}
\label{e:rsltsInclination}
\end{equation}
where:
\begin{description}
\item[$i$] is the estimated inclination of the galaxy;
\item[$a_{24}$, $b_{24}$] are the semi-major axis and semi-minor axis (respectively) at $\mu_{\mbox{\scriptsize R}} = 24\,\magAsecSq$;
\item[$q_0$] is the axial ratio of a galaxy viewed edge on.
\item[]
\end{description}
The inclination of EIGs classified as early-types was not measured, because their $q_0$ is unknown. For a sample of 13482 spiral galaxies \cite{1992MNRAS.258..404L} found $q_0 \cong 0.2$ by applying statistical techniques to explore triaxial models. \cite{2012MNRAS.425.2741H} found $q_0 \cong 0.13$ for spirals using SDSS data on a sample of 871 edge-on galaxies. Here we adopted $q_0 = 0.17 \pm 0.05$ for the galaxies classified as late-types or `unknown' (see Table \ref{T:Res_Morphology}). We measured $a_{24}$ using the linear fit of Figure \ref{f:RSurBrightR}. The semi-minor to semi-major axes ratio, $b_{24}/a_{24}$, was calculated by linear interpolation of its values for the EIG's ellipse isophotes just below and just above $a_{24}$.
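A minimal numerical illustration of eq.~\eqref{e:rsltsInclination}, assuming the $q_0 = 0.17$ adopted above (schematic only; the function name is ours):
\begin{verbatim}
import numpy as np

def inclination_deg(b24_over_a24, q0=0.17):
    # Inclination [deg] from the axial ratio of the mu_R = 24 mag/arcsec^2
    # isophote; the clip guards against axial ratios below q0.
    cos2_i = (b24_over_a24 ** 2 - q0 ** 2) / (1.0 - q0 ** 2)
    return np.degrees(np.arccos(np.sqrt(np.clip(cos2_i, 0.0, 1.0))))

# An isophotal axial ratio b24/a24 = 0.5, for example, gives i ~ 62 deg.
\end{verbatim}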
The speed of rotation of the HI gas, $v_{rot}$, was calculated using the HI velocity width, {\Whalf}, listed in Table \ref{T:EIG_ALFALFA} and the inclination, $i$, using: \begin{equation} v_{rot} = \frac{\Whalf}{2 \cdot \sin i} \label{e:v_rot} \end{equation} The dynamic mass lower limit, $M_{dyn,24}$, was then calculated using: \begin{equation} \ifmmode M \else $M$ \fi_{dyn,24} = \frac{v_{rot}^2 \cdot a_{24}}{G} \end{equation} $M_{dyn,24}$ is a lower limit to the galactic total mass, since the extent of the neutral gas in spiral galaxies can often exceed twice that of the stars \citep{2014RvMP...86...47C}, and the dark matter (DM) halo may extend even further. An additional source of uncertainty in $M_{dyn,24}$ comes from the assumption behind \eqref{e:v_rot} that all of the HI velocity width, {\Whalf}, is due to the rotational velocity, $v_{rot}$. This may somewhat increase the $M_{dyn,24}$ estimate, but probably by much less than it is decreased due to the underestimation of the dark mass diameter. Table \ref{T:EIG_DynamicMass} lists $\ifmmode M \else $M$ \fi_{dyn,24}$ of EIGs along with the values used for its calculation. It also lists the ratio of the measured dynamic mass to stellar plus HI mass, $\ifmmode M \else $M$ \fi_{dyn,24} / \left( \Mstar + \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi \right)$. \input{Tables/EIG_DynamicMass} \rem{label = {T:EIG_DynamicMass}} \section{\\EIG Specific Data} \label{App:EIGdata} This appendix contains general notes for some of the EIGs. \subsection*{EIG 1s-05} No optical counterpart could be identified for EIG 1s-05 (an ALFALFA object). In the Wise Observatory images, no {\Halpha} emission was identified around the ALFALFA coordinates. Within one arcminute from the ALFALFA coordinates of EIG 1s-05, all galaxies detected by SDSS have ${\SDSSg} > 21.6$, and none have spectroscopic redshifts. All GALEX detected objects in the same region have ${\magFUV} > 24$ and ${\magNUV} > 21$. EIG 1s-05 may, therefore, be a `dark galaxy' with an extremely high HI to stellar mass ratio and a very low SFR. It may also be an \markChange{ALFALFA} false detection, even though its SNR is 8.1 and it is considered a `code 1' object, i.e. a source of SNR and general qualities that make it a reliable detection \citep{2011AJ....142..170H}. \subsection*{EIG 1s-09} SDSS DR10 shows an edge-on galaxy, SDSS J112157.63+102959.6, {$\sim$13\,\ifmmode ^{\prime\prime}\else $^{\prime\prime}$\fi} east of the centre of EIG 1s-09. The angular size of SDSS J112157.63+102959.6 is similar to that of EIG 1s-09. Its magnitude is ${\SDSSg} = 18.6$, compared to ${\SDSSg} = 16.9$ of EIG 1s-09. The redshift of SDSS J112157.63+102959.6 is unknown. Although there is a possibility that SDSS J112157.63+102959.6 is a close neighbour of EIG 1s-09, this seems unreasonable, since tidal tails are neither visible in the SDSS images nor in the images shown in figure \ref{f:RHacolorImgEIG-1} \markChange{(which combine 40 minute exposure in the {\R} band and 120 minute exposure in an \Halpha band, both using the WO {1\,meter} telescope)}. \subsection*{EIG 1s-10} SDSS DR10 shows two objects at an angular distance of {$\sim$6\,\ifmmode ^{\prime\prime}\else $^{\prime\prime}$\fi} from the centre of EIG 1s-10. One is north of EIG 1s-10, and is classified as a star by SDSS DR10. The second, classified as a galaxy, is south-west of EIG 1s-10. Both objects do not have measured redshifts. 
Although there is a possibility that one or both of these are galaxies merging with EIG 1s-10, this seems unreasonable, since tidal tails are neither visible in the SDSS images nor in the images shown in figure \ref{f:RHacolorImgEIG-1} \markChange{(which combine 100 minute exposure in the {\R} band and 260 minute exposure in an \Halpha band, both using the WO {1\,meter} telescope)}. \subsection*{EIG 1s-11} The only redshift measurement found for EIG 1s-11 is from \cite{1993A&AS...98..275B} that quotes \cite{1987ApJS...63..247H}. This is a HI measurement made at the Arecibo observatory. The HI-profile for the galaxy was not published by \cite{1987ApJS...63..247H}. It is possible that the measurement ($4725 \pm 10$\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi) is a result of HI-confusion, and that EIG 1s-11 is actually a part of the Virgo cluster. \subsection*{EIG 1s-14} EIG 1s-14 is projected close to a bright foreground star, which prevented SDSS from measuring its spectrum. This also affected the accuracy of measurement of its magnitudes and {\Halpha} flux here. Its relative {\Halpha} flux uncertainty was 0.3. Its estimated uncertainty in {\SDSSu}{\SDSSg}{\SDSSr}{\SDSSi}{\SDSSz} was {0.1\,\magnitude}. \subsection*{EIG 1a-02} SDSS DR10 shows a galaxy, SDSS J005629.17+241913.3, {$\sim$2\,\ifmmode ^\prime\else $^\prime$\fi} west of EIG 1a-02 with unknown redshift. The angular size of SDSS J005629.17+241913.3 is not very different from that of EIG 1a-02. Its magnitude is ${\SDSSg} = 16.6$, compared to ${\SDSSg} = 17.0$ of EIG 1a-02. Although there is a possibility that SDSS J005629.17+241913.3 is a close neighbour of EIG 1a-02, this \markChange{is probably not the case}, since no tidal tails or other signs of interaction are visible in the SDSS images. \markChange{However, since EIG 1a-02 was not imaged using the WO {1\,meter} telescope we cannot be certain of this, as tidal tails and other fine structure features are hard to see or detect in shallow images like those of SDSS.} \subsection*{EIG 1a-04} {\Halpha} images of EIG 1a-04 showed strong star formation in LEDA 213033, a galaxy separated by {107\ifmmode ^{\prime\prime}\else $^{\prime\prime}$\fi} from EIG 1a-04. Since LEDA 213033 has no measured redshift, its distance from EIG 1a-04 is unknown. The fact that it shows emission in the two narrow {\Halpha} filters used for the measurement, indicates that its redshift is $\cz \cong 6000 \pm 1500\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi$. Therefore, the probability that it is less than {300\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi} away from EIG 1a-04 is estimated to be $\sim$0.1. No sign of interaction between EIG 1a-04 and LEDA 213033 was detected. \subsection*{EIG 2s-04} The {\Halpha} flux of EIG 2s-04 could not be measured due to a foreground star ($\SDSSr = 14.30$) at a projected distance of {12\ifmmode ^{\prime\prime}\else $^{\prime\prime}$\fi}. Although EIG 2s-04 has a GALEX measurement (in NUV only) it was not used, since it is contaminated with flux from this foreground star. \subsection*{EIG 2s-06} A foreground star of magnitude $\SDSSr = 15.6$, which is comparable to that of EIG 2s-06, is projected close to the EIG's centre. Its presence interfered with the photometric measurements, somewhat reducing the measured flux. The SDSS automatic photometry of EIG 2s-06 did not produce reliable results; the EIG was identified as two galaxies separated by the foreground star. 
GALEX measurements for this EIG were not used, since they also are contaminated by the foreground star. \markChange{The morphological type of EIG 2s-06 was not classified because of the foreground star.} \subsection*{EIG 3s-06} This is the only EIG that passes the isolation criterion using the ALFALFA dataset, but had neighbours closer than {3\,\Mpch} in the NED dataset. It was classified as part of subsample EIG-3s, because all of its NED neighbours are more than {2\,\Mpch} away from it. \section{Analysis} \label{ch:Analysis} \subsection{Colours of the EIGs} Colour-mass and colour-colour diagrams of large scale surveys show that galaxies tend to populate two main regions, the `blue cloud' of star-forming galaxies and the `red sequence' of quiescent galaxies, with a small fraction of galaxies in a `green valley' range in between \citep{2001AJ....122.1861S, 2006MNRAS.373..469B, 2014MNRAS.440..889S}. Star-forming main sequence galaxies populate the blue cloud, whether their star formation started recently or a long time ago. When star formation is quenched, galaxies leave the main sequence and their changing colours can be interpreted as a reflection of the quenching process \citep{2014MNRAS.440..889S}. Figure \ref{f:Res_ColorMass} shows a {\uMinr} to {\Mstar} colour-mass diagram of the EIGs, with the approximate edges of the `green valley' marked in green bold lines. The {\uMinr} colour was corrected both for Galactic extinction (as described in section \ref{s:ObsNPrc_AbsMagLum}), and for dust within the EIG using the \cite{2000ApJ...533..682C} extinction law (with $R'_{V} = 4.05$). \begin{figure*} \begin{centering} \includegraphics[width=14cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_colorMass.pdf} \caption [Colour to stellar mass diagram] { Colour-to-stellar mass diagram. {\uMinr}, corrected for both Galactic extinction and dust extinction within the EIG, as function of stellar mass, {\Mstar}. The green thick lines show the limits of the `green valley', based on equations (1) and (2) of \cite{2014MNRAS.440..889S}. The thin blue line shows a linear fit to the EIG data. The dashed blue lines show the $\pm 1\sigma$ deviation from this fit. Filled symbols indicate EIGs classified as early-types. \label{f:Res_ColorMass} } \end{centering} \end{figure*} It is evident from Figure \ref{f:Res_ColorMass} that most EIGs are `blue cloud' galaxies. There are no EIGs in the `red sequence' and only one that is certainly within the `green valley' (EIG 1a-04). Based on comparison between the measurements and the `green valley' limit shown in Figure \ref{f:Res_ColorMass}, we can conclude with 0.95 confidence that the probability for an EIG to be in the `blue cloud' is $>$0.76. The probability for an EIG to be in the `red sequence' is $<$0.12. A linear relation was fitted to the measured EIG points of Figure \ref{f:Res_ColorMass} (the thin blue line in the figure): $\uMinr_{\mbox{\scriptsize dust corrected}} = 0.52 \cdot \ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) - 3.61$ . The expected standard deviation in {\uMinr} around this fit is {0.23\,\magnitude} (marked in the figure by dashed blue lines). The 0.52 slope of this fit is significantly larger than the slope of the `green valley' limits, 0.25 \citep{2014MNRAS.440..889S}. It can be concluded from this that the EIGs will be closer to the `red sequence', the higher their stellar mass, {\Mstar}, is. 
EIGs with stellar mass smaller than $10^{(10.6 \pm 0.9)}\,\ifmmode \Mass_\sun\else $\Mass_\sun$\fi$ are typically `blue cloud' galaxies. A similar colour-mass relation probably holds also for less isolated galaxies, as indicated by the results of \cite{2012A&A...540A..47F} who studied the AMIGA sample. They measured the colour-luminosity correlation in different morphological subtypes, and found that the more massive spirals show redder colours, and that there is little evidence for `green valley' galaxies in the AMIGA sample.
\vspace{12pt}
Figure \ref{f:Res_NUVu_ur} shows an {\NUVMinu} vs.~{\uMinr} colour-colour diagram of the EIGs, with the approximate limit between the `blue cloud' and the `green valley' marked with a green line (the `blue cloud' is to the left of the line, and the `green valley' is to the right). These colours were corrected for Galactic extinction and dust within the EIG, as in Figure \ref{f:Res_ColorMass}. Other than distinguishing between blue and red galaxies, this diagram is useful for diagnosing how rapidly star formation quenches in `green valley' galaxies. The faster the star formation quenching is, the higher the {\NUVMinu} colour of galaxies would be as their {\uMinr} gradually increases \citep[Fig.~7]{2014MNRAS.440..889S}.
\begin{figure*}
\begin{centering}
\includegraphics[width=14cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_NUVu_ur.pdf}
\caption [{\NUVMinu} vs.~{\uMinr} colour-colour diagram of the EIGs]
{
Dust-corrected {\NUVMinu} vs.~{\uMinr} colour-colour diagram of the EIGs. The green line shows the approximated limit between the `blue cloud' and the `green valley', based on Fig.~7 of \cite{2014MNRAS.440..889S} (the `blue cloud' is to the left of the line, and the `green valley' is to the right). Filled symbols indicate EIGs classified as early-types.
\label{f:Res_NUVu_ur}
}
\end{centering}
\end{figure*}
The `green valley' galaxy, EIG 1a-04, is significantly redder in both colours, ${\uMinr} = 2.32 \pm 0.08$ and ${\NUVMinu} = 3.5 \pm 0.3$, compared to the other EIGs, as Figure \ref{f:Res_NUVu_ur} shows. A comparison to the simulated SFH scenarios from Fig.~7 of \cite{2014MNRAS.440..889S} shows that EIG 1a-04 fits a scenario of a galaxy that underwent rapid star formation quenching (with an e-folding time significantly shorter than {1\,\Gyr}) more than {1\,\Gyr} ago. The model fitted in this work (see Figure \ref{f:ResGalMC_1D}, page \pageref{f:ResGalMC_1D_2}, row 8) matches this scenario. Further evidence supporting this scenario is that EIG 1a-04 was not detected by ALFALFA. This indicates that it might have lost most of its HI content, which, given its extremely high stellar mass, was perhaps once very high. It should be noted in this context that EIG 1a-04 is possibly not an extremely isolated galaxy (a possible false positive) as discussed in Appendix \ref{App:EIGdata}. Its {\Halpha} images show that LEDA 213033, a galaxy separated by {$\sim$2\ifmmode ^\prime\else $^\prime$\fi} from it, may be a neighbour less than {300\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi} away.
Other EIGs with less than 0.90 confidence of being in the `blue cloud' are 2s-02, 2s-07, and 1s-11 (calculated from data shown in Figure \ref{f:Res_ColorMass}). From Figure \ref{f:Res_NUVu_ur} it is evident that EIG 1s-11 (as well as EIG 2a-01) deviates towards the red section. EIG 2s-02 is somewhat redder in {\uMinr} or bluer in {\NUVMinu} than the bulk.
EIG 2s-07 seems to be well within the bulk of EIGs in Figure \ref{f:Res_NUVu_ur}, which indicates that it is a regular `blue cloud' galaxy (data of Figure \ref{f:Res_ColorMass} indicates 0.77 probability that it is in the `blue cloud'). \subsection{Comparison to the main sequence of star-forming galaxies} Most star-forming galaxies exhibit a tight, nearly linear correlation between galaxy stellar mass and SFR (on a log-log scale; \citealt{2007ApJ...660L..43N}). This correlation is termed `the main sequence of star-forming galaxies' (or simply `the main sequence'). Up to redshifts \z$\sim$2 the correlation changes only in its normalization \citep{2012ApJ...752...66L, 2014MNRAS.440..889S}. Models of \cite{2010ApJ...718.1001B} and \cite{2013ApJ...772..119L} suggest that the main sequence is a result of an equilibrium between galaxy inflows and outflows. For a specific range of redshift, \z, and stellar mass, \Mstar, the main sequence can be expressed as: \begin{equation} \ifmmode \mbox{log}\else log\fi \left( \frac{\SFR}{\MsunPerYr} \right) = \alpha \cdot \ifmmode \mbox{log}\else log\fi \left( \frac{\Mstar}{\ifmmode \Mass_\sun\else $\Mass_\sun$\fi} \right) + \beta \label{e:Intr_MS} \end{equation} where: \begin{description} \item[$\alpha$, $\beta$] are the free parameters fitted to the observed data. \item[] \end{description} The $\alpha$ and $\beta$ parameters somewhat vary with redshift. \cite{2007ApJS..173..267S} and \cite{2012ApJ...756..113H} indicated that below $\ifmmode \mbox{log}\else log\fi \left( \Mstar \right) \sim 9.5$ the slope, $\alpha$, of the main sequence increases. \cite{2012ApJ...756..113H} have studied a sample of local Universe ALFALFA galaxies with SDSS and GALEX photometry and have found the following main sequence relation: \begin{equation} \begin{IEEEeqnarraybox*}{rCl} \alpha & = & \begin{cases} 0.851 & \text{ for\quad} \ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) \leq 9.5 \\ 0.241 & \text{ for\quad} \ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) > 9.5 \end{cases} \\ \vspace{6pt} \\ \beta & = & \begin{cases} -8.207 & \text{ for\quad} \ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) \leq 9.5 \\ -2.402 & \text{ for\quad} \ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) > 9.5 \end{cases} \label{e:Intr_MS_ALFALFA}\rem{based on equation. 8 of \cite{2012ApJ...756..113H}} \end{IEEEeqnarraybox*} \end{equation} \begin{figure*} \begin{centering} \includegraphics[width=13.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_MainSeqSFR.pdf} \includegraphics[width=13.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_MainSeqSSFR.pdf} \caption [EIGs compared to the main sequence of star-forming galaxies] { EIGs compared to the main sequence of star-forming galaxies \markChange{({\SFR} vs. {\Mstar} in the top chart and {\SSFR} vs. {\Mstar} in the bottom chart)}. The solid blue lines are the best fit main sequence found by \cite{2012ApJ...756..113H} and described by \eqref{e:Intr_MS_ALFALFA}. The dashed blue lines indicate deviations of $\pm\,0.5\,\ifmmode \mbox{dex}\else dex\fi$ from the main sequence (a typical $1\,\sigma$ deviation of main sequence fits). Filled symbols indicate EIGs classified as early-types. 
\label{f:Res_MainSeq} } \end{centering} \end{figure*} Figure \ref{f:Res_MainSeq} shows the current SFR (upper plot) and SSFR (lower plot) of the EIGs as function of stellar mass, \Mstar, compared to the `main sequence' of \cite{2012ApJ...756..113H}. It indicates that, in general, EIGs fit the `main sequence of star-forming galaxies'. A fraction of $0.88^{+0.08}_{-0.16}$ of the EIGs fit the `main sequence' to within $\pm 0.5\,\ifmmode \mbox{dex}\else dex\fi$ (assuming that whether an EIG deviates by more than $\pm 0.5\,\ifmmode \mbox{dex}\else dex\fi$ or not follows a binomial distribution, and using a Wilson score interval with 0.95 confidence level). On average, the SFR of the EIGs is $0.1\,\ifmmode \mbox{dex}\else dex\fi$ lower than the main sequence, with a standard deviation of $0.4\,\ifmmode \mbox{dex}\else dex\fi$. This deviation from the main sequence is similar for all EIG sub-samples (1, 2 and 3). It may indicate a tendency of the EIGs to have slightly lower SFRs compared to main sequence galaxies. However, it could also result from differences between the SFR and {\Mstar} estimation methods used by \cite{2012ApJ...756..113H} and the ones used here. \markChange{\cite{2012ApJ...756..113H} derived stellar masses and SFRs by SED fitting of the seven GALEX and SDSS bands\rem{ (as described in section 4.1 of \citealt{2012AJ....143..133H})}, while in this work the {\Halpha} emission line and 2MASS data were also used when available for the SED fitting. Furthermore, in this work the SED fitting was used only for {\Mstar} estimation. The SFR was estimated based on {\Halpha} and WISE fluxes.} The EIGs that deviate by more than $0.5\,\ifmmode \mbox{dex}\else dex\fi$ in {\SFR} from the main sequence are EIG 1s-11 ($-0.7 \pm 0.2\,\ifmmode \mbox{dex}\else dex\fi$), EIG 2s-02 ($-0.6 \pm 0.1\,\ifmmode \mbox{dex}\else dex\fi$), EIG 2s-08 ($1.49 \pm 0.09\,\ifmmode \mbox{dex}\else dex\fi$) and EIG 2a-02 ($-0.73 \pm 0.06\,\ifmmode \mbox{dex}\else dex\fi$). EIG 2s-08 is, therefore, the only EIG known to deviate significantly from the main sequence. It has the highest {\SSFR} of all the measured EIGs, $\ifmmode \mbox{log}\else log\fi \left( \SSFR/ \yr^{-1} \right) = -7.80 \pm 0.09$, as well as the highest {\Halpha} EW ($460 \pm 39\,\Angst$). It also has the lowest stellar mass, $\ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) = 7.28 \pm 0.06$, as well as the lowest model estimated age, $\ifmmode \mbox{log}\else log\fi(Age_{1}/\yr) = 8.4 \pm 0.2$. Based on this we conclude (with 0.95 confidence) that the probability for an EIG to have SFR that deviates significantly from the main sequence is $<0.16$. For the LOG catalogue of isolated galaxies \cite{2013AstBu..68..243K} measured {\SSFR} values as function of {\Mstar} lower than those measured here for the EIGs and lower than the main sequence as measured by \cite{2012ApJ...756..113H}. This may be a result of their different method of estimating {\Mstar}. \cite{2013AstBu..68..243K} used {\Ks} band measurements assuming a {\Ks} luminosity to stellar mass ratio as that of the Sun, as opposed to fitting a model to measurements in several bands as was done here and by \cite{2012ApJ...756..113H}. They also observed that almost all of the LOG galaxies have $\ifmmode \mbox{log}\else log\fi \left( \SSFR / \yr^{-1} \right)$ lower than $-9.4$ \citep[Fig. 4]{2013AstBu..68..243K}. 
As the lower plot of Figure \ref{f:Res_MainSeq} indicates, the $\ifmmode \mbox{log}\else log\fi \left( \SSFR / \yr^{-1} \right)$ limit we measured for the EIGs is $-8.9$ with the exception of EIG 2s-08 that is above this value. \subsection{Mass histograms} \label{s:Res_MassHisgrms} The stellar mass, {\Mstar}, HI mass, {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}, and dynamic mass, $\ifmmode M \else $M$ \fi_{dyn,24}$, of EIGs were analysed in a fashion similar to that of the analysis of the dark matter (DM) subhalo mass, {\Mhalo}, of the Millennium II-SW7 simulation (Mill2; \citealt{2013MNRAS.428.1351G}) described in section 3.5 of \markChange{SB16}. Figures \ref{f:Res_MstarHist}, \ref{f:Res_M_HI_Hist} and \ref{f:Res_Mdyn24Hist} show histograms of {\Mstar}, {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} and $\ifmmode M \else $M$ \fi_{dyn,24}$ (respectively) for the EIGs of the Spring and Autumn sky regions. Figure \ref{f:Res_MStarHI_Hist} shows histograms of $\left( \Mstar + \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi \right)$ of all EIGs for which {\Mstar} was estimated. For EIGs not detected by ALFALFA, $\Mstar$ was used as representing $\left( \Mstar + \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi \right)$. Thus the $\left( \Mstar + \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi \right)$ statistics presented here, is expected to be slightly biased towards lower masses and a wider distribution. Similarly, Figure \ref{f:Res_M_HI_Hist} includes only EIGs that were detected by ALFALFA, and is, therefore, expected to be slightly biased towards higher masses and a narrower distribution. The right-hand side charts of figures \ref{f:Res_MstarHist}, \ref{f:Res_M_HI_Hist}, \ref{f:Res_MStarHI_Hist} and \ref{f:Res_Mdyn24Hist} may be compared to \markChange{Figure \ref{f:MhaloFromEIG_I}, an adaptation of Figure 6 of SB16,} which shows the simulation-based estimates of {\Mhalo} calculated for the combination of subsamples EIG-1 and EIG-2. \begin{figure*} \begin{centering} \includegraphics[width=14cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_MstarHist.pdf} \caption [Stellar mass histograms for EIGs] { Stellar mass, {\Mstar}, histograms for Spring and Autumn EIGs. The left chart shows the histograms for all the EIGs. The right chart shows the histograms of subsamples EIG-1 and EIG-2. \label{f:Res_MstarHist} } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=14cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_M_HI_Hist.pdf} \caption [HI mass histograms for EIGs] { HI mass, {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}, histograms for Spring and Autumn EIGs. The left chart shows the histograms for all the EIGs. The right chart shows the histograms of subsamples EIG-1 and EIG-2. \label{f:Res_M_HI_Hist} } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=14cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_MStarHI_Hist.pdf} \caption [Stellar plus HI mass histograms for EIGs] { Stellar plus HI mass, $\left( \Mstar + \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi \right)$, histograms for Spring and Autumn EIGs. The left chart shows the histograms for all the EIGs. The right chart shows the histograms of subsamples EIG-1 and EIG-2. \label{f:Res_MStarHI_Hist} } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=14cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_Mdyn24Hist.pdf} \caption [Dynamic mass histograms for EIGs] { $\ifmmode M \else $M$ \fi_{dyn,24}$ histograms for Spring and Autumn EIGs. The left chart shows the histograms for all the EIGs. 
The right chart shows the histograms of subsamples EIG-1 and EIG-2. \label{f:Res_Mdyn24Hist} } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=6.8cm,trim=0mm 0mm 0mm 0, clip]{Figs/HaloMass_for_EIG_II_Paper.pdf} \caption [Halo mass histogram for mock EIGs (Mill2 simulation)] { \markChange{Simulation-based halo mass, {\Mhalo}, histogram calculated for the combination of subsamples EIG-1 and EIG-2. Adapted from Figure 6 of SB16.} \label{f:MhaloFromEIG_I} } \end{centering} \end{figure*} As can be seen in these figures, the distributions of both {\Mstar} and $\left( \Mstar + \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi \right)$ are more scattered than those of {\Mhalo}. For EIGs in the Spring sky region the standard deviation of $\ifmmode \mbox{log}\else log\fi \left[ \Mhalo / \left( \Msunh \right) \right]$ is $\sim$0.3, compared to $\sim$0.6 for $\ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ and $\sim$0.7 for $\ifmmode \mbox{log}\else log\fi \left[ \left( \Mstar + \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi \right) / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right]$. For the Autumn EIGs the standard deviation of $\ifmmode \mbox{log}\else log\fi \left[ \Mhalo / \left( \Msunh \right) \right]$ is $\sim$0.4, compared to $\sim$1.1 for $\ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ and $\sim$0.8 for $\ifmmode \mbox{log}\else log\fi \left[ \left( \Mstar + \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi \right) / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right]$. \markChange{Such higher scatter of {\Mstar} compared to {\Mhalo} is expected from the stellar mass to halo mass (SMHM) relation derived from simulations (e.g., Fig. 2 of \citealt{2009ApJ...696..620C},\rem{ Fig. 5 of \citealt{2010ApJ...717..379B},} Fig. 7 of \citealt{2013ApJ...770...57B}, Fig. 2 of \citealt{2015A&A...576L...7D}, Fig. 5 of \citealt{2015ApJ...799..130R} and Fig. 1 of \citealt{2017MNRAS.465.2381M}). The vast majority of EIGs have masses $\ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) < 10.5$ and $\ifmmode \mbox{log}\else log\fi \left[ \Mhalo / \left( \Msunh \right) \right] < 12$ in which the SMHM relation's slope is large, i.e. in which {\Mstar} varies faster than {\Mhalo}.} The distribution of the HI mass, {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}, seems to have a width similar to that of {\Mhalo} (the standard deviation of $\ifmmode \mbox{log}\else log\fi \left( \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ is $\sim$0.3 for the Spring EIGs and $\sim$0.4 for the Autumn EIGs). The distribution of the simulation predicted {\Mhalo} \markChange{(shown in Figure \ref{f:MhaloFromEIG_I})} is very different from the distribution of $\ifmmode M \else $M$ \fi_{dyn,24}$ (shown in Figure \ref{f:Res_Mdyn24Hist}). The average $\ifmmode \mbox{log}\else log\fi \left( \ifmmode M \else $M$ \fi_{dyn,24} / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ is $\sim$10.2 for the Spring and $\sim$9.9 for the Autumn. This is about an order of magnitude lower than the average $\ifmmode \mbox{log}\else log\fi \left[ \Mhalo / \left( \Msunh \right) \right]$ (11.0 for Spring and 11.3 for Autumn). 
The standard deviation of $\ifmmode \mbox{log}\else log\fi \left( \ifmmode M \else $M$ \fi_{dyn,24} / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ is $\sim$0.3 for the Spring EIGs and $\sim$0.9 for the Autumn EIGs (compared to $\sim$0.3 for the Spring and $\sim$0.4 for the Autumn $\ifmmode \mbox{log}\else log\fi \left[ \Mhalo / \left( \Msunh \right) \right]$). This difference between the simulated distribution of {\Mhalo} and the measured distribution of $\ifmmode M \else $M$ \fi_{dyn,24}$ may be the result of a large discrepancy between $\ifmmode M \else $M$ \fi_{dyn,24}$ and the actual dynamic mass (had it been measured using HI rotation curves), a large discrepancy of the simulation results from reality, or both. The average $\ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ for EIG-1 and EIG-2 is $\sim$8.8 (Spring) and $\sim$9.4 (Autumn). From a comparison to the average $\ifmmode \mbox{log}\else log\fi \left[ \Mhalo / \left( \Msunh \right) \right]$ (11.0 for Spring, and 11.3 for Autumn) we conclude that the stellar masses, {\Mstar}, of EIGs are {$\sim$2.4\,\ifmmode \mbox{dex}\else dex\fi} (Spring) and {$\sim$2.0\,\ifmmode \mbox{dex}\else dex\fi} (Autumn) lower on average than the EIGs' DM masses. This compares to a {$\sim$0.7\,\ifmmode \mbox{dex}\else dex\fi} difference between the average baryonic and DM densities in the Universe (according to WMAP7). If the dark-to-baryonic matter ratio of isolated galaxies is similar to the Universal average, then the geometric average of the fraction of baryonic mass turned into stars is {$\sim$0.02} (Spring) and {$\sim$0.05} (Autumn). \vspace{12pt} It is interesting to compare the stellar and HI content of galaxies from subsample EIG-1 with those of subsample EIG-2. As described in section \ref{sec:Introduction}, the EIG-1 subsample contains galaxies that passed the isolation criterion using both NED and ALFALFA data. The EIG-2 subsample contains galaxies that passed the criterion using NED data, but did not pass it using ALFALFA data (i.e., galaxies that have neighbours closer than {3\,\Mpch} with sufficient HI content to be detected by ALFALFA). Therefore, the distance to the closest ALFALFA neighbour for EIG-1 galaxies is {$>$3\,\Mpch} by definition. For EIG-2 galaxies this distance is in the range {0.66--2.74\,\Mpch} ({0.9--3.9\,\Mpc}; see Table 8 of \citealt{2016MNRAS.456..885S}). Figure \ref{f:Res_EIG-1_2_Hist} shows histograms of {\Mstar} (left charts), and {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} (right charts) comparing subsample EIG-1s with EIG-2s (upper charts) and subsample EIG-1a with EIG-2a (lower charts). As can be seen, the {\Mstar} distribution of EIG-1s is similar to that of EIG-2s, and the distribution of EIG-1a is similar to that of EIG-2a. The average $\ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ of the measured EIG-1s galaxies is $8.8 \pm 0.1$ with a standard deviation of {0.5}. This compares to an average of $8.7 \pm 0.3$ with standard deviation {0.8} for the measured EIG-2s galaxies. The average $\ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ of the measured EIG-1a galaxies is $9.3 \pm 0.6$ with a standard deviation of {1.3}. This compares to an average of $9.6 \pm 0.4$ with standard deviation {0.9} for the measured EIG-2a galaxies. These differences are not statistically significant.
\begin{figure*} \begin{centering} \includegraphics[width=14cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_EIG-1_2_Hist.pdf} \caption [Stellar and HI content histograms comparing subsamples EIG-1 and EIG-2] { {\Mstar} (left charts), and {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} (right charts) histograms comparing subsample EIG-1s with EIG-2s (upper charts) and EIG-1a with EIG-2a (lower charts). One EIG-1s galaxy, one EIG-2s galaxy and two EIG-1a galaxies are not included in the left charts, because they do not have {\Mstar} data. Four EIG-1s galaxies, three EIG-2s galaxies and two EIG-1a galaxies are not included in the right charts, because they were not detected by ALFALFA. \label{f:Res_EIG-1_2_Hist} } \end{centering} \end{figure*} In contrast to this, the {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} distributions differ significantly between the EIG-1 and the EIG-2 subsamples. The average $\ifmmode \mbox{log}\else log\fi \left( \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ of the EIG-1s galaxies detected by ALFALFA is $9.13 \pm 0.09$ (with standard deviation {0.3}). This compares to an average of $9.5 \pm 0.1$ (with standard deviation {0.2}) for the ALFALFA-detected EIG-2s galaxies; a $2.3\,\sigma$ difference. The difference in the average $\ifmmode \mbox{log}\else log\fi \left( \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ between the Autumn subsamples is the same as for the Spring subsamples ({$\sim$0.3}), but with lower statistical significance ($1.2\,\sigma$) due to the smaller number of measured galaxies. The average $\ifmmode \mbox{log}\else log\fi \left( \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ of the EIG-1a galaxies detected by ALFALFA is $9.1 \pm 0.2$ (with standard deviation {0.4}). This compares to an average of $9.4 \pm 0.2$ (with standard deviation {0.3}) for the ALFALFA-detected EIG-2a galaxies. Combining the differences measured for the Spring and Autumn samples gives an expected difference between the $\ifmmode \mbox{log}\else log\fi \left( \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ of EIG-2 and EIG-1 galaxies of $0.3 \pm 0.1\,\ifmmode \mbox{dex}\else dex\fi$. Therefore, from the data of the ALFALFA-detected galaxies we conclude with $2.5\,\sigma$ significance that EIG-2 galaxies have higher {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}, on average, than EIG-1 galaxies (by a factor of $2.1 \pm 0.6$). The actual average $\ifmmode \mbox{log}\else log\fi \left( \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ values of all EIGs in these subsamples are expected to be lower than the above, since the EIGs not detected by ALFALFA are expected to have low {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} values. However, adding the {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} values of those EIGs is not expected to make the distributions of the EIG-1 and EIG-2 subsamples significantly more similar. We can therefore conclude that extremely isolated galaxies that have neighbours with significant {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} content at distances {$<$3\,\Mpch} tend to have higher {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} compared to extremely isolated galaxies lacking such neighbours. The HI content of galaxies, therefore, seems to be environmentally dependent even in extremely isolated regions.
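The subsample comparisons above rest on differences of mean log-masses and on converting a dex offset into a linear factor. A minimal Python sketch of this type of calculation follows; the inverse-variance weighting and first-order error propagation are our assumptions about the procedure, the inputs are the rounded averages quoted in the text, and the output therefore differs slightly from the quoted $0.3 \pm 0.1\,\ifmmode \mbox{dex}\else dex\fi$ and $2.5\,\sigma$.
\begin{verbatim}
# Minimal sketch: combine the Spring and Autumn differences of mean
# log(M_HI) between EIG-2 and EIG-1 and convert the offset to a linear
# factor.  Inverse-variance weighting and first-order error propagation
# are assumed; the inputs are the rounded averages quoted in the text,
# so the output differs slightly from the quoted 0.3 +- 0.1 dex (2.5 sigma).
import math

def difference(mean_a, err_a, mean_b, err_b):
    """(a - b) and its propagated uncertainty."""
    return mean_a - mean_b, math.hypot(err_a, err_b)

def weighted_mean(pairs):
    """Inverse-variance weighted mean of (value, error) pairs."""
    weights = [1.0 / err**2 for _, err in pairs]
    mean = sum(w * val for (val, _), w in zip(pairs, weights)) / sum(weights)
    return mean, 1.0 / math.sqrt(sum(weights))

spring = difference(9.5, 0.1, 9.13, 0.09)     # EIG-2s minus EIG-1s [dex]
autumn = difference(9.4, 0.2, 9.1, 0.2)       # EIG-2a minus EIG-1a [dex]
delta, sigma = weighted_mean([spring, autumn])

factor = 10.0 ** delta                         # linear mass ratio
factor_err = math.log(10.0) * factor * sigma   # first-order propagation
print(f"offset = {delta:.2f} +- {sigma:.2f} dex ({delta/sigma:.1f} sigma), "
      f"factor = {factor:.1f} +- {factor_err:.1f}")
\end{verbatim}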
\subsection{Scaling relations} The HI gas content can be combined with the {\SFR}-to-{\Mstar} relation of Figure \ref{f:Res_MainSeq} in an attempt to investigate its connection to the main sequence of star-forming galaxies. This is done by breaking the {\SFR}-to-{\Mstar} relation into a relation between the star formation and the HI gas content (Figure \ref{f:Res_SF_MHI}) and a relation between the HI content and {\Mstar} (Figure \ref{f:Res_HI_Mstar}). Figure \ref{f:Res_SF_MHI} shows {\SFR} and \markChange{star formation efficiency ($\SFE \equiv \SFR / \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi$)} vs.~the HI mass, {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}, of the EIGs. Figure \ref{f:Res_HI_Mstar} shows {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} and $f_{HI} \equiv \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \Mstar$ vs.~the stellar mass, {\Mstar}, of the EIGs. \begin{figure*} \begin{centering} \includegraphics[width=14cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_SFR_MHI.pdf} \includegraphics[width=14cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_SFE_MHI.pdf} \caption [Star formation vs.~HI mass] { Star formation ({\SFR} in the upper chart, and {\SFE} in the lower chart) vs.~HI mass, {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}, of the EIGs. The green thick dashed lines show the fit found by \cite[Fig.~4.b]{2012ApJ...756..113H} for {\SFR} to {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} of star-forming ALFALFA galaxies. Filled symbols indicate EIGs classified as early-types. \label{f:Res_SF_MHI} } \end{centering} \end{figure*} \begin{figure*} \begin{centering} \includegraphics[width=14cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_M_HI_Mstar.pdf} \includegraphics[width=14cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_f_HI_Mstar.pdf} \caption [HI content vs.~stellar mass] { HI content ({\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} in the upper chart, and $f_{HI}$ in the lower chart) vs.~stellar mass, {\Mstar}, of the EIGs. The blue solid lines \markChange{show} a linear fit to the EIGs' {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} to {\Mstar} data. The dashed blue lines show the $\pm 1\sigma$ deviation from this fit. The green thick dashed lines show the fit found by \cite[eq.~1]{2012ApJ...756..113H} for {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} to {\Mstar} of star-forming ALFALFA galaxies. Filled symbols indicate EIGs classified as early-types. \label{f:Res_HI_Mstar} } \end{centering} \end{figure*} The average deviation of EIGs from the {\SFR}-to-{\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} fit of \cite{2012ApJ...756..113H} is $-0.04 \pm 0.09\,\ifmmode \mbox{dex}\else dex\fi$ (with standard deviation of {0.5\,\ifmmode \mbox{dex}\else dex\fi}). Despite the EIGs' small range of $\ifmmode \mbox{log}\else log\fi \left( \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$, this indicates that the {\SFR}-to-{\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} relation of EIGs is similar to that of the general population of star-forming galaxies. The near-unity slope in the {\SFR}-to-{\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} fit translates to a near-zero slope in the {\SFE}-to-{\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} relation (lower chart of Figure \ref{f:Res_SF_MHI}), implying that the {\SFE} may be (statistically) independent of {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}. To test this hypothesis, the Pearson product-moment correlation coefficient between {\SFE} and {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} of the EIGs was calculated. The resultant correlation coefficient, $-0.14$, is insignificant.
If {\SFE} and {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} are not correlated, there is a 0.52 chance of finding a correlation coefficient measurement at least this large. Therefore, the EIGs' measured data support the independence of {\SFE} and {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}. The following linear relation was fitted to the measured {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} vs.~{\Mstar} EIG points (marked by blue solid lines in Figure \ref{f:Res_HI_Mstar}): \begin{equation} \ifmmode \mbox{log}\else log\fi \left( \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) \cong 0.34 \cdot \ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) + 6.20 \label{e:Res_MHI_Mstar_fit} \end{equation} The expected standard deviation in $\ifmmode \mbox{log}\else log\fi \left( \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ around this fit is marked in the figure by dashed blue lines ({0.25} on average). The average $\ifmmode \mbox{log}\else log\fi \left( \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ deviation of EIGs from the {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}-to-{\Mstar} fit of \cite{2012ApJ...756..113H} is $-0.16 \pm 0.05$, implying that for a given stellar mass, {\Mstar}, the HI mass, {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}, of EIGs is slightly lower, on average, than that of the general population of star-forming galaxies. However, some of this deviation may be a result of the difference between the {\Mstar} estimation method used by \cite{2012ApJ...756..113H} and the one used here. \cite{2011Ap.....54..445K} analysed the {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}-to-{\Mstar} relation for the 2MIG catalogue of isolated galaxies. The galaxies of the 2MIG catalogue have a range of {\Mstar} higher than that of the EIG sample (with the bulk in the range $9.5 < \ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) < 11.5$). Most of these higher mass and less isolated galaxies have $\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi < \Mstar$ \citep[Fig. 10]{2011Ap.....54..445K}. This is in agreement with the results shown in Figure \ref{f:Res_HI_Mstar}, where for $\ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) > 9.5$ most galaxies have $\ifmmode \mbox{log}\else log\fi \left( f_{HI} \right) < 0$. For their higher {\Mstar} sample, \cite{2011Ap.....54..445K} found a slope ($1.00 \pm 0.04$) in the linear fit of $\ifmmode \mbox{log}\else log\fi \left( \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ as a function of $\ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$, which is larger than the slope found here.
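For convenience, \eqref{e:Res_MHI_Mstar_fit} is trivial to evaluate numerically. The short sketch below does so and also returns the implied HI fraction; the example stellar mass is an arbitrary assumption.
\begin{verbatim}
# Minimal sketch: evaluate the fitted relation of eq. (e:Res_MHI_Mstar_fit),
# log(M_HI/Msun) ~= 0.34 * log(M*/Msun) + 6.20, and the implied HI fraction
# f_HI = M_HI / M*.  The mean scatter around the fit is about 0.25 dex.
# The example stellar mass is arbitrary.
def log_mhi_from_mstar(log_mstar):
    return 0.34 * log_mstar + 6.20

log_mstar = 9.0                               # example: log(M*/Msun)
log_mhi = log_mhi_from_mstar(log_mstar)
log_fhi = log_mhi - log_mstar                 # log(f_HI) = log(M_HI) - log(M*)
print(f"log M_HI = {log_mhi:.2f}, log f_HI = {log_fhi:.2f}")
\end{verbatim}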
\vspace{12pt} The following log-log predictor for {\SFR}, using {\Mstar} and {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}, was fitted to the EIG data by a partial least-squares regression: \begin{equation} \begin{IEEEeqnarraybox*}{lCl} \ifmmode \mbox{log}\else log\fi \left( \SFR / \MsunPerYr \right) & \cong & 0.580 \cdot \ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) + \\ & & 0.209 \cdot \ifmmode \mbox{log}\else log\fi \left( \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) - 7.95 \end{IEEEeqnarraybox*} \label{e:Res_SFR_predictor} \end{equation} The EIGs' {\SFR}s are shown vs.~this predictor in Figure \ref{f:Res_SFR_predictor}. The expected standard deviation around this predictor is {0.29\,\ifmmode \mbox{dex}\else dex\fi} (shown in the figure as dashed blue lines). It is somewhat lower than the {0.5\,\ifmmode \mbox{dex}\else dex\fi} {\SFR}-to-{\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} standard deviation around the \cite{2012ApJ...756..113H} relation, and the {0.4\,\ifmmode \mbox{dex}\else dex\fi} standard deviation in the {\SFR} to main sequence difference (Figure \ref{f:Res_MainSeq}). Therefore, \eqref{e:Res_SFR_predictor}, which considers both {\Mstar} and {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi}, is a more accurate estimator of {\SFR} than predictors based on {\Mstar} or {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} alone. It applies to EIGs, but is possibly also a good estimate for galaxies in denser regions that have not interacted with neighbours in the last few {\Gyr}. \begin{figure*} \begin{centering} \includegraphics[width=14cm,trim=0mm 0mm 0mm 0, clip]{Figs/Res_SFR_predictor.pdf} \caption [{\SFR} predictor] { Star formation rate, {\SFR}, vs.~the {\Mstar} and {\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi} partial least-squares regression predictor of \eqref{e:Res_SFR_predictor}. The blue solid line shows the one-to-one line. The dashed blue lines show the $\pm 1\sigma$ deviation from the one-to-one line. Filled symbols indicate EIGs classified as early-types. \label{f:Res_SFR_predictor} } \end{centering} \end{figure*} \subsection{Morphology} \label{s:Res_Morphology} It is evident from Table \ref{T:Res_Morphology} that EIGs are typically late-types (20 were classified as late-types, compared to six unknown and five early-types). Based on the 20 EIGs (65 per cent) that were classified as late-types, we conclude with 0.95 confidence that the probability for an EIG to be a late-type is $\geq$0.47 (using a Wilson score interval for a binomial distribution). Similarly, based on the five EIGs (16 per cent) classified as early-types, we conclude with 0.95 confidence that the probability for an EIG to be an early-type is $\geq$0.07. For comparison, \cite{2012AJ....144...16K}\rem{ and \cite{2012PhDT.........5K}} identified three early-type galaxies (5 per cent) among the 60 isolated galaxies of the VGS sample. \cite{2006A&A...449..937S} found in the most isolated sub-sample of AMIGA that\rem{ 82 per cent of the galaxies are late-types (Sa--Sd) while} 14 per cent are early-types (E--S0). \cite{2013AstBu..68..243K} found that 5 per cent of the LOG sample galaxies are early-types (E--S0/a). Of the five early-types, four are part of the EIG-1 subsample (galaxies that passed the isolation criterion using both NED and ALFALFA HI data as described in section \ref{sec:Introduction}).
This makes the early-type fraction significantly larger in the EIG-1 subsample (27 per cent) than in the entire EIG sample (16 per cent). The only early-type galaxy that is not part of the EIG-1 subsample, EIG 3s-01, passed the isolation criterion in the ALFALFA $\alpha$.40 dataset (but was slightly short of passing it in the NED dataset). This means that none of the five early-type EIGs have ALFALFA (high HI content) neighbours within {3\,\Mpch}. We can, therefore, conclude (with 0.94 confidence\rem{1-0.5^4~=0.94}) that EIGs lacking high HI content neighbours within {3\,\Mpch} have a higher tendency to be early-types, compared to EIGs that have such neighbours. As discussed in section \ref{s:Res_MassHisgrms}, such EIGs with no high HI content neighbours tend to have a lower HI content compared to ones with high HI content neighbours. This lower HI content may be linked to the fact that EIG-1 galaxies show a higher tendency to be early-types. From the EIG-1 subsample classification it can be concluded that for a galaxy that passes the strict isolation criterion of the EIG-1 subsample, the probability of being an early-type is $\geq$0.11 (using a Wilson score interval with 0.95 confidence). \vspace{12pt} Only one of the early-types, EIG 1a-04, is not classified as blue in Figure \ref{f:Res_ColorMass} but is rather a `green valley' galaxy. Three others (1s-02, 1s-12 and 3s-01) are blue, and the colour classification of the last, EIG 3s-01, is unknown. This means that an extremely isolated early-type galaxy has a probability $\geq$0.30 of being blue (with 0.95 confidence). This may be compared to the $0.057 \pm 0.004$ fraction of blue galaxies found by \cite{2009MNRAS.396..818S} in the low-redshift early-type galaxy population\markChange{, and to the $\sim$0.20 fraction of blue galaxies found by \cite{2016A&A...588A..79L} in their sample of isolated early-types}. All five early-type EIGs are within {0.5\,\ifmmode \mbox{dex}\else dex\fi} of the main sequence in Figure \ref{f:Res_MainSeq}. From this we conclude that an extremely isolated early-type galaxy has a probability $\geq$0.57 (with 0.95 confidence) of fitting the main sequence of star-forming galaxies to within {0.5\,\ifmmode \mbox{dex}\else dex\fi}. \vspace{12pt} A large fraction of the EIGs show asymmetric star formation, and many show strong compact star-forming regions (see Figures \ref{f:RHacolorImgEIG-1}, \ref{f:RHacolorImgEIG-2} and \ref{f:RHacolorImgEIG-3} and Tables \ref{T:EIG_PlgnHaFlux} and \ref{T:EIG_PlgnEW}). This indicates that star formation is a stochastic process that may occur unevenly across a galaxy at a given time, even in the most isolated galaxies. Sources of the randomness of star formation may include uneven `fuelling' of gas, and collisions with very small satellites (e.g., with $\Mhalo < 10^{9}\,\Msunh$) that could not be detected around the EIGs studied here. \section{The Sample} \label{sec:Sample} We have chosen the sample of EIGs using a simple isolation criterion: a galaxy is considered an EIG and is included in the sample if it has no known neighbours closer than \markChange{a certain neighbour distance limit in 3D redshift space ({200\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi} or {300\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi}, as explained below)} and if its redshift is in the range $2000<\cz<7000\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi$. No magnitude, HI mass or size limit was used in the selection of candidate neighbours.
The use of such limits would have somewhat reduced the level of isolation of the sample (especially for the closer EIGs) and therefore was not preferred. Not using such limits, however, somewhat complicates the analysis of the sample's isolation level (described in section 3 of SB16 \markChange{and summarized below}). \markChange{It also causes the sensitivity limits (listed below) and the isolation level to depend on redshift. Higher redshift EIGs are less isolated on average than lower redshift EIGs. For this reason, the redshift of EIGs was limited to {7000\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi}.} One of the unique advantages of the EIG sample we study here is that, apart from the optical redshift data commonly used to estimate environment density, it also utilizes HI redshifts from the Arecibo Legacy Fast ALFA (ALFALFA) survey. The ALFALFA survey is a second-generation untargeted extragalactic HI survey initiated in 2005 \citep{2005AJ....130.2598G, 2007AJ....133.2569G, 2007AJ....133.2087S}. This survey utilizes the superior sensitivity and angular resolution of the Arecibo 305\,m radio telescope to conduct the deepest ever census of the local HI Universe. ALFALFA was particularly useful in verifying the isolation of the target galaxies, since by being an HI survey it easily measures redshifts of low surface brightness galaxies (LSBs) and other low-luminosity late-type neighbours that are often difficult to detect optically but abound with HI\rem{\citep{2013JApA...34...19D}}. \markChange{ The ALFALFA dataset we used was the ``$\alpha$.40 HI source catalogue'' \citep[$\alpha$.40;][]{2011AJ....142..170H}. This catalogue covers 40 per cent of the final ALFALFA survey area ($\sim$2800\,\sqDeg) and contains 15855 sources. The sensitivity limit of the ALFALFA dataset is given by eqs. (6) and (7) of \cite{2011AJ....142..170H} as a function of the velocity width of the HI line profile, {\Whalf}. For a typical value of $\Whalf = 100\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi$ the sensitivity limit of the ALFALFA dataset is {$\sim$0.6\,\Jy\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi}. For the redshift range of the EIG sample this translates to an HI mass sensitivity limit, $\ifmmode \mbox{log}\else log\fi \left( \ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$, of {$\sim$8.0} (for $\cz = 2000\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi$) to {$\sim$9.1} (for $\cz = 7000\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi$). } The search criterion was applied to two sky regions, one in the spring sky (Spring) and the other in the autumn sky (Autumn), as described in Table \ref{T:Sample-Regions}. These particular regions were selected since they are covered by the $\alpha$.40 ALFALFA catalogue \citep{2011AJ....142..170H}. Both regions include mainly high Galactic latitudes. The Spring region is almost fully covered by spectroscopic data in SDSS DR10 \citep{2014ApJS..211...17A}.
\begin{ctable} [ caption = Sample search regions, doinside = \small, label = T:Sample-Regions ] {@{}cccc@{}} {} { \FL & {\RA} (J2000) & {\Dec} (J2000) & \cz~$\left[ \ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi \right]$ \ML Spring & 7h30m--16h30m & $4\ifmmode ^{\circ}\else $^{\circ}$\fi$--$16\ifmmode ^{\circ}\else $^{\circ}$\fi$ & 2000--7000 \NN Autumn & 22h00m--03h00m & $24\ifmmode ^{\circ}\else $^{\circ}$\fi$--$28\ifmmode ^{\circ}\else $^{\circ}$\fi$ & 2000--7000 \LL } \end{ctable} In addition to ALFALFA, the NASA/IPAC Extragalactic Database\footnote{http://ned.ipac.caltech.edu/} (NED) was also used as a source for coordinates and redshifts in and around the search regions. \markChange{The NED dataset we used includes data downloaded from NED on November 13, 2012 for object types: galaxies, galaxy clusters, galaxy pairs, galaxy triples, galaxy groups, and QSOs. The completeness functions derived in section 3.2 of SB16 indicate that the sensitivity limit of the NED dataset in terms of $\SDSSg$ magnitude is $\sim$18.5 for the Spring sky region and $\sim$17 for the Autumn sky region. For the redshift range of the EIG sample this translates to an absolute $\SDSSg$ magnitude sensitivity limit of {$\sim$-13.8} (for $\cz = 2000\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi$) to {$\sim$-16.5} (for $\cz = 7000\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi$) for the Spring sky region, and {$\sim$-15.3} to {$\sim$-18.0} for the Autumn sky region. A rough conversion to stellar mass, {\Mstar}, assuming a $\SDSSg$ luminosity-to-mass ratio equal to that of the Sun, gives a $\ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right)$ sensitivity limit of {$\sim$7.6} (for $\cz = 2000\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi$) to {$\sim$8.6} (for $\cz = 7000\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi$) for the Spring sky region, and {$\sim$8.2} to {$\sim$9.2} for the Autumn sky region. } The EIGs studied here were divided into three subsamples: \begin{enumerate} \item[1.] Galaxies that passed the criterion using both NED and ALFALFA data \markChange{with a neighbour distance limit of {300\,\ifmmode {\rm km\,s}^{-1}\else {km\,s$^{-1}$}\fi}. This translates to not having any known neighbour within a distance of $3\,\Mpch \cong 4.26\,\Mpc$.} \item[2.] Galaxies that passed the criterion using NED data \markChange{with a neighbour distance limit of {3\,\Mpch}}, but did not pass using ALFALFA data (had neighbours closer than {3\,\Mpch} in the ALFALFA database). \item[3.] Galaxies for which the distance to the closest neighbour in NED's data is {2 -- 3\,\Mpch} (regardless of the distance to the closest neighbour in ALFALFA's data). \item[] \end{enumerate} \markChange{Subsamples 1 and 2 contain all catalogued galaxies that passed their criteria in the studied sky regions. Subsample 3 contains only those galaxies that seemed to be isolated in the various searches performed over the years, but were later found to have neighbours in the range {2 -- 3\,\Mpch} (with the 2012 NED dataset described above)}. It also contains a galaxy, EIG 3s-06, which was found by searching the ALFALFA data alone, but had neighbours in the range {2 -- 3\,\Mpch} in the NED dataset.
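The translation of the ALFALFA flux limit into the HI mass limits quoted above can be reproduced with a short calculation. The sketch below assumes the standard single-dish HI mass relation, $\ifmmode \Mass_{HI}\else $\Mass_{HI}$\fi / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi = 2.356 \times 10^{5} \, D^{2} \, S_{\rm int}$ (with $D$ in Mpc and $S_{\rm int}$ in Jy\,km\,s$^{-1}$), and a pure Hubble-flow distance with $\h = 0.704$; both are assumptions on our part, so small differences from the quoted limits are expected.
\begin{verbatim}
# Minimal sketch: translate the ALFALFA integrated-flux limit (~0.6 Jy km/s
# for W50 ~ 100 km/s) into an HI mass limit at the EIG redshift bounds.
# Assumes the standard relation M_HI/Msun = 2.356e5 * D_Mpc^2 * S_int and a
# pure Hubble-flow distance D = cz / H0 with h = 0.704 (the paper's distances
# use a local velocity-field model, so small differences are expected).
import math

H0 = 70.4                     # km/s/Mpc
S_LIMIT = 0.6                 # Jy km/s, approximate limit for W50 = 100 km/s

def log_mhi_limit(cz_kms, s_int=S_LIMIT):
    d_mpc = cz_kms / H0                        # Hubble-flow distance
    return math.log10(2.356e5 * d_mpc**2 * s_int)

for cz in (2000.0, 7000.0):
    print(f"cz = {cz:.0f} km/s -> log(M_HI/Msun) limit ~ {log_mhi_limit(cz):.2f}")
\end{verbatim}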
The galaxies were named according to their subsample and sky region, using the following format: \begin{equation*} \mbox{EIG BR-XX} \end{equation*} where: \begin{description} \item[B] is the galaxy's subsample (1, 2 or 3, as described above); \item[R] is the sky region: `s' - Spring, `a' - Autumn; \item[XX] is the serial number of the galaxy in the subsample. \item[] \end{description} So, for example, object EIG 3s-06 is the sixth galaxy in subsample 3 of the spring sky region. The galaxies of the different subsamples are listed in Tables 2 through 7 of SB16. Subsample EIG-1 contains 21 galaxies (14 Spring and 7 Autumn galaxies). Subsample EIG-2 contains 11 galaxies (7 Spring and 4 Autumn galaxies). Subsample EIG-3 contains 9 galaxies (7 Spring and 2 Autumn galaxies). \markChange{In total, the sample contains 41 EIGs.} Notes regarding specific EIGs are listed in Appendix \ref{App:EIGdata}. \vspace{12pt} The use of the ALFALFA unbiased HI data significantly improved the quality of the sample. Out of 32 galaxies that passed the \markChange{3\,\Mpch} criterion using NED data alone, 11 galaxies did not pass the criterion when tested with ALFALFA data. Neighbourhood properties of the sample EIGs were analysed using both observational data and cosmological simulations. The analysis based on observational data is described in detail in section 2.4 of SB16. Tables 8 and 9 of SB16 list properties such as the distance to the closest neighbour and neighbour counts for each sample EIG. A comparison to random galaxies shows that on average the neighbourhood density of EIGs is about one order of magnitude lower than that of field galaxies. Observational neighbourhood data further indicates that EIGs tend to reside close to walls and filaments rather than in centres of voids. Using cosmological simulations, we confirmed that the EIG-1 and EIG-2 subsamples are a subset of galaxies significantly more isolated than the general galaxy population. Apart from the low density regions in which they reside, EIGs are characterized by normal mass haloes, which have evolved gradually, with few or no major mergers or major mass-loss events. As a result of their low-density environments, the tidal acceleration exerted on EIGs is typically about one order of magnitude lower than the average tidal acceleration exerted on the general population of galaxies. The level of contamination in the sample, i.e. the fraction of EIGs which are not in extremely isolated environments or which experienced strong interactions in the last {3\,\Gyr}, was found to be {5\%--10\%}. The Spring EIGs seem to be more isolated than the Autumn EIGs. For further details about the analysis using cosmological simulations and its results, see section 3 of SB16. \vspace{12pt} For similar purposes, other samples of isolated galaxies were defined and studied in `the Analysis of the interstellar Medium of Isolated GAlaxies' (AMIGA) international project \citep{2007A&A...472..121V, 2013MNRAS.434..325F}\rem{http://amiga.iaa.es}, in the `Two Micron Isolated Galaxy' catalogue (2MIG; \citealt{2010AstBu..65....1K}), in the `Local Orphan Galaxies' catalogue (LOG; \citealt{2011AstBu..66....1K, 2013AstBu..68..243K}), and in the `Void Galaxy Survey' (VGS; \citealt{2012AJ....144...16K}). In section 2.5 of SB16 these were discussed and compared to the EIG sample studied here.
The comparison showed that the EIG sample galaxies are significantly more isolated than the AMIGA, 2MIG and LOG galaxies (in terms of the distance to the closest neighbour) and that the \mbox{EIG-1} galaxies are more isolated than the VGS galaxies. \markChange{Other notable isolated galaxy samples, not analysed in SB16, are the UNAM-KIAS catalogue of isolated galaxies \citep{2010AJ....139.2525H} and the catalogues of isolated galaxies, isolated pairs, and isolated triplets in the local Universe of \cite{2015A&A...578A.110A}.} \section{Introduction} \label{sec:Introduction} The research described here is part of an extensive study of star formation properties and evolution of galaxies in different environments and of various morphological types, conducted in the past few decades \citep[e.g.,][]{1983PhDT.........1B, 1995PhDT........86A, 1998MNRAS.298..920A, 1998ApJ...504..720B, 2001PhDT..Ana_Heller, 2006MNRAS.368..864B, 2008arXiv0806.2722B, 2008MNRAS.390..408Z}. Specifically, we studied galaxies in the most extremely underdense regions of the local Universe. These galaxies are particularly interesting since they evolved with little or no environmental interference in the last few {\Gyr}, and are therefore useful for validating and calibrating galaxy evolution models. Furthermore, when compared to galaxies in denser regions, they illuminate the overall effects of the environment on the evolution of galaxies. It is well-known that extremely dense environments can greatly influence the star formation (SF) in galaxies. Tidal interactions and mergers of galaxies can trigger extreme starbursts with SFR up to $10^{3}\,\MsunPerYr$, while isolated galaxies hardly ever exhibit {SFR $>$20\,\MsunPerYr} \citep{1998ARA&A..36..189K}. Although the effect on SFR may be extreme during mergers in clusters as well as in pairs and loose groups, the effect on SFR averaged over the whole history of a galaxy may be small \citep{2003A&A...405...31B, 2009ApJ...692..556R}. In cluster environments, apart from the higher rate of interactions, ram pressure by the intracluster medium strips the galaxies of their gas and, therefore, reduces SF. It has also been suggested that in some cases the ram pressure might increase SF \citep{1985ApJ...294L..89G}. Galaxies in isolated environments are generally considered to be gas-rich, fainter, bluer, of later type, and exhibit higher specific star formation rates (SSFRs; SFRs per unit stellar mass) than galaxies in average density environments \citep{1980ApJ...236..351D, 1999AJ....118.2561G, 2002A&A...389..405P, 2004ApJ...617...50R, 2005ApJ...624..571R, 2012ApJ...753..166D, 2012AJ....144...16K, 2014MNRAS.438..548M, 2016arXiv160104092M}. Some claim that this is not just an effect of the higher abundance of late-type galaxies, and that the late-type galaxies themselves are fainter in under-dense regions than in average density regions \citep{2004A&A...420..873V, 2005MNRAS.356.1155C, IAU:949432}. \markChange{Numerous other studies also indicate that the properties of galaxies are influenced by their neighbourhood.} \cite{1982ApJ...253..526B} found that the inner regions of isolated galaxies are bluer, compared to `field' galaxies. This was later suggested to be a consequence of intensive formation of massive stars in the nuclei \citep{1982A&A...113..231B}. \cite{2004A&A...420..873V} found that bars are less frequent in isolated galaxies than in perturbed galaxies. 
\cite{2014ApJ...788L..39F} found that bluer pseudo-bulges tend to reside in neighbourhoods with a higher probability of tidal perturbation. They suggest that the environment could be playing a role in rejuvenating pseudo-bulges. \cite{2012MNRAS.424.2574W} found that satellite galaxies around isolated bright primary galaxies are systematically redder than field galaxies of the same stellar mass, except around primaries with $\ifmmode \mbox{log}\else log\fi \left( \Mstar / \ifmmode \Mass_\sun\else $\Mass_\sun$\fi \right) < 10.8$, where the satellites' colours were similar or even bluer. \vspace{12pt} This work attempts, among other things, to help resolve the question of `Nature vs. Nurture': does the evolution of galaxies depend only on their content, or do their large-scale environments have a significant evolutionary influence? Some argue that galaxy formation is driven predominantly by the mass of the host dark matter (DM) halo, and is nearly independent of the larger-scale halo environment (e.g., \citealt{2008MNRAS.386.2285C, 2009ApJ...691..633T}). This is supported by their simulation models that produce void galaxies conforming to some observed statistical properties (e.g., colour distribution, luminosity function and nearest neighbour statistics). However, since there are many galaxy properties that most simulations cannot predict (e.g., HI content), and since the halo mass of galaxies cannot be directly measured, this hypothesis is hard to prove or disprove. \vspace{12pt} We have chosen a sample of extremely isolated galaxies (EIGs) from the local Universe based on a simple isolation criterion. The neighbourhood properties of this sample were analysed using both observational data and cosmological simulations. The cosmological simulations were further used to estimate the properties and histories of the DM haloes in which the sample EIGs reside. The sample and its analysis are described in detail in the first paper of this series, \cite{2016MNRAS.456..885S} (SB16), and are summarized here in Section \ref{sec:Sample}. Extensive optical observations of the sample EIGs in broad-band and rest-frame {\Halpha} were performed using the one meter telescope of the Florence and George Wise Observatory\footnote{IAU code 097 - http://wise-obs.tau.ac.il/} (WO). Section \ref{ch:ObsNProcess} describes these observations and their processing. The results of these observations, along with public observational data, were used to measure the current SFRs and to estimate star formation histories (SFHs). These observational results are described in section \ref{ch:Results}. Analysis of these results is presented in section \ref{ch:Analysis}, and the findings are discussed in section \ref{s:DisConc}. \vspace{12pt} Throughout this work, unless indicated otherwise, a $\Lambda$ cold dark matter ($\Lambda$CDM) cosmology with the seven-year Wilkinson Microwave Anisotropy Probe (WMAP7, \citealt{2011ApJS..192...17B}) parameters is used, including the dimensionless Hubble parameter $\h = 0.704$. We adopt here the solar {\SDSSg}-band absolute magnitude of $\AbsMgSun = +5.12$ (according to the Sloan Digital Sky Survey, SDSS, DR7 web site\footnote{www.sdss.org/dr7/algorithms/sdssUBVRITransform.html\\\#vega\_sun\_colors}). \section{Observations and Data Processing} \label{ch:ObsNProcess} \subsection{Instrumentation} Optical observations were performed using the 1\,meter (40\,inch) telescope of the WO.
The telescope was equipped with a $1300\times1340$ back-illuminated Princeton Instruments CCD with a scale of $0.57 \pm 0.01$\,arcsec\,pixel$^{-1}$ and an overall field of view of $\sim12.5\,\ifmmode ^\prime\else $^\prime$\fi$. EIGs were imaged using wide-band Bessell U, B, V, R and I filters and a set of narrow-band rest-frame {\Halpha} filters for various redshifts. A thorough description of the {\Halpha} filter set is provided in appendix A of \cite{2015PhDT}. \subsection{Observations} \markChange{We observed 34 of the EIGs} in the {\R} and in one or two appropriate {\Halpha} narrow bands. \markChange{Of these EIGs}, those not covered by SDSS (and a few that are) were also imaged in the U, B, V and I bands. At least six dithered exposures were obtained in each filter. Exposures were 20 minutes long for the {\Halpha} and U bands, and 10 minutes long for the B, V, and I bands. Exposures in the R band were 5 minutes long for EIGs that were observed only in R and {\Halpha}, and 10 minutes long for EIGs observed in all bands. Whenever possible, the exposures of the R and {\Halpha} bands were taken in time proximity so that their atmospheric conditions and air-masses would remain similar. This is important for the accurate measurement of the {\Halpha} equivalent width (EW), as described in \cite{2012MNRAS.419.2156S} (S12). Photometric calibrations of the wide bands were performed for EIGs not covered by SDSS, and for a few that are, using \cite{1992AJ....104..340L}\rem{Landolt} standards. Spectrophotometric calibrations of the {\Halpha} band were performed using \cite{1990AJ.....99.1621O} standard stars that have well-known spectra, are stable, and have as few features around {\Halpha} as possible. Images were processed using the Image Reduction and Analysis Facility ({\small IRAF}) software\footnote{http://iraf.noao.edu/}. The reduction pipeline included standard bias subtraction, flat-fielding and image alignment. For images taken in the I band, a fringe removal step was added. \subsection{{Net-\Halpha} images} \label{s:ObsNPrc_NetHa} {Net-\Halpha} ({\nHa}) data were derived from the measurements using the recipes described in S12. EW values were derived using eqs.~12 and 16 of S12. The {\nHa} fluxes were derived using eqs.~7 and 12 of S12 after applying the photometric calibrations described in section 3 of S12. Eqs.
12, 16 and 7 of S12 are shown here for reference: \begin{equation} \begin{IEEEeqnarraybox*}{lCl} \mbox{cps}_{N,line} & \cong & \left(\mbox{cps}_{N} - \frac { \mbox{cps}_{W} }{ \mbox{WNCR} } \right) \\ && \times \left[1 - \frac{1}{\mbox{WNCR}} \cdot \frac{ \mbox{T}_{atm,W}(\lambda_{line}) \: \mbox{T}_{W}(\lambda_{line}) } { \mbox{T}_{atm,N}(\lambda_{line}) \: \mbox{T}_{N}(\lambda_{line}) } \right]^{-1} \end{IEEEeqnarraybox*} \label{e:cps_line_N_solved} \end{equation} \begin{equation} \mbox{EW} \cong \frac { \mbox{cps}_{N,line} } {\mbox{cps}_{N} - \mbox{cps}_{N,line}} \cdot \frac {\int_0^\infty \mbox{T}_{N}(\lambda) \: d\lambda} { \mbox{T}_{N}(\lambda_{line}) } \label{e:EW_solved} \end{equation} \begin{equation} \mbox{F}_{line} \cong \frac { \mbox{cps}_{N,line} } { \mbox{T}_{atm,N}(\lambda_{line}) \: \mbox{T}_{N}(\lambda_{line}) \: \mbox{R}_{\lambda}(\lambda_{line}) } \label{e:F_line} \end{equation} where: \begin{description} \item[$\mbox{cps}_{N}$, $\mbox{cps}_{W}$] are the measured count rates of the narrow-band (N) and wide-band (W) filters (respectively) in instrumental units (typically analogue to digital units per second, ADU s$^{-1}$); \item[$\mbox{cps}_{N,line}$] is the line contribution to $\mbox{cps}_{N}$ (see also eq. 3 of S12); \item[WNCR] (wide to narrow continuum ratio) is the ratio between the count rate contributed by the continuum in the W band and the count rate contributed by the continuum in the N band (see also eq. 10 of S12); \item[$\mbox{T}_{N}(\lambda)$, $\mbox{T}_{W}(\lambda)$] are the transmittance functions of the N and W bands, respectively; \item[$\mbox{T}_{atm,N}(\lambda)$, $\mbox{T}_{atm,W}(\lambda)$] are the atmospheric transmittance as a function of wavelength, including effects of weather, elevation and airmass of observation, when the N and W bands (respectively) were imaged; \item[$\mbox{R}_{\lambda}$] is the responsivity as a function of wavelength of the rest of the electro-optical system (i.e. the telescope and sensors, excluding the transmittance effect of the filters), typically in ADU~erg$^{-1}$~cm$^{2}$; \item[$\lambda_{line}$] is the central wavelength of the emission line; \item[$\mbox{F}_{line}$] is the emission-line's flux. \item[] \end{description} The WNCR required for \eqref{e:cps_line_N_solved} was estimated using the WNCR-to-colour linear fit method suggested in section 6 of S12 (sixth paragraph). The process included selecting a reference wavelength band for each EIG, the first band with a good-quality image from the following list: {\V}, {\I}, {\B}, SDSS {\SDSSg} and SDSS {\SDSSi}. In the combined images of these {\R} and reference bands, foreground stars were identified (using their intensity profiles), and their instrumental colours (reference minus {\R}) were measured along with that of the EIG. All {\nHa} measurements were performed on the individual {\Halpha} images, each paired with an {\R} image taken at the closest time and \markChange{airmass} available. For each such pair, the foreground stars were measured in both the {\R} and {\Halpha} images, and their WNCR values were calculated (the {\R} to {\Halpha} cps ratio). A linear relation between WNCR and the uncalibrated colour was fitted to the results of these foreground stars. The WNCR of the pair of {\Halpha} and {\R} images was then calculated using the fit and the EIG's measured uncalibrated colour. Next, an {\nHa} image was created for each {\Halpha} and {\R} image pair. The sky level, measured around the EIG, was first subtracted from the {\Halpha} and {\R} images.
Then, the images were scaled by their exposure time. Finally, the pixel values of the {\nHa} images were calculated using \eqref{e:cps_line_N_solved}. \subsection{Photometry} \label{sec:Wise_Photometry} Apertures for the photometric measurements of the EIGs were defined as polygons or as elliptical isophotes fitted to the combined R-band images. The polygonal apertures approximately trace the $\R = 26\,\magAsecSq$ isophote of the EIGs, but exclude foreground Galactic stars and galaxies projected close to the EIG. This resulted in some reduction in the measured flux from the EIGs, which was significant only for EIG 2s-06, which has a foreground star of magnitude $\SDSSr = 15.6$ projected close to its centre. Polygonal apertures were also defined for some resolved HII regions and other regions of interest within the EIGs (see figures \ref{f:RHacolorImgEIG-1}, \ref{f:RHacolorImgEIG-2} and \ref{f:RHacolorImgEIG-3} below). Wherever possible, photometric measurements were made on SDSS calibrated images, using the same apertures as defined for the WO images. The SDSS calibrations were tested by comparing the results for seven EIGs that had photometric calibrations performed at the WO as well as SDSS data. The {\magu \magb \magv \magr \magi} magnitudes were converted to {\SDSSu \SDSSg \SDSSr \SDSSi \SDSSz} SDSS magnitudes using the transformation recommended in Table 1 of \cite{2005AJ....130..873J} for all stars with $\R-\I < 1.15$. This is the transformation recommended by SDSS for galaxies.\footnote{http://www.sdss3.org/dr9/algorithms/sdssUBVRITransform.php} On average, the results were similar to the SDSS calibrated magnitudes. The standard deviation of the difference between the WO calibration (converted to {\SDSSu \SDSSg \SDSSr \SDSSi \SDSSz}) and the SDSS calibration was {0.05\,\magnitude} for {\SDSSr} and {\SDSSi}, {0.07\,\magnitude} for {\SDSSg}, and {0.11\,\magnitude} for {\SDSSu} and {\SDSSz}. Where available, SDSS measurements were used to calibrate the {\nHa} flux using the method described in section 3 of S12, in which {\SDSSg}, {\SDSSr} and {\SDSSi} are used to estimate the continuum flux at the rest-frame {\Halpha} wavelength. This continuum flux estimate is then multiplied by the equivalent width to obtain the {\nHa} flux. Thirteen EIGs had both spectrophotometric and SDSS {\nHa} calibrations. We found random deviations between the results of the two calibrations, which the original uncertainty propagation estimates did not predict. These may be attributed to the inaccuracy introduced by estimating the continuum at {\Halpha} using an interpolation of two or three SDSS magnitude measurements (see section 3 of S12). This was compensated for by adding a 0.2 relative uncertainty to the SDSS calibrations. \subsection{Absolute magnitudes and luminosities} \label{s:ObsNPrc_AbsMagLum} To calculate the absolute magnitudes and luminosities, the calibrated apparent magnitudes and fluxes were first corrected for foreground Galactic extinction. The Wise Observatory (\U\B\V\R\I) and SDSS magnitudes were corrected using NED Galactic extinction data based on \cite{2011ApJ...737..103S}. The Galactic extinctions of the {\Halpha} fluxes, $A_{\lambda_\Halpha}$, were estimated using an interpolation between the {\SDSSr} and {\SDSSi} extinctions. The interpolation was linear in $\ln \left( A_{\lambda} \right)$ vs.~{$\lambda$}, since this fits the $A_{\lambda}$ values of {\SDSSu\SDSSg\SDSSr\SDSSi\SDSSz} and {\U\B\V\R\I} well.
This work utilizes data from the Galaxy Evolution Explorer mission (GALEX; \citealt{2005ApJ...619L...1M}), the Two Micron All Sky Survey (2MASS; \citealt{2006AJ....131.1163S}) and the Wide-field Infrared Survey Explorer (WISE; \citealt{2010AJ....140.1868W}). The GALEX ({\NUV} and {\FUV}) and 2MASS ({\J}, {\H} and {\Ks}) Galactic extinctions were calculated using the $A_{\mbox{\scriptsize \B}}$ and $A_{\mbox{\scriptsize \V}}$ of \cite{2011ApJ...737..103S} and the second column of Table 2 of \cite{2013MNRAS.430.2188Y}, which gives $R_{band} = A_{band} / \left( A_{\mbox{\scriptsize \B}}-A_{\mbox{\scriptsize \V}} \right)$ for each band. The Galactic extinctions of the WISE {\WThree} and {\WFour} bands were estimated using the calculated $A \left( \Ks \right)$ and the values for $A_{\mbox{\scriptsize 12\,\um}} / A_{\mbox{\scriptsize K}}$ and $A_{\mbox{\scriptsize 22\,\um}} / A_{\mbox{\scriptsize K}}$ quoted in column 2 of Table 2 of \cite{2009ApJ...693L..81M}. \vspace{12pt} Distance estimates, required for calculating absolute magnitudes and luminosities, were based on the local velocity field model of \cite{2000ApJ...529..786M}, which includes terms for the influence of the Virgo Cluster, the Great Attractor, and the Shapley Supercluster. As is customary in this field \citep[e.g., ][]{2012A&A...540A..47F, 2011AJ....142..170H, 2012AJ....144...16K}, uncertainties were not estimated for these distances. At the low redshifts of the EIGs ($\z < 0.024$), K-corrections are not significant compared to the uncertainty that they introduce. Therefore, K-corrections were not applied to the measured magnitudes and fluxes. The apparent magnitudes, {\m}, and fluxes, {\F} (after correcting for Galactic extinction), were converted to absolute magnitudes, {\AbsM}, and luminosities, {\L}, using: $ \AbsM = \m - 5 \cdot \ifmmode \mbox{log}\else log\fi \frac{\left( 1 + \z \right) D_m}{10\,\pc} $ and $ \L = 4 \pi D_m^2 \left( 1 + \z \right)^{2} \cdot \F $, where $D_m$ is the distance estimate (comoving transverse distance). \vspace{24pt} \rem{ To conclude, we observed 36 EIGs, produced deep maps of their {\Halpha} content and measured their total {\Halpha} luminosity and equivalent width. These data were not available in the literature or in public databases. We have also produced {\U\B\V\R\I} images and absolute magnitude measurements of some of these EIGs, deeper than available before. } Further details about the observations and data processing can be found in section 5 of \cite{2015PhDT}.
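A minimal sketch of these conversions (after the Galactic extinction corrections described above) is given below. The example inputs are arbitrary, and the comoving transverse distance is treated as given, since the velocity-field model used for the EIG distances is not reproduced here.
\begin{verbatim}
# Minimal sketch of the conversions described above:
#   M = m - 5 log10[(1 + z) D_m / 10 pc]   and   L = 4 pi D_m^2 (1 + z)^2 F,
# applied after Galactic extinction correction.  Example inputs are
# arbitrary; the comoving transverse distance D_m is taken as given.
import math

PC_CM = 3.0857e18             # parsec in cm
MPC_PC = 1.0e6                # parsecs per Mpc

def absolute_magnitude(m_app, a_gal, z, d_m_mpc):
    """Extinction-corrected absolute magnitude."""
    d_pc = d_m_mpc * MPC_PC
    return (m_app - a_gal) - 5.0 * math.log10((1.0 + z) * d_pc / 10.0)

def luminosity(flux_cgs, a_gal, z, d_m_mpc):
    """Luminosity [erg/s] for a flux [erg/s/cm^2] dimmed by a_gal magnitudes."""
    d_cm = d_m_mpc * MPC_PC * PC_CM
    flux_corrected = flux_cgs * 10.0 ** (0.4 * a_gal)
    return 4.0 * math.pi * d_cm**2 * (1.0 + z)**2 * flux_corrected

# Example (arbitrary): m_R = 15.0 mag, A_R = 0.08 mag,
# F(Halpha) = 2e-14 erg/s/cm^2, z = 0.015, D_m = 60 Mpc.
print(f"M_R  = {absolute_magnitude(15.0, 0.08, 0.015, 60.0):.2f}")
print(f"L_Ha = {luminosity(2.0e-14, 0.08, 0.015, 60.0):.3e} erg/s")
\end{verbatim}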
\section{Introduction} \vspace{-0.10cm} \todo{Need motivating sentence.} Chemical space is huge: it is estimated to contain over $10^{60}$ molecules. Among these, fewer than 100 million compounds can be found in public repositories or databases \cite{Reymond_2012}. This discrepancy between \textit{known} compounds and \textit{possible} compounds indicates the potential for discovering many new compounds with highly desirable functionality (e.g., new energy materials, pharmaceuticals, dyes, etc.). While the vast size of chemical space makes this an enormous opportunity, it also presents a significant difficulty in the identification of new relevant compounds among the many unimportant ones. This challenge is so great that any discovery process relying purely on the combination of scientific intuition with trial-and-error experimentation is slow, tedious and in many cases infeasible. To accelerate the search, high-throughput approaches can be used in a combinatorial exploration of small specific areas of chemical space \cite{Rajan_2008}. These have led to the development of high-throughput virtual screening \cite{Pyzer_Knapp_2015,G_mez_Bombarelli_2016} in which large libraries of molecules are created and then analyzed using theoretical and computational techniques, typically by running a large number of parallel simulations in a computer cluster. The objective is to reduce an initially very large library of molecules to a small set of promising leads for which expensive experimental evaluation is justified. However, even though these techniques only search a tiny drop in the ocean of chemical space, they can result in massive libraries whose magnitude exceeds traditional computational capabilities. As a result, at present, there is an urgent need to accelerate high-throughput screening approaches. Bayesian optimization (BO) \cite{jones1998efficient} can speed up the discovery process by using machine learning to guide the search and make improved decisions about what molecules to analyze next given the data collected so far. \todo{This really needs to connect with the larger literature on Bayesian optimal experimental design.} However, current BO methods cannot scale to the large number of parallel measurements and the massive libraries of candidate molecules currently used in high-throughput screening \cite{Pyzer_Knapp_2015}. \todo{Explain why BO doesn't scale.} While there are BO methods that allow parallel data collection, these methods have typically been limited to tens of data points per batch \cite{snoek2012practical,shahriari2014entropy,GonDaiHenLaw16}. In contrast, high-throughput screening may allow the simultaneous collection of thousands of data points via large-scale parallel computation. This creates a need for new scalable methods for parallel Bayesian optimization. To address the above difficulty, we present here a scalable solution for parallel Bayesian optimization based on a distributed implementation of the Thompson sampling heuristic \cite{Thompson_1933,Chapelle2011}. We show that, for the case of small batch sizes, the proposed parallel and distributed Thompson sampling (PDTS) method performs as well as a parallel implementation of expected improvement (EI) \cite{snoek2012practical,ginsbourger2011dealing}, the most widely used Bayesian optimization heuristic. Parallel EI selects the batch entries sequentially, so its proposals cannot be parallelized, which limits its scalability.
PDTS generates each batch of evaluation locations by selecting the different batch entries independently and in parallel. Consequently, PDTS is highly scalable and applicable to large batch sizes. We also evaluate the performance of PDTS in several real-world high-throughput screening experiments for material and drug discovery, where parallel EI is infeasible. In these problems, PDTS outperforms other scalable baselines such as a greedy search strategy, $\epsilon$-greedy approaches and a random search method. These results indicate that PDTS is a successful solution for large-scale parallel Bayesian optimization. \begin{algorithm} \caption{Sequential Thompson sampling} \label{alg:seq_thompson_sampling} \begin{algorithmic} \STATE {\bfseries Input:} initial data $\mathcal{D}_{\mathcal{I}(1)}=\{ (\mathbf{x}_i, y_i) \}_{i\in \mathcal{I}(1)}$ \FOR{$t=1$ {\bfseries to} $T$} \STATE Compute current posterior $p(\bm \theta|\mathcal{D}_{\mathcal{I}(t)})$ \STATE Sample $\bm \theta$ from $p(\bm \theta|\mathcal{D}_{\mathcal{I}(t)})$ \STATE Select $k\leftarrow \text{argmax}_{j \not \in {\mathcal{I}(t)}} \mathbf{E}[y_j|\mathbf{x}_j,\bm \theta]$ \STATE Collect $y_k$ by evaluating $f$ at $\mathbf{x}_k$ \STATE $\mathcal{D}_{\mathcal{I}(t+1)}\leftarrow \mathcal{D}_{\mathcal{I}(t)}\cup \{(\mathbf{x}_k,y_k)\}$ \ENDFOR \end{algorithmic} \end{algorithm} \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{small_figure.pdf} \caption{Illustration of Thompson sampling and PDTS.} \vspace{-0.5cm} \label{fig:illustration_thompson_sampling} \end{figure} \vspace{-0.2cm} \section{BO and Thompson Sampling} \vspace{-0.1cm} Let us assume we have a large library of candidate molecules ${\mathcal{M}=\{ m_1,\ldots,m_{|\mathcal{M}|}\}}$. Our goal is to identify a small subset of elements ${\{m_i\} \subset \mathcal{M}}$ for which the $f(m_i)$ are as high as possible, with $f$ being an expensive-to-evaluate objective function. The objective $f$ could be, for example, an estimate of the power-conversion efficiency of organic photovoltaics, as given by expensive quantum mechanical simulations \cite{ADMA200501717}, and we may want to identify the top 1\% of elements in $\mathcal{M}$ according to this score. Bayesian optimization methods can be used to identify the inputs that maximize an expensive objective function $f$ by performing only a reduced number of function evaluations. For this, BO uses a model to make predictions for the value of $f$ at new inputs given data from previous evaluations. The next point to evaluate is then chosen by maximizing an acquisition function that quantifies the benefit of evaluating the objective at a particular location. Let $\mathbf{x}_1,\ldots,\mathbf{x}_{|\mathcal{M}|}$ be $D$-dimensional feature vectors for the molecules in~$\mathcal{M}$ and let~${\mathcal{D}_\mathcal{I}=\{ (\mathbf{x}_i, y_i): i\in I \}}$ be a dataset with information about past evaluations, where~$I$ is a set with the indices of the molecules already evaluated,~$\mathbf{x}_i$ is the feature vector for the $i$-th molecule in~$\mathcal{M}$ and~${y_i = f(m_i)}$ is the result of evaluating the objective function $f$ on that molecule. We assume that the evaluations of $f$ are noise-free; however, the methods described here can be applied to the case in which the objective evaluations are corrupted with additive Gaussian noise.
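To make Algorithm \ref{alg:seq_thompson_sampling} concrete in this discrete-library setting, the following Python sketch runs sequential Thompson sampling over a finite set of candidates. A conjugate Bayesian linear model stands in for $p(y\,|\,\mathbf{x},\bm\theta)$, and the synthetic objective, library and hyper-parameters are illustrative assumptions rather than the models and data actually used in our experiments.
\begin{verbatim}
# Minimal sketch of Algorithm 1 (sequential Thompson sampling) over a finite
# candidate library.  A conjugate Bayesian linear model stands in for
# p(y | x, theta); the synthetic objective, library and hyper-parameters are
# illustrative assumptions, not the models used in the experiments.
import numpy as np

rng = np.random.default_rng(0)
N, D = 500, 8                                   # library size, feature dimension
X = rng.normal(size=(N, D))                     # feature vectors of the candidates
f = X @ rng.normal(size=D)                      # hidden objective values
NOISE_VAR, PRIOR_VAR = 1e-2, 1.0                # small noise keeps the posterior proper

def posterior(X_obs, y_obs):
    """Gaussian posterior over theta for the conjugate Bayesian linear model."""
    precision = X_obs.T @ X_obs / NOISE_VAR + np.eye(D) / PRIOR_VAR
    cov = np.linalg.inv(precision)
    mean = cov @ X_obs.T @ y_obs / NOISE_VAR
    return mean, cov

evaluated = list(rng.choice(N, size=5, replace=False))    # initial data D_I(1)
for t in range(50):
    mean, cov = posterior(X[evaluated], f[evaluated])
    theta = rng.multivariate_normal(mean, cov)  # one sample from p(theta | D)
    scores = X @ theta                          # E[y | x, theta] for every candidate
    scores[evaluated] = -np.inf                 # only consider unevaluated molecules
    k = int(np.argmax(scores))
    evaluated.append(k)                         # "collect" y_k by evaluating f at x_k

print("best value found:", f[evaluated].max(), " true maximum:", f.max())
\end{verbatim}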
BO typically uses a probabilistic model to describe how the~$y_i$ in~$\mathcal{D}_\mathcal{I}$ are generated as a function of the corresponding features~$\mathbf{x}_i$ and some model parameters~$\bm \theta$, that is, the model specifies~$p(y_i|\mathbf{x}_i,\bm\theta)$. Given the data~$\mathcal{D}_\mathcal{I}$ and a prior distribution~$p(\bm \theta)$, the model also specifies a posterior distribution~${p(\bm \theta |\mathcal{D}_\mathcal{I})\propto p(\bm\theta)\prod_{i \in I} p(y_i|\mathbf{x}_i,\bm\theta)}$. The predictive distribution for any~${m_j\in\mathcal{M}\setminus \{m_i : i \in I\}}$ is then given by~${p(y_j|\mathbf{x}_j,\mathcal{D}_\mathcal{I})=\int p(y_j|\mathbf{x}_j,\bm\theta)p(\bm \theta |\mathcal{D}_\mathcal{I})\,d\bm\theta}$. BO methods use this predictive distribution to compute an acquisition function (AF) given by \vspace{-0.4cm} {\small \begin{equation} \alpha(\mathbf{x}_j|\mathcal{D}_\mathcal{I}) = \mathbf{E}_{p(y_j|\mathbf{x}_j,\mathcal{D}_\mathcal{I})} \left[ U(y_j|\mathbf{x}_j,\mathcal{D}_\mathcal{I}) \right]\,,\label{eq:AF} \end{equation}}where $U(y_j|\mathbf{x}_j,\mathcal{D}_\mathcal{I})$ is the utility of obtaining value~$y_j$ when evaluating~$f$ at~$m_j$. Eq.~(\ref{eq:AF}) is then maximized with respect to~${j\not \in I}$ to select the next molecule~$m_j$ on which to evaluate~$f$. The most common choice for the utility is the improvement:~${U(y_j|\mathbf{x}_j,\mathcal{D}_\mathcal{I})= \max(0, y_j - y_\star)}$, where~$y_\star$ is equal to the best~$y_i$ in~$\mathcal{D}_\mathcal{I}$. In this case, Eq.~(\ref{eq:AF}) is called the expected improvement (EI) \cite{jones1998efficient}. Ideally, the AF should encourage both exploration and exploitation. For this, the expected utility should increase when $y_j$ takes high values on average (to exploit), but also when there is high uncertainty about~$y_j$ (to explore). The EI utility function satisfies these two requirements. \todo[inline]{This description of Thompson sampling is confusing and doesn't seem correct. It is needless formalization that does not provide insight. Thompson sampling is not trying to form a Monte Carlo estimate of the distribution over $y_i$. It is sampling from the distribution over maxima implied by the posterior. The exploration in TS does not arise from Monte Carlo variance, but from the true uncertainty associated with the distribution over maxima. I don't see how this maps into the utility framework.} Thompson sampling (TS) \cite{Thompson_1933} can be understood as a version of the previous framework in which the utility function is defined as~${U(y_j|\mathbf{x}_j,\mathcal{D}_\mathcal{I}) = y_j}$ and the expectation in (\ref{eq:AF}) is taken with respect to $p(y_j|\mathbf{x}_j,\bm\theta)$ instead of $p(y_j|\mathbf{x}_j,\mathcal{D}_\mathcal{I})$, with $\bm \theta$ being a sample from the posterior $p(\bm \theta|\mathcal{D}_\mathcal{I})$. That is, when computing the AF, TS approximates the integral in $p(y_j|\mathbf{x}_j,\mathcal{D}_\mathcal{I})=\int p(y_j|\mathbf{x}_j,\bm\theta)p(\bm \theta |\mathcal{D}_\mathcal{I})\,d\bm\theta$ by Monte Carlo, using a single sample from $p(\bm \theta|\mathcal{D}_\mathcal{I})$ in the approximation. The TS utility function enforces only exploitation because the expected utility is insensitive to any variance in $y_j$. Despite this, TS still enforces exploration because of the variance produced by the Monte Carlo approximation to $p(y_j|\mathbf{x}_j,\mathcal{D}_\mathcal{I})$. 
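To make the sequential loop of Algorithm \ref{alg:seq_thompson_sampling} concrete, the following minimal sketch runs TS over a discrete candidate set with Bayesian linear regression standing in for the probabilistic model, since its posterior over $\bm\theta$ is available in closed form; the features, the objective and all constants are placeholders rather than part of the method described above.
\begin{verbatim}
# Illustrative sketch only: sequential Thompson sampling over a discrete
# candidate set, with Bayesian linear regression standing in for the model.
import numpy as np

def posterior(X_obs, y_obs, alpha=1.0, noise=0.1):
    # Closed-form Gaussian posterior over linear weights theta.
    d = X_obs.shape[1]
    cov = np.linalg.inv(alpha * np.eye(d) + X_obs.T @ X_obs / noise)
    mean = cov @ X_obs.T @ y_obs / noise
    return mean, cov

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))            # placeholder candidate features
w_true = rng.normal(size=16)
f = lambda j: float(X[j] @ w_true)         # placeholder objective

evaluated = set(range(5))                  # initial data D_I(1)
y = {j: f(j) for j in evaluated}
for t in range(20):
    idx = sorted(evaluated)
    mean, cov = posterior(X[idx], np.array([y[j] for j in idx]))
    theta = rng.multivariate_normal(mean, cov)   # one sample from p(theta | D)
    scores = X @ theta                           # E[y_j | x_j, theta]
    scores[idx] = -np.inf                        # restrict to j not in I
    k = int(np.argmax(scores))                   # next candidate to evaluate
    y[k] = f(k)
    evaluated.add(k)
\end{verbatim}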
Under TS, the probability of evaluating the objective at a particular location matches the probability of that location being the maximizer of the objective, given the model assumptions and the data from past evaluations. Algorithm \ref{alg:seq_thompson_sampling} contains the pseudocode for TS. The plots in the top of Figure \ref{fig:illustration_thompson_sampling} illustrate how TS works. The top-left plot shows several samples from a posterior distribution on $f$ induced by $p(\bm\theta|\mathcal{D}_\mathcal{I})$ since each value of the parameters $\bm \theta$ corresponds to an associated value of $f$. Sampling from $p(\bm\theta|\mathcal{D}_\mathcal{I})$ is then equivalent to selecting one of these samples for $f$. The selected sample represents the current AF, which is optimized in the top-right plot in Figure \ref{fig:illustration_thompson_sampling} to select the next evaluation. \vspace{-0.1cm} \subsection{Parallel BO}\label{sec:parallelBO} \vspace{-0.10cm} So far we have considered the sequential evaluation setting, where BO methods collect just a single data point in each iteration. However, BO can also be applied in the parallel setting, which involves choosing a batch of multiple points to evaluate next in each iteration. For example, when we run $S$ parallel simulations in a computer cluster and each simulation performs one evaluation of $f$. \todo[inline]{One thing that is missing here is clarity on what the nature of the parallelism is. There are certainly situations where you have a big batch of experiments that are all running in lockstep, in particular in biology with microarrays. However, a much more common situation is that you're being asked to choose a new place for evaluation, when many other things are currently pending. Computational chemistry is certainly in this category, since the run times are different for different molecules. It needs to be clear what kind of parallelism is being discussed.} \citet{snoek2012practical} describe how to extend sequential BO methods to the parallel setting. The idea is to select the first evaluation location in the batch in the same way as in the sequential setting. However, the next evaluation location is then selected while the previous one is still pending. In particular, given a set $K$ with indexes of pending evaluation locations, we choose a new location in the batch based on the expectation of the AF under all possible outcomes of the pending evaluations according to the predictions of the model. Therefore, at any point, the next evaluation location is obtained by optimizing the AF \vspace{-0.5cm} { \small \begin{align} & \alpha_\text{parallel} (\mathbf{x}_j|\mathcal{D}_\mathcal{I},\mathcal{K}) = \nonumber\\ &\hspace{0.9cm} \mathbf{E}_{p(\{y_k\}_{k \in \mathcal{K}}|\{ \mathbf{x}_k\}_{k \in \mathcal{K}},\mathcal{D}_\mathcal{I})}\left[ \alpha(\mathbf{x}_j|\mathcal{D}_\mathcal{I} \cup \mathcal{D}_\mathcal{K}) \right] \,,\label{eq:integratedAF} \end{align}}where $\mathcal{D}_\mathcal{K}=\{(y_k,\mathbf{x}_k)\}_{k\in \mathcal{K}}$ and $\alpha(\mathbf{x}_j|\mathcal{D}_\mathcal{I} \cup \mathcal{D}_\mathcal{K})$ is given by (\ref{eq:AF}). Computing this expression exactly is infeasible in most cases. \citet{snoek2012practical} propose a Monte Carlo approximation in which the expectation in the second line is approximated by averaging across a few samples from the predictive distribution at the pending evaluations, that is, $p(\{y_k\}_{k \in \mathcal{K}}|\{ \mathbf{x}_k\}_{k \in \mathcal{K}},\mathcal{D}_\mathcal{I})$. 
These samples are referred to as \emph{fantasized} data. \newcommand*\circled[1]{\tikz[baseline=(char.base)]{\node[shape=circle,draw,inner sep=0.4pt] (char) {#1};}} This approach for parallel BO has been successfully used to collect small batches of data (about 10 elements in size), with EI as utility function and with a Gaussian process as the model for the data \cite{snoek2012practical}. However, it lacks scalability to large batch sizes, failing when we need to collect thousands of simultaneous measurements. The reason for this is the high computational cost of adding a new evaluation to the current batch. The corresponding cost includes: \circled{1} sampling the fantasized data, \circled{2} updating the posterior predictive distribution to $p(y_j|\mathbf{x}_j,\mathcal{D}_\mathcal{I}\cup \mathcal{D}_\mathcal{K})$, which is required for evaluating $\alpha(\mathbf{x}_j|\mathcal{D}_\mathcal{I} \cup \mathcal{D}_\mathcal{K})$, and \circled{3} optimizing the Monte Carlo approximation to (\ref{eq:integratedAF}). Step \circled{2} can be very expensive when the number of training points in $\mathcal{D}_\mathcal{I}$ is very large. This step is also considerably challenging when the model does not allow for exact inference, as it is often the case with Bayesian neural networks. Step \circled{3} can also take a very long time when the library of candidate molecules $\mathcal{M}$ is very large (e.g., when it contains millions of elements) and among all the remaining molecules we have to find one that maximizes the AF. Despite these difficulties, the biggest disadvantage in this approach for parallel BO is that it cannot be parallelized since it is a sequential process in which (\ref{eq:integratedAF}) needs to be iteratively optimized, with each optimization step having a direct effect on the next one. This prevents this method from fully exploiting the acceleration provided by multiple processors in a computer cluster. \todo{Let's be honest here: this only matters if the ``expensive'' computation is not that expensive relative to the parallelism. That is, if it takes 1 minute to make a sequential prediction and we have $N$ machines, then once the optimization is going, the bottleneck only arises if the expensive function takes less than $N$ minutes. In other words, this argument somewhat contradicts the philosophy behind BO.} The sequential nature of the algorithm is illustrated by the plot in the left of Figure \ref{fig:parallel_visualization}. In this plot computer node 1 is controlling the BO process and decides the batch evaluation locations. Nodes $2,\ldots,5$ then perform the evaluations in parallel. Note that steps \circled{2} and \circled{3} from the above description have been highlighted in green and magenta colors. In the following section we describe an algorithm \todo[inline]{This algorithm is useful, but no way it's novel. It's the most obvious thing to do with Thompson sampling in the batch setting. We can argue that this is a sensible thing to do for chemistry, but I think basically everyone who has thought about TS realizes you can do it trivially in parallel. } for batch BO which can be implemented in a fully parallel and distributed manner and which, consequently, can take full advantage of multiple processors in a computer cluster. This novel method is based on a parallel implementation of the Thompson sampling heuristic. 
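For concreteness, the sequential structure of parallel EI just described can be sketched as follows. This is only a schematic of the batch-construction loop, not code from the reference implementation; \texttt{fit\_model}, \texttt{sample\_fantasies} and \texttt{expected\_improvement} are hypothetical stand-ins for a concrete model and acquisition.
\begin{verbatim}
# Schematic only: building a batch with parallel EI is an inherently
# sequential loop, since every new entry depends on the pending ones.
def build_batch_parallel_ei(data, candidates, batch_size,
                            fit_model, sample_fantasies,
                            expected_improvement, n_fantasies=10):
    pending = []
    for _ in range(batch_size):
        # Step 1: sample fantasized outcomes for the pending locations.
        fantasies = sample_fantasies(data, pending, n_fantasies)
        scores = {}
        for x in candidates:
            if x in pending:
                continue
            # Step 2: condition the model on real data plus each fantasy
            # and average the acquisition value over the fantasies.
            scores[x] = sum(expected_improvement(fit_model(data + fb), x)
                            for fb in fantasies) / n_fantasies
        # Step 3: maximize the Monte Carlo estimate over the candidates.
        pending.append(max(scores, key=scores.get))
    return pending
\end{verbatim}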
\begin{algorithm} \caption{Parallel and distributed Thompson sampling}\label{alg:thompson_sampling_distributed} \begin{algorithmic} \STATE {\bfseries Input:} initial data $\mathcal{D}_{\mathcal{I}(1)}=\{ \mathbf{x}_i, y_i \}_{i \in \mathcal{I}(1)}$, batch size $S$ \FOR{$t=1$ {\bfseries to} $T$} \STATE \tikzmark{a} Compute current posterior $p(\bm \theta|\mathcal{D}_{\mathcal{I}(t)})$ \FOR{$s=1$ {\bfseries to} $S$} \STATE Sample $\bm \theta$ from $p(\bm \theta|\mathcal{D}_{\mathcal{I}(t)})$ \STATE Select $k(s)\leftarrow \text{argmax}_{j \not \in {\mathcal{I}(t)}} \mathbf{E}[y_j|\mathbf{x}_j,\bm \theta]$ \STATE Collect $y_{k(s)}$ by evaluating $f$ at $\mathbf{x}_{k(s)}$ \ENDFOR \STATE $\mathcal{D}_{\mathcal{I}(t+1)}=\mathcal{D}_{\mathcal{I}(t)}\cup \{\mathbf{x}_{k(s)},y_{k(s)}\}_{s=1}^S$ \ENDFOR \begin{tikzpicture}[remember picture, overlay] \node [draw=blue!80!black, very thick, dotted, rectangle, anchor=north west, minimum width=5.9cm, minimum height=1.3cm] (box) at ($(a) + (0.15, -0.5)$) {}; \node[text=blue!80!black, rotate=90, anchor=south] at ($(a) + (6.6, -1.16)$) {Executed}; \node[text=blue!80!black, rotate=90, anchor=south] at ($(a) + (7.0, -1.16)$) {in parallel}; \node[text=blue!80!black, rotate=90, anchor=south] at ($(a) + (7.3, -1.16)$) {in node $s$}; \end{tikzpicture} \end{algorithmic} \end{algorithm} \begin{figure*} \centering \includegraphics[width=1.00\textwidth]{drawing.pdf} \caption{A visualization of one iteration of BO using parallel EI as implemented in \cite{snoek2012practical} and PDTS. Note that in PDTS the model is updated once and sample points are acquired independently by the nodes. With parallel EI, the the location of the next sample points is dependent on the location of previous sample points in the batch so these are computed sequentially. \todo[inline]{I think this figure could be helpful, but it doesn't really address the situation here in which the experiments take variable amounts of time. Also, these figures make it clear that it only matters if the red lines are short!} } \label{fig:parallel_visualization} \vspace{-3mm} \end{figure*} \vspace{-0.3cm} \section{Parallel and Distributed Thompson Sampling} \vspace{-0.1cm} We present an implementation of the parallel BO method from Section \ref{sec:parallelBO} based on the Thompson sampling (TS) heuristic. In particular, we propose to apply to (\ref{eq:integratedAF}) the same approximation that TS applied to (\ref{eq:AF}). For this, we choose in (\ref{eq:integratedAF}) the same utility function used by TS in the sequential setting, that is, $U(y_j|\mathbf{x}_j,\mathcal{D}_\mathcal{I} \cup \mathcal{D}_\mathcal{K}) = y_j$. Then, we approximate the expectation with respect to $\{y_k\}_{k \in \mathcal{K}}$ in (\ref{eq:integratedAF}) by Monte Carlo, averaging across just one sample of $\{y_k\}_{k \in \mathcal{K}}$ drawn from $p(\{y_k\}_{k \in \mathcal{K}}|\{ \mathbf{x}_k\}_{k \in \mathcal{K}},\mathcal{D}_\mathcal{I})$. After that, $\alpha(\mathbf{x}_j|\mathcal{D}_\mathcal{I} \cup \mathcal{D}_\mathcal{K})$ in (\ref{eq:integratedAF}) is approximated in the same way as in the sequential setting by first sampling $\bm \theta$ from $p(\bm \theta|\mathcal{D}_\mathcal{I}\cup \mathcal{D}_\mathcal{K})$ and then approximating $p(y_j|\mathbf{x}_j,\mathcal{D}_\mathcal{I}\cup \mathcal{D}_\mathcal{K})$ with $p(y_j|\mathbf{x}_j,\bm\theta)$. 
Importantly, in this process, sampling first $\{y_k\}_{k \in \mathcal{K}}$ from $p(\{y_k\}_{k \in \mathcal{K}}|\{ \mathbf{x}_k\}_{k \in \mathcal{K}},\mathcal{D}_\mathcal{I})$ and then $\bm \theta$ from $p(\bm \theta|\mathcal{D}_\mathcal{I}\cup \mathcal{D}_\mathcal{K})$ is equivalent to sampling $\bm \theta$ from just $p(\bm \theta|\mathcal{D}_\mathcal{I})$. The reason is that updating a posterior distribution with synthetic data drawn from the model's own predictive distribution leaves that posterior unchanged in expectation. The result is that parallel TS with batch size $S$ is the same as running sequential TS $S$ times without updating the current posterior $p(\bm \theta|\mathcal{D}_\mathcal{I})$, where each execution of sequential TS produces one of the evaluation locations in the batch. Importantly, these executions can be done in a distributed manner, with each one running in parallel in a different node. The resulting parallel and distributed TS (PDTS) method is highly scalable and can be applied to very large batch sizes by running each execution of sequential TS on the same computer node that will then later evaluate $f$ at the selected evaluation location. Algorithm \ref{alg:thompson_sampling_distributed} contains the pseudocode for PDTS.

The parallel nature of the algorithm is illustrated by the plot on the right of Figure \ref{fig:parallel_visualization}. In this plot, computer node 1 controls the BO process. To collect four new function evaluations in parallel, computer node 1 sends the current posterior $p(\bm \theta|\mathcal{D}_\mathcal{I})$ and $\mathcal{I}$ to nodes $2,\ldots,5$. Each of them then samples a value of $\bm \theta$ from the posterior and optimizes its own AF given by $\mathbf{E}[y_j|\mathbf{x}_j,\bm \theta]$, with $j \not \in \mathcal{I}$. The objective function is evaluated at the selected input and the resulting data is sent back to node 1. Figure \ref{fig:illustration_thompson_sampling} illustrates how PDTS selects two parallel evaluation locations. For this, sequential TS is run twice.

The scalability of PDTS makes it a promising method for parallel BO in high-throughput screening. However, in this type of problem, the optimization of the AF is done over a discrete set of molecules. Therefore, whenever we collect a batch of data in parallel with PDTS, several of the simultaneous executions of sequential TS may choose to evaluate the same molecule. A central computer node (e.g., the node controlling the BO process) maintaining a list of molecules currently selected for evaluation can be used to avoid this problem. In this case, each sequential TS node sends to the central node a ranked list with the top $S$ (the batch size) molecules according to its AF. From this list, the central node then selects the highest-ranked molecule that has not been selected for evaluation before.

\vspace{-0.1cm}
\section{Related Work}
\vspace{-0.1cm}

\citet{ginsbourger2010kriging} proposed the following framework for parallel BO: given a set of current observations $\mathcal{D}_{\mathcal{I}}$ and pending experiments $\{\mathbf{x}_k\}_{k \in \mathcal{K}}$, an additional set of fantasies $\mathcal{D}_\mathcal{K} = \{(\mathbf{x}_k, {y}_k)\}_{k \in \mathcal{K}}$ can be assumed to be the result of those pending experiments.
A step of Bayesian optimization can then be performed using the augmented dataset $\mathcal{D}_\mathcal{I} \cup \mathcal{D}_\mathcal{K}$ and the acquisition function $\alpha(\mathbf{x}| \mathcal{D}_\mathcal{I} \cup {\mathcal{D}_\mathcal{K}})$. Two different values are proposed for the fantasies: the \textit{constant liar}, where ${y}_k = L$ for some constant $L$ and all $k \in \mathcal{K}$, and the \textit{Kriging believer}, where ${y}_k$ is given by the GP predictive mean at $\mathbf{x}_k$. \citet{snoek2012practical} compute a Monte Carlo approximation of the expected acquisition function over potential fantasies sampled from the model's predictive distribution. Recent methods have been proposed to modify the parallel EI procedure to recommend points jointly \cite{chevalier2013fast,marmin2015differentiating,wang2016parallel}.

\citet{azimi2010batch} describe a procedure called \textit{simulated matching} whose goal is to propose a batch ${\mathcal{D}_\mathcal{K}}$ of points which is a good match for the set of samples that a sequential BO policy $\pi$ would recommend. The authors consider a batch ``good'' if it contains a sample that yields, with high probability, an objective value close to that of the best sample produced by a sequential execution of $\pi$.

Several authors have proposed to extend the \textit{upper confidence bound} (UCB) heuristic to the parallel setting. Since the GP predictive variance depends only on the input locations of the observations, \citet{desautels2014parallelizing} propose the GP-BUCB acquisition, which applies UCB with the predictive variance updated by the pending evaluation locations. \citet{contal2013parallel} introduce the Gaussian Process Upper Confidence Bound with Pure Exploration (GP-UCB-PE). Under this procedure, the first point is obtained using the standard UCB acquisition function while the remaining points are sequentially selected to be the ones yielding the highest predictive variance, while still lying in a region that contains the maximizer with high probability. \citet{shah2015parallel} extend the Predictive Entropy Search (PES) heuristic to the parallel setting (PPES). PPES seeks to recommend a collection of samples ${\mathcal{D}_\mathcal{K}}$ that yields the greatest reduction in entropy for the posterior distribution of $\mathbf{x}^\star$, the latent objective maximizer. \citet{wu2016parallel} propose the \textit{Parallel Knowledge Gradient} method, which optimizes an acquisition function called the parallel knowledge gradient (q-KG), a measure of the expected incremental solution quality after $q$ samples.

An advantage of PDTS over parallel EI and other related methods is that the approximate marginalization of potential experimental outcomes adds no extra computational cost to our procedure and so PDTS is highly parallelizable. Finally, unlike other approaches, PDTS can be applied to a wide variety of models, such as GPs and Bayesian neural networks, since it only requires samples from an exact or approximate posterior distribution.

\vspace{-0.2cm}
\section{Bayesian Neural Networks for High-throughput Screening}
\vspace{-0.1cm}

Neural networks are well-suited for implementing BO on molecules. They produce state-of-the-art predictions of chemical properties \cite{Ma_2015,Mayr_2016,ramsundar2015massively} and can be applied to large data sets by using stochastic optimization \cite{bousquet2008tradeoffs}. Typical applications of neural networks focus on the deterministic prediction scenario.
However, in large search spaces with multiple local optima (which is the case when navigating chemical space), it is desirable to use a probabilistic approach that can produce accurate estimates of uncertainty for efficient exploration, and so we use \emph{probabilistic back-propagation} (PBP), a recently developed technique for the scalable training of Bayesian neural networks \cite{hernandez2015probabilistic}. Note that other methods for approximate inference in Bayesian neural networks could have been chosen as well \cite{BlundellCKW15,SnoekRSKSSPPA15,GalG16}. We prefer PBP because it is fast and it does not require the tuning of hyper-parameters such as learning rates or regularization constants \cite{hernandez2015probabilistic}.

Given a dataset $\mathcal{D}_\mathcal{I} = \{ (\mathbf{x}_i, y_i) \}_{i \in\mathcal{I}}$, we assume that ${y_i = f(\mathbf{x}_i;\mathcal{W}) + \epsilon_i}$, where $f(\cdot ;\mathcal{W})$ is the output of a neural network with weights $\mathcal{W}$. The network output is corrupted with additive noise variables ${\epsilon_i \sim \mathcal{N}(0,\gamma^{-1})}$. The network has~$L$ layers, with $V_l$ hidden units in layer $l$, and ${\mathcal{W} = \{ \mathbf{W}_l \}_{l=1}^L}$ is the collection of $V_l \times (V_{l-1}+1)$ synaptic weight matrices. The $+1$ is introduced here to account for the additional per-layer biases. The activation functions for the hidden layers are rectifiers: ${\varphi(x) = \max(x,0)}$. The likelihood for the network weights~$\mathcal{W}$ and the noise precision~$\gamma$ is
\vspace{-0.5cm}
{\small
\begin{align}
p(\{y_i\}_{i\in \mathcal{I}}|\mathcal{W},\{\mathbf{x}_i\}_{i\in \mathcal{I}},\gamma) &= \prod_{i\in \mathcal{I}}\mathcal{N}(y_i | f(\mathbf{x}_i;\mathcal{W}),\gamma^{-1})\,.\nonumber
\end{align}
}We specify a Gaussian prior distribution for each entry in each of the weight matrices in $\mathcal{W}$:
\begin{align}
p(\mathcal{W}|\lambda) &= \prod_{l=1}^L \prod_{k=1}^{V_l} \prod_{j=1}^{V_{l-1}+1} \mathcal{N}(w_{kj,l}|0,\lambda^{-1})\,,\label{eq:prior_weights}
\end{align}
where $w_{kj,l}$ is the entry in the $k$-th row and $j$-th column of $\mathbf{W}_l$ and $\lambda$ is a precision parameter. The hyper-prior for~$\lambda$ is gamma: $p(\lambda) = \text{Gam}(\lambda|\alpha_0^\lambda,\beta_0^\lambda)$ with shape $\alpha^\lambda_0 = 6$ and inverse scale $\beta^\lambda_0 = 6$. These relatively low values for the shape and inverse scale parameters make this prior weakly informative. The prior for the noise precision $\gamma$ is also gamma: $p(\gamma) = \text{Gam}(\gamma|\alpha_0^{\gamma},\beta_0^{\gamma})$. We assume that the $y_i$ have been normalized to have unit variance and, as above, we fix ${\alpha^{\gamma}_0 = 6}$ and~${\beta^{\gamma}_0 = 6}$.

The exact computation of the posterior distribution for the model parameters $p(\mathcal{W},\gamma, \lambda|\mathcal{D}_\mathcal{I})$ is not tractable in most cases. PBP approximates the intractable posterior on $\mathcal{W}$, $\gamma$ and $\lambda$ with the tractable approximation
\vspace{-0.5cm}
{ \small
\begin{align}
q(\mathcal{W},\gamma, \lambda) = & \left[ \prod_{l=1}^L\! \prod_{k=1}^{V_l}\! \prod_{j=1}^{V_{l\!-\!1}\!+\!1} \mathcal{N}(w_{kj,l}| m_{kj,l},v_{kj,l})\right ]\nonumber\\
& \text{Gam}(\gamma \,|\, \alpha^\gamma, \beta^\gamma) \text{Gam}(\lambda \,|\, \alpha^\lambda, \beta^\lambda)\,,\label{eq:posterior_approximation}
\end{align}}whose parameters are tuned by iteratively running an assumed density filtering (ADF) algorithm over the training data \cite{Opper1998}.
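Given the factorized Gaussian form of $q$ in (\ref{eq:posterior_approximation}), the only operations that Thompson sampling requires from the model (as described later in this section) are drawing one set of weights from $q$ and evaluating the resulting deterministic network on the candidate molecules. The following minimal sketch illustrates these two operations; it is not the PBP implementation itself, and the layer sizes, $q$-parameters and candidate fingerprints are placeholders.
\begin{verbatim}
# Illustrative sketch only: draw one weight sample from a factorized
# Gaussian posterior q and score candidates with the sampled network.
import numpy as np

def sample_network(q_means, q_vars, rng):
    # W_l ~ N(m_l, v_l), elementwise, for every layer.
    return [m + np.sqrt(v) * rng.standard_normal(m.shape)
            for m, v in zip(q_means, q_vars)]

def forward(weights, x):
    # Rectifier hidden units; a constant 1 is appended for the bias.
    h = x
    for W in weights[:-1]:
        h = np.maximum(W @ np.append(h, 1.0), 0.0)
    return (weights[-1] @ np.append(h, 1.0)).item()

rng = np.random.default_rng(0)
sizes = [(100, 513), (1, 101)]        # placeholder: 512 inputs, 100 hidden units
q_means = [rng.normal(scale=0.1, size=s) for s in sizes]
q_vars = [np.full(s, 0.01) for s in sizes]
X_cand = rng.integers(0, 2, size=(50, 512)).astype(float)  # placeholder fingerprints

W_sample = sample_network(q_means, q_vars, rng)
scores = [forward(W_sample, x) for x in X_cand]
next_molecule = int(np.argmax(scores))  # highest predicted value under this sample
\end{verbatim}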
The main operation in PBP is the update of the mean and variance parameters of $q$, that is, the $m_{kj,l}$ and $v_{kj,l}$ in (\ref{eq:posterior_approximation}), after processing each data point $\{(\mathbf{x}_i,y_i)\}$. For this, PBP matches moments between the new $q$ and the product of the old $q$ with the corresponding likelihood factor $\mathcal{N}(y_i \,|\, f(\mathbf{x}_i;\mathcal{W}),\gamma^{-1})$. The matching of moments for the distributions on the weights is achieved by using well-known Gaussian ADF updates, see equations 5.12 and 5.1 in \cite{minka2001family}. To compute the ADF updates, PBP finds a Gaussian approximation to the distribution of the network output $f(\mathbf{x}_i;\mathcal{W})$ when $\mathcal{W} \sim q$. This is achieved by doing a forward pass of $\mathbf{x}_i$ through the network, with the weights $\mathcal{W}$ being randomly sampled from $q$. In this forward pass the non-Gaussian distributions followed by the output of the neurons are approximated with Gaussians that have the same means and variances as the original distributions. This is a Gaussian approximation by moment matching. We refer the reader to \citet{hernandez2015probabilistic} for full details on PBP. After several ADF iterations over the data by PBP, we can then make predictions for the unknown target variable $y_\star$ associated with a new feature vector $\mathbf{x}_\star$. For this, we obtain a Gaussian approximation to $f(\mathbf{x}_\star;\mathcal{W})$ when $\mathcal{W}\sim q$ by applying the forward pass process described above. To implement TS, as described in Algorithm \ref{alg:seq_thompson_sampling}, we first sample the model parameters $\bm \theta$ from the posterior $p(\bm \theta|\mathcal{D}_{\mathcal{I}})$ and then optimize the AF given by $\mathbf{E}[y_j|\mathbf{x}_j,\bm \theta]$, with $j \not \in {\mathcal{I}}$. When the model is a Bayesian neural network trained with PBP, the corresponding operations are sampling $\mathcal{W}$ from $q$ and then optimizing the AF given by $f(\mathbf{x}_j;\mathcal{W})$, with $j \not \in {\mathcal{I}}$. This last step requires the use of a deterministic neural network, with weight values given by the posterior sample from $q$, to make predictions on all the molecules that have not been evaluated yet. Then, the molecule with highest predictive value is selected for the next evaluation. \section{Experiments with GPs and Parallel EI} \begin{figure*}[h!] \includegraphics[width=1.00\textwidth]{results_gps.pdf} \caption{Immediate regret in experiments with GPs, using TS, EI, PDTS and parallel EI for optimizing synthetic functions (first 3 plots) and functions sampled from a GP prior (fourth plot).}\label{fig:gp_experiments} \vspace{-4mm} \end{figure*} We first compare the performance of our parallel and distributed Thompson sampling (PDTS) algorithm with the most popular approach for parallel BO: the parallel EI method from Section \ref{sec:parallelBO}. Existing implementations of parallel EI such as spearmint\footnote{\url{https://github.com/HIPS/Spearmint}} use a Gaussian process (GP) model for the objective function. To compare with these methods, we also adopt a GP as the model in PDTS. Note that parallel EI cannot scale to the large batch sizes used in high-throughput screening. Therefore, we consider here only parallel optimization problems with small batch sizes and synthetic objective functions. Besides PDTS and parallel EI, we also analyze the performance of the sequential versions of these algorithms: TS and EI. 
To implement Thompson sampling (TS) with a GP model, we approximate the non-parametric GP with a parametric approximation based on random features, as described in the supplementary material of \cite{hernandez2014predictive}. For the experiments, we consider a cluster with 11 nodes: one central node for controlling the BO process and 10 additional nodes for parallel evaluations. We assume that all objective evaluations take a very large amount of time and that the cost of training the GPs and recomputing and optimizing the AF is negligible in comparison. Thus, in practice, we perform these experiments in a sequential (non-parallel) fashion with the GP model being updated only in blocks of 10 consecutive data points at a time. As objective functions we consider the two dimensional Bohachevsky and Branin-Hoo functions and the six dimensional Hartmann function, all available in Benchfunk\footnote{\url{https://github.com/mwhoffman/benchfunk}}. We also consider the optimization of functions sampled from the GP prior over the 2D unit square using a squared exponential covariance function with fixed 0.1 length scale. After each objective evaluation, we compute the immediate regret (IR), which we define as the difference between the best objective value obtained so far and the minimum value of the objective function. The measurement noise is zero in these experiments. Figure \ref{fig:gp_experiments} reports mean and standard errors for the logarithm of the best IR seen so far, averaged across 50 repetitions of the experiments. In the plots, the horizontal axis shows the number of function evaluations performed so far. Note that in these experiments TS and EI update their GPs once per sample, while PDTS and parallel EI update only every 10 samples. Figure \ref{fig:gp_experiments} shows that EI is better than TS in most cases, although the differences between these two methods are small in the Branin-Hoo function. However, EI is considerably much better than TS in Hartmann. The reason for this is that in Hartmann there are multiple equivalent global minima and TS tends to explore all of them. EI is by contrast more exploitative and focuses on evaluating the objective around only one of the minima. The differences between parallel EI and PDTS are much smaller, with both obtaining very similar results. The exception is again Hartmann, where parallel EI is much better than PDTS, probably because PDTS is more explorative than parallel EI. Interestingly, PDTS performs better than parallel EI on the random samples from the GP prior, although parallel EI eventually catches up. These results indicate that PDTS performs in practice very similarly to parallel EI, one of the most popular methods for parallel BO. \section{Experiments with Molecule Data Sets}\label{sec:data_sets} We describe the molecule data sets used in our experiments. The input features for all molecules are 512-bit Morgan circular fingerprints \cite{Rogers_2010}, calculated with a bond radius of 2, and derived from the canonical SMILES as implemented in the RDkit package \cite{rdkit}. \textbf{Harvard Clean Energy Project}: The Clean Energy Project is the world's largest materials high-throughput virtual screening effort \cite{Hachmann_2014,Hachmann_2011}, and has scanned more than 3.5 million molecules to find those with high power conversion efficiency (PCE) using quantum-chemical techniques, taking over 30,000 years of CPU time. 
The target value within this data set is the power conversion efficiency (PCE), which is calculated for the 2.3 million publicly released molecules, using the Scharber model \cite{Dennler_2008} and frontier orbitals calculated at the BP86 \cite{Perdew_1986,Becke_1993}/def2-SVP \cite{Weigend_2005} level of theory.

\textbf{Dose-Response Data Set}: These data sets were obtained from the NCI-cancer database \cite{_nci_}. The dose-response target value has a potential range of -100 to 100, and reports a percentage cell growth relative to a no-drug control. Thus, a value of +40 would correspond to a 60\% growth inhibition and a value of -40 would correspond to 40\% lethality. Molecules with a positive value for the dose-response are known as inhibitors, while molecules with a score less than 0 have a cytotoxic effect. Results for the NCI-H23 cell line were taken at a constant log-concentration of -8.00 M, and where multiple identical conditions were present in the data, an average was used for the target variables. In this data set we are interested in finding molecules with smaller values of the target variable.

\textbf{Malaria Data Set}: The Malaria data set was taken from the \textit{P. falciparum} whole-cell screening data obtained by combining the GSK TCAMS data set, the Novartis-GNF Malaria Box data set and the St Jude's Research Hospital data set, as released through the Medicines for Malaria Venture website \cite{Spangenberg_2013}. The target variable is the EC50 value, which is defined as the concentration of the drug which gives the half-maximal response. Much like the Dose-Response data set, the focus here is on minimization: the lower the concentration, the stronger the drug.

\begin{figure*}[h!]
\begin{center}
\includegraphics[width=1\textwidth]{thompson_tile4.png}
\vspace{-3mm}
\caption{{Recall obtained by PDTS on each data set. For the CEP data, the recall for molecules with a PCE $>10\%$ is reported, whilst for One-dose and Malaria we report the recall for the molecules in the top 1\%. In addition to the Monte Carlo sampling baseline, we also include results for a greedy sampling approach, in which there is no exploration, and the molecules are chosen according to the mean of the predictive distribution given by PBP. The overall lower performance of this greedy strategy illustrates the importance of exploration in this type of problem. \label{fig:thompson_1pc} }}
\end{center}
\vspace{-4mm}
\end{figure*}

\subsection{Results}\label{sec:thompson_sampling}
We evaluate the gains produced by PDTS in experiments simulating a high-throughput virtual screening setting. In these experiments, we sequentially sample molecules from libraries of candidate molecules given by the data sets from Section \ref{sec:data_sets}. After each sampling step, we calculate the 1\% recall, that is, the fraction of the top 1\% of molecules from the original library that are found among the sampled ones. For the CEP data, we compute recall by focusing on molecules with PCE larger than 10\%. In all data sets, each sampling step involves selecting a batch of molecules among those that have not been sampled so far. In the Malaria and One-dose data sets we use batches of size 200. These data sets each contain about 20,000 molecules. By contrast, the CEP data set contains 2 million molecules. In this latter case, we use batches of size 500. We use Bayesian neural networks with one hidden layer and 100 hidden units. We compare the performance of PDTS with two baselines.
The first one, \emph{greedy}, is a sampling strategy that only considers exploitation and does not perform any exploration. We implement this approach by selecting molecules according to the average of the probabilistic predictions generated by PBP. That is, the greedy approach ignores any variance in the predictions of the Bayesian neural network and generates batches by just ranking molecules according to the mean of the predictive distribution given by PBP. The second baseline is a Monte Carlo approach in which the batches of molecules are selected uniformly at random. These two baselines are comparable to PDTS in that they can be easily implemented in a large-scale setting in which the library of candidate molecules contains millions of elements and data is sampled using large batch sizes. In the Malaria and One-dose data sets, we average across 50 different realizations of the experiments. This is not possible in the CEP data set, which is 100 times larger than the two other data sets. In the CEP case, we report results for a single realization of the experiment (in a second realization we obtained similar results).

Figure \ref{fig:thompson_1pc} shows the recall obtained by each method in the molecule data sets. PDTS significantly outperforms the Monte Carlo approach, and also offers better performance than greedy sampling. This shows the importance of building exploration into the sampling strategy, rather than relying on purely exploitative methods. The greedy approach is most competitive on the CEP data set. In this case, the greedy strategy initially finds better molecules than PDTS, but after a while PDTS overtakes it, probably because a promising area of chemical space initially discovered by the greedy approach starts to become exhausted.

The previous results allow us to consider the savings produced by BO. In the CEP data set, PDTS achieves about 20 times higher recall values than the Monte Carlo approach, which is comparable to the exhaustive enumeration that was used to collect the CEP data. We estimate that, with BO, the CEP virtual screening process would have taken 1,500 CPU years instead of the 30,000 that were actually used. Regarding the One-dose and Malaria data sets, PDTS can locate about 70\% of the top 1\% of molecules in both sets by sampling approximately 6,000 molecules. By contrast, the Monte Carlo approach would require sampling 14,000 molecules. This represents a significant reduction in the discovery time for new therapeutic molecules and savings in the economic costs associated with molecule synthesis and testing.

\vspace{-0.1cm}
\subsection{Comparison with $\epsilon$-greedy Approaches}
\vspace{-0.1cm}

We can easily modify the greedy baseline from the previous section to include some amount of exploration by replacing a small fraction of the molecules in each batch with molecules chosen uniformly at random. This approach is often called $\epsilon$-greedy \cite{watkins1989learning}, where the variable $\epsilon$ indicates the fraction of molecules that are sampled uniformly at random. The disadvantage of the $\epsilon$-greedy approach is that it requires the tuning of $\epsilon$ to the problem of interest, whereas the amount of exploration is set automatically by PDTS.
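A minimal sketch of this baseline follows; it is our illustration with placeholder predictive means, not the exact experimental code.
\begin{verbatim}
# Illustrative sketch only: epsilon-greedy batch selection, mixing a random
# fraction epsilon of the batch with molecules ranked by predictive mean.
import numpy as np

def epsilon_greedy_batch(pred_means, available, batch_size, epsilon, rng):
    available = np.fromiter(available, dtype=int)
    n_random = int(round(epsilon * batch_size))
    random_picks = rng.choice(available, size=n_random, replace=False)
    remaining = np.setdiff1d(available, random_picks)
    greedy_order = remaining[np.argsort(-pred_means[remaining])]
    return np.concatenate([random_picks, greedy_order[:batch_size - n_random]])

rng = np.random.default_rng(0)
pred_means = rng.normal(size=20000)       # placeholder PBP predictive means
available = set(range(20000))             # molecules not yet evaluated
batch = epsilon_greedy_batch(pred_means, available, 200, 0.05, rng)
\end{verbatim}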
\begin{table}
\centering
\caption{Average rank and standard errors obtained by each method.}\label{table1}
\begin{tabular}{lr@{$\pm$}l}
\hline
\bf{Method}& \multicolumn{2}{c}{\bf{Rank}}\\
\hline
$\epsilon = 0.01$ & 3.42 & 0.28 \\
$\epsilon = 0.025$ & 3.02 & 0.25 \\
$\epsilon = 0.05$ & 2.86 & 0.23 \\
$\epsilon = 0.075$ & 3.20 & 0.26 \\
PDTS & \bf{2.51} & \bf{0.20} \\
\hline
\vspace{-8mm}
\end{tabular}
\end{table}
We compared PDTS with different versions of $\epsilon$-greedy in the same way as above, using $\epsilon = 0.01, 0.025, 0.05$ and $0.075$. The experiments with the One-dose and the Malaria data sets are similar to the ones done before. However, we now sub-sample the CEP data set to be able to average across 50 different realizations of the experiment: we choose 4,000 molecules uniformly at random and then collect data in batches of size 50 across 50 different repetitions of the screening process. We compute the average rank obtained by each method across the $3\times 50 = 150$ simulated screening experiments. A ranking equal to 1 indicates that the method always obtains the highest recall at the end of the experiment, while a ranking equal to 5 indicates that the method always obtains the worst recall value. Table \ref{table1} shows that the lowest average rank is obtained by PDTS, which achieves better exploration-exploitation trade-offs than the $\epsilon$-greedy approaches.

\vspace{-0.1cm}
\section{Conclusions}
\vspace{-0.1cm}

We have presented a parallel and distributed implementation of Thompson sampling (PDTS), a highly scalable method for parallel Bayesian optimization. PDTS can be applied when scalability limits the applicability of competing approaches. We have evaluated the performance of PDTS in experiments with both Gaussian processes and Bayesian neural networks. We show that PDTS compares favorably with parallel EI in problems with small batch sizes. We also demonstrate the effectiveness of PDTS on large-scale real-world applications that involve searching chemical space for new molecules with improved properties. We show that PDTS outperforms other scalable approaches on these applications, in particular a greedy search strategy, $\epsilon$-greedy approaches and a random search method.

\subsection*{Acknowledgements}
J.M.H.L. acknowledges support from the Rafael del Pino Foundation. The authors thank Ryan P. Adams for useful discussions. A.A.-G. and E.O.P.-K. acknowledge the Department of Energy Program on Theory and modeling through grant {DE-SC0008733}.
\section{Introduction}
This paper is devoted to generalizing the Thomas projective parameter \cite{thomas1} and the Weyl projective tensor \cite{weyl} as invariants of a mapping of an affine connection space. These invariants were first generalized in \cite{vesicgeninv1}, but, as we show in this paper, they are not the only ones. We are therefore interested in obtaining invariants of geometric mappings different from the invariants presented in \cite{vesicgeninv1}.
\subsection{Spaces of affine connection}
An $N$-dimensional manifold $\mathcal M^N$ equipped with an affine connection $\nabla$ with torsion is called the non-symmetric affine connection space $\mathbb{GA}_N$. The affine connection coefficients of the affine connection $\nabla$ are $L^i_{jk}$, and they are non-symmetric by indices $j$ and $k$. The geometrical object
\begin{equation}
L^i_{\underline{jk}}=\frac12\big(L^i_{jk}+L^i_{kj}\big),
\end{equation}
\noindent satisfies the equation
\begin{equation*}
L^{i'}_{\underline{j'k'}}=x^{i'}_ix^j_{j'}x^k_{k'}L^i_{\underline{jk}}+ x^{i'}_{i}x^i_{j'k'},
\end{equation*}
\noindent so it is the affine connection coefficient of a symmetric affine connection space $\mathbb A_N$. The manifold $\mathcal M^N$ equipped with an affine connection $\widetilde\nabla$ whose coefficients are $L^i_{\underline{jk}}$ is \textbf{the associated space $\mathbb A_N$} (of the space $\mathbb{GA}_N$). There is one kind of covariant derivative with respect to a symmetric affine connection:
\begin{equation}
a^i_{j|k}=a^i_{j,k}+L^i_{\underline{\alpha k}}a^\alpha_j- L^\alpha_{\underline{jk}}a^i_\alpha,
\label{eq:covderivativesim}
\end{equation}
\noindent for a tensor $a^i_j$ of type $(1,1)$, with the partial derivative $\partial/\partial x^i$ denoted by a comma. From the corresponding Ricci-type identity, one obtains one curvature tensor of the associated space $\mathbb A_N$:
\begin{equation}
R^i_{jmn}=L^i_{\underline{jm},n}-L^i_{\underline{jn},m} + L^\alpha_{\underline{jm}}L^i_{\underline{\alpha n}}- L^\alpha_{\underline{jn}}L^i_{\underline{\alpha m}}.
\label{eq:R}
\end{equation}
Special symmetric affine connection spaces are the Riemannian spaces $\mathbb R_N$, whose affine connection coefficients are the corresponding Christoffel symbols $\Gamma^i_{\underline{jk}}$ of the second kind.

Many authors have developed the theory of symmetric affine connection spaces and mappings between them. Some of them are J. Mike\v s \cite{mik1, mik10, mik2, mik3, mik5, mik6, mik7, mik8}, N. S. Sinyukov \cite{sinjukov}, V. E. Berezovski \cite{mik1, mik5, mik10, mik7, mik8}, L. P. Eisenhart \cite{eisRim} and many others. M. S. Stankovi\'c (see \cite{micasg}) obtained an invariant of an almost geodesic mapping of a non-symmetric affine connection space from the corresponding transformation of the curvature tensor (\ref{eq:R}).
\subsection{Motivation}
H. Weyl \cite{weyl} and T. Y. Thomas \cite{thomas1} obtained invariants of geodesic mappings of a symmetric affine connection space. J. Mike\v s with his research group \cite{mik1, mik10, mik2, mik3, mik5, mik6, mik7, mik8}, N. S. Sinyukov \cite{sinjukov}, and many other authors have continued the process of generalization of these invariants. The search for a general formula for the invariants of geometric mappings was started in \cite{vesicgeninv1}, where the basic invariants of geometric mappings were obtained. A special case was also studied in that paper.
In this special case, the basic invariants of a mapping were obtained, but the author also found some other invariants different from the basic ones. In this paper, we are interested in obtaining some of these non-basic invariants. The invariants which we wish to generalize in this paper are
\begin{itemize}
\item The generalized Thomas projective parameter \cite{thomas1}:
\begin{equation}
T^i_{jk}=L^i_{\underline{jk}}- \frac1{N+1}\big(\delta^i_kL^\alpha_{\underline{j\alpha}}+ \delta^i_jL^\alpha_{\underline{k\alpha}}\big).
\label{eq:Thomasgeodesic}
\end{equation}
\item The Weyl projective tensor \cite{weyl}:
\begin{equation}
W^i_{jmn}=R^i_{jmn}+\frac1{N+1}\delta^i_jR_{[mn]}+ \frac N{N^2-1}\delta^i_{[m}R_{jn]}+ \frac1{N^2-1}\delta^i_{[m}R_{n]j}.
\label{eq:Weylgeodesic}
\end{equation}
\end{itemize}
In a Riemannian space $\mathbb{R}_N$, the Weyl projective tensor (\ref{eq:Weylgeodesic}) reduces to
\begin{equation}
W^i_{jmn}=R^i_{jmn}+ \frac 1{N-1}\big(\delta^i_{m}R_{jn}-\delta^i_{n}R_{jm}\big).
\label{eq:WeylgeodesicRN}
\end{equation}
\section{Reminder on basic invariants}
Let $f:\mathbb{GA}_N\to\mathbb{G\overline A}_N$ be a mapping between non-symmetric affine connection spaces $\mathbb{GA}_N$ and $\mathbb{G\overline A}_N$. The deformation tensor $P^i_{jk}$ of this mapping satisfies the corresponding equation \cite{vesicgeninv1}
\begin{equation}
P^i_{jk}=\overline L^i_{jk}-L^i_{jk}= \overline\omega{}^i_{jk}-\omega^i_{jk}+\overline\tau{}^i_{jk}- \tau^i_{jk},
\label{eq:P}
\end{equation}
\noindent for geometrical objects $\omega^i_{jk},\overline\omega{}^i_{jk},\tau^i_{jk},\overline\tau{}^i_{jk}$ of the type $(1,2)$ such that $\omega^i_{jk}=\omega^i_{kj},$ $\overline\omega{}^i_{jk}=\overline\omega{}^i_{kj},$ $\tau^i_{jk}=-\tau^i_{kj},$ $\overline\tau{}^i_{jk}=-\overline\tau{}^i_{kj}$. After symmetrizing the equation (\ref{eq:P}) by indices $j$ and $k$, one gets
\begin{eqnarray}
\overline L^i_{\underline{jk}}=L^i_{\underline{jk}}+ \overline\omega^i_{jk}-\omega^i_{jk}.
\label{eq:Psim}
\end{eqnarray}
With regard to the last result, three kinds of invariants of the mapping $f$ were obtained in \cite{vesicgeninv1}. In this paper, we are interested in generalizing only the invariants of the second kind. The basic invariants which we will generalize are:
\begin{align}
&\aligned
{\widetilde{\mathcal T}}{}^i_{jk}=L^i_{\underline{jk}}-\omega^i_{jk},
\endaligned\label{eq:Tsimantisimgeneral}\\\displaybreak[0]
&\aligned
{\widetilde{\mathcal W}}{}^i_{jmn}= R^i_{jmn}-\omega{}^i_{jm|n}+ \omega{}^i_{jn|m}+ \omega{}^\alpha_{jm}\omega{}^i_{\alpha n}-\omega{}^\alpha_{jn}\omega{}^i_{\alpha m},
\endaligned\label{eq:Wbasic}
\end{align}
\noindent for $p^1_1,\ldots,p^2_3=1,2,\omega_{(1).jk}^i=L^i_{\underline{jk}}, \omega^i_{(2).jk}=\omega^i_{jk}$. The invariant (\ref{eq:Tsimantisimgeneral}) is the basic invariant of the mapping $f$ of the Thomas type, while the invariant (\ref{eq:Wbasic}) is the basic invariant of the mapping $f$ of the Weyl type.
\section{Derived invariants}
In general, the geometrical object $\omega^i_{jk}$ from the equation (\ref{eq:P}) has the form
\begin{equation}
\omega^i_{jk}=s_{(1)}\big(\delta^i_j\rho_k+\delta^i_k\rho_j\big)+ s_{(2)}\big(F^i_j\sigma_k+F^i_k\sigma_j\big)+s_{(3)}\sigma_{jk}\varphi^i,
\label{eq:omegagen}
\end{equation}
\noindent for $s_{(1)},s_{(2)},s_{(3)}\in\mathbb R$, $1$-forms $\rho_j,\sigma_j$, an affinor $F^i_j$, a covariant tensor $\sigma_{jk}$ symmetric by indices $j$ and $k$, and a contravariant vector $\varphi^i$.
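As a simple illustration of the form (\ref{eq:omegagen}), consider the special case $s_{(2)}=s_{(3)}=0$, in which $\omega^i_{jk}=s_{(1)}\big(\delta^i_j\rho_k+\delta^i_k\rho_j\big)$ and the basic invariant of the Thomas type (\ref{eq:Tsimantisimgeneral}) becomes
\begin{equation*}
{\widetilde{\mathcal T}}{}^i_{jk}=L^i_{\underline{jk}}-s_{(1)}\big(\delta^i_j\rho_k+\delta^i_k\rho_j\big).
\end{equation*}
In particular, for $s_{(1)}=1$ and $\rho_j=\frac1{N+1}L^\alpha_{\underline{j\alpha}}$ this expression coincides with the generalized Thomas projective parameter (\ref{eq:Thomasgeodesic}).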
\begin{rem} The geometrical objects $\delta^i_j\rho_k+\delta^i_k\rho_j$, $F^i_j\sigma_k+F^i_k\sigma_j$, $\sigma_{jk}\varphi^i$ are linearly independent. Otherwise, there would not be necessary three constants $s_{(1)},s_{(2)},s_{(3)}$. \end{rem} With regard to the equation (\ref{eq:omegagen}), we get \begin{equation} \aligned \omega^\alpha_{jm}\omega^i_{\alpha n}&=\big(s_{(1)}\big)^2\delta^i_j\rho_m\rho_n+\big(s_{(1)}\big)^2 \delta^i_m\rho_j\rho_n\\&+ \delta^i_n\Big(2\big(s_{(1)}\big)^2\rho_j\rho_m+ s_{(1)}s_{(2)}\big(F^\alpha_m\rho_\alpha\sigma_j+F^\alpha_j\rho_\alpha\sigma_m\big) +s_{(1)}s_{(3)}\sigma_{jm}\rho_\alpha\varphi^\alpha\Big)\\&+ \big(s_{(2)}\big)^2\Big(F^i_n\big(F^\alpha_m\sigma_j+F^\alpha_j\sigma_m\big)\sigma_\alpha+ F^i_\alpha\big(F^\alpha_m\sigma_j+F^\alpha_j\sigma_m\big)\sigma_n\Big)+ \big(s_{(3)}\big)^2\sigma_{jm}\sigma_{\alpha n}\varphi^\alpha\varphi^i\\&+ s_{(1)}s_{(2)}\Big(F^i_n\big(\rho_j\sigma_m+\rho_m\sigma_j\big)+ F^i_m\big(\rho_j\sigma_n+\rho_n\sigma_j\big)+ F^i_j\big(\rho_m\sigma_n+\rho_n\sigma_m\big)\Big)\\&+ s_{(1)}s_{(3)}\big(\sigma_{mn}\rho_j+\sigma_{jn}\rho_m+\sigma_{jm}\rho_n\big)\varphi^i\\&+ s_{(2)}s_{(3)}\Big(\big(F^\alpha_m\sigma_j+F^\alpha_j\sigma_m\big) \sigma_{\alpha n}\varphi^i+\big(F^i_n\sigma_\alpha+ F^i_\alpha\sigma_n\big)\sigma_{jm}\varphi^\alpha\Big) \endaligned\label{eq:omegaomegagen} \end{equation} It holds the following theorem: \begin{thm} Let $f:\mathbb{GA}_N\to\mathbb{G\overline A}_N$ be a geometric mapping. The geometrical objects \begin{align} &\aligned \widetilde{\mathcal T}{}^{(s).i}_{jk}=L^i_{\underline{jk}}- s_{(1)}\big(\delta^i_j\rho_k+\delta^i_k\rho_j\big)- s_{(2)}\big(F^i_j\sigma_k+F^i_k\sigma_j\big)-s_{(3)}\sigma_{jk}\varphi^i, \endaligned\label{eq:basicThomasGeneral}\\\displaybreak[0] &\aligned \widetilde{\mathcal W}{}^{(s).i}_{jmn}&=R^i_{jmn}-\delta^i_j\zeta^{(s)}_{[mn]}- \delta^i_m\zeta^{(s)}_{jn}+\delta^i_n\zeta^{(s)}_{jm} +\widetilde{\mathcal D}{}^{(s_2).(s_3).i}_{jmn}-\widetilde{\mathcal D}{}^{(s_2).(s_3).i}_{jnm}, \endaligned\label{eq:basicWeylGeneral} \end{align} \noindent for \begin{align} &\aligned \zeta^{(s)}_{ij}&=s_{(1)}\rho_{i|j}+\big(s_{(1)}\big)^2\rho_i\rho_j+ s_{(1)}s_{(2)}\big(F^\alpha_i\sigma_j+F^\alpha_j\sigma_i\big)\rho_\alpha+ s_{(1)}s_{(3)}\sigma_{ij}\rho_\alpha\varphi^\alpha, \endaligned\label{eq:zeta(s)}\\ &\aligned \widetilde{\mathcal D}{}^{(s_2).(s_3).i}_{jmn}&=\big(s_{(2)}\big)^2\Big(F^i_n\big(F^\alpha_m\sigma_j+ F^\alpha_j\sigma_m\big)\sigma_\alpha+F^i_\alpha F^\alpha_m\sigma_j\sigma_n\Big)+\big(s_{(3)}\big)^2\sigma_{jm}\sigma_{\alpha n}\varphi^\alpha\varphi^i\\&+ s_{(2)}s_{(3)}\Big(\big(F^\alpha_{m}\sigma_j+ F^\alpha_j\sigma_{m}\big)\sigma_{\alpha n}\varphi^i- \big(F^i_{m}\sigma_\alpha+ F^i_\alpha\sigma_{m}\big)\sigma_{jn}\varphi^\alpha\Big)\\&- s_{(2)}\big(F^i_j\sigma_m+F^i_m\sigma_j\big)_{|n}- s_{(3)}\big(\sigma_{jm}\varphi^i\big)_{|n}, \endaligned\label{eq:D(s2)(s3)} \end{align} \noindent $s=(s_{1},s_{2},s_{3})$, are the basic invariants of the mapping $f$.\hfill\qed \end{thm} \subsection{Invariants in associated space} We will analyze the invariants (\ref{eq:basicThomasGeneral}, \ref{eq:basicWeylGeneral}) of a mapping $f:\mathbb{GA}_N\to\mathbb{G\overline A}_N$ bellow. From this analyzing, we will obtain some other invariants of the mapping $f$. 
After contracting the equality $\widetilde{\overline{\mathcal T}}{}^i_{jk}- \widetilde{\mathcal T}{}^i_{jk}=0$ by indices $i$ and $k$, we get \begin{equation} \aligned (N+1)s_{(1)}\big(\overline\rho_j-\rho_j\big)&=\overline L^\alpha_{\underline{j\alpha}}-s_{(2)}\big(\overline F^\alpha_j\overline\sigma_k+\overline F\overline\sigma_j\big)- s_{(3)}\overline\sigma_{j\alpha}\overline\varphi^\alpha\\&- L^\alpha_{\underline{j\alpha}}+s_{(2)}\big(F^\alpha_j\sigma_\alpha+F\sigma_j\big)+ s_{(3)}\sigma_{j\alpha}\varphi^\alpha, \endaligned\label{eq:rho-rho} \end{equation} \noindent for $F=F^\alpha_\alpha,\overline F=\overline F^\alpha_\alpha$. After substituting this equation into the equality $\widetilde{\overline{\mathcal T}}{}^{(s).i}_{jk}- \widetilde{\mathcal T}{}^{(s).i}_{jk}=0$, one obtains \begin{equation*} \widetilde{{\overline T}}{}^{(s).i}_{jk}= \widetilde{T}{}^{(s).i}_{jk}, \end{equation*} \noindent for \begin{equation} \aligned \widetilde T{}^{(s).i}_{jk}&=L^i_{\underline{jk}}- \frac{s_{(1)}}{N+1}\delta^i_j\Big(L^\alpha_{\underline{k\alpha}}- s_{(2)}(F^\alpha_k\sigma_\alpha+F\sigma_k\big)-s_{(3)}\sigma_{k\alpha}\varphi^\alpha\Big)\\& - \frac{s_{(1)}}{N+1}\delta^i_k\Big(L^\alpha_{\underline{j\alpha}}- s_{(2)}(F^\alpha_j\sigma_\alpha+F\sigma_j\big)-s_{(3)}\sigma_{j\alpha}\varphi^\alpha\Big)\\&- s_{(2)}\big(F^i_j\sigma_k+F^i_k\sigma_j\big)-s_{(3)}\sigma_{jk}\varphi^i, \endaligned\label{eq:generalThomassymmetric} \end{equation} \noindent and the corresponding $\widetilde{\overline T}{}^{(s).i}_{jk}$. \begin{lem} Let $f:\mathbb{GA}_N\to\mathbb{G\overline A}_N$ be a geometric mapping of a non-symmetric affine connection space $\mathbb{GA}_N$. The geometrical object \emph{(\ref{eq:generalThomassymmetric})} is an invariant of the mapping $f$.\hfill\qed \end{lem} \begin{cor} The invariant \emph{(\ref{eq:generalThomassymmetric})} and the Thomas projective parameter $T^i_{jk}$ given in the equation \emph{(\ref{eq:Thomasgeodesic})} satisfy the equation \begin{equation} \aligned \widetilde T{}^{(s).i}_{jk}&=s_{(1)} T^{i}_{jk}+(1-s_{(1)})L^i_{\underline{jk}} - s_{(2)}\big(F^i_j\sigma_k+F^i_k\sigma_j\big)-s_{(3)}\sigma_{jk}\varphi^i \\&+ \frac{s_{(1)}}{N+1}\Big( s_{(2)}(F^\alpha_k\sigma_\alpha+F\sigma_k\big)+s_{(3)}\sigma_{k\alpha}\varphi^\alpha\Big)\delta^i_j\\& + \frac{s_{(1)}}{N+1}\Big( s_{(2)}(F^\alpha_j\sigma_\alpha+F\sigma_j\big)+s_{(3)}\sigma_{j\alpha}\varphi^\alpha\Big)\delta^i_k, \endaligned\label{eq:dThomasThomascorrelation} \end{equation} \noindent for the corresponding $s=(s_1,s_2,s_3)$.\hfill\qed \end{cor} The geometrical object (\ref{eq:generalThomassymmetric}) is \emph{the derived associated invariant} of the Thomas type. Furthermore, from the invariance of the geometrical object (\ref{eq:basicWeylGeneral}) we obtain \begin{equation} \aligned \overline R^i_{jmn}&=R^i_{jmn}+\delta^i_j\big(\overline\zeta^{(s)}_{[mn]}- \zeta^{(s)}_{[mn]}\big)+\delta^i_m\big(\overline\zeta^{(s)}_{jn}- \zeta^{(s)}_{jn}\big)-\delta^i_n\big(\overline\zeta^{(s)}_{jm}- \zeta^{(s)}_{jm}\big)\\&- \widetilde{\overline{\mathcal D}}{}^{(s_2).(s_3).i}_{jmn}+ \widetilde{\overline{\mathcal D}}{}^{(s_2).(s_3).i}_{jnm}+ \widetilde{{\mathcal D}}{}^{(s_2).(s_3).i}_{jmn}- \widetilde{{\mathcal D}}{}^{(s_2).(s_3).i}_{jnm}. 
\endaligned\label{eq:R-Rgeneral} \end{equation} After contracting this equation by indices $i$ and $j$, we get \begin{equation} \aligned (N+1)\big(\overline\zeta^{(s)}_{[mn]}-\zeta^{(s)}_{[mn]}\big)&= -\overline R_{[mn]}+\widetilde{\overline{\mathcal D}}{}^{(s_2).(s_3).\alpha}_{\alpha mn}- \widetilde{\overline{\mathcal D}}{}^{(s_2).(s_3).\alpha}_{\alpha nm}\\&+R_{[mn]}- \widetilde{{\mathcal D}}{}^{(s_2).(s_3).\alpha}_{\alpha mn}+ \widetilde{{\mathcal D}}{}^{(s_2).(s_3).\alpha}_{\alpha nm}. \endaligned\label{eq:R-Rgenerali=j} \end{equation} From the equations (\ref{eq:R-Rgeneral}, \ref{eq:R-Rgenerali=j}), one obtains \begin{equation} \aligned \overline R^i_{jmn}&=R^i_{jmn}+\frac1{N+1}\delta^i_j\big(R_{[mn]}-\overline R_{[mn]}\big)+\delta^i_m\big(\overline\zeta^{(s)}_{jn}- \zeta^{(s)}_{jn}\big)-\delta^i_n\big(\overline\zeta^{(s)}_{jm}- \zeta^{(s)}_{jm}\big)\\& +\frac1{N+1}\delta^i_j\big(\widetilde{\overline{\mathcal D}}{}^{(s_2).(s_3).\alpha}_{\alpha mn}- \widetilde{\overline{\mathcal D}}{}^{(s_2).(s_3).\alpha}_{\alpha nm}- \widetilde{{\mathcal D}}{}^{(s_2).(s_3).\alpha}_{\alpha mn}+ \widetilde{{\mathcal D}}{}^{(s_2).(s_3).\alpha}_{\alpha nm}\big)\\&- \widetilde{\overline{\mathcal D}}{}^{(s_2).(s_3).i}_{jmn}+ \widetilde{\overline{\mathcal D}}{}^{(s_2).(s_3).i}_{jnm}+ \widetilde{{\mathcal D}}{}^{(s_2).(s_3).i}_{jmn}- \widetilde{{\mathcal D}}{}^{(s_2).(s_3).i}_{jnm}. \endaligned\label{eq:R-Rgenerali=j2} \end{equation} From the contraction of this result by indices $i$ and $n$, we get \begin{equation} \aligned (N-1)\big(\overline\zeta^{(s)}_{jm}-\zeta^{(s)}_{jm}\big)&= \frac N{N+1}\big(R_{jm}-\overline R_{jm}\big)+ \frac1{N+1}\big(R_{mj}-\overline R_{mj}\big)\\&+ \frac1{N+1}\big( \widetilde{\overline{\mathcal D}}{}^{(s_2).(s_3).\alpha}_{\alpha[jm]}- \widetilde{{\mathcal D}}{}^{(s_2).(s_3).\alpha}_{\alpha[jm]}\big)\\&- \widetilde{\overline{\mathcal D}}{}^{(s_2).(s_3).\alpha}_{j[m\alpha]}+ \widetilde{{\mathcal D}}{}^{(s_2).(s_3).\alpha}_{j[m\alpha]}. \endaligned\label{eq:R-Rgenerali=n} \end{equation} With regard to the equations (\ref{eq:R-Rgenerali=j2}, \ref{eq:R-Rgenerali=n}), we get \begin{equation*} {{\overline W}}{}^{(s).[1].i}_{jmn}= {{W}}{}^{(s).[1].i}_{jmn}, \end{equation*} \noindent for \begin{equation} \aligned {{W}}{}^{(s).[1].i}_{jmn}&=R^i_{jmn}+ \frac1{N+1}\delta^i_jR_{[mn]}+ \frac N{N^2-1}\delta^i_{[m}R_{jn]}+ \frac1{N^2-1}\delta^i_{[m}R_{n]j}\\& +\widetilde{\mathcal D}{}^{(s_2).(s_3).i}_{j[mn]}- \frac1{N+1}\delta^i_j\widetilde{\mathcal D}{}^{(s_2).(s_3).\alpha}_{\alpha[mn]}\\&+ \frac1{N^2-1}\delta^i_m\big((N+1)\widetilde{{\mathcal D}}{}^{(s_2).(s_3).\alpha}_{j[n\alpha]}-\widetilde{{\mathcal D}}{}^{(s_2).(s_3).\alpha}_{\alpha[jn]}\big)\\&- \frac1{N^2-1}\delta^i_n\big((N+1)\widetilde{{\mathcal D}}{}^{(s_2).(s_3).\alpha}_{j[m\alpha]}-\widetilde{{\mathcal D}}{}^{(s_2).(s_3).\alpha}_{\alpha[jm]}\big), \endaligned\label{eq:generalWeylderived1} \tag{$i$} \end{equation} \noindent and the corresponding ${{\overline W}}{}^{(s).[1].i}_{jmn}$. Let us test are some summands in the invariant (\ref{eq:generalWeylderived1}) invariants of the mapping $f$. After contract the equality ${{\overline W}}{}^{(s).[1].i}_{jmn}-{{W}}{}^{(s).[1].i}_{jmn}=0$ by indices $i$ and $n$, we get \begin{equation*} \widetilde{\overline{\mathcal D}}{}^{(s_2).(s_3).\alpha}_{\alpha[jm]}- \widetilde{{\mathcal D}}{}^{(s_2).(s_3).\alpha}_{\alpha[jm]}=0. 
\end{equation*} Hence, the invariant (\ref{eq:generalWeylderived1}) reduces to \begin{equation} \aligned {{W}}{}^{(s).[2].i}_{jmn}&=R^i_{jmn}+ \frac1{N+1}\delta^i_jR_{[mn]}+ \frac N{N^2-1}\delta^i_{[m}R_{jn]}+ \frac1{N^2-1}\delta^i_{[m}R_{n]j}\\& +\widetilde{\mathcal D}{}^{(s_2).(s_3).i}_{j[mn]}+ \frac1{N-1}\big(\delta^i_m\widetilde{{\mathcal D}}{}^{(s_2).(s_3).\alpha}_{j[n\alpha]}- \delta^i_n\widetilde{{\mathcal D}}{}^{(s_2).(s_3).\alpha}_{j[m\alpha]}\big), \endaligned\label{eq:generalWeylderived1final} \tag{$ii$} \end{equation} \noindent and the corresponding ${\overline{W}}{}^{(s).[2].i}_{jmn}$. Let us check whether the invariant (\ref{eq:generalWeylderived1final}) contains further invariants of the mapping $f$. After contracting the equality ${\overline{W}}{}^{(s).[2].i}_{jmn}- {{W}}{}^{(s).[2].i}_{jmn}=0$ by the indices $i$ and $j$, one obtains \begin{equation*} \widetilde{\overline{\mathcal D}}{}^{(s_2).(s_3).\alpha}_{j[n\alpha]}- \widetilde{{\mathcal D}}{}^{(s_2).(s_3).\alpha}_{j[n\alpha]}=0. \end{equation*} Hence, the invariant (\ref{eq:generalWeylderived1final}) reduces to \begin{equation} \aligned W{}^{(s).i}_{jmn}&=R^i_{jmn}+ \frac1{N+1}\delta^i_jR_{[mn]}+ \frac N{N^2-1}\delta^i_{[m}R_{jn]}+ \frac1{N^2-1}\delta^i_{[m}R_{n]j}\\& +\widetilde{\mathcal D}{}^{(s_2).(s_3).i}_{jmn}- \widetilde{\mathcal D}{}^{(s_2).(s_3).i}_{jnm}. \endaligned\label{eq:Weylgensimf} \end{equation} The following theorem holds: \begin{thm} Let $f:\mathbb{GA}_N\to\mathbb{G\overline A}_N$ be a geometric mapping. The geometrical object\linebreak \emph{(\ref{eq:Weylgensimf})} is an invariant of this mapping. \hfill\qed \end{thm} \begin{cor} The invariant \emph{(\ref{eq:Weylgensimf})} and the Weyl projective tensor \emph{(\ref{eq:Weylgeodesic})} satisfy the equation \begin{equation} W{}^{(s).i}_{jmn}=W^i_{jmn} +\widetilde{\mathcal D}{}^{(s_2).(s_3).i}_{jmn}- \widetilde{\mathcal D}{}^{(s_2).(s_3).i}_{jnm}, \label{eq:dWeylWeylcorrelation} \end{equation} \noindent for the corresponding $s=(s_1,s_2,s_3)$.\hfill\qed \end{cor} The geometrical object (\ref{eq:Weylgensimf}) is \emph{the derived associated invariant} of the Weyl type. \subsection{$F$-planar mappings} In this part of the paper, we apply the results obtained above. The theoretical part of this application is the search for invariants of $F$-planar mappings. The practical part is an example concerning a transformation of a special Riemannian space.\\[3pt] \noindent\textbf{$F$-planar mappings.} A mapping $f:\mathbb{A}_N\to\mathbb{\overline A}_N$ is called \emph{an $F$-planar mapping} if it is determined by the equation \begin{equation} \overline L^i_{\underline{jk}}= L^i_{\underline{jk}}+\delta^i_k\psi_j+\delta^i_j\psi_k+ F^i_k\sigma_j+F^i_j\sigma_k, \label{eq:F-planardfn} \end{equation} \noindent for $1$-forms $\psi_j,\sigma_j$ and an affinor $F^i_j$. The basic equation of the inverse mapping $f^{-1}:\mathbb{\overline A}_N\to\mathbb{A}_N$ is \begin{equation} L^i_{\underline{jk}}=\overline L^i_{\underline{jk}}- \delta^i_k\psi_j-\delta^i_j\psi_k-F^i_k\sigma_j- F^i_j\sigma_k. \label{eq:F-planar-1dfn} \end{equation} Hence, the mapping $f^{-1}$ is an $F$-planar mapping with \begin{eqnarray} \overline F^i_j=F^i_j,& \overline\sigma_j=-\sigma_j,& \overline\psi_j=-\psi_j. \label{eq:F-planarpsisigmaF-1} \end{eqnarray} Moreover, it holds \begin{equation} \overline L^i_{\underline{jk}}= L^i_{\underline{jk}}+\delta^i_k\psi_j+\delta^i_j\psi_k- \frac12\overline F^i_k\overline\sigma_j- \frac12\overline F^i_j\overline\sigma_k+ \frac12F^i_k\sigma_j+\frac12F^i_j\sigma_k.
\label{eq:F-planaromega-omega1} \end{equation} Therefore, the $F$-planar mapping $f$ corresponds to the case $s_1=1$, $s_2=1/2$, $s_3=0$ in the equation (\ref{eq:omegagen}). After contracting the equation (\ref{eq:F-planaromega-omega1}) by indices $i$ and $k$, one gets \begin{equation} \aligned \psi_j&=\frac1{N+1}\Big(\overline L^\alpha_{\underline{j\alpha}} +\frac12\overline F\,\overline\sigma_j+ \frac12\overline F^\alpha_j\overline\sigma_\alpha\Big)- \frac1{N+1}\Big(L^\alpha_{\underline{j\alpha}} +\frac12F\sigma_j+ \frac12F^\alpha_j\sigma_\alpha\Big). \endaligned\label{eq:F-planarpsij} \end{equation} With regard to the equations (\ref{eq:omegagen}, \ref{eq:F-planaromega-omega1}, \ref{eq:F-planarpsij}), we obtain \begin{equation} \rho_j=\frac1{N+1}\Big(L^\alpha_{\underline{j\alpha}} +\frac12F\sigma_j+ \frac12F^\alpha_j\sigma_\alpha\Big), \label{eq:F-planarrhoj} \end{equation} \noindent for $F=F^\alpha_\alpha$. From the equation (\ref{eq:basicThomasGeneral}), it follows that the associated invariant of the Thomas type of the mapping $f$ is \begin{align} &\aligned \widetilde{T}{}^i_{jk}&=L^i_{\underline{jk}}-\frac12\big(F^i_j\sigma_k+F^i_k\sigma_j\big)- \frac1{N+1}\delta^i_j\Big(L^\alpha_{\underline{k\alpha}}- \frac12\big(F^\alpha_k\sigma_\alpha+F\sigma_k\big)\Big) \\&- \frac1{N+1}\delta^i_k\Big(L^\alpha_{\underline{j\alpha}}- \frac12\big(F^\alpha_j\sigma_\alpha+F\sigma_j\big)\Big), \endaligned\label{eq:F-planarThomas}\\& \aligned \widetilde{T}{}^i_{jk}&=T^i_{{jk}}-\frac12\big(F^i_j\sigma_k+F^i_k\sigma_j\big)+ \frac1{2(N+1)}\delta^i_j \big(F^\alpha_k\sigma_\alpha+F\sigma_k\big) \\&+ \frac1{2(N+1)}\delta^i_k\big(F^\alpha_j\sigma_\alpha+F\sigma_j\big), \endaligned\tag{\ref{eq:F-planarThomas}'}\label{eq:F-planarThomas'} \end{align} \noindent for the generalized Thomas projective parameter $T^i_{jk}$ from the equation (\ref{eq:Thomasgeodesic}). Based on the equation (\ref{eq:F-planarpsisigmaF-1}), we obtain \begin{equation*} \overline F^i_j\overline F^m_n\overline\sigma_p\overline\sigma_q= F^i_jF^m_n\sigma_p\sigma_q, \end{equation*} \noindent i.e. the geometrical objects (\ref{eq:zeta(s)}, \ref{eq:D(s2)(s3)}) reduce to \begin{align} &\aligned\widetilde{\mathcal D}{}^i_{jmn}=-\frac12\big(F^i_j\sigma_m+F^i_m\sigma_j\big)_{|n}, \endaligned\label{eq:F-planarD(s2)(s3)}\\ &\aligned \zeta_{ij}&=\frac1{N+1}L^\alpha_{\underline{i\alpha}|j}+\frac1{(N+1)^2} L^\alpha_{\underline{i\alpha}}L^\beta_{\underline{j\beta}}+ \frac1{2(N+1)}L^\beta_{\underline{\alpha\beta}} \big(F^\alpha_i\sigma_j+F^\alpha_j\sigma_i\big)\\&+ \frac1{2(N+1)}\Big(\big(F\sigma_i+F^\alpha_i\sigma_\alpha\big)_{|j}+ L^\alpha_{\underline{i\alpha}}\big(F\sigma_j+F^\beta_j\sigma_\beta\big)+ L^\alpha_{\underline{j\alpha}}\big(F\sigma_i+F^\beta_i\sigma_\beta\big)\Big). \endaligned\label{eq:F-planarsigma} \end{align} Therefore, the invariant (\ref{eq:basicWeylGeneral}) of the mapping $f$ is \begin{equation} \aligned \widetilde{\mathcal W}{}^i_{jmn}&=R^i_{jmn}+\frac1{N+1}\delta^i_jR_{[mn]}-\frac12\big(F^i_j\sigma_m+F^i_m\sigma_j\big)_{|n}+ \frac12\big(F^i_j\sigma_n+F^i_n\sigma_j\big)_{|m}-\delta^i_{[m}\zeta_{jn]}, \endaligned\label{eq:F-planarWbasic} \end{equation} \noindent for $\zeta_{ij}$ given in the equation (\ref{eq:F-planarsigma}).
The invariant (\ref{eq:Weylgensimf}) of the $F$-planar mapping $f$ is \begin{equation} \widetilde W^i_{jmn}=W^i_{jmn}-\frac12\big(F^i_j\sigma_m+F^i_m\sigma_j\big)_{|n}+ \frac12\big(F^i_j\sigma_n+F^i_n\sigma_j\big)_{|m}, \label{eq:F-planarWderived} \end{equation} \noindent for the Weyl projective tensor $W^i_{jmn}$.\\[3pt] \begin{exm} In this example, we aim to obtain the invariants \emph{(\ref{eq:F-planarThomas}, \ref{eq:F-planarWbasic}, \ref{eq:F-planarWderived})} of an $F$-planar mapping $f:\mathbb R_3\to\mathbb{\overline R}_3$ for the Riemannian space $\mathbb R_3$ determined with the metric tensor \begin{equation} g_{\underline{ij}}=\left[\begin{array}{ccc} u^2&0&0\\ 0&v^2&0\\ 0&0&w^2 \end{array}\right]. \end{equation} The corresponding affinor $F^i_j$ and covariant vector $\sigma_j$ are \begin{eqnarray} F^i_j=\left[\begin{array}{ccc} \sin u&0&0\\ 0&\cos v&0\\ 0&0&w \end{array}\right]&\mbox{and}& \sigma_j=\left[\begin{array}{c} 0\\ 0\\ \ln(1+u^2+v^2+w^2) \end{array}\right]. \end{eqnarray} Let $\mathcal F^i_{jk}=F^i_k\sigma_j+F^i_j\sigma_k$. Then \begin{equation} \mathcal F^i_{jk}=\left\{\begin{array}{ll} \sin u\ln(1+u^2+v^2+w^2),&i=j=1,k=3\mbox{ or }i=k=1,j=3,\\ \cos v\ln(1+u^2+v^2+w^2),&i=j=2,k=3\mbox{ or }i=k=2,j=3,\\ 2w\ln(1+u^2+v^2+w^2),&i=j=k=3,\\ 0,&\mbox{otherwise}. \end{array}\right. \label{eq:F-planarFijk} \end{equation} It also holds \begin{equation} \mathcal F^\alpha_{j\alpha}=\big(\sin u+\cos v+w\big)\sigma_j+F^{(j)}_j\sigma_{(j)}. \label{eq:F-planarFj} \end{equation} The Christoffel symbols of the second kind of this space are \begin{equation} \begin{array}{ccc} \Gamma^1_{\underline{11}}=\dfrac1u,&\Gamma^1_{\underline{22}}=\dfrac v{u^2},&\Gamma^1_{\underline{33}}=\dfrac w{u^2},\\\\ \Gamma^2_{\underline{11}}=\dfrac u{v^2},& \Gamma^2_{\underline{22}}=\dfrac1v,& \Gamma^2_{\underline{33}}=\dfrac w{v^2},\\\\ \Gamma^3_{\underline{11}}=\dfrac u{w^2},& \Gamma^3_{\underline{22}}=\dfrac v{w^2},& \Gamma^3_{\underline{33}}=\dfrac 1w, \end{array} \end{equation} \noindent and $\Gamma^i_{\underline{jk}}=0$ in all other cases. The generalized Thomas projective parameter of the space $\mathbb R_3$ is \begin{equation} T^i_{jk}=\left\{\begin{array}{ll} \Gamma^i_{\underline{jk}},&i\not\in\{j,k\},\\ -\frac1{N+1}\Gamma^{(k)}_{\underline{k(k)}},&i=j\neq k,\\ \frac{N-1}{N+1}\big(\frac1u\delta^i_1+\frac1v\delta^i_2+\frac1w\delta^i_3\big),&i=j=k. \end{array}\right. \end{equation} Hence, the derived invariant of Thomas type of the mapping $f$ is \begin{equation} \widetilde{\mathcal T}{}^i_{jk}=T^i_{jk}-\frac12\mathcal F^i_{jk}+\frac1{8}\delta^i_j\mathcal F^\alpha_{k\alpha}+ \frac18\delta^i_k\mathcal F^\alpha_{j\alpha}, \end{equation} \noindent for $\mathcal F^i_{jk},\mathcal F^\alpha_{j\alpha}$ given by the equations \emph{(\ref{eq:F-planarFijk}, \ref{eq:F-planarFj})}. We have the following cases for the curvature tensor $R^i_{jmn}$: \begin{equation} \aligned 1.&\quad m=n\Rightarrow R^i_{jmn}=0\Rightarrow R_{jm}=0,\\ 2.&\quad j=m\neq n\Rightarrow R^i_{jmn}= \Gamma^i_{\underline{jm},n}+\Gamma^{(n)}_{\underline{jm}}\Gamma^i_{\underline{(n)n}} \Rightarrow R_{jm}={\Gamma^{\alpha}_{\underline{jm},\alpha}}+ \Gamma^{\alpha}_{\underline{jm}}\Gamma^{(\alpha)}_{\underline{\alpha(\alpha)}},\\ 3.&\quad j=n\neq m\Rightarrow R^i_{jmn}=-\Gamma^i_{\underline{jn},m}-\Gamma^{\alpha}_{\underline{jn}} \Gamma^i_{\underline{\alpha m}}\Rightarrow R_{jm}=- \Gamma^{(m)}_{\underline{j(j)}}\Gamma^{(j)}_{\underline{(m)m}},\\ 4.&\quad j\neq m,j\neq n\Rightarrow R^i_{jmn}=0\Rightarrow R_{jm}=0.
\endaligned \end{equation} It also holds that \begin{equation} \aligned \zeta_{ij}&=\frac1{N+1}\Gamma^{(i)}_{\underline{i(i)}|j}+ \frac1{(N+1)^2}\Gamma^{(i)}_{\underline{i(i)}} \Gamma^{(j)}_{\underline{j(j)}}\\&+\frac1{2(N+1)} \Big(\Gamma^{(\alpha)}_{\underline{\alpha(\alpha)}} \mathcal F^\alpha_{ij}+ \mathcal F^\alpha_{i\alpha|j}+ \Gamma^{(i)}_{\underline{i(i)}}\mathcal F^\alpha_{j\alpha}+ \Gamma^{(j)}_{\underline{j(j)}} \mathcal F^\alpha_{i\alpha}\Big), \endaligned\label{eq:F-planarexamplezeta} \end{equation} \noindent for $\mathcal F^i_{jk},\mathcal F^\alpha_{j\alpha}$ given in the equations \emph{(\ref{eq:F-planarFijk}, \ref{eq:F-planarFj})}. Hence, the corresponding invariants of Weyl type of the mapping $f$ are \begin{align} &\aligned \widetilde{\mathcal W}{}^i_{jmn}=R^i_{jmn}-\delta^i_{[m}\zeta_{jn]}-\frac12 \mathcal F^i_{jm|n}+\frac12\mathcal F^i_{jn|m}, \endaligned\\ &\aligned \widetilde{ W}{}^i_{jmn}&=R^i_{jmn}+\frac1{N-1}\big(\delta^i_mR_{jn}-\delta^i_nR_{jm}\big) -\frac12\mathcal F^i_{jm|n}+ \frac12\mathcal F^i_{jn|m}. \endaligned \end{align} \end{exm}
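
The components of this example can also be evaluated symbolically. The following SymPy sketch encodes the affinor $F^i_j$, the $1$-form $\sigma_j$, and the Christoffel symbols listed above, and composes the derived invariant of Thomas type; it assumes the standard expression $T^i_{jk}=\Gamma^i_{\underline{jk}}-\frac1{N+1}\big(\delta^i_j\Gamma^\alpha_{\underline{k\alpha}}+\delta^i_k\Gamma^\alpha_{\underline{j\alpha}}\big)$ for the generalized Thomas projective parameter, and the variable and function names are illustrative choices rather than part of the original text.
\begin{verbatim}
import sympy as sp

u, v, w = sp.symbols('u v w', positive=True)
N = 3

# Affinor F^i_j and 1-form sigma_j of the example
F = sp.diag(sp.sin(u), sp.cos(v), w)
sigma = [sp.Integer(0), sp.Integer(0), sp.log(1 + u**2 + v**2 + w**2)]

# Nonzero Christoffel symbols of the second kind, as listed above:
# Gamma[i][j][k] stands for Gamma^i_{jk}
Gamma = [[[sp.Integer(0)] * 3 for _ in range(3)] for _ in range(3)]
Gamma[0][0][0], Gamma[0][1][1], Gamma[0][2][2] = 1/u, v/u**2, w/u**2
Gamma[1][0][0], Gamma[1][1][1], Gamma[1][2][2] = u/v**2, 1/v, w/v**2
Gamma[2][0][0], Gamma[2][1][1], Gamma[2][2][2] = u/w**2, v/w**2, 1/w

# F^i_{jk} = F^i_k sigma_j + F^i_j sigma_k and its contraction F^alpha_{j alpha}
def calF(i, j, k):
    return F[i, k] * sigma[j] + F[i, j] * sigma[k]

def calF_tr(j):
    return sum(calF(a, j, a) for a in range(3))

# Generalized Thomas projective parameter (assumed standard form)
def thomas(i, j, k):
    tr = lambda m: sum(Gamma[a][m][a] for a in range(3))
    return Gamma[i][j][k] - sp.Rational(1, N + 1) * ((i == j) * tr(k) + (i == k) * tr(j))

# Derived invariant of Thomas type of the F-planar mapping f
# (N = 3, so 1/(2(N+1)) = 1/8)
def tilde_thomas(i, j, k):
    return sp.simplify(thomas(i, j, k) - sp.Rational(1, 2) * calF(i, j, k)
                       + sp.Rational(1, 8) * ((i == j) * calF_tr(k) + (i == k) * calF_tr(j)))
\end{verbatim}
Evaluating, e.g., \texttt{tilde\_thomas(0, 0, 2)} returns the $(i,j,k)=(1,1,3)$ component of the derived Thomas-type invariant in terms of $u$, $v$, $w$.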
\section{Introduction} State-of-the-art artificial neural networks (ANNs) achieve impressive results in a variety of machine intelligence tasks \citep{sejnowski2020unreasonable}. However, they largely rely on mechanisms that diverge from the original inspiration from biological neural networks \citep{bengio2015towards, illing2019biologically}. As a result, only a small part of this prolific field also contributes to computational neuroscience. In fact, this biological implausibility is also an important issue for machine intelligence. For their impressive performance, ANNs trade off other desired properties, which are present in biological systems. For example, ANN training often demands very large and labelled datasets. When labels are unavailable, self-supervised learning schemes exist, where supervisory error signals generated by the network itself are exploited and backpropagated from the output towards the input to update the network's parameters \citep{goodfellow2014generative, devlin2018bert, chen2020simple}. However, this global propagation of signals in deep networks introduces another limitation. Namely, it prevents the implementation of efficient distributed computing hardware that would be based on only local signals from neighbouring physical nodes in the network, and is in contrast to the local synaptic plasticity rules that are believed to govern biological learning. Several pieces of work have been addressing parts of the biological implausibility and drawbacks of backpropagation in ANNs \citep{bengio2015towards, lillicrap2016random, guerguiev2017towards, pfeiffer2018deep, illing2019biologically, pogodin2020kernelized, millidge2020predictive, pogodin2021towards}. Recently, an approximation to backpropagation that is mostly Hebbian, i.e. relies on mostly pre- and post-synaptic activity of each synapse, has been achieved by reducing the global error requirements to 1-bit information \citep{pogodin2020kernelized}. Two schemes that further localize the signal that is required for a weight update are Equilibrium Propagation \citep{scellier2017equilibrium} and Predictive Coding \citep{millidge2020predictive}. Both methods approximate backpropagation through Hebbian-like learning, by delegating the global aspect of the computation, from a global error signal, to a global convergence of the network state to an equilibrium. This equilibrium is reached through several iterative steps of feed-forward and feed-back communication throughout the network, before the ultimate weight update by one training example. The biological plausibility and hardware-efficiency of this added iterative and heavily feedback-dependent process are open questions that begin to be addressed \citep{ernoult2020equilibrium}. Moreover, learning through backpropagation, and presumably also its approximations, has another indication of biological implausibility, which also significantly limits ANN applicability. Namely, it produces networks that are confused by small adversarial perturbations of the input, which are imperceptible by humans. It has recently been proposed that a defence strategy of "deflection" of adversarial attacks may be the ultimate solution to that problem \citep{qin2020deflecting}. Through this strategy, to cause confusion in the network's inferred class, the adversary is forced to generate such a changed input that really belongs to the distribution of a different input class. 
Intuitively, but also strictly by definition, this deflection is achieved if a human assigns to the perturbed input the same label that the network does. Deflection of adversarial attacks in ANNs has been demonstrated by an elaborate scheme that is based on detecting the attacks \citep{qin2020deflecting}. However, the human ability to deflect adversarial perturbations likely does not rely on detecting them, but rather on effectively ignoring them, making the deflecting type of robustness an emergent property of biological computation rather than a defence mechanism. The biological principles that underlie this property of robustness are unclear, but it might emerge from the distinct algorithms that govern learning in the brain. Therefore, what is missing is a biologically plausible model that can learn from fewer data-points, without labels, through local plasticity, and without feedback from distant layers. This model could then be tested for emergent adversarial robustness. A good candidate category of biological networks and learning algorithms is that of competitive learning. Neurons that compete for their activation through lateral inhibition are a common connectivity pattern in the superficial layers of the cerebral cortex \citep{douglas2004neuronal, binzegger2004quantitative}. This pattern is described as winner-take-all (WTA), because competition suppresses activity of weakly activated neurons, and emphasizes strong ones. Combined with Hebbian-like plasticity rules, WTA connectivity gives rise to competitive-learning algorithms. These networks and learning schemes have been long studied \citep{von1973self} and a large literature based on simulations and analyses describes their functional properties. A WTA neuronal layer, depending on its specifics, can restore missing input signals \citep{rutishauser2011collective, diehl2016learning}, perform decision making, i.e. winner selection \citep{hahnloser1999feedback, maass2000computational, rutishauser2011collective}, and generate oscillations such as those that underlie brain rhythms \citep{cannon2014neurosystems}. Perhaps more importantly, its neurons can learn to become selective to different input patterns, such as orientation of visual bars in models of the primary visual cortex \citep{von1973self}, MNIST handwritten digits \citep{nessler2013PLoS, diehl2015unsupervised, krotov2019unsupervised}, CIFAR10 objects \citep{krotov2019unsupervised}, spatiotemporal spiking patterns \citep{nessler2013PLoS}, and can adapt dynamically to model changing objects \citep{moraitis2020shortterm}. The WTA model is indeed biologically plausible, Hebbian plasticity is local, and learning is input-driven, relying on only feed-forward communication of neurons -- properties that seem to address several of the limitations of ANNs. However, the model's applicability is limited to simple tasks, because the theoretical literature related to Hebbian WTA remains surprisingly limited, despite its long history and the strong and productive community interest \citep{sanger1989optimal, foldiak1989adaptive, foldiak1990forming, linsker1992local, olshausen1996emergence, bell1995information, olshausen1997sparse, lee1999independent, nessler2013PLoS, pehlevan2014hebbian, hu2014hebbian, PehlevanNIPS2015, pehlevan2017clustering, isomura2018error}.
It remains unclear which specific plasticity rule and structure could optimize a non-spiking WTA for Bayesian inference, how to minimize a common loss function such as cross-entropy despite unsupervised learning, and how a cortical or artificial WTA could represent varying families of probability distributions. In summary, on the theoretical side, an algorithm that is simultaneously normative, based on WTA networks and Hebbian unsupervised plasticity, performs Bayesian inference, and, importantly, is composed of conventional, i.e. non-spiking, ANN elements and is rigorously linked to modern ANN tools such as cross-entropy loss, would be an important advance but has been missing. On the practical side, evidence that Hebbian WTA networks could be useful for presently pertinent issues of modern ANNs such as adversarial robustness, generation of synthetic images, or faster learning, has remained limited. Here we aim to fill these gaps. Recently, when WTA networks were studied in a theoretical framework compatible with conventional machine learning (ML), but in the context of short-term as opposed to long-term Hebbian plasticity, it resulted in surprising practical advantages over supervised ANNs \citep{moraitis2020shortterm}. A similar theoretical approach could also reveal unknown advantages of long-term Hebbian plasticity in WTA networks. In addition, it could provide insights into how a WTA microcircuit could participate in larger-scale computation by deeper cortical or artificial networks. Here we construct "SoftHebb", a biologically plausible WTA model that is based on standard rate-based neurons as in ANNs, can accommodate various activation functions, and learns without labels, using local plasticity and only feed-forward communication, i.e. the properties we seek in an ANN. Importantly, it is equipped with a simple normalization of the layer's activations, and an optional temperature-scaling mechanism \citep{hinton2015distilling}, producing a soft WTA instead of selecting a single "hard" winner neuron. This allows us to prove formally that a SoftHebb layer is a generative mixture model that objectively minimizes its Kullback-Leibler (KL) divergence from the input distribution through Bayesian inference, thus providing a new formal ML-theoretic perspective of these networks. We complement our main results, which are theoretical, with experiments that are small-scale but produce intriguing results. As a generative model, SoftHebb has a broader scope than classification, but we test it in simulations on the tasks of recognizing MNIST handwritten digits and Fashion-MNIST fashion products. First, we confirm that SoftHebb is more accurate than a hard-WTA model. Second, we validate that it minimizes a loss function (cross-entropy) even though it has no access to it or to labels during learning. In addition, likely owing to its Bayesian and generative properties, the unsupervised WTA model outperforms a supervised two-layer perceptron in several aspects: learning speed and accuracy in the first presentation of the training dataset, robustness to noisy data, and increased robustness to one of the strongest white-box adversarial attacks, i.e. projected gradient descent (PGD) \citep{madry2017towards}, and without any explicit defence. Interestingly, the SoftHebb model also exhibits inherent properties of deflection \citep{qin2020deflecting} of the adversarial attacks, and generates object interpolations. 
\section{Theory} \begin{defn}[\bfseries The input assumptions] \label{def:data} Each observation $_j\boldsymbol{x} \in \mathbb{R}^n$ is generated by a hidden "cause" $_jC$ from a finite set of $K$ possible such causes: $_jC \in \{C_k,\, \forall k \leq K\in \mathbb{N}\}.$ Therefore, the data is generated by a mixture of the probability distributions attributed to each of the $K$ classes $C_k$: \begin{equation} p(\boldsymbol{x})=\sum_{k=1}^{K}p(\boldsymbol{x}|C_k)P(C_k). \label{eq:pstar} \end{equation} In addition, the dimensions $x_i$ of $\boldsymbol{x}$ are conditionally independent of each other given the cause, i.e. \\$p(\boldsymbol{x}|C_k)=\prod_{i=1}^{n}p(x_i|C_k). \label{eq:independence}$ The number $K$ of the true causes or classes of the data is assumed to be known. \stepcounter{subsection} \end{defn} The term "cause" is used here in the sense of causal inference. It is important to emphasize that the true cause of each input is hidden, i.e. not known. In the case of a labelled dataset, labels may correspond to causes, and the labels are deleted before presenting the training data to the model. We choose a mixture model that corresponds to the data assumptions but is also interpretable in neural terms (Paragraph \ref{sec:neuro_exp}): \begin{defn}[\bfseries The generative probabilistic mixture model] \label{def:model} We consider a mixture model distribution $q$: $q(\boldsymbol{x})=\sum_{k=1}^{K}q(\boldsymbol{x}|C_k)\,Q(C_k),$ approximating the data distribution $p$. Specifically, we choose a mixture of exponentials and parametrize the prior $Q(C_k;w_{0k})$ also as an exponential: \begin{align} q(x_i|C_k;w_{ik})&=e^{w_{ik}\cdot \frac{x_i}{||\boldsymbol{x}||}},\, \forall k \label{eq:g_param}\\ Q(C_k;w_{0k})&=e^{w_{0k}},\,\forall k.\label{eq:g0_param} \end{align} In addition, the parameter vectors are subject to the normalization constraints: $||\boldsymbol{w}_k||=1,\, \forall k$, and $ \sum_{k=1}^{K}e^{w_{0k}}=1. \label{eq:norm_0}$ \stepcounter{subsection} \end{defn} The model we have chosen is a reasonable choice because it factorizes similarly to the data of Definition \ref{def:data}: \begin{equation}q_k\coloneqq q(\boldsymbol{x}|C_k; \boldsymbol{w}_k)=\prod_{i=1}^{n}q(x_i|C_k;w_{ik})=e^{\sum_{i=1}^{n}w_{ik}\frac{x_i}{||\boldsymbol{x}||}}=e^{u_k}, \label{eq:multinomial} \end{equation} where $u_k=\frac{ \boldsymbol{w}_k\cdot \boldsymbol{x}}{|| \boldsymbol{w}_k||\cdot||\boldsymbol{x}||}$, i.e. the cosine similarity of the two vectors. A similar probabilistic model was used in related previous theoretical work \citep{nessler2009stdp, nessler2013PLoS, moraitis2020shortterm}, but for different data assumptions, and with certain further constraints on the model. Namely, \citep{nessler2009stdp, nessler2013PLoS} considered data that was binary, and created by a population code, while the model was stochastic. These works provide the foundation of our derivation, but here we consider the more generic scenario where data are continuous-valued and input directly into the model, which is deterministic and, as we will show, more compatible with standard ANNs. In \citep{moraitis2020shortterm}, data had particular short-term temporal dependencies, whereas here we consider the distinct case of independent and identically distributed (i.i.d.) input samples. The Bayes-optimal parameters of a model mixture of exponentials can be found analytically as functions of the input distribution's parameters, and the model is equivalent to a soft winner-take-all neural network \citep{moraitis2020shortterm}.
After describing this, we will prove here that Hebbian plasticity of synapses combined with local plasticity of the neuronal biases sets the parameters to their optimal values. \begin{thm}[\bfseries The optimal parameters of the model] \label{thm:optimal} The parameters that minimize the KL divergence of such a mixture model from the data are, for every $k$, \begin{gather} \prescript{}{opt}{}w_{0k}=\ln P(C_k) \label{eq:G0}\\ \text{and } \prescript{}{opt}{}\boldsymbol{w}^*_k=\frac{ \prescript{}{opt}{}\boldsymbol{w}_k}{|| \prescript{}{opt}{}\boldsymbol{w}_k||}=\frac{\mu_{p_k}\left(\boldsymbol{x}\right)}{||\mu_{p_k}\left(\boldsymbol{x}\right)||}, \end{gather} where $\prescript{}{opt}{}\boldsymbol{w}_k=c\cdot \mu_{p_k}\left(\boldsymbol{x}\right), c\in\mathbb{R}^+,\, \mu_{p_k}\left(\boldsymbol{x}\right)$ is the mean of the distribution $p_k$, and $p_k\coloneqq p(\boldsymbol{x}|C_k)$. \stepcounter{subsection} \end{thm} In other words, the optimal parameter vector of each component $k$ in this mixture is proportional to the mean of the corresponding component of the input distribution, i.e. it is a centroid of the component. In addition, the optimal parameter of the model's prior $Q(C_k)$ is the logarithm of the corresponding component's prior probability. The Theorem's proof was provided in the supplementary material of \citep{moraitis2020shortterm}, but for completeness we also provide it in our Appendix. These centroids and priors of the input's component distributions, as well as the method of their estimation, however, are different for different input assumptions, and we will derive a learning rule that provably sets the parameters to their Maximum Likelihood Estimate for the inputs addressed here. The learning rule is a Hebbian type of synaptic plasticity combined with a plasticity for neuronal biases. Before providing the rule and the related proof, we will describe how our mixture model is equivalent to a WTA neural network. \subsection{Equivalence of the probabilistic model to a WTA neural network} \stepcounter{thm} \label{sec:neuro_exp} The cosine similarity between the input vector and each centroid's parameters underpins the model (Eq. \ref{eq:multinomial}). This similarity is precisely computed by a linear neuron that receives normalized inputs $\boldsymbol{x}^*\coloneqq\frac{\boldsymbol{x}}{||\boldsymbol{x}||}$, and that normalizes its vector of synaptic weights: $\boldsymbol{w}^*_k\coloneqq\frac{\boldsymbol{w}_k}{||\boldsymbol{w}_k||}$. Specifically, the neuron's summed weighted input $u_k= \boldsymbol{w}^*_k\cdot\boldsymbol{x}^*$ then determines the cosine similarity of an input sample to the weight vector, thus computing the likelihood function of each component of the input mixture (Eq. \ref{eq:g_param}). The bias term of each neuron can store the parameter $w_{0k}$ of the prior $Q(C_k; w_{0k})$. Based on these, it can also be shown that a set of $K$ such neurons can actually compute the Bayesian posterior, if the neurons are connected in a configuration that implements softmax. Softmax has a biologically-plausible implementation through lateral inhibition (divisive normalization) between neurons \citep{nessler2009stdp, nessler2013PLoS, moraitis2020shortterm}. Specifically, based on the model of Definition \ref{def:model}, the posterior probability is \begin{equation} Q(C_k|\boldsymbol{x})=\frac{e^{u_k+w_{0k}}}{\sum_{l=1}^{K}e^{u_l+w_{0l}}}. \label{eq:Q_model} \end{equation} But in the neural description, $u_k+w_{0k}$ is the activation of the $k$-th linear neuron. That is, Eq.
\ref{eq:Q_model} shows that the result of Bayesian inference of the hidden cause from the input $Q(C_k|\boldsymbol{x})$ is found by a softmax operation on the linear neural activations. In this equivalence, we will be using $y_k\coloneqq Q(C_k|\boldsymbol{x};\boldsymbol{w})$ to symbolize the softmax output of the $k$-th neuron, i.e. the output after the WTA operation, interchangeably with $Q(C_k|\boldsymbol{x})$. It can be seen in Eq. \ref{eq:Q_model} that the probabilistic model has an alternative, but equivalent, neural interpretation. Specifically, $Q(C_k|\boldsymbol{x})$ can be described as the output of a neuron with exponential activation function (numerator in Eq. \ref{eq:Q_model}) that is normalized by its layer's total output (denominator). This is equally accurate, and more directly analogous to the biological description \citep{nessler2009stdp, nessler2013PLoS, moraitis2020shortterm}. This shows that the exponential activation of each individual neuron $k$ directly equals the $k$-th exponential component distribution of the generative mixture model (Eq. \ref{eq:multinomial}). Therefore the softmax-configured linear neurons, or the equivalent normalized exponential neurons, fully implement the generative model of Definition \ref{def:model}, and also infer the Bayesian posterior probability given an input and the model parameters. However, the problem of calculating the model's parameters from data samples is a difficult one, if the input distribution's parameters are unknown. In the next sections we will show that this neural network can find these optimal parameters through Bayesian inference, in an unsupervised and on-line manner, based on only local Hebbian plasticity. \subsection{A Hebbian rule that optimizes the weights} Several Hebbian-like rules exist and have been combined with WTA networks. For example, in the case of stochastic binary neurons and binary population-coded inputs, it has been shown that weight updates with an exponential weight-dependence find the optimal weights \citep{nessler2009stdp, nessler2013PLoS}. Oja's rule is another candidate \citep{oja1982simplified}. An individual linear neuron equipped with this learning rule finds the first principal component of the input data \citep{oja1982simplified}. A variation of Oja's rule combined with hard-WTA networks and additional mechanisms has achieved good experimental performance on classification tasks \citep{krotov2019unsupervised}, but lacks the theoretical underpinning that we aim for. Here we propose a Hebbian-like rule that, as we will show, optimizes the soft WTA's generative model. The rule is similar to Oja's rule, but considers, for each neuron $k$, both its linear weighted summation of the inputs $u_k$, and its nonlinear WTA output $y_k$: \begin{equation} \label{eq:synplast} \Delta w_{ik}^{(SoftHebb)}\coloneqq\eta \cdot y_k \cdot \left(x_i-u_kw_{ik}\right), \end{equation} where $w_{ik}$ is the synaptic weight from the $i$-th input to the $k$-th neuron, and $\eta$ is the learning rate hyperparameter.
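
As a concrete illustration, the following is a minimal NumPy sketch of one inference and plasticity step of a SoftHebb layer, i.e. of Eqs. \ref{eq:Q_model} and \ref{eq:synplast}; the function name, the array shapes, and the use of the normalized input $\boldsymbol{x}^*$ inside the update are illustrative assumptions rather than a reference implementation.
\begin{verbatim}
import numpy as np

def softhebb_step(x, W, w0, eta=0.03):
    """One inference + plasticity step of a SoftHebb layer (illustrative sketch).

    x  : (n,)   raw input sample
    W  : (n, K) synaptic weights, one column per neuron (kept near unit norm by the rule)
    w0 : (K,)   neuronal biases, storing the log-priors w_{0k}
    """
    x_star = x / np.linalg.norm(x)     # normalized input x* = x / ||x||
    u = W.T @ x_star                   # u_k = w_k . x*, cosine similarity if ||w_k|| ~ 1

    a = u + w0 - (u + w0).max()        # softmax over u_k + w_{0k}: posterior Q(C_k | x)
    y = np.exp(a) / np.exp(a).sum()

    # SoftHebb plasticity: dw_ik = eta * y_k * (x_i - u_k * w_ik), using x = x* here
    W_new = W + eta * y[None, :] * (x_star[:, None] - u[None, :] * W)
    return y, W_new
\end{verbatim}
In an on-line training loop this step is simply applied to each incoming sample; the bias plasticity introduced in the next subsection can be added to the same loop.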
By solving the equation $E[\Delta w_{ik}]=0$ where $E[\cdot]$ is the expected value over the input distribution, we can show that, with this rule, there exists a stable equilibrium value of the weights, and this equilibrium value is an optimal value according to Theorem \ref{thm:optimal}: \begin{thm} \label{thm:equilib} The equilibrium weights of the SoftHebb synaptic plasticity rule are \begin{equation}w_{ik}^{SoftHebb}=c \cdot\mu_{p_k}(x_i)=\prescript{}{opt}w_{ik}, \text{ where } c=\frac{1}{||\mu_{p_k}(\boldsymbol{x})||}. \end{equation} \end{thm} The proof is provided in the Appendix. Therefore, our update rule (Eq. \ref{eq:synplast}) optimizes the weights of the neural network. \subsection{Local learning of neuronal biases as Bayesian priors} For the complete optimization of the model, the neuronal biases $w_{0k}$ must also be optimized to satisfy Eq. \ref{eq:G0}, i.e. to optimize the Bayesian prior belief for the probability distribution over the $K$ input causes. We define the following rate-based rule inspired by the spike-based bias rule of \citep{nessler2013PLoS}: \begin{gather} \Delta w_{0k}^{SoftHebb}=\eta e^{-w_{0k}}\left(y_k - e^{w_{0k}} \right). \end{gather} With the same technique we used for Theorem \ref{thm:equilib}, we also provide proof in the Appendix that the equilibrium of the bias with this rule matches the optimal value $\prescript{}{opt}{}w_{0k}=\ln P(C_k)$ of Theorem \ref{thm:optimal}: \begin{thm} \label{thm:nesslerbias} The equilibrium biases of the SoftHebb bias learning rule are \begin{gather} w_{0k}^{SoftHebb}=\ln P(C_k)=\prescript{}{opt}{}w_{0k}.\end{gather} \end{thm} \subsection{Alternate activation functions} \label{sec:activation_fn} The model of Definition \ref{def:model} uses for each component $q(\boldsymbol{x}|C_k)$ an exponential probability distribution with a base of Euler's number $e$, equivalent to a model using similarly exponential neurons (Subsection \ref{sec:neuro_exp}). Depending on the task, different probability distribution shapes, i.e. different neuronal activation functions, may be better models. This is compatible with our theory (see Appendix B). Firstly, the base of the exponential activation function can be chosen differently, resulting in a softmax function with a different base, such that Eq. \ref{eq:Q_model} becomes more generally $ Q(C_k|\boldsymbol{x})=\frac{b^{u_k+w_{0k}}}{\sum_{l=1}^{K}b^{u_l+w_{0l}}}. \label{eq:Qb_model} $ This is equivalent to Temperature Scaling \citep{hinton2015distilling}, a mechanism that also maintains the probabilistic interpretation of the output. It can also be implemented by a normalized layer of exponential neurons, and is compatible with our theoretical derivations and the optimization by the plasticity rule of Eq. \ref{eq:synplast}. Moreover, we show in the Appendix that soft WTA models can be constructed by rectified linear units (ReLU) or in general by neurons with any non-negative monotonically increasing activation function, and their weights are also optimized by the same plasticity rule. \subsection{Cross-entropy and true causes, as opposed to labels} \label{sec:causes} It is important to note that, in labelled datasets, the labels that have been assigned by a human supervisor may not correspond exactly to the true causes that generate the data, which SoftHebb infers. For example, consider MNIST. The 10 labels indicating the 10 decimal digits do not correspond exactly to the true cause of each example image.
In reality, the cause $C$ of each MNIST example in the sense implied by causal inference is not the digit cause itself, but a combination of a single digit cause $D$, which is the MNIST label, with one of many handwriting styles $S$. That is, the probabilistic model is such that in the Eq. $P(\boldsymbol{x})=\sum_kP(\boldsymbol{x}|C_k)P(C_k)$ of Definition \ref{def:data}, the cause $C$ of each sample is dual, i.e. there exists a digit $D_d\,(d\in \left[0, 9\right])$ and a style $S_s$ such that \begin{gather} \label{eq:causes0}P(C_k)\coloneqq P(C=C_k)=P(D_d) P(S_s)\neq P(D_d),\\ \label{eq:causes} \text{and }P(D_d)=\sum_kP(C_k) P(D_d|C_k). \end{gather} This is important for our unsupervised model. To illustrate this point, a network with $K$ competing neurons trained on MNIST may learn not to specialize to $K$ digits $D$, but rather to $K$ handwriting styles $S$ of one digit $D_d$, or in general to $K$ combinations of digits with styles, i.e. to combinations which are the true causes $C$ that generate the data. In this case, this leads to a mismatch between the labels $D$ and the true causes $C$ of the data. Therefore, given the labels and not the causes, it is not obvious which number $K$ should be chosen for the number of neurons. Practically speaking, $K$ can be chosen using common heuristics from cluster analysis. It is also not obvious how to measure the loss of the WTA network during the learning process, since the ground truth for causes $C$ is missing. We will now provide the theoretical tools for achieving this loss-evaluation based on the labels. Even though SoftHebb is a generative model, it can be used for discrimination of the input classes $C_k$, using Bayes' theorem. More formally, the proof of Theorem \ref{thm:optimal} involved showing that SoftHebb minimizes the KL divergence of the model $q(\boldsymbol{x})$ from the data $p(\boldsymbol{x})$. Based on this it can be shown that the algorithm also minimizes the cross-entropy $H^{causes}_Q\coloneqq H(P(C), Q(C|\boldsymbol{x}))$ between the causes $Q(C_k|\boldsymbol{x})$ that it infers and the true causes of the data $P(C_k)$: $\boldsymbol{w}^{SoftHebb}= \arg\min_{\boldsymbol{w}} H^{causes}_Q.$ An additional consequence is that by minimizing $H^{causes}$, SoftHebb also minimizes its label-based cross-entropy $H^{labels}_Q\coloneqq H(P(D_d), Q(D_d))$ between the true labels $P(D_d)$ and the implicitly inferred labels $Q(D_d)$: \begin{gather} Q(D_d)\coloneqq\sum_{k}Q(C_k) P(D_d|C_k) \label{eq:causesQ}\\ \boldsymbol{w}^{SoftHebb}=\arg\min_{\boldsymbol{w}} H^{causes}_Q= \arg\min_{\boldsymbol{w}} H^{labels}_Q. \label{eq:argminHQ} \end{gather} This is because, in Eqs. \ref{eq:causes} and \ref{eq:causesQ}, the dependence of the labels on the true causes $P(D_d|C_k)$ is fixed by the data generation process. To obtain $Q(D_d|\boldsymbol{x})$ and measure the cross-entropy, the causal structure $P(D_d|C)$ is required but missing; it can, however, be represented by a supervised classifier $Q_2(D_d|Q(C|\boldsymbol{x}))$ of SoftHebb's outputs, trained using the labels $D_d$. Therefore, by (a) unsupervised training of SoftHebb, then (b) training a supervised classifier on top, and finally (c1) repeating the training of SoftHebb with the same initial weights and ordering of the training inputs, while (c2) measuring the trained classifier's loss, we can observe the cross-entropy loss $H^{labels}$ of SoftHebb while it is being minimized, and infer that $H^{causes}$ is also minimized (Eq. \ref{eq:argminHQ}).
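
A runnable sketch of this evaluation procedure is given below; it reuses the \texttt{softhebb\_step} sketch above, uses scikit-learn's logistic regression as a stand-in for the supervised classifier $Q_2$, and replaces MNIST by small synthetic data, so it only illustrates the procedure rather than reproducing the experimental setup.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((500, 20)) + 0.1              # synthetic positive inputs (stand-in data)
D = (X[:, 0] > 0.6).astype(int)              # stand-in labels
n, K = X.shape[1], 8
W_init, b = rng.normal(scale=0.1, size=(n, K)), np.zeros(K)

def posteriors(Xs, W, b):
    # inference only: eta = 0 leaves the weights unchanged
    return np.stack([softhebb_step(x, W, b, eta=0.0)[0] for x in Xs])

# (a) unsupervised SoftHebb training with a fixed input ordering
W = W_init.copy()
for x in X:
    _, W = softhebb_step(x, W, b)

# (b) supervised classifier trained on the frozen WTA outputs
clf = LogisticRegression(max_iter=1000).fit(posteriors(X, W, b), D)

# (c1) retrain SoftHebb from the same initial weights and ordering,
# (c2) logging the frozen classifier's cross-entropy after each sample
W, running_loss = W_init.copy(), []
for i, x in enumerate(X):
    _, W = softhebb_step(x, W, b)
    p = clf.predict_proba(posteriors(x[None, :], W, b))[0]
    running_loss.append(-np.log(p[D[i]] + 1e-12))   # post-hoc cross-entropy curve
\end{verbatim}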
We call this the post-hoc cross-entropy method, and we have used it in our experiments (Section \ref{sec:exp_cross} and Fig. \ref{fig:performance} C and D) to evaluate the learning process in a theoretically sound manner. \section{Experiments} \begin{figure} \includegraphics[width = 140mm]{Figures/performance.pdf} \caption{Performance of SoftHebb on MNIST compared to hard-WTA and backpropagation.} \label{fig:performance} \end{figure} \subsection{MNIST accuracy vs hard WTA} \label{sec:vshardWTA} We implemented the theoretical SoftHebb model in simulations and tested it in the task of learning to classify MNIST handwritten digits. The network received the MNIST frames normalized by their Euclidean norm, and the plasticity rule we derived updated its weights and biases in an unsupervised manner. We used $K=2000$ neurons. First we trained the network for 100 epochs, i.e. randomly ordered presentations of the 60000 training digits. In our validation testing we found that softmax with a base of 1000 (see Section \ref{sec:activation_fn}) performed best. The learning rate $\eta$ of Eq. \ref{eq:synplast} decreased linearly from 0.03 to 0 throughout training. Each training experiment we will describe was repeated five times with varying random initializations and input order. We will report the mean and standard deviation of accuracies. Inference of the input labels by the WTA network of 2000 neurons was performed in two different ways. The first approach is single-layer, where after training the network we assigned a label to each of the 2000 neurons, in a standard approach that is used in unsupervised clustering. Namely, for each neuron, we found the label of the training set that makes it win the WTA competition most often. In this single-layer approach, this is the only time when labels were used, and at no point were weights updated using labels. The second approach was two-layer and based on supervised training of a perceptron classifier on top of the WTA layer. The classifier layer was trained with the Adam optimizer and cross-entropy loss for 60 epochs, while the previously-trained WTA parameters were frozen. SoftHebb achieved an accuracy of $(96.18\pm0.06)\%$ and $(96.94\pm0.02)\%$ in its 1- and 2-layer form respectively. To confirm the strength of the soft WTA approach combined with training the priors through biases, which makes the network Bayesian, we also trained the weights of a network with a hard-WTA setup, i.e. where the strongest-activated neuron's output $y_k$ is 1, and the other neurons are suppressed to 0, for each input. We found that an initial learning rate of 0.05 was best for the hard-WTA network. The SoftHebb model outperformed the hard WTA (Fig. \ref{fig:performance}A). However, SoftHebb's accuracy is significantly lower than a multi-layer perceptron (MLP) with one hidden layer of also 2000 neurons that is trained end-to-end exhaustively. The MLP achieves a $(98.33\pm0.06)\%$ accuracy (not shown in the figure). This is expected, due to end-to-end training, supervision, and the MLP being a discriminative model as opposed to a generative model merely applied to a classification task, as SoftHebb is. If the Bayesian and generative aspects that follow from our theory were not required, several additional mechanisms exist to enhance the discriminative power of WTA networks \citep{krotov2019unsupervised}, and even a random projection layer instead of a trained WTA performs well \citep{illing2019biologically}. 
The generative approach, however, has its own advantages even for a discriminative task, and we will show some of these here. \subsection{Cross-entropy minimization and single-epoch advantage over backpropagation} \label{sec:exp_cross} First, we show, as a validation of the theory, that the SoftHebb model minimizes the cross-entropy $H^{causes}_Q$ between its activations and its input's causes, even though no explicit loss is provided. According to our post-hoc cross-entropy method (Section \ref{sec:causes}), as a proxy we observed the minimization of $H^{labels}_Q$ during the first epoch of on-line Hebbian learning. The loss on the training inputs as they appear (running loss), as well as on the whole testing dataset can be seen in Fig. \ref{fig:performance}C and D respectively (blue curves). The method allows us to observe the discriminative aspect of the generative model, as it is optimized. After this one epoch, the accuracy of the 1-layer form of the SoftHebb model is $(95.44\pm0.14)\%$. The 2-layer form is again obtained by training a supervised classifier output layer for 60 epochs, and its accuracy is $(96.21\pm0.15)\%$ (Fig. \ref{fig:performance}B, blue bars). We then also train for a single epoch a 2-layer MLP with a hidden-layer of 2000 neurons, with backpropagation, using stochastic gradient descent (SGD) and a cross-entropy loss. We found, through grid search, the optimal minibatch size and learning rate of the MLP (4 and 0.2 respectively). The MLP achieves an accuracy of $(95.39\pm0.45)\%$ (Fig. \ref{fig:performance}B, orange bar), if we exclude one run of the experiment which only achieved an accuracy of 86.92\%. Surprisingly, it does not surpass SoftHebb, not even in its 1-layer form. In addition, the cross-entropy of the SoftHebb model is visibly minimized faster than through SGD (orange curves of Fig. \ref{fig:performance}C \& D). It is possible that SoftHebb's advantage in terms of loss and accuracy is a side-effect of pre-training the second layer when obtaining SoftHebb's post-hoc cross-entropy, or of that layer's 60-epoch training. To test this possibility, we similarly obtained a trained version of the MLP's output layer alone, and then trained its first layer with backpropagation and the second layer frozen. Meanwhile, we recorded its loss, thus obtaining its own version of post-hoc cross-entropy (Fig. \ref{fig:performance}C \& D, yellow curve). SoftHebb still showed an advantage in terms of loss minimization speed, and its 2-layer form's accuracy is still not surpassed (Fig. \ref{fig:performance}B, blue \& yellow bars), despite the fully unsupervised and local learning in the core of the network. Moreover, the figure shows that the minimization of the loss on the general test set by SoftHebb is smoother than the running loss, while SGD's test-set loss is influenced by the specifics of the individual training examples. This may indicate stronger generalization by the SoftHebb model, emerging from its Bayesian and generative nature. If this is true, SoftHebb may be more robust to input perturbations. \begin{figure} \centering \includegraphics[width = 140mm]{Figures/attack_curves_f.pdf} \caption{Noise and adversarial attack robustness of SoftHebb and of backpropagation-trained MLP on MNIST and Fashion-MNIST. The insets show one example from the testing set and its perturbed versions, for increasing perturbations. (A) SoftHebb is highly robust to noise.
(B) The MLP's MNIST accuracy drops to $\sim$60\% under hardly perceptible perturbations ($\epsilon=16/255$), while SoftHebb requires visually noticeable perturbations ($\epsilon=64/255$) for a similar drop in performance. At that degree of perturbation, the MLP has already dropped to zero. SoftHebb deflects the attack: it forces the attacker to produce examples of truly different classes: the original digit ``4'' is perturbed to look like a ``0'' (see also Fig. \ref{fig:gan}).} \label{fig:robustness} \end{figure} \subsection{Robustness to noise and adversarial attacks - Generative adversarial properties} \label{sec:robustness} Indeed, we tested the trained SoftHebb and MLP models for robustness, and found that SoftHebb is significantly more robust than the backprop-trained MLP, both to added Gaussian noise and to PGD adversarial attacks (see Fig. \ref{fig:robustness}). PGD \citep{madry2017towards} produces perturbations in a direction that maximizes the loss of each targeted network, and of a size controlled by a parameter $\epsilon$. Strikingly, SoftHebb has a visible tendency to deflect the attacks, i.e. the examples that confuse it actually belong to a perceptually different class (Fig. \ref{fig:robustness}B and \ref{fig:gan}). This effectively nullifies the attack and was previously shown in elaborate state-of-the-art adversarial-defence models \citep{qin2020deflecting}. The pairing of the adversarial attacker with the generative SoftHebb model essentially composes a generative adversarial network (GAN), even though the term is usually reserved for pairs \textit{trained} in tandem \citep{goodfellow2014generative, creswell2018generative}. As a result, the model could inherit certain properties of GANs. It can be seen that it is able to generate interpolations between input classes (Fig. \ref{fig:gan}). The parameter $\epsilon$ of the adversarial attack can control the balance between the interpolated objects. Similar functionality has existed in the realm of GANs \citep{radford2015unsupervised}, autoencoders \citep{berthelot2018understanding}, and other deep neural networks \citep{bojanowski2017optimizing}, but was not known for simple biologically-plausible models. \subsection{Generalizability of the algorithm to other datasets: Fashion-MNIST} Finally, we trained SoftHebb on a more difficult dataset, namely Fashion-MNIST \citep{xiao2017/online}, which contains grey-scale images of clothing products. A supervised MLP of the same size achieved a test accuracy of $(88.7\pm0.34)\%$ on this task. We used the exact same SoftHebb model and hyperparameters that we used on MNIST, without any adjustment for the changed dataset. Still, the model achieved an accuracy of $(75.14\pm 0.17)\%$. In addition, with very small adversarial perturbations, the MLP drops to an accuracy lower than the SoftHebb model despite our generic training, while SoftHebb's robustness is reconfirmed (dashed lines in Fig. \ref{fig:robustness}) as are its generative interpolations (Fig. \ref{fig:gan}B). \begin{figure} \centering \includegraphics[width = 135mm]{Figures/adv_examples_softhebb.pdf} \caption{Examples generated by the adversarial pair PGD attacker/SoftHebb model. SoftHebb's inherent tendency to deflect the attack towards truly different classes is visible.
This tendency can be repurposed to generate interpolations between different classes of the data distribution, a generative property previously unknown for such simple networks.} \label{fig:gan} \end{figure} \section{Discussion} We have described SoftHebb, a biologically plausible model that is unsupervised, local, and requires no error or other feedback from upper layers. The model consists of elements fully compatible with conventional ANNs. We have shown the importance of soft competition in rate-based WTA networks, and derived formally the plasticity that optimizes the network through Bayesian computations and learns a generative model of the input distribution. We also developed a method for quantifying its unsupervised discriminative loss in a theoretically sound manner. Our experiments are small, but they confirm our optimization theory, and show that SoftHebb has significant strengths that emerge from its unsupervised, generative, and Bayesian nature. It is intriguing that properties commonly associated with biological intelligence, such as speed of learning, robustness to noise and adversarial attacks, and deflection of the attacks, emerge through biological plausibility. In particular, its ability to learn better than even supervised networks when training time is limited is interesting for resource-limited neuromorphic applications. Its robustness to noise and adversarial attacks is impressive, considering that it is intrinsic and was not instilled by specialized defences. SoftHebb tends not merely to be robust to attacks, but to actually deflect them. We also showed that these networks can generate image interpolations in the latent space. However, the model is quite limited compared to state-of-the-art ML, if classification accuracy, exhaustive training, and unperturbed inputs are the benchmark. To address this, its potential integration into multilayer networks should be explored using our new tools. This could also provide insights into the role of WTA microcircuits in larger networks in cortex. A first approach to such multilayer networks could be by stacking individually-trained layers. The remaining piece to successfully integrate SoftHebb in deep networks is to make the learned representation in each layer distributed, in spite of a WTA approach. SoftHebb is indeed more distributed than a hard WTA, and further-distributed features may be achieved by localized receptive fields \citep{pogodin2021towards} similar to area V1 of cortex \citep{hubel1962receptive}. \clearpage
\section{Introduction} The paradigm of learning universal large-scale Transformer-based models has been recently transferred to speech from natural language processing~\cite{9585401,baevski2020wav2vec,chung2021w2v}. Due to the high complexity of speech data, the preferred way of downstream task adaptation for such models is to keep pretrained weights frozen and fine-tune small task-specific heads~\cite{Yang2021SUPERBSP}, so it is important to use the model's hidden states in the most efficient way. A common approach is to use outputs of the last layer of the model, combining them via various pooling methods; but it has been shown that for some tasks, e.g. phoneme prediction in HuBERT or the analogy task in BERT, embeddings from lower and middle layers are more useful~\cite{vulic2020probing}. \emph{Topological data analysis} (TDA) is a recently proposed way to get more efficient data representations from frozen Transformer weights~\cite{kushnareva-etal-2021-artificial,judgements}. TDA features prove to be better suited for many downstream tasks, including artificial text detection and linguistic acceptability judgement. In particular, for ungrammatical sentence detection conventional sentence embeddings yield classification quality no better than random, while TDA has led to meaningful results. Inspired by these results, in this work we apply TDA to the HuBERT model in order to build more powerful speech representations. TDA has already been applied to signals of various nature; previous attempts for speech used topological statistics of the audio signal waveform: persistent entropy for noise classification~\cite{RUCCO2017130} and emotion recognition~\cite{Gonzalez-Diaz}, detection of a periodic signal in noisy data~\cite{9747228} etc. However, TDA for Transformer-based models has so far been limited to NLP, processing attention maps for artificial text detection~\cite{kushnareva-etal-2021-artificial,https://doi.org/10.48550/arxiv.2206.15195} and linguistic acceptability~\cite{judgements} and word embeddings for dialogue term extraction~\cite{vukovic-etal-2022-dialogue} and constructing story trees~\cite{haghighatkhah-etal-2022-story}. Evolution of inner representations in neural networks has been studied with persistent Betti numbers~\cite{topologyofdeep} and recently introduced representation topology divergence (RTD)~\cite{barannikov2021representation}. In this work, we apply RTD in particular to intermediate embeddings and attention maps of a speech Transformer. Another interesting area of TDA applications is the interpretation of pretrained Transformer models: it is known in NLP that different heads are sensitive to different phenomena~\cite{vulic2020probing}. With TDA techniques, we demonstrate the same effect for speech Transformers, finding heads that are best for solving specific downstream tasks, e.g. separating a given pair of emotions, a given pair of speakers, detecting speech generated by a specific TTS model, or representing spectral features of sound samples (bit rate, LPCC etc.). We also investigate the pattern structure of attention maps in HuBERT with pattern classification methods~\cite{Yang_attention} and demonstrate that our TDA-based approach is able to extract meaningful representation from heads with any pattern types. \section{Methods}\label{sec:methods} In this section, we describe our approach. We first introduce topological data analysis for weighted graphs and then define the features we propose to extract. \noindent \textbf{Graph homologies}. 
Given a set of points in a high-dimen\-si\-onal space, TDA proposes ways to restore the underlying lower-dimensional manifold and study its topological properties. If the set of points has a graph structure, the $d$-di\-men\-si\-onal manifold can be obtained in the form of a {\it simplicial complex}. Informally speaking, it is the structure obtained by replacing $(k+1)$-cliques in the graph by filled-in $k$-dimensional simplices for all $0\le k \le d$: $0$-dimensional simplices are vertices, $1$-dimensional are edges, for $k=2$ we get triangles, for $k=3$, tetrahedrons, and so on~\cite{Sunada2013}. \emph{Homology groups} are key topological features for the classification of topological spaces~\cite{MR1867354}. The dimensions of the sequence of homology groups $H_k$, $k \ge 0$, are among the main topological invariants. In the case of graph-like structures, $\dim H_0$ equals the number of connected components, $\dim H_1$ is the number of ``loops'', and $\dim H_k$ counts $k$-dimensional ``holes''. \noindent \textbf{Persistent homologies for weighted graphs and point clouds}. We would like to apply TDA to sets of vectors in ${\mathbb R}^d$ (embeddings). Such a set (point cloud) can be viewed as a complete graph $G$ with edges weighted by any distance-like measure between vectors. However, homology groups are only defined for unweighted graphs, so we make $G$ unweighted by thresholding, leaving only edges with weights lower than a given $\epsilon$; we denote the resulting graph by $G_\epsilon$. TDA can track the changes of topology across varying thresholds via \emph{persistent homology}. In this case, we build a family of graphs called a \emph{filtration}: an ordered set of graphs $G_\epsilon$ for a sequence of increasing $\epsilon$. For $\epsilon$ below the minimal distance between vertices the graph has no edges; as $\epsilon$ increases new edges are added, ending in the complete graph; during this process, gradual changes of graph topology can be expressed in terms of ``birth'' or ``death'' of basic features. We begin with $|V|$ connected components (all of them are ``born''), and as $\epsilon$ increases, pairs of them are joined together (one component ``dies''). ``Birth'' and ``death'' moments can be represented with a diagram called the \emph{barcode}~\cite{barannikov2021canonical,barannikov2021representation}, whose horizontal axis is a sequence of thresholds $\epsilon$, and each horizontal bar corresponds to a single feature. $H_0$-bars correspond to connected components; they start at the minimal distance between vertices and correspond to edges of the minimal spanning tree (MST). $H_1$-bars correspond to simple cycles with more than $3$ vertices; an $H_1$-bar starts when a cycle appears and ends when it is broken into triangles by a new edge. We denote by $H_k^m(G)$ the average length of $H_k$-bars for a weighted graph or point cloud $G$. \emph{Representation topology divergence} (RTD) measures the topological dissimilarity between two data representations, formalized as two weighted graphs $G^a$ and $G^b$ with a common set of vertices. Namely, we track how the topology of the graph $G^a_\epsilon \cap G^b_\epsilon$ changes as the filtration threshold $\epsilon$ increases. These changes are then summarized via a special kind of barcode, called the \emph{cross-barcode}, which reflects how much graphs $G^a$ and $G^b$ have in common on different scales. Our RTD feature is equal to the sum of bar lengths in this barcode.
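
As a minimal illustration of the $H_0^m$ statistic used in the features below, the following sketch computes the average $H_0$-bar length of a weighted graph from its minimum spanning tree with SciPy; the function name and the choice of \texttt{scipy} routines are our own illustrative assumptions, not part of the original pipeline.
\begin{verbatim}
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

def h0_mean_bar_length(weights):
    """Average H0-bar length for a complete weighted graph given its weight matrix.

    Connected components merge at the weights of the minimum spanning tree edges,
    so H_0^m is computed here as the mean MST edge weight.
    """
    W = np.asarray(weights, dtype=float).copy()
    np.fill_diagonal(W, 0.0)                     # ignore self-loops
    mst = minimum_spanning_tree(W).toarray()
    return mst[mst > 0].mean()

rng = np.random.default_rng(0)

# H_0^m of a point cloud (e.g. one layer's frame embeddings), with L2 distances
emb = rng.normal(size=(50, 32))
print(h0_mean_bar_length(cdist(emb, emb)))

# H_0^{m,sym}-style feature of a stand-in attention map A, with weights 1 - max(A, A^T)
A = rng.dirichlet(np.ones(50), size=50)
print(h0_mean_bar_length(1.0 - np.maximum(A, A.T)))
\end{verbatim}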
For a more general and formal definition, see~\cite{barannikov2021representation}. % \noindent \textbf{Features}. We use three groups of features: algebraic features of attention matrices, topological features of attention matrices, and topological features of embeddings; for comparison, we also pool embeddings from all layers. We count the number of features for the HuBERT-base model~\cite{9585401} that consists of $12$ layers with $12$ attention heads each and has embedding dimension $768$. \defH_0^{m,\mathrm{sym}}{H_0^{m,\mathrm{sym}}} \defH_0^{m,\mathrm{pc}}{H_0^{m,\mathrm{pc}}} \emph{Algebraic features} include the sum of the upper triangular part of the $n \times n$ attention matrix (normalized by $n^2$), which is used as a measure of asymmetry, and mean values of its $3$ longest diagonals. This yields $4$ features per attention map, $576$ for the entire HuBERT model. \emph{Topological features of attention matrices} include the $H_0^m$ feature for two graphs derived from each attention matrix $A_{\mathrm{attn}}$. $H_0^{m,\mathrm{sym}}$ is defined as $H_0^{m}$ for the graph with adjacency matrix $A^\prime = 1 - \max\left(A_{\mathrm{attn}}, A_{\mathrm{attn}}^\top\right)$, that is, the symmetrization of $A_{\mathrm{attn}}$ (cf.~\cite{kushnareva-etal-2021-artificial}). $H_0^{m,\mathrm{pc}}$ is defined as $H_0^{m}$ for the rows of $A_{\mathrm{attn}}$ considered as a point cloud with the $L_1$-distance. Here we have $2$ features per attention map, $288$ in total. \emph{Topological features of embeddings}. Considering the $i$-th layer's embeddings $X^{(i)}$ as a point cloud with the $L_2$-distance, we calculate $3$ features for each layer: $H_0^m(X^{(i)})$, RTD between $X^{(i)}$ and the last layer's embeddings $X^{(L)}$, and RTD between $X^{(i)}$ and initial embeddings $X^{(0)}$. Besides, we add the same $3$ features for mel-frequency cepstral coefficients (MFCC), i.e., baseline speech embeddings in $\mathbb{R}^{13}$. This gives us $51$ features for the entire model. \emph{Embeddings from all layers}. An alternative approach, inspired by~\cite{Devlin2019BERTPO}, is to use pooled embeddings from each layer (not only the last) of the Transformer. We explored two pooling strategies: averaging over the timescale (common for speech Transformers) and taking only the first embedding as suggested in~\cite{Devlin2019BERTPO}. This yields $9216$ features in each case. \section{Evaluation}\label{sec:results} \noindent \textbf{Datasets}. We have used four standard datasets for speech and emotion recognition. \textbf{IEMOCAP}, introduced in~\cite{Busso2008IEMOCAPIE}, contains $\approx 12$ hours of audiovisual data, including video, speech, motion capture of faces, and text transcriptions; we use only speech samples from the ``Anger'', ``Sadness'', ``Happiness'', and ``Neutral'' classes ($4490$ samples), with 5-fold cross-validation, similar to~\cite{8639633,9747460}. \textbf{CREMA-D}~\cite{6849440} has $7442$ clips from $91$ actors who spoke $12$ different sentences in one of six basic emotions (anger, disgust, fear, happy, neutral, and sad); we perform multi-class classification with $6$ classes and evaluate by averaging five splits with 70/15/15\% train/development/evaluation subsets. 
\textbf{ASVSpoof}~\cite{9358099} was presented for the 3rd Automatic Speaker Verification Spoofing and Countermeasures Challenge; we used the standard split with $25380$ training, $24844$ development, and $71237$ evaluation samples and performed classification into generated and bonafide labels, a standard task on ASVSpoof~\cite{Yu2018SpoofingDI}. \textbf{VoxCeleb1}~\cite{nagrani17_interspeech} contains over $100$K utterances by $1251$ celebrities extracted from \emph{YouTube} videos; we performed binary classification between pairs of utterances from the same or different speakers, with $40000$ pairs in the training set, $8000$ in development, and $37720$ in the test set; in this dataset, we clip each utterance to the first $5$ seconds for all methods except embeddings from layers. \noindent \textbf{Models}. We utilize the \emph{HuBERT Base} model~\cite{9585401} pretrained for automatic speech recognition on 960 hours of audio from the \emph{Librespeech} corpus~\cite{7178964}. We use HuBERT as a pretrained frozen instance, without fine-tuning or any other adjustment of the weights. We use a linear layer trained over the pooled output of the Transformer as the baseline, which is consistent with the SUPERB leaderboard~\cite{Yang2021SUPERBSP}; we used the results for IEMOCAP and VoxCeleb1 published there and trained the baselines ourselves for CREMA-D and ASVSpoof. As our approach, we train a logistic regression model with $L_1$-regularization over the set of features computed from HuBERT attention maps and/or embeddings. For automatic speaker verification (i.e., checking if two utterances are made by the same person) we compute the absolute value of elementwise differences between features of both utterances. \begin{table}[!t]\centering\small \setlength{\tabcolsep}{1pt} \caption{Experimental results; $\star$~--- from SUPERB~\cite{Yang2021SUPERBSP}.} \label{tbl:results} \begin{tabular}{P{.25\linewidth}P{.18\linewidth}P{.18\linewidth}P{.18\linewidth}P{.18\linewidth}}\hline Model & IEMOCAP & CREMA-D & ASVSpoof & VoxCeleb1 \\ & Acc $\uparrow$ & Acc $\uparrow$ & EER $\downarrow$ & EER $\downarrow$ \\ \hline HuBERT (baseline) & $64.92^\star$ & $71.047$ {\footnotesize $\pm0.566$} & $6.649$ & \hspace{0.1cm} $\boldsymbol{5.11}^\star$ \\\hline All layer embs, $1$st & $65.612$ {\footnotesize $\pm1.050$} & $71.320$ {\footnotesize $\pm0.479$} & $2.706$ & $42.701$ \\\hline All layer embs, mean & $69.355$ {\footnotesize $\pm1.801$} & $76.260$ {\footnotesize $\pm1.148$} & $\boldsymbol{1.519}$ & \hspace{0.1cm} $6.513$ \\\hline Attention features & $69.666$ {\footnotesize $\pm1.174$} & $79.200$ {\footnotesize $\pm1.240$} & $2.138$ & $30.326$ \\\hline Attn. \& non-attn. feat. & $\boldsymbol{69.955}$ {\footnotesize $\pm\boldsymbol{0.972}$} & $\boldsymbol{80.155}$ {\footnotesize $\pm\boldsymbol{0.680}$} & $1.946$ & $26.443$\\\hline \end{tabular} \end{table} \noindent \textbf{Experimental results}. Our results on all four datasets are shown in Table~\ref{tbl:results}; we report accuracy (Acc, in $\%$) and equal error rate (EER, in $\%$) for models trained on four different sets of features (see Section~\ref{sec:methods}): \begin{inparaenum}[(1)] \item \emph{Attention features} is a combination of {algebraic} and {topological features} calculated from attention maps; \item \emph{Attn. \& non-attn. 
feat.} denotes a combination of algebraic and topological features of attention maps and topological features of embeddings; this combines all our TDA features for this task; \item \emph{All layer embs, 1st} is the concatenation of first embeddings from all HuBERT layers; \item \emph{All layer embs, mean} is the concatenation of all HuBERT embeddings with timescale averaging. \end{inparaenum} First, on all datasets averaging the embeddings is a much better strategy than taking just the first. One reason might be that NLP Transformers that often use the first embedding insert a fictional first token that ``represents'' the text as a whole, while HuBERT does not do this. Second, adding non-attention features improves performance compared to just {attention} features, as expected since they contain more information, although for IEMOCAP the improvement is quite marginal. Third, topological features give better results than embeddings from all layers on multi-class emotion recognition datasets (IEMOCAP, CREMA-D), while on two others the results are opposite. On all datasets except VoxCeleb1 we achieve major improvements over the baseline (conventional usage of HuBERT). Poor performance of TDA features on VoxCeleb1 might be due to the fact that we used only short clips from each utterance due to high computational costs of TDA methods over large inputs. For CREMA-D, our model achieves a new state of the art result, improving the previous SOTA of 70.95\%~\cite{lerac2022} by over 9\%. \section{TDA for Attention maps interpretation} \noindent \textbf{Restricted tasks}. Inspired by recent works on attention interpretability in natural language processing~\cite{vulic2020probing}, we analyze the roles of individual attention heads in HuBERT. Since their ``areas of expertise'' are much more narrow than general problems considered above, we use two restricted tasks: separation of individual models (one synthetic model vs real speech) and separation (binary classification) of two speakers. \begin{figure*}[!t]\centering \setlength{\tabcolsep}{5pt} \begin{tabular}{p{.62\linewidth}p{.35\linewidth}}\centering \includegraphics[height=0.17\linewidth]{pics/fig01.png} \includegraphics[height=0.17\linewidth]{pics/fig02.png} \includegraphics[height=0.17\linewidth]{pics/fig03.png} & \\[-15pt] \caption{$H_0^{m,\mathrm{sym}}$ for the best heads for individual model separation. Blue: human speech, red: synthetic speech; dashed line: optimal classification.}\label{fig:sep_charts} & \\ \includegraphics[width=\linewidth]{pics/fig2.png} & \multirow[t]{3}{*}{\includegraphics[width=\linewidth]{pics/fig3.png}} \\[-.3cm] \caption{Spectral features with strong correlations with $H_0^{m,\mathrm{pc}}$.}\label{fig:pcc_TDA_spectral} & \caption{Types of attention heads in HuBERT.}\label{fig:head_class} \end{tabular}\vspace{-.5cm} \end{figure*} For individual model separation, we collected all samples produced by a given synthetic model and an equal amount of bonafide samples (real speech) randomly selected from train and validation sets. For each HuBERT attention head, we calculated the distributions of the $H_0^{m,\mathrm{sym}}$ feature for synthetic and real samples and ranked the heads by separation quality defined as $\mathrm{SQ}_{1, 2} = \frac{\vert m_1 - m_2 \vert}{\max(\sigma_1, \sigma_2)}$, where $m_i$ are the means and $\sigma_i$ are the variances of classes. 
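As a small illustration of this ranking procedure (our own sketch, not the authors' code; the per-head feature values are placeholders and $\sigma_i$ follows the definition above), heads can be scored and sorted as follows.
\begin{verbatim}
# Illustrative sketch (not the authors' code): rank HuBERT attention heads by
# the separation quality SQ of a per-head scalar feature between two classes.
import numpy as np

def separation_quality(class1, class2):
    """SQ = |m1 - m2| / max(sigma1, sigma2), with sigma_i the class variances
    as defined in the text."""
    m1, m2 = class1.mean(), class2.mean()
    s1, s2 = class1.var(), class2.var()
    return abs(m1 - m2) / max(s1, s2)

# feats[h] holds H_0^{m,sym} values of head h for the two classes; random
# placeholders stand in for the 144 heads (12 layers x 12 heads) of HuBERT.
rng = np.random.default_rng(0)
feats = {h: (rng.normal(0.4, 0.05, 100), rng.normal(0.6, 0.05, 100))
         for h in range(144)}
ranked = sorted(feats, key=lambda h: separation_quality(*feats[h]), reverse=True)
best = ranked[0]          # layer = best // 12, head within layer = best % 12
\end{verbatim}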
Fig.~\ref{fig:sep_charts} shows sample separations; it turns out that for every voice model there are several heads that separate it from real speech well ($\mathrm{SQ}>1$), and for most models there are heads that separate it very well ($\mathrm{SQ}>3$). The best heads are different and are usually situated in the middle-to-top layers of HuBERT. The worst results were obtained on the A19 model, which was specially fine-tuned on evaluation data (best $\mathrm{SQ} = 1.45$). Separation quality of the threshold classifier for the best head varies from EER $0.03\%$ for the A14 model ($\mathrm{SQ} = 3.5$) to EER $36.5\%$ for the A19 model. For individual speaker separation, a similar approach on ASVSpoof also shows that every pair of speakers has heads with good separation ($\mathrm{SQ}>1$), but this time there are no heads with $\mathrm{SQ}>3$. On the other hand, more general tasks are not handled well by individual heads; e.g. the best achieved separation of male vs female speakers has $\mathrm{SQ}=0.72$. Full results for all experiments are presented at the companion website\footnote{{\scriptsize \url{https://topohubert.github.io/speech-topology-webpages}}}. \noindent \textbf{Attention maps and spectral features}. Transformer-based models such as HuBERT do not intentionally implement spectral methods, but their attention mechanisms can extract a wide variety of information, so we searched for potential similarities between TDA features and common spectral features. Following~\cite{borzi2022synthetic}, we extract the main spectral features for samples from the ASVSpoof dataset (real human speech only) and compute Pearson correlations between them and attention features. Fig.~\ref{fig:pcc_TDA_spectral} shows the results for $H_0^{m,\mathrm{pc}}$; some (but not all) spectral features have a ``counterpart'' in attention maps. \noindent \textbf{Categorizing attention maps in HuBERT}. To show the relation between input speech specifics, HuBERT patterns, and topological features, we categorized HuBERT patterns following~\cite{Yang_attention}, classifying attention heads into three categories: \emph{global}, \emph{vertical}, and \emph{diagonal}, based on three metrics that evaluate their entropy ${\mathbb H}$ averaged over utterances $u$. For a head $h$ and its attention map $A^u\in{\mathbb R}^{d\times d}$, we compute its globalness $G_h = {\mathbb E}_u[\frac{1}{d}\sum_{i=1}^d{\mathbb H}(A^u_{i\ast})]$, verticality $V_h = {\mathbb E}_u[-{\mathbb H}(\frac{1}{d}\sum_{i=1}^dA^u_{i\ast})]$, and diagonality $D_h = {\mathbb E}_u[-\frac{1}{d^2}\sum_{i=1}^d\sum_{j=1}^d |i-j| \cdot A^u_{ij}]$, choosing the type based on the highest rank according to these metrics. Fig.~\ref{fig:head_class} shows that heads in lower layers tend to be global, while higher layers are mostly vertical. To show how these types interact with TDA features, we compute the accuracy based on $H_0^m$ for a subsample of IEMOCAP (angry vs. sad). Fig.~\ref{fig:bars} shows that, similar to~\cite{Yang_attention}, diagonal heads are more important for classification performance, but sometimes $H_0$-bars can extract valuable information even from global and vertical heads. \begin{figure}[!t]\centering \includegraphics[width=\linewidth]{pics/fig4.png}\vspace{-.3cm} \caption{Accuracy of each head broken into pattern types.}\label{fig:bars}\vspace{-.2cm} \end{figure} \begin{figure}[!t]\centering \includegraphics[width=\linewidth]{pics/fig51.png} \includegraphics[width=\linewidth]{pics/fig52.png}\vspace{-.3cm} \caption{Sample MSTs: angry (top) vs.
sad (bottom).}\label{fig:trees}\vspace{-.2cm} \end{figure} Fig.~\ref{fig:trees} illustrates the patterns arising for various input classes. We took head $4$ from layer $2$ since it performs well in this classification. The middle and right columns of Fig.~\ref{fig:trees} show typical examples of the ``angry'' and ``sad'' classes: original attention maps, symmetrized matrices, and minimal spanning trees. A more concentrated and narrower diagonal pattern and a less branching MST arise for the ``angry'' class (top). Interestingly, lower values of $H_0^m$ and clearer diagonal patterns correspond to loud utterances with lots of words per second and background dialog, while sparse and noisy patterns with larger $H_0^m$ values correspond to slow and lifeless speech from the ``sad'' class (see the companion website for examples). \section{Conclusion} In this work, we have applied topological data analysis to solving downstream tasks based on a pretrained HuBERT model and to the analysis and interpretation of individual attention heads. We have shown that TDA yields compact feature sets that give excellent results for tasks such as emotion recognition and speaker classification, including a new state of the art result on CREMA-D. We believe that topological analysis is an important and currently underexplored avenue of research for large machine learning models such as Transformers, and propose TDA as a potentially fruitful direction of study.
\section{Introduction} In argumentation theory, an enthymeme is defined as an incomplete argument found in discourse, where some components are explicit, but other propositions are left implicit and need to be \emph{filled in} as premises or conclusions to fully understand what the argument is \cite{waltonReed2005}. In many instances the missing proposition is a premise. The well-cited example of the Silver Blaze case from one of Sherlock Holmes' stories \cite{waltonReed2005} presents such an incomplete argument: \begin{displayquote} A dog was kept in the stable, and yet, though someone had been in and fetched out a horse, he had not barked enough to rouse the two lads in the loft. Obviously, the midnight visitor was someone whom the dog knew well. \end{displayquote} The missing premise in this case is the generalization ``Dogs generally bark when a person enters an area unless the dog knows the person well.'' While there has been work on identification (i.e., classification) and reconstruction of implicit premises in enthymemes \cite{rajendran-etal-2016-contextual,habernal-etal-2018-argument, reisert-etal-2015-computational,boltuzic-snajder-2016-fill,Razuvayevskaya2017FindingEI}, to our knowledge, \emph{automatically generating an implicit premise from a given enthymeme} is a new task. There are two main challenges that need to be addressed: 1) the lack of large-scale data of incomplete arguments together with annotated missing premises needed to train a sequence-to-sequence model (the largest such set contains 1.7K instances \cite{habernal-etal-2018-argument}); and 2) the inherent need to model commonsense or world knowledge. We propose an approach for generating an implicit premise given an incomplete argument that aims to address these two challenges. Our contributions are threefold. \begin{table}[] \small \centering \begin{tabular}{|l|l|l|} \hline Reason & \multicolumn{2}{l|}{Vaccinations save lives} \\ \hline Claim & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}Vaccination should be mandatory\\ for all children\end{tabular}} \\ \hline\hline ZeroShot & \multicolumn{2}{l|}{Vaccines save lives, they save money} \\ \hline \begin{tabular}[c]{@{}l@{}}Fine-tuned on \\\textit{ART} \end{tabular} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}Vaccinations are the best way to \\ protect children.\end{tabular}} \\ \hline \begin{tabular}[c]{@{}l@{}}Fine-tuned on \\ \textit{ART +PARA-C}\end{tabular} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}Vaccinations are the best way to\\ prevent childhood diseases.\end{tabular}} \\ \hline \end{tabular} \caption{\label{table:1}Implicit premise generation by BART \cite{lewis2019bart} in three different settings for an input enthymeme from the dataset by \citet{habernal-etal-2018-argument}} \vspace{-3ex} \end{table} \emph{A new task of generating an implicit premise given an incomplete argument (enthymeme).} Given an enthymeme consisting of a stated conclusion and a stated premise, generate the implicit/missing premise. As the backbone sequence-to-sequence architecture we use BART \cite{lewis2019bart}. \emph{Leveraging abductive reasoning as an auxiliary task}. To address the first challenge, we rely on an observation from argumentation theory that incomplete arguments in naturally occurring discourse, more often than not, require abductive reasoning (plausible explanations) rather than the stricter form of reasoning based on deductive logic \cite{waltonReed2005,10.2307/40320292}. The Silver Blaze case is such an example.
We leverage the Abductive Reasoning in Narrative Text (\textit{ART}) dataset introduced by \citet{Bhagavatula2020Abductive} to fine-tune a BART model. \textit{ART} consists of pairs of observations together with the plausible explanation to be generated (Section \ref{section:data}). \emph{Encoding discourse-aware commonsense knowledge.} To address the second challenge, we rely on PARA-COMET \cite{Gabriel2021ParagraphLevelCT}, a discourse-aware knowledge model that incorporates paragraph-level information to generate coherent commonsense inferences from narratives. We encode the outputs of PARA-COMET when fine-tuning BART on our auxiliary dataset (ART) (Section \ref{section:method}). We show on three different datasets (Section \ref{section:data}) that this knowledge-enhanced model performs best both in automatic and human-based evaluations (Section \ref{section:eval}). Table \ref{table:1} shows an example of an enthymeme consisting of a stated premise and conclusion and the implicit premise generated by a BART model (zero-shot), by a BART model fine-tuned on the \textit{ART} dataset, and by a BART model fine-tuned on \textit{ART} augmented with discourse-aware commonsense knowledge derived from PARA-COMET. We make the code available at \url{https://github.com/tuhinjubcse/EnthymemesEMNLP2021}. \section{Related Work} Prior work on enthymeme reconstruction has focused primarily on the identification (i.e., classification) of implicit premises in enthymemes \cite{rajendran-etal-2016-contextual,habernal-etal-2018-argument, reisert-etal-2015-computational,boltuzic-snajder-2016-fill,Razuvayevskaya2017FindingEI}. \citet{boltuzic-snajder-2016-fill} study how to identify enthymemes in online discussions, while \citet{habernal-etal-2018-argument} present the task of identifying the correct warrant given two candidate warrants in order to reconstruct an enthymeme. \citet{rajendran-etal-2016-contextual} introduce an approach to classify the stance of a statement as implicit or explicit, as a first step towards the long-term goal of enthymeme reconstruction. Unlike these works, which propose discriminative approaches to identify an enthymeme or the (correct) implicit premises, we focus on generative models that aim to \emph{generate an implicit premise given an enthymeme}, using abductive reasoning and discourse-aware commonsense knowledge. \citet{alshomary-etal-2020-target} introduce a closely related task of generating an argument's conclusion from its premises. Specifically, they focus on the subtask of inferring the conclusion’s target from the premises. They develop two complementary target inference approaches: one ranks premise targets and selects the top-ranked target as the conclusion target, while the other finds a new conclusion target in a learned embedding space using a triplet neural network. Unlike this paper, our work focuses on a new task of generating an implicit premise given an enthymeme that consists of a stated conclusion and a stated premise. \section{Datasets} \label{section:data} \paragraph{Training dataset.} Based on the theoretical connection between enthymemes and abductive reasoning, we use the \textit{Abductive Reasoning in Narrative Text (ART)} data developed for the abductive NLG task \cite{Bhagavatula2020Abductive} to train our models. The task is framed as: given two observations (O1 and O2) from a narrative, generate the most plausible explanation (hypothesis) (Table \ref{table:anlg}).
The observations O1, O2 in \textit{ART} are drawn from the ROCStories \cite{mostafazadeh2016corpus} dataset, a large collection of short, manually curated five-sentence stories. The beginning and ending of each story map to the first (O1) and second (O2) observations in ART, respectively. \citet{Bhagavatula2020Abductive} presented O1 and O2 as narrative context to crowdworkers and prompted them to generate plausible and implausible Hypotheses (H) to explain the observations. To avoid annotation artifacts, \citet{Bhagavatula2020Abductive} applied an adversarial filtering step to retain one challenging pair of plausible and implausible hypotheses that are hard to distinguish between. The \textit{ART} training set consists of 50481 instances, while the validation and test sets consist of 7252 and 14313 instances, respectively. As can be seen in Table \ref{table:anlg}, the observations O1 and O2 can be ``mapped'' to the stated Premise and the stated Claim in an enthymeme, while the hypothesis H is mapped to the \emph{implicit premise} we try to generate. \begin{table}[t] \small \centering \begin{tabular}{|l|l|l|} \hline O1 & \multicolumn{2}{l|}{Alex had his heart set on an ivy league college} \\ \hline O2 & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}Alex ended up achieving his dream \\ of getting into the school.\end{tabular}} \\ \hline H & \multicolumn{2}{l|}{Alex applied to Harvard} \\ \hline \end{tabular} \caption{\label{table:anlg}Instances from the \textit{ART} dataset.} \vspace{-3ex} \end{table} \paragraph{Test datasets.} We test our models on three different datasets of incomplete arguments (enthymemes) annotated with human-generated implicit/missing premises. First, we use the Argument Reasoning Comprehension Task dataset released by \citet{habernal-etal-2018-argument} (\textbf{D1}), which contains 1654 \textit{\{claim, premise, warrant (implicit premise)\}} triples. Second, we use the dataset introduced by \citet{boltuzic-snajder-2016-fill}, which contains 494 enthymemes from an online debate forum with human-annotated implicit premises (\textbf{D2}). Third, we use the dataset introduced by \citet{becker-etal-2020-implicit} (\textbf{D3}), which contains implicit premises annotated for each argument from the MicroText Corpus \cite{peldszus2015annotated}. For \textbf{D3}, we focus only on arguments that are in a \emph{support} relation since this corresponds to our task. Moreover, we choose the cases where there is only one implicit premise, rather than a chain of linked premises. This results in a total of 112 enthymemes for D3. For all datasets, we apply automatic filtering to keep only fully formed sentences as claims and premises (e.g., we remove cases where the stated premise/claim consists of a noun phrase, a partial clause, or many sentences). \begin{table}[] \centering \small \begin{tabular}{|l|l|} \hline \begin{tabular}[c]{@{}l@{}}Encoder\\ Input\end{tabular} & \begin{tabular}[c]{@{}l@{}}Amy was looking through her \\ mother's old scrapbooks. \ \textbf{[SEP]} Amy \\ realized her mother had dated her\\ history professor.\end{tabular} \\ \hline\hline \begin{tabular}[c]{@{}l@{}}Encoder\\ Input +\\ PARA-COMET\end{tabular} & \begin{tabular}[c]{@{}l@{}}Amy was looking through her \\ mother's old scrapbooks.
\ \textbf{[SEP]} \textbf{\color{ForestGreen}to} \\ \textbf{\color{ForestGreen}find something} \ \textbf{[SEP]} Amy realized\\ her mother had dated her history\\ professor.\end{tabular} \\ \hline\hline \begin{tabular}[c]{@{}l@{}}Decoder\\ Output\end{tabular} & \begin{tabular}[c]{@{}l@{}}Amy was looking through her\\ mother's old scrapbooks. \textbf{\textit{\color{blue}And since}} \\ \textit{Amy found pictures of her history}\\ \textit{professor and mother together}. Amy\\ realized her mother had dated her\\ history professor.\end{tabular} \\ \hline \end{tabular} \caption{\label{table:finetuning} Encoder input in two settings: fine-tuning on \textit{ART} and fine-tuning on \textit{ART} + PARA-COMET (the green text between [SEP]). For the decoder's output, every hypothesis is prepended with \textit{And since}, shown in bold blue.} \vspace{-3ex} \end{table} \section{Method}\label{section:method} For our generation model, we use BART \cite{lewis2019bart}, a pre-trained conditional language model that combines bidirectional and auto-regressive transformers. It is implemented as a sequence-to-sequence model with a bidirectional encoder over corrupted text and a left-to-right auto-regressive decoder. \paragraph{Fine-tuning BART on \textit{ART}.} To fine-tune BART on the \textit{ART} dataset (Section \ref{section:data}), we concatenate O1 and O2 with a special delimiter [SEP] as input to the BART encoder, as shown in Table \ref{table:finetuning}, Row 1. For decoding, we focus on reconstructing the entire argument given an enthymeme. To encourage fluency and coherence in our generated argument, we prepend the plausible hypothesis (implicit premise) with the discourse marker \textit{And since} (Table \ref{table:finetuning}, Row 3) during fine-tuning. \paragraph{Fine-tuning BART on PARA-COMET enhanced \textit{ART}.} Adapted knowledge models such as COMET \cite{bosselut-etal-2019-comet} have been shown to generate implicit commonsense inferences along several dimensions (depending on what knowledge graphs they were pre-trained on). PARA-COMET \cite{Gabriel2021ParagraphLevelCT} is an extension of COMET pre-trained on ATOMIC \cite{atomic} that is able to generate discourse-aware commonsense knowledge. ATOMIC is a knowledge graph that contains 9 relations related to social commonsense knowledge, including dynamic aspects of events such as causes and effects, if-then conditional statements, and mental states. Given a text with $T$ sentences $\{S_1, S_2, \ldots, S_T\}$, PARA-COMET generates a set of commonsense inferences for the 9 inferential relations from ATOMIC for each sentence $S_{i}$, which are consistent with the entire narrative. Following PARA-COMET's input format, we create a discourse of two sentences containing [O1,O2] from ART. We then feed this as an input to the trained PARA-COMET model and obtain 9 commonsense relations for both O1 and O2. Given the causal nature of the implicit premises, for this work we use only the relation \textit{xIntent}. Given an event (e.g., ``X compliments Y''), \textit{xIntent} states the likely intents of person X (e.g., ``X wants to be nice''). We only consider the \textit{xIntent} inference returned for O1 (the Premise in our task). We experimented with other relations as well as with \textit{xIntent} for both O1 and O2, but the results were not better. After obtaining discourse-aware commonsense, we concatenate \{O1, commonsense, O2\} in a sequential order as shown in Table \ref{table:finetuning}, Row 2, and pass it to BART's encoder for fine-tuning.
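For concreteness, the following minimal sketch (ours, not the released code; the helper names are illustrative and the example strings are taken from Table~\ref{table:finetuning}) shows the input and target formats described above; the same encoder-side formatting is reused at inference time with the stated premise and claim in place of O1 and O2.
\begin{verbatim}
# Illustrative sketch (not the authors' released code) of the sequence
# formats used to fine-tune BART on ART, with and without PARA-COMET.
from typing import Optional

def encoder_input(o1: str, o2: str, commonsense: Optional[str] = None) -> str:
    """O1 [SEP] O2, optionally with an xIntent inference between two [SEP]s."""
    if commonsense is None:
        return f"{o1} [SEP] {o2}"
    return f"{o1} [SEP] {commonsense} [SEP] {o2}"

def decoder_target(o1: str, hypothesis: str, o2: str) -> str:
    """Whole argument, with the hypothesis prepended by the marker 'And since'."""
    return f"{o1} And since {hypothesis}. {o2}"

o1 = "Amy was looking through her mother's old scrapbooks."
o2 = "Amy realized her mother had dated her history professor."
h = "Amy found pictures of her history professor and mother together"
xintent = "to find something"   # xIntent inference for O1 (from PARA-COMET)

src = encoder_input(o1, o2, commonsense=xintent)
tgt = decoder_target(o1, h, o2)
\end{verbatim}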
For decoding, we use the same process as before (Table \ref{table:finetuning}, Row 3). \paragraph{Inference-time decoding.} For generation on our task and test sets, we concatenate the \{Premise, Claim\} or \{Premise, commonsense, Claim\} in a given enthymeme in the same way as shown in Table \ref{table:finetuning} and pass it as input to the encoder of the fine-tuned BART model. The fine-tuned BART model then generates the entire argument along with the implicit premise auto-regressively. We use beam search with a beam width of 5 for generation. After decoding, we split the argument into 3 individual sentences and treat the middle sentence starting with \textit{And since} as the implicit premise after removing the artificially added discourse marker. For the zero-shot setting, we use the pre-trained BART (bart-large) model. We use the format \{\emph{Premise. And since [MASK]. Claim}\} and let the language model generate an implicit premise. \section{Evaluation and Results} \label{section:eval} We evaluate three setups: 1) directly use pre-trained BART (Zero-shot); 2) fine-tune BART on \textit{ART}; 3) fine-tune BART on \textit{ART+PARA-COMET}. \paragraph{Automatic Evaluation Setup.} We use \textit{BLEU}~\cite{BLEU}, one of the most widely used automatic metrics for generation tasks, to compute BLEU-1 and BLEU-2 scores between the system output and the human-written gold implicit premise. We also report the F1-score of \textit{BERTScore}, a metric for evaluating text generation using contextualized embeddings. \paragraph{Human evaluation setup.} We select 50 enthymemes from each test set (a total of 150 enthymemes) and the output of our fine-tuned BART models (with or without PARA-COMET). We hired crowdworkers on the Amazon Mechanical Turk platform. Given an enthymeme, they were asked whether the generated implicit premise was plausible or not (agreement: 0.56 based on Krippendorff's $\alpha$). Each enthymeme was judged for plausibility by 3 distinct Turkers (50 crowdworkers overall). As it was a binary judgement, we took a majority vote, which means that if 2 of the 3 annotators thought a premise was plausible, we marked it as plausible. The plausibility judgement considers whether the generated premise is grammatical, relevant to the argument, coherent with commonsense, and completes the argument. \paragraph{Results.} While pre-trained language models often contain structured commonsense \cite{davison-etal-2019-commonsense,Zhou2020EvaluatingCI}, Table \ref{table:auto} shows that pre-trained BART cannot generate plausible implicit premises. Fine-tuning on the \textit{ART} dataset improves the results significantly. Finally, the model that encodes discourse-aware commonsense outperforms all baselines on all test datasets (D1, D2 and D3). Human evaluation further demonstrates that encoding commonsense knowledge leads to better implicit premise generation (Table \ref{table:human}). \paragraph{Analysis.} We notice that adding commonsense beams from PARA-COMET makes the generated implicit premise more plausible. For instance, for the stated claim and premise from D3 in Table \ref{analysis}, we see that PARA-COMET adds a beam \textit{to feel better}. Similarly, it adds a beam \textit{to learn more} for the stated claim and premise of both D1 examples shown in Table \ref{analysis}. We posit that adding these in combination with the stated claim and premise leads our model to infer more plausible implicit premises compared to the ones generated by BART fine-tuned on \textit{ART}.
Finally, given that D3 has been annotated with argument schemes \cite{musi-etal-2018-multi}, we can explore their role in enthymeme reconstruction. We notice that most of the generated plausible implicit premises belong to enthymemes annotated with \textit{Practical Evaluation} argument scheme, where ``the premise is an evaluation about something being ‘good’ or ‘bad’, while the claim expresses a recommendation/advice about stopping/continuing an action" (Table \ref{analysis} ). \begin{table}[] \centering \small \begin{tabular}{|p{0.4cm}|l|l|l|l|} \hline Data & System & BLEU1 & BLEU2 & BS \\ \hline \multirow{3}{*}{D1} & ZeroShot & 6.02 & 2.17 & 42.88 \\ \cline{2-5} & ART & 9.16 & 3.11 & 48.35 \\ \cline{2-5} & \begin{tabular}[c]{@{}l@{}} +PARA-COMET\end{tabular} & \textbf{10.56} & \textbf{3.90} & \textbf{50.22} \\ \hline\hline \multirow{3}{*}{D2} & ZeroShot & 28.24 & 15.13 & 46.96 \\ \cline{2-5} & ART & 37.77 & 18.76 & 60.63 \\ \cline{2-5} & \begin{tabular}[c]{@{}l@{}} +PARA-COMET\end{tabular} & \textbf{44.12} & \textbf{24.14} & \textbf{67.75} \\ \hline\hline \multirow{3}{*}{D3} & ZeroShot & 12.58 & 6.25 & 44.64 \\ \cline{2-5} & ART & 14.89 & 6.34 & 51.78 \\ \cline{2-5} & \begin{tabular}[c]{@{}l@{}} +PARA-COMET\end{tabular} & \textbf{15.56} & \textbf{7.50} & \textbf{53.38} \\ \hline \end{tabular} \caption{\label{table:auto} Automatic evaluation of implicit premise generation by BART in 3 settings based on BLEU1, BLEU2 and BertScore(BS). Difference is significant, $( \alpha < 0.005)$ via Wilcoxon signed-rank test.} \end{table} \begin{table}[] \centering \small \begin{tabular}{|l|l|c|} \hline Data & System & Plausibility \\ \hline \multirow{2}{*}{D1} & ART & 50\% \\ \cline{2-3} & \begin{tabular}[c]{@{}l@{}} +PARA-COMET\end{tabular} & \textbf{58}\% \\ \hline\hline \multirow{2}{*}{D2} & ART & 48\% \\ \cline{2-3} & \begin{tabular}[c]{@{}l@{}} +PARA-COMET\end{tabular} & \textbf{66}\% \\ \hline\hline \multirow{2}{*}{D3} & ART & 38\% \\ \cline{2-3} & \begin{tabular}[c]{@{}l@{}} +PARA-COMET\end{tabular} & \textbf{56}\% \\ \hline \end{tabular} \caption{\label{table:human}Human evaluation results our finetuned BART models in two settings. } \vspace{-3ex} \end{table} \begin{table}[h!] 
\centering \small \begin{tabular}{|p{0.18cm}|p{1.54cm}|p{4.8cm}|} \hline & & \\ \hline \multirow{6}{*}{D1} & St Premise & \begin{tabular}[c]{@{}l@{}}Deaf students need more specialized \\ education\end{tabular} \\ \cline{2-3} & St Claim & States need special schools for the deaf \\ \cline{2-3} & Gold & \begin{tabular}[c]{@{}l@{}}Their parents can't always enroll them\\ in a deaf private school\end{tabular} \\ \cline{2-3} & Zero-shot & We can't afford it, we shouldn't \\ \cline{2-3} & ART & \begin{tabular}[c]{@{}l@{}}The deaf students are not getting \\ enough education.\end{tabular} \\ \cline{2-3} & +PARA-COMET & \begin{tabular}[c]{@{}l@{}}Deaf students are not \textbf{\color{ForestGreen}being served}\\ \textbf{\color{ForestGreen}well in the schools}\end{tabular} \\ \hline\hline \multirow{6}{*}{D1} & St Premise & \begin{tabular}[c]{@{}l@{}}Understanding other culture is \\more important now than ever before.\end{tabular} \\ \cline{2-3} & St Claim & Colleges need humanities programs \\ \cline{2-3} & Gold & \begin{tabular}[c]{@{}l@{}}More people now fail to understand \\other cultures\end{tabular} \\ \cline{2-3} & Zero-shot & It's the humanities, we need them \\ \cline{2-3} & ART & \begin{tabular}[c]{@{}l@{}}The humanities are the most important \\subjects in college.\end{tabular} \\ \cline{2-3} & +PARA-COMET & \begin{tabular}[c]{@{}l@{}}There is a \textbf{\color{ForestGreen}lot of misinformation}\\ out there about other cultures \end{tabular} \\ \hline\hline \multirow{6}{*}{D2} & St Premise & \begin{tabular}[c]{@{}l@{}}Bush new spending in 8 years? \$5.07\\ TRILLION Obama total New Spending\\ (projected out for the next 8 years)?\\\$1.44 TRILLION. And of that total,\\ only \$430 billion is non-recession \\related.\end{tabular} \\ \cline{2-3} & St Claim & Fixed the economy \\ \cline{2-3} & Gold & Obama spends less money than Bush. \\ \cline{2-3} & Zero-shot & We are talking about the economy \\ \cline{2-3} & ART & \begin{tabular}[c]{@{}l@{}}The Obama administration has spent\\ \$1 trillion.\end{tabular} \\ \cline{2-3} & +PARA-COMET & \begin{tabular}[c]{@{}l@{}}The Obama's \textbf{\color{ForestGreen}spending is much less}\\ than Bush's.\end{tabular} \\ \hline\hline \multirow{6}{*}{D3} & St Premise & \begin{tabular}[c]{@{}l@{}}The morning-after pill has a \\number of side effects.\end{tabular} \\ \cline{2-3} & St Claim & The morning-after pill should only be prescribed after counselling by a physician or pharmacist., \\ \cline{2-3} & Gold & Physicians and pharmacists inform about side effects. \\ \cline{2-3} & Zero-shot & \begin{tabular}[c]{@{}l@{}}Morning-after pills are not FDA\\ approved, they should be avoided .\end{tabular} \\ \cline{2-3} & ART & \begin{tabular}[c]{@{}l@{}}The morning- after pill can \\cause depression.\end{tabular} \\ \cline{2-3} & +PARA-COMET & \begin{tabular}[c]{@{}l@{}}The side effects \textbf{\color{ForestGreen}can be very serious}.\end{tabular} \\ \hline \end{tabular} \caption{\label{analysis}Enthymeme generation for a given stated Premise and Claim by BART in 3 settings: zero-shot; fine-tuned on \textit{ART}; and fine-tuned on \textit{ART} + PARA-COMET. Text bolded in green displays how generations are more plausible due to incorporation of discourse aware commonsense.} \end{table} \section{Conclusions} We propose an end-to-end approach for a new task of \emph{automatically generating an implicit premise given an enthymeme}. 
We show how leveraging abductive reasoning as an auxiliary task improves over the zero-shot performance of a state-of-the-art generative language model. Finally, we build a knowledge-enhanced model by encoding discourse-aware commonsense that outperforms all existing baselines in terms of automatic metrics as well as plausibility judgements from crowdworkers. Future work includes exploring other sources of commonsense knowledge, experimenting with improved decoding techniques, as well as studying the role of argument schemes in enthymeme reconstruction. \section{Ethical Considerations} Although we use language models trained on data collected from the Web, which have been shown to have issues with bias and abusive language \cite{sheng-etal-2019-woman, wallace-etal-2019-universal}, the inductive bias of our models should limit inadvertent negative impacts. Unlike model variants such as GPT, BART is a conditional language model, which provides more control over the generated output. Moreover, we fine-tune our model on the ART dataset, which is built on five-sentence short stories and is devoid of harmful and toxic text, especially text targeted at marginalized communities. While dual-use concerns are certainly possible here, we think that open-sourcing this technology will help to facilitate understanding of arguments with more balanced and better reasoning. The technology should be used responsibly, particularly by making sure the generation is controllable by providing the stated premise, claim, and any commonsense knowledge pertaining to the enthymeme in textual form. Finally, we pay the Turkers \$15/hour, complying with minimum wage standards in the US.
\section{Equivalence between definitions of precategories} \label{sec:precat-equivalent-definitions} \noindent We prove the equivalence between the equational and the enriched definition of precategories:% \ifx\precatequivalentdefsprop\undefined \begin{prop} There is an equivalence of categories between $\nPCat{n+1}$ and categories enriched in $\nPCat n$ with the funny tensor product. \end{prop} \else \precatequivalentdefsprop* \fi \begin{proof} \renewcommand\tcomp{c}% \renewcommand\tunit{i}% Given~$C \in \nPCat {n+1}$, we define an associated object~$D \in \enrCatpar{\nPCat n}$ as follows. We put \[ D_0 = C_0 \qtand D(x,y) = C_{\uparrow(x,y)} \] where~$C_{\uparrow(x,y)}$ is the $n$\precategory such that \[ (C_{\uparrow(x,y)})_i = \set{ u \in C_{i+1} \mid \csrc_0(u) = x \text{ and } \ctgt_0(u) = y} \] for~$i \in \set{0,\ldots,n}$ and whose composition operation~$\pcomp_{k,l}$ is the operation~$\pcomp_{k+1,l+1}$ on~$C$ for~$k,l \in \set{1,\ldots,n}$. Given~$x \in D_0$, we define the identity morphism \[ \tunit_{x}\co \termcat \to D(x,x) \] as the morphism which maps the unique $0$\cell~$\ast$ of~$\termcat$ to~$\unit {x} \in C_1$. Given~$x,y,z \in C_0$, we define the composition morphism \[ \tcomp_{x,y,z}\co D(x,y) \funny D(y,z) \to D(x,z) \in \nPCat {n} \] as the unique morphism such that~$l_{x,y,z} = \tcomp_{x,y,z} \circ \lincfun{D(x,y),D(y,z)}$ is the composite \[ D(x,y) \times \catsk 0 {D(y,z)} \simeq \coprod_{g \in D(y,z)_0} D(x,y) \xto{[(-)\pcomp_0 g]_{g \in D(y,z)_0}} D(x,z) \] and~$r_{x,y,z} = \tcomp_{x,y,z} \circ \rincfun{D(x,y),D(y,z)}$ is the composite \[ \catsk 0 {D(x,y)} \times D(y,z) \simeq \coprod_{f\in D(x,y)_0} D(y,z) \xto{[f\pcomp_0 (-)]_{f \in D(x,y)_0}} D(x,z). \] We verify that the composition morphism is left unital, \ie given~$x,y \in D_0$, the diagram \[ \begin{tikzcd}[column sep=small] \termcat \funny D(x,y) \ar[rr,"{\tunit_x \funny D(x,y)}"] \ar[rd,"\funnyl_{D(x,y)}"'] & & D(x,x) \funny D(x,y) \ar[ld,"{\tcomp_{x,x,y}}"] \\ & D(x,y) \end{tikzcd} \] commutes. We compute that \begin{align*} \tcomp_{x,x,y} \circ (\tunit_x \funny D(x,y)) \circ \lincfun{\termcat,D(x,y)} &= \tcomp_{x,x,y} \circ \lincfun{D(x,x),D(x,y)} \circ (\tunit_x \times \catsk 0 {D(x,y)}) \\ & & \makebox[1cm][r]{(by definition of~$\funny$)} \\ &= l_{x,x,y} \circ (\tunit_x \times \catsk 0 {D(x,y)}) \\ &= \cucatsk {D(x,y)} \circ \pi_2 & \makebox[1cm][r]{\hfill (by unitality of~$\unit x$)} \\ &= \funnyl_{D(x,y)} \circ \lincfun{\termcat,D(x,y)} \shortintertext{and} \tcomp_{x,x,y} \circ (\tunit_x \funny D(x,y)) \circ \rincfun{\termcat,D(x,y)} &= \tcomp_{x,x,y} \circ \rincfun{D(x,x),D(x,y)} \circ (\catsk 0 {(\tunit_x)} \times D(x,y)) \\ & & \makebox[1cm][r]{(by definition of~$\funny$)} \\ &= r_{x,x,y} \circ (\catsk 0 {(\tunit_x)} \times D(x,y)) \\ &= \pi_2 & \makebox[1cm][r]{\hfill (by unitality of~$\unit x$)} \\ &= \funnyl_{D(x,y)} \circ \rincfun{\termcat,D(x,y)} \end{align*} Thus, by the colimit definition of~$\termcat \funny D(x,y)$, the above triangle commutes. Similarly, the triangle \[ \begin{tikzcd}[column sep=small] D(x,y) \funny \termcat \ar[rr,"{D(x,y) \funny \tunit_y}"] \ar[rd,"\funnyr_{D(x,y)}"'] & & D(x,y) \funny D(y,y) \ar[ld,"{\tcomp_{x,y,y}}"] \\ & D(x,y) \end{tikzcd} \] commutes, so that the composition morphism is right unital. 
We now verify that it is associative, \ie given~$w,x,y,z \in D_0$, that the diagram \begingroup \makeatletter \renewcommand{\maketag@@@}[1]{\hbox to 0.000008pt{\hss\m@th\normalsize\normalfont#1}}% \makeatother \begin{equation} \label{eq:precat-enr-assoc} \begin{tikzpicture}[commutative diagrams/every diagram,xscale=2.4,yscale=0.6] \node at (0:2.7cm) {}; \node at (180:2.7cm) {}; \node (PB) at (90+72:2cm) {$\mathmakebox[3cm][c]{\hspace*{1cm}(D(w,x) \funny D(x,y)) \funny D(y,z)}$}; \node (PL) at (90:2cm) {$D(w,y) \funny D(y,z)$}; \node (PR1) at (90+2*72:2cm) {$D(w,x) \funny (D(x,y) \funny D(y,z))$}; \node (PR2) at (90+3*72:2cm) {$D(w,x) \funny D(x,z)$}; \node (PE) at (90+4*72:2cm) {$D(w,z)$}; \path[commutative diagrams/.cd,every arrow,every label] (PB) edge node {${c_{w,x,y} \funny D(y,z)}$} (PL) (PL) edge node {${c_{w,y,z}}$} (PE) (PB) edge node[swap,pos=0.3] {$\mathstrut\smash{\funnyass_{D(w,x),D(x,y),D(y,z)}}$} (PR1) (PR1) edge node[swap] {${D(w,x) \funny c_{x,y,z}}$} (PR2) (PR2) edge node[swap,pos=0.7] {$\mathstrut\smash{c_{w,x,z}}$} (PE); \end{tikzpicture} \end{equation} \endgroup commutes. By a colimit definition analogous to~\eqref{eq:funny-3-pushout}, it is enough to show the commutation of the diagram when precomposing with the morphisms~$\iota_1,\iota_2,\iota_3$ where \begin{align*} \iota_1 &= \lincfun{D(w,x)\funny D(x,y),D(y,z)} \circ (\lincfun{D(w,x),D(x,y)} \times \catsk 0 {D(y,z)}), \\ \iota_2 &= \lincfun{D(w,x)\funny D(x,y),D(y,z)} \circ (\rincfun{D(w,x),D(x,y)} \times \catsk 0 {D(y,z)}), \\ \iota_3 &= \rincfun{D(w,x)\funny D(x,y),D(y,z)}\zbox. \end{align*} Writing~$D^1,D^2,D^3$ for~$D(w,x),D(x,y),D(y,z)$, we compute that \begin{align*} &\phantom{\;=\;} \tcomp_{w,x,z} \circ (D^1\funny\tcomp_{x,y,z}) \circ \funnyass_{D^1,D^2,D^3} \circ \iota_1 \\ &= \tcomp_{w,x,z} \circ (D^1\funny\tcomp_{x,y,z}) \circ \funnyass_{D^1,D^2,D^3} \circ \lincfun{D^1\funny D^2,D^3} \circ (\lincfun{D^1,D^2} \times \catsk 0 {D^3})\\ &= \tcomp_{w,x,z} \circ (D^1\funny\tcomp_{x,y,z}) \circ \lincfun{D^1,D^2\funny D^3} \circ \alpha_{D^1,\catsk 0 D^2,\catsk 0 D^3}\\ &= \tcomp_{w,x,z} \circ \lincfun{D^1,D(x,z)} \circ (D^1\times ((-)\pcomp_0(-))) \circ \alpha_{D^1,\catsk 0 D^2,\catsk 0 D^3}\\ &= ((-)\pcomp_0(-)) \circ (D^1\times ((-)\pcomp_0(-))) \circ \alpha_{D^1,\catsk 0 D^2,\catsk 0 D^3}\\ &= ((-)\pcomp_0(-)) \circ (((-)\pcomp_0(-))\times \catsk 0 D^3) & \makebox[1cm][r]{(by associativity of~$\pcomp_0$)}\\ &= \tcomp_{w,y,z} \circ \lincfun{D(w,y),D^3} \circ (((-)\pcomp_0(-))\times \catsk 0 D^3)\\ &= \tcomp_{w,y,z} \circ \lincfun{D(w,y),D^3} \circ (\tcomp_{w,x,y}\times \catsk 0 D^3) \circ (\lincfun{D^1,D^2} \times \catsk 0 {D^3})\\ &= \tcomp_{w,y,z} \circ (\tcomp_{w,x,y}\funny D^3) \circ \lincfun{D^1\funny D^2,D^3} \circ (\lincfun{D^1,D^2} \times \catsk 0 {D^3})\\ &= \tcomp_{w,y,z} \circ (\tcomp_{w,x,y}\funny D^3) \circ \iota_1 \end{align*} so that the diagram~\eqref{eq:precat-enr-assoc} commutes when precomposed with~$\iota_1$ and, similarly, it commutes when precomposed with~$\iota_2$ and~$\iota_3$. Thus, \eqref{eq:precat-enr-assoc} commutes. Hence,~$D$ is a category enriched in $n$\precategories. The operation~$C \mapsto D$ can easily be extended to morphisms of $(n{+}1)$\precategories, giving a functor \[ F\co \nPCat{n+1} \to \enrCatpar{\nPCat n}. \] \medskip \noindent Conversely, given~$C \in \enrCatpar{\nPCat n}$, we define an associated object~$D \in \nPCat {n+1}$. We put \[ D_0 = C_0 \qtand D_{i+1} = \coprod_{x,y \in C_0}C(x,y)_i \] for~$i \in \set{0,\ldots,n}$. 
Given~$k \in \N$ with~$k\le n$,~$\iota_{x,y}(u)\in D_{k+1}$ and~$\eps \in \set{-,+}$, we put \[ \csrctgt\eps_k(\iota_{x,y}(u)) = \begin{cases} x & \text{if~$k = 0$ and~$\eps = -$,} \\ y & \text{if~$k = 0$ and~$\eps = +$,} \\ \iota_{x,y}(\csrctgt\eps_{k-1}(u)) & \text{if~$k > 0$,} \end{cases} \] so that the operations~$\csrc,\ctgt$ equips~$D$ with a structure of $(n{+}1)$\globular set. Given~$x \in D_0$, we put \[ \unit {x} = \iota_{x,x}(\tunit_x(\ast)) \] and, given~$k \in \N$ with~$k \le n-1$ and~$\iota_{x,y}(u) \in D_{k+1}$, we put \[ \unitp {k+2}{\iota_{x,y}(u)} = \iota_{x,y}(\unitp {k+1}{u})\zbox. \] Given~$i,{k_1},{k_2} \in \set{0,\ldots,n}$ with~$i = \min({k_1},{k_2}) - 1$, and~$u = \iota_{x,y}(\tilde u) \in D_{k_1},v = \iota_{x',y'}(\tilde v) \in D_{k_2}$ that are $i$\composable, we put \[ u \pcomp_i v = \begin{cases} \iota_{x,y}({\tilde u} \pcomp_i {\tilde v}) & \text{if~$i > 0$} \\ \iota_{x,y'}(l_{x,y,y'}({\tilde u},\unitp {k_1 - 1} {\tilde v})) & \text{if~$i = 0$ and~${k_2} = 1$} \\ \iota_{x,y'}(r_{x,y,y'}(\unitp {k_2 - 1} {\tilde u},{\tilde v})) & \text{if~$i = 0$ and~${k_1} = 1$} \end{cases} \] where~$l_{x,y,z}$ is the composite \[ C(x,y) \times \catsk 0 {C(y,z)} \xto{\lincfun{C(x,y),C(y,z)}} C(x,y) \funny C(y,z) \xto{\tcomp_{x,y,z}} C(x,z) \] and~$r_{x,y,z}$ is the composite \[ \catsk 0 {C(x,y)} \times C(y,z) \xto{\rincfun{C(x,y),C(y,z)}} C(x,y) \funny C(y,z) \xto{\tcomp_{x,y,z}} C(x,z). \] We now have to show that the axioms of $(n{+}1)$\precategories are satisfied. Note that, by the definition of~$D$, it is enough to prove the axioms for the~$\unitp 1{}$ and~$\pcomp_0$ operations. Given~$x \in D_0$ and~$\eps \in \set{-,+}$, we have \[ \csrctgt\eps_0(\unit x) = \csrctgt\eps_0(\iota_{x,x}(\tunit_x(\ast))) = x \] so that~\Axr{precat:src-tgt-unit} holds. For~$k \in \set{1,\ldots,n+1}$, given~$u = \iota_{x,y}(\tilde u) \in D_k$ and~$v = \iota_{y,z}({\tilde v}) \in D_1$ such that~$u,v$ are $0$\composable, if~$k = 1$, then \[ \csrc_0(u \pcomp_0 v) = \csrc_0(\iota_{x,z}(l_{x,y,z}(\tilde u,{\tilde v}))) = x, \] and, similarly,~$\ctgt_0(u \pcomp_0 v) = z$. Otherwise, if~$k > 1$, then, for~$\eps \in \set{-,+}$, \begin{align*} \csrctgt\eps_{k-1}(u \pcomp_0 v) &= \csrctgt\eps_{k-1}(\iota_{x,z}(l_{x,y,z}(\tilde u,\unitp{k-1} {\tilde v}))) \\ &= \iota_{x,z}(\csrctgt\eps_{k-2}(l_{x,y,z}(\tilde u,\unitp {k-1}{\tilde v}))) \\ &= \iota_{x,z}(l_{x,y,z}(\csrctgt\eps_{k-2}(\tilde u),\unitp {k-2}{\tilde v})) \\ &= \iota_{x,y}(\csrctgt\eps_{k-2}(\tilde u)) \pcomp_0 \iota_{y,z}(\tilde v) \\ &= \csrctgt\eps_{k-1}(u) \pcomp_0 v. \end{align*} Analogous equalities are satisfied for $0$\composable~$u \in D_1$ and~$v \in D_k$, so that~\Axr{precat:csrc-tgt} holds. Given~$k \in \set{1,\ldots,n+1}$ and~$u = \iota_{x,y}(\tilde u) \in D_k$, we have \begin{align*} u \pcomp_0 \unit y &= \iota_{x,y}(l_{x,y,y}(\tilde u,\unitp {k-1} {\tunit_y(\ast)})) \\ &= \iota_{x,y}(c_{x,y,y} \circ (C(x,y) \funny \tunit_y) \circ \lincfun{C(x,y),\termcat}(\tilde u,\unitp {k-1} {\ast})) \\ &= \iota_{x,y}(\funnyr_{C(x,y)} \circ \lincfun{C(x,y),\termcat}(\tilde u,\unitp {k-1} {\ast})) & \makebox[4cm][r]{(by the axioms of enriched categories)} \\ &= \iota_{x,y}(\pi_1(\tilde u,\unitp {k-1} {\ast})) & \makebox[4cm][r]{(by definition of~$\funnyr$)} \\ &= u\zbox. 
\end{align*} Moreover, given~$k \in \set{1,\ldots,n}$ and $0$\composable~$u = \iota_{x,y}(\tilde u) \in D_1$ and~$v = \iota_{y,z}(\tilde v) \in D_k$, we have \begin{align*} u \pcomp_0 \unitp{k+1} v &= \iota_{x,z}(r_{x,y,z}(\unitp k {\tilde u},\unitp k {\tilde v})) \\ &= \iota_{x,z}(\unitp k {} (r_{x,y,z}(\unitp {k-1} {\tilde u},\tilde v))) \\ &= \unitp {k+1}{}(\iota_{x,z}(r_{x,y,z}(\unitp {k-1} {\tilde u},\tilde v))) \\ &= \unitp {k+1} {u \pcomp_0 v}\zbox. \end{align*} Analogous equalities hold when composing with identities on the left, so that~\Axr{precat:compat-id-comp} holds. Given~$k \in \set{1,\ldots,n+1}$ and $0$\composable~$u_1 = \iota_{w,x}(\tilde u_1) \in D_k$,~$u_2 = \iota_{x,y}(\tilde u_2) \in D_1$ and~$u_3 = \iota_{y,z}(\tilde u_3) \in D_1$, we have \begin{align*} (u_1 \pcomp_0 u_2) \pcomp_0 u_3 &= \iota_{w,z}(l_{w,y,z}(l_{w,x,y}(\tilde u_1,\unitp{k-1}{\tilde u_2}),\unitp {k-1}{\tilde u_3}))\zbox. \end{align*} Writing~$C^1,C^2,C^3$ for~$C(w,x),C(x,y),C(y,z)$, we compute that \begin{align*} &\phantom{\;=\;}l_{w,y,z} \circ (l_{w,x,y} \times \catsk 0 {C^3}) \\ &= \tcomp_{w,y,z} \circ \lincfun{C(w,y),C^3} \circ (\tcomp_{w,x,y} \times \catsk 0 {C^3}) \circ (\lincfun{C^1,C^2} \times \catsk 0 {C^3}) \\ &= \tcomp_{w,y,z} \circ (\tcomp_{w,x,y} \funny C^3) \circ \lincfun{C^1\funny C^2,C^3} \circ (\lincfun{C^1,C^2} \times \catsk 0 {C^3}) \shortintertextnobreakabove{\hfill(by definition of~$\funny$)} &= \tcomp_{w,x,z} \circ (C^1 \funny \tcomp_{x,y,z}) \circ \funnyass_{C^1,C^2,C^3} \circ \lincfun{C^1\funny C^2,C^3} \circ (\lincfun{C^1,C^2} \times \catsk 0 {C^3}) \shortintertextnobreakabove{\hfill(by the axioms of enriched categories)} &= \tcomp_{w,x,z} \circ (C^1 \funny \tcomp_{x,y,z}) \circ \lincfun{C^1,C^2 \funny C^3} \circ \alpha_{C^1,\catsk 0 C^2,\catsk 0 C^3} \shortintertextnobreakabove{\hfill(by definition of~$\funnyass$)} &= \tcomp_{w,x,z} \circ \lincfun{C^1,C(x,z)} \circ (C^1 \times \catsk 0 {(\tcomp_{x,y,z})}) \circ \alpha_{C^1,\catsk 0 C^2,\catsk 0 C^3} \\ &= l_{w,x,z} \circ (C^1 \times \catsk 0 {(l_{x,y,z})}) \circ \alpha_{C^1,\catsk 0 C^2,\catsk 0 C^3}\zbox. \end{align*} Thus, \begin{align*} (u_1 \pcomp_0 u_2) \pcomp_0 u_3 &= \iota_{w,z}(l_{w,x,z}(\tilde u_1,\catsk 0 {(l_{x,y,z})}(\unitp{k-1}{\tilde u_2},\unitp{k-1}{\tilde u_3}))) \\ &= \iota_{w,z}(l_{w,x,z}(\tilde u_1,\unitp{k-1}{\catsk 0 {(l_{x,y,z})}({\tilde u_2},{\tilde u_3})})) \\ &= u_1 \pcomp_0 \iota_{x,z}(\catsk 0 {(l_{x,y,z})}({\tilde u_2},{\tilde u_3}))\\ &= u_1 \pcomp_0 \iota_{x,z}(l_{x,y,z}({\tilde u_2},{\tilde u_3}))\\ &= u_1 \pcomp_0 (u_2 \pcomp_0 u_3) \end{align*} and similar equalities can be shown for~$(u_1,u_2,u_3) \in (D_1 \times_0 D_k \times_0 D_1) \sqcup (D_1 \times_0 D_1 \times_0 D_k)$, so that \Axr{precat:assoc} holds. 
Finally, for~$i,k_1,k_2,k \in \set{1,\ldots,n+1}$ such that~$i = \min(k_1,k_2) - 1$,~$k = \max(k_1,k_2)$, given~$u = \iota_{x,y}(\tilde u) \in D_1$ and $i$\composable~$v_1 = \iota_{y,z}(\tilde v_1) \in D_{k_1}, v_2 = \iota_{y,z}(\tilde v_2) \in D_{k_2}$, we have \begin{align*} u \pcomp_0 (v_1 \pcomp_i v_2) &= u \pcomp_0 \iota_{y,z}(\tilde v_1 \pcomp_{i-1} \tilde v_2) \\ &= \iota_{x,z}(r_{x,y,z}(\unitp {k-1} u,\tilde v_1 \pcomp_{i-1} \tilde v_2)) \\ &= \iota_{x,z}(r_{x,y,z}(\unitp {k_1-1} u \pcomp_{i-1} \unitp {k_2-1} u,\tilde v_1 \pcomp_{i-1} \tilde v_2)) \\ &= \iota_{x,z}(r_{x,y,z}(\unitp {k_1-1} u,\tilde v_1) \pcomp_{i-1}r_{x,y,z}(\unitp {k_2-1} u,\tilde v_2)) \\ &= \iota_{x,z}(r_{x,y,z}(\unitp {k_1-1} u,\tilde v_1)) \pcomp_{i} \iota_{x,z}(r_{x,y,z}(\unitp {k_2-1} u,\tilde v_2)) \\ &= (u \pcomp_0 v_1) \pcomp_{i} (u \pcomp_0 v_2) \end{align*} and an analogous equality can be shown for~$((u_1,u_2),v) \in ((D_{k_1} \times_i D_{k_2}) \times_0 D_1)$, so that \Axr{precat:distrib} holds. Hence,~$D$ is an $(n{+}1)$\precategory. The construction~$C \mapsto D$ extends naturally to enriched functors, giving a functor~$G\co \enrCatpar{\nPCat n} \to \nPCat{n+1}$. Given~$C \in \nPCat {n+1}$ and~$C' = G \circ F(C)$, there is a morphism~$\alpha_C \co C \to C'$ which is the identity between~$C_0$ and~$C'_0$ and, for~$k \in \N$ with~$k\le n$, maps~$u \in C_{k+1}$ to~$\iota_{x,y}(u)$ where~$x = \csrc_0(u)$ and~$y = \ctgt_0(u)$, and one can verify that it is an isomorphism which is natural in~$C$. Conversely, given~$C \in \enrCatpar{\nPCat n}$ and~$C' = F \circ G (C)$, there is a morphism~$\beta\co C \to C'$ which is the identity between~$C_0$ and~$C'_0$, and, for~$x,y \in C_0$, maps~$u \in C(x,y)$ to~$\iota_{x,y}(u) \in C'(x,y)$, and one can verify that it is an isomorphism which is natural in~$C$. Hence,~$F$ is an equivalence of categories. \end{proof} \section{Gray presentations induce Gray categories} \label{sec:gray-pres-gray-cat} Until the end of this section, we suppose fixed a Gray presentation~$\P$. Our goal is to prove \Thmr{gray-pres-gray-cat}, \ie that $\prespcat\P$ is a lax Gray category. We start by the exchange law for $3$-cells that we prove first on rewriting steps: \begin{lemapp} \label{lem:prespcat-peiffer-ctxt} Given rewriting steps $R_i\co \phi_i \TO \phi'_i \in \freecat\P_3$ for $i \in \set{1,2}$, such that $R_1,R_2$ are $1$\composable, we have, in $\prespcat\P_3$, \[ (R_1 \comp_1 \phi_2) \comp_2 (\phi_1' \comp_1 R_2) = (\phi_1 \comp_1 R_2) \comp_2 (R_1 \comp_1 \phi_2'). \] \end{lemapp} \begin{proof} Let $l_i,r_i \in \prespcat\P_1$, $\lambda_i,\rho_i \in \prespcat\P_2$, $A_i \in \P_3$ such that $R_i = \lambda_i \comp_0 (l_i \comp_0 A_i \comp_0 r_i) \comp_i \rho_i$ for $i \in \set{1,2}$, and $\mu_i,\mu'_i\in \prespcat\P_2$ such that $A_i\co \mu_i \TO \mu_i'$ for $i \in \set{1,2}$. 
In $\prespcat\P_3$, we have \begin{align*} & (R_1 \comp_1 \phi_2) \comp_2 (\phi_1' \comp_1 R_2) \\ =\;&\lambda_1 \\ & \comp_1 [((l_1 \comp_0 A_1 \comp_0 r_1) \comp_1 \rho_1 \comp_1 \lambda_2 \comp_1 ( l_2 \comp_0 \mu_2 \comp_0 r_2) ) \\ & \phantom{{}\comp_1{}}\comp_2 ((l_1 \comp_0 \mu'_1 \comp_0 r_1) \comp_1 \rho_1 \comp_1 \lambda_2 \comp_1 ( l_2 \comp_0 A_2 \comp_0 r_2 ))] \\ & \comp_1 \rho_2 && \text{(by the axioms of precategories)}\\ =\;&\lambda_1 \\ & \comp_1 [((l_1 \comp_0 \mu_1 \comp_0 r_1) \comp_1 \rho_1 \comp_1 \lambda_2 \comp_1 ( l_2 \comp_0 A_2 \comp_0 r_2 ) ) \\ & \phantom{{}\comp_1{}}\comp_2 ((l_1 \comp_0 A_1 \comp_0 r_1) \comp_1 \rho_1 \comp_1 \lambda_2 \comp_1 ( l_2 \comp_0 \mu_2' \comp_0 r_2))] \\ & \comp_1 \rho_2 && \text{(by independence generator)}\\ &= (\phi_1 \comp_1 R_2) \comp_2 (R_1 \comp_1 \phi_2') &&\qedhere \end{align*} \end{proof} \noindent We can now conclude that the exchange law for $3$-cells holds: \begin{lemapp} \label{lem:prespcat-peiffer} Given $F_i\co \phi_i \TO \phi_i' \in \prespcat\P_3$ for $i \in \set{1,2}$ such that $F_1,F_2$ are $1$\composable, we have, in $\prespcat\P_3$, \[ (F_1 \comp_1 \phi_2) \comp_2 (\phi_1' \comp_1 F_2) = (\phi_1 \comp_1 F_2) \comp_2 (F_1 \comp_1 \phi_2'). \] \end{lemapp} \begin{proof} As an element of $\prespcat\P_3$, $F_i$ can be written $F_i = R_{i,1} \comp_2 \cdots \comp_2 R_{i,k_i}$ where \[ R_{i,j} = \lambda_{i,j} \comp_1 (l_{i,j} \comp_0 A_{i,j} \comp_0 r_{i,j}) \comp_1 \rho_{i,j} \] for some $k_{i} \in \N$, $\lambda_{i,j},\rho_{i,j} \in \prespcat\P_2$, $l_{i,j},r_{i,j} \in \prespcat\P_1$, $A_{i,j} \in \P_3$ for $1 \le j \le k_i$, for $i \in \set{1,2}$. Note that \[ F_1 \comp_1 \phi_2 = (R_{1,1} \comp_1 \phi_2) \comp_2 \cdots \comp_2 (R_{1,k_1} \comp_1 \phi_2) \] and \[ \phi_1' \comp_1 F_2 = (\phi_1' \comp_1 R_{2,1}) \comp_2 \cdots \comp_2 (\phi_1' \comp_1 R_{2,k_2}). \] Then, by using \Lemr{prespcat-peiffer-ctxt} $k_1 k_2$ times as expected to reorder the $R_{1,j_1}$'s after the $R_{2,j_2}$'s for $1 \le j_i \le k_i$ for $i \in \set{1,2}$, we obtain that \[ (F_1 \comp_1 \phi_2) \comp_2 (\phi_1' \comp_1 F_2) = (\phi_1 \comp_1 F_2) \comp_2 (F_1 \comp_1 \phi_2'). \qedhere \] \end{proof} \noindent We now prove the various conditions on~$X_{-,-}$. First, a technical lemma: \begin{propapp} \label{prop:compat-Y-one-cells} Given $f\in \freecat\P_1$, $\phi,\psi \in \freecat\P_2$ with $f,\phi,\psi$ $0$-composable, there is a canonical isomorphism $(f \comp_0 \phi) \shuffle \psi \simeq \phi \shuffle \psi$ and for all $p \in \freecat{(\phi \shuffle \psi)}_1$, we have \[ \winterp{p}_{f \comp_0 \phi,\psi} = f \comp_0 \winterp{p}_{\phi,\psi} \] Similarly, given $\phi, \psi \in \freecat\P_2$ and $h\in \freecat\P_1$ with $\phi,\psi,h$ $0$-composable, we have a canonical isomorphism $\phi \shuffle (\psi \comp_0 h) \simeq \phi \shuffle \psi$ and for all $p \in \freecat{(\phi \shuffle (\psi \comp_0 h))}_1$, we have \[ \winterp{p}_{\phi,\psi \comp_0 h} = \winterp{p}_{\phi,\psi} \comp_0 h. \] Finally, given $\phi,\psi \in \freecat\P_2$ and $g \in \freecat\P_1$ with $\phi,g,\psi$ $0$-composable, we have a canonical isomorphism $(\phi \comp_0 g) \shuffle \psi \simeq \phi \shuffle (g \comp_0 \psi)$ and for all $p \in \freecat {((\phi \comp_0 g) \shuffle \psi)}_1$, we have \[ \winterp{p}_{\phi \comp_0 g,\psi} = \winterp{p}_{\phi,g \comp_0 \psi}. 
\] \end{propapp} \begin{proof} Let $f\in \freecat\P_1$, $\phi,\psi \in \freecat\P_2$ with $f,\phi,\psi$ $0$-composable and let $r,s \ge 0$, $f_i,g_i \in \freecat\P_1$, $\alpha_i \in \P_2$ for $i \in \set{1,\ldots,r}$ and $f'_j,g'_j \in \freecat\P_1$, $\alpha'_j \in \P_2$ for $j \in \set{1,\ldots,s}$ such that \[ \phi = (f_1 \comp_0 \alpha_1 \comp_0 g_1) \comp_1 \cdots \comp_1 (f_r \comp_0 \alpha_r \comp_0 g_r) \qtand \psi = (f'_1 \comp_0 \alpha'_1 \comp_0 g'_1) \comp_1 \cdots \comp_1 (f'_s \comp_0 \alpha'_s \comp_0 g'_s). \] By unfolding the definitions of $(f \comp_0 \phi) \shuffle \psi$ and $\phi \shuffle \psi$, we deduce a canonical isomorphism between them. Under this isomorphism, we easily verify that we have $\winterp{w}_{f \comp_0 \phi,\psi} = f \comp_0 \winterp{w}_{\phi,\psi}$ for $w \in ((f \comp_0 \phi) \shuffle \psi)_0$. Now, given $u \letter l_i \letter r_j v \in ((f\comp_0 \phi) \shuffle \psi)_0$, we have \begin{align*} \winterp{\wtrans_{u,v}}_{f \comp_0 \phi,\psi} &= \winterp{u}_{f \comp_0 \phi,\psi} \comp_1 (f \comp_0 f_i \comp_0 X_{\alpha_i,g_i \comp_0 f'_j,\alpha'_j} \comp_0 g'_j) \comp_1 \winterp{v}_{f \comp_0 \phi,\psi} \\ &= f \comp_0 (\winterp{u}_{ \phi,\psi} \comp_1 (f_i \comp_0 X_{\alpha_i,g_i \comp_0 f'_j,\alpha'_j} \comp_0 g'_j) \comp_1 \winterp{v}_{\phi,\psi} ) \\ &= f \comp_0 \winterp{\wtrans_{u,v}}_{\phi,\psi}. \end{align*} By functoriality of $\winterp{-}_{f \comp_0 \phi,\psi}$ and $\winterp{-}_{\phi,\psi}$, we deduce that, for all $p \in \freecat{((f \comp_0 \phi) \shuffle \psi)}_1$, \[ \winterp{p}_{f \comp_0 \phi,\psi} = f \comp_0 \winterp{p}_{\phi,\psi}. \] The two other properties are shown similarly. \end{proof} \noindent We can now conclude the simplest properties of~$X_{-,-}$: \begin{lemapp} \label{lem:compat-X-one-cells} Given $\phi \co f \To f'\in \prespcat\P_2$ and $\psi\co g \To g' \in \prespcat\P_2$, we have the following equalities in~$\prespcat\P_3$: \begin{enumerate}[label=(\roman*),ref=(\roman*)] \item \label{lem:compat-X-one-cells:units} $X_{\unit f,\psi} = \unit {f \comp_0 \psi}$ and $X_{\phi, \unit g} = \unit{\phi \comp_0 g}$ when $\phi,\psi$ are $0$\composable, \item \label{lem:compat-X-one-cells:left} $X_{l \comp_0 \phi,\psi} = l \comp_0 X_{\phi,\psi}$ for $l \in \freecat\P_1$ such that $l,\phi,\psi$ are $0$\composable, \item \label{lem:compat-X-one-cells:middle} $X_{\phi \comp_0 m,\psi} = X_{\phi, m \comp_0 \psi}$ for $m \in \freecat\P_1$ such that $\phi,m,\psi$ are $0$\composable, \item \label{lem:compat-X-one-cells:right} $X_{\phi,\psi \comp_0 r} = X_{\phi,\psi} \comp_0 r$ for $r \in \freecat\P_1$ such that $\phi,\psi,r$ are $0$\composable. \end{enumerate} \end{lemapp} \begin{proof} \ref{lem:compat-X-one-cells:units} is clear, since both $\wtrans_{\unit f,\psi}$ and $\wtrans_{\phi,\unit g}$ are identity paths on the unique $0$-cells of~$\freecat{(\unit f\shuffle\psi)}$ and~$\freecat{(\phi\shuffle\unit g)}$ respectively. \ref{lem:compat-X-one-cells:left} is a consequence of \Propr{compat-Y-one-cells}, since $\wtrans_{f\comp_0\phi,\psi}$ is sent to $\wtrans_{\phi,\psi}$ by the canonical isomorphism $(f \comp_0\phi) \shuffle \psi \simeq \phi \shuffle \psi$. \ref{lem:compat-X-one-cells:middle} and~\ref{lem:compat-X-one-cells:right} follow similarly. \end{proof} The last required properties on~$X_{-,-}$ are more difficult to prove.
In fact, we need a proper coherence theorem showing that, for $0$\composable $\phi,\psi \in \prespcat\P_2$, $X_{\phi,\psi} = \winterp{p}_{\phi,\psi}$ for all $p \in \freecat{(\phi\shuffle\psi)}_1$ parallel to $\wtrans_{\phi,\psi}$. We progressively introduce the necessary material to prove this fact below. Given a word $w \in (\phi \shuffle \psi)_0$, there is a function \[ \lindex_w\co \set{1,\ldots,\len\phi} \to \set{1,\ldots,\len{\phi} + \len{\psi}} \] defined such that, for $i \in \set{1,\ldots,\len{\phi}}$, if $w = w'\letter l_i w''$, then $\lindex_w(i) = \len{w'}+1$. We have that the function~$\lindex$ characterizes the existence of paths in $\freecat{(\phi\shuffle\psi)}$, as in: \begin{lemapp} \label{lem:s-path-criterion} Given $0$\composable $\phi,\psi \in \freecat\P_2$ and $w,w' \in (\phi \shuffle \psi)_0$, there is a path \[ p\co w \to w' \in \freecat{(\phi \shuffle \psi)}_1 \] if and only if $\lindex_w(i) \le \lindex_{w'}(i)$ for $1 \le i \le \len{\phi}$. \end{lemapp} \begin{proof} Given $\wtrans_{u,v}\co u \letter l_r \letter r_s v \to u \letter r_s \letter l_r v \in (\phi \shuffle \psi)_1$, it is clear that $\lindex_{u \letter l_r \letter r_s v}(i) \le \lindex_{u \letter r_s \letter l_r v}(i)$ for all $1 \le i \le \len\phi$, so that, given a path $p\co w \to w' \in \freecat{(\phi \shuffle \psi)}_1$, by induction on~$p$, we have $\lindex_w(i) \le \lindex_{w'}(i)$ for $1 \le i \le \len{\phi}$. Conversely, given $w,w' \in (\phi \shuffle \psi)_0$ such that $\lindex_w \le \lindex_{w'}$, we show by induction on~$N(w,w')$ defined by \[ N(w,w') = \sum_{1 \le i \le \len\phi}\lindex_{w'}(i) - \lindex_w(i) \] that there is a path $p\co w \to w' \in \freecat{(\phi \shuffle \psi)}_1$. If $N(w,w') = 0$, then $w = w'$ and $\id w\co w \to w'$ is a suitable path. Otherwise, let $i_{\max}$ be the largest $i \le \len{\phi}$ such that $\lindex_{w'}(i) > \lindex_w(i)$. Then, either $i_{\max} = \len{\phi}$ or $\lindex_w(i_{\max}) + 1 < \lindex_w(i_{\max} + 1)$ since \begin{align*} \lindex_w(i_{\max}) + 1 &\le \lindex_{w'}(i_{\max}) \\ & < \lindex_{w'}(i_{\max} + 1) \\ &= \lindex_w(i_{\max} + 1) \end{align*} In both cases, since $\lindex_w(i_{\max}) < \lindex_{w'}(i_{\max})$, the letter $\letter l_{i_{\max}}$ is not the last letter of~$w$ and the letter following it cannot be an $\letter l$-letter, so we can write $w = u \letter l_{i_{\max}} \letter r_j v$ for some words $u,v$ and $j \in \set{1,\ldots,\len{\psi}}$. We have a path generator $\wtrans_{u,v}\co w \to \tilde w \in (\phi \shuffle \psi)_1$ where $\tilde w = u \letter r_j \letter l_{i_{\max}} v$. Then, \[ \lindex_{\tilde w}(i) = \begin{cases} \lindex_w(i) & \text{if $i \neq i_{\max}$} \\ \lindex_w(i_{\max}) + 1 & \text{if $i = i_{\max}$} \end{cases} \] so $\lindex_{\tilde w} \le \lindex_{w'}$ and $N(\tilde w,w') < N(w,w')$. Thus, by induction, we get \[ p'\co \tilde w \to w' \in \freecat{(\phi \shuffle \psi)}_1 \] and we build a path $\wtrans_{u,v} \comp_0 p'\co w \to w' \in \freecat{(\phi \shuffle \psi)}_1$ as wanted. \end{proof} \noindent Given $0$\composable $\phi,\psi \in \freecat\P_2$ and $w = w_1\ldots w_{\len\phi+\len\psi} \in (\phi \shuffle \psi)_0$, we define $\Inv(w)$ as \begin{multline*} \Inv(w) = \setsize{\set{ (i,j) \mid 1 \le i < j \le \len\phi + \len\psi \text{ and } w_i = \letter r_{i'} \text{ and } w_j = \letter l_{j'} \\ \text{ for some $i' \in \set{1,\ldots,\len\psi}$ and $j' \in \set{1,\ldots,\len\phi}$}}}.
\end{multline*} \noindent We have that $\Inv$ characterizes the length of the paths of~$\freecat{(\phi\shuffle\psi)}$, as in: \begin{lemapp} \label{lem:s-path-length} Given $0$\composable $\phi,\psi \in \freecat\P_2$ and $p\co w \to w' \in \freecat{(\phi \shuffle \psi)}_1$, we have \[ \len{p} = \Inv(w') - \Inv(w). \] In particular, given $w,w' \in (\phi \shuffle \psi)_0$, all the paths $p\co w \to w' \in \freecat{(\phi \shuffle \psi)}_1$ have the same length. \end{lemapp} \begin{proof} We show this by induction on the length of~$p$. If $p = \unit w$, then the conclusion holds. Otherwise, $p = \wtrans_{u,u'}\comp_0 r$ for some $u,u' \in \Sigma_{\phi,\psi}$ and $r\co \tilde w \to w' \in \freecat{(\phi\shuffle\psi)}_1$. Then, by induction hypothesis, $\len{r} = \Inv(w') - \Inv(\tilde w)$. Note that, by the definition of~$\wtrans_{u,u'}$, $w = u \letter l_i \letter r_j u'$ and $\tilde w = u \letter r_j\letter l_i u'$ for some $i \in \set{1,\ldots,\len\phi}$ and $j \in \set{1,\ldots,\len\psi}$. Moreover, $\Inv(\tilde w) = \Inv(w) + 1$, since exchanging $\letter l_i$ and $\letter r_j$ creates exactly one new inversion and leaves the other pairs unchanged. Hence, \[ \len{p} = \len{r} + 1 = \Inv(w') - \Inv(\tilde w) + \Inv(\tilde w) - \Inv(w) = \Inv(w') - \Inv(w).\qedhere \] \end{proof} \noindent Given $0$\composable $\phi,\psi \in\freecat\P_2$, we now prove the following coherence property for~$\freecat{(\phi\shuffle\psi)}$: \begin{lemapp} \label{lem:s-relation} Let $\approx$ be a congruence on $\freecat{(\phi\shuffle\psi)}$. Suppose that, for all words $u_1,u_2,u_3 \in \Sigma_{\phi,\psi}$, $i,i' \in \set{1,\ldots,\len\phi}$ and $j,j' \in \set{1,\ldots,\len\psi}$ such that $u_1 \letter l_{i}\letter r_{j} u_2 \letter l_{i'}\letter r_{j'} u_3 \in (\phi \shuffle \psi)_0$, we have \[ \begin{tikzcd}[column sep={5em,between origins},cramped] & u_1 \letter l_{i}\letter r_{j} u_2 \letter l_{i'}\letter r_{j'} u_3 \arrow[dl,pos=0.6,"\wtrans_{u_1,u_2 \letter l_{i'}\letter r_{j'} u_3}"'] \arrow[dr,pos=0.6,"\wtrans_{u_1 \letter l_{i}\letter r_{j} u_2,u_3}"] & \\ \makebox[6ex][c]{$u_1 {\letter r_{j}\letter l_{i}} u_2 \letter l_{i'}\letter r_{j'} u_3$} \arrow[dr,pos=0.3,"\wtrans_{u_1 \letter r_{j}\letter l_{i} u_2,u_3}"'] & \approx & \makebox[6ex][c]{$u_1 \letter l_{i}\letter r_{j} u_2\letter r_{j'}\letter l_{i'} u_3$} \arrow[dl,pos=0.3,"\wtrans_{u_1,u_2\letter r_{j'}\letter l_{i'} u_3}"] \\ & u_1 \letter r_{j}\letter l_{i} u_2 \letter r_{j'}\letter l_{i'} u_3 & \end{tikzcd} \]% then, for all $p_1,p_2\co v \to w \in \freecat{(\phi\shuffle\psi)}_1$, we have $p_1 \approx p_2$. \end{lemapp} \begin{proof} We prove this by induction on~$\len{p_1}$. By \Lemr{s-path-length}, we have $\len{p_1} = \len{p_2}$. In particular, if $p_1 = \unit v$, then $p_2 = \unit v$. Otherwise, $p_i = q_i \comp_0 r_i$ with $q_i\co v \to v_i$ and $r_i\co v_i \to w$ and $\len{q_i} = 1$ for $i \in \set{1,2}$. If $q_1 = q_2$, then we conclude with the induction hypothesis on $r_1$ and $r_2$. Otherwise, up to symmetry, we have $q_1 = \wtrans_{u_1,u_2 \letter l_{i'}\letter r_{j'} u_3}$ and $q_2=\wtrans_{u_1 \letter{l}_i\letter{r}_j u_2, u_3}$ for some $u_1,u_2,u_3 \in \freecat\Sigma_{\phi,\psi}$, $i,i' \in \set{1,\ldots,\len\phi}$ and $j,j' \in \set{1,\ldots,\len\psi}$. Let \begin{align*} q_1' &= \wtrans_{u_1\letter r_j \letter l_i u_2,u_3}, & q_2' &= \wtrans_{u_1, u_2\letter r_{j'}\letter l_{i'} u_3}, & v' &=u_1\letter r_{j}\letter l_i u_2 \letter r_{j'}\letter l_{i'}u_3. \end{align*} Since we have a path $v \xto{q_1} v_1 \xto{r_1} w$, by \Lemr{s-path-criterion}, we have $\lindex_v(s) \le \lindex_w(s)$ for $s \in \set{1,\ldots,\len\phi}$.
Moreover, \[ \lindex_v(i) < \lindex_{v_1}(i) \le \lindex_w(i) \qtand \lindex_v(i') < \lindex_{v_2}(i') \le \lindex_w(i'). \] Also, for $s \in \set{1,\ldots,\len\phi}$, \[ \lindex_{v'}(s) = \begin{cases} \lindex_v(s) + 1 & \text{if $s \in \set{i,i'}$,} \\ \lindex_v(s) & \text{otherwise.} \end{cases} \] From the preceding properties, we deduce that $\lindex_{v'}(s) \le \lindex_w(s)$ for $s \in \set{1,\ldots,\len\phi}$. Thus, by \Lemr{s-path-criterion}, there is a path $r'\co v' \to w\in \freecat{(\phi\shuffle\psi)}_1$ as in \[ \begin{tikzcd}[column sep={4em,between origins}] & v_1 \ar[dr,"{q_1'}"{description}] \ar[drrr,"r_1"] & & \\ v \ar[ur,"q_1"] \ar[dr,"q_2"'] & & {v'} \ar[rr,"{r'}",pos=0.3] & & w \\ & v_2 \ar[ur,"{q_2'}"'{description}] \ar[urrr,"{r_2}"'] \end{tikzcd} \] Since $\len{r_i} = \len{p_i} - 1$ for $i \in \set{1,2}$, by induction hypothesis, we have $r_i \approx q_i' \comp_0 r'$ for $i \in \set{1,2}$, which can be extended to $q_i \comp_0 r_i \approx q_i \comp_0 q_i' \comp_0 r'$, since $\approx$ is a congruence. By hypothesis, we have $q_1 \comp_0 q_1' \approx q_2 \comp_0 q_2'$, which can be extended to $q_1 \comp_0 q_1' \comp_0 r' \approx q_2 \comp_0 q_2' \comp_0 r'$. By transitivity of~$\approx$, we get that $q_1 \comp_0 r_1 \approx q_2 \comp_0 r_2$, that is, $p_1 \approx p_2$. \end{proof} \noindent We then apply this coherence property to~$\winterp{-}_{-,-}$ and get that ``all exchange methods are equivalent'', as in: \begin{propapp} \label{prop:interchange-coherence} Given $0$\composable $\phi,\psi \in \prespcat\P_2$, for all $p_1,p_2\co u \to v \in \freecat{(\phi\shuffle\psi)}_1$, we have, in $\prespcat\P_3$, \[ \winterp{p_1}_{\phi,\psi} = \winterp{p_2}_{\phi,\psi}. \] \end{propapp} \begin{proof} By \Lemr{prespcat-peiffer}, for all words $u_1,u_2,u_3 \in \Sigma_{\phi,\psi}$, $i,i' \in \set{1,\ldots,\len\phi}$ and $j,j' \in \set{1,\ldots,\len\psi}$ such that $u_1 \letter l_{i}\letter r_{j} u_2 \letter l_{i'}\letter r_{j'} u_3 \in (\phi \shuffle \psi)_0$, we have \[ \begin{tikzcd}[column sep={6em,between origins},cramped] & \winterp{u_1 \letter l_{i}\letter r_{j} u_2 \letter l_{i'}\letter r_{j'} u_3}_{\phi,\psi} \arrow[dl,pos=0.6,"\winterp{\wtrans_{u_1,u_2 \letter l_{i'}\letter r_{j'} u_3}}_{\phi,\psi}"'] \arrow[dr,pos=0.6,"\winterp{\wtrans_{u_1 \letter l_{i}\letter r_{j} u_2,u_3}}_{\phi,\psi}"] & \\ \winterp{u_1 {\letter r_{j}\letter l_{i}} u_2 \letter l_{i'}\letter r_{j'} u_3}_{\phi,\psi} \arrow[dr,pos=0.3,"\winterp{\wtrans_{u_1 \letter r_{j}\letter l_{i} u_2,u_3}}_{\phi,\psi}"'] & = & \winterp{u_1 \letter l_{i}\letter r_{j} u_2\letter r_{j'}\letter l_{i'} u_3}_{\phi,\psi} \arrow[dl,pos=0.3,"\winterp{\wtrans_{u_1,u_2\letter r_{j'}\letter l_{i'} u_3}}_{\phi,\psi}"] \\ & \winterp{u_1 \letter r_{j}\letter l_{i} u_2 \letter r_{j'}\letter l_{i'} u_3}_{\phi,\psi} & \end{tikzcd} \] Moreover, the relation $\approx$ defined on parallel $p_1,p_2 \in \freecat{(\phi \shuffle \psi)}_1$ by $p_1 \approx p_2$ when $\winterp{p_1}_{\phi,\psi} = \winterp{p_2}_{\phi,\psi}$ is clearly a congruence. Hence, by \Lemr{s-relation}, we have that $\winterp{p_1}_{\phi,\psi} = \winterp{p_2}_{\phi,\psi}$ for all parallel $p_1,p_2 \in \freecat{(\phi\shuffle\psi)}_1$. \end{proof} \noindent The preceding property says in particular that $X_{\phi,\psi} = \winterp{p}_{\phi,\psi}$ for all $0$\composable $\phi,\psi \in \freecat\P_2$ and paths $p\in \freecat{(\phi\shuffle\psi)}_1$ parallel to~$\wtrans_{\phi,\psi}$. 
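\medskip\noindent The combinatorial arguments above are entirely effective. As an informal illustration only (it is not part of the formal development, and the function names are ours), the following Python sketch computes $\lindex$ and $\Inv$ on encoded shuffle words and performs the greedy path construction from the proofs of \Lemr{s-path-criterion} and \Lemr{s-path-length}.
\begin{verbatim}
# Informal sketch: shuffle words over {l_1..l_m, r_1..r_n},
# encoded as lists of pairs ('l', i) or ('r', j).

def lindex(w):
    """lindex_w(i): 1-based position of the letter l_i in w."""
    return {i: p + 1 for p, (kind, i) in enumerate(w) if kind == 'l'}

def inv(w):
    """Inv(w): number of pairs of positions (p, q) with p < q,
    w_p an r-letter and w_q an l-letter."""
    count, rs_seen = 0, 0
    for kind, _ in w:
        if kind == 'r':
            rs_seen += 1
        else:
            count += rs_seen
    return count

def greedy_path(w, w_target):
    """Path w -> w_target as a list of elementary exchanges
    l_i r_j ~> r_j l_i (recorded by the 0-based position of the
    exchanged l-letter), following the proof of the path criterion.
    Returns None when the criterion fails (no path exists)."""
    lw, lt = lindex(w), lindex(w_target)
    if any(lw[i] > lt[i] for i in lw):
        return None
    w, steps = list(w), []
    while lindex(w) != lt:
        lw = lindex(w)
        i_max = max(i for i in lw if lw[i] < lt[i])
        p = lw[i_max] - 1                # position of l_{i_max} in w
        assert w[p + 1][0] == 'r'        # the next letter is some r_j
        w[p], w[p + 1] = w[p + 1], w[p]  # one exchange step
        steps.append(p)
    return steps

# Example with len(phi) = 2 and len(psi) = 1:
w0 = [('l', 1), ('l', 2), ('r', 1)]      # word  l1 l2 r1
w1 = [('r', 1), ('l', 1), ('l', 2)]      # word  r1 l1 l2
path = greedy_path(w0, w1)
assert len(path) == inv(w1) - inv(w0) == 2
\end{verbatim}
\noindent On the example at the end of the listing, the length of the constructed path agrees with the difference of inversion numbers, as predicted by \Lemr{s-path-length}.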
Let $\phi,\psi \in \freecat\P_2$ be $0$\composable $2$-cells, and $\phi',\psi' \in \freecat\P_2$ be $0$\composable $2$-cells such that $\phi,\phi'$ and $\psi,\psi'$ are $1$\composable. To obtain the last required properties on~$X_{-,-}$, we need to relate $\phi\shuffle\psi$ and $\phi'\shuffle\psi'$ to $(\phi\comp_1\phi')\shuffle(\psi\comp_1\psi')$. Given $w \in (\phi \shuffle \psi)_0$, there is a functor \[ w\wcomp(-)\co \freecat{(\phi'\shuffle\psi')} \to \freecat{((\phi\comp_1\phi')\shuffle(\psi\comp_1\psi'))} \] which is uniquely defined by the mappings \begin{align*} u & \mapsto w\wshiftup (u) \\ \wtrans_{u_1,u_2} & \mapsto \wtrans_{w\wshiftup(u_1),\wshiftup(u_2)} \end{align*} for $u \in (\phi' \shuffle \psi')_0$ and $\wtrans_{u_1,u_2} \in (\phi' \shuffle \psi')_1$ and where, for $v = v_1\ldots v_k \in \freecat\Sigma_{\phi',\psi'}$, $\wshiftup(v) \in \freecat\Sigma_{\phi\comp_1\phi',\psi\comp_1\psi'}$ is defined by \[ \wshiftup(v)_r = \begin{cases} \letter l_{\len{\phi} + i} & \text{if $v_r = \letter l_i$ for some $i \in \set{1,\ldots,\len{\phi'}}$} \\ \letter r_{\len{\psi} + j} & \text{if $v_r = \letter r_j$ for some $j \in \set{1,\ldots,\len{\psi'}}$} \end{cases} \] for $r \in \set{1,\ldots,k}$. Similarly, given $w \in (\phi'\shuffle\psi')_0$, there is a functor \[ (-)\wcomp w\co \freecat{(\phi\shuffle\psi)} \to \freecat{((\phi\comp_1\phi')\shuffle(\psi\comp_1\psi'))} \] which is uniquely defined by the mappings \begin{align*} u & \mapsto u\wshiftup(w) \\ \wtrans_{u_1,u_2} & \mapsto \wtrans_{u_1,u_2\wshiftup(w)} \end{align*} for $u \in (\phi\shuffle\psi)_0$ and $\wtrans_{u_1,u_2} \in (\phi\shuffle\psi)_1$ and where $\wshiftup(-)$ is defined as above. \bigskip\noindent The functors $w\wcomp(-)$ and $(-)\wcomp w$ satisfy the following compatibility property: \begin{lemapp} \label{lem:compat-word-waction} Let $\phi,\psi \in \freecat\P_2$ be $0$\composable $2$-cells, and $\phi',\psi' \in \freecat\P_2$ be $0$\composable $2$-cells such that $\phi,\phi'$ and $\psi,\psi'$ are $1$\composable. Given $w \in (\phi\shuffle\psi)_0$, we have the following equalities in~$\freecat\P_3$: \begin{enumerate}[label=(\roman*),ref=(\roman*)] \item \label{lem:compat-word-waction:0-cells} $\winterp{w \wcomp (u)}_{\phi\comp_1\phi',\psi\comp_1\psi'} = \winterp{w}_{\phi,\psi} \comp_1 \winterp{u}_{\phi',\psi'}$ for $u \in (\phi'\shuffle\psi')_0$, \item \label{lem:compat-word-waction:1-cells} $\winterp{w \wcomp(p)}_{\phi\comp_1\phi',\psi\comp_1\psi'} = \winterp{w}_{\phi,\psi} \comp_1 \winterp{p}_{\phi',\psi'}$ for $p \in \freecat{(\phi'\shuffle\psi')}_1$. \end{enumerate} Similarly, given $w \in (\phi'\shuffle\psi')_0$, we have: \begin{enumerate}[label=(\roman*),ref=(\roman*)] \item $\winterp{(u)\wcomp w}_{\phi\comp_1\phi',\psi\comp_1\psi'} = \winterp{u}_{\phi,\psi} \comp_1 \winterp{w}_{\phi',\psi'}$ for $u \in (\phi\shuffle\psi)_0$, \item $\winterp{(p) \wcomp w}_{\phi\comp_1\phi',\psi\comp_1\psi'} = \winterp{p}_{\phi,\psi} \comp_1 \winterp{w}_{\phi',\psi'}$ for $p \in \freecat{(\phi\shuffle\psi)}_1$. \end{enumerate} \end{lemapp} \begin{proof} \newcommand\bigindices{{\phi\comp_1\phi',\psi\comp_1\psi'}} We only prove the first part, since the second part is similar. We start with~\ref{lem:compat-word-waction:0-cells}. We have $\winterp{w\wcomp(u)}_{\phi\comp_1\phi',\psi\comp_1\psi'} = \winterp{w\wshiftup(u)}^{1,1}_{\phi\comp_1\phi',\psi\comp_1\psi'}$.
By a simple induction on $w$, we obtain \begin{equation*} \winterp{w\wshiftup(u)}^{1,1}_{\phi\comp_1\phi',\psi\comp_1\psi'} = \winterp{w}^{1,1}_{\phi\comp_1\phi',\psi\comp_1\psi'} \comp_1 \winterp{\wshiftup(u)}^{\len{\phi}+1,\len{\psi}+1}_{\phi\comp_1\phi',\psi\comp_1\psi'} \end{equation*} and, by other simple inductions on~$w$ and~$u$, we get \begin{align*} \winterp{w}^{1,1}_{\phi\comp_1\phi',\psi\comp_1\psi'} &= \winterp{w}^{1,1}_{\phi,\psi} = \winterp{w}_{\phi,\psi} & \winterp{\wshiftup(u)}^{\len{\phi}+1,\len{\psi}+1}_{\phi\comp_1\phi',\psi\comp_1\psi'} &= \winterp{u}^{1,1}_{\phi',\psi'} = \winterp{u}_{\phi',\psi'} \end{align*} so that \ref{lem:compat-word-waction:0-cells} holds. For~\ref{lem:compat-word-waction:1-cells}, by induction on~$p$, it is sufficient to prove the equality for~$p = \wtrans_{u_1,u_2} \in (\phi'\shuffle\psi')_1$. Let $m = \len{\phi'}$, $n = \len{\psi'}$, and \[ (e_1 \comp_0 \alpha_1 \comp_0 f_1) \comp_1 \cdots \comp_1 (e_{m} \comp_0 \alpha_{m} \comp_0 f_m) \qquad (g_1 \comp_0 \beta_1 \comp_0 h_1) \comp_1 \cdots \comp_1 (g_{n} \comp_0 \beta_{n} \comp_0 h_n) \] be the unique decomposition of $\phi'$ and $\psi'$ respectively, for some $e_i,f_i,g_j,h_j \in \freecat\P_1$ and $\alpha_i,\beta_j \in \P_2$ for $i \in \set{1,\ldots,m}$ and $j\in \set{1,\ldots,n}$. We then have \begin{align*} \winterp{w\wcomp(\wtrans_{u_1,u_2})}_{\phi\comp_1\phi',\psi\comp_1\psi'} &= \winterp{\wtrans_{w\wshiftup(u_1),\wshiftup(u_2)}}_{\phi\comp_1\phi',\psi\comp_1\psi'} \\ &= \winterp{w\wshiftup(u_1)}^{1,1}_{\phi\comp_1\phi',\psi\comp_1\psi'}\comp_1(e_i \comp_0X_{\alpha_i,f_i\comp_0g_j,\beta_j}\comp_0h_j) \comp_1\winterp{\wshiftup(u_2)}^{k_l,k_r}_{\phi\comp_1\phi',\psi\comp_1\psi'} \end{align*} where $i,j$ are such that $u_1\letter l_i\letter r_ju_2 \in (\phi'\shuffle\psi')_0$ and \begin{align*} k_l &= \len\phi + i + 1 & k_r &= \len\psi + j + 1. \end{align*} By simple inductions, we obtain \begin{align*} \winterp{w\wshiftup(u_1)}^{1,1}_\bigindices &=\winterp{w}^{1,1}_\bigindices\comp_1\winterp{\wshiftup(u_1)}^{\len{\phi}+1,\len{\psi}+1}_\bigindices \\ &=\winterp{w}^{1,1}_{\phi,\psi}\comp_1\winterp{u_1}^{1,1}_{\phi',\psi'} \\ &=\winterp{w}_{\phi,\psi}\comp_1\winterp{u_1}^{1,1}_{\phi',\psi'} \\ \shortintertext{and} \winterp{\wshiftup(u_2)}^{k_l,k_r}_{\phi\comp_1\phi',\psi\comp_1\psi'} &= \winterp{u_2}^{i+1,j+1}_{\phi',\psi'} \end{align*} so that \begin{align*} \winterp{w\wcomp(\wtrans_{u_1,u_2})}_{\phi\comp_1\phi',\psi\comp_1\psi'} &=\winterp{w}_{\phi,\psi}\comp_1\winterp{u_1}^{1,1}_{\phi',\psi'} \comp_1(e_i \comp_0X_{\alpha_i,f_i\comp_0g_j,\beta_j}\comp_0h_j) \comp_1\winterp{u_2}^{i+1,j+1}_{\phi',\psi'} \\ &= \winterp{w}_{\phi,\psi} \comp_1 \winterp{\wtrans_{u_1,u_2}}_{\phi',\psi'}. \qedhere \end{align*} \end{proof} \goodbreak\noindent We can now conclude the last required properties on~$X_{-,-}$: \begin{lemapp} \label{lem:compat-X-one-comp} Given $1$-composable $\phi,\phi' \in \prespcat\P_2$, $1$-composable $\psi,\psi' \in \prespcat\P_2$ such that $\phi,\psi$ are $0$-composable, we have the following equalities in~$\prespcat\P_3$: \begin{align*} X_{\phi \comp_1 \phi',\psi} &= ((\phi \comp_0 \csrc_1(\psi))\comp_1 X_{\phi',\psi})\comp_2 (X_{\phi,\psi}\comp_1(\phi' \comp_0 \ctgt_1(\psi))) \\ \shortintertext{and} X_{\phi,\psi \comp_1 \psi'} &= (X_{\phi,\psi} \comp_1 (\ctgt_1(\phi) \comp_0 \psi')) \comp_2 ((\csrc_1(\phi) \comp_0 \psi) \comp_1 X_{\phi,\psi'}). \end{align*} \end{lemapp} \begin{proof} \newcommand\phii{\phi\comp_1\phi'}% We only prove the first equality, since the second one is similar.
By definition of~$X_{\phii,\psi}$, we have $X_{\phii,\psi} = \winterp{\wtrans_{\phii,\psi}}_{\phii,\psi}$. Moreover, by~\Propr{interchange-coherence}, $\winterp{\wtrans_{\phii,\psi}}_{\phii,\psi} = \winterp{p}_{\phii,\psi}$ in~$\prespcat\P_3$ for all paths $p \in \freecat{((\phii)\shuffle\psi)}_1$ parallel to~$\wtrans_{\phii,\psi}$. In particular, \[ \winterp{\wtrans_{\phii,\psi}}_{\phii,\psi} = \winterp{ (w\wcomp(\wtrans_{\phi',\psi})) \comp_0 ((\wtrans_{\phi,\psi})\wcomp w')}_{\phii,\psi} \] where {\abovedisplayskip=0pt% \begin{align*} w &= \letter l_1\ldots\letter l_{\len{\phi}} & w' &= \letter l_1\ldots\letter l_{\len{\phi'}} \end{align*}}% are the only $0$-cells of $\phi \shuffle \unit{\csrc(\psi)}$ and $\phi' \shuffle \unit{\ctgt(\psi)}$ respectively. Thus, \vspace{\abovedisplayskip}% \par\noindent{% \abovedisplayshortskip=0pt% \abovedisplayskip=0pt% \belowdisplayskip=0pt% \belowdisplayshortskip=0pt% \newdimen\mymargin\mymargin=25em% \begin{align*} \winterp{\wtrans_{\phii,\psi}}_{\phii,\psi} &= \winterp{ (w\wcomp(\wtrans_{\phi',\psi})) \comp_0 ((\wtrans_{\phi,\psi})\wcomp w')}_{\phii,\psi} \\ &= \winterp{ (w\wcomp(\wtrans_{\phi',\psi}))}_{\phii,\psi} \comp_2 \winterp{((\wtrans_{\phi,\psi})\wcomp w')}_{\phii,\psi} \\ \shortintertext{\hspace*{\mymargin}(by functoriality of~$\winterp{-}_{\phii,\psi}$)} &= (\winterp{w}_{\phi,\unit{\csrc(\psi)}} \comp_1 \winterp{\wtrans_{\phi',\psi}}_{\phi',\psi}) \comp_2 (\winterp{\wtrans_{\phi,\psi}}_{\phi,\psi}\comp_1\winterp{w'}_{\phi',\unit{\ctgt(\psi)}}) \\ \shortintertext{\hspace*{\mymargin}(by \Lemr{compat-word-waction})} &= ((\phi \comp_0 \csrc_1(\psi)) \comp_1 X_{\phi',\psi}) \comp_2 (X_{\phi,\psi}\comp_1(\phi'\comp_0 \ctgt_1(\psi))) \shortintertext{\hspace*{\mymargin}(by definition of~$\winterp{-}_{-,-}$ and~$X_{-,-}$).} \end{align*}}% \vskip-\baselineskip\vskip-\jot\vskip\belowdisplayskip \noindent Hence, \[ X_{\phii,\psi} = ((\phi \comp_0 \csrc_1(\psi)) \comp_1 X_{\phi',\psi}) \comp_2 (X_{\phi,\psi}\comp_1(\phi'\comp_0 \ctgt_1(\psi))).\qedhere \] \end{proof} \noindent We now prove the compatibility between $3$-cells and interchangers. We start by proving the compatibility with $3$\generators: \begin{lemapp} \label{lem:prespcat-exch-gen} Given $A\co \phi \TO \phi'\co f \To f' \in \P_3$ and $\psi\co g \To g' \in \prespcat\P_2$ such that $A,\psi$ are $0$\composable, we have, in $\prespcat\P_3$, \[ ((A \comp_0 g) \comp_1 (f' \comp_0 \psi)) \comp_2 X_{\phi',\psi} = X_{\phi,\psi} \comp_2 ((f \comp_0 \psi) \comp_1 (A \comp_0 g')). \] Similarly, given $\phi\co f\To f' \in \prespcat\P_2$ and $B\co \psi\TO\psi'\co g\To g'$ such that $\phi,B$ are $0$\composable, we have, in $\prespcat\P_3$, \[ X_{\phi,\psi} \comp_2 ((f \comp_0 B) \comp_1 (\phi \comp_0 g')) = ((\phi \comp_0 g) \comp_1 (f' \comp_0 B)) \comp_2 X_{\phi,\psi'}. \] \end{lemapp} \begin{proof} We only prove the first part of the property, since the other one is symmetric, and we do so by an induction on~$\len\psi$. If $\len\psi = 0$, $\psi$ is an identity and the result follows. Otherwise, $\psi = w \comp_1 \tilde\psi$ where $w = (l \comp_0 \alpha \comp_0 r)$ with $l,r \in \prespcat\P_1$, $\alpha\co h \To h' \in \P_2$ and $\tilde\psi \in \prespcat\P_2$ with $\len{\tilde\psi} = \len\psi-1$. Let $\tilde g = \ctgt_1(w)$.
By \Lemr{compat-X-one-comp}, we have \begin{align} \label{eq:x-phi-psi} X_{\phi,\psi} &= (X_{\phi,w} \comp_1 (f'\comp_0 \tilde\psi)) \comp_2 ((f \comp_0 w) \comp_1 X_{\phi,\tilde\psi}) \\ \label{eq:x-phi-psip} X_{\phi',\psi} &= (X_{\phi',w} \comp_1 (f'\comp_0 \tilde\psi)) \comp_2 ((f \comp_0 w) \comp_1 X_{\phi',\tilde\psi}). \end{align} Also, by \Lemr{compat-X-one-cells}\ref{lem:compat-X-one-cells:right}, we have \begin{align} \label{eq:x-phip?-w} X_{\phi,w} &= X_{\phi,l \comp_0 \alpha} \comp_0 r & X_{\phi',w} &= X_{\phi',l \comp_0 \alpha} \comp_0 r \end{align} so that \begin{equation} \label{eq:A-X-w} \begin{aligned}[c] & ((A \comp_0 g) \comp_1 (f' \comp_0 w)) \comp_2 X_{\phi',w} \\ =\; & \left[((A \comp_0 l \comp_0 h) \comp_1 (f' \comp_0 l \comp_0 \alpha)) \comp_2 X_{\phi',l \comp_0 \alpha}\right] \comp_0 r \\ =\; & \left[X_{\phi,l \comp_0 \alpha} \comp_2 ((f \comp_0 l \comp_0 \alpha) \comp_1 (A \comp_0 l \comp_0 h'))\right] \comp_0 r \\ & \text{\hspace*{15em}(by interchange naturality generator)} \\ =\; & X_{\phi,w} \comp_2 ((f \comp_0 w) \comp_1 (A \comp_0 g')). \end{aligned} \end{equation} Thus, \begin{align*} & ((A \comp_0 g) \comp_1 (f' \comp_0 \psi)) \comp_2 X_{\phi',\psi} \\ =\; & ((A \comp_0 g) \comp_1 (f' \comp_0 w) \comp_1 (f' \comp_0 \tilde\psi)) \\ & \hspace*{1em}\comp_2 (X_{\phi',w} \comp_1 (f'\comp_0 \tilde\psi)) \comp_2 ((f \comp_0 w) \comp_1 X_{\phi',\tilde\psi}) && \text{(by \eqref{eq:x-phi-psip})}\\ =\; & \left[ ( ((A \comp_0 g) \comp_1 (f' \comp_0 w) ) \comp_2 X_{\phi',w} ) \comp_1 (f'\comp_0 \tilde\psi)\right] \\ & \hspace*{1em}\comp_2 ((f \comp_0 w) \comp_1 X_{\phi',\tilde\psi}) \\ =\; & \left[ ( X_{\phi,w} \comp_2 ((f \comp_0 w) \comp_1 (A \comp_0 \tilde g) ) ) \comp_1 (f'\comp_0 \tilde\psi)\right] \\ & \hspace*{1em}\comp_2 ((f \comp_0 w) \comp_1 X_{\phi',\tilde\psi}) && \text{(by \eqref{eq:A-X-w})}\\ =\; & ( X_{\phi,w} \comp_1 (f'\comp_0 \tilde\psi)) \\ & \hspace*{1em}\comp_2 ((f \comp_0 w) \comp_1 (A \comp_0 \tilde g) \comp_1 (f'\comp_0 \tilde\psi))\comp_2 ((f \comp_0 w) \comp_1 X_{\phi',\tilde\psi}) \\ =\; & ( X_{\phi,w} \comp_1 (f'\comp_0 \tilde\psi)) \\ & \hspace*{1em}\comp_2 \left[ (f \comp_0 w) \comp_1 (((A \comp_0 \tilde g) \comp_1 (f'\comp_0 \tilde\psi))\comp_2 X_{\phi',\tilde\psi}) \right]\\ =\; & ( X_{\phi,w} \comp_1 (f'\comp_0 \tilde\psi)) \\ & \hspace*{1em}\comp_2 \left[ (f \comp_0 w) \comp_1 (X_{\phi',\tilde\psi} \comp_2 ((f\comp_0 \tilde\psi) \comp_1 (A \comp_0 g'))) \right] && \text{(by induction)}\\ =\; & ( X_{\phi,w} \comp_1 (f'\comp_0 \tilde\psi)) \comp_2 ((f \comp_0 w) \comp_1 (X_{\phi',\tilde\psi})) \\ & \hspace*{1em}\comp_2 ((f \comp_0 w) \comp_1 (f\comp_0 \tilde\psi) \comp_1 (A \comp_0 g')) \\ =\; & X_{\phi,\psi} \comp_2 ((f \comp_0 \psi) \comp_1 (A \comp_0 g')) && \text{(by \eqref{eq:x-phi-psi})}. \qedhere \end{align*} \end{proof} \noindent Next, we prove the compatibility between interchangers and rewriting steps: \begin{lemapp} \label{lem:prespcat-exch-ctxt} Given a rewriting step~$R\co \phi\TO\phi'\co f\To f' \in \freecat\P_3$ with $R = \lambda \comp_1 (l \comp_0 A \comp_0 r) \comp_1 \rho$ for some $l,r \in \freecat\P_1$, $\lambda,\rho \in \freecat\P_2$, $A\co \mu \TO \mu' \in \P_3$, and $\psi\co g\To g'\in \freecat\P_2$ such that $R,\psi$ are $0$\composable, we have, in $\prespcat\P_3$, \begin{equation} \label{eq:prespcat-exch-ctxt:goal} ((R \comp_0 g) \comp_1 (f' \comp_0 \psi)) \comp_2 X_{\phi',\psi} = X_{\phi,\psi} \comp_2 ((f \comp_0 \psi) \comp_1 (R \comp_0 g')). 
\end{equation} Similarly, given $\phi \in \freecat\P_2$ and a rewriting step $S\co \psi\TO\psi'\co g\To g'\in \freecat\P_3$ with $S = \lambda \comp_1 (l \comp_0 B \comp_0 r) \comp_1 \rho$ for some $\lambda,\rho \in \freecat\P_2$, $l,r \in \freecat\P_1$, $B\co \nu \TO \nu' \in \P_3$ such that $\phi,S$ are $0$\composable, we have, in $\prespcat\P_3$, \[ X_{\phi,\psi} \comp_2 ((f \comp_0 B) \comp_1 (\phi \comp_0 g')) = ((\phi \comp_0 g) \comp_1 (f' \comp_0 B)) \comp_2 X_{\phi,\psi'}. \] \end{lemapp} \begin{proof} By symmetry, we only prove the first part. Let \begin{align*} \tilde \mu &= l \comp_0 \mu \comp_0 r & h &= \csrc_1(\mu) & \tilde h &= \csrc_1(\tilde \mu) \\ \tilde \mu' &= l \comp_0 \mu' \comp_0 r & h' &= \ctgt_1(\mu') & \tilde h' &= \ctgt_1(\tilde \mu) \end{align*} We have \begin{align*} R \comp_0 g &= (\lambda \comp_0 g) \comp_1 (l \comp_0 A \comp_0 r \comp_0 g) \comp_1 (\rho \comp_0 g) \end{align*} and, by \Lemr{compat-X-one-comp}, \begin{equation} \label{eq:prespcat-exch-ctxt:X-dec} \begin{aligned} X_{\phi,\psi} =\; & (((\lambda \comp_1 \tilde \mu) \comp_0 g) \comp_1 X_{\rho,\psi}) \\ &\hspace*{1em}\comp_2 (((\lambda \comp_0 g) \comp_1 X_{\tilde \mu,\psi} \comp_1 (\rho \comp_0 g'))) \\ &\hspace*{1em}\comp_2 ((X_{\lambda,\psi} \comp_1 ((\tilde \mu \comp_1 \rho) \comp_0 g'))) \end{aligned} \end{equation} \begin{equation} \label{eq:prespcat-exch-ctxt:X-decp} \begin{aligned} X_{\phi',\psi} =\; & (((\lambda \comp_1 \tilde \mu') \comp_0 g) \comp_1 X_{\rho,\psi}) \\ &\hspace*{1em}\comp_2 (((\lambda \comp_0 g) \comp_1 X_{\tilde \mu',\psi} \comp_1 (\rho \comp_0 g'))) \\ &\hspace*{1em}\comp_2 ((X_{\lambda,\psi} \comp_1 ((\tilde \mu' \comp_1 \rho) \comp_0 g'))). \end{aligned} \end{equation} We start the calculation of the left-hand side of~\eqref{eq:prespcat-exch-ctxt:goal}, using~\eqref{eq:prespcat-exch-ctxt:X-decp}. We get \begin{align*} & ((R \comp_0 g) \comp_1 (f' \comp_0 \psi)) \comp_2 (((\lambda \comp_1 \tilde \mu') \comp_0 g) \comp_1 X_{\rho,\psi}) \\ =\;& (\lambda \comp_0 g) \\ & \hspace*{1em}\comp_1 \Big[((l \comp_0 A \comp_0 r \comp_0 g) \comp_1 (\rho \comp_0 g) \comp_1 (f' \comp_0 \psi)) \comp_2 ((\mu'\comp_0 g) \comp_1 X_{\rho,\psi} ) \Big] \\ =\;&(\lambda \comp_0 g) \\ & \hspace*{1em}\comp_1 \Big[((\mu \comp_0 g) \comp_1 X_{\rho,\psi} ) \comp_2 ((l \comp_0 A \comp_0 r \comp_0 g) \comp_1 (\tilde h' \comp_0 \psi) \comp_1 (\rho \comp_0 g'))\Big] && \text{(by \Lemr{prespcat-peiffer})} \\ =\;&((\lambda \comp_0 g) \comp_1 (\tilde \mu \comp_0 g) \comp_1 X_{\rho,\psi} ) \\ &\hspace*{1em}\comp_2 ((\lambda \comp_0 g) \comp_1 (l \comp_0 A \comp_0 r \comp_0 g) \comp_1 (\tilde h' \comp_0 \psi) \comp_1 (\rho \comp_0 g')). \end{align*} Also, we do a step of calculation for the right-hand side of~\eqref{eq:prespcat-exch-ctxt:goal}, using~\eqref{eq:prespcat-exch-ctxt:X-dec}. We get \begin{align*} & (X_{\lambda,\psi} \comp_1 ((\tilde \mu \comp_1 \rho) \comp_0 g')) \comp_2 ((f \comp_0 \psi) \comp_1 (R \comp_0 g')) \\ =\; & ((\lambda \comp_0 g) \comp_1 (\tilde h \comp_0 \psi) \comp_1 (l \comp_0 A \comp_0 r \comp_0 g') \comp_1 (\rho \comp_0 g')) \\ &\hspace*{1em}\comp_2 (X_{\lambda,\psi} \comp_1 (\tilde \mu' \comp_0 g') \comp_1 (\rho \comp_0 g') ). \end{align*} Finally, we do the last step of calculation between the left-hand side and the right-hand side of~\eqref{eq:prespcat-exch-ctxt:goal}. 
Note that \begin{align*} & ((l \comp_0 A \comp_0 r \comp_0 g) \comp_1 (\tilde h' \comp_0 \psi)) \comp_2 X_{\tilde \mu',\psi} \\ =\; & l \comp_0 (((A \comp_0 r \comp_0 g) \comp_1 (h' \comp_0 r \comp_0 \psi)) \comp_2 X_{\mu' \comp_0 r,\psi}) && \text{(by \Lemr{compat-X-one-cells}\ref{lem:compat-X-one-cells:left})} \\ =\; & l \comp_0 (((A \comp_0 r \comp_0 g) \comp_1 (h' \comp_0 r \comp_0 \psi)) \comp_2 X_{\mu',r \comp_0 \psi}) && \text{(by \Lemr{compat-X-one-cells}\ref{lem:compat-X-one-cells:middle})} \\ =\; & l \comp_0 (X_{\mu,r \comp_0 \psi} \comp_2 ((h \comp_0 r \comp_0 \psi) \comp_1 (A \comp_0 r \comp_0 g'))) && \text{(by \Lemr{prespcat-exch-gen})}\\ =\; &l \comp_0 (X_{\mu \comp_0 r,\psi} \comp_2 ((h \comp_0 r \comp_0 \psi) \comp_1 (A \comp_0 r \comp_0 g'))) && \text{(by \Lemr{compat-X-one-cells}\ref{lem:compat-X-one-cells:middle})} \\ =\; &X_{\tilde \mu,\psi} \comp_2 ((\tilde h \comp_0 \psi) \comp_1 (l \comp_0 A \comp_0 r \comp_0 g')) && \text{(by \Lemr{compat-X-one-cells}\ref{lem:compat-X-one-cells:left})} \end{align*} so that \begin{align*} &((\lambda \comp_0 g) \comp_1 (l \comp_0 A \comp_0 r \comp_0 g) \comp_1 (\tilde h' \comp_0 \psi) \comp_1 (\rho \comp_0 g')) \comp_2 ((\lambda \comp_0 g) \comp_1 X_{\tilde \mu',\psi} \comp_1 (\rho \comp_0 g')) \\ =\; &(\lambda \comp_0 g) \comp_1 \left[ ((l \comp_0 A \comp_0 r \comp_0 g) \comp_1 (\tilde h' \comp_0 \psi)) \comp_2 X_{\tilde \mu',\psi}\right] \comp_1 (\rho \comp_0 g') \\ =\; &(\lambda \comp_0 g) \comp_1 \left[ X_{\tilde \mu,\psi} \comp_2 ((\tilde h \comp_0 \psi) \comp_1 (l \comp_0 A \comp_0 r \comp_0 g'))\right] \comp_1 (\rho \comp_0 g') \\ =\; &((\lambda \comp_0 g) \comp_1 X_{\tilde \mu,\psi} \comp_1 (\rho \comp_0 g'))\comp_2 ((\lambda \comp_0 g) \comp_1 (\tilde h \comp_0 \psi) \comp_1 (l \comp_0 A \comp_0 r \comp_0 g') \comp_1 (\rho \comp_0 g')). \end{align*}% By combining the previous equations, we obtain \begin{align*} & ((R \comp_0 g) \comp_1 (f' \comp_0 \psi)) \comp_2 X_{\phi',\psi} \\ =\; & ((\lambda \comp_0 g) \comp_1 (l \comp_0 A \comp_0 r \comp_0 g) \comp_1 (\rho \comp_0 g)\comp_1 (f' \comp_0 \psi)) \\ & \hspace*{1em}\comp_2 (((\lambda \comp_1 \tilde \mu') \comp_0 g) \comp_1 X_{\rho,\psi}) \\ &\hspace*{1em}\comp_2 (((\lambda \comp_0 g) \comp_1 X_{\tilde \mu',\psi} \comp_1 (\rho \comp_0 g'))) \\ &\hspace*{1em}\comp_2 ((X_{\lambda,\psi} \comp_1 ((\tilde \mu' \comp_1 \rho) \comp_0 g'))) \\ =\; & (((\lambda \comp_1 \tilde \mu) \comp_0 g) \comp_1 X_{\rho,\psi}) \\ &\hspace*{1em}\comp_2 (((\lambda \comp_0 g) \comp_1 X_{\tilde \mu,\psi} \comp_1 (\rho \comp_0 g'))) \\ &\hspace*{1em}\comp_2 ((X_{\lambda,\psi} \comp_1 ((\tilde \mu \comp_1 \rho) \comp_0 g'))) \\ & \hspace*{1em}\comp_2 ((f \comp_0 \psi) \comp_1 (\lambda \comp_0 g') \comp_1 (l \comp_0 A \comp_0 r \comp_0 g') \comp_1 (\rho \comp_0 g')) \\ =\; & X_{\phi,\psi} \comp_2 ((f \comp_0 \psi) \comp_1 (R \comp_0 g')) \end{align*} which is what we wanted. \end{proof} \noindent We can deduce the complete compatibility between interchangers and $3$-cells: \begin{lemapp} \label{lem:prespcat-nat-exch} Given $F\co \phi \TO \phi'\co f \To f' \in \prespcat\P_3$ and $\psi\co g \To g' \in \prespcat\P_2$ such that $F,\psi$ are $0$\composable, we have \[ ((F \comp_0 g) \comp_1 (f' \comp_0 \psi)) \comp_2 X_{\phi',\psi} = X_{\phi,\psi} \comp_2 ((f \comp_0 \psi) \comp_1 (F \comp_0 g')).
\] Similarly, given $\phi\co f \To f' \in\prespcat\P_2$ and $G\co \psi \TO \psi'\co g \To g' \in \prespcat\P_3$ such that $\phi,G$ are $0$\composable, we have \[ X_{\phi,\psi} \comp_2 ((f \comp_0 G) \comp_1 (\phi \comp_0 g')) = ((\phi \comp_0 g) \comp_1 (f' \comp_0 G)) \comp_2 X_{\phi,\psi'}. \] \end{lemapp} \begin{proof} Recall that each $3$-cell of~$\prespcat\P$ can be written as a sequence of rewriting steps of~$\P$. By induction on the length of such a sequence defining $F$ or $G$ as in the statement, we conclude using \Lemr{prespcat-exch-ctxt}. \end{proof} \noindent We can conclude that: \ifx\graypresgraycatthm\undefined \begin{theo} Some theorem. \end{theo} \else \graypresgraycatthm* \fi \begin{proof} The axioms of lax Gray category follow from \Lemr{compat-X-one-cells}, \Lemr{compat-X-one-comp}, \Lemr{prespcat-peiffer} and \Lemr{prespcat-nat-exch}. \end{proof} \section{Finiteness of critical branchings} \label{sec:finiteness-cp} In this section, we give a proof of \Thmr{finite-cp}, \ie that Gray presentations, under some reasonable conditions, have a finite number of critical branchings. Our proof is constructive, so that we can extract a program to compute the critical branchings of such Gray presentations. First, we aim at showing that there is no critical branching~$(S_1,S_2)$ of a Gray presentation~$\P$ where both inner $3$\generators of $S_1$ and $S_2$ are interchange generators. We begin with a technical lemma for minimal and independent branchings: \begin{lemapp} \label{lem:caract-min-indep} Given a minimal local branching $(S_1,S_2)$ of a Gray presentation~$\P$, with \[S_i = \lambda_i \comp_1 (l_i \comp_0 A_i \comp_0 r_i) \comp_1 \rho_i\] and $l_i,r_i \in \freecat\P_1$, $\lambda_i,\rho_i \in \freecat\P_2$, $A_i \in \P_3$ for $i \in \set{1,2}$, the following hold: \begin{enumerate}[label=(\roman*),ref=(\roman*)] \item \label{lem:caract-min-indep:lambda} either $\lambda_1$ or $\lambda_2$ is an identity, \item \label{lem:caract-min-indep:rho} either $\rho_1$ or $\rho_2$ is an identity, \item \label{lem:caract-min-indep:indep} $(S_1,S_2)$ is independent if and only if \[ \len{\csrc_2(A_1)} + \len{\csrc_2(A_2)} \le \len{\csrc_2(S_1)} \qqtand \len{\lambda_1}\len{\rho_1} = \len{\lambda_2}\len{\rho_2} = 0. \] \end{enumerate} If $(S_1,S_2)$ is moreover not independent: \begin{enumerate}[start=4,label=(\roman*),ref=(\roman*)] \item \label{lem:caract-min-indep:l} either $l_1$ or $l_2$ is an identity, \item \label{lem:caract-min-indep:r} either $r_1$ or $r_2$ is an identity. \end{enumerate} \end{lemapp} \begin{proof} Suppose that neither $\lambda_1$ nor $\lambda_2$ is an identity. Then, since \[ \lambda_1 \comp_1 (l_1 \comp_0 \csrc_2(A_1) \comp_0 r_1) \comp_1 \rho_1 = \lambda_2 \comp_1 (l_2 \comp_0 \csrc_2(A_2) \comp_0 r_2) \comp_1 \rho_2, \] we have $\lambda_i = w \comp_1 \lambda_i'$ for some $w \in \freecat\P_2$ and $\lambda_i' \in \freecat\P_2$ for $i \in \set{1,2}$, such that $\len{w} \ge 1$, contradicting the minimality of $(S_1,S_2)$. So either $\lambda_1$ or $\lambda_2$ is an identity and similarly for $\rho_1$ and $\rho_2$, which concludes~\ref{lem:caract-min-indep:lambda} and~\ref{lem:caract-min-indep:rho}. By the definition of independent branching, the first implication of~\ref{lem:caract-min-indep:indep} is trivial. For the converse, suppose that $(S_1,S_2)$ is such that \[ \len{\csrc_2(A_1)} + \len{\csrc_2(A_2)} \le \len{\csrc_2(S_1)} \qtand \len{\lambda_1}\len{\rho_1} = \len{\lambda_2}\len{\rho_2} = 0. \] We can suppose by symmetry that $\lambda_1$ is a unit.
Since $\len{\csrc_2(S_1)} = \len{\lambda_1} + \len{\csrc_2(A_1)} + \len{\rho_1}$, we have that $\len{\csrc_2(A_2)} \le \len{\rho_1}$. If $\len{\rho_1} = 0$, then \[ S_1 = l_1 \comp_0 A_1 \comp_0 r_1 \qtand \len{\csrc_2(A_2)} = 0, \] thus, since $\len{\lambda_2}\len{\rho_2} = 0$, we have \[ \text{either} \quad S_2 = \csrc_2(S_1) \comp_1 (l_2 \comp_0 A_2 \comp_0 r_2) \quad\text{or}\quad S_2 = (l_2 \comp_0 A_2 \comp_0 r_2) \comp_1 \csrc_2(S_1). \] In both cases, $(S_1,S_2)$ is independent. Otherwise, $\len{\rho_1} > 0$ and, by~\ref{lem:caract-min-indep:rho}, we have $\len{\rho_2} = 0$ so that \[ S_1 = (l_1 \comp_0 A_1 \comp_0 r_1) \comp_1 \rho_1 \qtand S_2 = \lambda_2 \comp_1 (l_2 \comp_0 A_2 \comp_0 r_2). \] Since $\len{\csrc_2(A_2)} \le \len{\rho_1}$, we have $\rho_1 = \chi \comp_1 (l_2 \comp_0 \csrc_2(A_2) \comp_0 r_2)$ for some $\chi \in \freecat\P_2$ and, since $\csrc_2(S_1) = \csrc_2(S_2)$, we get \[ (l_1 \comp_0 \csrc_2(A_1) \comp_0 r_1) \comp_1 \chi \comp_1 (l_2 \comp_0 \csrc_2(A_2) \comp_0 r_2) = \lambda_2 \comp_1 (l_2 \comp_0 \csrc_2(A_2) \comp_0 r_2). \] So $\lambda_2 = (l_1 \comp_0 \csrc_2(A_1) \comp_0 r_1) \comp_1 \chi$ and hence $(S_1,S_2)$ is an independent branching, which concludes the proof of~\ref{lem:caract-min-indep:indep}. Finally, suppose that $(S_1,S_2)$ is not independent. By~\ref{lem:caract-min-indep:indep}, it implies that \[ \text{either}\quad \len{\csrc_2(A_1)} + \len{\csrc_2(A_2)} > \len{\csrc_2(S_1)} \quad\text{or}\quad \len{\lambda_1}\len{\rho_1} > 0 \quad\text{or}\quad \len{\lambda_2}\len{\rho_2} > 0. \] If $\len{\lambda_1}\len{\rho_1} > 0$, then $\len{\lambda_2} = \len{\rho_2} = 0$ by~\ref{lem:caract-min-indep:lambda} and~\ref{lem:caract-min-indep:rho}, so that \[ \lambda_1 \comp_1 (l_1 \comp_0 A_1 \comp_0 r_1) \comp_1 \rho_1 = l_2 \comp_0 A_2 \comp_0 r_2 \] thus there exists $\lambda_1',\rho_1' \in \freecat\P_2$ such that \[ \lambda_1 = l_2 \comp_0 \lambda_1' \comp_0 r_2 \qtand \rho_1 = l_2 \comp_0 \rho_1' \comp_0 r_2, \] and we have \[ l_2 \comp_0 \ctgt_1(\lambda_1') \comp_0 r_2 = \ctgt_1(\lambda_1) = l_1 \comp_0 \csrc_1(A_1) \comp_0 r_1. \] Thus, $l_1$ and $l_2$ have the same prefix~$l$ of size $k = \min(\len{l_1},\len{l_2})$ and we can write \begin{align*} S_1 &= l \comp_0 S_1' & S_2 &= l \comp_0 S_2' \end{align*} for some rewriting steps $S_1',S_2' \in \freecat\P_3$. Since $(S_1,S_2)$ is minimal, we have $k = 0$, so $\len{l_1}\len{l_2} = 0$. We show similarly that $\len{r_1}\len{r_2} = 0$. The case where $\len{\lambda_2}\len{\rho_2} > 0$ is handled similarly. So suppose that \begin{equation} \label{eq:not-indep-conditions} \len{\lambda_1}\len{\rho_1} = 0 \qtand \len{\lambda_2}\len{\rho_2} = 0 \qtand \len{\csrc_2(A_1)} + \len{\csrc_2(A_2)} > \len{\csrc_2(S_1)}. \end{equation} In particular, we get that $\len{\csrc_2(A_i)} > 0$ for $i \in \set{1,2}$. Let $u_i,v_i \in \freecat\P_1$ and $\alpha_i \in \P_2$ for $i \in \set{1,\ldots,r}$ with $r = \len{\csrc_2(S_1)}$ such that \[ \csrc_2(S_1) = (u_1 \comp_0 \alpha_1 \comp_0 v_1) \comp_1 \cdots \comp_1 (u_r \comp_0 \alpha_r \comp_0 v_r). \] The last part of \eqref{eq:not-indep-conditions} implies that there is $i_0$ such that $l_1$ and $l_2$ are both prefixes of $u_{i_0}$. So, $l_1$ and $l_2$ have the same prefix $l$ of length $k = \min(\len{l_1},\len{l_2})$. Now, we prove that $\lambda_1 = l \comp_0 \lambda_1'$ for some $\lambda_1' \in \freecat\P_2$.
If $\len{\lambda_1} = 0$, then \[\lambda_1 = \unit{l_1 \comp_0 \csrc_1(A_1) \comp_0 r_1},\] so $\lambda_1 = l \comp_0 \lambda_1'$ for some $\lambda_1' \in \freecat\P_2$. Otherwise, if $\len{\lambda_1} > 0$, since $\len{\lambda_1}\len{\rho_1} = 0$, we have $\len{\rho_1} = 0$ and, by~\ref{lem:caract-min-indep:lambda}, $\len{\lambda_2} = 0$. Also, by the last part of~\eqref{eq:not-indep-conditions}, we have $\len{\lambda_1} < \len{\csrc_2(A_2)}$. Thus, \begin{center} $\lambda_1$ is a prefix of $l_2 \comp_0 \csrc_2(A_2) \comp_0 r_2$, \end{center} so $\lambda_1 = l \comp_0 \lambda_1'$ for some $\lambda_1' \in \freecat\P_2$. Similarly, there are $\rho_1', \lambda_2',\rho_2'\in \freecat\P_2$ such that \[ \rho_1 = l \comp_0 \rho_1' \qtand \lambda_2 = l \comp_0 \lambda_2' \qtand \rho_2 = l \comp_0 \rho_2'. \] Hence $S_1 = l \comp_0 S_1'$ and $S_2 = l \comp_0 S_2'$ for some rewriting steps $S'_1,S'_2 \in \freecat\P_3$. Since $(S_1,S_2)$ is minimal, we have $\len{l_1}\len{l_2} = \len{l} = 0$, which proves~\ref{lem:caract-min-indep:l}. The proof of~\ref{lem:caract-min-indep:r} is similar. \end{proof} \noindent We now have enough material to show that: \begin{propapp} \label{prop:no-st-st-cp} Given a Gray presentation $\P$, there is no critical branching~$(S_1,S_2)$ of~$\P$ such that both the inner $3$\generators of $S_1$ and $S_2$ are interchange generators. \end{propapp} \begin{proof} Let~$(S_1,S_2)$ be a minimal local branching such that, for~$i \in \set{1,2}$, \[ S_i = \lambda_i \pcomp_1 (l_i \pcomp_0 X_{\alpha_i,g_i,\beta_i} \pcomp_0 r_i) \pcomp_1 \rho_i \] for some~$l_i,r_i,g_i \in \freecat\P_1$,~$\lambda_i,\rho_i \in \freecat\P_2$ and~$\alpha_i,\beta_i \in \P_2$, and let~$\phi$ be~$\csrc_2(S_1)$. Since~$\len{\csrc_2(X_{\alpha_1,g_1,\beta_1})} = 2$, we have~$\len{\phi} \ge 2$. If~$\len{\phi} = 2$, then~$\len{\lambda_i} = \len{\rho_i} = 0$ for~$i \in \set{1,2}$. Thus, since~$\csrc_2(S_1) = \csrc_2(S_2)$, we get \begin{align*} & (l_1 \pcomp_0 \alpha_1 \pcomp_0 g_1 \pcomp_0 \csrc_1(\beta_1) \pcomp_0 r_1) \pcomp_1 (l_1 \pcomp_0 \ctgt_1(\alpha_1) \pcomp_0 g_1 \pcomp_0 \beta_1 \pcomp_0 r_1) \\ =\;& (l_2 \pcomp_0 \alpha_2 \pcomp_0 g_2 \pcomp_0 \csrc_1(\beta_2) \pcomp_0 r_2) \pcomp_1 (l_2 \pcomp_0 \ctgt_1(\alpha_2) \pcomp_0 g_2 \pcomp_0 \beta_2 \pcomp_0 r_2). \end{align*} By the unique decomposition property given by \Thmr{precat-nf}, we obtain \[ l_1 = l_2,\quad r_1 = r_2,\quad \alpha_1 = \alpha_2,\quad \beta_1 = \beta_2 \qtand g_1 \pcomp_0 \csrc_1(\beta_1) \pcomp_0 r_1 = g_2 \pcomp_0 \csrc_1(\beta_2) \pcomp_0 r_2. \] So~$g_1 \pcomp_0 \csrc_1(\beta_1) \pcomp_0 r_1 = g_2 \pcomp_0 \csrc_1(\beta_1) \pcomp_0 r_1$, which implies that~$g_1 = g_2$. Hence,~$(S_1,S_2)$ is trivial. If~$\len{\phi} = 3$, then~$\len{\lambda_i} + \len{\rho_i} = 1$ for~$i \in \set{1,2}$, and, by \Lemr{caract-min-indep}, \[ \text{either\quad$\len{\rho_1} = \len{\lambda_2} = 1$\quad or \quad$\len{\lambda_1} = \len{\rho_2} = 1$\zbox.} \] By symmetry, we can suppose that~$\len{\rho_1} = \len{\lambda_2} = 1$, which implies that~$\len{\lambda_1} = \len{\rho_2} = 0$.
By unique decomposition of whiskers, since~$\csrc_2(S_1) = \csrc_2(S_2)$, we have \begin{align*} l_1 \pcomp_0 \alpha_1 \pcomp_0 g_1 \pcomp_0 \csrc_1(\beta_1) \pcomp_0 r_1 & = \lambda_2 \\ l_1 \pcomp_0 \ctgt_1(\alpha_1) \pcomp_0 g_1 \pcomp_0 \beta_1 \pcomp_0 r_1 & = l_2 \pcomp_0 \alpha_2 \pcomp_0 g_2 \pcomp_0 \csrc_1(\beta_2) \pcomp_0 r_2 \\ \rho_1 & = l_2 \pcomp_0 \ctgt_1(\alpha_2) \pcomp_0 g_2 \pcomp_0 \beta_2 \pcomp_0 r_2 \end{align*} and the second line implies that~$l_1 \pcomp_0 \ctgt_1(\alpha_1) \pcomp_0 g_1 = l_2$,~$\beta_1 = \alpha_2$ and~$r_1 = g_2 \pcomp_0 \csrc_1(\beta_2) \pcomp_0 r_2$. Since~$(S_1,S_2)$ is minimal, we have~$\len{l_1} = \len{r_2} = 0$. So \begin{align*} S_1 &= (X_{\alpha_1,g_1,\beta_1} \pcomp_0 g_2 \pcomp_0 \csrc_1(\beta_2)) \pcomp_1 (\ctgt_1(X_{\alpha_1,g_1,\beta_1}) \pcomp_0 g_2 \pcomp_0 \beta_2) \\ S_2 &= (\alpha_1 \pcomp_0 g_1 \pcomp_0 \csrc_1(\beta_1) \pcomp_0 g_2 \pcomp_0 \csrc_1(\beta_2)) \pcomp_1 (\ctgt_1(\alpha_1) \pcomp_0 g_1 \pcomp_0 X_{\beta_1,g_2,\beta_2}) \end{align*} thus~$(S_1,S_2)$ is a natural branching, hence not a critical one. Finally, if~$\len{\phi} \ge 4$, then, since~$\len{\lambda_i} + \len{\rho_i} = \len{\phi} - 2 \ge 2$ for~$i \in \set{1,2}$, by \Lemr{caract-min-indep}, we have that \[ \text{either\quad$\len{\lambda_1} = \len{\rho_2} = \len\phi - 2$\quad or\quad$\len{\rho_1} = \len{\lambda_2} = \len\phi - 2$\zbox.} \] In either case, \[ \len{\lambda_1}\len{\rho_1} = \len{\lambda_2}\len{\rho_2} = 0 \qtand \len{\csrc_2(X_{\alpha_1,g_1,\beta_1})} + \len{\csrc_2(X_{\alpha_2,g_2,\beta_2})} = 4 \le \len{\phi} \] so, by~\Lemr{caract-min-indep}\ref{lem:caract-min-indep:indep},~$(S_1,S_2)$ is independent, hence not critical. \end{proof} \noindent Until the end of this section, we denote by $\P$ a Gray presentation such that $\P_2$ and $\P_3$ are finite and $\len{\csrc_2(A)} > 0$ for every $A \in \P_3$, \ie a Gray presentation satisfying the hypothesis of \Thmr{finite-cp}. The next result we prove is a characterization of independent branchings among minimal ones: \begin{lemapp} \label{lem:definite-indep} Given a minimal branching $(S_1,S_2)$ of~$\P$ with \[ S_i = \lambda_i \comp_1 (l_i \comp_0 A_i \comp_0 r_i) \comp_1 \rho_i \] for some $l_i,r_i \in \freecat\P_1$, $\lambda_i,\rho_i \in \freecat\P_2$ and $A_i \in \P_3$ for $i \in \set{1,2}$, we have that $(S_1,S_2)$ is independent if and only if \begin{center} either $\len{\lambda_1} \ge \len{\csrc_2(A_2)}$ or $\len{\rho_1} \ge \len{\csrc_2(A_2)}$ (\resp $\len{\lambda_2} \ge \len{\csrc_2(A_1)}$ or $\len{\rho_2} \ge \len{\csrc_2(A_1)}$). \end{center} \end{lemapp} \begin{proof} If $(S_1,S_2)$ is independent, then, by \Lemr{caract-min-indep}\ref{lem:caract-min-indep:indep}, \[ \len{\csrc_2(A_1)} + \len{\csrc_2(A_2)} \le \len{\lambda_1} + \len{\csrc_2(A_1)} + \len{\rho_1} = \len{\lambda_2} + \len{\csrc_2(A_2)} + \len{\rho_2}, \] that is, \[ \len{\csrc_2(A_1)} \le \len{\lambda_2} + \len{\rho_2} \qtand \len{\csrc_2(A_2)} \le \len{\lambda_1} + \len{\rho_1}. \] By hypothesis, we have $\len{\csrc_2(A_1)} > 0$, so that $\len{\lambda_2} + \len{\rho_2} > 0$. If $\len{\lambda_2} > 0$, then, by \Lemr{caract-min-indep}\ref{lem:caract-min-indep:lambda}, $\len{\lambda_1} = 0$ so $\len{\csrc_2(A_2)}\le \len{\rho_1}$. Similarly, if $\len{\rho_2} > 0$, then $\len{\csrc_2(A_2)} \le \len{\lambda_1}$, which proves the first implication. Conversely, if $\len{\lambda_1} \ge \len{\csrc_2(A_2)}$, then, since $\csrc_2(A_2) > 0$ by our hypothesis on~$\P$, we have $\len{\lambda_1} > 0$. 
By \Lemr{caract-min-indep}\ref{lem:caract-min-indep:lambda}, we get that $\len{\lambda_2} = 0$. Also, \[ \len{\lambda_1}+ \len{\csrc_2(A_1)} + \len{\rho_1} = \len{\csrc_2(A_2)} + \len{\rho_2} \le \len{\lambda_1} + \len{\rho_2}, \] so $\len{\rho_2} \ge \len{\csrc_2(A_1)} + \len{\rho_1}$, thus $\len{\rho_1} < \len{\rho_2}$. By \Lemr{caract-min-indep}\ref{lem:caract-min-indep:rho}, we have $\len{\rho_1} = 0$. Moreover, \[ \len{\csrc_2(A_1)} + \len{\csrc_2(A_2)} \le \len{\csrc_2(A_1)} + \len{\lambda_1} = \len{\csrc_2(S_1)} \] hence, by~\Lemr{caract-min-indep}\ref{lem:caract-min-indep:indep}, $(S_1,S_2)$ is independent. \end{proof} \noindent Then, we prove that minimal non-independent branchings are uniquely characterized by a small amount of information: \begin{lemapp} \label{lem:definite-branching-data} Given a minimal non-independent branching $(S_1,S_2)$ of~$\P$ with \[ S_i = \lambda_i \comp_1 (l_i \comp_0 A_i \comp_0 r_i) \comp_1 \rho_i \] for some $l_i,r_i \in \freecat\P_1$, $\lambda_i,\rho_i \in \freecat\P_2$ and $A_i \in \P_3$ for $i \in \set{1,2}$, we have that $(S_1,S_2)$ is uniquely determined by $A_1$, $A_2$, $\len{\lambda_1}$ and $\len{\lambda_2}$. \end{lemapp} \begin{proof} Let the unique $k_1,k_2 >0$, $u_i,u'_i,v_i,v'_i \in \freecat\P_1$ and $\alpha_i,\beta_i \in \P_2$ such that \[ \csrc_2(A_1) = (u_1 \comp_0 \alpha_1 \comp_0 u'_1) \comp_1 \cdots \comp_1 (u_{k_1} \comp_0 \alpha_{k_1} \comp_0 u'_{k_1}) \] and \[ \csrc_2(A_2) = (v_1 \comp_0 \beta_1 \comp_0 v'_1) \comp_1 \cdots \comp_1 (v_{k_2} \comp_0 \beta_{k_2} \comp_0 v'_{k_2}). \] Let $i_1 = 1 + \len{\lambda_1}$ and $i_2 = 1 + \len{\lambda_2}$. Since \begin{equation} \label{eq:branching-src-equal} \lambda_1 \comp_1 (l_1 \comp_0 \csrc_2(A_1) \comp_0 r_1) \comp_1 \rho_1 = \lambda_2 \comp_1 (l_2 \comp_0 \csrc_2(A_2) \comp_0 r_2) \comp_1 \rho_2, \end{equation} and, by~\Lemr{definite-indep}, $\len{\lambda_1} < \len{\csrc_2(A_2)}$ and $\len{\lambda_2} < \len{\csrc_2(A_1)}$, we get \[ l_1 \comp_0 u_{i_2} \comp_0 \alpha_{i_2} \comp_0 u'_{i_2} \comp_0 r_1 = l_2 \comp_0 v_{i_1} \comp_0 \beta_{i_1} \comp_0 v'_{i_1} \comp_0 r_2 \] so that \[ l_1 \comp_0 u_{i_2} = l_2 \comp_0 v_{i_1} \qtand u'_{i_2} \comp_0 r_1 = v'_{i_1} \comp_0 r_2. \] By~\Lemr{caract-min-indep}\ref{lem:caract-min-indep:l}, either $l_1$ or $l_2$ is an identity. Thus, if $\len{u_{i_2}} \le \len{v_{i_1}}$, then $\len{l_1} \ge \len{l_2}$ so $l_2$ is a unit and $l_2$ is the prefix of $u_{i_2}$ of size $\len{u_{i_2}} - \len{v_{i_1}}$. Otherwise, if $\len{u_{i_2}} \le \len{v_{i_1}}$, we obtain similarly that $l_1$ is the prefix of $v_{i_1}$ of size $\len{v_{i_1}} - \len{u_{i_2}}$ and $l_2$ is a unit. In both cases, $l_1$ and $l_2$ are completely determined by $A_1$, $A_2$, $\len{\lambda_1}$ and $\len{\lambda_2}$. A similar argument holds for $r_1$ and $r_2$. Now, if $\len{\lambda_1} >0$, by~\Lemr{caract-min-indep}\ref{lem:caract-min-indep:lambda}, $\len{\lambda_2} = 0$. By \eqref{eq:branching-src-equal} and since $\len{\lambda_1} < \len{\csrc_2(A_2)}$, $\lambda_1$ is the prefix of $l_2 \comp_0 \csrc_2(A_2) \comp_0 r_2$ of length~$\len{\lambda_1}$. Otherwise, if $\len{\lambda_1} = 0$, then $\lambda_1 = \unit{l_1 \comp_0 \csrc_1(A_1) \comp_0 r_1}$. In both cases, $\lambda_1$ is completely determined by $A_1$, $A_2$, $\len{\lambda_1}$. A similar argument holds for $\lambda_2$. 
Note that, if we prove that $\len{\rho_1}$ and $\len{\rho_2}$ are completely determined by $A_1$, $A_2$, $\len{\lambda_1}$ and $\len{\lambda_2}$, the above argument also applies to $\rho_1$ and $\rho_2$ and the lemma is proved. But \[ \len{\lambda_1} + \len{\csrc_2(A_1)} + \len{\rho_1} = \len{\lambda_2} + \len{\csrc_2(A_2)} + \len{\rho_2}, \] so that if $\len{\lambda_1} + \len{\csrc_2(A_1)} \ge \len{\lambda_2} + \len{\csrc_2(A_2)}$, then, by~\Lemr{caract-min-indep}\ref{lem:caract-min-indep:rho}, $\len{\rho_1} = 0$ and \[ \len{\rho_2} = \len{\lambda_1} + \len{\csrc_2(A_1)} - \len{\lambda_2} - \len{\csrc_2(A_2)}. \] Otherwise, if $\len{\lambda_1} + \len{\csrc_2(A_1)} \le \len{\lambda_2} + \len{\csrc_2(A_2)}$, we get similarly that \[ \len{\rho_1} = \len{\lambda_2} + \len{\csrc_2(A_2)} - \len{\lambda_1} - \len{\csrc_2(A_1)} \] and $\len{\rho_2} = 0$. In both cases, $\len{\rho_1}$ and $\len{\rho_2}$ are completely determined by $A_1$, $A_2$, $\len{\lambda_1}$ and $\len{\lambda_2}$, which concludes the proof. \end{proof} \noindent Given~$A \in \P_3$, we say that~$A$ is an \emph{operational} generator if it is not an interchange generator. We now prove that an operational generator can form a critical branching with a finite number of interchange generators: \begin{lemapp} \label{lem:definite-finite-op-st-branching} Given an operational $A_1 \in \P_3$, there are a finite number of interchange generators $A_2 \in \P_3$ such that there is a critical branching~$(S_1,S_2)$ of~$\P$ with \[ S_i = \lambda_i \comp_1 (l_i \comp_0 A_i \comp_0 r_i) \comp_1 \rho_i \] for some $l_i,r_i \in \freecat\P_1$, $\lambda_i,\rho_i \in \freecat\P_2$ for $i \in \set{1,2}$. \end{lemapp} \begin{proof} Let~$\alpha,\beta \in \P_2$,~$u \in \freecat\P_1$, $A_2 = X_{\alpha,u,\beta}$,~$l_i,r_i \in \freecat\P_1$,~$\lambda_i,\rho_i \in \freecat\P_2$ for~$i \in \set{1,2}$, so that~$(S_1,S_2)$ is a critical branching of~$\P$ with \begin{center} $S_i = \lambda_i \comp_1 (l_i \comp_0 A_i \comp_0 r_i) \comp_1 \rho_i$ \quad for $i \in \set{1,2}$. \end{center} Let the unique $k \ge 2$, $v_i,v'_i \in \freecat\P_1$, $\gamma_i \in \P_2$ for $i \in \set{1,\ldots,k}$ such that \[ \csrc_2(A_1) = (v_1 \comp_0 \gamma_1 \comp_0 v'_1) \comp_1 \cdots \comp_1 (v_k \comp_0 \gamma_k \comp_0 v'_k). \] By~\Lemr{definite-indep}, since $(S_1,S_2)$ is non-independent, \[ 2 = \len{\csrc_2(X_{\alpha,u,\beta})} > \max(\len{\lambda_1},\len{\rho_1}). \] Note that we cannot have $\len{\lambda_1} = \len{\rho_1} = 1$. Indeed, otherwise, by~\Lemr{caract-min-indep}, we would have $\len{\lambda_2} = \len{\rho_2} = 0$, so that \[ 2 = \len{\csrc_2(X_{\alpha,u,\beta})} = \len{\lambda_1} + \len{\csrc_2(A_1)} + \len{\rho_1} \] and thus $\len{\csrc_2(A_1)} = 0$, contradicting our hypothesis on the $3$\generators of~$\P$. This leaves three cases to handle. Suppose that $\len{\lambda_1} = \len{\rho_1} = 0$. Then, \[ l_1 \comp_0 \csrc_2(A_1) \comp_0 r_1 = \lambda_2 \comp_1 (l_2 \comp_0 \csrc_2(X_{\alpha,u,\beta}) \comp_0 r_2) \comp_1 \rho_2.
\] Thus, \begin{align*} l_1 \comp_0 v_{1 + \len{\lambda_2}} \comp_0 \gamma_{1 + \len{\lambda_2}} \comp_0 v'_{1 + \len{\lambda_2}} \comp_0 r_1 &= l_2 \comp_0 \alpha \comp_0 u \comp_0 \csrc_1(\beta) \comp_0 r_2 \\ l_1 \comp_0 v_{2 + \len{\lambda_2}} \comp_0 \gamma_{2 + \len{\lambda_2}} \comp_0 v'_{2 + \len{\lambda_2}} \comp_0 r_1 &= l_2 \comp_0 \ctgt_1(\alpha) \comp_0 u \comp_0 \beta \comp_0 r_2 \end{align*} so \begin{align*} \gamma_{1 + \len{\lambda_2}} &= \alpha, &\gamma_{2 + \len{\lambda_2}} &= \beta, & l_2 &= l_1 \comp_0 v_{1 + \len{\lambda_2}}, & r_2 &= v'_{2 + \len{\lambda_2}} \comp_0 r_1 \end{align*} and $u$ is the suffix of $l_1 \comp_0 v_{2 + \len{\lambda_2}}$ of length $\len{l_1 \comp_0 v_{2+\len{\lambda_2}}} - \len{l_2 \comp_0 \ctgt_1(\alpha)}$. In particular, $X_{\alpha,u,\beta}$ is completely determined by $A_1$ and $\len{\lambda_2}$. And since \[ \len{\lambda_2} = \len{\csrc_2(A_1)} - \len{\csrc_2(X_{\alpha,u,\beta})} - \len{\rho_2} \in \set{0,\ldots,\len{\csrc_2(A_1)} - 2}, \] there is a finite number of possible $X_{\alpha,u,\beta}$ which induce a critical branching~$(S_1,S_2)$. Suppose now that $\len{\lambda_1} = 1$ and $\len{\rho_1} = 0$. Then, by~\Lemr{caract-min-indep}, $\len{\lambda_2} = 0$. So \[ \lambda_1 = l_2 \comp_0 \alpha \comp_0 u \comp_0 \csrc_1(\beta) \comp_0 r_2 \] and \[ l_1 \comp_0 v_1 \comp_0 \gamma_1 \comp_0 v'_1 \comp_0 r_1 = l_2 \comp_0 \ctgt_1(\alpha) \comp_0 u \comp_0 \beta \comp_0 r_2. \] In particular, we have $\beta = \gamma_1$ and $r_2 = v'_1 \comp_0 r_1$, so $\len{r_1} \le \len{r_2}$. By~\Lemr{caract-min-indep}\ref{lem:caract-min-indep:r}, we have $\len{r_1} = 0$ and $r_2 = v'_1$. Note that we have $\len{u} < \len{v_1}$. Indeed, otherwise $u = u' \comp_0 v_1$ for some $u'$ and, since \[ \len{l_1} + \len{v_1} = \len{l_2} + \len{\ctgt_1(\alpha)} + \len{u}, \] we get that $\len{l_2} \le \len{l_1}$. By~\Lemr{caract-min-indep}\ref{lem:caract-min-indep:l}, it implies that $\len{l_2} = 0$ and $l_1 = \ctgt_1(\alpha) \comp_0 u'$, which gives \[ S_1 = (\alpha \comp_0 u' \comp_0 \csrc_1(A_1)) \comp_1 (\ctgt_1(\alpha) \comp_0 u' \comp_0 A_1) \] and \[ S_2 = (X_{\alpha,u' \comp_0 v_1,\gamma_1} \comp_0 v'_1) \comp_1 ((\ctgt_1(\alpha) \comp_0 u') \comp_0 ((v_2 \comp_0 \gamma_2 \comp_0 v'_2) \comp_1 \cdots \comp_1 (v_k \comp_0 \gamma_k \comp_0 v'_k))) \] so that $(S_1,S_2)$ is a natural branching, contradicting the fact that $(S_1,S_2)$ is a critical branching. Hence, $\len{u} < \len{v_1}$ and $u$ is a strict suffix of $v_1$, thus there are $\len{v_1}$ such possible $u$. Moreover, since $\P_2$ is finite, there are a finite number of possible $\alpha \in \P_2$. Hence, there are a finite number of possible $X_{\alpha,u,\beta} \in \P_3$ that induce a critical branching $(S_1,S_2)$ such that $\len{\lambda_1} = 1$ and $\len{\rho_1} = 0$. The case where~$\len{\lambda_1} = 0$ and~$\len{\rho_1} = 1$ is handled similarly, which concludes the proof. \end{proof} \noindent We can now conclude the finiteness property for critical branchings of Gray presentations: \ifx\grayfinitecriticalbranchings\undefined \begin{theo} Some theorem. \end{theo} \else \grayfinitecriticalbranchings* \fi \begin{proof} Let~$S_i = \lambda_i \pcomp_1 (l_i \pcomp_0 A_i \pcomp_0 r_i) \pcomp_1 \rho_i$ with~$l_i,r_i \in \freecat\Q_1$,~$\lambda_i,\rho_i \in \freecat\Q_2$ and~$A_i \in \Q_3$ for~$i \in \set{1,2}$ such that~$(S_1,S_2)$ is a critical branching of~$\Q$. By~\Lemr{definite-branching-data}, such a branching is uniquely determined by~$A_1$,~$A_2$,~$\len{\lambda_1}$ and~$\len{\lambda_2}$.
By~\Lemr{definite-indep}, \[ \len{\lambda_1} < \len{\csrc_2(A_2)} \qtand \len{\lambda_2} < \len{\csrc_2(A_1)}. \] Hence, for a given pair~$(A_1,A_2)$, there are a finite number of tuples~$(l_1,l_2,r_1,r_2,\lambda_1,\lambda_2,\rho_1,\rho_2)$ such that~$(S_1,S_2)$ is a critical branching. Moreover, by \Propr{no-st-st-cp}, either~$A_1$ or~$A_2$ is an operational generator. By symmetry, we can suppose that~$A_1$ is operational. Since~$\Q_3$ is finite, there is a finite number of such~$A_1$. Moreover, there are a finite number of pairs~$(A_1,A_2)$ where~$A_2$ is operational too. If~$A_2$ is an interchange generator, then, by~\Cref{lem:definite-finite-op-st-branching}, there are a finite number of possible~$A_2$ for a given~$A_1$ such that~$(S_1,S_2)$ is a critical branching, which concludes the proof. \end{proof} \subsection{Pseudomonoids} \label{ssec:app-pseudomonoid} In \Exr{pseudo-monoid-gray-pres}, we introduced a Gray presentation~$\P$ for the theory of pseudomonoids. The set~$\P_4$ of $4$-generators contains only the generators required for a Gray presentation, so that we do not expect~$\P$ to be coherent (see~\eqref{eq:parallel-not-equal} for an example). We will show that the rewriting system is terminating and thus, by \Thmr{squier}, adding a $4$-generator corresponding to each critical branching will turn the presentation into a coherent one. Those branchings can be computed as in the proof of \Thmr{finite-cp}, which is constructive: we obtain, up to symmetrical branchings, five critical branchings: \[ \begin{tikzcd}[ampersand replacement=\&,sep={5em,between origins},cramped,baseline=(\tikzcdmatrixname-1-1.center)] \satex{mon-cp1} \tar[r] \tar[d] \& \satex{mon-cp1-r} \\ \satex{mon-cp1-l} \end{tikzcd} \qquad\qquad \begin{tikzcd}[ampersand replacement=\&,sep={5em,between origins},cramped,baseline=(\tikzcdmatrixname-1-1.center)] \satex{mon-cp2} \tar[r,""{name=src}] \tar[d] \&\satex{mon-cp2-r} \\ \satex{mon-cp2-l} \end{tikzcd} \qquad\qquad \begin{tikzcd}[ampersand replacement=\&,sep={5em,between origins},baseline=(\tikzcdmatrixname-1-1.center),cramped] \satex{mon-cp4} \tar[d] \tar[r,""{name=srcp}] \& \satex{mon-cp4-l} \\ \satex{mon-cp4-r} \& \end{tikzcd}% \] \[ \begin{tikzcd}[ampersand replacement=\&,sep={5em,between origins},cramped,baseline=(\tikzcdmatrixname-1-1.center)] \satex{mon-cp3} \tar[r] \tar[d] \&\satex{mon-cp3-r} \\ \satex{mon-cp3-l} \end{tikzcd} \qquad \qquad \qquad \begin{tikzcd}[ampersand replacement=\&,sep={5em,between origins},baseline=(\tikzcdmatrixname-1-1.center),cramped] \satex{mon-cp5} \tar[r] \tar[d] \&\satex{mon-cp5-l} \\ \satex{mon-cp5-r} \end{tikzcd} \] We observe that each of these branchings is joinable, and we define formal new $4$-generators $R_1, R_2, R_3, R_4, R_5$ that fill the holes: \[ \begin{tikzcd}[ampersand replacement=\&,sep={5.5em,between origins},baseline=(\tikzcdmatrixname-2-1.center)] \satex{mon-cp1} \tar[r] \tar[d] \& \satex{mon-cp1-r} \ar[phantom,d,"\overset{R_1}\LLeftarrow"] \tar[r] \& \satex{mon-cp1-r-2} \tar[d,""{auto=false,name=src}] \\ \satex{mon-cp1-l} \tar[r] \& \satex{mon-cp1-l-2} \tar[r] \& \satex{mon-cp1-e} \end{tikzcd} \quad \begin{tikzcd}[ampersand replacement=\&,sep={5.5em,between origins},baseline=(\tikzcdmatrixname-2-1.center)] \satex{mon-cp2} \tar[r,""{name=src,auto=false}] \tar[d] \&\satex{mon-cp2-r} \tar[d] \\ \satex{mon-cp2-l} \&\satex{mon-cp2-r-2} \tar[l,""{name=tgt,auto=false}] \ar[from=src,to=tgt,phantom,"\overset{R_2}\LLeftarrow"] \end{tikzcd} \quad \begin{tikzcd}[ampersand replacement=\&,sep={5.5em,between
origins},baseline=(\tikzcdmatrixname-2-1.center)] \satex{mon-cp4} \tar[r,""{auto=false,name=src}] \tar[d] \& \satex{mon-cp4-l} \tar[d] \\ \satex{mon-cp4-r} \&\satex{mon-cp4-l-2} \tar[l,""{auto=false,name=tgt}] \ar[phantom,from=src,to=tgt,"\overset{R_3}\LLeftarrow"] \end{tikzcd} \]% \[ \begin{tikzcd}[ampersand replacement=\&,sep={5.5em,between origins},baseline=(\tikzcdmatrixname-2-1.center)] \satex{mon-cp3} \tar[d] \tar[r,""{auto=false,name=src}] \& \satex{mon-cp3-r} \tar[d] \\ \satex{mon-cp3-l} \ar[r,equal,""{auto=false,name=tgt}] \& \satex{mon-cp3-l} \ar[phantom,"\overset{R_4}\LLeftarrow",from=src,to=tgt] \end{tikzcd} \qquad \qquad \begin{tikzcd}[ampersand replacement=\&,sep={5.5em,between origins},baseline=(\tikzcdmatrixname-2-1.center)] \satex{mon-cp5} \tar[d] \tar[r,""{auto=false,name=src}] \& \satex{mon-cp5-l} \tar[d] \\ \satex{mon-cp5-r} \ar[r,equal,""{auto=false,name=tgt}] \& \satex{mon-cp5-r} \ar[phantom,from=src,to=tgt,"\overset{R_5}\LLeftarrow"] \end{tikzcd} \] We then define~$\PMon$ as the Gray presentation obtained from~$\P$ of \Exr{pseudo-monoid-gray-pres} by adding $R_1,\ldots,R_5$ to~$\P_4$. As claimed above, in order to deduce coherence, we need to show the termination of~$\PMon$. For this purpose, we use the tools of \Secr{rewriting} and build a reduction order. We split the task in two and define a first order that handles the termination of the $\monA,\monL,\monR$ generators, and then a second one that handles the termination of interchange generators. For the first task, we use a technique similar to the one used in~\cite{lafont1992penrose}. Given~$n \in \N$, we write~$\ltex^1$ for the partial order on~$\N^n$ such that, given $a,b \in \N^n$, $a \ltex^1 b$ when $a_i \le b_i$ for all $i \in \set{1,\ldots,n}$ and there exists $j \in \set{1,\ldots,n}$ such that $a_{j} < b_{j}$. Let~$\Monex$ be the $2$\precategory \begin{itemize} \item which has only one $0$-cell: $\Monex_0 = \set{\ast}$, \item whose $1$-cells are the natural numbers: $\Monex_1 = \N$, \item whose $2$-cells $m \To n$ for $m,n\in \N$ are the strictly monotone functions \[ \phi\co (\N^m,\ltex^1) \to (\N^n,\ltex^1). \] \end{itemize} Moreover, $\unit\ast = 0$ and composition of $1$-cells is given by addition. Given $m \in \Monex_1$, $\unit m$ is the identity function on~$\N^m$, and given $m,n,k,k' \in \N$ and $\chi \co k \to k' \in \Monex_2$, the $2$-cell \[ m \comp_0 \chi \comp_0 n \co {m+k+n} \To {m+k'+n} \] is the function $\chi'\co \N^{m+k+n} \to \N^{m+k'+n}$ such that, for~$x = (x_1,\ldots,x_{m+k+n})\in \N^{m+k+n}$, for $i \in \set{1,\ldots,m+k'+n}$, \[ \chi'(x)_i = \begin{cases} x_i & \text{if $i \le m$} \\ \chi(x_{m+1},\ldots,x_{m+k})_{i - m} & \text{if $m < i \le m + k'$} \\ x_{i - k' + k} & \text{if $i > m + k'$} \end{cases} \] and, given $m,n,p \in \N$, $\phi \co m \To n \in \Monex_2$ and $\psi \co n \To p \in \Monex_2$, $\phi \comp_1 \psi$ is defined as $\psi \circ \phi$ and one shows readily that these operations indeed give strictly monotone functions. One easily checks that $\Monex$ is a strict $2$\category. Given $m,m',n,n' \in \N$ and $\phi\co m\To n,\psi\co m'\To n' \in \Monex$, we write $\phi \ltex^2 \psi$ when $m = m'$, $n = n'$ and $\phi(x) \ltex^1 \psi(x)$ for all~$x \in \N^m$. We have that: \begin{prop} $\ltex^2$ is well-founded on $\Monex_2$. \end{prop} \begin{proof} We define a function $N\co \Monex_2 \to \N$ by \begin{center} $N(\phi) = \phi(z)_1 + \cdots + \phi(z)_n$ \qquad for $\phi\co m \To n \in \Monex_2$ \end{center} where $z = (0,\ldots,0)$. 
Now, if $\psi\co m \To n \in \Monex_2$ is such that $\psi \ltex^2 \phi$, then $\psi(z) \ltex^1 \phi(z)$ so that $N(\psi)<N(\phi)$. Thus, $\ltex^2$ on $\Monex_2$ is well-founded. \end{proof} \noindent We observe that the order $\ltex^2$ is compatible with the structure of $\Monex$: \begin{prop} \label{prop:lessexists-stable} Given $m,n,m',n',k,k' \in \N$, $\mu\co m' \To k$, $\nu\co k' \To n'$, and $\phi,\phi'\co k\To k' \in \Monex_2$ such that $\phi \gtex^2 \phi'$, we have \begin{enumerate}[label=(\roman*),ref=(\roman*)] \item \label{prop:lessexists-stable:0comp}$m \comp_0 \phi \comp_0 n \gtex^2 m \comp_0 \phi' \comp_0 n$, \item \label{prop:lessexists-stable:1comp}$\mu \comp_1 \phi \comp_1 \nu \gtex^2 \mu \comp_1 \phi' \comp_1 \nu$. \end{enumerate} \end{prop} \begin{proof} Given~$x \in \N^{m+k+n}$, we have~$\phi(x_{m+1},\ldots,x_{m+k}) \gtex^1 \phi'(x_{m+1},\ldots,x_{m+k})$ so \[ (m \pcomp_0 \phi \pcomp_0 n)(x) \gtex^1 (m \pcomp_0 \phi' \pcomp_0 n)(x)\zbox. \] Thus,~\ref{prop:lessexists-stable:0comp} holds. Moreover, given~$y \in \N^{m'}$, we have~$\phi(\mu(y)) \gtex^1 \phi'(\mu(y))$. Since~$\nu$ is monotone, we have~$\nu(\phi(\mu(y))) \gtex^1 \nu(\phi'(\mu(y)))$. Thus,~\ref{prop:lessexists-stable:1comp} holds. \end{proof} \noindent We define a $2$\prefunctor $F\co \freecat{\PMon}_2 \to \Monex$ by the universal property of the $2$\prepolygraph $\restrictcat\PMon{2}$, \ie $F$ is the unique $2$\prefunctor such that $F(\ast) = \ast$, $F(\bar 1) = 1$, $F(\mu) = f_\mu$ and $F(\eta) = f_\eta$ where \[ f_\mu\co \N^2 \to \N^1 \qquad\qquad f_\eta \co \N^0 \to \N^1 \] are defined by $f_\mu(x,y) = 2x + y + 1$ for all $x,y \in \N$ and $f_\eta() = 1$. The interpretation exhibits the $3$\generators $\monA$, $\monL$ and $\monR$ of~$\PMon$ as decreasing operations: \begin{prop} \label{prop:mon-interp-gen-decreasing} The following hold: \begin{enumerate}[label=(\roman*),ref=(\roman*)] \item \label{prop:mon-interp-gen-decreasing:A}$F(\csrc_2(\monA)) \gtex^2 F(\ctgt_2(\monA))$, \item \label{prop:mon-interp-gen-decreasing:L} $F(\csrc_2(\monL)) \gtex^2 F(\ctgt_2(\monL))$, \item \label{prop:mon-interp-gen-decreasing:R} $F(\csrc_2(\monR)) \gtex^2 F(\ctgt_2(\monR))$, \item \label{prop:mon-interp-gen-decreasing:X} $F(\ctgt_2(X_{\alpha,m,\beta})) = F(\csrc_2(X_{\alpha,m,\beta}))$ for $\alpha,\beta \in \PMon_2$ and $m \in \N$. \end{enumerate} \end{prop} \begin{proof} Let $\phi = F(\csrc_2(\monA))$ and $\psi = F(\ctgt_2(\monA))$. By calculations, we get that \[ \phi(x,y,z) = (4x + 2y + z + 3) \qqtand \psi(x,y,z) = (2x + 2y + z + 1) \] for $x,y,z \in \N$, so $\phi(x,y,z) \gtex^1 \psi(x,y,z)$ for all $x,y,z \in \N$. The cases~\ref{prop:mon-interp-gen-decreasing:L} and~\ref{prop:mon-interp-gen-decreasing:R} are shown similarly. \ref{prop:mon-interp-gen-decreasing:X} is a consequence of the fact that $\Monex$ is a strict $2$\category. \end{proof} \noindent We define a partial order~$<$ on~$\freecat{\PMon}_2$ by putting, for $\phi,\psi \in \freecat{\PMon}_2$, \begin{center} $\phi < \psi$ when $F(\phi) \ltex^2 F(\psi)$ or [$F(\phi) = F(\psi)$ and $\intnorm(\phi) <_\omega \intnorm(\psi)$]. \end{center} \begin{prop} \label{prop:pseudomon-term-order} The partial order~$<$ on~$\freecat{\PMon}_2$ is a reduction order for~$\PMon$. \end{prop} \begin{proof} Let $G \in \PMon_3$. If~$G \in \set{\monA,\monL,\monR}$, then, by \Propr{mon-interp-gen-decreasing}, $\ctgt_2(G) < \csrc_2(G)$.
Otherwise, if ${G = X_{\alpha,u,\beta}}$ for some $\alpha,\beta \in \PMon_2$ and $u \in \freecat{\PMon}_1$, then, by \Propr{mon-interp-gen-decreasing}\ref{prop:mon-interp-gen-decreasing:X}, \[ F(\ctgt_2(G)) = F(\csrc_2(G)) \quad\qtand\quad \intnorm(\ctgt_2(G)) <_\omega \intnorm(\csrc_2(G)). \] So $\ctgt_2(G) < \csrc_2(G)$. The other requirements for~$<$ to be a reduction order are consequences of \Propr{lessexists-stable} and \Propr{term-criterion-interchanger}\ref{prop:term-criterion-interchanger:0-comp}\ref{prop:term-criterion-interchanger:1-comp}. \end{proof} \noindent Finally, we can use our coherence criterion to show that: \begin{theo} \label{thm:pseudomon-coherent} $\PMon$ is a coherent Gray presentation. \end{theo} \begin{proof} By~\Propr{pseudomon-term-order}, $\PMon$ has a reduction order, so the rewriting system~$\PMon$ is terminating by \Propr{term-order-implies-terminating}. Since $R_1,\ldots,R_5 \in \PMon_4$, by \Thmr{squier}, $\freeinvf{\prespcat{\PMon}}$ is a coherent $(3,2)$-Gray category. \end{proof} \subsection{Pseudoadjunctions} \label{ssec:app-adjunction} We now show the coherence of the Gray presentation of pseudoadjunctions introduced below. The way we do this is again by using \Thmr{squier}. However, we need a specific argument to show the termination of the interchange generators on the associated rewriting system. For this, we introduce a notion of ``connected'' diagrams and we use a result of~\cite{delpeuch2018normalization} stating that interchange generators terminate on such connected diagrams. We define the $3$\prepolygraph for pseudoadjunctions as the $3$\prepolygraph~$\P$ such that \[ \P_0 = \set{\appfont x,\appfont y} \qtand \P_1 = \set{\appfont f\co \appfont x \to \appfont y, \appfont g \co \appfont y \to \appfont x} \qtand \P_2 = \set{\eta\co \unit{\appfont x} \To \appfont f \comp_0 \appfont g, \varepsilon\co \appfont g \comp_0 \appfont f \To \unit{\appfont y}} \] where~$\eta$ and~$\varepsilon$ are pictured as~$\satex{cap}$ and~$\satex{cup}$ respectively, and~$\P_3$ is defined by $\P_3 = \set{\adjN,\adjNinv}$, where \[ \adjN\co (\eta \comp_0 \appfont f) \comp_1 (\appfont f \comp_0 \varepsilon) \TO \unit {\appfont f} \qtand \adjNinv\co (\appfont g \comp_0 \eta) \comp_1 (\varepsilon \comp_0 \appfont g) \TO \unit {\appfont g} \] which can be represented by \[ \begin{tikzcd} \satex{adj2-l}\tar[r,"\adjN"]&\satex{adj2-r} \end{tikzcd} \qtand \begin{tikzcd} \satex{adj1-l}\tar[r,"\adjNinv"]&\satex{adj1-r} \end{tikzcd} \pbox. \] We then extend~$\P$ to a Gray presentation by adding $3$\generators corresponding to interchange generators and $4$\generators corresponding to independence generators and interchange naturality generators, just like we did for pseudomonoids in \Exr{pseudo-monoid-gray-pres}. For coherence, we need to add other $4$\generators to $\P_4$. Provided that~$\P$ is terminating, by \Thmr{squier}, adding $4$\generators that fill the holes created by critical branchings is enough, just like for pseudomonoids. Using the constructive proof of \Thmr{finite-cp}, we compute all the critical branchings of~$\P$.
We then obtain, up to symmetrical branchings, two critical branchings: \[ \begin{tikzcd}[column sep=3ex,baseline=(\tikzcdmatrixname-1-1.center)] \satex{adj-cp1-l}\tar[dr]\tar[rr]&&\satex{adj-cp1-c}\\ &\satex{cup} \end{tikzcd} \qquad\quad \begin{tikzcd}[column sep=3ex,baseline=(\tikzcdmatrixname-1-1.center)] \satex{adj-cp2-l}\tar[dr]\tar[rr]& &\satex{adj-cp2-c}\\ &\satex{cap} \end{tikzcd} \] We observe that each of these branchings is joinable, and we define formal new $4$\generators $R_1,R_2$ that fill the holes: \[ \begin{tikzcd}[column sep=3ex,baseline=(\tikzcdmatrixname-1-1.center)] \satex{adj-cp1-l}\tar[dr]\tar[rr]&\ar[d,phantom,pos=0.3,"\overset{R_1}\LLeftarrow"]&\tar[dl]\satex{adj-cp1-c}\\ &\satex{cup} \end{tikzcd} \qquad\quad \begin{tikzcd}[column sep=3ex,baseline=(\tikzcdmatrixname-1-1.center)] \satex{adj-cp2-l}\tar[dr]\tar[rr]&\ar[d,phantom,pos=0.3,"\overset{R_2}\LLeftarrow"description]&\tar[dl]\satex{adj-cp2-c}\\ &\satex{cap} \end{tikzcd} \] We then define~$\PAdj$ as the Gray presentation obtained from~$\P$ by adding $R_1$ and~$R_2$ to~$\P_4$. We aim to show that this rewriting system is terminating by exhibiting a reduction order. However, we cannot use \Propr{term-criterion-interchanger} to handle interchangers (as for the case of pseudomonoids) since $\P$ is not positive. Instead, we invoke the result of \cite{delpeuch2018normalization} which states the termination of interchangers on ``connected diagrams''. Given a $2$\prepolygraph~$\Q$, a $2$-cell of $\freecat\Q_2$ is connected when, intuitively, each $2$\generator in its graphical representation is accessible by a path starting from a top or bottom input. For example, given $\Q$ such that $\Q_0 = \set{\ast}$, $\Q_1 = \set{\bar 1}$ and $\Q_2 = \set{\satex{cap}\co \bar 0 \To \bar 2, \satex{cup}\co \bar 2 \To \bar 0}$, we can build the following two $2$-cells of $\freecat\Q_2$: \[ \satex{adj-ex-conn} \hspace*{8em} \satex{adj-ex-not-conn} \] where the one on the left is connected whereas the one on the right is not, since the two generators of the ``bubble'' cannot be accessed from the top or bottom border. A more formal definition can be obtained by computing the ``connected components'' of the diagram, together with a map from the top and bottom inputs of the diagram to the associated connected components. This is adequately represented by cospans in~$\Set$. Based on this idea, we define a $2$\precategory that allows us to compute the connected components of a $2$-cell of~$\freecat\Q$. Let~$\N_m$ be the set $\set{1,\ldots,m}$ for $m \ge 0$. We define the $2$\precategory~$\cospancat$ as the $2$\precategory such that: \begin{itemize} \item it has a unique $0$-cell, denoted $\ast$, \item the $1$-cells are the natural numbers, with $0$ as unit and addition as composition, \item the $2$-cells $m \To n$ are the equivalence classes of cospans $\tikzcdin{\N_m \ar[r,"f"]\& S \& \N_n \ar[l,"g"']}$ in~$\Set$, \end{itemize} where two cospans $ \begin{tikzcd}[cramped,column sep=small] A \ar[r,"f"] & S & \ar[l,"g"'] B \end{tikzcd} $ and $ \begin{tikzcd}[cramped,column sep=small] A \ar[r,"f'"] & S' & \ar[l,"g'"'] B \end{tikzcd} $ are said \emph{equivalent} when there exists an isomorphism $h\co S \to S' \in \Set$ such that $f' = h \circ f$ and $g' = h \circ g$.
The unit of $m \in \cospancat_1$ is the cospan $\tikzcdin{\N_m \ar[r,"\id {\N_m}"] \& \N_m \& \N_m \ar[l,"\id {\N_m}"']}$, and, given $\phi \co m_1 \To m_2 \in \cospancat_2$ and $\psi \co m_2 \To m_3 \in \cospancat_2$, represented by the cospans \[ \tikzcdin{\N_{m_1} \ar[r,"f"] \& S \& \N_{m_2} \ar[l,"g"']} \qtand \tikzcdin{\N_{m_2} \ar[r,"f'"] \& S' \& \N_{m_3} \ar[l,"g'"']} \] respectively, their composite is represented by the cospan \[ \begin{tikzcd}[sep=small] & & S'' \ar[dd,phantom,"{\dcorner}",very near start] & &\\ & S \ar[ru,"h",dotted] & & S' \ar[lu,"h'"',dotted] \\ \N_{m_1} \ar[ru,"f"] & & \N_{m_2} \ar[lu,"g"'] \ar[ru,"{f'}"] & & \N_{m_3} \ar[lu,"{g'}"'] \end{tikzcd} \] where the middle square is a pushout. Given $\phi\co m \To n \in \cospancat_2$ represented by \[ \tikzcdin{\N_{m} \ar[r,"f"] \& S \& \N_{n} \ar[l,"g"']} \] and $p,q \in \cospancat_1$, the $2$-cell $p \comp_0 \phi \comp_0 q$ is represented by the cospan \[ \begin{tikzcd}[sep=small] & \N_p \sqcup S \sqcup \N_q & \\ \N_{p + m + q} \ar[ru,pos=0.3,"(\id {\N_p} \sqcup f \sqcup \id {\N_q}) \circ \theta_{p,m,q}"] & & \N_{p + n + q} \ar[lu,pos=0.3,"(\id {\N_p} \sqcup g \sqcup \id {\N_q}) \circ \theta_{p,n,q}"'] \end{tikzcd} \] where $\theta_{p,r,q}\co \N_{p + r + q} \to \N_{p} \sqcup \N_r \sqcup \N_q$, for $r \in \N$, is the obvious bijection. One easily verifies that $\cospancat$ is in fact a $2$\category (a fact that will be useful when dealing with interchange generators later). Given a $2$\prepolygraph~$\Q$, by the universal property of $2$\prepolygraphs, we define a $2$\prefunctor $\coninterp_\Q\co \freecat\Q \to \cospancat$ such that \begin{itemize} \item the image of $x \in \Q_0$ is $\ast$, \item the image of $a \in \Q_1$ is $1$, \item the image of $\alpha\co f \To g \in \Q_2$ is represented by the unique cospan $\tikzcdin{\N_{\len f} \ar[r,"\ast"] \& \set{\ast} \& \N_{\len g} \ar[l,"\ast"']}$. \end{itemize} We can now give our definition for connectedness: a $2$-cell $\phi \in \freecat\Q_2$ is \emph{connected} when $\coninterp_\Q(\phi)$ is represented by a cospan $\tikzcdin{\N_{m} \ar[r,"f"] \& S \& \N_{n} \ar[l,"g"']}$, with $m = \len{\csrc_1(\phi)}$ and $n = \len{\ctgt_1(\phi)}$, such that $f,g$ are jointly epimorphic. Since the latter property is invariant under equivalence of cospans, if $\phi$ is connected, then for every representative $\tikzcdin{\N_{m} \ar[r,"f"] \& S \& \N_{n} \ar[l,"g"']}$ of $\coninterp_\Q(\phi)$, $f,g$ are jointly epimorphic. \bigskip \noindent As one might expect, connectedness is preserved by interchangers in general: \begin{lem} \label{lem:connectivity-int} Let $\P$ be a $2$\prepolygraph. Let $\alpha,\beta \in \P_2$ and $g \in \freecat\P_1$ such that $\alpha,g,\beta$ are $0$\composable. Then, \[ \coninterp_\P((\alpha \comp_0 g \comp_0 \csrc_1(\beta)) \comp_1 (\ctgt_1(\alpha) \comp_0 g \comp_0 \beta)) = \coninterp_\P((\csrc_1(\alpha) \comp_0 g \comp_0 \beta) \comp_1 (\alpha \comp_0 g \comp_0 \ctgt_1(\beta))) \] \end{lem} \begin{proof} This is a direct consequence of the fact that $\cospancat$ is a strict $2$\category. \end{proof} \noindent Moreover, in the case of~$\PAdj$, the $3$\generators $\adjN$ and $\adjNinv$ do not change connectedness: \begin{lem} \label{lem:adj-connectivity-3gen} We have \[ \coninterp_{\PAdj}((\eta \comp_0 \appfont f) \comp_1 (\appfont f \comp_0 \varepsilon)) = \coninterp_{\PAdj}(\unit{\appfont f}) \] and \[ \coninterp_{\PAdj}( (\appfont g \comp_0 \eta) \comp_1 (\varepsilon \comp_0 \appfont g)) = \coninterp_{\PAdj}(\unit{\appfont g}).
\] \end{lem} \begin{proof} By calculations, we verify that \[ \begin{tikzcd}[sep=small] & \set{\ast} & \\ \N_{1} \ar[ru,"\ast"] & & \N_{1} \ar[lu,"\ast"'] \end{tikzcd} \] is a representative of both $\coninterp_{\PAdj}((\eta \comp_0 \appfont f) \comp_1 (\appfont f \comp_0 \varepsilon))$ and $\coninterp_{\PAdj}(\unit{\appfont f})$, so that \[ \coninterp_{\PAdj}((\eta \comp_0 \appfont f) \comp_1 (\appfont f \comp_0 \varepsilon)) = \coninterp_{\PAdj}(\unit{\appfont f}) \] and similarly, \[ \coninterp_{\PAdj}( (\appfont g \comp_0 \eta) \comp_1 (\varepsilon \comp_0 \appfont g)) = \coninterp_{\PAdj}(\unit{\appfont g}). \qedhere \] \end{proof} \noindent We now prove a technical lemma that we will use to show the connectedness of the $2$-cells in~$\freecat{\PAdj}_2$: \begin{lem} \label{lem:epi-cospan-connectivity} Let $\P$ be a $2$\prepolygraph and $\phi,\phi' \in \freecat\P_2$ and $\tikzcdin{\N_{n_1} \ar[r,"f"] \& S \& \N_{n_2} \ar[l,"g"']}$ be a representative of $\coninterp_\P(\phi)$ for some $n_1,n_2 \in \N$ such that $\phi,\phi'$ are $1$\composable and $f$ is surjective. Then, $\phi \comp_1 \phi'$ is connected if and only if $\phi'$ is connected. \end{lem} \begin{proof} Let $\tikzcdin{\N_{n_2} \ar[r,"f'"] \& S' \& \N_{n_3} \ar[l,"g'"']}$ be a representative of~$\coninterp_\P(\phi')$ for some $n_2,n_3 \in \N$. Then, $\coninterp_\P(\phi \comp_1 \phi')$ is represented by $\tikzcdin[sep=2.5em]{\N_{n_1} \ar[r,"f'' \circ f"] \& S'' \& \N_{n_3} \ar[l,"g'' \circ g'"']}$ where $S''$, $f''$ and~$g''$ are defined by the pushout of~$g$ and~$f'$ as in \[ \begin{tikzcd}[sep=small] & & S'' \ar[dd,phantom,very near start,"{\dcorner}"] & &\\ & S \ar[ru,"f''",dotted]& & S' \ar[lu,"g''"',dotted] \\ \N_{n_1} \ar[ru,"f"] & & \N_{n_2} \ar[lu,"g"'] \ar[ru,"f'"] & & \N_{n_3} \ar[lu,"g'"'] \end{tikzcd} \pbox. \] Suppose that $\phi'$ is connected, \ie $f'$ and~$g'$ are jointly surjective. Since $f$ is surjective by hypothesis and~$f''$ and~$g''$ are jointly surjective (by the universal property of the pushout), we have that $f'' \circ f, g'' \circ f', g'' \circ g'$ are jointly surjective. Moreover, \[ g'' \circ f' = f'' \circ g = f'' \circ f \circ h \] where $h$ is a factorization of~$g$ through~$f$ (which exists since $f$ is assumed to be surjective). Thus, we conclude that $f'' \circ f$ and $g'' \circ g'$ are jointly surjective. Conversely, suppose that $f'' \circ f$ and $g'' \circ g'$ are jointly surjective and let $y \in S'$. We have to show that $y$ is in the image of~$f'$ or~$g'$. Recall that \[ S'' \simeq (S \sqcup S')/\sim \] where $\sim$ is the equivalence relation induced by $g(x) \sim f'(x)$ for $x \in \N_{n_2}$: either $y$ is in the image of~$f'$, or we have both that $y$ is the only preimage of $g''(y)$ by $g''$ and $g''(y)$ is not in the image of~$f''$. In the former case, we conclude directly, and in the latter, since $f'' \circ f$ and $g'' \circ g'$ are jointly surjective, there is $x \in \N_{n_3}$ such that $g'' \circ g' (x) = g''(y)$, so that $g'(x) = y$, which is what we wanted. Thus, $f'$ and~$g'$ are jointly surjective, \ie $\phi'$ is connected. \end{proof} \noindent We can now prove our connectedness result for pseudoadjunctions: \begin{prop} \label{prop:adj-connex} For every $\phi \in \freecat{\PAdj}_2$, $\phi$ is connected. \end{prop} \begin{proof} Suppose by contradiction that it is not true and let $N \in \N$ be the smallest natural number such that the set $S = \set{\phi \in \freecat{\PAdj}_2 \mid \len\phi = N \text{ and } \phi \text{ is not connected}}$ is not empty.
Given $\phi \in S$, let \[ (f_1 \comp_0 \alpha_1 \comp_0 h_1) \comp_1 \cdots \comp_1 (f_N \comp_0 \alpha_N \comp_0 h_N) \] be a decomposition of~$\phi$. Note that there is at least one $i \in \set{1,\ldots,N}$ such that $\alpha_i = \varepsilon$. Indeed, given $f,h \in \freecat{\PAdj}_1$ such that $f,\eta,h$ are $0$\composable, a representative $\tikzcdin{\N_{m} \ar[r,"u"] \& T \& \N_n \ar[l,"v"']}$ of $\coninterp_{\PAdj}(f \comp_0 \eta \comp_0 h)$ has the property that $v$ is an epimorphism. Since epimorphisms are stable under pushout, given $\phi' \in \freecat{\PAdj}_2$ such that $\phi' = (f'_1 \comp_0 \eta \comp_0 h'_1) \comp_1 \cdots \comp_1 (f'_k \comp_0 \eta \comp_0 h'_k)$ with $f'_i,h'_i \in \freecat{\PAdj}_1$ for $i \in \set{1,\ldots,k}$, a representative $\tikzcdin{\N_{m'} \ar[r,"u'"] \& T' \& \N_{n'} \ar[l,"v'"']}$ of $\coninterp_{\PAdj}(\phi')$ has the property that $v'$ is an epimorphism (by induction on~$k$), and in particular, $\phi'$ is connected. Consider the minimal index $i_0$ such that there is $\phi \in S$ with a decomposition as above satisfying $\alpha_{i_0} = \varepsilon$. Suppose first that $i_0 = 1$. Then, given a representative $\tikzcdin{\N_{m_1} \ar[r,"u_1"] \& T_1 \& \N_{m_2} \ar[l,"v_1"']}$ of $\coninterp_{\PAdj}(f_1 \comp_0 \alpha_1 \comp_0 h_1)$, we easily check that $u_1$ is an epimorphism. By \Lemr{epi-cospan-connectivity}, we deduce that \[ (f_2 \comp_0 \alpha_2 \comp_0 h_2) \comp_1 \cdots \comp_1 (f_N \comp_0 \alpha_N \comp_0 h_N) \] is not connected, contradicting the minimality of~$N$. Suppose now that $i_0 > 1$. By the definition of $i_0$, we have $\alpha_{i_0 - 1} = \eta$. There are different cases depending on~$\len{f_{i_0-1}}$ (see~\Cref{fig:adj-conn}): \begin{figure} \centering \[ \satex{adj-conn-fig-1} \qquad \qquad \satex{adj-conn-fig-2} \qquad \qquad \satex{adj-conn-fig-3} \qquad \qquad \satex{adj-conn-fig-4} \qquad \qquad \satex{adj-conn-fig-5} \] \caption{The different cases in the proof of \Cref{prop:adj-connex}} \label{fig:adj-conn} \end{figure} \begin{itemize} \item if $\len{f_{i_0-1}} \le \len{f_{i_0}} - 2$, then, since $\ctgt_1(f_{i_0 - 1} \comp_0 \alpha_{i_0 - 1} \comp_0 h_{i_0 - 1}) = \csrc_1(f_{i_0} \comp_0 \alpha_{i_0} \comp_0 h_{i_0})$, we have \[ f_{i_0} = f_{i_0 - 1} \comp_0 \ctgt_1(\eta) \comp_0 g \qtand h_{i_0 - 1} = g \comp_0 \csrc_1(\varepsilon) \comp_0 h_{i_0} \] for some $g \in \freecat{\PAdj}_1$. By \Lemr{connectivity-int}, we have \[ \coninterp_{\PAdj}( (\eta \comp_0 g \comp_0 \csrc_1(\varepsilon)) \comp_1 (\ctgt_1(\eta) \comp_0 g \comp_0 \varepsilon)) = \coninterp_{\PAdj}((\csrc_1(\eta) \comp_0 g \comp_0 \varepsilon) \comp_1 (\eta \comp_0 g \comp_0 \ctgt_1(\varepsilon))) \] thus, by functoriality of $\coninterp_{\PAdj}$, the morphism $\phi'$ defined by \begin{multline*} \phi' = (f_1 \comp_0 \alpha_1 \comp_0 h_1) \comp_1 \cdots \comp_1 (f_{i_0 - 2} \comp_0 \alpha_{i_0 - 2} \comp_0 h_{i_0 - 2}) \\ \comp_1 (f_{i_0 - 1} \comp_0 g \comp_0 \varepsilon \comp_0 h_{i_0}) \comp_1 (f_{i_0 - 1} \comp_0 \eta \comp_0 g \comp_0 h_{i_0}) \\ \comp_1 (f_{i_0 + 1} \comp_0 \alpha_{i_0 + 1} \comp_0 h_{i_0+1}) \comp_1 \cdots \comp_1 (f_N \comp_0 \alpha_N \comp_0 h_N) \end{multline*} satisfies $\coninterp_{\PAdj}(\phi) = \coninterp_{\PAdj}(\phi')$.
So $\phi'$ is not connected, and the $(i_0 {-} 1)$-th $2$\generator in the decomposition of $\phi'$ is $\varepsilon$, contradicting the minimality of~$i_0$; \item if $\len{f_{i_0-1}} \ge \len{f_{i_0}} + 2$, then the case is similar to the previous one; \item if $\len{f_{i_0 - 1}} = \len{f_{i_0}} - 1$, then, since $\coninterp_{\PAdj}((\eta \comp_0 \appfont f) \comp_1 (\appfont f \comp_0 \varepsilon)) = \coninterp_{\PAdj}(\unit{\appfont f})$ by \Lemr{adj-connectivity-3gen}, the $2$-cell $\phi'$ defined by \begin{multline*} \phi' = (f_1 \comp_0 \alpha_1 \comp_0 h_1) \comp_1 \cdots \comp_1 (f_{i_0 - 2} \comp_0 \alpha_{i_0 - 2} \comp_0 h_{i_0 - 2}) \\ \comp_1 (f_{i_0 + 1} \comp_0 \alpha_{i_0 + 1} \comp_0 h_{i_0+1}) \comp_1 \cdots \comp_1 (f_N \comp_0 \alpha_N \comp_0 h_N) \end{multline*} satisfies $\coninterp_{\PAdj}(\phi) = \coninterp_{\PAdj}(\phi')$ (by functoriality of~$\coninterp_{\PAdj}$), so that $\phi'$ is not connected, contradicting the minimality of~$N$; \item if $\len{f_{i_0-1}} = \len{f_{i_0}} + 1$, then the situation is similar to the previous one, since, by \Lemr{adj-connectivity-3gen}, \[ \coninterp_{\PAdj}( (\appfont g \comp_0 \eta) \comp_1 (\varepsilon \comp_0 \appfont g)) = \coninterp_{\PAdj}(\unit{\appfont g}); \] \item finally, the case $\len{f_{i_0-1}} = \len{f_{i_0}}$ is impossible since \[ f_{i_0 - 1} \comp_0 \ctgt_1(\alpha_{i_0 - 1}) \comp_0 h_{i_0 - 1} = f_{i_0} \comp_0 \csrc_1(\alpha_{i_0}) \comp_0 h_{i_0} \] and \[ \ctgt_1(\alpha_{i_0 - 1}) = \appfont f \comp_0 \appfont g \neq \appfont g \comp_0 \appfont f = \csrc_1(\alpha_{i_0}).\qedhere \] \end{itemize} \end{proof} \noindent We are now able to prove termination: \begin{prop} \label{prop:adj-terminating} The rewriting system $\PAdj$ is terminating. \end{prop} \begin{proof} Suppose by contradiction that there is an infinite sequence $S_i\co \phi_i \TO \phi_{i+1}$ for $i \ge 0$ with $S_i$ a rewriting step in~$\freecat{\PAdj}_3$. Since \[ \len{\csrc_2(\adjN)} = \len{\csrc_2(\adjNinv)} = 2 \qtand \len{\ctgt_2(\adjN)} = \len{\ctgt_2(\adjNinv)} = 0, \] if the inner $3$\generator of~$S_i$ is~$\adjN$ or~$\adjNinv$, for some $i \ge 0$, then $\len{\phi_{i+1}} = \len{\phi_i} - 2$. Since \[ \len{\csrc_2(X_{\alpha,f,\beta})} = \len{\ctgt_2(X_{\alpha,f,\beta})} = 2 \] for $0$\composable $\alpha \in \PAdj_2$, $f \in \freecat{\PAdj}_1$, $\beta \in \PAdj_2$, it means that there is $i_0 \ge 0$ such that for $i \ge i_0$, the inner generator of~$S_i$ is an interchanger. By \cite[Thm.~16]{delpeuch2018normalization}, there is no infinite sequence of rewriting steps made of interchangers starting from a connected $2$-cell. Thus, by \Propr{adj-connex}, there is no infinite sequence of rewriting steps whose inner $3$\generator is an interchanger of~$\PAdj$, contradicting the existence of $(S_i)_{i \ge 0}$. Thus, $\PAdj$ is terminating. \end{proof} \noindent Finally, we can apply our coherence criterion and show that: \begin{theo} \label{thm:adj-coherent} $\PAdj$ is a coherent Gray presentation. \end{theo} \begin{proof} By \Propr{adj-terminating}, $\restrictcat\PAdj 3$ is terminating. Since $R_1,R_2 \in \PAdj_4$, by \Thmr{squier}, the conclusion follows. \end{proof} \subsection{Self-dualities} \label{ssec:app-untyped-adjunction} \input{app-untyped} \subsection{Frobenius pseudomonoids} \label{ssec:app-frobenius-monoid} We now consider the example of Frobenius pseudomonoids~\cite{street2004frobenius}, which categorify the classical notion of Frobenius monoid.
Sadly, it is only a partial example since we were not able to handle the units of the structure (if we add them, the critical branchings are not confluent) and to show that our presentation is terminating, even though we believe that the latter is true. We nevertheless give the computation of critical branchings for this example, hoping that a termination argument will be found later. We define the $3$\prepolygraph~$\P$ for (non-unitary) Frobenius pseudomonoids as follows. We put \[ \P_0 = \set{\ast} \qtand \P_1 = \set{\bar 1} \qtand \P_2 = \set{\mu\co \bar 2 \to \bar 1, \delta\co \bar 1 \to \bar 2} \] where we denote $\bar n$ by $\underbrace{\bar 1 \comp_0 \cdots \comp_0 \bar 1}_n$ for $n \in \N$. We picture $\mu$ and $\delta$ by $\satex{mu}$ and $\satex{delta}$ respectively, and we define $\P_3$ by $\P_3 = \set{\frobN,\frobNinv,\frobA,\frobAco,\frobM,\frobMco}$ where \begin{align*} \satex{frob-l}&\xTO{\mathsf{N}}\satex{frob-c} & \satex{frob-r}&\xTO{\frobNinv}\satex{frob-c} & \satex{frob-assoc-l}&\xTO{\frobA}\satex{frob-assoc-r} & \satex{frob-coassoc-l}&\xTO{\frobAco}\satex{frob-coassoc-r} & \satex{frob-bmu-l}&\xTO{\frobM}\satex{frob-bmu-r} & \satex{frob-bdelta-l}&\xTO{\frobMco}\satex{frob-bdelta-r} \end{align*} As before, we then extend $\P$ to a Gray presentation by adding $3$\generators corresponding to interchange generators and $4$\generators corresponding to independence generators and interchange naturality generators. Using the constructive proof of~\Thmr{finite-cp}, we find 19 critical branchings, and we use them to define a set of nineteen $4$-generators $R_1,\ldots,R_{19}$ that we add to~$\P_4$. These critical branchings are shown in~\Cref{fig:pseudofrobenius-cps}. \begin{figure}[p]\ContinuedFloat* \centering \begingroup \def5em{5em} \openup15pt \begin{gather*} \mathmakebox[\linewidth][c]{\hfil\begin{tikzcd}[row sep={5.8em,between origins},column sep={5em,between origins},ampersand replacement=\&,baseline=(\tikzcdmatrixname-2-1.center)] \satexnoscale[scale=0.8]{ass-zig1} \ar[d,equal]\tar[r]\ar[rd,phantom,"\DDownarrow R_1"description] \& \satexnoscale[scale=0.8]{ass-zig2} \tar[d] \\ \satexnoscale[scale=0.8]{ass-zig1} \tar[r] \& \satexnoscale[scale=0.8]{ass-zig3} \end{tikzcd} \hfil \begin{tikzcd}[row sep={5.8em,between origins},column sep={5em,between origins},ampersand replacement=\&,baseline=(\tikzcdmatrixname-2-1.center)] \satexnoscale[scale=0.8]{coass-zig1} \ar[rd,phantom,"\DDownarrow R_{2}",start anchor=center,end anchor=center] \tar[r] \ar[d,equal] \& \satexnoscale[scale=0.8]{coass-zig2} \tar[d] \\ \satexnoscale[scale=0.8]{coass-zig1} \tar[r] \& \satexnoscale[scale=0.8]{coass-zig3} \end{tikzcd} \hfil \begin{tikzcd}[row sep={5.8em,between origins},column sep={5em,between origins},ampersand replacement=\&,baseline=(\tikzcdmatrixname-2-1.center)] \satexnoscale[scale=0.8]{cotrans-ass1} \ar[d,equal] \tar[r] \ar[rd,start anchor=center,end anchor=center,phantom,"\DDownarrow R_3"] \& \satexnoscale[scale=0.8]{cotrans-ass2} \tar[d] \\ \satexnoscale[scale=0.8]{cotrans-ass1} \tar[r] \& \satexnoscale[scale=0.8]{cotrans-ass3} \end{tikzcd} \hfil \begin{tikzcd}[row sep={5.8em,between origins},column sep={5em,between origins},ampersand replacement=\&,baseline=(\tikzcdmatrixname-2-1.center)] \satexnoscale[scale=0.8]{coass-trans1} \tar[r] \ar[d,equal]\ar[rd,"\DDownarrow R_4",start anchor=center,end anchor=center,phantom] \& \satexnoscale[scale=0.8]{coass-trans2} \tar[d] \\ \satexnoscale[scale=0.8]{coass-trans1} \tar[r] \& \satexnoscale[scale=0.8]{coass-trans3} \end{tikzcd}\hfil} \\ 
\mathmakebox[\linewidth][c]{\hfil\begin{tikzcd}[row sep={4.9em,between origins},column sep={5em,between origins},ampersand replacement=\&,baseline=(\tikzcdmatrixname-2-1.center)] \satexnoscale[scale=0.8]{mon-hex1}\tar[d]\tar[r]\&\satexnoscale[scale=0.8]{mon-hex2}\tar[r]\ar[d,phantom,"\DDownarrow R_5"description]\&\satexnoscale[scale=0.8]{mon-hex3}\tar[d]\\ \satexnoscale[scale=0.8]{mon-hex6}\tar[r]\&\satexnoscale[scale=0.8]{mon-hex5}\tar[r]\&\satexnoscale[scale=0.8]{mon-hex4} \end{tikzcd} \hfil \begin{tikzcd}[row sep={4.9em,between origins},column sep={5em,between origins},ampersand replacement=\&,baseline=(\tikzcdmatrixname-2-1.center)] \satexnoscale[scale=0.8]{comon-hex1} \tar[d] \tar[r] \& \satexnoscale[scale=0.8]{comon-hex2} \tar[r] \ar[d,phantom,"\DDownarrow R_6"description]\& \satexnoscale[scale=0.8]{comon-hex3} \tar[d] \\ \satexnoscale[scale=0.8]{comon-hex4} \tar[r] \& \satexnoscale[scale=0.8]{comon-hex5} \tar[r] \& \satexnoscale[scale=0.8]{comon-hex6} \end{tikzcd}\hfil} \\ \mathmakebox[\linewidth][c]{\hfil\begin{tikzcd}[row sep={4.9em,between origins},column sep={5em,between origins},ampersand replacement=\&,baseline=(\tikzcdmatrixname-2-1.center)] \satexnoscale[scale=0.8]{zag-ass1} \tar[r] \tar[d] \& \satexnoscale[scale=0.8]{zag-ass2} \tar[r]\ar[d,phantom,"\DDownarrow R_7"description] \& \satexnoscale[scale=0.8]{zag-ass3} \tar[d] \\ \satexnoscale[scale=0.8]{zag-ass4} \tar[r] \& \satexnoscale[scale=0.8]{zag-ass5} \tar[r] \& \satexnoscale[scale=0.8]{zag-ass6} \end{tikzcd} \hfil \begin{tikzcd}[row sep={4.9em,between origins},column sep={5em,between origins},ampersand replacement=\&,baseline=(\tikzcdmatrixname-2-1.center)] \satexnoscale[scale=0.8]{coass-zag1} \tar[r] \tar[d] \& \satexnoscale[scale=0.8]{coass-zag4} \tar[r] \ar[d,phantom,"\DDownarrow R_8"description] \& \satexnoscale[scale=0.8]{coass-zag5} \tar[d] \\ \satexnoscale[scale=0.8]{coass-zag2} \tar[r] \& \satexnoscale[scale=0.8]{coass-zag3} \tar[r] \& \satexnoscale[scale=0.8]{coass-zag6} \end{tikzcd}\hfil} \\ \mathmakebox[\linewidth][c]{\hfil\begin{tikzcd}[ampersand replacement=\&,row sep={4.9em,between origins},column sep={5em,between origins},baseline=(\tikzcdmatrixname-2-1.center)] \satexnoscale[scale=0.8]{zig-echmu1} \tar[d] \tar[r] \& \satexnoscale[scale=0.8]{zig-echmu2} \tar[r] \ar[d,phantom,"\DDownarrow R_{9}"description]\& \satexnoscale[scale=0.8]{zig-echmu3} \tar[d] \\ \satexnoscale[scale=0.8]{zig-echmu4} \tar[r] \& \satexnoscale[scale=0.8]{zig-echmu5} \tar[r] \& \satexnoscale[scale=0.8]{zig-echmu6} \end{tikzcd} \hfil \begin{tikzcd}[row sep={4.9em,between origins},column sep={5em,between origins},ampersand replacement=\&,baseline=(\tikzcdmatrixname-2-1.center)] \satexnoscale[scale=0.8]{echde-zig1} \tar[d] \tar[r] \& \satexnoscale[scale=0.8]{echde-zig2} \tar[r] \ar[d,phantom,"\DDownarrow R_{10}"description]\& \satexnoscale[scale=0.8]{echde-zig3} \tar[d] \\ \satexnoscale[scale=0.8]{echde-zig4} \tar[r] \& \satexnoscale[scale=0.8]{echde-zig5} \tar[r] \& \satexnoscale[scale=0.8]{echde-zig6} \end{tikzcd}\hfil} \end{gather*} \endgroup \caption{The critical branchings for Frobenius pseudomonoids} \label{fig:pseudofrobenius-cps} \end{figure}% \begin{figure}[p!]\ContinuedFloat \centering \begingroup \def5em{5em} \openup15pt \begin{gather*} \mathmakebox[\linewidth][c]{\hfil\begin{tikzcd}[row sep={6.2em,between origins},column sep={5em,between origins},ampersand replacement=\&,baseline=(\tikzcdmatrixname-2-1.center)] \satexnoscale[scale=0.8]{echmu-zig4} \tar[r] \& \satexnoscale[scale=0.8]{echmu-zig5} \tar[r] 
\ar[d,phantom,"\DDownarrow R_{11}"description] \& \satexnoscale[scale=0.8]{echmu-zig6} \tar[d] \\ \satexnoscale[scale=0.8]{echmu-zig1} \tar[u] \tar[r] \& \satexnoscale[scale=0.8]{echmu-zig2} \tar[r] \& \satexnoscale[scale=0.8]{echmu-zig3} \end{tikzcd} \hfil \begin{tikzcd}[ampersand replacement=\&,row sep={6.2em,between origins},column sep={5em,between origins},baseline=(\tikzcdmatrixname-2-1.center)] \satexnoscale[scale=0.8]{zig-echde4} \tar[r] \& \satexnoscale[scale=0.8]{zig-echde5} \tar[r]\ar[d,phantom,"\DDownarrow R_{12}"description] \& \satexnoscale[scale=0.8]{zig-echde6} \tar[d] \\ \satexnoscale[scale=0.8]{zig-echde1} \tar[u] \tar[r] \& \satexnoscale[scale=0.8]{zig-echde2} \tar[r] \& \satexnoscale[scale=0.8]{zig-echde3} \end{tikzcd}\hfil} \\ \begin{tikzcd}[ampersand replacement=\&,row sep={6.2em,between origins},column sep={5em,between origins},baseline=(\tikzcdmatrixname-2-1.center)] \satexnoscale[scale=0.8]{cotrans-trans1} \tar[r] \tar[d] \& \satexnoscale[scale=0.8]{cotrans-trans2} \ar[d,phantom,"\DDownarrow R_{13}"]\tar[r,""{name=arrb}] \& \satexnoscale[scale=0.8]{cotrans-trans3} \tar[d] \\ \satexnoscale[scale=0.8]{cotrans-trans5} \tar[r] \& \satexnoscale[scale=0.8]{cotrans-trans6} \tar[r,""{name=arre}] \& \satexnoscale[scale=0.8]{cotrans-trans4} \end{tikzcd} \\ \begin{tikzcd}[row sep={6.2em,between origins},column sep={5em,between origins},ampersand replacement=\&,baseline=(\tikzcdmatrixname-2-1.center)] \satexnoscale[scale=0.8]{trans-ass1} \tar[r] \tar[d] \& \satexnoscale[scale=0.8]{trans-ass2} \tar[r,""{name=arrb,auto=false}] \& \satexnoscale[scale=0.8]{trans-ass3} \tar[r] \& \satexnoscale[scale=0.8]{trans-ass4} \tar[d] \\ \satexnoscale[scale=0.8]{trans-ass5} \tar[r] \& \satexnoscale[scale=0.8]{trans-ass6} \tar[r,""{name=arre,auto=false}] \& \satexnoscale[scale=0.8]{trans-ass7} \tar[r] \& \satexnoscale[scale=0.8]{trans-ass8} \ar[from=arrb,to=arre,"\DDownarrow R_{14}",phantom] \end{tikzcd} \qquad \begin{tikzcd}[row sep={6.2em,between origins},column sep={5em,between origins},ampersand replacement=\&,baseline=(\tikzcdmatrixname-2-1.center)] \satexnoscale[scale=0.8]{coass-cotrans1} \tar[r] \tar[d] \& \satexnoscale[scale=0.8]{coass-cotrans2} \tar[r,""{name=arrb,auto=false}] \& \satexnoscale[scale=0.8]{coass-cotrans3} \tar[r] \& \satexnoscale[scale=0.8]{coass-cotrans4} \tar[d] \\ \satexnoscale[scale=0.8]{coass-cotrans5} \tar[r] \& \satexnoscale[scale=0.8]{coass-cotrans6} \tar[r,""{name=arre,auto=false}] \& \satexnoscale[scale=0.8]{coass-cotrans7} \tar[r] \& \satexnoscale[scale=0.8]{coass-cotrans8}\ar[from=arrb,to=arre,phantom,"\DDownarrow R_{15}"description] \end{tikzcd} \end{gather*} \endgroup \caption{The critical branchings for Frobenius pseudomonoids} \label{fig:pseudofrobenius-cpsbis} \end{figure}% \begin{figure}[p!]\ContinuedFloat \centering \begingroup \def5em{5em} \openup15pt \begin{gather*} \begin{tikzcd}[ampersand replacement=\&,row sep={6.2em,between origins},column sep={5em,between origins},baseline=(\tikzcdmatrixname-2-1.center)] \satexnoscale[scale=0.8]{echde-trans1} \tar[r] \tar[d] \& \satexnoscale[scale=0.8]{echde-trans2} \tar[r,""{name=arrb,auto=false}] \& \satexnoscale[scale=0.8]{echde-trans3} \tar[r] \& \satexnoscale[scale=0.8]{echde-trans4} \tar[d] \\ \satexnoscale[scale=0.8]{echde-trans5} \tar[r] \& \satexnoscale[scale=0.8]{echde-trans6} \tar[r,""{name=arre,auto=false}] \& \satexnoscale[scale=0.8]{echde-trans7} \tar[r] \& \satexnoscale[scale=0.8]{echde-trans8}\ar[from=arrb,to=arre,phantom,"\DDownarrow R_{16}"] \end{tikzcd} \qquad \begin{tikzcd}[ampersand 
replacement=\&,row sep={6.2em,between origins},column sep={5em,between origins},baseline=(\tikzcdmatrixname-2-1.center)] \satexnoscale[scale=0.8]{cotrans-echmu1} \tar[r] \tar[d] \& \satexnoscale[scale=0.8]{cotrans-echmu2} \tar[r,""{name=arrb,auto=false}] \& \satexnoscale[scale=0.8]{cotrans-echmu3} \tar[r] \& \satexnoscale[scale=0.8]{cotrans-echmu4} \tar[d] \\ \satexnoscale[scale=0.8]{cotrans-echmu5} \tar[r] \& \satexnoscale[scale=0.8]{cotrans-echmu6} \tar[r,""{name=arre,auto=false}] \& \satexnoscale[scale=0.8]{cotrans-echmu7} \tar[r] \& \satexnoscale[scale=0.8]{cotrans-echmu8} \ar[from=arrb,to=arre,phantom,"\DDownarrow R_{17}"] \end{tikzcd} \\ \begin{tikzcd}[ampersand replacement=\&,row sep={6.2em,between origins},column sep={5em,between origins},baseline=(\tikzcdmatrixname-2-1.center)] \satexnoscale[scale=0.8]{echmu-trans4} \tar[r] \ar[rrrd,phantom,"\DDownarrow R_{18}",start anchor=center,end anchor=center] \& \satexnoscale[scale=0.8]{echmu-trans5} \tar[r] \& \satexnoscale[scale=0.8]{echmu-trans6} \tar[r] \& \satexnoscale[scale=0.8]{echmu-trans7} \tar[d] \\ \satexnoscale[scale=0.8]{echmu-trans1} \tar[r] \tar[u] \& \satexnoscale[scale=0.8]{echmu-trans2} \tar[r] \& \satexnoscale[scale=0.8]{echmu-trans3} \ar[r,equal] \& \satexnoscale[scale=0.8]{echmu-trans3} \end{tikzcd} \qquad \begin{tikzcd}[ampersand replacement=\&,row sep={6.2em,between origins},column sep={5em,between origins},baseline=(\tikzcdmatrixname-2-1.center)] \satexnoscale[scale=0.8]{cotrans-echde4} \tar[r] \ar[rrrd,phantom,"\DDownarrow R_{19}",start anchor=center,end anchor=center] \& \satexnoscale[scale=0.8]{cotrans-echde5} \tar[r] \& \satexnoscale[scale=0.8]{cotrans-echde6} \tar[r] \& \satexnoscale[scale=0.8]{cotrans-echde7} \tar[d] \\ \satexnoscale[scale=0.8]{cotrans-echde1} \tar[r] \tar[u] \& \satexnoscale[scale=0.8]{cotrans-echde2} \tar[r] \& \satexnoscale[scale=0.8]{cotrans-echde3} \ar[r,equal] \& \satexnoscale[scale=0.8]{cotrans-echde3} \end{tikzcd} \end{gather*} \endgroup \caption{The critical branchings for Frobenius pseudomonoids} \label{fig:pseudofrobenius-cpsbisbis} \end{figure}% We define then define $\PFrob$ as the Gray presentation obtained from~$\P$ by adding the $4$\generators $R_1,\ldots,R_{19}$ from above. Since we were not able to show termination, we conjecture it: \begin{conj} $\PFrob$ is terminating. \end{conj} \noindent From this assumption, we deduce that: \begin{theo} \label{thm:frob-coherent} If $\PFrob$ is terminating, then $\PFrob$ is a coherent Gray presentation. \end{theo} \begin{proof} This is a consequence of \Thmr{squier}. 
\end{proof} \subsection{Globular sets} \label{ssec:globular-sets} Given $n\in\N$, an \emph{$n$-globular set}~$C$ is a diagram of sets \[ \begin{tikzcd} C_0 & \ar[l,shift right,"\csrc_0"'] \ar[l,shift left,"\ctgt_0"] C_1 & \ar[l,shift right,"\csrc_1"'] \ar[l,shift left,"\ctgt_1"] C_2 & \ar[l,shift right,"\csrc_2"'] \ar[l,shift left,"\ctgt_2"] \ldots & \ar[l,shift right,"\csrc_{n-1}"'] \ar[l,shift left,"\ctgt_{n-1}"] C_n \end{tikzcd} \] such that $\csrc_i\circ \csrc_{i+1}=\csrc_i\circ \ctgt_{i+1}$ and $\ctgt_i\circ \csrc_{i+1}=\ctgt_i\circ \ctgt_{i+1}$ for $0\leq i<n-1$. An element~$u$ of $C_i$ is called an \emph{$i$-globe} of~$C$ and, for $i>0$, the globes $\csrc_{i-1}(u)$ and $\ctgt_{i-1}(u)$ are respectively called the \emph{source} and \emph{target} of~$u$. We write~$\nGlob n$ for the category of $n$-globular sets, a morphism $f\co C\to D$ being a family of morphisms $f_i\co C_i\to D_i$, for $0\leq i\leq n$, such that $\csrc_i\circ f_{i+1}=f_i\circ \csrc_i$ and $\ctgt_i\circ f_{i+1}=f_i\circ \ctgt_i$. Given $m \ge n$ and $C \in \nGlob m$, we denote by $\restrictcat C n$ the $n$\globular set obtained from~$C$ by removing the $i$-globes for $n < i \le m$. This operation extends to a functor $\restrictcat {(-)} n\co \nGlob m \to \nGlob n$. For simplicity, we often implicitly suppose that, in an $n$-globular set~$C$, the sets $C_i$ are pairwise disjoint and write $u\in C$ for $u\in\bigsqcup_iC_i$. For $\eps\in\set{-,+}$ and $k\geq 0$, we write \[ \partial_{i,k}^\eps=\partial_{i}^\eps\circ\partial_{i+1}^\eps\circ\cdots\circ\partial_{i+k-1}^\eps \] for the \emph{iterated source} (when $\eps=-$) and \emph{target} (when $\eps=+$) maps, where $\partial_i^-$ stands for $\csrc_i$ and $\partial_i^+$ for $\ctgt_i$. We generally omit the index~$k$ when it is clear from the context and sometimes simply write $\partial^\eps(u)$ for $\partial_{i,1}^\eps(u)$. Given $i,j,k\in \set{0,\ldots,n}$ with $k<i$ and $k<j$, we write $C_i\times_kC_j$ for the pullback \[ \begin{tikzcd}[column sep=tiny,row sep=small,cramped] & C_i\times_kC_j \ar[rd,dotted] \ar[ld,dotted] \ar[dd,phantom,"\dcorner",very near start]& \\ C_i\ar[rd,"\ctgt_k"']&&\ar[ld,"\csrc_k"]C_j \\ & C_k \end{tikzcd}\pbox. \] A sequence of globes $u_1 \in C_{i_1}, \ldots, u_p \in C_{i_p}$ is said \emph{$i$-composable}, for some $i \le \min(i_1,\ldots,i_p)$, when $\ctgt_{i}(u_j) = \csrc_i(u_{j+1})$ for $1 \le j < p$. Given $u,v \in C_{i+1}$ with $i < n$, $u$ and $v$ are said \emph{parallel} when $\csrctgt\eps(u) = \csrctgt\eps(v)$ for $\eps \in \set{-,+}$. For $u\in C_{i+1}$, we sometimes write $u\co v\to w$ to indicate that $\csrc_i(u)=v$ and $\ctgt_i(u)=w$.
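For instance, given an $n$\globular set~$C$ with $n \ge 2$ and $\phi \in C_2$, the iterated source $\partial_{0,2}^-(\phi) = \csrc_0(\csrc_1(\phi))$ is the $0$-globe at which $\phi$ starts, and the globular identities above ensure that it coincides with $\csrc_0(\ctgt_1(\phi))$.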
In low dimensions, we use $n$-arrows such as $\To$, $\TO$, $\TOO$, \etc to indicate the sources and the targets of $n$-globes in several dimensions. For example, given a $2$\globular set $C$ and $\phi \in C_2$, we sometimes write \[ \phi\co f \To g \co x \to y \] to indicate that $\csrc_1(\phi) = f$, $\ctgt_1(\phi) = g$, $\csrc_0(\phi) = x$ and $\ctgt_0(\phi) = y$. We also use these arrows in graphical representations to picture the elements of a globular set~$C$. For example, given an $n$\globular set~$C$ with $n \ge 2$, the drawing \begin{equation} \label{eq:some-globular-set} \begin{tikzcd}[sep=4em] x \ar[r,"f",bend left=50,""{auto=false,name=fst}] \ar[r,"g"{description},""{auto=false,name=snd}] \ar[r,"h"',bend right=50,""{auto=false,name=trd}] & y \ar[from=fst,to=snd,phantom,"\phantom{\scriptstyle\phi}\Downarrow\scriptstyle\phi"] \ar[from=snd,to=trd,phantom,"\phantom{\scriptstyle\psi}\Downarrow\scriptstyle\psi"] \ar[r,"k"] & z \end{tikzcd} \end{equation} depicts two $2$-cells $\phi,\psi \in C_2$, four $1$-cells $f,g,h,k \in C_1$ and three $0$-cells $x,y,z \in C_0$ such that \begin{gather*} \csrc_1(\phi) = f, \qquad \ctgt_1(\phi) = \csrc_1(\psi)= g, \qquad \ctgt_1(\psi) = h, \\ \csrc_0(f) = \csrc_0(g) = \csrc_0(h) = x, \qquad \ctgt_0(f) = \ctgt_0(g) = \ctgt_0(h) = \csrc_0(k) = y, \qquad \ctgt_0(k) = z. \end{gather*} \subsection[\texorpdfstring{$n$}{n}-precategories]{\texorpdfstring{$\bm n$}{n}-precategories} \label{ssec:precat-def} Given $n\in\N$, an \emph{$n$-precategory}~$C$ is an $n$-globular set equipped with \begin{itemize} \item identity functions $\unitp{k}{}\co C_{k-1}\to C_{k}$, for $0< k\le n$, \item composition functions $\comp_{k,l}\co C_k\times_{\min(k,l)-1}C_l\to C_{\max(k,l)}$, for $0<k,l\leq n$, \end{itemize} satisfying the axioms below. In this context, the elements of~$C_i$ are called \emph{$i$\cells}. Since the dimensions of the cells determine the functions to be used, we often omit the indices of $\unit{}$ and, given $0 < k,l \le n$ and $i = \min(k,l) - 1$, we often write~$\comp_{i}$ for $\comp_{k,l}$. For example, in a $2$\precategory which has a configuration of cells as in~\eqref{eq:some-globular-set}, there are, among others, $1$-cells $f\comp_0 k$, $h\comp_0 k$ and $2$-cells $\phi\comp_1\psi$ and $\psi\comp_0k$ given by the composition operations.
The axioms of $n$\precategories are the following: \begin{enumerate}[label=(\roman*),ref=(\roman*)] \item \label{precat:first}\label{precat:src-tgt-unit}for~$k < n$ and~$u\in C_k$, \[ \csrc_k(\unit {u})=u=\ctgt_k(\unit {u}), \] \item \label{precat:csrc-tgt}for~$i,k,l \in \set{0,\ldots,n}$ such that~$i = \min(k,l) -1$,~$(u,v)\in C_k\times_iC_l$, and~${\eps \in \set{-,+}}$, \begin{align*} \csrctgt\eps(u \comp_i v) &= \begin{cases} u\comp_i \csrctgt\eps(v)& \text{if~$k < l$,}\\ \csrc(u)&\text{if~$k=l$ and~$\eps = -$,}\\ \ctgt(v)&\text{if~$k=l$ and~$\eps = +$,}\\ \csrctgt\eps(u)\comp_i v&\text{if~$k>l$,} \end{cases} \end{align*} \item \label{precat:compat-id-comp}for~$i,k,l \in \set{0,\ldots,n}$ with~$i = \min(k,l) - 1$, given~$(u,v)\in C_{k-1}\times_{i}C_l$, \begin{align*} \unit u\comp_i v &= \begin{cases} v&\text{if~$k \le l$,}\\ \unit{u\comp_i v}&\text{if~$k > l$,} \end{cases} \shortintertext{and, given~$(u,v) \in C_k \times_i C_{l-1}$,} u\comp_i\unit{v} &= \begin{cases} u&\text{if~$l\le k$,}\\ \unit{u\comp_i v}&\text{if~$l > k$,} \end{cases} \end{align*} \item \label{precat:before-last} \label{precat:assoc}for~$i,k,l,m \in \set{0,\ldots,n}$ with~$i = \min(k,l) - 1 = \min(l,m) - 1$, and~$u \in C_k$,~$v \in C_l$ and~$w \in C_m$ such that~$u,v,w$ are $i$\composable, \[ (u\comp_iv)\comp_iw = u\comp_i(v\comp_iw), \] \item \label{precat:distrib}for~$i,j,k,l,l' \in \set{0,\ldots,n}$ such that \[ i = \min(k,\max(l,l')) - 1,\quad j = \min(l,l') - 1 \qtand i < j\zbox, \] given~$u \in C_k$ and~$(v,v') \in C_l \times_j C_{l'}$ such that~$u,v$ are $i$\composable, \[ u \comp_i (v \comp_j v') = (u \comp_i v) \comp_j (u \comp_i v') \] and, given~$(u,u') \in C_l \times_j C_{l'}$ and~$v \in C_k$ such that~$u,v$ are $i$\composable, \[ (u \comp_j u') \comp_i v = (u \comp_i v) \comp_j (u' \comp_i v). \] \end{enumerate} A morphism of $n$-precategories, called an \emph{$n$-prefunctor}, is a morphism between the underlying globular sets which preserves identities and compositions as expected. We write $\nPCat n$ for the category of $n$-precategories. The above description exhibits $n$\precategories as an essentially algebraic theory. Thus, $\nPCat n$ is a locally presentable category~\cite[Thm.~3.36]{adamek1994locally}; consequently, it is complete and cocomplete~\cite[Cor.~1.28]{adamek1994locally}. In the following, we write $\termcat$ for the terminal $n$\precategory for $n\ge 0$. In dimension $2$, string diagrams can be used as usual to represent compositions of~$2$\cells. For example, given a $2$\precategory~$C$ and $\phi\co f \To f' \in C_2$ and $\psi\co g \To g' \in C_2$ such that $\phi,\psi$ are $0$\composable, we can represent the $2$\cells \[ (\phi \comp_0 g) \comp_1 (f' \comp_0 \psi) \qquad\qqtand\qquad (f \comp_0 \psi) \comp_1 (\phi \comp_0 g') \] respectively by \[ \satex[scale=1.5]{phi-psi} \quad\qqtand\quad \satex[scale=1.5]{psi-phi} \pbox. \] Note however that, with our definition of precategories, the diagram \[ \satex[scale=1.5]{psiphi} \] makes no sense. \subsection{Truncation functors} \label{ssec:truncation-functors} Similarly to strict categories~\cite{metayer2008cofibrant}\todo{Métayer only does truncation and inclusion; is there a better reference to cite?}, the categories $\nPCat n$ for $n \ge 0$ can be related by several functors. For $m\geq n$, we have a truncation functor \[ \truncf{m}{n}\co \nPCat m \to\nPCat n \] where, given an $m$-precategory~$C$, $\truncf{m}{n}(C)$ is the $n$-precategory obtained by forgetting all the $i$-cells for $n < i \le m$.
This functor admits a left adjoint \[ \incf{n}{m}\co\nPCat n\to\nPCat m \] which, to an $n$\precategory~$C$, associates the $m$\precategory~$\incf{n}{m}(C)$ obtained by formally adding $i$\nbd-identities for $n < i \le m$, \ie $\incf{n}{m} (C)_i=C_i$ for $i \le n$ and $\incf{n}{m} (C)_i=C_n$ for $i > n$. \begin{prop} \label{prop:trunc-incf-la-ra} For $m > n$, the functors $\truncf{m}{n}$ and $\incf{n}{m}$ admit both left and right adjoints, \ie we have a sequence of adjunctions \[ \incfla{m}{n}\dashv\incf{n}{m}\dashv\truncf{m}{n}\dashv\truncfra n m. \] As a consequence, the functors $\truncf{m}{n}$ and $\incf{n}{m}$ preserve both limits and colimits. \end{prop} \begin{proof} Suppose given an $m$-precategory~$C$. % The $n$-precategory $\incfla m n(C)$ has the same $i$-cells as $C$ for $i<n$ and $\incfla m n(C)_n$ is obtained by quotienting $C_n$ under the smallest congruence $\sim$ such that $u\sim v$ whenever there exists an $(n+1)$-cell $\alpha\co u\to v$. % The $n$-precategory $\truncfra mn(C)$ has the same $i$-cells as $C$ for $0\leq i\leq n$ and, for $n\leq i<m$, $\truncfra mn(C)_{i+1}$ is defined from $\truncfra mn(C)_i$ as the set of pairs $(u,v)\in \truncfra mn(C)_i\times\truncfra mn(C)_i$ with $\csrc(u)=\csrc(v)$ and $\ctgt(u)=\ctgt(v)$, with $\csrc(u,v)=u$ as source and $\ctgt(u,v)=v$ as target. % Details are left to the reader. \end{proof} Given $n < m$, we write $\catsk n {(-)}$ for the functor~$\incf{n}{m} \circ \truncf{m}{n}\co \nPCat m \to \nPCat m$ and, given an $m$\precategory~$C$, we call~$\catsk n C$ the \emph{$n$\skeleton} of~$C$. It corresponds to the $m$\precategory obtained from~$C$ by removing all non-trivial $i$-cells with $i>n$. We write \[ \cucatsk{(-)}\co \catsk n {(-)} \to \id{\nPCat m} \] for the counit of the adjunction $\incf{n}{m} \dashv \truncf{m}{n}$. Since $\incf{n}{m}$ and $\truncf{m}{n}$ both preserve limits and colimits by \Propr{trunc-incf-la-ra}, so does the functor $\catsk n {(-)}$. \subsection{The funny tensor product} \label{ssec:funny-tensor} We now define the funny tensor product, that we will use to give an enriched definition of precategories. It can be thought of as a variant of the cartesian product of categories where we restrict to morphisms where one of the components is the identity (or, more precisely, to formal composites of such morphisms). We give a rather direct and concise definition, and we refer the reader to the work of Weber~\cite{weber2009free} for a more abstract definition. Given $n \ge 0$ and two $n$\precategories~$C$ and~$D$, the \emph{funny tensor product of~$C$ and~$D$} is the pushout \[ \begin{tikzcd}[sep=large \catsk 0 C\times \catsk 0 D \ar[d,"\cucatsk C \times \catsk 0 D"'] \ar[r,"{\catsk 0 C \times \cucatsk{D}}"] & \ar[d,dotted,"\rincfun{C,D}"]\catsk 0 C\times D\\ C\times \catsk 0 D\ar[r,dotted,"\lincfun{C,D}"']&C\funny D \end{tikzcd} \pbox. \] Since $\cucatsk{(-)}$ is a natural transformation, the funny tensor product can be extended as a functor \[ (-)\funny(-)\co \nPCat n \times \nPCat n \to \nPCat n\pbox. \] \input{funny-mod} \subsection{Prepolygraphs} \label{ssec:pol} In this section, we introduce the notion of \emph{prepolygraph} which generalizes in arbitrary dimension the notion of rewriting system. This definition is an adaptation to precategories of the notion of polygraph introduced by Burroni for strict categories~\cite{burroni1993higher}. 
Polygraphs were also generalized by Batanin to algebras of any finitary monad on globular sets~\cite{batanin1998computads}, and prepolygraphs are a particular instance of this construction, for which we rather give an explicit description here. For $n \ge 0$, writing $\fgf n$ for the canonical forgetful functor $\nPCat n \to \nGlob n$, we define the category $\nPCat n^+$ as the pullback \[ \begin{tikzcd}[sep=3ex] \nPCat n^+ \ar[d,"{\fgfv n}"',dotted] \ar[r,"{\fgf n^+}",dotted]& \nGlob{n+1}\ar[d,"\restrictcat {(-)} n"]\\ \nPCat n\ar[r,"\fgf n"']&\nGlob n \end{tikzcd} \] and write $\fgf n^+\co \nPCat n^+ \to \nGlob{n+1}$ for the top arrow of the pullback and $\fgfv n\co \nPCat n^+ \to \nPCat n$ for the left arrow. An object $(C,C_{n+1})$ of $\nPCat n^+$ consists of an $n$-precategory~$C$ equipped with a set~$C_{n+1}$ of $(n{+}1)$-cells and two maps $\gsrc_n,\gtgt_n\co C_{n+1}\to C_n$ (note however that there is no notion of composition for $(n{+}1)$-cells). There is a functor~$\fgfw n\co \nPCat {n+1} \to \nPCat n^+$ defined as the universal arrow \[ \begin{tikzcd}[sep=3ex] \nPCat {n+1} \ar[rrd,bend left,"\fgf{n+1}"] \ar[ddr,bend right,"\truncf{n+1}{n}"'] \ar[rd,dotted,"\fgfw n"'] \\ & \nPCat n^+ \ar[d,"\fgfv n"'] \ar[r,"\fgf n^+"]&\nGlob{n+1}\ar[d,"\restrictcat{(-)} n"]\\ & \nPCat n\ar[r,"\fgf n"']&\nGlob n \end{tikzcd} \] and, since categories and functors in the above diagram are induced by finite limit sketches and morphisms of finite limit sketches, they are all right adjoints (see~\cite[Thm.~4.1]{barr2005topos} for instance), so that $\fgfw n$ admits a left adjoint $\freefl n\co \nPCat n^+ \to \nPCat {n+1}$. We define the category $\nPol n$ of $n$\prepolygraphs together with a functor $\freef n\co \nPol n \to \nPCat n$ by induction on~$n$. We define $\nPol 0 = \Set$ and take $\freef 0$ to be the identity functor. Now suppose that $\nPol n$ and $\freef n$ are defined for $n \ge 0$. We define $\nPol {n+1}$ as the pullback \[ \begin{tikzcd}[row sep=3ex] \nPol{n+1}\ar[d,"{\restrictcat{(-)} n}"',dotted]\ar[r,"{\freef n^+}",dotted] &\nPCat n^+\ar[d,"\fgfv n"]\\ \nPol{n}\ar[r,"\freef n"']&\nPCat n \end{tikzcd} \] and write $\freef n^+\co \nPol{n+1} \to \nPCat n^+$ for the top arrow and $\restrictcat{(-)} n$ for the left arrow of the diagram. Finally, we define $\freef {n+1}$ as $\freefl n \circ \freef n^+$.
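For instance, in low dimensions, a $1$\prepolygraph amounts to a graph, given by a set~$\P_0$ of $0$-generators together with a set~$\P_1$ of $1$-generators equipped with source and target maps into~$\P_0$, and $\freef 1(\P)$ is the free $1$\precategory on it, that is, the free category whose $1$-cells are the paths of this graph.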
More explicitly, an $(n{+}1)$-prepolygraph~$\P$ consists of a diagram of sets \[ \begin{tikzcd}[column sep=10ex,labels={inner sep=0.5pt}] \P_0\ar[d,"\polinj0"{inner sep=2pt}] &\P_1 \ar[dl,shift right,"\gsrc_0"',pos=0.3] \ar[dl,shift left,"\gtgt_0",pos=0.3]\ar[d,"\polinj1"{inner sep=2pt}] &\P_2\ar[dl,shift right,"\gsrc_1"',pos=0.3]\ar[dl,shift left,"\gtgt_1",pos=0.3]\ar[d,"\polinj2"{inner sep=2pt}] &\ldots &\P_{n} \ar[dl,shift right,"\gsrc_{n-1}"',pos=0.3] \ar[dl,shift left,"\gtgt_{n-1}",pos=0.3]\ar[d,"\polinj{n}"{inner sep=2pt}] &\P_{n+1}\ar[dl,shift right,"\gsrc_{n}"',pos=0.3]\ar[dl,shift left,"\gtgt_{n}",pos=0.3]\\ \freecat{\P_0} & \freecat{\P_1} \ar[l,shift right,"{\csrc_0}"'] \ar[l,shift left,"{\ctgt_0}"] &\ldots \ar[l,shift right,"{\csrc_1}"']\ar[l,shift left,"{\ctgt_1}"] &\freecat{\P_{n-1}} &\ar[l,shift right,"{\csrc_{n-1}}"']\ar[l,shift left,"{\ctgt_{n-1}}"]\freecat{\P_{n}} \end{tikzcd} \] such that ${\csrc_i}\circ\gsrc_{i+1}={\csrc_i}\circ\gtgt_{i+1}$ and $\ctgt_i\circ\gsrc_{i+1}=\ctgt_i\circ\gtgt_{i+1}$, together with a structure of $n$\precategory on the globular set on the bottom row: $\P_i$ is the set of \emph{$i$-generators}, $\gsrc_i,\gtgt_i\co \P_{i+1}\to\freecat{\P_i}$ respectively associate to each $(i{+}1)$-generator its \emph{source} and \emph{target}, and $\freecat{\P_i}$ is the set of \emph{$i$-cells}, \ie formal compositions of $i$-generators. By definition, an $(n{+}1)$\prepolygraph~$\P$ has an underlying $n$\prepolygraph~$\restrictcat\P n$. More generally, for $m \ge n$, an $m$\prepolygraph~$\P$ has an underlying $n$\prepolygraph obtained by applying successively the forgetful functors $\restrictcat{(-)} i$ for $m > i \ge n$. \begin{example} \label{ex:pseudo-monoid-3-pol} We define the \emph{$3$\prepolygraph~$\P$ for pseudomonoids} as follows. We put \begin{align*} \P_0&= \set{x} & \P_1&= \set{a\co x \to x} & \P_2&= \set{\mu\co \bar 2 \To \bar 1, \eta\co \bar 0 \To \bar 1} \end{align*} where, given $n\in\N$, we write $\bar n$ for the composite $a \comp_0 \cdots \comp_0 a$ of $n$ copies of~$a$, and we define $\P_3$ as the set with the following three elements \[ \setlength\arraycolsep{0pt} \begin{array}{r@{\ }c@{\ }r@{\ }c@{\ }l} \monA&\co &(\mu \comp_0 \bar 1) \comp_1 \mu &\TO& (\bar 1 \comp_0 \mu) \comp_1 \mu \\ \monL&\co &(\eta \comp_0 \bar 1) \comp_1 \mu &\TO& \unit{\bar 1} \\ \monR&\co &(\bar 1 \comp_0 \eta) \comp_1 \mu &\TO& \unit{\bar 1} \end{array} \ \zbox. \] Note that we make use of the arrows $\to$, $\To$ and $\TO$ to indicate the source and target of each $i$\generator for $i\in \set{1,2,3}$: $a$ is a $1$\generator such that $\gsrc_0(a) = \gtgt_0(a) = x$, $\mu$ is a $2$\generator such that $\gsrc_1(\mu) = a \comp_0 a$ and $\gtgt_1(\mu) = a$, and so on. In the following, we will keep using this notation to describe the generators of other prepolygraphs. \end{example} \subsection{Presentations} \label{ssec:presentations} \input{pres} \subsection{Freely generated cells} \label{ssec:freely-generated-cells} Given $(C,C_{n+1})\in \nPCat n^+$, we give an explicit description of the free $(n{+}1)$\precategory~$\freefl n(C)$ it generates, similar to the one given in~\cite{metayer2008cofibrant} in the case of polygraphs. This $(n{+}1)$-precategory has~$C$ as underlying $n$-precategory so that we focus on the description of the $(n{+}1)$-cells, which can be described as equivalence classes of terms, called here \emph{expressions}, corresponding to formal composites of cells.
These expressions are defined inductively as follows: \begin{itemize} \item for every element $u\in C_{n+1}$, there is an expression, still written $u$, \item for every $n$-cell $u\in C_n$, there is an expression $\unit u$, \item for every $0\leq i<n$, for every $u\in C_{i+1}$ and every expression $v$, there is an expression $u\comp_i v$, \item for every $0\leq i<n$, for every expression $u$ and every $v\in C_{i+1}$, there is an expression $u\comp_i v$, \item for every pair of expressions $u$ and $v$, there is an expression $u\comp_n v$. \end{itemize} We then define \emph{well-typed expressions} through typing rules in a sequent calculus. We consider judgments of the form \begin{itemize} \item $\vdash t\co u\to v$, where $t$ is an expression and $u,v\in C_n$, with the intended meaning that the expression $t$ has $u$ as source and $v$ as target, \item $\vdash t=t'\co u\to v$, where $t$ and $t'$ are expressions and $u,v\in C_n$, with the intended meaning that $t$ and $t'$ are equal expressions from~$u$ to~$v$. \end{itemize} The associated typing rules are \begin{itemize} \item for every $t\in C_{n+1}$ with $\gsrc_n(t)=u$ and $\gtgt_n(t)=v$, \[ \inferrule{ }{\vdash t\co u\to v} \] \item for every $u\in C_n$, \[ \inferrule{ }{\vdash\unit u\co u\to u} \] \item for every $0\leq i<n$, every $u\in C_{i+1}$ with $\ctgt_i(u)=\csrc_i(v)$, \[ \inferrule{ \vdash t\co v\to v' }{ \vdash u\comp_i t\co (u\comp_iv)\to(u\comp_iv') } \] \item for every $0\leq i<n$, every $v\in C_{i+1}$ with $\ctgt_i(u)=\csrc_i(v)$, \[ \inferrule{ \vdash t\co u\to u' }{ \vdash t\comp_i v\co (u\comp_iv)\to(u'\comp_iv) } \] \item and \[ \inferrule{ \vdash t\co u\to v \\ \vdash t'\co v\to w }{ \vdash t\comp_n t'\co u\to w } \] \end{itemize} The equality rules, which express the expected properties of the equality relation, are introduced below.
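For readers who prefer a computational reading, the expressions above and their expected sources and targets can be encoded as a small inductive datatype. The following OCaml sketch is purely illustrative and is not part of the formal development: the type \texttt{cell} and the oracles \texttt{src\_gen}, \texttt{tgt\_gen} and \texttt{comp} passed as arguments are hypothetical placeholders for the data of $(C,C_{n+1})$.
\begin{verbatim}
(* Hypothetical encoding of the cells of C: an i-cell is an abstract
   identifier together with its dimension. *)
type cell = { dim : int; name : string }

(* Expressions over (C, C_{n+1}), mirroring the inductive definition. *)
type expr =
  | Gen   of cell               (* an element of C_{n+1}              *)
  | Id    of cell               (* identity 1_u on an n-cell u        *)
  | LComp of int * cell * expr  (* u *_i t  with u in C_{i+1}, i < n  *)
  | RComp of int * expr * cell  (* t *_i v  with v in C_{i+1}, i < n  *)
  | Comp  of expr * expr        (* t *_n t'                           *)

(* Source and target of an expression, assuming oracles
   src_gen, tgt_gen : cell -> cell (n-source/target of a generator)
   and comp : int -> cell -> cell -> cell (i-composition of cells). *)
let rec src ~src_gen ~comp = function
  | Gen g           -> src_gen g
  | Id u            -> u
  | LComp (i, u, t) -> comp i u (src ~src_gen ~comp t)
  | RComp (i, t, v) -> comp i (src ~src_gen ~comp t) v
  | Comp (t, _)     -> src ~src_gen ~comp t

let rec tgt ~tgt_gen ~comp = function
  | Gen g           -> tgt_gen g
  | Id u            -> u
  | LComp (i, u, t) -> comp i u (tgt ~tgt_gen ~comp t)
  | RComp (i, t, v) -> comp i (tgt ~tgt_gen ~comp t) v
  | Comp (_, t')    -> tgt ~tgt_gen ~comp t'
\end{verbatim}
Note that this sketch only computes the expected source and target of an expression; well-typedness additionally requires the composability side conditions of the typing rules above, which are not checked here.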
The first rules enforce that equality is an equivalence relation: \begin{equation} \label{eq:expr-equiv} \inferrule{\vdash t\co u\to v}{\vdash t=t\co u\to v} \qquad \inferrule{\vdash t=t'\co u\to v}{\vdash t'=t\co u\to v} \qquad \inferrule{\vdash t=t'\co u\to v\\\vdash t'=t''\co u\to v}{\vdash t=t''\co u\to v} \end{equation} The next ones express that identities are neutral elements for composition: \begin{gather*} \inferrule{\vdash t\co u\to v}{\vdash\unit u\comp_n t=t\co u\to v} \qquad \inferrule{\vdash t\co u\to v}{\vdash t\comp_n\unit v=t\co u\to v} \\ \inferrule{ \vdash t \co u \to u' \\ i < n } { \vdash \unitp{i+1}{\csrc_i(u)} \comp_i t = t \co u \to u' } \qquad \inferrule{ \vdash t \co u \to u' \\ i < n } { \vdash t \comp_i \unitp{i+1}{\ctgt_i(u)} = t \co u \to u' } \end{gather*} The next ones express that composition is associative: {\displayskipforlongtable\begin{longtable}{c} \inferrule{ \vdash t_1\co u_0\to u_1 \\ \vdash t_2\co u_1\to u_2 \\ \vdash t_3\co u_2\to u_3 }{ \vdash (t_1\comp_n t_2)\comp_n t_3=t_1\comp_n(t_2\comp_n t_3)\co u_0\to u_3 } \crjot \inferrule{ \vdash t \co v \to v' \\ u_1,u_2 \in C_{i+1} \\ \ctgt_i(u_1) = \csrc_i(u_2) \\ \ctgt_i(u_2) = \csrc_i(v) } {\vdash u_1 \comp_i (u_2 \comp_i t) = (u_1 \comp_i u_2) \comp_i t \co u_1 \comp_i u_2 \comp_i v \to u_1 \comp_i u_2 \comp_i v'} \crjot \inferrule{ \vdash t \co u \to u' \\ v_1,v_2 \in C_{i+1} \\ \ctgt_i(u) = \csrc_i(v_1) \\ \ctgt_i(v_1) = \csrc_i(v_2) } {\vdash (t \comp_i v_1) \comp_i v_2 = t \comp_i (v_1 \comp_i v_2) \co u \comp_i v_1 \comp_i v_2 \to u' \comp_i v_1 \comp_i v_2} \crjot \inferrule{ \vdash t \co v \to v' \\ i < n \\ u \in C_{i+1} \\ w \in C_{i+1} \\ \ctgt_i(u) = \csrc_i(v) \\ \ctgt_i(v') = \csrc_i(w) } { \vdash (u \comp_i t) \comp_i w = u \comp_i (t \comp_i w) \co u \comp_i v \comp_i w \to u \comp_i v' \comp_i w } \end{longtable} }\noindent The next ones express that $(n{+}1)$-identities are compatible with low-dimensional compositions: \begin{gather*} \inferrule{ i < n \\ u \in C_{i+1} \\ v \in C_{n} \\ \ctgt_i(u) = \csrc_i(v) } { \vdash u \comp_i \unit v = \unit {u \comp_i v}\co u \comp_i v \to u \comp_i v } \\ \inferrule{ u \in C_{n} \\ i < n \\ v \in C_{i+1} \\ \ctgt_i(u) = \csrc_i(v) } { \vdash \unit u \comp_i v = \unit {u \comp_i v}\co u \comp_i v \to u \comp_i v } \end{gather*} The next ones express that $n$-compositions are compatible with low-dimensional compositions: \begin{gather*} \inferrule{ \vdash t_1 \co v_1 \to v_2 \\ \vdash t_2 \co v_2 \to v_3 \\ u \in C_{i+1} \\ \ctgt_i(u) = \csrc_i(v_1) } { \vdash u \comp_i (t_1 \comp_n t_2) = (u \comp_i t_1) \comp_n (u \comp_i t_2) \co u \comp_i v_1 \to u \comp_i v_3 } \\ \inferrule{ \vdash t_1 \co u_1 \to u_2 \\ \vdash t_2 \co u_2 \to u_3 \\ v \in C_{i+1} \\ \ctgt_i(u_1) = \csrc_i(v) } { \vdash (t_1 \comp_n t_2) \comp_i v = (t_1 \comp_i v) \comp_n (t_2 \comp_i v) \co u_1 \comp_i v \to u_3 \comp_i v } \end{gather*} The next ones express the distributivity properties between the different low-dimensional compositions: \begin{gather*} \inferrule{ \vdash t \co w \to w' \\ i < j < n \\ u \in C_{i+1} \\ \ctgt_i(u) = \csrc_i(w) \\ v \in C_{j+1} \\ \ctgt_j(v) = \csrc_j(w) } { \vdash u \comp_i (v \comp_j t) = (u \comp_i v) \comp_j (u \comp_i t) \co u \comp_i (v \comp_j w) \to u \comp_i (v \comp_j w') } \\ \inferrule{ \vdash t \co v \to v' \\ i < j < n \\ u \in C_{i+1} \\ \ctgt_i(u) = \csrc_i(v) \\ w \in C_{j+1} \\ \ctgt_j(v) = \csrc_j(w) } { \vdash u \comp_i (t \comp_j w) = (u \comp_i t) \comp_j (u \comp_i w) \co u \comp_i (v \comp_j w) \to u \comp_i (v'
\comp_j w) } \\ \inferrule{ \vdash t \co v \to v' \\ i < j < n \\ u \in C_{j+1} \\ \ctgt_j(u) = \csrc_j(v) \\ w \in C_{i+1} \\ \ctgt_i(v) = \csrc_i(w) } { \vdash (u \comp_j t) \comp_i w = (u \comp_i w) \comp_j (t \comp_i w) \co (u \comp_j v) \comp_i w \to (u \comp_j v') \comp_i w } \\ \inferrule{ \vdash t \co u \to u' \\ i < j < n \\ v \in C_{j+1} \\ \ctgt_j(u) = \csrc_j(v) \\ w \in C_{i+1} \\ \ctgt_i(v) = \csrc_i(w) } { \vdash (t \comp_j v) \comp_i w = (t \comp_i w) \comp_j (v \comp_i w) \co (u \comp_j v) \comp_i w \to (u' \comp_j v) \comp_i w } \end{gather*} Finally, the last ones express that equality is contextual: {\displayskipforlongtable\begin{longtable}{c} \inferrule{ \vdash t = t' \co v \to v' \\ u \in C_{i+1} \\ \ctgt_i(u) = \csrc_i(v) } { \vdash u \comp_i t = u \comp_i t'\co u \comp_i v \to u \comp_i v' } \crjot \inferrule{ \vdash t = t' \co u \to u' \\ v \in C_{i+1} \\ \ctgt_i(u) = \csrc_i(v) } { \vdash t \comp_i v = t' \comp_i v\co u \comp_i v \to u' \comp_i v } \crjot \inferrule{ \vdash t_1 = t_1' \co u_1 \to u_2 \\ \vdash t_2 \co u_2 \to u_3 } { \vdash t_1 \comp_n t_2 = t_1' \comp_n t_2 \co u_1 \to u_3 } \crjot \inferrule{ \vdash t_1 \co u_1 \to u_2 \\ \vdash t_2 = t_2' \co u_2 \to u_3 } { \vdash t_1 \comp_n t_2 = t_1 \comp_n t_2' \co u_1 \to u_3 } \end{longtable} }\noindent The following lemmas show that typing is unique and well-behaved regarding equality. They are easily shown by induction on the derivations: \begin{lem}[Uniqueness of typing] \label{lem:typing-unique} Given an expression $t$ such that the judgments~${\vdash t\co u\to v}$ and ${\vdash t\co u'\to v'}$ are derivable, we have $u=u'$ and $v=v'$. \end{lem} \begin{lem} \label{lem:eq-implies-typing} If $\vdash t=t'\co u\to v$ is derivable then $\vdash t\co u\to v$ and $\vdash t'\co u\to v$ are derivable. \end{lem} \noindent An expression $t$ is \emph{well-typed} if there are $u,v\in C_n$ such that $\vdash t\co u\to v$ is derivable using the above rules. In this case, by \Lemr{typing-unique}, the types $u$ and $v$ are uniquely determined by~$t$, and we write $\csrc_n(t)=u$ and $\ctgt_n(t)=v$. We define $C_{n+1}^*$ to be the set of equivalence classes under~$=$ of well-typed expressions. By \Lemr{eq-implies-typing}, the operations $\csrc_n$ and $\ctgt_n$ are compatible with the relation~$=$. We finally define $\freefl n(C)$ as the $(n{+}1)$-precategory with~$C$ as underlying $n$-precategory, $C_{n+1}^*$ as set of $(n{+}1)$-cells, with sources and targets given by the maps $\csrc_n$ and $\ctgt_n$. The compositions and identities on the $(n{+}1)$-cells are induced in the expected way by the corresponding syntactic constructions (this is well-defined by the axioms of~$=$). It is routine to verify that: \begin{theo} The above construction defines a functor~$\freefl n$ which is left adjoint to~$\fgfw n$. \end{theo} \subsection{Normal form for cells} \label{ssec:cell-nf} Suppose given $(C,C_{n+1})\in\nPCat n^+$. The set $C_{n+1}^*$ of cells of $\freefl n(C)$ was described in the previous section as a quotient of expressions modulo a congruence $=$. In order to conveniently work with these equivalence classes, we introduce here a notion of normal form for them. From now on, we adopt the convention that missing parentheses in expressions are implicitly bracketed on the right, \ie we write $u_1\comp_n u_2\comp_n\cdots\comp_n u_k$ instead of $u_1\comp_n(u_2\comp_n(\cdots\comp_n u_k))$.
By removing the relations~\eqref{eq:expr-equiv} in the definition of the congruence $=$ and orienting from left to right the remaining equations, we obtain a relation $\tred$ which can be interpreted as a rewriting relation on expressions: \begin{align*} \unit u \comp_n t &\tred t & t \comp_n \unit u &\tred t \\ (t_1 \comp_n t_2) \comp_n t_3 & \tred t_1 \comp_n (t_2 \comp_n t_3) & (u_1 \comp_i t) \comp_i u_2 &\tred u_1 \comp_i (t \comp_i u_2) \\ &\cdots & &\cdots \end{align*} We now study the properties of $\tred$. We recall that such a relation is said to be \emph{terminating} when there is no infinite sequence $(t_i)_{i \ge 0}$ such that $t_i \tred t_{i+1}$ for $i \ge 0$. A \emph{normal form} is an expression $t$ such that there exists no $t'$ with $t \tred t'$. Writing $\tred^*$ for the reflexive transitive closure of $\tred$, the relation $\tred$ is said to be \emph{locally confluent} when, for all expressions $t$, $t_1$ and $t_2$ such that $t \tred t_1$ and $t \tred t_2$, we have $t_1 \tred^* t'$ and $t_2 \tred^* t'$ for some expression $t'$ (diagram on the left), and \emph{confluent} when, for all expressions $t$, $t_1$ and $t_2$ such that $t \tred^* t_1$ and $t \tred^* t_2$, we have $t_1 \tred^* t'$ and $t_2 \tred^* t'$ for some expression $t'$ (diagram on the right): \[ \begin{tikzcd}[sep=small] &\ar[dl,Rightarrow]t\ar[dr,Rightarrow]&\\ t_1\ar[dr,dotted,Rightarrow,"*"']&&\ar[dl,dotted,Rightarrow,"*"]t_2\\ &t'& \end{tikzcd} \qquad\qquad\qquad\qquad \begin{tikzcd}[sep=small] &\ar[dl,Rightarrow,"*"']t\ar[dr,Rightarrow,"*"]&\\ t_1\ar[dr,dotted,Rightarrow,"*"']&&\ar[dl,dotted,Rightarrow,"*"]t_2\\ &t'& \end{tikzcd} \] Those notions are introduced in more detail in~\cite{baader1999term}. \begin{lem} \label{lem:precat-terminating} The relation $\tred$ is terminating. \end{lem} \begin{proof} In order to show termination, we define a measure on expressions that is strictly decreased by each rewriting step. To do so, we first define counting functions $c_n$ and $l_i,r_i$ for $0 \le i < n$ from expressions to $\N$ that take into account the three kinds of operations in the expression: top $n$-dimensional compositions, and lower $i$-dimensional left and right compositions. These functions count the number of potential reductions in an expression $t$ involving the associated operations. Since a reduction involving a composition operation can change the value of the counting functions associated with composition operations of lower dimension, we use a lexicographical ordering of the counting functions to obtain the desired measure.
Given an expression $t$, we define $c_n(t) \in \N$ and $l_{i}(t),r_{i}(t) \in \N$ for $0 \le i < n$ by induction on $t$ as follows: \begin{itemize} \item if $g \in C_{n+1}$, we put $c_n(g) = l_{i}(g) = r_{i}(g) = 0$ for $0 \le i < n$, \item if $u \in C_n$, we put $c_n(\unit u) = l_{i}(\unit u) = r_{i}(\unit u) = 1$ for $0 \le i < n$, \item if $t = t_1 \comp_n t_2$, we put \begin{align*} c_n(t) &= 2c_n(t_1) + c_n(t_2) + 1, \\ l_{i}(t) &= l_{i}(t_1) + l_{i}(t_2) + 2, \\ r_{i}(t) &= r_{i}(t_1) + r_{i}(t_2) + 2, \end{align*} \item if $t = u \comp_j t'$, we put $c_n(t) = c_n(t')$ and \begin{align*} l_{i}(t)&= \begin{cases} l_{i}(t') & \text{if $j < i$,} \\ 2l_{i}(t') + 1 & \text{if $j = i$,} \\ l_{i}(t') + 1 & \text{if $j > i$,} \end{cases} & r_{i}(t)&= \begin{cases} r_{i}(t') & \text{if $j < i$,} \\ r_{i}(t') + 1 & \text{if $j \ge i$,} \end{cases} \end{align*} \item if $t = t' \comp_j v$, we put $c_{n}(t) = c_n(t')$ and \begin{align*} l_{i}(t)&= \begin{cases} l_{i}(t') & \text{if $j \le i$,} \\ l_{i}(t') + 1 & \text{if $j > i$,} \end{cases} & r_{i}(t)&= \begin{cases} r_{i}(t') & \text{if $j < i$,} \\ 2r_{i}(t') + 1 & \text{if $j = i$,} \\ r_{i}(t') + 1 & \text{if $j > i$.} \end{cases} \end{align*} \end{itemize} For each expression $t$, we define \[ N(t) = (c_n(t),l_{n-1}(t),r_{n-1}(t),\ldots,l_{0}(t),r_{0}(t)) \in \N^{2n+1} \] and consider the lexicographical ordering~$\ltlex$ on~$\N^{2n+1}$. For the inductive rules of $\tred$, we observe that \begin{itemize} \item if $t = t_1 \comp_n t_2$ and $t' = t'_1 \comp_n t_2$ with $N(t'_1) \ltlex N(t_1)$, then $N(t') \ltlex N(t)$, \item if $t = t_1 \comp_n t_2$ and $t' = t_1 \comp_n t'_2$ with $N(t'_2) \ltlex N(t_2)$, then $N(t') \ltlex N(t)$, \item if $t = u \comp_i \tilde t$ and $t' = u \comp_i \tilde t'$ with $N(\tilde t') \ltlex N(\tilde t)$, then $N(t') \ltlex N(t)$, \item if $t = \tilde t \comp_i v$ and $t' = \tilde t' \comp_i v$ with $N(\tilde t') \ltlex N(\tilde t)$, then $N(t') \ltlex N(t)$. \end{itemize} It is thus sufficient to prove that each of the remaining reduction rules, applied at the top level, strictly decreases the measure $N(-)$. We only cover the most representative cases, by computing the first component of $N(-)$ that is modified by the reduction rule and showing that it strictly decreases: \begin{align*} c_n(\unit u \comp_n t) &= c_n(t) + 3 > c_n(t), \\ c_n((t_1 \comp_n t_2) \comp_n t_3) &= 4c_n(t_1) + 2c_n(t_2) + c_n(t_3) + 3 \\ & > 2c_n(t_1) + 2c_n(t_2) + c_n(t_3) + 2 = c_n(t_1 \comp_n (t_2 \comp_n t_3)), \\ l_{i}(u_1 \comp_i (u_2 \comp_i t)) &= 4l_{i}(t) + 3 > 2l_{i}(t) + 1 = l_{i}((u_1 \comp_i u_2) \comp_i t), \\ r_{i}((u_1 \comp_i t) \comp_i u_2) &= 2r_{i}(t) + 3 > 2r_{i}(t) + 2 = r_{i}(u_1 \comp_i (t \comp_i u_2)), \\ l_{i}(u \comp_i (t_1 \comp_n t_2)) &= 2l_{i}(t_1) + 2l_{i}(t_2) + 5 \\ &> 2l_{i}(t_1) + 2l_{i}(t_2) + 4 = l_{i}((u \comp_i t_1) \comp_n (u \comp_i t_2)), \\ l_{i}(u \comp_i (v \comp_j t)) &= 2l_{i}(t) + 3 > 2l_{i}(t) + 2 = l_{i}((u \comp_i v) \comp_j (u \comp_i t)) \text{ for $j > i$.} \end{align*} Thus, if $t \tred t'$, we have $N(t') \ltlex N(t)$. Since the lexicographical order~$\ltlex$ on~$\N^{2n+1}$ is well-founded, the reduction relation $\tred$ is terminating. \end{proof} \begin{lem} \label{lem:precat-locally-confluent} The relation $\tred$ is locally confluent. \end{lem} \begin{proof} By a direct adaptation of the critical pair lemma (for example \cite[Thm.~6.2.4]{baader1999term}), it is enough to show that all critical branchings are confluent, which can be checked by direct computation.
For example, given $t_1$, $t_2$, $t_3$ and $t_4$ suitably typed, there is a critical branching given by the reductions \[ (t_1 \comp_n (t_2 \comp_n t_3)) \comp_n t_4 \tlred ((t_1 \comp_n t_2) \comp_n t_3) \comp_n t_4 \tred (t_1 \comp_n t_2) \comp_n (t_3 \comp_n t_4). \] This branching is confluent since \[ (t_1 \comp_n (t_2 \comp_n t_3)) \comp_n t_4 \tred t_1 \comp_n ((t_2 \comp_n t_3) \comp_n t_4) \tred t_1 \comp_n (t_2 \comp_n (t_3 \comp_n t_4)) \] and \[ (t_1 \comp_n t_2) \comp_n (t_3 \comp_n t_4) \tred t_1 \comp_n (t_2 \comp_n (t_3 \comp_n t_4)). \] Another critical branching is given by the reductions \[ (u_1 \comp_i u_2) \comp_i (t_1 \comp_n t_2) \tlred u_1 \comp_i (u_2 \comp_i (t_1 \comp_n t_2)) \tred u_1 \comp_i ((u_2 \comp_i t_1) \comp_n (u_2 \comp_i t_2)) \] for $u_1,u_2 \in C_{i+1}$ with $i < n$ and $t_1,t_2$ suitably typed. This branching is confluent since \[ (u_1 \comp_i u_2) \comp_i (t_1 \comp_n t_2) \tred ((u_1 \comp_i u_2) \comp_i t_1) \comp_n ((u_1 \comp_i u_2) \comp_i t_2) \] and \begin{align*} u_1 \comp_i ((u_2 \comp_i t_1) \comp_n (u_2 \comp_i t_2)) &\tred (u_1 \comp_i(u_2 \comp_i t_1)) \comp_n (u_1 \comp_i (u_2 \comp_i t_2)) \\ &\tred ((u_1 \comp_i u_2) \comp_i t_1) \comp_n ((u_1 \comp_i u_2) \comp_i t_2). \end{align*} The other cases are similar. \end{proof} \begin{theo} \label{thm:precat-nf} Any cell $u\in C_{n+1}^*$ admits a unique representative expression of the form \[ u=u_1\comp_nu_2\comp_n\cdots\comp_n u_k \] where each $u_i$ decomposes as \begin{equation} \label{eq:precat-nf} u_i=v^i_{n}\comp_{n-1}(\cdots\comp_2(v^i_2\comp_1(v^i_1\comp_0 A^i\comp_0 w^i_1)\comp_1 w^i_2)\comp_2\cdots)\comp_{n-1} w^i_{n} \end{equation} where $A^i$ is an element of~$C_{n+1}$ and $v^i_j$ and $w^i_j$ are $j$-cells in $C_j$. \end{theo} \begin{proof} We have seen in \Lemr{precat-terminating} and \Lemr{precat-locally-confluent} that the relation $\tred$ is terminating and locally confluent. By Newman's lemma (see, for example, \cite[Lem.~2.7.2]{baader1999term}), it is thus confluent and every equivalence class of expressions contains a unique normal form, which can be obtained by reducing any expression of the class. It can be checked that those normal forms are in bijective correspondence with the expressions of the form \eqref{eq:precat-nf} (essentially, those expressions are normal forms where identities have been suitably inserted). \end{proof} \noindent A cell of $\freecat C_{n+1}$ of the form~\eqref{eq:precat-nf} is called a \emph{whisker}. By the inductive definition of prepolygraphs from \Ssecr{pol} and \Thmr{precat-nf}, given an $m$\prepolygraph~$\P$ with $m > 0$, an $(i{+}1)$-cell $u \in \freecat\P_{i+1}$ with $i \in \set{0,\ldots,m-1}$ can be uniquely written as a composite of $(i{+}1)$\dimensional whiskers $u_1 \comp_i \cdots \comp_i u_k$ for a unique $k \in \N$ that is called the \emph{length} of~$u$ and denoted by~$\len{u}$. Moreover, each whisker $u_j$ admits a unique decomposition of the form~\eqref{eq:precat-nf}. In the following, we will make extensive use of this canonical form for cells of precategories freely generated by a prepolygraph, often without explicitly invoking \Thmr{precat-nf}. \begin{example} Recall the $3$\prepolygraph of pseudomonoids~$\P$ from \Exr{pseudo-monoid-3-pol}. \Thmr{precat-nf} allows for a canonical string diagram representation of the elements of~$\freecat\P_2$: first, we represent the $2$\generators $\mu$ and $\eta$ by $\satex{mu}$ and $\satex{eta}$ respectively.
Secondly, we represent the whiskers $\bar m \comp_0 \alpha \comp_0 \bar n$ for $m,n\in\N$ and $\alpha \in \P_2$ by adding $m$ wires on the left and $n$ wires on the right of the representation of $\alpha$. For example, $\bar 2 \comp_0 \mu \comp_0 \bar 3$ is represented by \[ \satex{whisk}\pbox. \] Finally, a $2$-cell of~$\freecat\P_2$, which decomposes as a composite of whiskers $w_1 \comp_1 \cdots \comp_1 w_k$, is represented by stacking the representations of the whiskers. For example, below are shown two $2$-cells with their associated graphical representation: \begin{align*} (\bar 0\comp_0\mu\comp_0\bar 2)\comp_1(\bar 1\comp_0\mu\comp_0\bar 0)\comp_1\mu&=\satex{mon-ex1} \\ (\bar 2\comp_0\mu\comp_0\bar 0)\comp_1(\bar 0\comp_0\mu\comp_0\bar 1)\comp_1\mu&=\satex{mon-ex2}\pbox. \end{align*} Note that, contrary to $2$-cells of strict $2$-categories, these two $2$-cells are not equal in $\freecat\P_2$. The above graphical representation can be used to unambiguously define the source and target of $3$-cells. Here, the $3$-generators $\monA$, $\monL$, and $\monR$ can be described graphically by \[ \setlength\myarrayintersep{\jot} \setlength\arraycolsep{0pt} \begin{array}{c@{\ }l@{\ }r@{\ }c@{\ }l} \monA&\co&\spacebelowsatex{mon-assoc-l} &\TO& \satex{mon-assoc-r} \\ \monL&\co &\spacebelowsatex{mon-unit-l} &\TO& \satex{mon-unit-c} \\ \monR&\co &\satex{mon-unit-r} &\TO& \satex{mon-unit-c}\pbox{\hspace*{1.5em}.} \end{array} \] \end{example} \subsection[\texorpdfstring{$(3,2)$}{(3,2)}-precategories]{$\bm{(3,2)}$-precategories} \label{ssec:(3,2)-precategory} In the following sections, we will mostly consider $3$\precategories that are generated by $3$\prepolygraphs (such as the one from \Exr{pseudo-monoid-3-pol}), whose $3$\generators should moreover be thought of as ``invertible operations'' (think of the $3$\generators $\monA,\monL,\monR$ of \Exr{pseudo-monoid-3-pol}). Thus, we will in fact be dealing with $3$\precategories whose $3$\cells are all invertible. Such $3$\precategories will usually be obtained by applying a localization construction to the $3$\precategory~$\freecat\P$ for some $3$\prepolygraph~$\P$, which is a direct adaptation of the one for categories and is described below. Given a $3$\precategory~$C$, a $3$-cell~$F\co \phi\TO \phi' \in C_3$ is \emph{invertible} when there exists $G\co \phi'\TO \phi$ such that $F \comp_2 G = \unit{\phi}$ and $G \comp_2 F = \unit{\phi'}$. In this case, $G$ is unique and we write it as $F^{-1}$. A \emph{$(3,2)$\precategory} is a $3$\precategory where every $3$-cell is invertible. The $(3,2)$\precategories form a full subcategory of~$\nPCat {3}$ denoted~$\nPCat {(3,2)}$. There is a forgetful functor \[ \fginvf\co \nPCat{(3,2)} \to \nPCat {3} \] which admits a left adjoint $\freeinvf {(-)}$, also called the \emph{localization functor}, described as follows. Given a $3$\precategory~$C$, for every $F\co \phi\TO\phi' \in C_3$, we write $F^+$ for a formal element of source $\phi$ and target $\phi'$, and $F^-$ for a formal element of source $\phi'$ and target $\phi$. A \emph{zigzag} of~$C$ is a list \begin{equation} \label{eq:zigzag-list} (F_1^{\eps_1},\ldots,F_k^{\eps_k})_{\phi,\phi'} \end{equation} for some $k \ge 0$, $F_1,\ldots,F_k \in C_{3}$ and $\eps_1,\ldots,\eps_k \in \set{-,+}$ such that $\phi = \csrc(F_1^{\eps_1})$, $\phi' = \ctgt(F_k^{\eps_k})$ and $\ctgt(F_i^{\eps_i}) = \csrc(F_{i+1}^{\eps_{i+1}})$ for $1 \le i < k$ (there is one empty list $()_{\phi,\phi}$ for each~$\phi \in C_2$, by convention).
The source and the target of a zigzag as in~\eqref{eq:zigzag-list} are $\phi$ and $\phi'$ respectively. Then, we define $\restrictcat{(\freeinvf{C})}{2}$ as $\restrictcat{C}{2}$ and $(\freeinvf{C})_{3}$ as the quotient of the zigzags by the following equalities: for every zigzag~$(F_1^{\eps_1},\ldots,F_k^{\eps_k})_{\phi,\phi'}$, \begin{itemize} \item if $F_i = \unit{\psi}$ for some $i \in \set{1,\ldots,k}$ and $\psi \in C_2$, then \[ (F_1^{\eps_1},\ldots,F_k^{\eps_k})_{\phi,\phi'} = (F_1^{\eps_1},\ldots,F_{i-1}^{\eps_{i-1}},F_{i+1}^{\eps_{i+1}},\ldots,F_k^{\eps_k})_{\phi,\phi'}, \] \item if $\eps_i = \eps_{i+1} = +$ for some $i \in \set{1,\ldots,k-1}$, then \[ (F_1^{\eps_1},\ldots,F_k^{\eps_k})_{\phi,\phi'} = (F_1^{\eps_1},\ldots,F_{i-1}^{\eps_{i-1}},(F_{i} \comp_2 F_{i+1})^{+},F_{i+2}^{\eps_{i+2}},\ldots,F_k^{\eps_k})_{\phi,\phi'}, \] \item if $\eps_i = \eps_{i+1} = -$ for some $i \in \set{1,\ldots,k-1}$, then \[ (F_1^{\eps_1},\ldots,F_k^{\eps_k})_{\phi,\phi'} = (F_1^{\eps_1},\ldots,F_{i-1}^{\eps_{i-1}},(F_{i+1} \comp_2 F_{i})^{-},F_{i+2}^{\eps_{i+2}},\ldots,F_k^{\eps_k})_{\phi,\phi'}, \] \item if $\set{\eps_i,\eps_{i+1}} = \set{-,+}$ and $F_i = F_{i+1}$ for some $i \in \set{1,\ldots,k-1}$, then \[ (F_1^{\eps_1},\ldots,F_k^{\eps_k})_{\phi,\phi'} = (F_1^{\eps_1},\ldots,F_{i-1}^{\eps_{i-1}},F_{i+2}^{\eps_{i+2}},\ldots,F_k^{\eps_k})_{\phi,\phi'}. \] \end{itemize} Since the definitions of source and target of zigzags are compatible with the above equalities, they induce source and target operations $\csrc,\ctgt\co (\freeinvf{C})_{3} \to C_2$. Given \[ F = (F_1^{\eps_1},\ldots,F_k^{\eps_k})_{\phi_1,\phi_2} \in (\freeinvf{C})_{3} \qtand G = (G_1^{\delta_1},\ldots,G_l^{\delta_l})_{\phi_2,\phi_3} \in (\freeinvf{C})_{3}, \] we define $F \comp_2 G$ as \[ F \comp_2 G = (F_1^{\eps_1},\ldots,F_k^{\eps_k},G_1^{\delta_1},\ldots,G_l^{\delta_l})_{\phi_1,\phi_3} \] and, given $i \in \set{0,1}$, $u \in C_{i+1}$ and $F = (F_1^{\eps_1},\ldots,F_k^{\eps_k})_{\phi,\phi'}$ with $\ctgt_i(u) = \csrc_i(\phi)$, we define $u \comp_i F$ as \[ u \comp_i F = ((u \comp_i F_1)^{\eps_1},\ldots,(u \comp_i F_k)^{\eps_k})_{u \comp_i \phi,u \comp_i \phi'} \] (the right whiskering $F \comp_i v$ for a suitably composable $v \in C_{i+1}$ is defined similarly) and, finally, given $\phi \in C_2$, we define $\unit \phi$ as $()_{\phi,\phi}$. All these operations are compatible with the quotient equalities above, and they equip $\freeinvf{C}$ with a structure of $3$\precategory. There is a canonical $3$\prefunctor $H\co C \to \freeinvf{C}$ sending $F\co\phi\TO\psi\in C_3$ to $(F^+)_{\phi,\psi}$. Moreover, given a $(3,2)$\precategory~$D$ and a $3$\prefunctor $G\co C \to D$, we can define $G'\co \freeinvf{C} \to D$ by putting $G'(u) = G(u)$ for $u \in C_i$ with $i \le 2$ and \[ G'((F_1^{\eps_1},\ldots,F_k^{\eps_k})_{\phi,\phi'}) = G'(F_1^{\eps_1}) \comp_2 \ldots \comp_2 G'(F_k^{\eps_k}) \] for a zigzag $(F_1^{\eps_1},\ldots,F_k^{\eps_k})_{\phi,\phi'}$ where \[ G'(F^\eps) = \begin{cases} G(F) & \text{if $\eps = +$} \\ G(F)^{-1} & \text{if $\eps = -$} \end{cases} \] for $F \in C_{3}$ and $\eps \in \set{-,+}$. The definition of~$G'$ is compatible with the quotient equalities above so that $G'$ is well-defined, and $G'$ can be shown to uniquely factorize $G$ through $H$. Hence, $\freeinvf{(-)}$ is indeed a left adjoint for $\fginvf$. In the following, given a $3$\precategory~$C$ and $F \in C_3$, we often write $F$ for $H(F)$.
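The quotient defining the $3$-cells of~$\freeinvf{C}$ can also be understood operationally: a zigzag is a list of signed $3$-cells, and the defining equalities allow removing identities and cancelling adjacent pairs $F^+F^-$ or $F^-F^+$. The following OCaml sketch, in which the type \texttt{cell3} and its fields are hypothetical placeholders, implements these two kinds of simplifications (merging consecutive cells with the same sign would additionally require the composition of~$C$ and is omitted):
\begin{verbatim}
(* Hypothetical representation of 3-cells of C and of zigzags. *)
type cell3 = { label : string; is_identity : bool }
type sign = Pos | Neg
type zigzag = (cell3 * sign) list  (* (F, Pos) stands for F^+, (F, Neg) for F^- *)

(* One simplification pass: drop identities and cancel F^+ F^- / F^- F^+. *)
let rec simplify (z : zigzag) : zigzag =
  match z with
  | [] -> []
  | (f, _) :: rest when f.is_identity -> simplify rest
  | (f, s) :: (g, s') :: rest when f.label = g.label && s <> s' -> simplify rest
  | x :: rest -> x :: simplify rest

(* Iterate simplification until a fixed point is reached. *)
let rec normalize (z : zigzag) : zigzag =
  let z' = simplify z in
  if z' = z then z else normalize z'
\end{verbatim}
This is only meant as an intuition for the quotient: the actual $3$-cells of~$\freeinvf{C}$ are the equivalence classes of zigzags described above.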
\subsection{Monoidal categories} A \emph{monoidal category}~$(\mcal{V},I,\otimes,\alpha,\lambda,\rho)$ is the data of a category~$\mcal{V}$, an object $I \in \mcal{V}$, a bifunctor $\otimes\co \mcal{V} \times \mcal{V} \to \mcal{V}$ and natural transformations \[ \alpha = (\alpha_{X,Y,Z}\co (X \otimes Y) \otimes Z \to X \otimes (Y \otimes Z))_{X,Y,Z \in \mcal{V}}, \] \[ \lambda= (\lambda_X\co I \otimes X \to X)_{X \in \mcal{V}} \] \[ \rho = (\rho_X\co X \otimes I \to X)_{X \in \mcal{V}} \] that are isomorphisms and such that \begin{equation} \label{eq:cat-mon-penta} \begin{tikzcd}[column sep=huge,row sep=small] ((W \otimes X) \otimes Y) \otimes Z \ar[r,"\alpha_{W\otimes X,Y,Z}"] \ar[dd,"\alpha_{W,X,Y} \otimes Z"'] & (W \otimes X) \otimes (Y \otimes Z) \ar[rd,"\alpha_{W,X,Y\otimes Z}"] \\ & & W \otimes (X \otimes (Y \otimes Z)) \\ (W \otimes (X \otimes Y)) \otimes Z \ar[r,"\alpha_{W,X\otimes Y,Z}"'] & W \otimes ((X \otimes Y) \otimes Z) \ar[ru,"W \otimes\alpha_{X,Y,Z}"'] \end{tikzcd} \end{equation} and \begin{equation} \label{eq:cat-mon-assoc-unit} \begin{tikzcd} (X \otimes I) \otimes Y \ar[rd,"\rho_X\otimes Y"'] \ar[r,"\alpha_{X,I,Y}"] & X \otimes (I \otimes Y) \ar[d,"X \otimes \lambda_Y"] \\ & X \otimes Y \end{tikzcd} \end{equation} are commutative diagrams. \subsection{Enriched categories} Given a monoidal category~$(\mcal{V},I,\otimes,\alpha,\lambda,\rho)$, a \emph{category enriched in $\mcal{V}$} is the data of a set~$C_0$, for all $x,y \in C_0$, an object $C(x,y) \in \mcal{V}$, and, for all $x \in C_0$, an identity operation \[ i_x\co I \to C(x,x) \in \mcal{V} \] and, for all $x,y,z \in C_0$, a composition operation \[ c_{x,y,z}\co C(x,y)\otimes C(y,z) \to C(x,z) \in \mcal{V} \] such that, for all $w,x,y,z \in C_0$, \begin{equation} \label{eq:enr-cat-assoc} \begin{tikzcd}[row sep=small,column sep=huge] (C(w,x) \otimes C(x,y)) \otimes C(y,z) \ar[r,"{c_{w,x,y} \otimes C(y,z)}"] \ar[dd,"{\alpha_{C(w,x),C(x,y),C(y,z)}}"'] & C(w,y) \otimes C(y,z) \ar[rd,"{c_{w,y,z}}"] \\ & & C(w,z) \\ C(w,x) \otimes (C(x,y) \otimes C(y,z)) \ar[r,"{C(w,x) \otimes c_{x,y,z}}"'] & C(w,x) \otimes C(x,z) \ar[ru,"{c_{w,x,z}}"'] \end{tikzcd} \end{equation} is commutative and, for all $x,y \in C_0$, \begin{equation} \label{eq:enr-cat-left} \begin{tikzcd}[column sep=huge] I \otimes C(x,y) \ar[rd,"\lambda_{C(x,y)}"'] \ar[r,"{i_x \otimes C(x,y)}"] & C(x,x) \otimes C(x,y) \ar[d,"{c_{x,x,y}}"] \\ & C(x,y) \end{tikzcd} \end{equation} and \begin{equation} \label{eq:enr-cat-right} \begin{tikzcd}[column sep=huge] C(x,y) \otimes I \ar[rd,"{\rho_{C(x,y)}}"'] \ar[r,"{C(x,y) \otimes i_y}"] & C(x,y) \otimes C(y,y) \ar[d,"{c_{x,y,y}}"] \\ & C(x,y) \end{tikzcd} \end{equation} are commutative diagrams. \subsection{Coherence in Gray categories} \label{ssec:coherence} The aim of this article is to provide tools to study the coherence of presented Gray categories, by which we mean the following. A $3$\precategory~$C$ is \emph{coherent} when, for every pair of parallel $3$\cells $F_1,F_2\co \phi\TO\psi\in C_3$, we have $F_1=F_2$. By extension, a Gray presentation~$\P$ is \emph{coherent} when the underlying $(3,2)$\precategory of the $(3,2)$\nbd-Gray category~$\freeinvf{\prespcat\P}$ is coherent (remember that $\prespcat\P$ is a lax Gray category by \Cref{thm:gray-pres-gray-cat}, which implies that $\freeinvf{\prespcat\P}$ is a $(3,2)$\nbd-Gray category by \Cref{prop:gray-induces-3-2-gray}).
Gray presentations~$\P$ with no other $4$\generators than the independence generators and the interchange naturality generators are usually not coherent. For example, in the Gray presentation~$\P$ of pseudomonoids given in \Exr{pseudo-monoid-gray-pres}, we do not expect the following parallel $3$\cells \begin{equation} \label{eq:parallel-not-equal} \begin{tikzcd}[row sep={2.5em,between origins}] & \satex{mon-cp1-r} \tar[r]& \satex{mon-cp1-r-2} \tar[rd] \\ \satex{mon-cp1} \tar[ru]\tar[rd] & & & \satex{mon-cp1-e} \\ & \satex{mon-cp1-l} \tar[r]& \satex{mon-cp1-l-2} \tar[ru] \end{tikzcd} \end{equation} to be equal in~$\freeinvf{\prespcat\P}$. For coherence, we need to add ``tiles'' in~$\P_4$ to fill the ``holes'' created by parallel $3$\cells such as the ones above. A trivial way to do this is to add a $4$\generator~$R\co F_1 \TOO F_2$ for every pair of parallel $3$\cells $F_1$ and $F_2$ of~$\freecat\P$. However, this method gives quite big presentations, whereas we aim at small ones, so that the number of axioms to verify in concrete instances is as small as possible. We present a better method in \Ssecr{critical-branchings}, in the form of \Cref{thm:squier}: we will see that it is enough to add a tile of the form \[ \begin{tikzcd}[sep=small,cramped] & \phi\tar[ld,"S_1"'] \tar[rd,"S_2"] \\ \phi_1 \tar[rd,"F_1"']&\sequiv& \phi_2\tar[ld,"F_2"] \\ & \psi \end{tikzcd} \] for every critical branching $(S_1,S_2)$ of~$\P$ for which we choose $3$\cells $F_1,F_2$ that make the branching $(S_1,S_2)$ joinable (definitions are introduced below). We now show how the coherence property can be obtained starting from a $3$\precategory whose $3$\cells satisfy a property of confluence, motivating the adaptation of rewriting theory to $3$\prepolygraphs in later sections in order to study the coherence of Gray presentations. In fact, we can already prove an analogue of the Church-Rosser property from rewriting theory in the context of confluent $3$\precategories. A $3$\precategory~$C$ is \emph{confluent} when, for $2$\cells~$\phi,\phi_1,\phi_2 \in C_2$ and $3$\cells \[ F_1\co \phi \TO \phi_1 \qtand F_2\co \phi \TO \phi_2 \] of~$C$, there exist a $2$\cell $\psi \in C_2$ and $3$\cells \[ G_1\co \phi_1 \TO \psi \in C_3 \qtand G_2\co \phi_2 \TO \psi \in C_3 \] of~$C$ such that~$F_1 \pcomp_2 G_1 = F_2 \pcomp_2 G_2$: \[ \begin{tikzcd}[sep=small,cramped] & \phi\tar[ld,"F_1"'] \tar[rd,"F_2"] \\ \phi_1 \tar[rd,dotted,"G_1"']&& \phi_2\tar[ld,dotted,"G_2"] \\ & \psi \end{tikzcd} \] The $3$-cells of a $(3,2)$\precategory associated to a confluent $3$\precategory admit a simple form, as in: \begin{prop} \label{prop:confluent-cr} Given a confluent $3$\precategory~$C$, every $3$-cell $F\co \phi \TO \phi' \in \freeinvf C$ can be written $F = G \comp_2 \finv H$ for some $G\co \phi \TO \psi \in C_3$ and $H\co \phi'\TO \psi \in C_3$. \end{prop} \noindent The above property says that confluent $3$\precategories satisfy a ``Church-Rosser property'' (\cite[Def.~2.1.3]{baader1999term}, for example), and is analogous to the classical result stating that confluent rewriting systems are Church-Rosser (\cite[Thm.~2.1.5]{baader1999term}, for example).
\begin{proof} By the definition of $\freeinvf C$, a $3$-cell $F\co \phi \TO \phi' \in \freeinvf C$ can be written \[ F = \finv G_1 \comp_2 H_1 \comp_2 \cdots \comp_2 \finv{G_k} \comp_2 H_k \] for some $k \ge 0$, $G_i\co \chi_i \TO \phi_{i-1}$ and $H_i\co \chi_i \TO \phi_i$ for $1 \le i \le k$ with $\phi_0 = \phi$ and $\phi_k = \phi'$, as in \[ \begin{tikzcd}[sep=small,cramped] & \chi_1 \tar[ld,"G_1"'] \tar[rd,"H_1"]& & \cdots \tar[ld,"G_2"'] \tar[rd,"H_{k-1}"] & & \chi_k \tar[ld,"G_k"'] \tar[rd,"H_k"] & \\ \phi_0 & & \phi_1 & \cdots & \phi_{k-1} & & \phi_k \end{tikzcd} \pbox. \] We prove the property by induction on $k$. If $k = 0$, $F$ is an identity and the result follows. Otherwise, since $C$ is confluent, there exists $\psi_k$, $G_k'\co \phi_{k-1} \to \psi_{k}$ and $H_{k}'\co \phi_{k} \to \psi_{k}$ with \[ \begin{tikzcd}[sep=small,cramped] & \chi_k \tar[ld,"G_k"'] \tar[rd,"H_k"] & \\ {\phantom{\phi_k}\mathmakebox[0pt]{\phi_{k-1}}} \tar[rd,"G_{k}'"'] & = & {\phi_k} \tar[ld,"H_k'"] \\ & {\psi_{k}} \end{tikzcd} \pbox. \] By induction, the morphism \[ \finv G_1 \comp_2 H_1 \comp_2 \cdots \comp_2 \finv{G_{k-2}} \comp_2 H_{k-2} \comp_2 \finv{G_{k-1}} \comp_2 (H_{k-1} \comp_2 G_{k}') \] can be written $G \comp_2 \finv{H}$ for some $\psi$ in $C_2$ and $G\co \phi_0 \TO \psi$, $H \co \psi_k \TO \psi$ in $C_3$. Since $G_k \comp_2 G_k' = H_k \comp_2 H_k'$, we have $\finv{G_k} \comp_2 H_k = G_k' \comp_2 \finv{H_k'}$. Hence, \[ F = G \comp_2 \finv{H} \comp_2 \finv{H_k'} = G \comp_2 \finv{(H_k' \comp_2 H)} \] which is of the wanted form. \end{proof} \noindent Starting from a confluent $3$\precategory, we have the following simple criterion to deduce the coherence of the associated $(3,2)$\precategory: \begin{prop} \label{prop:confluent-impl-coherence} Let $C$ be a confluent $3$\precategory which moreover satisfies that, for every $F_1,F_2\co \phi \TO \phi' \in C_3$, we have $F_1 = F_2$ in the localization~$\freeinvf C$. Then, $\freeinvf C$ is coherent. In particular, if $C$ is a confluent $3$\precategory satisfying that, for every $F_1,F_2\co \phi \TO \phi'\in C_3$, there is $G\co \phi'\TO \phi''\in C_3$ such that $F_1 \comp_2 G = F_2 \comp_2 G$ in~$C_3$, then $\freeinvf C$ is coherent. \end{prop} \begin{proof} Let $F_1,F_2\co \phi \TO \phi' \in \freeinvf{C}_3$. By \Cref{prop:confluent-cr}, for $i \in \set{1,2}$, we have $F_i = G_i \comp_2 \finv{H_i}$ for some $\psi_i \in C_2$, $G_i \co \phi \TO \psi_i \in C_3$ and $H_i \co \phi' \TO \psi_i \in C_3$, as in \[ \begin{tikzcd}[column sep={between origins,3.2em},row sep={between origins,2.5em},cramped] & \psi_1 & \\ \phi \tar[ur,"G_1"] \tar[dr,"G_2"']& & \phi' \tar[ul,"H_1"'] \tar[dl,"H_2"] \\ & \psi_2 \end{tikzcd} \pbox. \] By confluence, there are $\psi \in C_2$ and $K_i\co \psi_i \TO \psi \in C_3$ for $i \in \set{1,2}$, such that $G_1 \comp_2 K_1 = G_2 \comp_2 K_2$. By the second hypothesis, we have $H_1 \comp_2 K_1 = H_2 \comp_2 K_2$ so that \begin{align*} G_1 \comp_2 \finv{H_1} &= G_1 \comp_2 K_1 \comp_2 \finv{(H_1 \comp_2 K_1)} \\ &= G_2 \comp_2 K_2 \comp_2 \finv{(H_2 \comp_2 K_2)} \\ &= G_2 \comp_2 \finv{H_2}. \end{align*} Hence, $F_1 = F_2$. For the last part, note that if $F_1 \comp_2 G = F_2 \comp_2 G$, then $\eta(F_1) = \eta(F_2)$, where~$\eta$ is the canonical $3$\prefunctor~$C \to \freeinvf C$. 
\end{proof} \subsection[Rewriting on \texorpdfstring{$3$}{3}-prepolygraphs]{Rewriting on $\bm 3$-prepolygraphs} \label{ssec:rewriting-on-3-pol} As we have seen in the previous section, coherence can be deduced from a confluence property on the $3$\cells of $3$\precategories. Since confluence of classical rewriting systems is usually shown using tools from rewriting theory, this motivates adapting these tools to the context of~$3$\prepolygraphs in order to study the coherence of Gray presentations. Given a $3$\prepolygraph~$\P$, a \emph{rewriting step of~$\P$} is a $3$-cell~$S \in \freecat\P_3$ of the form \[ \lambda \comp_1 (l \comp_0 A \comp_0 r) \comp_1 \rho \] for some $l,r \in \freecat\P_1$, $\lambda,\rho \in \freecat\P_2$ and $A \in \P_3$, with $l,A,r$ $0$-composable and $\lambda,l \comp_0 A \comp_0 r,\rho$ $1$-composable. For such $S$, we say that $A$ is the \emph{inner $3$\generator} of~$S$. A \emph{rewriting path} is a $3$-cell $F\co \phi \TO \phi'$ in~$\freecat\P_3$. Remember that, by \Cref{thm:precat-nf}, such a rewriting path can be uniquely written as a composite of rewriting steps $S_1\comp_2 \cdots \comp_2 S_k$, since rewriting steps are exactly $3$\dimensional whiskers. Given $\phi,\psi \in \freecat\P_2$, \emph{$\phi$ rewrites to $\psi$} when there exists a rewriting path $F\co \phi \TO \psi$. A~\emph{normal form} is a $2$-cell $\phi \in \freecat\P_2$ such that for all $\psi \in \freecat\P_2$ and $F\co \phi\TO \psi$, we have $F = \unit \phi$.\penalty-50{} $\P$ is \emph{terminating} when there does not exist an infinite sequence of rewriting steps $F_i\co \phi_i \TO \phi_{i+1}$ for~$i \ge 0$. A \emph{branching} is a pair of rewriting paths $F_1\co \phi\TO \phi_1$ and $F_2\co \phi\TO \phi_2$ with the same source. The \emph{symmetric branching} of a branching $(F_1,F_2)$ is $(F_2,F_1)$. A branching $(F_1,F_2)$ is \emph{local} when both $F_1$ and $F_2$ are rewriting steps. A branching $(F_1,F_2)$ is \emph{joinable} when there exist rewriting paths $G_1\co \phi_1 \TO \psi$ and $G_2\co \phi_2 \TO \psi$; moreover, given a congruence $\sequiv$ on $\freecat\P$, if we have that $F_1 \comp_2 G_1 \sequiv F_2 \comp_2 G_2$, as in \[ \begin{tikzcd}[sep=small,cramped] &\phi\tar[dl,"F_1"']\tar[dr,"F_2"]&\\ \phi_1\tar[dr,dotted,"G_1"']&\sequiv&\tar[dl,dotted,"G_2"]\phi_2\\ &\psi \end{tikzcd} \] we say that the branching is \emph{confluent (for $\sequiv$)}. A \emph{rewriting system~$(\P,\sequiv)$} is the data of a $3$\prepolygraph $\P$ together with a congruence~$\sequiv$ on~$\freecat\P$. $(\P,\sequiv)$ is (\emph{locally}) \emph{confluent} when every (local) branching is confluent. It is \emph{convergent} when it is locally confluent and $\P$ is terminating. Given a $4$\prepolygraph~$\P$, there is a canonical rewriting system $(\restrictcat \P 3,\stdcong^\P)$ (recall the definition of~$\stdcong^\P$ given in \Ssecr{presentations}) where $\stdcong^\P$ intuitively witnesses that the ``space'' between two parallel $3$\cells can be filled with elementary tiles that are the elements of~$\P_4$. In the following, most of the concrete rewriting systems we study are of this form. The analogues of several well-known properties of abstract rewriting systems can be proved in our context. In particular, the classical proof by well-founded induction of Newman's lemma (\cite[Lem.~2.7.2]{baader1999term}, for example) can be directly adapted in order to show that: \begin{theo} \label{thm:newman-modulo} A rewriting system which is convergent is confluent.
\end{theo} \begin{proof} Let $(\P,\sequiv)$ be a rewriting system which is convergent. Let $\TO^+ \subseteq \freecat\P_2 \times \freecat\P_2$ be the partial order such that $\phi \TO^+ \psi$ if there exists a rewriting path $F\co \phi \TO \psi \in \freecat\P_3$ with $\len{F} > 0$. Since the underlying rewriting system is terminating, $\TO^+$ is well-founded. Thus, we can prove the theorem by induction on~$\TO^+$. Suppose given a branching $F_1\co \phi\TO \phi_1 \in \freecat \P_3$ and $F_2\co \phi\TO \phi_2 \in \freecat \P_3$. If $\len{F_1}=0$ or $\len{F_2}=0$, then the branching is confluent. Otherwise, $F_i = S_i \comp_2 F_i'$ with $S_i\co \phi \TO \phi_i'$ a rewriting step and $F_i'\co \phi_i' \TO \phi_i$ a rewriting path for $i \in \set{1,2}$. Since the rewriting system is locally confluent, there are $\psi \in \freecat\P_2$ and rewriting paths $G_i\co \phi_i' \TO \psi$ for $i \in \set{1,2}$ such that $S_1 \comp_2 G_1 \sequiv S_2 \comp_2 G_2$. Since the rewriting system is terminating and $\sequiv$ is stable under composition, by composing the $G_i$'s with a path $G\co \psi \TO \psi'$ where $\psi'$ is a normal form, we can suppose that $\psi$ is a normal form. By the induction hypothesis applied to $\phi_1'$ and $\phi_2'$, there are rewriting paths $H_i \co \phi_i \TO \psi_i'$ and $F_i''\co \psi \TO \psi_i'$ such that $F_i' \comp_2 H_i \sequiv G_i \comp_2 F_i''$ for $i \in \set{1,2}$. Since $\psi$ is in normal form, $F_i'' = \unit \psi$ and we have $H_i\co \phi_i \TO \psi$ for $i \in \set{1,2}$ as in \[ \begin{tikzcd}[sep=1.5em,cramped] & & \phi \ar[dd,phantom,"\sequiv"]\tar[dl,"S_1"'] \tar[dr,"S_2"] & &\\ & |[alias=Lp]| \phi_1' \tar[dl,"F_1'"']\tar[dr,"G_1"] & & |[alias=Rp]| \phi_2' \tar[dl,"G_2"'] \tar[dr,"F_2'"] \\ \phi_1 \tar[rr,"H_1"'{name=L}]& & \psi & & \phi_2 \tar[ll,"H_2"{name=R}] \ar[phantom,from=Lp,to=L,"\sequiv"] \ar[phantom,from=Rp,to=R,"\sequiv"] \end{tikzcd} \pbox. \] Moreover, \begin{align*} F_1 \comp_2 H_1 &\sequiv S_1 \comp_2 (F_1' \comp_2 H_1) \\ &\sequiv S_1 \comp_2 G_1 \\ &\sequiv S_2 \comp_2 G_2 \\ &\sequiv S_2 \comp_2 (F_2' \comp_2 H_2) \\ &\sequiv F_2 \comp_2 H_2. \qedhere \end{align*} \end{proof} \noindent \Cref{thm:newman-modulo} implies that, up to post-composition, all the parallel paths of a convergent rewriting system are equivalent. Later, this will allow us to apply \Cref{prop:confluent-impl-coherence} for showing the coherence of Gray presentations. \begin{lem} \label{lem:equiv-on-nf} Given a convergent rewriting system $(\P,\sequiv)$ and two rewriting paths $F_1,F_2\co \phi \TO \phi' \in \freecat\P_3$ as in \[ \begin{tikzcd}[cramped] \phi \tar[d,bend right=49,"F_1"'] \tar[d,bend left=49,"F_2"] \\ \phi' \end{tikzcd} \] there exists $G\co \phi' \TO \psi \in \freecat\P_3$ such that $F_1 \comp_2 G \sequiv F_2 \comp_2 G$, \ie \[ \begin{tikzcd}[sep=small,baseline=(\tikzcdmatrixname-2-3.base),cramped] & \phi \tar[ld,"F_1"'] \tar[rd,"F_2"] & \\ \phi' \tar[rd,"G"'] & \sequiv & \phi' \tar[ld,"G"] \\ & \psi \end{tikzcd} \pbox. \] \end{lem} \begin{proof} Given $F_1,F_2$ as above, since the rewriting system is terminating, there is a rewriting path $G\co \phi' \TO \psi$ where $\psi$ is a normal form. By confluence, there exist $G_1\co \psi \TO \psi'$ and $G_2\co \psi \TO \psi'$ such that $F_1 \comp_2 G \comp_2 G_1 \sequiv F_2 \comp_2 G \comp_2 G_2$. Since $\psi$ is a normal form, we have $G_1 = G_2 = \unit {\psi}$. Hence, $F_1 \comp_2 G \sequiv F_2 \comp_2 G$.
\end{proof} \noindent Note that, in \Lemr{equiv-on-nf}, we do not necessarily have \[ \begin{tikzcd}[cramped] \phi \tar[d,bend right=49,"F_1"'] \tar[d,bend left=49,"F_2"] \ar[d,phantom,"\sequiv"] \\ \phi' \end{tikzcd} \] which explains why the method we develop in this section for showing coherence will only apply to $(3,2)$\precategories, but not to general $3$\precategories. \subsection{Termination} \label{ssec:termination} Here, we show a termination criterion for rewriting systems $(\P,\sequiv)$ based on a generalization of the notion of reduction order in classical rewriting theory, where we require compatibility between the order and the composition operations on cells. A \emph{reduction order} for a $3$\prepolygraph~$\P$ is a well-founded partial order~$<$ on~$\freecat\P_2$ such that: \begin{itemize} \item given~$A\co \phi\TO\phi' \in \P_3$, we have~$\phi > \phi'$, \item given~$l,r \in \freecat\P_1$ and parallel~$\phi,\phi' \in \freecat\P_2$ such that~$l,\phi,r$ are $0$\composable and~$\phi > \phi'$, we have \[ l \pcomp_0 \phi \pcomp_0 r > l \pcomp_0 \phi' \pcomp_0 r\zbox, \] \item given $1$\composable~$\lambda,\phi,\rho \in \freecat\P_2$, and~$\phi' \in \freecat\P_2$ parallel to~$\phi$ such that~$\phi > \phi'$, we have \[ \lambda \pcomp_1 \phi \pcomp_1 \rho > \lambda \pcomp_1 \phi' \pcomp_1 \rho\zbox. \] \end{itemize} \noindent The termination criterion is then: \begin{prop} \label{prop:term-order-implies-terminating} If $(\P,\sequiv)$ is a rewriting system such that there exists a reduction order for $\P$, then $(\P,\sequiv)$ is terminating. \end{prop} \begin{proof} The definition of a reduction order implies that, given a rewriting step $\lambda \comp_1 (l \comp_0 A \comp_0 r) \comp_1 \rho$ with $l,r \in \freecat\P_1$, $\lambda,\rho \in \freecat\P_2$ and $A\co \phi \TO \phi' \in \P_3$ suitably composable, we have \[ \lambda \comp_1 (l \comp_0 \phi \comp_0 r) \comp_1 \rho > \lambda \comp_1 (l \comp_0 \phi' \comp_0 r) \comp_1 \rho. \] So, given a sequence of $2$-composable rewriting steps $(F_i)_{i < k}$, where $k \in \N \cup \set{\infty}$, $F_i\co \phi_i \TO \phi_{i+1} \in \freecat\P_3$ for $i < k$, we have $\phi_i > \phi_{i+1}$ for $i < k$. Since $>$ is well-founded, this implies that $k \in \N$. Hence, the rewriting system $(\P,\sequiv)$ is terminating. \end{proof} In order to build a reduction order for a Gray presentation~$\P$, we have to build in particular a reduction order for the subset of~$\P_3$ made of interchange generators. We introduce below a sufficient criterion for the existence of such a reduction order. The idea is to consider the lengths of the $1$\cells of the whiskers in the decompositions of $2$\cells and to show that they decrease in a suitable way when an interchange generator is applied. Let~$\Nfseq$ be the set of finite sequences of elements of~$\N$. We order~$\Nfseq$ by~$\seqord$ where \[ (a_1,\ldots,a_k) \seqord (b_1,\ldots,b_l) \] when~$k = l$ and there exists~$i \in \N$ with $1 \le i \le k$ such that~$a_j = b_j$ for all~$j < i$ and~$a_i < b_i$. Note that~$\seqord$ is well-founded. Given a $2$\prepolygraph~$\P$, there is a function~$\intnorm\co \freecat\P_2 \to \Nfseq$ such that, given~$\phi \in \freecat\P_2$, decomposed uniquely (using \Cref{thm:precat-nf}) as \[ \phi = (l_1 \pcomp_0 \alpha_1 \pcomp_0 r_1) \pcomp_1 \cdots \pcomp_1 (l_k \pcomp_0 \alpha_k \pcomp_0 r_k) \] for some~$k \in \N$,~$l_i,r_i \in \freecat\P_1$ and~$\alpha_i \in \P_2$ for~$i \in \set{1,\ldots,k}$,~$\intnorm(\phi)$ is defined by \[ \intnorm(\phi) = (\len{l_k},\len{l_{k-1}},\ldots,\len{l_1}).
\] Then,~$\intnorm$ induces a partial order~$\intord$ on~$\freecat\P_2$ by putting~$\phi \intord \psi$ when~$\csrctgt\eps_1(\phi) = \csrctgt\eps_1(\psi)$ for~$\eps \in \set{-,+}$ and~$\intnorm(\phi) \seqord \intnorm(\psi)$ for~$\phi,\psi \in \freecat\P_2$. Given a Gray presentation~$\P$, we say that~$\P$ is \emph{positive} when~$\len{\ctgt_1(\alpha)} > 0$ for all~$\alpha \in \P_2$. Under positiveness, the order~$\intord$ can be considered as a reduction order for the subset of $3$\generators of a Gray presentation made of interchangers, in the following sense: \begin{prop} \label{prop:term-criterion-interchanger} Let~$\P$ be a positive Gray presentation. The partial order~$\intord$ has the following properties: \begin{enumerate}[label=(\roman*),ref=(\roman*)] \item \label{prop:term-criterion-interchanger:X} for every~$\alpha,\beta \in \P_2$ and~$f \in \freecat\P_1$ such that~$\alpha,f,\beta$ are $0$\composable, \[ \csrc_2(X_{\alpha,f,\beta}) \intordgt \ctgt_2(X_{\alpha,f,\beta})\zbox, \] \item \label{prop:term-criterion-interchanger:0-comp} for~$\phi,\phi' \in \freecat\P_2$ and~$l,r \in \freecat\P_1$ such that~$l,\phi,r$ are $0$\composable, if~$\phi \intordgt \phi'$, then \[ l \pcomp_0 \phi \pcomp_0 r \intordgt l \pcomp_0 \phi' \pcomp_0 r\zbox, \] \item \label{prop:term-criterion-interchanger:1-comp} for~$\phi,\phi',\lambda,\rho \in \freecat\P_2$ such that~$\lambda,\phi,\rho$ are $1$\composable, if~$\phi \intordgt \phi'$, then \[ \lambda \pcomp_1 \phi \pcomp_1 \rho \intordgt \lambda \pcomp_1 \phi' \pcomp_1 \rho\zbox. \] \end{enumerate} \end{prop} \begin{proof} Given $\alpha,\beta \in \P_2$ and $f \in \freecat\P_1$ such that $\alpha,f,\beta$ are $0$-composable, recall that $X_{\alpha,f,\beta}$ is such that \[ X_{\alpha,f,\beta}\co (\alpha \comp_0 f \comp_0 \csrc_1(\beta)) \comp_1 (\ctgt_1(\alpha) \comp_0 f \comp_0 \beta) \TO (\csrc_1(\alpha) \comp_0 f \comp_0 \beta) \comp_1 (\alpha \comp_0 f \comp_0 \ctgt_1(\beta))\pbox. \] Then, we have \[ \intnorm(\csrc_2(X)) = (\len{\ctgt_1(\alpha)} + \len{f},0) \qqtand \intnorm(\ctgt_2(X)) = (0,\len{\csrc_1(\alpha)} + \len{f}). \] Since $\P$ is positive, we have $\len{\ctgt_1(\alpha)} > 0$ so that $\intnorm(\ctgt_2(X)) \seqord \intnorm(\csrc_2(X))$, and hence $\csrc_2(X) \intordgt \ctgt_2(X)$. Now, \ref{prop:term-criterion-interchanger:0-comp} and~\ref{prop:term-criterion-interchanger:1-comp} can readily be obtained by considering the whisker representations of $\phi$ and $\phi'$ and observing the action of $l \comp_0 - \comp_0 r$ and $\lambda \comp_1 - \comp_1 \rho$ on these representations, together with the definition of~$\intnorm$. \end{proof} \noindent The positiveness condition is required to prevent $2$\cells with ``floating components'', since Gray presentations with such $2$\cells might not terminate. For example, given a Gray presentation $\P$ where $\P_0$ and $\P_1$ have one element and $\P_2$ has two $2$\generators $\satex{cup}$ and $\satex{cap}$, there are $2$\cells of~${\freecat\P}$ with ``floating bubbles'' which induce infinite reduction sequences of interchange generators, such as the following one: \[ \satex{capcup-cycle1} \qTO \satex{capcup-cycle2} \qTO \satex{capcup-cycle3} \qTO \satex{capcup-cycle4} \qTO \satex{capcup-cycle1} \qTO \cdots \] \subsection{Critical branchings} \label{ssec:critical-branchings} In term rewriting systems, a classical result called the ``critical pair lemma'' states that local confluence is a consequence of the confluence of a subset of local branchings, called \emph{critical branchings}.
The latter can be described as pairs of rewrite rules that are minimally overlapping, see \cite[Sec.~6.2]{baader1999term} for details. Note that we used this result earlier in the proof of \Lemr{precat-locally-confluent}. Here, we show a similar result for rewriting on Gray presentations (introduced in \Cref{ssec:gray-presentation}). For this purpose, we give a definition of critical branchings which is similar to the one for term rewriting systems, \ie as minimally overlapping local branchings, where we moreover filter out some branchings that involve interchange generators and that are automatically confluent by our definition of Gray presentation. Then, we give a coherence theorem for Gray presentations based on the analysis of critical branchings, together with an associated coherence criterion, and we finish the section by stating a finiteness property of the critical branchings. Let $\P$ be a $3$\prepolygraph. Given a local branching $(S_1\co \phi \TO \phi_1,S_2\co \phi \TO \phi_2)$ of~$\P$, we say that the branching $(S_1,S_2)$ is \begin{itemize} \item \emph{trivial} when $S_1=S_2$, \item \index{minimal branching}\emph{minimal} when, for every other local branching~$(S'_1,S'_2)$ such that \[ S_i=\lambda\pcomp_1(l\pcomp_0 S'_i\pcomp_0 r)\pcomp_1\rho \] for~$i \in \set{1,2}$, for some $1$\cells~$l,r$ and $2$\cells~$\lambda,\rho$, the cells~$l,r,\lambda,\rho$ are all identities, \item \emph{independent} when \begin{align*} S_1&=((l_1\comp_0 A_1\comp_0 r_1)\comp_1\chi\comp_1(l_2\comp_0\phi_2\comp_0 r_2)) & S_2&=((l_1\comp_0\phi_1\comp_0 r_1)\comp_1\chi\comp_1(l_2\comp_0 A_2\comp_0 r_2)) \end{align*} for some $l_i,r_i \in \freecat\P_1$ and $A_i\co \phi_i \TO \phi'_i \in \P_3$ for $i \in \set{1,2}$ and $\chi \in \freecat\P_2$. \end{itemize} If moreover $\P = \restrictcat{\Q} 3$, where $\Q$ is a Gray presentation, we say that the branching $(S_1,S_2)$ is \begin{itemize} \item \emph{natural} when \[ S_1=((A\comp_0 g\comp_0 h)\comp_1(f'\comp_0 g\comp_0\psi)) \] for some $A\co \phi\TO\phi' \co f \To f' \in \P_3$, $\psi\co h \To h'\in \freecat\P_2$ and $g \in \freecat\P_1$, and \[ S_2 = \winterp{\wtrans_{u,v}}_{\phi,g \comp_0 \psi} \qtext{with} u = \letter l_1 \ldots \letter l_{\len{\phi} - 1} \qtand v = \letter r_2 \ldots \letter r_{\len{\psi}} \] and similarly for the situation on the second line of~\eqref{eq:inat-gen}, \item \emph{critical} when it is minimal and neither it nor its symmetric branching is trivial, independent or natural. \end{itemize} In the following, we suppose given a Gray presentation $\Q$ and we write $(\P,\sequiv)$ for $(\restrictcat{\Q} 3,\stdcong^{\Q})$. Our next goal is to show an adapted version of the critical pair lemma. We start with two technical lemmas: \begin{lem} \label{lem:existence-min-branching} For every local branching $(S_1,S_2)$ of~$\P$, there is a minimal branching $(S'_1,S'_2)$ and $1$-cells $l,r \in \freecat\P_1$ and $2$-cells $\lambda,\rho \in \freecat\P_2$ such that $S_i = \lambda \comp_1 (l \comp_0 S'_i \comp_0 r) \comp_1 \rho$ for $i \in \set{1,2}$. \end{lem} \begin{proof} We show this by induction on $N(S_1)$ where $N(S_1) = \len{\csrc_2(S_1)} + \len{\csrc_1(S_1)}$. Suppose that the property is true for all local branchings $(S'_1,S'_2)$ with $N(S'_1) < N(S_1)$.
If $(S_1,S_2)$ is not minimal, then there are rewriting steps $S'_1,S'_2 \in \freecat\P_3$, $l,r \in \freecat\P_1$ and $\lambda,\rho \in \freecat\P_2$ such that $S_i = \lambda \comp_1 (l \comp_0 S'_i \comp_0 r) \comp_1 \rho$ for $i \in \set{1,2}$, with $l,r,\lambda,\rho$ not all identities. Since \[ \len{\csrc_1(S_1)} = \len{l} + \len{\csrc_1(S'_1)} + \len{r} \qtand \len{\csrc_2(S_1)} = \len{\lambda} + \len{\csrc_2(S'_1)} + \len{\rho}, \] we have $N(S'_1) < N(S_1)$ so there is a minimal branching $(S''_1,S''_2)$ and $l',r' \in \freecat\P_1$, $\lambda',\rho' \in \freecat\P_2$ such that $S'_i = \lambda' \comp_1 (l' \comp_0 S''_i \comp_0 r') \comp_1 \rho'$ for $i \in \set{1,2}$. By composing with $\lambda,\rho,l,r$, we obtain the conclusion of the lemma. \end{proof} \begin{lem} \label{lem:triv-indep-natural-confluent} A local branching of~$\P$ which is either trivial or independent or natural is confluent. \end{lem} \begin{proof} A trivial branching is, of course, confluent. Independent and natural branchings are confluent thanks, respectively, to the independence generators and the interchange naturality generators of a Gray presentation. \end{proof} \goodbreak\noindent The critical pair lemma adapted to our context is then: \begin{theo}[Adapted critical pair lemma] \label{thm:cp} The rewriting system $(\P,\sequiv)$ is locally confluent if and only if every critical branching is confluent. \end{theo} \begin{proof} The first implication is trivial. For the converse, by \Lemr{existence-min-branching}, to check that all local branchings are confluent, it is enough to check that all minimal local branchings are confluent (the general case follows by whiskering, $\sequiv$ being a congruence). Among them, by \Lemr{triv-indep-natural-confluent}, it is enough to check the confluence of the critical branchings. \end{proof} \noindent We now state the main result of this section, namely a coherence theorem for Gray presentations based on the analysis of the critical branchings: \begin{theo}[Coherence] \label{thm:gray-coherence} Let $\Q$ be a Gray presentation and $(\P,\sequiv) = (\restrictcat{\Q} 3,\stdcong^{\Q})$ be the associated rewriting system. If $\P$ is terminating and all the critical branchings of $(\P,\sequiv)$ are confluent, then $\Q$ is a coherent Gray presentation. \end{theo} \begin{proof} By \Cref{thm:cp}, the rewriting system $(\P,\sequiv)$ is locally confluent, and by \Cref{thm:newman-modulo} it is confluent. Since $\prespcat{\Q} = \freecat\P/_{\sequiv}$, it implies that $\prespcat{\Q}$ is a confluent $3$\precategory. To conclude, it is sufficient to show that the criterion in the last part of \Cref{prop:confluent-impl-coherence} is satisfied. But the latter is a consequence of \Lemr{equiv-on-nf}. \end{proof} \noindent Note that \Cref{thm:gray-coherence} requires the rewriting system $(\P,\sequiv)$ to be confluent. If it is not the case, one can try to first apply a modified version of the classical Knuth-Bendix completion procedure~\cite{knuth1970simple} (see also \cite[Sec.~7]{baader1999term}) which, in addition to adding new $3$\generators in order to make the system confluent, also adds $4$\generators in order to make it confluent up to~$\sequiv$, hopefully resulting in a confluent Gray presentation. Such a procedure is detailed in the closely related setting of coherent presentations of monoids in~\cite{guiraud2013homotopical}, where it is called the Knuth-Bendix-Squier completion procedure.
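To give an idea of how the confluence checks underlying \Cref{thm:gray-coherence} can be mechanized, the following OCaml sketch tests the joinability of a branching by exhaustive normalization. The one-step reduction function \texttt{steps} is a hypothetical parameter enumerating all rewriting steps out of a $2$\cell (represented here by an abstract type), and termination is assumed; moreover, the sketch only tests joinability of the underlying rewriting system, whereas coherence additionally requires exhibiting a $4$\generator filling each critical branching.
\begin{verbatim}
(* Given steps : 'a -> 'a list returning all one-step reducts of a 2-cell,
   and assuming termination, compute the normal forms reachable from a
   2-cell and test joinability of a branching. *)
let rec normal_forms (steps : 'a -> 'a list) (phi : 'a) : 'a list =
  match steps phi with
  | [] -> [phi]  (* phi is a normal form *)
  | succs -> List.concat_map (normal_forms steps) succs |> List.sort_uniq compare

(* A branching (phi1, phi2) is joinable when the two cells have a common
   reduct; under convergence it suffices to compare normal forms. *)
let joinable steps phi1 phi2 =
  List.exists (fun nf -> List.mem nf (normal_forms steps phi2))
    (normal_forms steps phi1)
\end{verbatim}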
Our coherence theorem implies a coherence criterion similar to the ones shown by Squier, Otto and Kobayashi~\cite[Thm.~5.2]{squier1994finiteness} and Guiraud and Malbos~\cite[Prop.~4.3.4]{guiraud2009higher}, which states that adding a tile for each critical branching is enough to ensure coherence: \begin{theo} \label{thm:squier} Let $\Q$ be a Gray presentation, such that $\restrictcat\Q 3$ is terminating and, for every critical branching $(S_1\co\phi\TO\phi_1,S_2\co\phi\TO\phi_2)$ of~$\restrictcat\Q 3$, there exist $\psi \in \freecat\Q_2$, $F_i\co\phi_i \TO \psi\in\freecat\Q_3$ for $i \in \set{1,2}$ and $G\co S_1 \comp_2 F_1 \TOO S_2 \comp_2 F_2 \in \Q_4$. Then, $\Q$ is a coherent Gray presentation. \end{theo} \begin{proof} The definition of~$\Q_4$ ensures that all the critical branchings are confluent, so that \Cref{thm:gray-coherence} applies. \end{proof}% \noindent Note that, in \Cref{thm:squier}, we do not need to add a $4$\generator~$G$ as in the statement for a critical branching $(S_1,S_2)$ if there is already a generator $G'$ for the symmetrical branching $(S_2,S_1)$, so that a stronger statement holds. To finish this section, we mention a finiteness property for critical branchings of Gray presentations. This property contrasts with the case of strict categories, where finite presentations can have an infinite number of critical branchings~\cite{lafont2003towards, guiraud2009higher}. \begin{restatable}{theo}{grayfinitecriticalbranchings}% \label{thm:finite-cp} Given a Gray presentation $\Q$ where $\Q_2$ and $\Q_3$ are finite and $\len{\csrc_2(A)} > 0$ for every $A \in \Q_3$, there is a finite number of local branchings $(S_1,S_2)$ with rewriting steps $S_1,S_2 \in \freecat\Q_3$ such that $(S_1,S_2)$ is a critical branching. \end{restatable} \begin{proof} See \Appr{finiteness-cp}. \end{proof} \noindent The proof of \Cref{thm:finite-cp} happens to be constructive, so that we can extract an algorithm to compute the critical branchings for such Gray presentations. An implementation of this algorithm was used to compute the critical branchings of the examples of the next section.
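To give a concrete feel for how critical branchings can be enumerated, the following Python sketch computes critical pairs in the much simpler, one-dimensional setting of string rewriting systems, where minimal local branchings arise from overlaps and inclusions of left-hand sides. This is only an analogy: it ignores the $0$- and $1$-cell contexts as well as the interchange and independence generators handled by our implementation, and all names in the sketch are ours.
\begin{verbatim}
# Critical pairs of a string rewriting system, obtained from minimal
# overlaps of left-hand sides (a one-dimensional analogue only).
def critical_pairs(rules):
    """rules: list of (lhs, rhs) pairs of strings."""
    pairs = []
    for (l1, r1) in rules:
        for (l2, r2) in rules:
            # proper overlaps: a non-empty proper suffix of l1 is a prefix of l2
            for k in range(1, min(len(l1), len(l2))):
                if l1.endswith(l2[:k]):
                    word = l1 + l2[k:]             # minimally overlapping source
                    red1 = r1 + l2[k:]             # rewrite with the first rule
                    red2 = l1[:len(l1) - k] + r2   # rewrite with the second rule
                    if red1 != red2:
                        pairs.append((word, red1, red2))
            # inclusions: l2 occurs inside l1 (for two distinct rules)
            if (l1, r1) != (l2, r2):
                for i in range(len(l1) - len(l2) + 1):
                    if l1[i:i + len(l2)] == l2:
                        red2 = l1[:i] + r2 + l1[i + len(l2):]
                        if r1 != red2:
                            pairs.append((l1, r1, red2))
    return pairs

# Example: the single rule aa -> b overlaps with itself on the word aaa,
# giving the critical pair (ba, ab).
print(critical_pairs([("aa", "b")]))   # [('aaa', 'ba', 'ab')]
\end{verbatim}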
\section{Introduction} One key challenge in developing autonomous mobile robots is the SLAM (Simultaneous Localization And Mapping) problem \cite{Thrun2008}. Graph-based solutions are often based on a static environment, which is unrealistic: objects move, seasons influence the appearance of the surroundings and -- depending on the lighting -- images of the same scene differ. For long-term autonomy in dynamic environments two main requirements must be met. First, the computational complexity of graph optimization, the back-end, must be limited. Second, the process has to be robust against changes in the environment. The processing of sensor data, the front-end, must be developed in such a way that, despite changes, already visited places are recognized again. Answering the question \textit{Have I been here before?} belongs to the most important tasks during SLAM, since in presence of a loop closure, the odometry drift can be corrected retrospectively and the map quality can be improved. \inkscapescale{nodes_detector}{Loops with discrete nodes (blue points) are searched for in a variable radius of the current position (orange arrow). The search space depends on the position uncertainty. Due to the use of different memories not all nodes in the radius are used as loop candidates.}{htbp}{.85} Loop detection approaches can be classified according to the sensor technology used. Due to the high information density in an image and the multitude of effective techniques, visual methods are widely used. Where such methods are stable for static environments, the illumination change can already lead to failure within a few hours. Actively illuminated sensors such as LiDAR (Light Detection And Ranging) scanners are robust in such situations, as generated point clouds are illumination-invariant. This work deals with the combination of the two approaches to realize robust mapping and localization. The basis for this is the widespread library RTAB-Map (Real-Time Appearance-Based Mapping) \cite{Labbe.2018b}. Although the method can be used with a LiDAR, the data is only marginally utilized due to the simplified method of loop search within a constant radius using 2D Iterative Closest Point (ICP), making an extension suitable. Fig.~\ref{fig:nodes_detector} illustrates our approach. Loops are searched based on scan descriptors, and within a variable radius $r$ depending on the largest eigenvalue $\lambda_{\textrm{max}}$ of the position covariance matrix to take the odometry drift into account. If a loop exists, the respective scans are registered and the graph is optimized with the calculated transformation $\mm{T}$. The paper is structured as follows. Sec.~\ref{relatedwork} presents related work. Based on this, Sec.~\ref{cmrlidarloop} describes our contributions: \begin{itemize} \item further development of a loop detection method to enable the necessary scan registration, \item introduction of several loop verification steps to reject false positives, \item elaboration of a robust, open-source ROS package\footnote{\url{https://github.com/MarvinStuede/cmr\_lidarloop}} to close loops with LiDAR data during graph-based SLAM. \end{itemize} This is followed by various validations in Sec.~\ref{expval} and conclusions in Sec.~\ref{conclusions}. \section{Related Work}\label{relatedwork} Approaches to mapping changing environments are manifold. 
Examples are filtering moving objects and assuming a static environment \cite{Hua.2016}, modeling the dynamics in the frequency domain \cite{Krajnik.} and integrating new data to adapt the map \cite{Krajnik.2016}. The continuous adaptation to changes is also possible by removing old nodes at loop closure \cite{Lazaro.01.10.201805.10.2018} or by representing the environment in several time frames simultaneously \cite{Biber.June8112005}. RTAB-Map \cite{Labbe.2018b} is suitable for large-scale and long-term online SLAM in changing environments due to its memory management \cite{Labbe.2018}: The bounded Working Memory (WM) ensures a bounded demand on computing which is realized by transferring nodes to a database. A buffer of constant size ensures that recently created nodes are not considered for loop detection, since the odometry drift cannot be significantly corrected. Whereas most state-of-the-art methods are either visual or LiDAR-based, RTAB-Map supports both. This key advantage enables the use of many sensor configurations, comparison of results and easy integration into different systems. The availability in the ROS framework, multi-session operation and a high accuracy \cite{Labbe.2018b} further promote its deployment. RTAB-Map's LiDAR-based \textit{proximity detection} module was developed for challenging situations, such as a significant illumination change, in which the visual loop detection is not promising. However, the used 2D ICP algorithm is a simplified method for loop detection. The high amount of data within the three-dimensional point cloud is not used and a fast loop search with several hundred potential pairs is not possible. Beyond that, this module only searches within a constant radius for loops which is problematic, since the error of the estimated position increases due to odometry drift. The search space should thus be permanently adjusted. Further, a local search is useless at the beginning of multi-session operation, because the position in the old map is unknown. Instead, a global search must be performed here so that a link can be established between the two sessions, and the robot can locate itself in the old map. The potential of scans to close loops is thus not exploited, making an extension necessary. To detect loops with LiDAR data, there are a variety of methods which are based on histograms to reduce the scan dimension. Representing a scanning area as a piecewise continuous function using Normal Distribution Transform (NDT) and detecting loops by matching feature histograms realizes a high accuracy \cite{Magnusson.2009}. The time to create such histograms is an important property which is a disadvantage of NDT. Instead, by using simpler histrogram types such as the range or height of each point, a fast histogram generation \cite{Rohling.2015} is possible with results comparable to NDT. The azimuthal and radial division of the scan into bins and use of the maximum height of the points in each bin also allows a fast computation of a global descriptor called Scan Context \cite{gkim-2018-iros}. However, besides height and range, there is further information in the raw scan, which can be obtained with low computational effort. In \cite{Granstrom.}, each point cloud is described with 41 rotationally invariant geometric features, so that a trained classifier decides whether a loop exists. The extensive scan description with small, rapidly computed features and fast prediction make this approach suitable for efficient loop search. 
However, the method does neither consider the scan registration nor the important loop verification to reject false positives. We thus extend the method of Granström and Schön \cite{Granstrom.} with these essential elements and integrate it into RTAB-Map, so that a robust loop search with laser scans is realized. \section{SLAM with Learned LiDAR Loop Detector}\label{cmrlidarloop} The developed LiDAR extension is presented in Fig.~\ref{fig:cmr_lidarloop} and described in this section. \subsection{Loop Classification}\label{lidarclass} Loop detection is a binary problem: either the robot has already visited the current location in the past or not. To solve this problem based on LiDAR data, a classifier is trained. A point cloud $\m{P}{}{}{}{\mathit{i}}=\{\m{p}{}{}{\mathit{k}}{\mathit{i}}\}^{N}_{\mathit{k}=1}$ from node $i$ in the graph contains $N$ points $\m{p}{}{}{\mathit{k}}{\mathit{i}}\in\mathbb{R}^3$ representing the environment. The rotation invariant classifier presented in \cite{Granstrom.} is used in our work and is briefly introduced in the following. Depending on the LiDAR, the number of points differs. For example, the scanner of the mobile robot Sobi used for evaluation (see Sec.~\ref{expval}) generates $N$$\approx\,$45,000 points per cloud, requiring a dimensionality reduction for comparison. For this, each cloud is described by global features \cite{Granstrom.} \e{\m{f}{}{}{}{\mathit{i}}=(\m{f}{}{}{I}{\mathit{i}},\m{f}{}{}{II}{\mathit{i}})^\textrm{T}.}{feature_vector} Features $\m{f}{}{}{I}{\mathit{i}}$ of type I map the point cloud to a real number, e.g. simple geometric quantities such as the mean range or the range's standard deviation. More complex geometric quantities, such as the center and radius of a sphere fitted to the point cloud, are also calculated. A total of 32 features of type I are computed. In addition, there are nine features $\m{f}{}{}{II}{\mathit{i}}$ of type II. These are range histograms with nine different container sizes $b_{1},\ldots,b_{9}$. For each histogram, starting at the sensor origin, the scan is divided into annular container of constant width and the points lying in each container are counted. Each feature is a vector whose dimension depends on the container size. Before feature computation, every scan is processed in a way that all points with range $r_k$ greater than a maximum range $r_\textrm{max}$ are moved in the direction of the sensor origin so that $r_k \leq r_\textrm{max}$ applies for all points. This limitation allows range histograms of one container size to have the same dimension in any case. A total of 41 features are calculated consisting of 843 real numbers during SLAM with Sobi (see Sec.~\ref{expval}). Thus, the dimension of any scan is significantly reduced. \begin{figure*}[h] \centering \resizebox{1\linewidth}{!}{\input{Abbildungen/cmr_lidarloop.pdf_tex}} \caption{LiDAR extension of RTAB-Map. Loops are permanently searched for using the scan descriptors. In case of a detected loop, the registration of the scans takes place in a parallel thread. After the point clouds have been pre-processed, first a rough registration and then a refinement takes place.} \label{fig:cmr_lidarloop} \end{figure*} If during mapping and localization the features are calculated and stored for each node, the descriptor $\m{f}{}{}{}{c}$ of the current position can be efficiently compared to descriptors of map nodes $\m{f}{}{}{}{\mathit{i}}$ with respect to a possible loop closure. 
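To make the descriptor computation concrete, the following numpy sketch computes a few type I features and type II range histograms for a single scan. It is only an illustration: the concrete feature subset and container sizes chosen here are ours and do not reproduce the exact 41 features of \cite{Granstrom.}.
\begin{verbatim}
# Illustrative descriptor of one point cloud: a few type I scalar features
# and type II range histograms (subset only; values are not the original ones).
import numpy as np

R_MAX = 40.0                    # maximum range used to clip the scan [m]
BIN_SIZES = [0.5, 1.0, 2.0]     # illustrative subset of container sizes b_i

def describe_cloud(points):
    """points: (N, 3) array with the x, y, z coordinates of one scan."""
    ranges = np.linalg.norm(points, axis=1)
    ranges = np.minimum(ranges, R_MAX)        # move far points towards the origin

    # type I: scalar geometric quantities (subset)
    f_type1 = np.array([
        ranges.mean(),                        # mean range
        ranges.std(),                         # standard deviation of the range
        points[:, 2].mean(),                  # mean height
        points[:, 2].std(),                   # spread of the height
    ])

    # type II: range histograms, one per container size
    f_type2 = []
    for b in BIN_SIZES:
        edges = np.arange(0.0, R_MAX + b, b)  # annular containers of width b
        hist, _ = np.histogram(ranges, bins=edges)
        f_type2.append(hist.astype(float))

    return f_type1, f_type2
\end{verbatim}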
An AdaBoost classifier \cite{Pedregosa.2011,Hastie.2009} predicts whether a loop is present, and the quantity $\sk{p}{}{\mathit{i}}{}{c}$ indicates the probability that the corresponding scans are from the same location. The input of the classifier is generated from the respective feature vectors by comparing them appropriately \cite{Granstrom.}. Type I features are compared via the absolute difference, and the histograms of type II features are compared using Pearson's correlation coefficient. AdaBoost classification \cite{Freund.1997} belongs to the class of boosting algorithms and uses $T$ weak learners, which together form a strong classifier. In each learning round, the data is reweighted depending on the current prediction error. Incorrectly classified and thus difficult cases are prioritized higher, so that the number of training rounds $T$ significantly influences the accuracy. For $T\geq50$ weak classifiers, the error was observed to stop decreasing for the used LiDAR descriptors \cite{Granstrom.}. Accordingly, in the present work $T=50$ rounds are performed for learning.
\subsection{Scan Registration}\label{lidarreg}
In case of a detection, a further step is necessary to close a loop: the scan registration. This aims at determining the homogeneous transformation $\mm{T}$ between the current point cloud $\m{P}{}{}{}{c}$ and the point cloud $\m{P}{}{}{}{*}$ with which a loop was detected. The transformation is then used to add a link between the two nodes in the map in order to take the loop into account. The entire registration process is illustrated in Fig.~\ref{fig:cmr_lidarloop} and is implemented with the point cloud library \cite{Rusu.09.05.201113.05.2011}. Registering raw point clouds is computationally expensive and the scans also contain regions that prevent successful registration. Examples are points on the ground or regions generated by reflection of the laser beam on glass fronts. To counter this, the following filters are executed consecutively:
\begin{itemize}
\item voxel grid filter with voxel side length $l$,
\item height filter to remove points with height lower than $z_\textrm{lim}$,
\item intensity filter to remove points with intensity lower than $i_\textrm{lim}$ and therefore points whose laser beams were reflected,
\item range filter to remove points with range larger than $r_\textrm{lim}$ and therefore scan regions which are far away and do not describe the environment in sufficient detail,
\item random downsampling filter to randomly remove points until the point cloud consists of $n_\textrm{p,max}$ points.
\end{itemize}
The pre-processed scans $\m{\tilde{P}}{}{}{}{c}$, $\m{\tilde{P}}{}{}{}{*}$ used for registration can have a large translational and rotational offset. Using a local method like ICP \cite{Besl.1992} would not be expedient due to convergence into a local optimum. Consequently, a rough registration to compute an initial alignment takes place first according to \cite{Holz.2015}. Thereby, Fast Point Feature Histograms (FPFH) \cite{Rusu.2009} are used as robust multidimensional features that describe the local geometry around a point, and only persistent features are used \cite{Rusu.2008}. The latter increases robustness, since only unique regions are taken into account. With these keypoints, the correspondences can then be estimated by performing a nearest neighbor search in the feature space. Because of the partial overlap, not every keypoint has a match and a large number of incorrect correspondences would be possible.
These outliers negatively influence the registration and must be rejected. By means of outlier rejection based on RANdom SAmple Consensus (RANSAC) \cite{Fischler.1981} a transformation is computed over a multiplicity of iterations with different subsets of correspondences. Only those correspondences are classified as correct whose Euclidean distance, after application of the respective transformation, is smaller than a defined threshold. The outliers that are present during the transformation with the most inliers are rejected. With the filtered correspondences the initial alignment can be completed using Singular Value Decomposition (SVD) \cite{Horn.1987}. The transformation $\m{T}{}{}{\,\textrm{IA}}{}$ of this global method is the initial guess for subsequent refinement, so that the output of the ICP registration $\mm{T}$ is used to correct the odometry drift. Since adding a wrong transformation has drastic consequences for the SLAM procedure, some verification criteria must be met. First, both processed scans must consist of at least $n_\textrm{p,min}$ points so that enough data is included. Second, a transformation is only accepted if at least $n_\textrm{inliers}$ correspondences after outlier rejection are present which ensures that enough matching areas are found. Third, the transformation has a translational offset of at most $t_\textrm{max}$. This condition increases the robustness since a larger translational offset leads to a smaller scan overlap, and thus the registration can become less accurate. \subsection{Extending the RTAB-Map Framework}\label{impl} The presented loop detection and scan registration are integrated as an extension into RTAB-Map which is illustrated in Fig.~\ref{fig:cmr_lidarloop}. For each scan, the corresponding descriptor is calculated and sent to RTAB-Map. Saving the scans with associated features enables compatibility with multi-session operation. The extension is continuously supplied with current map data which contains required information of the graph such as the position $\m{x}{}{}{}{c}$ of the current node. Further, the extension receives the positions $\mm{X}=\{\m{x}{}{}{}{\mathit{i}}|\forall i\}$ of all map nodes and associated descriptors $\mm{F}=\{\m{f}{}{}{}{\mathit{i}}|\forall i\}$, which is the basis for the actual task to detect loops. When receiving new map data, a loop search is done with the trained detector. Similar to \textit{proximity detection}, the current node is compared with other WM nodes within a certain radius only. For example, if the robot is located in a corridor and the entire WM is checked, a loop detection with a node in a similar corridor at a completely different location in the building would be possible due to similar scans. Adding such a loop would be fatal for the SLAM process, so the local restriction is useful for increasing robustness. In determining the search space, the present work differs from RTAB-Map's approach. For large loops, a large odometry drift is present and therefore, a large uncertainty in the estimated position. The search carried out in \textit{proximity detection} in a constant radius is problematic here because the estimated position is highly inaccurate and loops are searched for in the wrong map area. Accordingly, we follow a different philosophy: the use of a variable search space depending on the position inaccuracy. The search radius \e{r(\lambda_{\textrm{max}})=r_\textrm{min}+\beta g_{\textrm{max}}(\lambda_{\textrm{max}})}{r_search} consists of two parts. 
In the constant radius $r_\textrm{min}$, loops are searched for if the current position can be assumed to be without errors. This is the case at the start of the process as well as at loop closures, where the inaccuracy is corrected. In all other cases, the estimated position is subject to error, whereby we assume the odometry error to be steadily increasing. This property is taken into account by the second component of (\ref{eq:r_search}). Thereby, $g_{\textrm{max}}(\lambda_{\textrm{max}})=2\sqrt{5.991\lambda_{\textrm{max}}}$ is the length of the longest major axis of the $95$\% confidence ellipse of the two-dimensional position. This quantity can be obtained from the largest eigenvalue $\lambda_{\textrm{max}}$ of the covariance matrix, for which the odometry data is processed by the extension. The parameter $\beta$ serves as a scaling factor. Fig.~\ref{fig:R_search} visualizes the variable search space. The approach enables to include nodes close to the exact position in the loop search despite a large position inaccuracy. \inkscapescale{R_search}{Despite the difference between the actual (green arrow) and estimated position (orange arrow), the search space (orange circle) contains the desired circular area with the radius $r_\textrm{min}$ around the actual position. The schematic course shows the increase of the radius with increasing time $t$ due to odometry error. If a loop closure occurs, the uncertainty is reset.}{bp}{1} As already mentioned, adding a wrong loop is fatal for the integrity of the map. Hence, further precautions are taken to reduce the number of false positives. Only when $\sk{p}{}{\mathit{i}}{}{c}>\sk{p}{}{}{}{min}$ applies to the predicted loop probability $\sk{p}{}{\mathit{i}}{}{c}$, the pair is treated as a potential loop pair. Further information on the adjustment of the threshold $\sk{p}{}{}{}{min}$, i.e. the fine tuning of precision and recall, is presented in Sec.~\ref{det_perf}. If the search was successful, a final verification is performed with the node of highest loop probability $\sk{p}{}{*}{}{c}$ at the location $i^*$. The current node is compared with the surrounding nodes of the potential loop candidate. If at least one loop with the current node is detected in the immediate neighborhood consisting of $2n_\textrm{v}$ verification nodes, this candidate is accepted. In each case $n_\textrm{v}$ nodes with $i<i^*$ and $n_\textrm{v}$ nodes with $i>i^*$ are considered. For the strictest variant ($n_\textrm{v}=1$) there must be either another loop with the node created before or after the loop candidate. Otherwise, the loop is rejected and the search starts again when new map data is received. If the verification is successful, the scan registration takes place in a parallel thread and the link between the involved nodes is added with the accepted transformation $\mm{T}$. After further verification in RTAB-Map, the loop can be closed by graph optimization. Two final adjustments remain to be made: First, loops must be searched robustly even in multi-session operation, which is impossible with the previous approach of a local search if the position is unknown. Especially at the beginning of a new session the relative position in the old map is in general not given -- the initial state problem. A position-independent search must be carried out here, so that nodes of the entire WM are examined. 
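For the local search just described, the variable radius of (\ref{eq:r_search}) and the resulting set of loop candidates can be computed directly from the current covariance estimate, as in the following minimal sketch (the default values correspond to the parameters chosen later in Sec.~\ref{expval}, and the helper names are ours):
\begin{verbatim}
# Variable search radius from the largest eigenvalue of the 2D position
# covariance; r_min and beta as introduced above (defaults only illustrative).
import numpy as np

def search_radius(cov_xy, r_min=3.0, beta=0.25):
    """cov_xy: 2x2 covariance matrix of the estimated 2D position."""
    lambda_max = np.max(np.linalg.eigvalsh(cov_xy))
    g_max = 2.0 * np.sqrt(5.991 * lambda_max)  # longest axis of the 95% ellipse
    return r_min + beta * g_max

def loop_candidates(node_positions, current_xy, cov_xy):
    """Indices of map nodes lying inside the variable search radius."""
    r = search_radius(cov_xy)
    d = np.linalg.norm(node_positions - current_xy, axis=1)
    return np.flatnonzero(d <= r)
\end{verbatim}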
As alternative local restriction during multi-session, the size $n_\textrm{ms}$ is introduced which is the number of consecutive loop candidates that must lie within a radius $r_\textrm{ms}$. Only one loop is accepted if several loop pairs are from the same place. This type of loop search is done in multi-session mode for the first $n_\textrm{start}$ accepted loops, after which it has to be checked if the localization in the old map was successful. The criterion for this is the ratio $\alpha=\frac{n_\textrm{local}}{n_\textrm{WM}},$ where $n_\textrm{local}$ is the number of nodes in the local map and $n_\textrm{WM}$ is the number of all WM nodes. If this ratio is smaller than a defined threshold $\alpha_\textrm{min}$, there are few nodes in the local map, so the relative positions of many WM nodes to the robot are unknown. In this case, the entire WM should continue to be searched for loops, since localization is not yet satisfactory. Otherwise, the relative position of many WM nodes with regard to the robot is known and the local search in the variable radius can be started. To meet online requirements, the second adjustment is to limit the number of nodes used in the search. For this, $n_\textrm{n,max}$ nodes are randomly selected from the possible candidates. Due to the fast detector, hundreds of nodes can be used. \section{Experimental Validation}\label{expval} The evaluation is divided into three parts: First, a loop detector is trained and tested in an independent environment (see Sec.~\ref{det_perf}). Second, multi-session experiments were performed with this detector under challenging conditions (see Sec.~\ref{val_slam}). The mobile service robot Sobi \cite{Ehlers.2020} is used for both validations, which is a ROS-based information and guidance system equipped with a differential drive base (Neobotix MP-500), two RGBD cameras (Intel Realsense D435, front and back) and a 3D LiDAR (Velodyne VLP-16). An extended Kalman filter \cite{Moore.2016} is used to fuse the wheel odometry with the IMU data (XSens MTi-30). Third, the general applicability of our approach is shown with the widely used KITTI\cite{Geiger.2012b} dataset in Sec.~\ref{kitti_sec}. \subsection{Detector Performance}\label{det_perf} To train the detector, indoor and outdoor data consisting of descriptors with corresponding coordinates was collected on an university campus. There are large distances between objects in outdoor areas of the campus, so $r_\textrm{max}=40\textrm{m}$ is chosen. With the container sizes $b_{1},\ldots,b_{9}$ suggested in \cite{Granstrom.}, each feature vector consists of 843 entries. There is no ground truth position, so the optimized poses were obtained from RTAB-Map to generate the most accurate positions of the 1248 nodes with a path length of 697m. Comparing each node with itself and with all others gives 779,376 pairs. As the distance at which the detector should treat a pair as positive, we choose 3m. With this distinction, the set is divided into 11,458 positive and 767,918 negative pairs. Training with this set would not be expedient, as the number of pairs for the classes is clearly unbalanced. As in \cite{Granstrom.}, a random subset of negative pairs is used, where we select 11,458 pairs to obtain the same amount of data for both classes. However, it is noticeable that the performance varies considerably depending on the subset, since half of the data is randomly taken from a large set. Thus, an optimization was carried out, whereby 50 detectors were trained with different subsets. 
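The following scikit-learn \cite{Pedregosa.2011} sketch summarizes this training procedure: pair features are built by comparing node descriptors (absolute differences for type I features, Pearson's correlation coefficient for each type II histogram), the negative pairs are randomly subsampled to balance the classes, and an AdaBoost classifier with $T=50$ weak learners is fitted. The data handling is simplified and the helper names are ours.
\begin{verbatim}
# Simplified training of a loop detector on balanced descriptor pairs.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def pair_feature(desc_a, desc_b):
    """desc_*: (type I vector, list of type II histograms) of one node."""
    f1_a, hists_a = desc_a
    f1_b, hists_b = desc_b
    diff = np.abs(f1_a - f1_b)                       # type I comparison
    corr = [np.corrcoef(ha, hb)[0, 1] for ha, hb in zip(hists_a, hists_b)]
    return np.concatenate([diff, corr])

def train_detector(descriptors, positions, loop_dist=3.0, seed=0):
    rng = np.random.default_rng(seed)
    X, y = [], []
    n = len(descriptors)
    for i in range(n):
        for j in range(i, n):
            X.append(pair_feature(descriptors[i], descriptors[j]))
            y.append(np.linalg.norm(positions[i] - positions[j]) < loop_dist)
    X, y = np.asarray(X), np.asarray(y)
    pos, neg = np.flatnonzero(y), np.flatnonzero(~y)
    neg = rng.choice(neg, size=len(pos), replace=False)  # balance the classes
    keep = np.concatenate([pos, neg])
    clf = AdaBoostClassifier(n_estimators=50)            # T = 50 weak learners
    return clf.fit(X[keep], y[keep])

# Loop probability of a new pair: clf.predict_proba([pair_feature(a, b)])[0, 1]
\end{verbatim}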
To compare the trained detectors, the common criteria of detection rate $D$ and false alarm rate $FA$ are used:
\e{D=\frac{\textrm{\# Positive data pairs classified as positive}}{\textrm{\# Positive data pairs}},}{D}
\e{FA=\frac{\textrm{\# Negative data pairs classified as positive}}{\textrm{\# Negative data pairs}}.}{FA}
An ideal detector would detect every loop ($D=100\%$) and would not detect any loop incorrectly ($FA=0\%$). Since our extension still verifies every possible loop pair, $FA<1\%$ is set as the target. The decisive parameter for this is the introduced threshold $p_\textrm{min}$, which is fine-tuned with a training-independent test dataset consisting of 181 nodes. The path of length 72m is a loop in an indoor hall, which is illustrated in Fig.~\ref{fig:detector_performance}a). For each of the 50 detectors, $p_\textrm{min}$ was incrementally increased until the requirement $FA<1\%$ was met. This process can be visualized with the ROC (Receiver Operating Characteristic) curve of the best performing detector in Fig.~\ref{fig:detector_performance}b). Increasing the threshold results in a more restrictive detector, which is less erroneous but also detects fewer loops. The best classifier ($D=47.3\%,\,FA=0.8\%,\,p_\textrm{min}=52.4\%$) detects approximately half of all loops with only a few false positives in the unknown environment.
\begin{figure*}[t] \centerline{\includegraphics[width=\linewidth]{Abbildungen/detector_performance.pdf}} \caption{Detector performance for the training-independent test environment a). ROC curve b) shows the performance of the best detector at different thresholds $p_\textrm{min}$. Detectors can be compared using their detection rates at the desired boundary of $FA<1\%$ (blue star). The corresponding classification matrix c) compared to the distance matrix d) visualizes the performance of this detector at $p_\textrm{min}=52.4\%$ with $D=47.3\%$ and $FA=0.8\%$. Orange areas represent loops and grey areas no loops. } \label{fig:detector_performance} \end{figure*}
The test results can be illustrated in the form of two matrices. The classification matrix Fig.~\ref{fig:detector_performance}c), in which each node pair is examined by the detector for a loop, is compared with the distance matrix Fig.~\ref{fig:detector_performance}d). The latter represents the underlying ground truth, since all pairs with a distance less than 3m are treated as loops and the rest as negative pairs. The orange area on the diagonal represents the successive nodes that are close enough together. The off-diagonal orange area is the loop that was driven, since the robot returned to an old position at a later point in time and the nodes concerned are a short distance apart. For the trained detector, these matrices are roughly similar and all detected loops are located in areas where the pairs are not far away from each other. However, pairs with a distance minimally greater than the selected 3m are classified as positive as well. This does not pose a problem, as these loop closures are also desirable.
\subsection{SLAM under Challenging Conditions}\label{val_slam}
The presented method is furthermore validated in an experiment under challenging conditions. Since the introduced extension is intended to support the loop detection, we compare RTAB-Map extended with our work against the default operation. The extension is termed LL (LiDAR Loop). First, all required sensor data was recorded while driving.
The evaluation was performed afterwards on a 2.60GHz Intel Core i7-4720HQ CPU with 8 GB of RAM running Linux, so that both methods had the same data available. Predicting with the detector takes on average 2ms and RTAB-Map's time threshold was set to 0.75s. Depending on the cloud sizes, registration of a loop pair takes 2s--10s. Due to the use of parallel threads, the longer time of registration is unproblematic. The parameters of our extension must be adjusted depending on the environment and robot and were chosen as follows: \begin{itemize} \item $r_\textrm{min}=t_\textrm{max}=3\textrm{m}\,$$\rightarrow\,$loop distance during training, \item $n_\textrm{p,max}=10,000$, $n_\textrm{p,min}=7,000$, $n_\textrm{inliers}=1,000$, \mbox{$i_\textrm{lim}=5$,} $z_\textrm{lim}=0.3\textrm{m}$, $r_\textrm{lim}=30\textrm{m}$, $l=0.03\textrm{m}\,$$\rightarrow\,$fast and robust scan registration, \item $n_\textrm{n,max}=200\,$$\rightarrow\,$many nodes during loop search and computation time less than RTAB-Map's time threshold, \item $\beta=0.25\,$$\rightarrow\,$reduces the increase of the search radius, \item $n_\textrm{v}=1\,$$\rightarrow\,$strict verification, \item $\alpha_\textrm{min}=0.5, n_\textrm{ms}=2, r_\textrm{ms}=5\textrm{m}, n_\textrm{start}=3\,$$\rightarrow\,$global loop search until localization is satisfactory. \end{itemize} According to odometry, the path for the outdoor map has a length of 423m and consists of 769 nodes. When mapping with RTAB-Map, there were 6 visual loop closures, compared to 15 loops closed with the extended method. The potential of the extension is already apparent in single session operation, as significantly more loops were detected. However, there are no major differences in terms of map quality. The main reason is the precise odometry used, which means that with such a path, a few loops are sufficient for an acceptable map quality. Due to similar maps, Fig.~\ref{fig:outdoor_experiments} presents only the map generated by RTAB-Map+LL with the corresponding nodes in orange. The added value becomes evident when taking the usual task of a mobile robot into account: \textit{map the environment and extend the map on another day}. The central requirement for such a multi-session operation is a successful localization in the old map. For this, the paths shown in Fig.~\ref{fig:outdoor_experiments} in light and dark blue were driven on another day at a different time. It was checked individually whether a localization succeeded or failed in the map generated with the respective method. Due to the dynamic environment, RTAB-Map did not detect any loops and, therefore, a localization for all of the five paths failed, preventing to extend the map. In contrast, with RTAB-Map+LL, a localization in the map of the old session was possible in four out of five paths, and the mapping could be continued. Loops were detected in the third path, but localization failed due to a wrong transformation matrix. The participating scans are from a location on the campus which has few descriptive elements except for a multitude of repetitive pillars. Due to the repetitiveness, pillars of one scan were registered with different pillars of the other scan, so that a translational error led to a mapping failure. To illustrate the environmental dynamics, Fig.~\ref{fig:outdoor_experiments} shows pictures of loop pairs successfully added in RTAB-Map+LL operation. 
A change in sunlight, switched-on lights, a person in the picture and a different field of view are the reasons why the visual loop search of RTAB-Map failed. Only by observing the three-dimensional surrounding structure is a localization still possible. Even if the robot travels to an already visited location with a different orientation and the image comparison therefore returns no match, loop detection succeeds due to the 360° view and the rotation invariant features used.
\begin{figure}[tbp] \centerline{\includegraphics[width=\linewidth]{Abbildungen/outdoor_experiments.pdf}} \caption{Outdoor map with associated nodes (orange circles), five paths (light and dark blue) that were driven on another day and images of three exemplary loop pairs detected by RTAB-Map+LL.} \label{fig:outdoor_experiments} \end{figure}
Similar experiments were conducted in two challenging indoor environments: an entrance hall with many glass fronts and an office environment with corridors. According to odometry, the mapping paths have a length of 196m and 299m, respectively. Analogous to the outdoor experiment, three paths were driven at a different time in each environment. RTAB-Map realizes a successful localization in two out of six paths; in the respective areas, a change in sunlight has only a small influence due to the large amount of artificial light. Using RTAB-Map+LL, a successful localization is achieved in all six cases. Combining the results of the indoor and outdoor validation, RTAB-Map realizes a successful localization in two out of eleven cases. In contrast, with the LiDAR extension it was possible to localize in ten out of eleven paths.
\subsection{KITTI Dataset}\label{kitti_sec}
Finally, our extension was evaluated with the widely used KITTI\cite{Geiger.2012b} odometry benchmark. We continued to use the detector trained in Sec.~\ref{det_perf}, so that the training data recorded in indoor and outdoor environments on our campus differs significantly from the test data acquired in road traffic. The sequences containing loops are mapped with our proposed extension using the RTAB-Map parameters for KITTI \cite{Labbe.2018b}. Odometry is calculated Frame-To-Map (F2M) wise from the front stereo camera image sequences. The following parameters of our package have been adjusted for the car and road traffic: $r_\textrm{min}=7.5\textrm{m}$, $r_\textrm{max}=50\textrm{m}$, $n_\textrm{v}=3$, $z_\textrm{lim}=1\textrm{m}$, \mbox{$l=0.2\textrm{m}$}, $t_\textrm{max}=10\textrm{m}$. Since the intensities of the point clouds were not given, we set $i_\textrm{lim}=0$. Our extended version of RTAB-Map is compared to the LiDAR-based approach LOAM \cite{zhang2017}, which is currently ranked \#2 on KITTI's odometry leaderboard. Table \ref{tab:kitti} shows the results of the two methods using the average translational error as the performance metric. The number of visual and the number of additional LiDAR-based loop closures in RTAB-Map+LL operation are also included. The results show that the performance of our extension is comparable to LOAM. In four out of seven sequences, the error is lower with our method than with LOAM. For sequence 09, RTAB-Map+LL performs significantly worse, which can be explained by the small number of loop closures. Moreover, the potential of the LiDAR extension can be seen in sequence 08. Since the camera is oriented in the opposite direction when traversing back, no visual loops can be detected. By using LiDAR data, 30 loops can still be closed in this case.
\begin{table}[t] \caption{Results for the KITTI odometry dataset. Average translational error in \%.} \input{kitti_loam.tex} \label{tab:kitti} \end{table} This experiment demonstrates the general applicability of our method. Despite large deviation between training and test environment, loops are still detected successfully. To improve loop detection in road traffic, training could be carried out directly with part of the KITTI data. In this case, the loop distance of 3m defined during training could also be increased, since detection is desired at greater distances on the road. \section{Conclusions}\label{conclusions} We presented a module to close loops based on laser scans to extend graph-based SLAM methods. By global description of a point cloud with rotation invariant features and a trained classifier, the question \textit{Have I been here before?} can be answered under challenging conditions, which is essential for the long-term autonomy of mobile robots. Experiments show that the classifier detects $47.3\%$ of loops with a small number of false positives which can be filtered out by verification. In dynamic environments, localization with RTAB-Map succeeded only in two out of eleven cases. A connection between all different sessions could be established using our extension. Except for a registration error, in ten out of eleven experiments, the localization succeeded. The potential of our module was also demonstrated by a validation on the KITTI dataset. Despite all the positive results, new tasks arise: Since registration can fail with many repeating elements, the acceptance of a calculated transformation should be further restricted, e.g. by considering the percentage of overlapping regions. Further, we will integrate the extension into a map management approach \cite{Ehlers.2020}. With different SLAM configurations, it would be possible to use a different classifier and registration parameters for a short range than for a long range environment. The main potential for improvement lies in the use of absolute positions using WiFi and GPS data. When starting multi-session operation instead of a loop search in the entire map, loops could be searched for directly in the local environment. \bibliographystyle{IEEEtran}
\section{Introduction}
The cosmic microwave background (CMB) and type Ia supernovae observations suggest that our universe is expanding with acceleration. We may explain this cosmic acceleration of the universe by introducing a cosmological constant $(\Lambda)$ in the Einstein-Hilbert action with the equation of state $p=\omega\rho$, where $\omega=-1$ (with $\rho$ and $p$ the energy density and pressure of the universe, respectively). A field with such a property is called dark energy. To account for this cosmic acceleration, the cosmological constant $(\Lambda)$ is one of the simplest and most natural candidates; however, it faces the serious problems of fine-tuning and a large mismatch between theory and observations. Dark energy may also be explained by introducing a scalar field with a suitable potential. There have been significant attempts to construct dark energy models by modifying the Einstein-Hilbert action. This approach is called modified gravity. Several modifications of general relativity have been obtained by replacing the Ricci scalar ($R$) in the Einstein-Hilbert action by some function such as $f(R)$, $f(R, T)$, etc.\cite{Starobinsky, Capozziello, Capozziello1, Carroll, Nojiri, Chiba, Dolgov, Soussa, Olmo, Faraoni, Bamba, Bamba1, Bamba3, Harko, Houndjo, Jamil, Myrzakulov, Sharma, Shabani, Moraes, Ram, Samanta, Samanta1, Samanta2} (for reviews on theories of modified gravity as well as the issue of dark energy, which are studied to explain the late-time cosmic acceleration, see, for example,~\cite{Nojiri:2006ri, Nojiri:2010wj, Book-Capozziello-Faraoni, Capozziello:2011et, Bamba:2012cp, Joyce:2014kja, Koyama:2015vza, Bamba:2015uma, Cai:2015emx, Nojiri:2017ncd}). The accelerated expansion of the universe has thus been explained in several ways. At the same time, non-linear electrodynamics may solve the problem of early-time inflation and late-time cosmic acceleration of the universe without any modification of general relativity. Electromagnetic fields are very strong at early times in the evolution of the universe. If the radiation-dominated stage in the early universe is governed by Maxwell's equations, then there will be a spacelike initial singularity in the past. However, the initial singularity may be avoided by modifying Maxwell's equations in the early stage of the universe. The finite-time future singularities in various modified gravity theories have been investigated in detail~\cite{Bamba:2008ut, Bamba:2008hq, Bamba:2009uf, Bamba:2012vg}. Recently, in Refs.~\cite{Novello, Novello1, Novello2, Lorenci}, it has been shown that the initial singularity can be avoided and a period of late-time cosmic acceleration can be realized when the universe is filled with a magnetic field described by the Lagrangian $L=-\frac{1}{4}F^2+\alpha F^4-\frac{\gamma^8}{F^2}$, where $F^2=F^{\mu\nu}F_{\mu\nu}$ and $\alpha$ and $\gamma$ are constants. At the early stage of the universe, electromagnetic and gravitational fields are very strong, and therefore the non-linear electromagnetic source may be taken into consideration. Non-linear electrodynamics reduces to Maxwell's theory in the weak-field limit, and space-times with non-linear electrodynamics can be applied to strong fields. Hence, non-linear electrodynamics has been used to generate inflation in the early universe~\cite{Salcedo, Camara} and it can also have cosmological contributions as a source of the late-time acceleration of the universe~\cite{Elizalde, Novello4}.
The gravity coupling with non-linear electrodynamics may produce negative pressure and that is the cause of acceleration \cite{Novello5, Vollick, Montiel}. Hence there is no need of dark energy components to explain the late time cosmic accelerated expansion. The isotropic and homogeneous cosmological models coupled with electromagnetic Born-Infeld (BI) field is tested with the standard probes of SNIa, GRBs and direct Hubble parameter \cite{Breton}. The cosmological consequences in the existence of the non-minimal coupling between electromagnetic fields and gravity have been explored~\cite{Bamba:2008ja, Bamba:2008xa}. In addition, the influence of the existence of strong magnetic fields on the propagation of gravitational waves~\cite{Bamba:2018cup} and the relationship between the cosmic accelerated expansion and the cosmic magnetic fields~\cite{MRTB} have been studied. In this paper, we study a model of the non-linear electrodynamics~\cite{Kruglov} coupled with gravitational fields. It is seen that in general relativity, if the source of the gravitational field is the non-linear electromagnetic field, the accelerated expansion of the universe can be realized. We also investigate a pure magnetic universe and show that the accelerated expansion of the universe is driven by the magnetic fields. Moreover, we analyze the stability of the present model explicitly. Furthermore, we explore the energy conditions and future singularities in detail. The organization of the paper is as follows. In Sec.~II, we explain models of non-linear electrodynamics. In Sec.~III, we analyze the cosmological solutions in non-linear electrodynamics. Furthermore, for non-linear electrodynamics, we examine the energy conditions and future singularities. Conclusions are finally provided in Sec.~IV. Throughout the paper, we use the units $c=\hbar=\varepsilon_0=\mu_0=1$. \section{Non-linear electrodynamics models} Let us consider the Lagrangian density of nonlinear electrodynamics \cite{Kruglov} is defined as \begin{equation}\label{1} \mathcal{L}_{em}=-\frac{\mathcal{F}}{2\beta\mathcal{F}+1}, \end{equation} where $\mathcal{F}=\frac{1}{4}F_{\mu\nu}F^{\mu\nu}=\frac{B^2-E^2}{2}$, $F_{\mu\nu}$ is the field strength tensor and $\beta\mathcal{F}$ is a dimensionless quantity. The denominator of the Lagrangian \eqref{1} will not vanish because the strength of the electric field can not reach the value $E_{max}=\frac{1}{\sqrt{\beta}}$. The energy momentum tensor in \cite{Kruglov} is defined as \begin{equation}\label{2} T_{\mu\nu}=-\frac{1}{(2\beta\mathcal{F}+1)^2}[F^{\alpha}_{\mu}F_{\nu\alpha}-g_{\mu\nu}\mathcal{F}(2\beta \mathcal{F}+1)] \end{equation} and it has a nonzero trace. Based on equation \eqref{1}, the scale variance in the nonlinear electrodynamics model is broken and that support for the negative pressure. We can make the average of the electromagnetic fields which are sources in general relativity \cite{Tolman} to have the isotropy of the Friedman-Robertson-Walker (FRW) space-time. Hence, here, we use the average values of the electromagnetic fields as follows: \begin{equation}\label{3} <E>=<B>=0, <E_{i}, B_j>=0, <E_i, E_j>=\frac{1}{3}E^2g_{ij}, <B_i, B_j>=\frac{1}{3}B^2g_{ij} \nonumber \end{equation} Let us consider the Friedman Robertson Walker $(FRW)$ space-time \begin{equation}\label{3} ds^2=-dt^2+a^2(t)\bigg[\frac{dr^2}{1-kr^2}+r^2d\theta^2+r^2\sin^2\theta d\phi^2\bigg], \end{equation} where $k$ is curvature, there are three different values for $k$, such as $-1, 0, 1$. 
The universe is spatially open for $k=-1$, the universe is spatially closed for $k=1$ and the universe is spatially flat for $k=0$. Where the co-ordinate systems $(r, \theta, \phi)$ are co-moving co-ordinates, i. e. an observer at rest in these coordinates remains at rest. The Einstein's Hilbert action of general relativity coupled with the nonlinear electromagnetic field described by the Lagrangian density \eqref{1} is \begin{equation}\label{4} S=\int\frac{\sqrt{-g}R}{2\kappa^2}d^4x+\int\sqrt{-g}\mathcal{L}_{em}d^4x, \end{equation} where $R$ is the Ricci scalar and $\kappa^{-1}=M_{pl}$, $M_{pl}\equiv (8\pi G)^{\frac{-1}{2}}$ is the reduced Planck mass, where $G$ is Newton's constant. Max Planck introduced his famous units of mass, length and time a hundred years ago and constructed exclusively out of the three fundamental constants, $\hbar=\frac{h}{2\pi}$, $c$ and $G$ \cite{Planckm}, where $\hbar$ is the Planck constant introduced by him only in 1900, $c$ is the velocity of light (leading laws of relativity) and $G$ is the Newtonian gravitational constant. The Planck mass arises very frequently in astrophysics, cosmology, quantum gravity, string theory, etc. The Planck mass is defined as $M_{pl}=\left(\frac{\hbar c}{G}\right)^{\frac{1}{2}}=2.2\times 10^{-5}gm$, however in this paper we have considered $M_{pl}\equiv \left( \frac{1}{8\pi G}\right)^{\frac{1}{2}}$. For gravity, the gravitational constant $G$ is inversely proportional to $M_{pl}^2$, i. e. $G\propto \frac{1}{M_{pl}^2}$. Under weak gravity Plank mass becomes very large. At the Plank energy, all quantum gravitational process becomes very strong. Further, if G changes with time or with energy, or the gravity coupling scales logarithmically with energy, then we can no longer define the Planck units with constant values at high energies. So the parameter $M_{pl}=\left(\frac{\hbar c}{G}\right)^{\frac{1}{2}}=2.2\times 10^{-5}gm$ becomes energy dependant. The field equations follow from equation \eqref{4} as \begin{equation}\label{5} R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}=-\kappa^2T_{\mu\nu}, \end{equation} \begin{equation}\label{6} \partial_{\mu}\bigg(\frac{\sqrt{-g}F^{\mu\nu}}{(2\beta\mathcal{F}+1)^2}\bigg)=0. \end{equation} Where $T_{\mu\nu}=(p+\rho)u_{\mu}u_{\nu}+pg_{\mu\nu}$ and the four velocity vector $u^{\mu}$ is defined as $u^{\mu}=(1, 0, 0, 0)$. The energy density $(\rho)$ and the pressure $(p)$ are obtained from \eqref{2} as follows: \begin{equation}\label{7} \rho=\frac{E^2}{(2\beta\mathcal{F}+1)^2}+\frac{\mathcal{F}}{2\beta\mathcal{F}+1} \end{equation} \begin{equation}\label{8} p=\frac{2B^2-E^2}{3(2\beta\mathcal{F}+1)^2}-\frac{\mathcal{F}}{2\beta\mathcal{F}+1} \end{equation} \par The explicit form of the field equations \eqref{5} with the help of space time \eqref{3} having zero curvature (i. e. $k=0$) as follows: \begin{equation}\label{9} 3\frac{\dot{a}^2}{a^2}=\kappa^2 \rho \end{equation} \begin{equation}\label{10} 2\frac{\ddot{a}}{a}+\frac{\dot{a}^2}{a^2}=-\kappa^2 p \end{equation} By performing equation \eqref{9} and \eqref{10}, we get \begin{equation}\label{11} \frac{\ddot{a}}{a}=-\frac{1}{6}\kappa^2(\rho+3p) \end{equation} which is sometimes called the Raychaudhuri equation. We know, acceleration and deceleration of the universe depends on the sign of $\ddot{a}$. Hence, from equation \eqref{11} we can say that, the universe will accelerate for $\rho+3p<0$ and decelerate for $\rho+3p>0$. 
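As a quick symbolic sanity check, the following sympy snippet verifies that combining equations \eqref{9} and \eqref{10} indeed yields the acceleration equation \eqref{11}; it only restates the algebra above.
\begin{verbatim}
# Check that eqs. (9) and (10) combine to a''/a = -(kappa^2/6)(rho + 3 p).
import sympy as sp

t = sp.symbols('t')
kappa = sp.symbols('kappa', positive=True)
a = sp.Function('a', positive=True)(t)

rho = 3*sp.diff(a, t)**2/(kappa**2*a**2)                      # from eq. (9)
p = -(2*sp.diff(a, t, 2)/a + sp.diff(a, t)**2/a**2)/kappa**2  # from eq. (10)

lhs = sp.diff(a, t, 2)/a
rhs = -sp.Rational(1, 6)*kappa**2*(rho + 3*p)
print(sp.simplify(lhs - rhs))   # prints 0
\end{verbatim}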
If the linear equation of state between $p$ and $\rho$ is $p=\omega\rho$ holds, then the $\omega>\frac{-1}{3}$ for deceleration and $\omega<\frac{-1}{3}$ for acceleration. Let us suppose that the field of nonlinear electrodynamics is the main source of gravity. Here we considered, only magnetic field is important in cosmology, so we can have $E=0$. Thus, the electric field is screened because of the charged primordial plasma, but the magnetic field is not screened \cite{Lemoine}. According to cosmological principle, $<B_i>=0$. Hence the magnetic field does not induce the directional effects. Performing equations \eqref{7} and \eqref{8} with $E^2=0$, we get \begin{equation}\label{12} \rho=\frac{B^2}{2(\beta B^2+1)} \end{equation} \begin{equation}\label{13} \rho+p=\frac{2B^2}{3(\beta B^2+1)^2} \end{equation} \begin{equation}\label{14} \rho+3p=\frac{B^2(1-\beta B^2)}{(\beta B^2+1)^2} \end{equation} The equation of continuity is obtained as \begin{equation}\label{15} \dot{\rho}+3H(p+\rho)=0 \end{equation} For accelerating universe, we need $\rho+3p<0$. Hence, from equation \eqref{14} we observe that $1-\beta B^2<0$, i. e. $1<\beta B^2$ is required to explain cosmic acceleration of the universe. This inequality will happen at epoch under strong magnetic fields. Therefore, the inequality $\rho+3p<0$ can be satisfied and drives the accelerated expansion of the universe under nonlinear electrodynamics and the magnetic field scenario. \section{Cosmological solutions} In this section, we would like to investigate the dynamics of the energy density $(\rho)$, pressure $(p)$, electromagnetic field $(B)$, scale factor $(a)$ and deceleration parameter $(q)$ of the model. From the equations \eqref{9}, \eqref{10}, \eqref{12} and \eqref{13}, we can have \begin{equation}\label{16} B^2=\frac{1}{\beta}\frac{1-q}{1+q}, \end{equation} where $q$ is deceleration parameter and defined as follows \begin{equation}\label{17} q=-\frac{a\ddot{a}}{\dot{a}^2} \end{equation} We can certainly see that, right side of the equation \eqref{16} must be positive. Hence $q$ varies from -1 to 1, i. e. $-1<q<1$. There is a singularity in equation \eqref{16} of electromagnetic field for $q=-1$, so $q\ne -1$. Hence, we confirmed from the above equation \eqref{16} that our universe does not follow de Sitter expansion. For $q=1$, we get $B=0$, i. e. our universe does not contain any electromagnetic field. Since observational data suggests that, the positive deceleration parameter indicates the deceleration phase of the universe. Hence we may conclude that without electromagnetic field, the only general theory of relativity is not a capable candidate to explain the current accelerated expansion of the universe, so $q\ne 1$ is acceptable. Therefore the electromagnetic field reveals that the range of the deceleration parameter is $-1<q<0$, which is absolutely compatible with our current observations. Dividing equations \eqref{10} by \eqref{9}, we can get $-\frac{p}{\rho}=\frac{1}{3}+\frac{2}{3}\frac{a\ddot{a}}{\dot{a}^2}$. Since $q=-\frac{a\ddot{a}}{\dot{a}^2}$, so we can write $\frac{p}{\rho}=-\frac{1}{3}+\frac{2}{3}q$. The usual assumption in cosmology is that there is a unique pressure associated with each density, so that $p\propto p(\rho)$. Such a relation is known as the equation of state. The simplest one is $p=\omega \rho$, where $\omega$ is constant and this $\omega$ is called the equation of state parameter. However, in this paper we have considered $\omega$ is time dependent rather than constant i. e. $p=\omega (t)\rho$. 
Hence, we can write the deceleration parameter in terms of the time dependent equation of state parameter as follows \begin{equation}\label{18} q=\frac{1+3\omega(t)}{2}. \end{equation} Let us introduce a non-linear function $\eta=\ln a$. Now, we can write $\frac{\ddot{\eta}}{\dot{\eta}^2}=\frac{-3}{2}(\omega(t)+1)=-(q(t)+1)$, which suggests to define a function $f(t)$, and such $f(t)$ can be defined as $f(t)=\frac{3}{2}(\omega(t)+1)=-q(t)-1$. Now, we can write $\omega(t)=-1+\frac{2}{3}f(t)$ and $q(t)=-1-f(t)$. This transformations help us to reduce the order of the differentia equation. Hence, we can write $f(t)=-\frac{\ddot{\eta}}{\dot{\eta}^2}= \left(\frac{1}{\dot{\eta}}\right)^{.}$, which implies $\dot{\eta}=\left(\int f(t) dt+c_1\right)^{-1}$. Now, we can write scale factor, energy density and pressure in quadrature form. \begin{equation}\label{19} a(t)=a_0\exp\left(\int_{t_0}^{t}\left(\int f(t)dt+c_1\right)^{-1}dt\right), \end{equation} \begin{equation}\label{20} \rho=\frac{3}{\kappa^2}\left(\int_{t_0}^{t}f(t)dt+c_1\right)^{-2} \end{equation} and \begin{equation}\label{21} p=\frac{1}{\kappa^2}\frac{3(f(t)-1)}{\left(\int_{t_0}^{t}f(t)dt+c_1\right)^2}. \end{equation} From the equations \eqref{13} and \eqref{15}, one can have the relation between the electromagnetic field $(B(t))$ and the scale factor $(a(t))$ as \begin{equation}\label{22} B\propto \frac{1}{a^2}. \end{equation} This shows that, the evolution of the magnetic field follows inverse square law of the scale factor $(a)$. The dynamics of the model depends on the behavior of $f(t)$. Let us assume that $f(t)$ follows Puiseux series expansion of time around $t=0$. \begin{center} $f(t)=f_0t^{\gamma_0}+f_1t^{\gamma_1}+f_2t^{\gamma_2}+\cdots + f_nt^{\gamma_n}+\cdots , ~~~~\gamma_0<\gamma_1<\cdots$. \end{center} Now, we try to get the explicit form of the scale factor $(a)$, the energy density $(\rho)$ and the pressure $(p)$ at lowest order in cosmic time $t$, \begin{equation}\label{23} \eta (t)=\begin{cases} -\frac{\gamma_0+1}{\gamma_0f_0}t^{-\gamma_0}+\cdots, & \mbox{if } \gamma_0\ne -1, 0 \\ -\frac{t}{f_0}-\frac{f_0+f_1}{2f_0^2}t^2+\cdots, & \mbox{if } \gamma_0=-1, |t|\le 2 \\ \frac{\ln t}{f_0}-\frac{f_1}{2g_0^2}t+\cdots, & \mbox{if } \gamma_0=0 \end{cases} \end{equation} \begin{equation}\label{24} a(t)=\begin{cases} \exp\left(-\frac{\gamma_0+1}{\gamma_0f_0}t^{-\gamma_0}+\cdots \right), & \mbox{if } \gamma_0\ne -1, 0 \\ \exp\left(-\frac{t}{f_0}-\frac{f_0+f_1}{2f_0^2}t^2+\cdots \right), & \mbox{if } \gamma_0=-1, |t|\le 2 \\ \exp\left(\frac{\ln t}{f_0}-\frac{f_1}{2g_0^2}t+\cdots \right), & \mbox{if } \gamma_0=0 \end{cases} \end{equation} \begin{equation}\label{25} \rho(t)=\begin{cases} 3\left(\frac{\gamma_0+1}{f_0}\right)^2t^{-2(\gamma_0+1)}+\cdots, & \mbox{if } \gamma_0\ne -1, 0 \\ \frac{3}{f_0^2 (\ln t)^2}+\cdots, & \mbox{if } \gamma_0=-1 \\ \frac{3t^{-2}}{f_0^2}+\cdots, & \mbox{if } \gamma_0=0 \end{cases} \end{equation} \begin{equation}\label{26} p(t)=\begin{cases} \frac{3}{f_0 t (\ln t)^2}+\cdots , & \mbox{if } \gamma_0=-1 \\ \frac{3(f_0-1)}{f_0^2}t^{-2}+\cdots, & \mbox{if } \gamma_0=0 \\ \frac{3(\gamma_0+1)^2}{f_0}t^{-(\gamma_0+2)}+\cdots , & \mbox{if } \gamma_0\ne -1, \gamma_0<0 \\ -3\left(\frac{\gamma_0+1}{f_0}\right)^2t^{-2(\gamma_0+1)}+\cdots, & \mbox{if } \gamma_0>0. \end{cases} \end{equation} The following observations are made from the above expressions of the scale factor, energy density and pressure of the nonlinear electromagnetic model. 
\subsection{Stability of the model} The particles of the universe has been classified into three classes, mainly, sub-luminal, luminal and supper-luminal. The particles move with slow speed compare to the speed of light are called sub-luminal particles, for example electrons and neutrons. The particles move with exactly same as the speed of light are called luminal particles, for example photon and graviton. The particles move with faster than the speed of light are called super-luminal particles or tachyons. There are two possibilities for the existence of supper-luminal particles: either they do not exist or, if they do, then they do not interact with an ordinary matter. If the speed of the sound is less than the local light speed, $c_s\le 1$, then only we can say causality may not be violated. The positive square sound speed $(c_s^2>0)$ is necessary for the classical stability of the universe. The speed of sound is defined as $\frac{dp}{d\rho}=c_s^2$ \cite{Ellis}. From the equations \eqref{7} and \eqref{8}, we obtain the speed of sound (with $E=0$) \begin{equation}\label{27} \frac{dp}{d\rho}=c_s^2=-\frac{11\beta B^2+3}{3(\beta B^2+1)} \end{equation} From the equation \eqref{27}, one can verify that our universe follows the classical stability condition for the scale factor $a>\left(\frac{14\beta}{6}\right)^{\frac{1}{4}}\sqrt{B_0}$, however the universe follows instability condition i. e. the universe may contains some abnormal (something not normal) matters (may be tachyons like matters) if the scale factor $a<\left(\frac{14\beta}{6}\right)^{\frac{1}{4}}\sqrt{B_0}$. Hence we may assume that the universe was filled up with some abnormal matters or tachyons in early epoch i. e. from the big bang to inflationary stage and as a result it causes inflation. The following observations are made for the sound speed and causality of the model based on the evolution of the energy density and pressure given in equations \eqref{25} and \eqref{26}. \begin{itemize} \item For $\gamma_0=-1$, the sound speed is obtained as $c_s^2=\frac{dp}{d\rho}=\frac{\ln t}{2t}+\frac{1}{t}$. The model satisfies the stability conditions for $e^{-1}<t<e^{(t-1)}$ and $t>1$. The model is stable and therefore causality cannot be violated during this period. For the existence of abnormal matters the $c_s^2$ must be greater than one i. e. $(c_s^2>1)$, which implies that $t>e^{2(t-1)}$, however this is not feasible. Hence we could not get any theoretical evidence for the existence of abnormal matters in the present universe. Moreover, there may be a chance for the existence of the abnormal matters before inflation. \item For $\gamma_0=0$, the sound speed is obtained as $c_s^2=\frac{dp}{d\rho}=f_0-1$. The model satisfies stability condition for $1<f_0<2$. For $f_0=2$, indicates the existence of ordinary matters and for $f_0>2$, indicates the existence of tachyons in the universe respectively. \item For $\gamma_0 \ne 0, -1$ and $\gamma_0<0$, the sound speed is obtained as $c_s^2=\frac{dp}{d\rho}=\frac{(\gamma_0+2)t^{\gamma_0}}{2(\gamma_0+1)}$. $t<\left(\frac{2(\gamma_0+1)}{\gamma_0+2}\right)^{\frac{1}{\gamma_0}}$ indicates for the existence of ordinary matters (i.e. $c_s^2<1$), $t>\left(\frac{2(\gamma_0+1)}{\gamma_0+2}\right)^{\frac{1}{\gamma_0}}$ indicates for the existence of abnormal matters (i.e. $c_s^2>1$), $t=\left(\frac{2(\gamma_0+1)}{\gamma_0+2}\right)^{\frac{1}{\gamma_0}}$ indicates for the existence of ordinary matters (i.e. $c_s^2=1$) in the universe respectively. 
The condition $c_s^2>0$ is satisfied for $\gamma_0 \in (-\infty, -2)\cup (-1, 0)$. The model maintains causality only for a very short time period, i.e. $t\in\bigg[0, \left(\frac{2(\gamma_0+1)}{\gamma_0+2}\right)^{\frac{1}{\gamma_0}}\bigg]$, whereas it violates causality for a very long time period, i.e. $t\in \bigg[\left(\frac{2(\gamma_0+1)}{\gamma_0+2}\right)^{\frac{1}{\gamma_0}}, \infty\bigg]$. Hence, in this case either the model is not physically realistic or it indicates the existence of abnormal matter in the present and future universe. The presence of abnormal matter in the universe causes the accelerated expansion of the universe. \item For $\gamma_0 \ne 0, -1$ and $\gamma_0>0$, the sound speed is obtained as $c_s^2=\frac{dp}{d\rho}=-1$, which is either not acceptable or indicates the existence of abnormal matter in the universe. \end{itemize} \subsection{Energy conditions} It is sensible to expect that the stress-energy tensor should satisfy certain conditions, such as positivity of the energy density and dominance of the energy density over the pressure. In general relativity the energy conditions are divided into the following four parts \cite{Hawking}: \begin{itemize} \item [I] Weak Energy Condition (WEC). \item [II] Null Energy Condition (NEC). \item [III] Strong Energy Condition (SEC). \item [IV] Dominant Energy Condition (DEC). \end{itemize} The following observations are made, based on the evolution of the energy density and pressure from the equations \eqref{25} and \eqref{26}. \begin{itemize} \item \textbf{Weak Energy Condition:} The weak energy condition states that the energy density of any kind of matter distribution, as measured by any observer in space-time, is non-negative. For any time-like vector $u^{\mu}$ at each point $p\in \mathbb{M}$ (where $p$ is the point and $\mathbb{M}$ is the four-dimensional manifold), the energy momentum tensor satisfies $T_{\mu\nu}u^{\mu}u^{\nu}\ge 0$. Assuming isotropic pressure, this implies $\rho\ge 0, ~~ \rho+p\ge 0$. From the equations \eqref{25} and \eqref{26} we can see that at any point $\rho\ge 0$ for all observers. Similarly, at any point $p+\rho\ge 0$ for all observers. Hence, the model satisfies the weak energy condition. \item \textbf{Null Energy Condition:} The statement of the null energy condition is the same as the weak form, except that $u^{\mu}$ is replaced by an arbitrary, future-directed null vector $k^{\mu}$. Hence $T_{\mu\nu}k^{\mu}k^{\nu}\ge 0$ is the null energy condition inequality. Assuming isotropic pressure, the $NEC$ implies $p+\rho\ge 0$. Notice that the $WEC$ implies the $NEC$. Therefore, the model also satisfies the $NEC$. \item \textbf{Strong Energy Condition:} The $SEC$ is defined as $\left(T_{\mu\nu}-\frac{1}{2}Tg_{\mu\nu}\right)u^{\mu}u^{\nu}\ge 0$, or $T_{\mu\nu}u^{\mu}u^{\nu}\ge -\frac{1}{2}T$, where $u^{\mu}$ is any future-directed, normalized, time-like vector. Assuming isotropic pressure, the explicit form of the $SEC$ is as follows: $\rho+3p\ge 0, ~~\rho+p\ge 0$. It is noted that the $SEC$ does not imply the $WEC$ and $NEC$. From equations \eqref{25} and \eqref{26}, we have \begin{equation}\label{28} \rho+3p=-6\left(\frac{\gamma_0+1}{f_0}\right)^2t^{-2(\gamma_0+1)}, ~\mbox{ for } \gamma_0>0. \end{equation} From equation \eqref{28}, it is observed that the $SEC$ is not satisfied for $\gamma_0>0$. The violation of the $SEC$ indicates that the universe contains some abnormal matter (possibly super-luminal particles, for example tachyons).
Also, we observed that the model does not satisfy the sound speed and causality conditions. \item \textbf{Dominant Energy Condition:} The $DEC$ states that matter should flow along time-like or null world lines. Precisely, if $u^{\mu}$ is an arbitrary, future-directed, time-like vector field, then $-T_{\mu}^{\nu}u^{\mu}$ is a future-directed, time-like or null, vector field. The matter's momentum density measured by any observer with four-velocity $u^{\mu}$ is $-T_{\mu}^{\nu}u^{\mu}$, and this is required to be time-like or null. The explicit form of the $DEC$ is $\rho\ge 0, ~~ \rho\ge |p|$. From the equations \eqref{25} and \eqref{26}, it is found that \begin{equation}\label{29} \rho=\frac{3}{f_0^2(\ln t)^2} \mbox{ for } ~~\gamma_0=-1 \end{equation} \begin{equation}\label{30} p=\frac{3}{f_0 t(\ln t)^2} \mbox{ for }~~ \gamma_0=-1 \end{equation} From equations \eqref{29} and \eqref{30}, it is observed that $\rho\ge |p|$ for $t>1$, whereas $\rho\le |p|$ for $0<t<1$. So the $DEC$ is not satisfied for $t<1$, whereas it is satisfied for $t>1$. Violation of the $DEC$ is normally associated with either a large negative cosmological constant or super-luminal particles. Violation of the $SEC$ but not of the $WEC$, $NEC$ and $DEC$ is associated with either a positive cosmological constant or an inflationary epoch. Violation of the $DEC$ and $WEC$ is associated with a negative cosmological constant. \end{itemize} \subsection{Singularities} The universe is made up of a large number of different species of matter fields. In reality, it is complicated to describe the exact energy momentum tensor even if one knows the precise form of each matter component and the equation of motion governing it. In fact, we do not have much idea about the behavior of matter under extreme conditions of density and pressure over time. Hence, one may expect the occurrence of singularities in the universe in general relativity. Therefore, in this section, we classify the finite-time singularities in the following way: \begin{itemize} \item [I.] \textbf{Type I Singularity:} If the equation of state is less than $-1$ in the context of general relativity, then the universe reaches a singularity at finite time and the null energy condition $p+\rho\ge 0$ is violated; this type of singularity is called the big-rip singularity \cite{Nojiri1}. In a type I singularity, the scale factor, the energy density and the absolute pressure of the universe blow up at some finite time, i.e. $a\to \infty$, $\rho\to \infty$, $|p|\to\infty$ as $t \to t_{f}$. \item [II.] \textbf{Type II Singularity:} In type II, only the absolute value of the isotropic pressure blows up, while the scale factor and the energy density of the universe remain finite at the finite time $t_f$, i.e. $a\to a_f$, $\rho\to \rho_f$ and $|p|\to \infty$ as $t\to t_f$. \item [III.] \textbf{Type III Singularity:} In type III, the energy density and the isotropic pressure of the universe blow up, while the scale factor remains finite at the finite time $t_f$, i.e. $\rho\to\infty$, $|p|\to\infty$ and $a\to a_f$ as $t\to t_f$. \item [IV] \textbf{Type IV Singularity:} In type IV, the energy density and the isotropic pressure converge to zero and the scale factor remains finite at finite time, i.e. $\rho\to 0$, $|p|\to 0$ and $a\to a_f$ as $t\to t_f$. \end{itemize} Here $t_f$, $a_f\ne 0$ and $\rho_f$ are constants. Subsequently, several authors discussed different types of future singularities at finite time in which the null energy condition is not violated \cite{Barrow, Stefancic, Brevik, Dabrowski, Bouhmadi}.
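For orientation, the four cases above can be summarized by the limiting behavior of $(a,\rho,|p|)$ as $t\to t_f$. The following short Python sketch (purely illustrative; the function name and string labels are ours and are not part of the original classification) encodes this mapping.
\begin{verbatim}
def classify_singularity(a_limit, rho_limit, p_limit):
    """Classify a finite-time singularity from the limits of the scale
    factor a, the energy density rho and the absolute pressure |p| as
    t -> t_f. Each argument is "diverges", "finite" or "zero"."""
    if (a_limit, rho_limit, p_limit) == ("diverges", "diverges", "diverges"):
        return "Type I (Big Rip)"
    if (a_limit, rho_limit, p_limit) == ("finite", "finite", "diverges"):
        return "Type II (sudden)"
    if (a_limit, rho_limit, p_limit) == ("finite", "diverges", "diverges"):
        return "Type III"
    if (a_limit, rho_limit, p_limit) == ("finite", "zero", "zero"):
        return "Type IV"
    return "outside the Type I-IV scheme"

# Example: for gamma_0 = -1 (see the case analysis below) rho and |p| diverge
# at t = 0 while the scale factor stays at a non-zero constant, i.e. Type III.
print(classify_singularity("finite", "diverges", "diverges"))
\end{verbatim}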
Recently, Fernandez-Jambrina and Lazkoz \cite{Lazkoz} studied detailed classifications of singularities and the future behavior of the universe in FLRW cosmological models by assuming a nonlinear equation of state $(p=f(\rho))$. Based on the evolution of the scale factor, energy density and pressure given in equations \eqref{24}, \eqref{25} and \eqref{26}, we make the following observations: \begin{itemize} \item \textbf{For $\gamma_0>0$:} both $p$ and $\rho$ diverge at $t=0$, i.e. $|p|\to \infty$ and $\rho\to\infty$, because the equations \eqref{25} and \eqref{26} contain the $t^{-2(\gamma_0+1)}$ term. The scale factor $a$ diverges, i.e. $a\to\infty$, provided $f_0<0$, and $a\to 0$ provided $f_0>0$, at $t=0$. The equation of state parameter is defined as $\omega=-1+\frac{2}{3}f(t)$, which tends to $-1$ at $t=0$. This indicates that the model contains a type III singularity for $f_0>0$, whereas the singularity for $f_0<0$ has not been considered in previous frameworks. \item \textbf{For $\gamma_0=0$:} both $p$ and $\rho$ diverge at $t=0$, i.e. $|p|\to \infty$ and $\rho\to\infty$, because the equations \eqref{25} and \eqref{26} contain the $t^{-2}$ term. The scale factor $a$ diverges, i.e. $a\to\infty$, provided $f_0<0$, and $a\to 0$ provided $f_0>0$, at $t=0$. We have defined the equation of state parameter in this work as $\omega=-1+\frac{2}{3}f_0$, which satisfies $\omega<-1$ for $f_0<0$ and $\omega>-1$ for $f_0>0$. Hence our model contains a type I or Big Rip singularity for $f_0<0$, which occurs for a phantom-like equation of state, i.e. $\omega<-1$. However, for $f_0>0$ the scale factor becomes zero and the universe collapses; this is called a Big Crunch singularity. The Big Crunch is one imaginable scenario for the ultimate fate of the universe, in which the expansion of space, i.e. the scale factor, gradually decreases to zero and the universe re-collapses. \item \textbf{For $\gamma_0\in (-1, 0)$:} The isotropic pressure $(p)$, the energy density $(\rho)$ and the equation of state parameter $(\omega)$ diverge at $t=0$, while the scale factor $(a)$ vanishes. These are called type III or Big Freeze (finite scale factor) singularities. \item \textbf{For $\gamma_0=-1$:} The isotropic pressure $(p)$ and the energy density $(\rho)$ diverge at $t=0$, whereas the scale factor $(a)$ is a non-zero constant. These are type III singularities. \item \textbf{For $\gamma_0\in (-2, -1)$:} The energy density $(\rho)$ vanishes and the scale factor $(a)$ is finite, whereas the isotropic pressure $(p)$ diverges to $\infty$ at $t=0$. These are type II singularities. \item \textbf{For $\gamma_0=-2$:} The energy density $(\rho)$ vanishes at $t=0$, the isotropic pressure $(p)$ and the scale factor $(a)$ are finite, whereas the equation of state parameter $(\omega)$ diverges. These are generalized sudden singularities. \item \textbf{For $\gamma_0<-2$:} In this case both the pressure $(p)$ and the energy density $(\rho)$ vanish at $t=0$, the scale factor $(a)$ becomes constant, whereas the equation of state parameter $(\omega)$ diverges. These are called type IV or w-type singularities. \end{itemize} \section{Conclusions} The scale of the magnetic fields generated by the introduction of nonlinear electrodynamics is much smaller than that of the cosmic magnetic fields in galaxies, whose scale is about 10 kpc.
Such strong magnetic fields originating from the nonlinearity of electrodynamics can cascade along with the cosmic expansion, so that the magnetic strength becomes weaker and the scale of the magnetic fields becomes larger. Since the origin of the cosmic magnetic fields such as galactic magnetic fields is not yet understood very well (although there are a number of proposals), the nonlinearity of the electrodynamics can be regarded as a possible mechanism to produce the cosmic magnetic fields which are observed in various astronomical objects like galaxies and the clusters of galaxies at the present time. In the present paper, we have explored non-linear electrodynamics coupled to gravitational fields. It has been found that, within general relativity, when the source of the gravitational field is the non-linear electromagnetic field, the cosmic expansion is accelerating. In a pure magnetic universe, it has also been shown that the accelerated expansion of the universe is driven by the magnetic fields. Furthermore, we have performed a stability analysis of the present model. In addition, we have studied the energy conditions and future singularities in detail. From Eq.~\eqref{11}, it is confirmed that $\rho+3p$ must be less than zero to explain the accelerated expansion of the universe. Subsequently, Eq.~\eqref{14} specifies that $\beta B^2>1$ is essential to account for the cosmic accelerated expansion. This inequality holds for sufficiently strong magnetic fields. Therefore, the inequality $\rho+3p<0$ can be satisfied and drives the accelerated expansion of the universe within the framework of nonlinear electrodynamics. From Eq.~\eqref{16}, it is shown that the deceleration parameter $q$ cannot be $-1$ (i.e. $q\ne -1$) and hence our universe does not follow the de Sitter expansion. However, it is observed that $q=1$ gives $B=0$, i.e. a universe that does not contain any electromagnetic field. A positive deceleration parameter indicates a decelerating phase of the universe. Therefore, it may be concluded that general relativity without the electromagnetic field is not a capable candidate to explain the current accelerated expansion of the universe, so $q\ne 1$ is acceptable. Thus, the electromagnetic field reveals that the range of the deceleration parameter is $-1<q<0$, which is fully compatible with current observations. From Eqs.~\eqref{25} and \eqref{26}, it is observed that the equation of state parameter $\omega= \frac{f_0}{t}$ is less than zero for $\gamma_0=-1$ and $f_0<0$. Since $f_0$ is an arbitrary constant, without loss of generality we can assume $f_0=\frac{-t_0}{3}$, which implies $\omega=\frac{-t_0}{3t}$. The behavior of $\omega$ for different ranges of $t$ is given in the table below: \begin{center} \begin{tabular}{|c|c|c|} \hline $\omega=\frac{p}{\rho}=\frac{-t_0}{3t}$ & Range of `t' & Evolution of the universe \\ \hline $\omega\to-\infty$ & $t\to 0$ & Inflationary era \\ \hline $\omega<-1$ & $0<t<\frac{t_0}{3}$ & Universe expanding in accelerating way \\ \hline $\omega=-1$ & $t=\frac{t_0}{3}$ & Dominated by cosmological constant \\ \hline $-1<\omega<-\frac{1}{3}$ & $\frac{t_0}{3}<t<t_0$ & Phantom era \\ \hline $\omega>\frac{-1}{3}$ & $t>t_0$ & Expanding \\ \hline $\omega\to 0$ & $t\to\infty$ & Dust universe \\ \hline \end{tabular} \end{center} From the above table, we can see that the EoS parameter $\omega$ tends to negative infinity at the beginning of the universe.
This indicates that the universe may have been full of abnormal matter, which perhaps caused the initial inflation of the universe. Afterwards, the EoS parameter gradually increases and tends to negative one, i.e. $\omega\to -1$, at a certain time; during this period $(0<t<\frac{t_0}{3})$ the universe experienced an accelerated expansion. During the period $\frac{t_0}{3}<t<t_0$, the EoS parameter lies in the range $-1<\omega<\frac{-1}{3}$, which indicates the phantom phase of the universe. Subsequently, the EoS parameter tends to zero at infinite time, which indicates that the universe ends up with dust. From Eqs.~\eqref{25} and \eqref{26}, we can also find $\omega=\begin{cases} f_0-1, & \mbox{if } \gamma_0=0 \\ f_0t^{\gamma_0}, & \mbox{if } \gamma_0<0 \\ -1, & \mbox{if } \gamma_0>0. \end{cases}$ Hence, it is observed that for $\gamma_0=0$, the equation of state parameter $\omega$ is less than $-1$ if $f_0<0$, and $\omega\to -1$ if $f_0\to 0$, throughout the evolution of the universe. For $\gamma_0<0$, $\omega=f_0t^{\gamma_0}$, which is less than zero if $f_0<0$, and the other behavior of $\omega$ is the same as that given in the above table. However, $\omega=-1$ for $\gamma_0>0$ throughout the evolution of the universe, which confirms that the expansion of the universe is accelerating. As a result, it has been seen that the accelerated expansion of the universe can be explained through non-linear electrodynamics coupled to gravity. The stability analysis of the model has also been carried out. It has been revealed that there is no theoretical evidence for the existence of super-luminal particles in the present universe. Furthermore, it has been argued that there may be a possibility for the existence of super-luminal particles before inflation.\\ \textbf{Acknowledgement:} The first author G. C. Samanta is extremely thankful to the Council of Scientific and Industrial Research (CSIR), Govt. of India, for providing financial support \textbf{(Ref. No. 25(0260)/17/EMR-II)} for carrying out the research work. Furthermore, the work of KB was partially supported by the JSPS KAKENHI Grant Number JP25800136 and Competitive Research Funds for Fukushima University Faculty (18RI009).
\section{Introduction} \label{sec:Intro} Studies of globular cluster (GC) systems in galaxies over a vast mass range have revealed surprisingly simple scaling relations. Two of the most accessible properties of globular clusters are their metallicities and their masses. The metallicity is often derived from the colour, via an empirically calibrated transformation, while the masses are derived from the luminosity and colour-dependent mass-to-light ratios. In particular, observations have revealed that the total mass of the GC system is a near-constant fraction of the host halo mass \citep[][]{spitler_forbes09, hudson_etal_2014, harris_etal_2015}. The mean and dispersion in metallicity of the GC system gradually increases with the host halo mass \citep{peng_etal06}. Observations also suggest that the metallicity of the most massive ($M \gtrsim 2\times 10^5\,M_{\odot}$) metal-poor clusters scales weakly with cluster mass \citep[e.g.,][]{strader_etal09, mieske_etal_2006, cockcroft_etal_2009, harris09a, mieske_etal_2010}. This trend is inferred from the scaling of cluster colour with apparent magnitude, and is therefore often referred to as the ``blue-tilt''. No corresponding trend has been observed for metal-rich clusters. In \citet[][hereafter CGL18]{choksi_etal_2018}, we presented an analytic model for the formation of GC systems that matches these trends, based on the earlier models of \cite{muratov_gnedin10} and \cite{li_gnedin14}. The model forms GCs in periods of rapid accretion onto the host dark matter halo. GCs are drawn from a cluster initial mass function (CIMF), and the properties of each cluster are set based on the properties of the host galaxy, which are in turn set using empirically motivated galactic scaling relations. This model successfully reproduces a wide variety of the observed properties of GC systems, including the combined GC mass-halo mass relation, the scaling of the mean metallicity of the GC system, the blue-tilt, and the age-metallicity relation. Following previous versions of our model, the CGL18 model adopted a power-law (PL) CIMF with an index $\beta = 2$. This simple functional form was motivated by observations of the CIMF of young massive clusters in nearby star-forming galaxies \citep{zhang_fall99, lada_lada03, portegies_zwart_etal10}. These young clusters have masses and sizes consistent with the properties of objects which could evolve into GCs after a few Gyr of dynamical and stellar evolution. However, detailed modeling of the CIMF of young clusters reveals deviations from pure power-law (PL) behaviour at high masses \citep{gieles_etal_2006a, larsen2009, bastian_2008, adamo_etal_2015, johnson_etal_2017, messa_etal_2018}. Thus, the entire CIMF of young clusters is best described by a Schechter function, $dN/dM \propto M^{-\beta} e^{-M/M_{\mathrm{c}}}$, where $M_{\mathrm{c}}$ is the characteristic truncation mass \citep{schechter_1976}. Another line of evidence supporting the exponential truncation comes from the \textit{present-day} mass function of old GCs, which shows a roughly log-normal distribution with a near-universal peak at $2 \times 10^5\,M_{\odot}$. Several authors \citep[e.g.,][]{jordan_etal_2007} have shown that this mass function is also well described by an ``evolved" Schechter function, of the form \begin{equation} dN/dM \propto (M + \Delta M)^{-\beta} \exp\left(-\frac{M+\Delta M}{M_{\mathrm{c}}}\right), \label{eq:evolved_mf} \end{equation} where $\Delta M$ is the average mass lost by GCs between formation and $z=0$. 
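For illustration only (a minimal numerical sketch, not code from CGL18 or \citet{jordan_etal_2007}; the function name and the default parameter values are ours, chosen to be representative of values discussed later in the text), the evolved Schechter function of \autoref{eq:evolved_mf} can be evaluated as follows.
\begin{verbatim}
import numpy as np

def evolved_schechter(M, beta=2.0, M_c=10**6.5, delta_M=10**5.7):
    """Evolved Schechter mass function dN/dM (arbitrary normalization):
    (M + delta_M)**(-beta) * exp(-(M + delta_M) / M_c)."""
    M = np.asarray(M, dtype=float)
    return (M + delta_M) ** (-beta) * np.exp(-(M + delta_M) / M_c)

# Example: with these illustrative parameters the distribution per dex,
# M * dN/dM, peaks at a few times 1e5 Msun, in the ballpark of the
# observed near-universal peak of the old GC mass function.
masses = np.logspace(4.0, 7.0, 200)
per_dex = masses * evolved_schechter(masses)
print("%.2e" % masses[per_dex.argmax()])
\end{verbatim}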
\rev{Many works have investigated the physical origin of the maximum mass scales of star forming clumps and stellar clusters \citep[e.g.,][]{dekel_etal_2009, kruijssen_2014, adamo_etal_2015}. These studies have generally suggested that the maximum mass is set by the Toomre mass, corresponding to the maximum mass of a gravitationally unstable clump in a rotationally supported disc \citep{toomre_1964}. Recently, \cite{reina-campos_kruijssen_2017} further noted that feedback from young stars plays an important role in setting the maximum cluster mass by disrupting the cluster before the collapse of a Toomre-unstable region is complete.} \rev{\cite{li_etal_sim1, li_etal_sim2} performed high-resolution cosmological simulations in which all star formation is implemented as occurring in star clusters of various masses. Clusters grow via accretion from the local interstellar medium and their growth is terminated self-consistently by their own feedback. The Schechter-like CIMF with power-law slope of $\beta \approx 2$ and an $M_{\mathrm{c}}$ that scales with the star formation rate is robustly produced in these simulations. They further show that the maximum cluster mass is sensitive to the star formation efficiency per free fall time $\epsilon_{\rm ff}$, which in turn indirectly sets the strength of stellar feedback in the cluster. \cite{meng_etal_2019} showed that the Toomre analysis can surprisingly accurately predict the unstable regions of the interstellar medium in these simulations, despite the presence of strong turbulent flows. However, they find that the Toomre mass is very large, typically above $10^9\,M_{\odot}$ at high redshift $z>1.5$, which may be too high to influence the maximum mass of individual clusters formed in these simulations.} In this work, we update our model to include the exponential truncation of the CIMF and assess the impact of this modification for predictions of GC scaling relations. We begin in \autoref{sec:methodology} with a brief overview of the model. Then we describe our method for incorporating a Schechter function CIMF in \autoref{sec:sampling}. In \autoref{sec:results} we discuss how this change affects the overall agreement between observed and model mass and metallicity distributions. In \autoref{sec:scalings} we present the model predictions for scaling relations of GC system properties using the modified CIMF. In \autoref{sec:bluetilt} we show how the modified CIMF affects the strength of the blue-tilt that arises naturally in our model and then we investigate the dependence of the blue-tilt on galaxy assembly histories. \autoref{sec:discussion} discusses the implications of our results and \autoref{sec:summary} summarizes our main conclusions. \section{Methodology} \label{sec:methodology} Our model predicts GC formation and disruption across the whole of cosmic history. Below we list all the equations required to calculate it, and introduce the cutoff of the CIMF. More details and justification for the choice of equations are provided in CGL18. The two adjustable model parameters\footnote{While the current form of our model has only two adjustable parameters, $p_2$ and $p_3$, we preserve the notation for these parameters for consistency with the past published iterations of our model.} ($p_2,p_3$) are fixed using the comparison with a wide sample of observed GC systems.
\subsection{Summary of cluster formation model} Cluster formation is triggered when the accretion rate onto a dark matter halo between two consecutive outputs of our adopted dark matter simulation exceeds an adjustable threshold value $p_3$. For a halo of mass $M_{\rm h,2}$ at time $t_2$, and its progenitor of mass $M_{\rm h,1}$ at time $t_1$, we compute the merger ratio, $R_m$, as: \begin{equation} R_{\rm m} \equiv \frac{M_{\rm h,2} - M_{\rm h,1}}{t_2 - t_1} \frac{1}{M_{\rm h,1}}, \label{eqn:rm} \end{equation} and trigger cluster formation at time $t_2$ if $R_{\rm m} > p_3$. \rev{In this work, as in CGL18, we use the properties of dark matter halos from the collisionless run of the \textit{Illustris} cosmological simulation \citep{vogelsberger_etal_2014, nelson_etal_2015}. We note that in \cite{li_gnedin14}, our cluster formation model was applied to halo merger trees extracted from the \textit{Millennium-II} collisionless simulation, and that we have also tested the model on the EAGLE simulation. In all cases, the results are not sensitive to the adopted simulation.} Once cluster formation is triggered, we form a population of clusters characterized by total mass $M_{\mathrm{tot}}$\footnote{This notation differs from that used in CGL18, in which we referred to it as $M_{\mathrm{GC}}$. To avoid confusion with other quantities, we switch to the label $M_{\mathrm{tot}}$ in this work, reserving $M_{\mathrm{GC}}$ for the total mass in GCs at $z=0$ in a galaxy.}, based on the hydrodynamic cosmological simulations of \cite{kravtsov_gnedin05}: \begin{equation} M_{\mathrm{tot}} = 1.8\times 10^{-4}\, p_2 \, M_g, \label{eqn:mgc} \end{equation} where $p_2$ is the second adjustable model parameter and $M_g$ is the cold gas mass in the host galaxy. \rev{The purpose of the parameter $p_2$ is to normalize the formation rate of a cluster population in a given episode. It absorbs many factors relevant to cluster formation: the fraction of cold gas in the star-forming phase at that epoch, the efficiency of conversion of that gas into stars, the fraction of new stars in clusters above our adopted minimum mass of $10^5 M_{\odot}$, and the variation of all these factors with the galactic environment. We do not attempt to model all these factors in detail, and instead treat $p_2$ as an adjustable parameter. Typical values of $p_2 \sim 10$ (presented in \autoref{tab:models}) imply that, over a given merger event, $\sim 2 \times 10^{-3}$ of a galaxy's cold gas mass is converted into GCs. This fraction is broadly consistent with numerical simulations of galaxy and cluster formation by \cite{li_etal_sim1}. In these simulations the ratio of the total bound mass of clusters with $M > 10^{5} M_{\odot}$ formed within intervals of 100 Myr to the galaxy gas mass varies by an order of magnitude, but the average value for the epochs satisfying our criterion~(\ref{eqn:rm}) is $M_{\mathrm{tot}}/M_{g} \sim (1-2) \times 10^{-3}$. It is therefore reasonable to fit the average normalization of the cluster formation rate within this range, and even a wider range given the expected variation of the rate over time.} The cold gas fraction is parameterized as a \rev{function of the stellar mass $M_{\star}$ and redshift $z$} as: \begin{equation} \eta(M_{\star},z) = 0.35 \times 3^{2.7} \, \left(\frac{M_{\star}}{10^9\,M_{\odot}}\right)^{-n_m(M_{\star})} \ \left(\frac{1+z}{3}\right)^{n_z(z)}.
\label{eqn:fg} \end{equation} The redshift and stellar mass scalings, $n_z$ and $n_m$ respectively, are given by: \begin{align} n_z &= 1.4 \;\mathrm{for}\; z > 2, \;\mathrm{and}\; n_z = 2.7 \;\mathrm{for}\; z < 2, \nonumber \\ n_m &= 0.33 \;\mathrm{for}\; M_{\star} > 10^{9}\,M_{\odot}, \;\mathrm{and}\; n_m = 0.19 \;\mathrm{for}\; M_{\star} < 10^{9}\,M_{\odot}. \nonumber \end{align} The stellar mass is increased self-consistently using a modified version of the stellar mass-halo mass relation derived from the abundance matching results of \cite{behroozi_etal_2013_main}. We then draw individual clusters from the cluster initial mass function, as described in detail in the following section. Each cluster is assigned the average metallicity of its host galaxy at formation, which is set by an observed galaxy stellar mass-metallicity relation: \begin{equation} \mathrm{[Fe/H]} = \log_{10}\left[\left(\frac{M_{\star}}{10^{10.5}\,M_{\odot}}\right)^{0.35} (1+z)^{-0.9}\right]. \label{eq:mmr} \end{equation} \subsection{Monte Carlo sampling of the Schechter function} \label{sec:sampling} For a given formation event, with a combined mass $M_{\mathrm{tot}}$ to be distributed into individual clusters, we draw clusters from a mass function of the form: \begin{equation} \frac{dN}{dM} = M_0 M^{-\beta} e^{-M/M_{\mathrm{c}}}, \label{eqn:cimf} \end{equation} where $\beta$ is the index of the power-law, $M_0$ is an overall normalization factor, and $M_{\mathrm{c}}$ is the truncation mass. As in CGL18, we adopt a constant slope $\beta = 2$. Our procedure for drawing clusters is based upon the ``optimal sampling'' method of \cite{schulz_etal_2015}. We begin by drawing the most massive cluster, of mass $M_{\mathrm{max}}$. The value of $M_{\mathrm{max}}$ is obtained from imposed constraints. The first constraint is that the integral mass equals $M_{\mathrm{tot}}$: \begin{equation} M_{\mathrm{tot}} = \int_{M_{\mathrm{min}}}^{M_{\mathrm{max}}} M\frac{dN}{dM} dM, \label{eqn:constraint1} \end{equation} where $M_{\mathrm{min}} = 10^5 \,M_{\odot}$ is the minimum mass of clusters that can form in our model. Clusters with initial masses below $10^5 \,M_{\odot}$ are expected to be disrupted in $\lesssim 10$~Gyr by the external tidal field. The second constraint is derived from assuming there is only one cluster of mass $M_{\mathrm{max}}$: \begin{equation} 1 = \int_{M_{\mathrm{max}}}^{\infty} \frac{dN}{dM} dM. \label{eqn:constraint2} \end{equation} Combining both constraints yields $M_{\mathrm{tot}}$ as a function of $M_{\mathrm{max}}$: \begin{equation} M_{\mathrm{tot}} = \frac{\Gamma(2-\beta, M_{\mathrm{min}}/M_{\mathrm{c}}) - \Gamma(2-\beta, M_{\mathrm{max}}/M_{\mathrm{c}})}{\Gamma(1-\beta, M_{\mathrm{max}}/M_{\mathrm{c}})} \, M_{\mathrm{c}}, \label{eqn:constraint3} \end{equation} \rev{where $\Gamma(s,x)$ is the upper incomplete gamma function.} We solve this equation numerically for $M_{\mathrm{max}}$. After drawing the most massive cluster we calculate the cumulative distribution, $r = N(<M)/N(<M_{\mathrm{max}})$ and invert it numerically. We then draw clusters by sampling the cumulative distribution for $0 \leq r \leq 1$ until the total mass in clusters reaches $M_{\mathrm{tot}}$. \rev{In Appendix \ref{sec:appendix_sampling} we discuss the effects of adopting alternate sampling methods.} The realistic value of $M_{\mathrm{c}}$ may depend on local properties of the ISM such as pressure and density \citep[e.g.,][]{kruijssen_2014}. 
However, our model contains no spatial information and therefore a detailed calculation of the local value of $M_{\mathrm{c}}$ is beyond the scope of this work and would only significantly complicate the model. Instead, we choose a constant value of $M_{\mathrm{c}}$ throughout the calculation and analyze the impact on the model for a few different values of $M_{\mathrm{c}}$. \citet{jordan_etal_2007} fit the evolved Schechter function (\autorefp{eq:evolved_mf}) to GCs in the Virgo Cluster Survey (VCS) by assuming a constant mass offset $\Delta M$ for all clusters in a given galaxy. Their results were updated by \citet{johnson_etal_2017}, who included stellar evolution mass loss and revised stellar mass-to-light ratios. Both of these changes affect the fractional mass loss of each cluster and therefore rescale the expected initial cluster mass by a constant factor ($\approx 2.1$). This in turn results in an increase of the fitted cutoff mass by the same factor. For host galaxies with stellar masses of $10^{9}-10^{12}\,M_{\odot}$, \citet{johnson_etal_2017} find values of $M_{\mathrm{c}}$ ranging from $10^6-10^7\,M_{\odot}$ and a weak scaling with galaxy mass. These inferred values are still expected to underestimate the true value of $M_{\mathrm{c}}$, because the assumption of a constant mass offset for all GCs is inconsistent with the nonlinear scaling of the disruption time with cluster mass (see \autorefp{eqn:ttid} below). The most massive clusters, which determine the fitted value of $M_{\mathrm{c}}$, experience a larger $\Delta M$ than low-mass clusters. This effect should push $M_{\mathrm{c}}$ even higher. Our model sample covers the range of dark matter halo mass from $10^{11}-10^{14.5}\,M_{\odot}$, which maps to a range of median stellar masses similar to the observed galaxy sample used in \cite{johnson_etal_2017}. Motivated by these results, we test constant values of $M_{\mathrm{c}} = 10^{6}, 10^{6.5}, 10^{7}, 10^{7.5}\,M_{\odot}$. We find that lower values of $M_{\mathrm{c}} < 10^{6}\,M_{\odot}$ severely truncate the formation of massive clusters that should form in giant galaxies and therefore cannot reproduce the cluster mass functions. Values higher than $10^{7.5}\,M_{\odot}$ give results indistinguishable from those of the PL model. The most appropriate value of $M_{\mathrm{c}}$ to match the overall observed distribution appears to be $M_{\mathrm{c}} \approx 10^{6.5}-10^7\,M_{\odot}$. \subsection{Cluster disruption} \label{sec:disruption} CGL18 adopted a modified version of the analytic cluster disruption prescription used in \cite{gnedin_etal14}, which accounts for dynamical disruption in the presence of both a strong and weak tidal field: $$ \frac{dM}{dt} = -\frac{M}{\mathrm{min}\left(t_{\mathrm{iso}}, t_{\mathrm{tid}}\right)}, $$ where $t_{\mathrm{iso}}$ and $t_{\mathrm{tid}}$ are the disruption timescales in the isolated, weak tidal field limit and strong tidal field limit, respectively. However, this prescription did not take into account the expansion of the cluster as two-body relaxation progresses, and thus overestimated the importance of disruption in isolation. To avoid determining a new prescription for $t_{\mathrm{iso}}$, which is beyond the scope of this work, we simplify the disruption prescription to include only disruption in the strong tidal field limit. This change is reasonable, because in the CGL18 model $t_{\mathrm{tid}} < t_{\mathrm{iso}}$ for $M \geq 5\times 10^3\,M_{\odot}$, so disruption in isolation only affected the lowest mass clusters.
The final prescription is then: \begin{equation} \frac{dM}{dt} = -\frac{M}{t_{\mathrm{tid}}}, \label{eqn:dmdt} \end{equation} where $t_{\mathrm{tid}}$ is: \begin{equation} t_{\mathrm{tid}}(t) \approx 5\,\mathrm{Gyr}\, \left(\frac{M(t)}{2 \times 10^{5}\,M_{\odot}}\right)^{2/3} \left(\frac{P}{0.5}\right), \label{eqn:ttid} \end{equation} and $P$ is a normalized period of rotation around the galactic center\rev{, defined in \cite{gnedin_etal14}}. As in CGL18, we adopt a constant value of $P=0.5$. Integrating \autoref{eqn:dmdt} gives the mass evolution from dynamical disruption: \begin{equation} M'(t) = M_0\left[ 1 - \frac{2}{3}\frac{t}{t_{\mathrm{tid}}(t=0)} \right]^{3/2}. \end{equation} We count time $t$ from the formation of each cluster individually. In addition to dynamical disruption, we include a time-dependent mass-loss rate due to stellar evolution, $\nu_{\rm se}$, as calculated by \cite{prieto_gnedin08}, and assume it occurs much faster than the dynamical disruption. The combined cluster mass evolution is then: \begin{align} M(t) = M'(t) \left[ 1 - \int_0^t \nu_{\rm se}(t')dt' \right]. \end{align} \begin{table} \centering \begin{tabular}{llccccr} \hline\\[-2mm] Model & $M_{\mathrm{c}}\,(M_{\odot})$ & $p_2$ & $p_3\,(\mathrm{Gyr}^{-1})$ & $G_{Z}$ & $G_{M}$ & ${\cal M}$ \\[1mm] \hline\\[-2mm] S6 & $10^{6}$ & 21 & 0.75 & 0.45 & 0.22 & 12.8 \\ S6.5 & $10^{6.5}$ & 13.5 & 0.70 & 0.51 & 0.40 & 9.1 \\ S7 & $10^{7}$ & 8.8 & 0.58 & 0.54 & 0.57 & 7.9 \\ S7.5 & $10^{7.5}$ & 7.15 & 0.50 & 0.57 & 0.61 & 7.6 \\ PL & $\infty$ & 6.75 & 0.50 & 0.58 & 0.67 & 7.3 \\ \hline \end{tabular} \caption{\small Best fit parameters for different cutoff masses of the Schechter function, the associated metallicity and mass ``goodness'' values (see \autoref{sec:optimization}), and the merit function value.} \label{tab:models} \end{table} \subsection{Parameter optimization} \label{sec:optimization} For each value of $M_{\mathrm{c}}$, we search for new best values of $p_2$ and $p_3$ using the same method as in CGL18. We minimize the ``merit function'', $\cal M$: $$ {\cal M} \equiv \frac{1}{N_h} \sum_h \left(\frac{M_{\mathrm{GC}}(z=0)}{M_{\mathrm{GC,obs}}(M_{\mathrm{h}})} - 1 \right)^2 + \frac{1}{N_h}\sum_{h} \left( \frac{0.58}{\sigma_{Z, h} } \right)^2 + \frac{1}{G_{M}} + \frac{2}{G_{Z}}. $$ The first term in the merit function gives the reduced $\chi^2$ of the total GC system mass-halo mass relation. The second term weights the dispersion of the metallicity distributions of model halos against the mean observed value of 0.58~dex. The final two terms weight the ``goodness'' of the metallicity and mass distributions ($G_Z$ and $G_M$, respectively). In brief, they are defined as the fraction of observed-model GC metallicity or mass distribution pairs that have an acceptable Kolmogorov-Smirnov test probability, $p_{KS} > 1\%$, of being drawn from the same underlying distribution. To make a fair comparison between observed and model galaxies, we calculate the median halo mass corresponding to the stellar mass of each host galaxy, and match the distribution of each observed galaxy against all model halos at $z=0$ within $\pm$0.3~dex in mass. The observational data used are a compilation from the VCS and the HST-BCG survey \citep{cote_etal06, peng_etal06, harris_etal14_bcg1, harris_etal16_bcg2, harris_etal17_bcg3}. Throughout, we refer to our power-law CIMF model as PL, and Schechter function models as S$N$, where $N$ is the adopted value of $\log_{10}M_{\mathrm{c}}/\,M_{\odot}$. 
The best-fit parameter values and the associated values of $\cal M$ are given in \autoref{tab:models}. \section{Results} \label{sec:results} We find that decreasing $M_{\mathrm{c}}$ worsens both goodness parameters and the residual value of the merit function, after optimizing the parameters $p_2$ and $p_3$. The lowest cutoff mass we consider, $M_{\mathrm{c}} = 10^{6}\,M_{\odot}$, cannot adequately match the observed mass or metallicity distributions. The merit function value is 75\% larger and the GC mass function is consistent with the corresponding observational analogs in only 22\% of the cases. The match of the metallicity distribution is also below 50\%, which we consider unacceptable. However, the choice of $M_{\mathrm{c}} = 10^{6.5}\,M_{\odot}$ is acceptable. Relative to the PL model, the goodness parameters $G_Z$ and $G_M$ are reduced only by 0.07 and 0.27, respectively. This is a reasonable match to the data, given the simplicity of the model. The choice of $M_{\mathrm{c}} = 10^{7}\,M_{\odot}$ works even better, giving the goodness of both mass and metallicity functions above 50\%. Since $M_{\mathrm{c}}$ delineates the maximum of the observed range, we consider it as the highest viable value to investigate in detail, along with the preferred $10^{6.5}\,M_{\odot}$. \rev{Higher values of the cutoff mass produce results close to the original PL model, but are disfavored by modeling of the present-day GC mass function \citep{jordan_etal_2007, johnson_etal_2017}.} \begin{figure} \includegraphics[width=\columnwidth]{GC2/m_main.pdf} \vspace{-5mm} \caption{Combined mass of all GCs as a function of host halo mass, both at $z=0$. In addition to the original PL model from CGL18, we show two models with the values of cutoff mass closest to the distribution inferred by \citet{johnson_etal_2017}: $M_{\mathrm{c}} = 10^{6.5}$ and $10^{7}\,M_{\odot}$. Observed halo masses are estimated from weak lensing by \protect \cite{hudson_etal_2014} and \protect\cite{harris_etal_2015}.} \label{fig:m_sch} \end{figure} \subsection{Scaling relations of globular cluster systems} \label{sec:scalings} In \autoref{fig:m_sch} we show the relation between the total mass of the GC system, $M_{\mathrm{GC}}$, and its host galaxy halo mass, $M_{\mathrm{h}}$, at $z=0$. The $M_{\mathrm{GC}}-M_{\mathrm{h}}$ relation remains robust despite the introduction of the cutoff mass. For most large halos with $M_{\mathrm{h}} > 10^{11.5}\,M_{\odot}$, the models with smaller $M_{\mathrm{c}}$ match the data even better than the PL model. As a simple estimate of a match between a model and the observed data, we compute the \textsc{rms} deviation of the data from the median trend in each model (in log mass) and divide it by the standard deviation of the data, $\sigma$, in 0.5~dex bins of halo mass. All bins except the lowest-mass bin have \textsc{rms}$/\sigma < 1$, and lower values for lower $M_{\mathrm{c}}$. Only at $10^{11}\,M_{\odot} < M_{\mathrm{h}} < 10^{11.5}\,M_{\odot}$ does the deviation increase with decreasing $M_{\mathrm{c}}$ and reach \textsc{rms}$/\sigma \approx 2$. The best-fit value of the normalization of the formation rate $p_2$ increases with decreasing $M_{\mathrm{c}}$, and therefore a larger total mass is initially formed in clusters ($p_3$ changes only slightly). However, at $z=0$ the total mass of the GC system $M_{\mathrm{GC}}$ in the S6.5 (S7) model is \textit{lower} than in the PL case by 0.15 (0.1) dex on average. 
Because the lower-$M_{\mathrm{c}}$ models preferentially form lower-mass clusters, they lose these clusters faster (see \autorefp{eqn:ttid}): 3 (1.8) times as many clusters are disrupted by $z=0$ in the S6.5 (S7) model as in the PL model. \begin{figure} \includegraphics[width=\columnwidth]{GC2/mean_main.pdf} \vspace{-5mm} \caption{Mean metallicity of GC system as a function of the host halo mass at $z=0$. Data points are a compilation from the Virgo Cluster Survey and HST-BCG, scaled to the metallicity calibration of the VCS. Halo masses are calculated using the stellar mass-halo mass relation of \protect\citet{kravtsov_etal_2014}, and stellar masses are computed using the color-dependent mass-to-light ratios of \protect\citet{bell_etal03}.} \label{fig:mean_sch} \end{figure} \autoref{fig:mean_sch} shows the scaling of the mean metallicity of the GC system with host halo mass for the different $M_{\mathrm{c}}$ models. The rms deviation is larger than for the $M_{\mathrm{GC}}-M_{\mathrm{h}}$ relation, \textsc{rms}$/\sigma \lesssim 1.5$, and generally increases with decreasing $M_{\mathrm{c}}$. In the most massive halo bin, $M_{\mathrm{h}} > 10^{14}\,M_{\odot}$, the ratio reaches \textsc{rms}$/\sigma \approx 4$. We plan to investigate this discrepancy, and its relation to the contribution of satellite galaxies, in future work. At low halo masses ($M_{\mathrm{h}} \sim 10^{11} \,M_{\odot} -10^{11.5} \,M_{\odot}$), lower $M_{\mathrm{c}}$ models have systematically lower mean metallicity. \rev{This is because parameter optimization leads to a higher value of $p_2$ for lower $M_{\mathrm{c}}$ models, producing more blue clusters in small galaxies, thus lowering the mean metallicities.} On the other hand, at higher halo masses, lower $M_{\mathrm{c}}$ models have \textit{higher} mean metallicities than the PL model. In particular, the observed systems show a break in the scaling of the mean metallicity with halo mass in the most massive hosts, with the mean metallicity instead decreasing slightly, by 0.1~dex. While the qualitative trend of the flattening of the relation at these halo masses is robust, models S6.5 and S7 exhibit a shallower decline in the mean $\mathrm{[Fe/H]}$ at $M_{\mathrm{h}} \gtrsim 10^{14}\,M_{\odot}$. To further understand these results, we examined the distribution of cluster populations $M_{\mathrm{tot}}$ (\autorefp{eqn:mgc}) that form in qualified \rev{cluster formation} events. When $M_{\mathrm{tot}} < M_{\mathrm{c}}$, the cutoff in the mass function is irrelevant: the total mass forming in clusters is too low to sample the high-mass end of the CIMF. When $M_{\mathrm{tot}} \gtrsim M_{\mathrm{c}}$, the cutoff becomes important. We find that metal-poor clusters typically form in populations of $M_{\mathrm{tot}} \sim 10^6-10^{7.5}\,M_{\odot}$ (interquartile range) and therefore are somewhat affected by the cutoff in our preferred models S6.5 and S7. In contrast, the metal-rich clusters typically form in populations of $M_{\mathrm{tot}} \sim 10^{6.6}-10^{8.5}\,M_{\odot}$ because their host galaxies are larger, and therefore our choice of $M_{\mathrm{c}}$ affects them more significantly. This effect causes more clusters to form in later events, which inherit higher metallicity, as seen in Fig.~\ref{fig:mean_sch}. The mean metallicity of surviving clusters in the most massive halos, $M_{\mathrm{h}} \gtrsim 10^{13}\,M_{\odot}$, increases relative to the PL model. 
The scaling and normalization of the metallicity dispersion of GC systems, \rev{which we presented and analyzed in CGL18}, is indistinguishable for the different models, and therefore not shown here for brevity. \begin{figure} \includegraphics[width=\columnwidth]{GC2/mf_stack.pdf} \vspace{-5mm} \caption{GC mass function for different models at $z=0$. The distributions have been weighted by the halo mass function so as to be cosmologically representative.} \label{fig:mf_stack} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{GC2/mf_halos.pdf} \vspace{-5mm} \caption{Examples of the GC mass function for four individual halos at $z=0$ in the $M_{\mathrm{c}} = 10^{6.5}\,M_{\odot}$ model, varying by factors of ten in halo mass. The dashed curves show the mass function of Galactic GCs with updated mass-to-light ratios from \citet{harris_etal_2017} and the average mass function of GCs in VCS galaxies (see \autoref{subsec:mf_evolution} for details).} \label{fig:mf_panels} \end{figure} \subsection{Evolution of the cluster mass function} \label{subsec:mf_evolution} \autoref{fig:mf_stack} compares the total GC mass function (GCMF) at $z=0$ for the different $M_{\mathrm{c}}$ models. These GCMFs are constructed to represent the total GC sample in a cosmological volume, as would be observed in large-scale surveys. Since the mass function can vary between galaxies of different mass, we weight the contribution of each cluster to the GCMF by the cosmological halo mass function at $z=0$, $n_{\rm h}(M_{\mathrm{h}}, z=0)$. To do so, we split our sample of model halos into bins of mass of 0.3 dex. Then, we weight each cluster by $n_{\rm h}(M_{\mathrm{h}}, z=0)$ divided by the number of halos in that mass range that were used in our model. The latter step is needed because, to run the model for galaxies of different mass, we chose a log-linearly spaced subset of halos from the cosmological simulation \textit{Illustris}. The weighting converts that selection to a cosmologically representative sample. The halo number densities were calculated using the analytic halo mass functions of \cite{sheth_tormen99} implemented in the \textsc{colossus} code \citep{diemer_colossus}. The GCMFs of all models are peaked around $M \approx 10^5\,M_{\odot}$. Smaller $M_{\mathrm{c}}$ leads to stronger truncation of the high-mass end, as expected. The mean value and the width of the mass function decrease monotonically with decreasing $M_{\mathrm{c}}$. The differences between the PL model and $M_{\mathrm{c}} = 10^6 \,M_{\odot}$ model mass functions are 0.3~dex in the mean mass and 0.2~dex in the standard deviation. \autoref{fig:mf_panels} shows several examples of the GCMF for individual galaxies for our preferred choice $M_{\mathrm{c}} = 10^{6.5}\,M_{\odot}$. They are all consistent with a universal GCMF. For comparison, we show the Galactic GCMF and a stacked sample of GCs for VCS galaxies. The masses of Galactic clusters were computed by combining their luminosities from the \citet{harris10} catalog and the luminosity-dependent mass-to-light ratios suggested by \citet{harris_etal_2017}. In the VCS, the most luminous galaxies host the majority of the detected GCs in the survey. To obtain an average GCMF, we weight each cluster by the inverse of the number of clusters in its host galaxy. Unlike the case for the Galactic GCs, the VCS sample also suffers from incompleteness. Therefore, we adjust the normalization of the VCS GCMF by the expected fraction of clusters above the detection limit.
In CGL18, we estimated this detection limit to be about $10^{4.5}\,M_{\odot}$. The expected fraction is calculated by integrating the evolved Schechter mass function (\autorefp{eq:evolved_mf}) evaluated for $M_{\mathrm{c}} = 10^{6.5}\,M_{\odot}$ and $\Delta M \approx 10^{5.7}\,M_{\odot}$. Here, $\Delta M$ is the average amount of mass lost by clusters, which is similar in our model and in the fits of \citet{jordan_etal_2007}. Note that we adopt constant values for $\Delta M$ and $M_{\mathrm{c}}$, while in reality both of these quantities depend on the properties of the host galaxy. The high-mass galaxies match the Galactic GCMF well, except for the somewhat wider distribution extending to low cluster masses. The only significant deviation is in the smallest halo, which lacks clusters above $10^6\,M_{\odot}$. However, the peak mass is very consistent at $\approx 10^{5}\,M_{\odot}$ for all the galaxies. The GCMF of the VCS galaxies is slightly offset from the Galactic GCMF. This illustrates the modest galaxy-to-galaxy variation of the mass function and the importance of considering all available datasets when comparing model predictions with observations. Our S6.5 model does not produce as narrow a mass distribution as in the VCS and misses the most massive clusters. However, model S7 (not shown for brevity) can match the VCS GCMF. It may be that $M_{\mathrm{c}} \approx 10^{7}\,M_{\odot}$ is required to describe the GC systems of the early-type galaxies in Virgo cluster, while $M_{\mathrm{c}} \approx 10^{6.5}\,M_{\odot}$ is more appropriate for the Milky Way-type galaxies. This is consistent with a general increase of $M_{\mathrm{c}}$ with host galaxy mass, as suggested by \cite{jordan_etal_2007} and \cite{ johnson_etal_2017}. \rev{\cite{jordan_etal_2007} and \cite{harris_etal_2014} noted that the typical width of the GCMF scales weakly with the mass of the host galaxy. To investigate this trend in our model, we fit Gaussians to our model GCMFs and checked the scaling of the best-fit standard deviations $\sigma_{\log M}$ with host galaxy $M_{\star}$. To ensure fair comparison to observation, we performed the fits only to clusters with $M > 10^{4.5} \,M_{\odot}$ and adjusted the normalization according to the completeness of the observed sample. Over the range $M_{\star} = 10^{10.5} - 10^{12} \,M_{\odot}$, observations show the average $\sigma_{\log M}$ increases from 0.40 dex to 0.5 dex. Our S6.5 model shows a similar trend, but with a slightly higher normalization: the average $\sigma_{\log M}$ increases from 0.45 to 0.55 dex.} \rev{The GCMF evolution described above is predicated on the assumed steady tidal mass loss that depends only on cluster mass. In reality, cluster disruption would depend on the environment, including the overall strength of the tidal field and its rapid variation in time (``tidal shocks''). We are unable to include these effects in the present model framework, and therefore cannot conclusively predict how they would influence our conclusions. However, we expect global statistics of many stacked galaxies (as shown in \autoref{fig:mf_stack}) to be robust. For individual galaxies, such as those shown in \autoref{fig:mf_panels}, tidal shocks could be more important. Because on average more massive clusters have higher half-mass density, they are likely to be more resilient to tidal shocks than the less massive clusters. 
This could potentially reduce the number of surviving low-mass clusters and bring the model mass functions shown in Figure 4 to better agreement with the observations.} \begin{figure} \includegraphics[width=\columnwidth]{GC2/gc_sfrz.pdf} \vspace{-5mm} \caption{\textit{Upper:} Cluster formation rate density over cosmic time for the S6.5 and PL models. The blue curve shows the best-fit relation to the observed field cosmic star formation rate density from \protect \cite{madau_dickinson_2014} (extrapolated at $z \gtrsim 8$). \textit{Lower:} Clusters that survive to $z=0$, split into a low and high-mass bin by final mass.} \label{fig:gc_sfr} \end{figure} \subsection{Formation history} The top panel of \autoref{fig:gc_sfr} compares the formation rate of GCs with the cosmic star formation history from \citet{madau_dickinson_2014}. In general, the peak of GC formation precedes the peak of the field stellar population by about 2 Gyr, or $\Delta z \approx 2-3$. At its peak epoch, the GC formation rate is of order 1\% of the total star formation rate (SFR). The GC formation rate falls more steeply after the peak than does the field SFR, dropping by four orders of magnitude. Because of the larger normalization $p_2$, the truncated S6.5 model produces a factor of 2 more mass in clusters at high redshift than the PL model. \citet{el-badry_etal_2019} presented a similar GC formation model to ours, in which the cluster formation efficiency is tied to the gas surface density, which is in turn set by an equilibrium inflow-outflow model. Their model similarly predicts that the GC formation rate peaks earlier than that of the field, with a maximum in the range $z=3-5$. The results of both models show that GCs are unlikely to contribute significantly to the production of ionizing photons before the reionization of cosmic hydrogen at $z\gtrsim6$. The bottom panel of \autoref{fig:gc_sfr} shows the dramatically different proportions of very massive clusters ($M > 10^6\,M_{\odot}$) in the S6.5 and PL models. In the PL model, the contributions to the GC formation rate, $\dot{\rho}_{\rm GC}$, from $t=3-8$~Gyr are similar from the low and high-mass clusters. In contrast, in the S6.5 model, massive clusters never make up more than 10\% of $\dot{\rho}_{\rm GC}$. \begin{figure} \includegraphics[width=\columnwidth]{GC2/tilt_slope_new.pdf} \vspace{-5mm} \caption{The blue-tilt slope $\alpha$ for the stacked sample of all clusters in each $M_{\mathrm{c}}$ model, using the Equal-N-Bin and Fixed-Bin methods (see text for details). Solid lines show the best-fit values and shaded regions show the $1\sigma$ errors. The grey shaded band shows the best-fit slope within its error for the stacked sample of clusters in the Virgo and Fornax Cluster surveys, converted to our metallicity calibration.} \label{fig:tilt_slope} \end{figure} \section{Steepening of the blue tilt} \label{sec:bluetilt} In CGL18, we showed that a correlation between cluster mass and metallicity arises naturally for massive metal-poor (``blue'') clusters. No such trend is found for the metal-rich (``red'') clusters. Describing the relation as $$ \mathrm{[Fe/H]} = \alpha \log_{10}M + \mathrm{const}, $$ we found a best value of $\alpha \approx 0.23$ for a stacked sample of all our model clusters with $M \gtrsim 5\times 10^5 \,M_{\odot}$, consistent with data for observed clusters in the VCS (see Fig.~7 of CGL18). 
In our model, this trend arises because the metal-poor clusters form at high redshift ($3 \lesssim z \lesssim 7$) in low-mass halos ($M_{\mathrm{h}} \lesssim 10^{11} \,M_{\odot}$). Although the gas fractions of galaxies in this redshift range are high, the \textit{total amount} of cold gas available for cluster formation is relatively low, and as a result very massive clusters cannot typically form in these environments. Therefore, the high-mass end of the CIMF is not fully sampled. Instead, massive blue clusters can form only in galaxies with larger cold gas reservoirs. By the adopted galactic scaling relations, the total cold gas mass scales with the galaxy stellar mass, which in turn scales with the galaxy metallicity (which clusters inherit). In this way, massive blue clusters preferentially form in slightly metal-enhanced environments. Hence, the observed mass-metallicity correlation at $z=0$ is a statistical effect. No correlation is produced for the red clusters, because they form in more massive host halos ($M_{\mathrm{h}}\sim 10^{11}-10^{13}\,M_{\odot}$), which have large enough cold gas reservoirs to fully sample the CIMF (see Fig.~8 in CGL18). The truncation of the CIMF at high masses should affect this model result. In particular, because the probability of forming a massive cluster is exponentially suppressed, the formation of massive blue clusters will be pushed to galaxies with even higher gas masses and metallicities \rev{(recently independently argued by \citealt{usher_etal_2018})}. Thus, a Schechter CIMF should \textit{increase} the strength of the blue tilt -- the value of the parameter $\alpha$. \subsection{Quantifying the blue tilt} How exactly we quantify the blue tilt matters. Our survey of the observational literature showed that two different ways of calculating the blue tilt have been adopted. We find that they produce different results and affect the interpretation of trends in the slope $\alpha$. Below we describe these two methods and suggest another method that we think produces more reliable results. The first method (hereafter ``Equal-N-Bin'') follows that used by \citet{mieske_etal_2010} for the combined sample of GCs in the Virgo and Fornax cluster galaxies. We divide clusters above some minimum mass $M_{\mathrm{lim}}$, usually set by observational completeness, into at most 25 bins of mass with an equal number of clusters in each bin. We take $M_{\mathrm{lim}} = 10^{5.15}\,M_{\odot}$ as in \citet{mieske_etal_2010} to allow direct comparison with the observations. Then we fit a sum of two Gaussians to the metallicity distribution of clusters in each bin, representing red and blue clusters. To obtain a reliable fit we need a sufficient number of objects, and therefore we impose a minimum number of clusters per bin: $N_{\rm min, GMM} = 50$. This requirement sets the number of bins as $N_{\rm bins} = \mathrm{min}\left(N_{\rm tot}/N_{\rm min, GMM}, 25\right)$, where $N_{\rm tot}$ is the total number of clusters in the sample. Using this binning method, in our stack of all model clusters we have between 14,000 and 44,000 clusters per bin, depending on the $M_{\mathrm{c}}$ model. For the metallicity distribution fit we use the Gaussian Mixture Modeling (GMM) method described in \citet{muratov_gnedin10}. It gives us the peak locations of the red and blue cluster metallicities, $\mu_{\rm red}$ and $\mu_{\rm blue}$, and the corresponding Gaussian dispersions $\sigma_{\rm red}$ and $\sigma_{\rm blue}$.
To ensure robust GMM results, we exclude any bins which do not have the peaks clearly separated. Specifically, if the separation parameter $$ D \equiv \frac{|\mu_{\rm blue} - \mu_{\rm red}|}{\left[(\sigma^2_{\rm blue} + \sigma^2_{\rm red})/2\right]^{1/2}} $$ for the bin is less than 1.4 then we do not include the bin in the fit. We also exclude bins where the blue peak is ``too red": $\mu_{\rm blue} > -0.8$. These cuts primarily affect the highest mass bins, where the metallicity distributions are nearly unimodal due to merging of the blue and red peaks at high cluster mass. Given this set of peak metallicities of the metal-poor clusters, we do a linear regression fit between $\mu_{\rm blue}$ and the mean $\log_{10}M$ in the bin: \begin{equation} \mu_{\rm blue} = \alpha \log_{10}\frac{M}{10^6\,M_{\odot}} + \beta. \label{eqn:mu_tilt} \end{equation} The pivot at a typical mass of $10^6\,M_{\odot}$ is chosen to minimize the uncertainty of the intercept $\beta$. The result of applying the Equal-N-Bin method on the model clusters for several different values of $M_{\mathrm{c}}$ is shown by the purple curve in \autoref{fig:tilt_slope}. Our model sample includes a stack of clusters from all galaxy halos selected from the \textit{Illustris} cosmological simulation with log-linear spacing in halo mass. Despite our previous argument, this curve shows no correlation between $\alpha$ and $M_{\mathrm{c}}$. We investigated possible reasons for the lack of a correlation with $M_{\mathrm{c}}$ and found it to be caused by the variable bin widths used in the Equal-N-Bin method. To illustrate the dependence of this result on bin width selection, we test an alternative method of binning clusters for GMM fits: with fixed bin widths of 0.1~dex in mass (hereafter ``Fixed-Bin"). This leads to a median number of clusters per bin between 2,400 and 12,000, depending on the $M_{\mathrm{c}}$ model. The red curve in Fig.~\ref{fig:tilt_slope} shows the result of using the Fixed-Bin method. Now we see that $\alpha$ decreases monotonically with $M_{\mathrm{c}}$, consistent with the argument advanced above. In the power-law limit $M_{\mathrm{c}} \rightarrow \infty$, the slope asymptotes to a value $\alpha \approx 0.20$. Since more massive galaxies are expected to have had larger $M_{\mathrm{c}}$ at the time of GC formation, fitting the blue-tilt with the Equal-N-Bin method may wash out information about the variation of $\alpha$ with galaxy mass. 
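The bin-rejection cuts and the linear fit of equation~(\ref{eqn:mu_tilt}) can then be expressed in a few lines. Again this is an illustrative sketch, operating on the per-bin peaks produced by the previous listing and using names of our own choosing; it is not the code used for the figures.
\begin{verbatim}
import numpy as np

def fit_blue_tilt(bins, d_min=1.4, mu_blue_max=-0.8):
    """bins: rows of (mean_logm, mu_blue, sig_blue, mu_red, sig_red).
    Keep only bins with well-separated peaks and a genuinely metal-poor blue
    peak, then fit mu_blue = alpha * log10(M / 1e6 Msun) + beta."""
    mean_logm, mu_b, sig_b, mu_r, sig_r = bins.T
    d = np.abs(mu_b - mu_r) / np.sqrt(0.5 * (sig_b**2 + sig_r**2))
    keep = (d >= d_min) & (mu_b <= mu_blue_max)
    alpha, beta = np.polyfit(mean_logm[keep] - 6.0, mu_b[keep], 1)
    return alpha, beta
\end{verbatim}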
\begin{table*} \centering \begin{tabular}{llccccccr} \hline\\[-2mm] Fit method & Model & $\alpha$ & $\beta$ & $\gamma_z$ & $\delta_z$ \\[1mm] & & 25\% {\bf50\%} 75\% & 25\% {\bf50\%} 75\% & 25\% {\bf50\%} 75\% & 25\% {\bf50\%} 75\% \\[1mm] \hline\\[-2mm] Equal-N-Bin& S6.5 & 0.20 {\bf 0.38} 0.56 & -1.18 {\bf -1.10} -1.03 & -0.026 {\bf-0.051} -0.079 & 0.66 {\bf0.69} 0.71 \\ Fixed-Bin & S6.5 & 0.16 {\bf 0.31} 0.48 & -1.16 {\bf -1.09} -1.04 & -0.022 {\bf-0.042} -0.066 & 0.66 {\bf0.69} 0.71 \\ Single-Split & S6.5 & 0.19 {\bf 0.24} 0.28 & -1.25 {\bf -1.14} -1.03 & -0.026 {\bf-0.032} -0.037 & 0.63 {\bf0.67} 0.71 \\ \hline Equal-N-Bin& S7 & 0.21 {\bf 0.28} 0.37 & -1.19 {\bf -1.17} -1.11 & -0.029 {\bf-0.038} -0.050 & 0.65 {\bf0.66} 0.68 \\ Fixed-Bin & S7 & 0.12 {\bf 0.24} 0.39 & -1.28 {\bf -1.17} -1.10 & -0.015 {\bf-0.033} -0.053 & 0.62 {\bf0.66} 0.68 \\ Single-Split & S7 & 0.16 {\bf 0.20} 0.25 & -1.26 {\bf -1.19} -1.09 & -0.022 {\bf-0.027} -0.033 & 0.63 {\bf0.65} 0.69 \\ \hline \end{tabular} \caption{Parameters describing the blue tilt for the different fitting methods in different $M_{\mathrm{c}}$ models (see equations \ref{eqn:mu_tilt} and \ref{eqn:color_tilt}), in both mass-metallicity and magnitude-colour space (using the same metallicity calibration and color-metallicity relation as in CGL18). We fit the blue tilt in each model galaxy individually and quote the 25th, 50th, and 75th percentiles of the distribution obtained from fitting all model galaxies. All fits were calculated using a minimum mass $M_{\mathrm{lim}} = 10^{5.15}\,M_{\odot}$.} \label{tab:tilt} \end{table*} Observations of extragalactic GCs compiled by \citet{mieske_etal_2010} directly measure the slope of GC $(g-z)$ colour as a function of the $z$-band magnitude: $$ \gamma_z \equiv \frac{d(g-z)}{dM_z}. $$ We can convert our mass-metallicity slope $\alpha$ to this equivalent observational proxy using the colour-metallicity relation adopted in CGL18: \begin{equation} (g-z) = c_0 + c_1 \,\mathrm{[Fe/H]} + c_2 \,\mathrm{[Fe/H]}^2, \label{eqn:colour_met} \end{equation} where $c_0 = 1.513$, $c_1 = 0.481$, $c_2 = 0.051$. To convert magnitudes to cluster masses, we use the colour-dependent mass-to-light ratio of \citet{bell_etal03}: $$ \log_{10}M/L_z = a_{z} + b_{z}\, (g-z), $$ where $a_z = -0.171$ and $b_z = 0.322$. Analytic transformations lead to the expression for the metallicity slope: $$ \alpha = \frac{d\mathrm{[Fe/H]}}{d\log_{10}M} = \frac{-2.5\,\gamma_z}{1 - 2.5\, b_z \gamma_z} \frac{1}{c_1 + 2c_2\mathrm{[Fe/H]}_{\rm blue}}, $$ where the average value for the blue clusters is $\mathrm{[Fe/H]}_{\rm blue} \approx -1.5$. Using the above conversions, we recast our fit for the metallicity as a function of cluster mass in terms of the equivalent observational quantities, $(g-z)$ and $M_z$: \begin{equation} (g-z) = \gamma_z M_z + \delta_z. \label{eqn:color_tilt} \end{equation} On the right axis of \autoref{fig:tilt_slope} we convert $\alpha$ to $\gamma_z$ using the above relations. The grey shaded band shows the result found by \citet{mieske_etal_2010}: $\gamma_z = -0.029 \pm 0.0085$ for the stacked sample of all clusters with $M_z < -8.1$, corresponding to $M_{\mathrm{lim}} \approx 10^{5.15}$ for the \citet{bell_etal03} mass-to-light ratios. Using the Equal-N-Bin method, which was adopted by \citet{mieske_etal_2010}, the predicted best-fit slopes for the stacked sample of all clusters in our model are somewhat higher than their median value, but still generally consistent within the errors. 
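Since the conversion between $\alpha$ and $\gamma_z$ above is purely algebraic, it can be implemented directly. The snippet below is an illustrative sketch using the coefficients quoted in the text; the function name is our own.
\begin{verbatim}
def gamma_to_alpha(gamma_z, feh_blue=-1.5, b_z=0.322, c1=0.481, c2=0.051):
    """Convert the colour-magnitude slope gamma_z = d(g-z)/dM_z into the
    mass-metallicity slope alpha = d[Fe/H]/dlog10(M), using the
    colour-metallicity relation and mass-to-light ratio quoted in the text."""
    return -2.5 * gamma_z / (1.0 - 2.5 * b_z * gamma_z) / (c1 + 2.0 * c2 * feh_blue)

# Example: the stacked value gamma_z = -0.029 of Mieske et al. (2010)
# corresponds to alpha ~ 0.22 for [Fe/H]_blue = -1.5.
print(round(gamma_to_alpha(-0.029), 3))
\end{verbatim}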
Furthermore, comparison to any single set of observations of the blue tilt is insufficient, because observations show that the strength of the blue-tilt is not universal, but rather varies strongly between galaxies, even at fixed galaxy mass \citep{strader_etal06, cockcroft_etal_2009, mieske_etal_2010}. Indeed, several galaxies, including the Milky Way, show no blue-tilt. Therefore, it is more meaningful to compute the blue tilt slope for GC samples of individual galaxies, and then compare the distribution of slopes for a given set of galaxies.

\begin{figure}
\includegraphics[width=\columnwidth]{GC2/slope_hist.pdf}
\vspace{-5mm}
\caption{Distribution of $\alpha$ using the Single-Split method for all model galaxies with more than 50 clusters and using a minimum mass for fitting $M_{\mathrm{lim}} = 10^{5.15} \,M_{\odot}$. At lower values of $M_{\mathrm{c}}$, the distribution shifts towards higher values of $\alpha$, consistent with the expectation discussed in \autoref{sec:bluetilt}. The Equal-N-Bin method, shown in the dashed curves, gives systematically higher values of $\alpha$ than our Single-Split method.}
\label{fig:slope_hist}
\end{figure}

For this reason we introduce a new fitting method (hereafter ``Single-Split''). For each galaxy, we perform a GMM fit to the metallicity distribution of \textit{all} clusters in the galaxy to determine the peak metallicities of the blue and red subpopulations. We label as blue all clusters in the region where the \rev{value of the metal-poor Gaussian} is larger than that of the metal-rich Gaussian. Finally, we fit the metallicity as a function of cluster mass for all the blue clusters above the threshold mass $M_{\mathrm{lim}}$. Thus, this method differs significantly from the Equal-N-Bin and Fixed-Bin methods because we perform only a single GMM split for each galaxy, rather than once for each cluster mass bin, as well as use the actual cluster metallicities instead of the mean value in bins for our linear fits. We impose the same lower limit on the number of clusters in a galaxy needed to perform a GMM split as we do for a single bin in the Equal-N-Bin and Fixed-Bin methods, $N_{\rm min, GMM} = 50$. This method has several advantages over the Equal-N-Bin and Fixed-Bin methods. By performing the linear fit on all blue clusters, rather than on mean values for each bin, we can more accurately quantify the intrinsic scatter in the blue-tilt. Furthermore, this method can be reliably applied to galaxies with fewer clusters, because it requires only a single GMM split. Finally, the effects of merging of the blue and red peaks are mitigated because we perform the fit only for metallicities where blue clusters dominate.

\begin{figure}
\includegraphics[width=\columnwidth]{GC2/mlim.pdf}
\vspace{-5mm}
\caption{Dependence of the blue-tilt slope $\alpha$ on the minimum mass $M_{\mathrm{lim}}$ used in fitting the blue tilt. We show the median value of $\alpha$ across all galaxies. Solid lines show the result from our Single-Split method, dashed lines show the result from the Equal-N-Bin method.}
\label{fig:mlim}
\end{figure}

\autoref{fig:slope_hist} shows the distribution of tilt slopes for all model galaxies with the Single-Split method. There is a wide variance in the outcomes, similar to the observations. For example, in the PL model, the distribution of slopes peaks at a value of $\alpha \approx 0.18$ but also includes galaxies with $\alpha \approx 0$, which means no tilt.
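The Single-Split procedure can be summarised by a short sketch. The code below is purely illustrative: it uses a generic two-component Gaussian mixture (here from \textsc{scikit-learn}) in place of the GMM code of \citet{muratov_gnedin10}, and all function and variable names are our own.
\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

def single_split_tilt(mass, feh, m_lim=10**5.15, n_min_gmm=50):
    """One GMM split per galaxy: label as blue the clusters for which the
    metal-poor Gaussian dominates, then fit [Fe/H] vs log10(M) for the blue
    clusters above m_lim.  Returns (alpha, beta) or None."""
    if feh.size < n_min_gmm:
        return None
    gmm = GaussianMixture(n_components=2, n_init=5).fit(feh[:, None])
    blue = int(np.argmin(gmm.means_.ravel()))
    # a cluster is "blue" where the metal-poor component has the larger value
    is_blue = gmm.predict_proba(feh[:, None])[:, blue] > 0.5
    sel = is_blue & (mass > m_lim)
    if sel.sum() < 2:
        return None
    alpha, beta = np.polyfit(np.log10(mass[sel]) - 6.0, feh[sel], 1)
    return alpha, beta
\end{verbatim}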
\rev{We note that we have also fit the Galactic GC population with the Single Split method and find it to be consistent with $\alpha \approx 0$.} Analogously with the result for the stacked sample of all clusters (red curve in \autoref{fig:tilt_slope}), smaller values of $M_{\mathrm{c}}$ shift the peak of $\alpha$ to larger values and significantly broaden the distribution. Dashed curves in \autoref{fig:slope_hist} plot the result of the Equal-N-Bin method applied to individual galaxies, and clearly show that that method produces systematically higher values of $\alpha$ than the Single-Split, with a larger scatter. Table~\ref{tab:tilt} lists the median values of the best-fit parameters $\alpha$ and $\beta$, and their corresponding observational counterparts $\gamma_z$ and $\delta_z$, for the three different fitting methods for our two preferred $M_{\mathrm{c}}$ models. In the above discussion, we used a constant minimum cluster mass $M_{\mathrm{lim}} = 10^{5.15} \,M_{\odot}$ in fitting the blue tilt. However, several observational studies have noted that $\alpha$ increases with increasing $M_{\mathrm{lim}}$ \citep[e.g.,][]{mieske_etal_2010}, suggesting the blue-tilt is actually non-linear. Such behaviour is a natural prediction of our model, because the formation of more massive clusters depends even more sensitively on galaxy mass, and therefore metallicity. \autoref{fig:mlim} shows that the median value of $\alpha$ indeed increases with $M_{\mathrm{lim}}$ in the S6.5 and S7 models. We find that the value of $\alpha$ scales more strongly with $M_{\mathrm{lim}}$ in lower $M_{\mathrm{c}}$ models, where the number of very massive blue clusters is smaller. We also find that this trend is stronger in the Equal-N-Bin method compared to the Single-Split method. This effect should be taken into account when comparing the results on the blue tilt in surveys or models with different limiting cluster mass. \subsection{On the origin of blue tilt} \label{sec:discussion} The variation in the strength of the blue-tilt at $z=0$ reflects the assembly history of the host galaxy: if the cold gas reservoirs at high redshift were relatively low then a strong cluster mass-metallicity relation should be produced. \begin{figure} \includegraphics[width=\columnwidth]{GC2/bluetilt_tracks.pdf} \vspace{-5mm} \caption{Mass growth of two halos with approximately the same mass at $z=0$, $M_{\mathrm{h}} \approx 10^{13}\,M_{\odot}$. Solid regions along the curve show the periods when GCs are actively forming (i.e., the condition $R_{\rm m} > p_3$ is satisfied). The halo depicted in blue shows a blue-tilt at $z=0$, while the halo in green does not. No mass-metallicity correlation arises in the green halo because it is massive enough at early times to have a sufficient amount of cold gas for the formation of massive blue clusters.} \label{fig:bluetilt_tracks} \end{figure} To illustrate this effect, we show in \autoref{fig:bluetilt_tracks} the evolution of the main-branch of two halos, which we label ``Halo A" (green) and ``Halo B" (blue) of the same mass at $z=0$, but with very different assembly histories. At redshifts $z>1$, Halo A is overmassive relative to Halo B. In accordance with the arguments outlined above, Halo A shows zero blue-tilt, while Halo B has a typical blue-tilt: $\alpha \approx 0.2$ (using the Single-Split method). 
Because Halo A's cold gas reservoir -- which, by our adopted galactic scaling relations, is set by the galaxy stellar mass, in turn set by the halo mass -- was already large enough at high redshift, the CIMF could be fully sampled. This allows for the formation of massive, metal-poor clusters in Halo A. In contrast, Halo B's cold-gas reservoirs were relatively small, and thus massive clusters formed preferentially later, when Halo B was more massive and more metal-rich. We note that for the purposes of this discussion we have compared only the clusters formed in the main branch of the halos, ignoring the contribution of satellites. In an upcoming paper, we will examine the contribution of cluster formation in satellite galaxies in more detail (N. Choksi \& O. Gnedin, in prep).

Because the metal-enhanced galaxies in which blue clusters preferentially form will be more abundant at later times, our model also predicts an ``age-tilt'' for the massive, blue clusters. We find that over the range $M = 5\times 10^5 - 5\times 10^6\,M_{\odot}$, the median age of clusters decreases by 0.3~Gyr. Unfortunately, precise age measurements of extragalactic GCs are extremely difficult and have typical errors of $\approx$2 Gyr \citep[e.g.,][]{georgiev_etal12}. Improved age measurements or large samples of ages for extragalactic GCs will be required to test this model prediction.

\subsection{Comparison with other models}

Our model provides a natural explanation for the blue tilt. This removes the need for alternative models that invoke self-enrichment during the GC formation event \citep{strader_smith_2008, bailin_harris09, bailin_2018}. Of course, our results cannot rule out a possible additional contribution from self-enrichment, but they do suggest it is not needed to explain the observations. A similar conclusion has been reached by \citet{usher_etal_2018} using the E-MOSAICS model for cluster formation, which combines analytic prescriptions for cluster formation and evolution with cosmological hydrodynamic simulations using the EAGLE model \citep{schaye_etal_2015, pfeffer_etal_2018}. They present a detailed comparison of different calibrations to convert metallicity into visual color and cluster mass to luminosity, and show that the inference of a blue tilt is robust. They find essentially the same mean slope of the blue tilt for Milky Way-sized galaxies, $\gamma_z \approx -0.03$, and a considerable scatter among different galaxy realizations.

It is interesting and reassuring that these similar conclusions are reached despite the differences between the construction of our model and that of \citet{usher_etal_2018}. The CIMF cutoff mass in their simulations is not constant but varies with local properties of the ISM using the model of \cite{reina-campos_kruijssen_2017}. The average $M_{\mathrm{c}}$ increases with cluster metallicity and host galaxy mass, and ranges between $10^6$ and $10^{7.5}\,M_{\odot}$. Thus, in their model the blue tilt appears as a consequence of different cutoff masses for red and blue clusters, whereas in our model it is due to the insufficient gas supply in the host galaxies of blue clusters. \rev{However, \citet{usher_etal_2018} find a stronger tilt in more massive galaxies and at smaller galactocentric radii, which also favor higher $M_{\mathrm{c}}$.} This trend is opposite to our results, where a stronger tilt is found for lower $M_{\mathrm{c}}$.
A caveat to this comparison is that their cluster metallicity distribution does not show an obvious bimodality, so that the selection of clusters to be ``blue'' is skewed towards higher metallicity. The average color of their sample of blue clusters, $(g-z)\approx 1$, corresponds to $\mathrm{[Fe/H]}\approx -1.2$ on our metallicity scale, which places it close to the boundary separating the red and blue cluster populations in our model.

We note that for a \textit{fixed} $M_{\mathrm{c}}$ model, we find no correlation between galaxy mass and strength of the blue-tilt, regardless of the binning method used. The lack of correlation is expected because the median halo mass in which blue clusters form is independent of the $z=0$ halo mass (see Fig.~8 of CGL18). As a result, the cold gas available for their formation is also essentially constant as the $z=0$ halo mass varies. \rev{Thus a \textit{fixed} $M_{\mathrm{c}}$ model cannot match the trend emerging in available observations that the typical blue-tilt strength increases in higher mass galaxies \citep[][]{mieske_etal_2010}. However, this does not invalidate our overall model framework. We expect the value of $M_{\mathrm{c}}$ not to be a constant, but instead to increase with galaxy mass \citep[e.g.,][]{johnson_etal_2017}, which could potentially alleviate this tension with the observations. In the present work we chose to keep $M_{\mathrm{c}}$ fixed across the entire galaxy mass range, to maintain simplicity of the model.}

\rev{It would be interesting to test these different predictions of the two models by future observations. Such a test would help in two ways. The \cite{usher_etal_2018} predictions result from the parametric cluster model coupled with simulations of galaxy formation; therefore, it is a test of its prescriptions for cluster formation and disruption as well as the physics of the underlying simulations. The predictions of our model result from the adopted galactic scaling relations; therefore, it is a test of the uncertainties of their extrapolation to high redshift. Both of these tests are important for improving our understanding of galaxy formation at high redshift. In this sense, the two models provide complementary probes.}

\section{Summary}
\label{sec:summary}

We have introduced a cutoff of the cluster initial mass function in our model of globular cluster formation and evolution. Our main results are:
\begin{enumerate}
\item Fixed cutoff masses of $M_{\mathrm{c}} = 10^{6.5}\,M_{\odot}$ or $M_{\mathrm{c}} = 10^{7}\,M_{\odot}$ match many observed scaling relations, including the GC system mass-host halo mass relation, the average metallicity of the GC system-host halo mass relation, and the cluster mass functions. This range of the cutoff mass agrees with the indirect measurements of the initial mass function of GCs in the Virgo and Fornax cluster galaxies as well as several nearby galaxies.
\item Models with $M_{\mathrm{c}} < 10^{6.5}\,M_{\odot}$ cannot reproduce the observed GC metallicity and mass distributions in massive early-type galaxies. Models with $M_{\mathrm{c}} > 10^7\,M_{\odot}$ produce results similar to the model without a cutoff (i.e., the power-law model) and are inconsistent with the observational constraints on $M_{\mathrm{c}}$.
\item The peak of the GC formation rate density occurs about 2~Gyr earlier than that of the field star formation rate density, and corresponds to $z \approx 4-6$.
\item Lower $M_{\mathrm{c}}$ leads to a higher total mass formed in GCs at high redshift, to compensate for the increased effect of disruption of the many small clusters. \item The slope of the mass-metallicity relation for metal-poor clusters (blue tilt) for all $M_{\mathrm{c}}$ models is consistent with the observations within the errors, when measured using the same method: fitting peak metallicities (or colors) of the GMM mode for blue clusters in bins of cluster mass with an equal number of clusters in each bin. Using alternative methods, either with a fixed bin size or a single GMM split for all cluster masses, reveals the trend that the typical tilt strength increases with decreasing $M_{\mathrm{c}}$. \item The spread of the tilt slope values of GC systems for individual galaxies also increases for lower $M_{\mathrm{c}}$. We find no clear correlation between the tilt slope and galaxy mass at fixed $M_{\mathrm{c}}$. \item In our model, the blue tilt arises because the metal-poor clusters form in relatively low-mass galaxies which lack sufficient cold gas to sample the CIMF at the highest masses. Massive blue clusters form in progressively more massive galaxies and inherit their higher metallicity. The metal-rich clusters do not exhibit such a tilt because they form in significantly more massive galaxies, which have enough cold gas to fully sample the CIMF. \end{enumerate} These results confirm that our simple model provides a good description of the origin of most observed scaling relations of GC systems. The introduction of the fixed cutoff of the cluster initial mass function makes the model predictions more realistic, while retaining the simplicity of only two adjustable parameters. \section*{Acknowledgements} We are grateful to Hui Li, Cliff Johnson, and the participants of the KITP program ``The Galaxy-Halo Connection Across Cosmic Time" in June 2017 for useful discussions, as well as Diederik Kruijssen for a constructive referee report which improved this work. Research at KITP is supported in part by the National Science Foundation through grant PHY17-48958. We thank the Illustris team for releasing their halo catalogs and Benedikt Diemer for making the \textsc{colossus} code publicly available. NC thanks Goni Halevi for helpful comments and support throughout the preparation of this work. This work was supported in part by the National Science Foundation through grant 1412144. \bibliographystyle{mnras}
\section{Introduction}
\label{sec:introduction}

The computation of visibility is a fundamental problem on terrains and is at the core of many applications such as planning the placement of communication towers or watchtowers, planning of buildings such that they do not spoil anybody's view, finding routes on which you can travel while seeing a lot or without being seen, and computing solar irradiation maps which can in turn be used in predicting vegetation cover. The basic problem is point-to-point visibility: Two points $a$ and $b$ on a terrain are visible to each other if the interior of their \emph{line-of-sight} $ab$ (the line segment between $a$ and $b$) lies entirely above the terrain. Based on this one can define the viewshed: Given a terrain and an arbitrary (view)point $v$, not necessarily on the terrain, the \emph{visibility map} or \emph{viewshed} of $v$ is the set of all points in the terrain that are visible from $v$; see Figure~\ref{fig:viewshed-ex}.

A variety of problems pertaining to visibility have been researched in computational geometry and computer graphics, as well as in geographic information science and geospatial engineering. The key in defining and computing visibility is choosing a terrain model and an interpolation method. The most common terrain models are the grid and the TIN (triangular irregular network). A grid terrain is essentially a matrix of elevation values, representing elevations sampled from the terrain with a uniform grid; the $x,y$ coordinates of the samples are not stored in a grid terrain; they are considered implicit with respect to the corner of the grid. A TIN terrain consists of an irregular sample of points ($x$, $y$, and elevation values), and a triangulation of these points is provided. Grid terrains are the most widely used in GIS because of their simplicity. In this paper we discuss the computation of visibility maps on \emph{grid} terrains.

To decide whether a point $p$ is visible on a given terrain model, one needs to interpolate the elevation along the line-of-sight $pv$ between the viewpoint $v$ and $p$ (more precisely, along the projection of the line-of-sight on the horizontal plane) and check whether the interpolated elevations are below the line of sight. Various algorithms differ in what and how many points they select to interpolate along the line-of-sight, and in the interpolation method used. These choices crucially affect the efficiency and accuracy of the algorithms.

In order to be useful in practice, viewshed algorithms need to be fast and scalable to very large terrains. The last decade witnessed an explosion in the availability of terrain data at better and better resolution. In 2002, for example, NASA's Shuttle Radar Topography Mission (SRTM) acquired 30~m-resolution terrain data for the entire USA, in total approximately 10 terabytes of data. With more recent technology it is possible to acquire data at sub-meter resolution. This brings tremendous increases in the size of the datasets that need to be processed: Washington state at 1~m resolution, using 4 bytes for the elevation of each sample, would total 689 GiB of data; Ireland would be 262 GiB---only counting elevation samples on land. Data at this fine resolution has started to become available.

\begin{figure}[t]
\centering
\includegraphics[width=6cm]{viewshed-3d.pdf}
\includegraphics[width=5cm]{los.pdf}
\caption{(a) The viewshed of a point on a grid terrain is shown in green. The viewpoint is marked in blue.
(b) Two points on a terrain are visible to each other if the interior of their line-of-sight lies entirely above the terrain. Here $a$ is visible from $v$, but $b$ is not.}
\label{fig:viewshed-ex}
\end{figure}

\subsection{I/O-efficiency}

Working with large terrains requires efficient algorithms that scale well and are designed to minimize ``I/O'': the swapping of data between main memory and disk. We assess the efficiency of algorithms in this paper not only by studying the number of computational steps they need and by measuring their running times in practical experiments, but also by studying how the number of I/O-operations grows with the input size. To this end we use the standard model defined by Aggarwal and Vitter~\cite{aggarwal:input}. In this model, a computer has a memory of size $M$ and a disk of unbounded size. The disk is divided into blocks of size $B$. Data is transferred between memory and disk by transferring complete blocks: transferring one block is called an ``I/O\xspace''. Algorithms can only operate on data that is currently in main memory; to access the data in any block that is not in main memory, it first has to be copied from disk. If data in the block is modified, it has to be copied back to disk later, at the latest when it is evicted from memory to make room for another block. The I/O\xspace-efficiency of an algorithm can be assessed by analysing the number of I/Os\xspace it needs as a function of the input size $n$, the memory size $M$, and the block size $B$. The fundamental building blocks and bounds in the I/O-model are sorting and scanning: scanning $n$ consecutive records from disk takes $\mathsf{scan}(n) = \Theta(n/B)$ I/Os\xspace; sorting takes $\mathsf{sort}(n) = \Theta(\Sort{n})$ I/Os\xspace in the worst case~\cite{aggarwal:input}. It is sometimes assumed that $M = \Omega(B^2)$.

We distinguish between \emph{cache-aware} and \emph{cache-oblivious} I/O-efficient algorithms: Cache-aware algorithms may use knowledge of $M$ and $B$ (and to some extent even control $B$), and they may use it to control which blocks are kept in memory and which blocks are evicted. Cache-oblivious algorithms, as defined by Frigo et al.~\cite{prokop:cob}, do not know $M$ and $B$ and cannot control which blocks are kept in memory: the caching policy is left to the hardware and the operating system. Nevertheless, cache-oblivious algorithms can often be designed and proven to be I/O\xspace-efficient~\cite{prokop:cob}. The idea is to design the algorithm's pattern of access to locations in files and temporary data structures such that effective caching is achieved by any reasonable general-purpose caching policy (such as least-recently-used replacement) \emph{for any values} of $M$ and $B$. As a result, any bounds that can be proven on the I/O\xspace-efficiency of a cache-oblivious algorithm hold for \emph{any} values of $M$ and $B$ simultaneously. Thus they do not only apply to the transfer of data between disk and main memory, but also to the transfer of data between main memory and the various levels of smaller caches. However, in practice, cache-oblivious algorithms cannot always match the performance of cache-aware algorithms that are tuned to specific values of $M$~and~$B$~\cite{brodal:cobsorting-jea}.

\subsection{Problem definition}

A \emph{terrain} $T$ is a surface in three dimensions, such that any vertical line intersects $T$ in at most one point. The \emph{domain} $D$ of $T$ is the projection of $T$ on a horizontal plane.
The \emph{elevation angle} of any point $q = (q_x,q_y,q_z)$ with respect to a viewpoint~$v = (v_x,v_y,v_z)$ is defined as: $$\textrm{ElevAngle}(q) = \arctan\frac{q_z - v_z}{\textrm{Dist}(q)},$$ where $\textrm{Dist}(q) := |(q_x,q_y)-(v_x,v_y)|$. A point $u = (u_x,u_y,u_z)$ is visible from $v$ if and only if the elevation angle of $u$ is higher than the elevation angle of any point of $T$ whose projection on the plane lies on the line segment from $(u_x,u_y)$ to $(v_x,v_y)$. We define the elevation angle of any point $(q_x,q_y)$ of $D$ as the elevation angle of the point $q = (q_x,q_y,q_z)$ where the vertical line through $(q_x,q_y)$ intersects $T$. In this paper we consider terrains that are represented by a set of $n$ points whose projections on~$D$ form a regular rectangular grid. To decide whether a point $u$ is visible from a point $v$, we need to \emph{interpolate} the elevation angle of points of $T$ whose projection on the plane lie along the line segment from $(u_x,u_y)$ to $(v_x,v_y)$. We want to compute the following: given any terrain $T$ and any viewpoint $v = (v_x,v_y,v_z)$, find which grid points of the terrain are visible to $v$ and which are not. We assume the terrain is given as a matrix $Z$, stored row by row, where $Z_{ij}$ is the elevation of the point in row $i$ and column $j$. The output visibility map is a matrix $V$, stored row by row, in which $V_{ij}$ is $1$ if the point in row $i$ and column $j$ is visible, and $0$ otherwise. For ease of presentation, throughout the rest of the paper we assume that the grid is square and has size $\sqrt n \cdot \sqrt n$ ; of course the actual implementations of our algorithms can handle rectangular grids as well. \subsection{Related work} The standard method for computing viewsheds on grid terrains is the algorithm \texttt{R3}\xspace by Franklin and Ray~\cite{franklin:sdh94}. \texttt{R3}\xspace determines the visibility of each point in the grid as follows: it computes the intersections between the horizontal projection of the line-of-sight and the horizontal and vertical grid lines, and computes the elevation of the terrain at these intersection points by linear interpolation. Since a line of sight intersects $O(\sqrt n)$ grid lines, determining the visibility of a point takes $O(\sqrt n)$ time. This is considered to be the standard model and \texttt{R3}\xspace is considered to produce the ``exact'' viewshed~\cite{izraelevitz:vis}. However, as described by Franklin and Ray, \texttt{R3}\xspace runs in $O(n \sqrt n)$~time, which is too slow in practice, especially for multiple viewshed computations. A variety of viewshed algorithms have been proposed that optimize \texttt{R3}\xspace while approximating in some way the resulting viewshed: Some algorithms consider only a subset of the $O(n)$ lines-of-sight; others interpolate the line-of-sight only at a subset of the $O(\sqrt n)$ intersection points with the grid lines; yet others have some other way of determining in $O(1)$ time whether a point in the grid is visible. The optimized viewshed algorithms run in $o(n \sqrt n)$ time, most often $O(n)$. Examples are \textsc{XDraw} by Franklin and Ray~\cite{franklin:sdh94}; \textsc{Backtrack} by Izraelevitz~\cite{izraelevitz:vis}; \texttt{R2}\xspace by Franklin and Ray~\cite{franklin:sdh94}; and van Kreveld's radial sweep algorithm~\cite{kreveld:viewshed}---below we describe briefly the results which are relevant to this paper. 
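For concreteness, the sketch below implements the per-point test in the spirit of \texttt{R3}\xspace described above: it walks along the projected line-of-sight from the viewpoint to the target, linearly interpolates the terrain elevation where the segment crosses a grid row or column, and compares elevation angles. It assumes a viewpoint located at a grid point and uses our own names and conventions; it is meant only to illustrate the model, not to reproduce the original implementation.
\begin{verbatim}
import math

def elev_angle(v, q):
    # elevation angle of q = (row, col, z) as seen from v = (row, col, z);
    # the distance is measured in the horizontal (grid) plane
    d = math.hypot(q[0] - v[0], q[1] - v[1])
    return math.atan2(q[2] - v[2], d)

def visible_r3(Z, vi, vj, vz, pi, pj):
    """True if grid point (pi, pj) is visible from the viewpoint at grid
    position (vi, vj) with elevation vz; Z[i][j] is the elevation grid."""
    v = (vi, vj, vz)
    target = elev_angle(v, (pi, pj, Z[pi][pj]))
    di, dj = pi - vi, pj - vj
    blockers = []
    if di != 0:                      # crossings with horizontal grid lines
        step = 1 if di > 0 else -1
        for i in range(vi + step, pi, step):
            j = vj + (i - vi) / di * dj
            j0 = math.floor(j)
            f = j - j0
            z = (1 - f) * Z[i][j0] + f * Z[i][min(j0 + 1, len(Z[0]) - 1)]
            blockers.append((i, j, z))
    if dj != 0:                      # crossings with vertical grid lines
        step = 1 if dj > 0 else -1
        for j in range(vj + step, pj, step):
            i = vi + (j - vj) / dj * di
            i0 = math.floor(i)
            f = i - i0
            z = (1 - f) * Z[i0][j] + f * Z[min(i0 + 1, len(Z) - 1)][j]
            blockers.append((i, j, z))
    # p is visible iff every interpolated point lies strictly below the line of sight
    return all(elev_angle(v, b) < target for b in blockers)
\end{verbatim}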
The algorithm named \texttt{R2}\xspace, proposed by Franklin and Ray~\cite{franklin:sdh94}, is an optimization of \texttt{R3}\xspace that runs in $O(n)$ time. The idea of \texttt{R2}\xspace is to examine the lines-of-sight \emph{only} to the $O(\sqrt n)$ grid points on the boundary of the grid; a grid point that is not on the boundary is considered to be visible if the nearest point of intersection between a grid line and one of the examined lines-of-sight is determined to be visible. Overall \texttt{R2}\xspace is fast and, according to its authors, produces a good approximation of \texttt{R3}\xspace, whose gain in speed outweighs its loss in accuracy~\cite{franklin:sdh94}.

The other algorithm, \emph{XDraw}, computes the visibility of the grid points incrementally in concentric layers around the viewpoint, starting at the viewpoint and working its way outwards. For a grid point $u$ in layer $i$, the algorithm computes whether $u$ is visible, and what is the maximum height above the horizon along the line of sight to $u$. To do so, it determines the two grid points $q$ and $r$ in layer $i - 1$ that are nearest to the projected line of sight $\overline{vu}$ from the viewpoint $v$, and then it estimates the maximum height above the horizon along $\overline{vu}$ by interpolating between the lines of sight to $q$ and $r$. Thus, the visibility of each point is determined in constant time per point. \emph{XDraw} is faster than \texttt{R3}\xspace and \texttt{R2}\xspace, due to the simplicity of the calculations, but it is also the least accurate~\cite{franklin:sdh94}. Izraelevitz~\cite{izraelevitz:vis} presented a generalization of \emph{XDraw} that allows the user to set a parameter $k$, which is the number of previous layers that are taken into account when computing the visibility of a grid point.

\begin{figure}[t]
\centering{
\includegraphics[width=4cm]{grid1.pdf}
\includegraphics[width=4cm]{grid5.pdf}
\caption{Van Kreveld's model: (a) Each grid point represents a square cell centered at that point. The elevation angle of all points inside a cell is the same as the elevation angle of the center of the cell (the grid point). (b) To determine if two grid points are visible, we need to consider all cells that intersect the line segment between the points. }
\label{fig:kreveld-model}
}
\end{figure}

Van Kreveld described a different approach for computing viewsheds on grids that could also be seen as an optimization of \texttt{R3}\xspace~\cite{kreveld:viewshed}. In his model the terrain is seen as a tessellation of square cells, where each cell is centered around a grid point and has the same view angle as the grid point throughout the cell, that is, the cell appears as a horizontal line segment to the viewer (Figure~\ref{fig:kreveld-model}). This property allows for the viewshed to be computed in a radial sweep of the terrain in $O(n \lg n)$ time. Because cells have constant view angle, they can be stored in an efficient data structure as the ray rotates around the viewpoint. This data structure supports insertions of cells, deletions of cells, and visibility queries for a point along the ray in $O(\lg n)$ time per operation, and thus the whole viewshed can be computed while rotating the ray in $O(n \lg n)$ time.

The viewshed algorithms mentioned so far assume that the computation fits in memory and are not IO-efficient. I/O-efficient viewshed algorithms have been proposed by Magalh\~aes et al.~\cite{magalhaes:geo}, Ferreira et al.~\cite{ferreira:vis} and in our previous work~\cite{havertoma:visibility-journal,ht:vis2,ht:accuracy}; we discuss these results below.
Haverkort, Toma and Zhuang~\cite{havertoma:visibility-journal} presented the first IO-efficient viewshed algorithm using Van Kreveld's model. Using a technique called \emph{distribution sweeping} they turned Van Kreveld's algorithm into an algorithm running in $O(n \log n)$ time and $O(\mathsf{sort}(n))$~I/Os\xspace, cache-obliviously. The authors also presented practical results showing that their algorithm scales well to large data and outperforms Van Kreveld's algorithm running in (virtual) memory.

Subsequently, Magalh\~aes et al.~\cite{magalhaes:geo} and Ferreira et al.~\cite{ferreira:vis} described I/O-efficient versions of Franklin's \texttt{R2}\xspace algorithm. The first algorithm runs in $O(n \log n)$ time and $O(\mathsf{sort}(n))$~I/Os\xspace~\cite{magalhaes:geo}. As in \texttt{R2}\xspace, the idea is to evaluate lines-of-sight only to the points on the perimeter of the grid. To do this I/O\xspace-efficiently, the algorithm first copies all grid points from the input file row by row, annotating each point $p$ with the endpoints of the lines of sight whose evaluation requires the elevation of $p$. Next, all annotated points are sorted by line of sight. The algorithm then evaluates each line of sight, determining for each point on a line of sight whether it is visible or not, and writes the results to a file, in order of computation. As a result, the file contains the visibility map, ordered by line of sight. The last step is to sort this file into row-by-row order. A further improved version of \texttt{R2}\xspace was presented in~\cite{ferreira:vis}. Here the idea is to partition the grid into blocks and run the (in-memory) version of the \texttt{R2}\xspace algorithm modified so that it bypasses the VMM (virtual memory management) system, and instead it maintains a data structure of ``active'' blocks that constitute the block footprint of the algorithm. Whenever the line-of-sight intersects a block, that block is brought into main memory. Blocks are evicted using an LRU policy. Their algorithm, \textsc{TiledVS}, consists of three passes: convert the grid to Morton order, compute visibility using the \texttt{R2}\xspace algorithm, and convert the output grid from Morton order to row-major order. In practice, this algorithm is much faster than the one in~\cite{magalhaes:geo}, achieving on the order of 5,000 seconds on an SRTM dataset of 7.6 billion points (\texttt{SRTM1.region06}, 28.4~GiB using 4 bytes per elevation value), that is, about 0.7~$\mu$s per point. Another advantage of \textsc{TiledVS} is that its first step can be viewed as a preprocessing step common to all viewpoints and thus \textsc{TiledVS} computes the viewshed in only two passes over the grid.

\vspace{.5cm}
The IO-efficient algorithms discussed above differ in how many points they choose to interpolate, how many lines-of-sight they consider, and how they interpolate the terrain. These choices affect both the running time and output of the algorithms. All algorithms described can be considered as approximations of \texttt{R3}\xspace and make some assumptions that they exploit to improve efficiency. \textsc{TiledVS} derives its efficiency in part from considering only $O(\sqrt n)$ lines-of-sight instead of $O(n)$. Van Kreveld's approach exploits crucially that cells have constant elevation angle across their azimuth range. Generalizing to linear interpolation is difficult: it would mean that cells have variable elevation angle across their azimuth range, and one would need a kinetic data structure as the active structure to store elevation angles that change in time.
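As an aside, the row-major to Morton-order conversion that \textsc{TiledVS} uses as its first pass is simple to express. The sketch below is our own illustration, not the authors' code: it computes the Morton index of a cell by interleaving the bits of its row and column; sorting cells by this key yields the Z-order layout.
\begin{verbatim}
def morton_index(i, j, bits=16):
    """Interleave the bits of row i and column j (both < 2**bits) to obtain
    the cell's position in Z-order."""
    idx = 0
    for b in range(bits):
        idx |= ((i >> b) & 1) << (2 * b + 1)
        idx |= ((j >> b) & 1) << (2 * b)
    return idx

# Example: the four cells of a 2x2 block are consecutive in Z-order.
print([morton_index(i, j) for i in (0, 1) for j in (0, 1)])  # [0, 1, 2, 3]
\end{verbatim}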
To evaluate viewshed algorithms it is important to consider both efficiency of running time and accuracy of the computed viewshed. While efficiency is easy to compare, comparing accuracy is much harder. The straightforward way to assess accuracy is to compare the computed viewshed with ground truth data. Ideally one would consider a large sample of viewpoints, compute the viewshed from each one in turn, compare it with the \emph{real} viewshed at that point, and aggregate the differences. Unfortunately, ground truth viewsheds are hard, if not impossible, to obtain. The algorithms mentioned above assume grid terrains. For an overview of internal-memory algorithms for visibility computations on the second most common format of terrain elevation models, the \emph{triangular irregular network} or TIN, we refer to ~\cite{colesharir:visib,floriani:visdtm,floriani:intervisibility}. Visibility algorithms on TINs use the concept of a \emph{horizon} or \emph{silhouette} $\sigma$ of the terrain, which is the upper rim of the terrain, as it appears to a viewer at $v$. More formally, $\sigma_T$ is a function from azimuth angles (compass direction) to elevation angles, such that $\sigma_T(\alpha)$ is the maximum elevation angle of any point on the intersection of $T$ with the ray that extends from $v$ in direction $\alpha$. On a triangulated terrain, the horizon is equivalent to the upper envelope of the triangle edges of $T$, projected on an infinite vertical cylinder centered on the viewpoint; it has complexity $O(n \cdot \alpha(n))$, where $\alpha$ is the inverse Ackermann function~\cite{colesharir:visib}. Horizons have been used to solve various visibility-related problems on triangulated polyhedral terrains. For example, the visibility of all the vertices in a TIN can be computed in $O(n \alpha(n) \lg n)$ time~\cite{colesharir:visib}. A central idea in these solutions is that horizons can be merged in time that is linear in their size, and thus allow for efficient divide-and-conquer algorithms. \subsection{Our contributions} This paper describes IO-efficient algorithms for computing viewsheds on massive grid terrains in a couple of different models. Our first two algorithms work in Van Kreveld's model, and sweep the terrain radially by rotating a ray around the viewpoint while maintaining the terrain profile along the ray. The difference between the two new algorithms is in the preprocessing before the sweep: the first algorithm, which we describe in Section~\ref{sec:radial-layers}, sorts the grid points in concentric bands around the viewpoint; the second algorithm, which we describe in Section~\ref{sec:radial-sectors}, sorts the grid points into sectors around the viewpoint. Both algorithms run in $O(n \log n)$ time and $O(\mathsf{sort}(n))$ I/Os\xspace. The third algorithm, \textsc{io-centrifugal}\xspace, which we describe in Section~\ref{sec:conc-sweep}, uses a complementary approach and sweeps the terrain centrifugally. The algorithm is similar to \emph{XDraw}: it grows a region around the viewpoint, while maintaining the horizon of the terrain within the region seen so far. To maintain the horizon efficiently, we represent it by a grid model itself: we maintain the maximum elevation angle (the ``height'') of the horizon for a discrete set of regularly spaced azimuth angle intervals. 
The horizontal resolution of the horizon model is chosen to be similar to the horizontal resolution of the original terrain model, so that we maintain elevation angles for $\Theta(\sqrt n)$ azimuth angle intervals. This allows a significant speed-up as compared to algorithms that process events at $\Theta(n)$ different azimuth angles, or work with horizons of linear complexity. Also, we note that this gives the algorithm the potential for higher accuracy than \emph{XDraw}, which represents the horizon up to a given layer by only as many grid points as there are in that layer---which can be quite inaccurate close to the viewpoint. Another difference with \emph{XDraw} is that our algorithm does not proceed layer by layer, but instead grows the region in a recursive, more I/O\xspace-efficient way; this results in a significant speed-up in practice. The centrifugal sweep algorithm runs in $O(n)$ time and $O(\mathsf{scan}(n))$ I/Os\xspace cache-obliviously, and is our fastest algorithm.

Our last two algorithms constitute an improved, IO-efficient version of Franklin's \texttt{R3}\xspace algorithm. We distinguish between two models (Figure~\ref{fig:model}), which we describe in Section~\ref{sec:algorithms}: In the \emph{gridlines} model we view the points in the input grid as connected by horizontal and vertical lines, and visibility is determined by evaluating the intersections of the line-of-sight with the grid lines using linear interpolation; this is the model underlying \texttt{R3}\xspace. We also consider a slightly different model, the \emph{layers} model, in which we view the points in the input grid as connected in concentric layers around the viewpoint, and visibility is determined by evaluating the intersections of the line-of-sight with these layers using linear interpolation. The layers model considers only a subset of the intersections considered by the gridlines model and therefore the viewshed generated will be larger (more optimistic) than the one generated with the gridlines model. Preliminary results~\cite{ht:vis2} show that these differences are practically insignificant. The layers model is faster in practice, while having practically the same accuracy as the gridlines model.

We describe our last two algorithms, \textsc{vis-iter}\xspace and \textsc{vis-dac}\xspace, in Section~\ref{sec:algorithms}. They are based on computing and merging horizons in an iterative or divide-and-conquer approach, respectively. Horizon-based algorithms for visibility problems have been described by de Floriani and Magillo~\cite{floriani:visdtm}. On a triangulated terrain $T$, the horizon is equivalent to the upper envelope of the triangle edges of $T$ as projected on a view screen, and has complexity $H(n) = O(n \cdot \alpha(n))$, where $n$ is the number of vertices in the TIN and $\alpha(\cdot)$ is the inverse of the Ackermann function~\cite{cole:visibility}. In Section~\ref{sec:visiter} we show that we can prove a better bound for our setting: we prove that the upper envelope of a set $S$ of $n$ line segments in the plane, whose widths differ by at most a factor of $d$, has complexity $O(dn)$. From here we show that the horizon on a grid of $n$ points with linear interpolation has complexity $O(n)$ in the worst case.

In Section~\ref{sec:results} we describe an experimental analysis and comparison of our algorithms on datasets up to 28 GB. All algorithms are scalable to volumes of data that are more than 50 times larger than the main memory.
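A minimal sketch of the discretized horizon used by \textsc{io-centrifugal}\xspace may help fix ideas: it keeps one maximum elevation angle per azimuth interval and supports the two operations the centrifugal sweep needs. The class and method names are hypothetical; the actual algorithm additionally handles the wrap-around at azimuth $0$ and chooses the number of intervals to be $\Theta(\sqrt n)$.
\begin{verbatim}
import math

class DiscreteHorizon:
    """Maximum elevation angle seen so far, for each of nbins regularly
    spaced azimuth intervals."""
    def __init__(self, nbins):
        self.nbins = nbins
        self.height = [-math.pi / 2] * nbins    # nothing seen yet

    def _bin(self, azimuth):
        # azimuth is assumed to lie in [0, 2*pi)
        return int(azimuth / (2 * math.pi) * self.nbins) % self.nbins

    def occludes(self, azimuth, elev_angle):
        # a point is hidden if the horizon in its direction is at least as high
        return elev_angle <= self.height[self._bin(azimuth)]

    def raise_to(self, az_lo, az_hi, elev_angle):
        # raise the horizon over the azimuth range [az_lo, az_hi] (no wrap-around)
        for b in range(self._bin(az_lo), self._bin(az_hi) + 1):
            self.height[b] = max(self.height[b], elev_angle)
\end{verbatim}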
Our main finding is that, in practice, horizons are significantly smaller than their theoretical upper bound, which makes horizon-based algorithms unexpectedly fast. Our last two algorithms, which compute the most accurate viewshed, turn out to be very fast in practice, although their worst-case bound is inferior. We conclude in Section~\ref{sec:discussion}.

\section{I/O-Efficient radial sweep}\label{sec:radial-layers}

This section describes our first approach to computing a viewshed. It is loosely based on Van Kreveld's radial sweep algorithm, which we present below.

{\bf The model.} We consider that the terrain is represented by a set of $n$ points whose projections on~$D$ form a regular grid with inter-point distance~1. Furthermore, we assume that each grid point $q = (q_x,q_y,q_z)$ represents a square ``cell'' $D(q)$ on $D$ of size $1\times 1$, centered on $(q_x,q_y)$. For any given viewpoint $v$, we treat the terrain above $D(q)$ as if each point of $D(q)$ has elevation angle $\textrm{ElevAngle}(q)$. This is the interpolation method used by Van Kreveld~\cite{kreveld:viewshed}. Determining whether a point $p$ is visible from viewpoint $v$ comes down to deciding whether there is any other grid point $q$ such that the square cell $D(q)$ intersects the line segment from $(v_x,v_y)$ to $(p_x,p_y)$ and $\textrm{ElevAngle}(q) \geq \textrm{ElevAngle}(p)$; see Figure~\ref{fig:kreveld-model}.

\subsection{Van Kreveld's radial sweep algorithm}

The basic idea of Van Kreveld's algorithm~\cite{kreveld:viewshed} is to rotate a half-line (ray) around the viewpoint~$v$ and compute the visibility of each grid point in the terrain when the sweep line passes over it (see Figure~\ref{fig:kreveld}). For this we maintain a data structure (the \emph{active} structure) that, at any time in the process, stores the elevation angles for the cells currently intersected by the sweep line (the \emph{active cells}). Three types of events happen during the sweep:
\begin{denseitems}
\item \textsc{enter}\xspace events: When a cell starts being intersected by the sweep line, it is inserted in the active structure;
\item \textsc{center}\xspace events: When the sweep line passes over the grid point $q$ at the center of a cell, the active structure is queried to find out if $q$ is visible;
\item \textsc{exit}\xspace events: When a cell stops being intersected by the sweep line, it is deleted from the active structure;
\end{denseitems}
Thus, each cell in the grid has three associated events. Van Kreveld~\cite{kreveld:viewshed} uses a balanced binary search tree for the active structure, in which the active cells are stored in order of increasing distance from the viewpoint. Because the cells are convex, this is always the same as ordering the active cells by increasing distance from the viewpoint to the \emph{grid points} corresponding to the cells. With each cell we store its elevation angle. In addition, each node in the tree is augmented with the highest elevation angle in the subtree rooted at that node. A query whether a point $q$ is visible is answered by checking if the active structure contains any cell that lies closer to the viewpoint than $q$ and has elevation angle at least $\mathrm{ElevAngle}(q)$: if yes, then $q$ is \emph{not} visible, otherwise it is. Such a query can be answered in $O(\log n)$ time. To run the complete algorithm, we first generate and sort the $3n$ events by their azimuth angles (the sweep line directions at which they happen).
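For concreteness, the following sketch fixes the semantics of the active structure and its visibility query. It deliberately uses a plain dictionary and a linear scan, so each operation costs $O(n)$ here; Van Kreveld's augmented balanced search tree supports the same operations in $O(\log n)$ time, and this sketch (with hypothetical names) only illustrates the interface and the comparison logic.
\begin{verbatim}
class ActiveStructure:
    """Cells currently intersected by the sweep ray, keyed by their distance
    to the viewpoint, each storing its elevation angle."""
    def __init__(self):
        self.cells = {}                 # distance -> elevation angle

    def insert(self, dist, elev_angle):
        self.cells[dist] = elev_angle

    def delete(self, dist):
        del self.cells[dist]

    def is_visible(self, dist):
        # the grid point with key `dist` is visible iff no closer active cell
        # has an elevation angle at least as large as its own
        own = self.cells[dist]
        return all(not (d < dist and a >= own) for d, a in self.cells.items())
\end{verbatim}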
Then we process the events in order of increasing azimuth angle. The whole algorithm runs in $O(n \log n)$ time.

\begin{figure}[t]
\centering{
\includegraphics[width=5cm]{sweep.pdf}
\caption{Van Kreveld's algorithm. Cells will be present in the active structure as long as the sweep ray intersects the indicated diagonal.}
\label{fig:kreveld}
}
\end{figure}

In our previous work we adapted Van Kreveld's algorithm to make it I/O-efficient~\cite{havertoma:visibility-journal}. The first step was still to generate and sort the events. For each event we stored its location in the plane and its elevation angle. Using four bytes per coordinate, this resulted in an event stream of $36n$ bytes. For large $n$, this is a significant bottleneck.

\subsection{A new I/O-efficient radial sweep algorithm}\label{sec:iorad2}

The main idea of our new radial sweep algorithm is therefore to avoid generating and sorting a fully specified event stream. The purpose of the event stream was to supply the azimuth angle and the elevation angle of the events in order. Note, however, that the azimuth angle of the events only depends on how the sweep progresses over the grid, but not on the elevation values stored in the input file. Only the elevation angles have to be derived from the input file.

Our ideas for making the sweep I/O-efficient are now the following. We can compute the azimuth angles of the events on the fly, without accessing the input file, instead of computing all events in advance. Only when processing an \textsc{enter}\xspace event corresponding to a grid point $q$ does the elevation of $q$ need to be retrieved, in order to insert $\langle \mathrm{Dist}(q), \mathrm{ElevAngle}(q) \rangle$ into the active structure---for \textsc{center}\xspace events the elevation angle can then be found in the active structure and for \textsc{exit}\xspace events the elevation angle is not needed. To allow efficient retrieval of elevations for \textsc{enter}\xspace events, we pre-sort the elevation grid into lists of elevation values, stored in the order of the \textsc{enter}\xspace events that require them. Thus we can retrieve all elevation values in $O(\mathsf{scan}(n))$ I/Os\xspace during the sweep. Sorting the \emph{complete} elevation grid into a \emph{single} list would be relatively expensive (it would require several sorting passes); we avoid that by dividing the grid into concentric bands around the viewpoint, making one list of elevation values for each band. As long as the number of bands is small enough so that we can keep a read buffer of size $\Theta(B)$ for each band in memory during the sweep, we will still be able to retrieve all elevation values during the sweep in $\Theta(\mathsf{scan}(n))$ I/Os\xspace.

\vspace{\baselineskip}
{\bf Notation.} For ease of description, assume that the viewpoint $v$ is in the center of the grid at coordinates $(0,0,0)$ and the grid has size $(2m + 1) \times (2m + 1)$, where $m = (\sqrt n - 1)/2$. The elevations of the grid points are given in a two-dimensional matrix $Z$ that is ordered row by row, with rows numbered from $-m$ to $m$ from north to south and columns numbered from $-m$ to $m$ from west to east. By $p(i,j)$ we denote the grid point $q = (q_x,q_y,q_z)$ in row $i$ and column $j$ with coordinates $q_x = j$, $q_y = -i$ and $q_z = Z_{ij}$; by \emph{cell $(i,j)$} we denote the square $D(p(i,j))$. Let $\textsc{enter}\xspace (i,j)$ denote the azimuth angle of the \textsc{enter}\xspace event of cell $(i,j)$.
\vspace{\baselineskip} {\bf Description of the algorithm.} We now describe our algorithm in detail. Let \emph{layer} $l$ of the grid denote the set of grid points whose $L_\infty$-distance from the viewpoint, measured in the horizontal plane, is $l$. We divide the grid in concentric bands of width $w$ around the viewpoint. Band $k$ (denoted $B_k$), with $k > 0$, contains all grid points of layers $(k-1) w + 1$ up to $k w$, inclusive; so $p(i,j)$ would be found in band $\lceil\max(|i|,|j|) / w\rceil$ (see Figure~\ref{fig:layered-radial-sweep}). We choose $w = \Theta(\sqrt M)$; more precisely, $w$ is the largest power of two such that the elevation and visibility values of a square tile of $(w+1) \cdot (w+1)$ points fit in one third of the memory. \begin{figure}[t] \centering \includegraphics[height=2.5in]{bands.pdf} \caption{A layered radial sweep.} \label{fig:layered-radial-sweep} \end{figure} Our algorithm proceeds in three phases. The first phase is to generate, for each band $B_k$, a list $E_k$ containing the elevations of all points $p(i,j)$ in the band, ordered by increasing $\textsc{enter}\xspace(i,j)$ values (recall that $\textsc{enter}\xspace(i,j)$ denotes the azimuth angle of the enter event of the cell $(i,j)$). Points $p(i,j)$ with the same $\textsc{enter}\xspace(i,j)$ value are ordered secondarily by increasing distance $\mathrm{Dist}(i,j)$ from the viewpoint. The algorithm that builds the lists $E_k$ is given below. The basic idea is to read the grid points from the elevation grid going in counter-clockwise order around the viewpoint. This is achieved by maintaining a priority queue with points just in front of the sweep line; the priority queue is organised by the azimuth angles of the enter events corresponding to the points to be read. The queue is initialised with all points of $B_k$ that lie straight right of the viewpoint (Note: such a point $(0,j)$ will have its $\textsc{enter}\xspace(0,j)$ given by the south-west corner of its cell at $(1/2, j-1/2)$ which corresponds to an angle in the fourth quadrant $(3\pi/2, 2\pi)$; we subtract $2\pi$ to bring it to $(-\pi/2, 0)$; this guarantees that points straight right of the viewpoint are first in radial order). Then we extract points from the queue one by one in order of increasing $\textsc{enter}\xspace(i,j)$; when we extract a point, we read its elevation from the elevation grid, write the elevation value to $E_k$, and insert the next point from the same layer in the priority queue (this is the point above, to the left, below, or to the right, depending on which octant the current point is in). In this way, from neighbor to neighbor, all points are eventually reached. Below we describe the algorithm only for the first quadrant (Figure~\ref{fig:layered-radial-sweep}); the others are handled similarly. 
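A small helper makes the event angles used in the pseudocode below concrete: for a cell that does not contain the viewpoint, the \textsc{enter}\xspace and \textsc{exit}\xspace azimuths are the extreme azimuths of its four corners, with cells straight to the right of the viewpoint shifted by $-2\pi$ as described above. This is an illustrative sketch with hypothetical names, not the actual implementation.
\begin{verbatim}
import math

def azimuth(vx, vy, x, y):
    # counter-clockwise angle of (x, y) as seen from the viewpoint, in [0, 2*pi)
    return math.atan2(y - vy, x - vx) % (2 * math.pi)

def enter_exit(vx, vy, qx, qy):
    """ENTER and EXIT azimuths of the unit cell centred on grid point (qx, qy),
    as seen from a viewpoint (vx, vy) lying outside the cell."""
    corners = [(qx + dx, qy + dy) for dx in (-0.5, 0.5) for dy in (-0.5, 0.5)]
    angles = [azimuth(vx, vy, x, y) for x, y in corners]
    if max(angles) - min(angles) > math.pi:      # cell straddles azimuth 0
        angles = [a - 2 * math.pi if a > math.pi else a for a in angles]
    return min(angles), max(angles)

# Example: the first cell straight to the right of the viewpoint enters at a
# small negative angle (its south-west corner) and exits just above azimuth 0.
print(enter_exit(0.0, 0.0, 1.0, 0.0))
\end{verbatim}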
\addvspace{.5\baselineskip} \noindent \textbf{Algorithm} \textsc{BuildBands}:\\[.25\baselineskip] \textbf{for }$k \leftarrow 1$ \textbf{to} $\lceil m /w \rceil$\\ \textbf{do }initialise empty list $E_k$ and priority queue $Q$\\ \hphantom{\textbf{do }}\textbf{for }$j \leftarrow (k-1)\cdot w + 1$ \textbf{to} $k \cdot w$\\ \hphantom{\textbf{do }}\textbf{do }insert $\langle \textsc{enter}\xspace (0,j)-2\pi, 0,j \rangle$ into $Q$\\ \hphantom{\textbf{do }}\textbf{while} $E_k$ is not complete\\ \hphantom{\textbf{do }}\textbf{do }$\langle \alpha, i, j \rangle \leftarrow Q$.deleteMin()\\ \hphantom{\textbf{do do }}read $Z_{ij}$ from the grid and write it to $E_k$\\ \hphantom{\textbf{do do }}\textbf{if }$-i < j$\hfill\textit{(next cell is north)}\\ \hphantom{\textbf{do do }}\textbf{then }insert $\langle \textsc{enter}\xspace (i-1,j), i-1,j \rangle$ into $Q$\\ \hphantom{\textbf{do do }}\textbf{else}\hfill\textit{(next cell is west)}\\ \hphantom{\textbf{do do then }}insert $\langle \textsc{enter}\xspace (i,j-1), i,j-1 \rangle$ into $Q$\\ \hphantom{\textbf{do }}clear $Q$ \addvspace{.5\baselineskip} After constructing the lists $E_k$, the second phase of the algorithm starts: computing which points are visible. To do this we perform a radial sweep of all events in azimuth order. Again, we generate the events on the fly with the help of a priority queue, using only the horizontal location of the grid points. We use a priority queue to hold events in front of the sweep line, and an active structure to store the cells that currently intersect the sweep line, sorted by increasing distance from the viewpoint (as in Van Kreveld's algorithm). The algorithm starts by inserting all \textsc{enter}\xspace events of the points straight to the right of the viewpoint into the priority queue. When the next event in the priority queue is an \textsc{enter}\xspace event for cell $(i,j)$, the algorithm inserts the corresponding \textsc{center}\xspace and \textsc{exit}\xspace events in the queue, as well as the \textsc{enter}\xspace event of the next cell in the same layer. In addition, it reads the elevation $Z_{ij}$ of $p(i,j)$ from the list of elevation values $E_k$ of the band $B_k$ that contains $p(i,j)$, and it inserts the cell $(i,j)$ in the active structure with key $\textrm{Dist}(i,j)$. When the next event in the priority queue is a $\textsc{center}\xspace$ event for cell $(i,j)$, the algorithm queries the active structure for the visibility of the point with key $\textrm{Dist}(i,j)$. When the next event in the priority queue is an $\textsc{exit}\xspace$ event for cell $(i,j)$, the algorithm deletes the element with key $\textrm{Dist}(i,j)$ from the active structure. 
\addvspace{.5\baselineskip} \noindent \textbf{Algorithm} \textsc{ComputeVisibility}:\\[.25\baselineskip] Initialise empty active structure $A$ and priority queue $Q$\\ \textbf{for }$j \leftarrow 1$ \textbf{to} $m$\\ \textbf{do }insert $\langle \textsc{enter}\xspace (0,j)-2\pi, \textsc{enter}\xspace, 0,j \rangle$ into $Q$\\ \textbf{for }$k \leftarrow 1$ \textbf{to} $\lceil m /w \rceil$\\ \textbf{do }set read pointer of $E_k$ at the beginning\\ \hphantom{\textbf{do }}initialise empty list $V_k$\\ \textbf{while} not all visibility values have been computed\\ \textbf{do }$\langle \alpha, \mathit{type}, i, j \rangle \leftarrow Q$.deleteMin()\\ \hphantom{\textbf{do }}\textbf{if }$\mathit{type} = \textsc{enter}\xspace$\\ \hphantom{\textbf{do }}\textbf{then }insert $\langle \textsc{center}\xspace(i,j), \textsc{center}\xspace, i,j\rangle$ in $Q$\\ \hphantom{\textbf{do then }}insert $\langle \textsc{exit}\xspace(i,j), \textsc{exit}\xspace, i,j\rangle$ into $Q$\\ \hphantom{\textbf{do then }}\textbf{if }$|i| < j$ or $i = j > 0$\quad\quad\textit{(next cell is north)}\\ \hphantom{\textbf{do then }}\textbf{then }insert $\langle \textsc{enter}\xspace (i-1,j), \textsc{enter}\xspace, i-1,j \rangle$ in $Q$\\ \hphantom{\textbf{do then }}[... \textit{similar for west, south, and east} ...]\\ \hphantom{\textbf{do then }}compute band number $k \leftarrow \lceil\max(|i|,|j|) / w\rceil$\\ \hphantom{\textbf{do then }}$z \leftarrow$ the next unread value from $E_k$\\ \hphantom{\textbf{do then }}$\beta \leftarrow \arctan(z/\mathrm{Dist}(i,j))$\quad\quad($= \mathrm{ElevAngle}(p(i,j))$)\\ \hphantom{\textbf{do then }}insert $\langle \mathrm{Dist}(i,j), \beta\rangle$ into $A$\\ \hphantom{\textbf{do }}\textbf{else if }$\mathit{type} = \textsc{center}\xspace$\\ \hphantom{\textbf{do }}\textbf{then }compute band number $k \leftarrow \lceil\max(|i|,|j|) / w\rceil$\\ \hphantom{\textbf{do then }}query $A$ if element with key $\mathrm{Dist}(i,j)$ is visible;\\ \hphantom{\textbf{do then }}if yes, write 1 to $V_k$, otherwise write 0 to $V_k$\\ \hphantom{\textbf{do }}\textbf{else }($\mathit{type} = \textsc{exit}\xspace$)\\ \hphantom{\textbf{do then }}delete element with key $\mathrm{Dist}(i,j)$ from $A$ \addvspace{.5\baselineskip} The crux of the \textsc{ComputeVisibility} algorithm above is the following: when it needs to read $Z_{ij}$, it simply takes the next unread value from its band $E_k$. This is correct, because within each band $B_k$, the above algorithm requires the $Z_{ij}$ values in the order of the corresponding $\textsc{enter}\xspace$ events, and this is exactly the order in which these values were put in $E_k$ by algorithm \textsc{BuildBands}. The output of the second phase is a number of lists $V_k$ with visibility values: one list for each band, in order of the azimuth angle of the grid points. The third phase of the algorithm sorts the lists $V_k$ into one visibility map. To do so we run an algorithm that is more or less the reverse of algorithm \textsc{BuildBands}: we only need to swap the roles of reading and writing, and use azimuth values for \textsc{center}\xspace events instead of \textsc{enter}\xspace events. \vspace{\baselineskip} {\bf Efficiency analysis.} We will now argue that the above algorithm computes a visibility map in $O(n \log n)$ time and $O(\mathsf{scan}(n))$ I/Os\xspace under the assumption that the input grid is square, and we have $M \geq c_1 \sqrt n$ and $M \geq c_2 B^2$ for sufficiently large constants $c_1$ and $c_2$. We start with the first phase: \textsc{BuildBands}. 
Consider the part of band $B_1$ which lies in the first quadrant. This part consists of all points $p(i,j)$ such that $0 \leq -i \leq w$ and $0 \leq j \leq w$ (except the viewpoint itself). It is a tile of size $(w+1) \cdot (w+1)$, which fits in one third of the main memory by definition of $w$. As the algorithm iterates through the points of $B_1$, it accesses their elevations, loading blocks from disk, until eventually the entire tile is in main memory, after which there are no subsequent I/O\xspace-operations on the input grid. The number of I/Os\xspace to access the tile is $O(w + w^2/B) = O(\sqrt{M} + M/B)$. By the assumption that $M = \Omega(B^2)$, this is $O(M/B)=O(|B_1|/B)$, where $|B_1|$ denotes the number of grid points in $B_1$. In fact any band $B_k$ with $k \geq 1$ can be subdivided into $8k-4$ tiles of size at most $(w+1) \cdot (w+1)$, such that for any band, the sweep line will intersect at most two such tiles at any time (see Figure~\ref{fig:layered-radial-sweep}). Since a tile fits in at most one third of the memory, two tiles fit in memory together. Therefore the algorithm can process each band by reading tiles one by one, without ever reading the same tile twice. Thus each band $B_k$ is read in $O(\mathsf{scan}(|B_k|))$ I/Os\xspace, and algorithm \textsc{BuildBands} needs $O(\mathsf{scan}(n))$ I/Os\xspace in total to read the input. The output lists $E_k$ are written sequentially, taking $O(\mathsf{scan}(n))$ I/Os\xspace as well. It remains to discuss the operation of the priority queue. Note that at any time the priority queue stores at most one cell from each layer, and therefore it has size at most $m < \frac12 \sqrt n$; by assumption this is at most $\frac12 M / c_1$. Hence, for a sufficiently large value of $c_1$, the priority queue fits in memory together with the two tiles from the input file mentioned above (which each take at most one third of the memory). Thus the operation of the priority queue takes no I/O\xspace, but it will take $O(\log n)$ CPU-time per operation, and thus $O(n \log n)$ time in total. The second phase, \textsc{ComputeVisibility}, reads and writes each list $E_k$ and $V_k$ in a strictly sequential manner. There are $O(m/w) = O(\sqrt{n/M})$ bands. Under the assumption $M \geq c_1 \sqrt n$ and $M \geq c_2 B^2$, this is only $O(\sqrt{n/M}) = O(\sqrt{n}/B) = O(M/B)$. This implies that, when $c_1$ and $c_2$ are sufficiently large constants, one block from each list $E_k$ or $V_k$ can reside in memory as a read or write buffer during the sweeping. Thus all lists $E_k$ and $V_k$ can be read and written in parallel in $O(\mathsf{scan}(n))$ I/Os\xspace in total. The priority queue and the active structure have size $O(\sqrt n)$ and therefore fit in memory by the arguments given above, so the second phase needs $O(n \log n)$ time and $O(\mathsf{scan}(n))$ I/Os\xspace in total. The third phase, sorting the output lists into a visibility grid, also takes $O(n \log n)$ time and $O(\mathsf{scan}(n))$ I/Os\xspace: the analysis is the same as for the first phase. Note that in practice, the number of visible points is often very small compared to the size of the grid. In that case it may be better to change the algorithm \textsc{ComputeVisibility} as follows: instead of writing the visibility values of \emph{all} grid points to separate lists for each band and sorting these into a grid, we record only the \emph{visible} grid points with their grid coordinates, write them to a single list~$V$, sort this list, and produce a visibility map from the sorted output.
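For concreteness, the per-cell quantities used throughout this section---$\mathrm{Dist}(i,j)$, the azimuths of the \textsc{enter}\xspace, \textsc{center}\xspace and \textsc{exit}\xspace events, and the band index---can be computed directly from the grid coordinates, without touching the elevation data. The following Python fragment is an illustrative in-memory sketch only (the function names are ours, not those of our implementation); it assumes azimuths measured counter-clockwise from due east in $[0,2\pi)$ and, for brevity, omits the special treatment of cells straight right of the viewpoint discussed above.

\begin{verbatim}
import math

def azimuth(x, y):
    # Azimuth of the point (x, y) in the horizontal plane, in [0, 2*pi).
    return math.atan2(y, x) % (2.0 * math.pi)

def dist(i, j):
    # Horizontal distance from the viewpoint to grid point p(i, j);
    # the grid point has planar coordinates x = j, y = -i.
    return math.hypot(j, -i)

def center_event(i, j):
    # Azimuth at which the sweep ray passes over the grid point itself.
    return azimuth(j, -i)

def enter_exit_events(i, j):
    # Azimuths of the ENTER and EXIT events: the smallest and largest
    # azimuth over the four corners of the unit cell around p(i, j).
    # (Cells straight right of the viewpoint cross azimuth 0 and need
    # the special handling described in the text.)
    corners = [azimuth(j + dx, -i + dy)
               for dx in (-0.5, 0.5) for dy in (-0.5, 0.5)]
    return min(corners), max(corners)

def band_index(i, j, w):
    # Band containing p(i, j): ceil(max(|i|, |j|) / w).
    return (max(abs(i), abs(j)) + w - 1) // w
\end{verbatim}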
\subsection{An algorithm for very large inputs} The above algorithm computes a visibility map in $\Theta(\mathsf{scan}(n))$~I/Os\xspace under the assumption that $M \geq c_1 \sqrt n$, and $M \geq c_2 B^2$ for sufficiently large constants $c_1$ and $c_2$. Note that $\mathsf{sort}(n) = \Theta(\mathsf{scan}(n))$ under these assumptions. The idea of a layered radial sweep can be extended to a recursive algorithm that runs in $O(\mathsf{sort}(n))$ I/Os\xspace for any $n$, without either of these assumptions. The idea is the following: we divide the problem into $\Theta(M/B)$ bands, scan the input to distribute the grid points into separate lists for each band, then compute visibility recursively in each band, and merge the results. More precisely, for each band we will compute a list of ``locally'' visible points and a ``local'' horizon: these are the points and the horizon that would be visible in absence of the terrain between the viewpoint and the band. The list of visible points is stored in azimuth order around the viewpoint. The horizon is a step function whose complexity is linear in the number of points of the terrain; it is also stored as a list of points in azimuth order around the viewpoint. Two adjacent bands can then be merged as follows. Let $V_1$ and $H_1$ be the list of visible points and the horizon of the inner band, and let $V_2$ and $H_2$ be the list of visible points and the horizon of the outer band. We scan these four lists in parallel, in azimuth order, and output two lists in azimuth order. First, a list of visible points containing all points of $V_1$, and all points of $V_2$ that are visible above $H_1$. Second, the merged horizon: the upper envelope of $H_1$ and $H_2$. This correctly computes visibility because a point is visible if and only if it is visible in its band and is not occluded by any of the bands that are closer to $v$. The idea of the merge step can be extended to merge $M/B$ bands, resulting in an algorithm that runs in $O(\mathsf{sort}(n))$ I/Os\xspace. To see why, observe that the recursion has $O(\log_{M/B}(n/M))$ levels and the base case runs in linear time. Each band and its horizon have size $O(n/(M/B))$. The merging on each level can be performed in linear time because it involves scanning of $\Theta(M/B)$ lists of size $\Theta(n/(M/B))$; a block from each list fits in memory and the total size of all lists is $\Theta(n)$. It remains to show that a horizon can be computed in linear time in a base-case band of width $\Theta(\sqrt M)$. To see this, we note that a band consists of tiles of size $\sqrt M \cdot \sqrt M$, and the horizon can be computed tile by tile. The details are similar to those already discussed, and we omit them. Overall, we get a divide-and-conquer algorithm that can compute visibility in $\Theta(\mathsf{sort}(n))$ I/Os\xspace for any $n$, assuming $M \geq c_2 B^2$. Because another algorithm with a theoretical I/O-efficiency of $O(\mathsf{sort}(n))$ was already known from our previous work~\cite{havertoma:visibility-journal}, this ``new'' divide-and-conquer algorithm is not particularly interesting. In practice such a recursive algorithm would probably never be needed: it would only be useful when $n$ is at least as big as $(M/c_1)^2$.
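Although we do not pursue this recursive algorithm further, the band-merge step just described is also the conceptual core of the algorithms in Section~\ref{sec:algorithms}, so we illustrate it with a small Python sketch. Horizons are represented here as step functions (sorted lists of azimuth/elevation-angle breakpoints, each value holding until the next breakpoint) and visible points as tuples in azimuth order; for brevity the sketch uses binary search instead of the parallel scan described above, and all names are ours.

\begin{verbatim}
import bisect

def upper_envelope(h1, h2):
    # Upper envelope of two step-function horizons.
    result, i, j = [], 0, 0
    cur1 = cur2 = float("-inf")
    while i < len(h1) or j < len(h2):
        if j == len(h2) or (i < len(h1) and h1[i][0] <= h2[j][0]):
            a, cur1 = h1[i]; i += 1
        else:
            a, cur2 = h2[j]; j += 1
        val = max(cur1, cur2)
        if result and result[-1][0] == a:
            result[-1] = (a, val)     # two breakpoints at the same azimuth
        elif not result or result[-1][1] != val:
            result.append((a, val))
    return result

def horizon_value(h, azimuth):
    # Value of the step-function horizon h at the given azimuth.
    k = bisect.bisect_right(h, (azimuth, float("inf"))) - 1
    return h[k][1] if k >= 0 else float("-inf")

def merge_bands(v1, h1, v2, h2):
    # Inner band: visible points v1, horizon h1; outer band: v2, h2.
    # Points are (azimuth, elevation-angle, identifier) tuples.
    # A point of the outer band stays visible only if it clears h1.
    visible = list(v1) + [p for p in v2 if p[1] > horizon_value(h1, p[0])]
    visible.sort(key=lambda p: p[0])  # keep azimuth order
    return visible, upper_envelope(h1, h2)
\end{verbatim}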
\section{A radial sweep in sectors}\label{sec:radial-sectors} This section describes our second algorithm for computing the visibility map of a point $v$. It does not achieve better asymptotic bounds on running time and I/Os\xspace than the algorithm from the previous section, but, as we will see in Section~\ref{sec:results}, it is faster in practice. Like the algorithm from Section~\ref{sec:radial-layers}, our second algorithm sweeps the terrain radially around the viewpoint. As before, the azimuth angles of the events are computed on the fly using a priority queue. Elevation values of grid points are only needed when their \textsc{enter}\xspace events are processed. To make access to elevation values efficient, we first divide the elevation grid into sectors of $\Theta(M)$ grid points each---this is the main difference from the algorithm of the previous section, which divided the elevation grid into concentric bands. The algorithm proceeds in three phases. For any pair of azimuth angles $\alpha,\beta$, let $S(\alpha,\beta)$ be the set of grid points whose corresponding \textsc{enter}\xspace events have azimuth angle at least $\alpha$ and less than $\beta$. The first phase of our algorithm starts by computing a set of azimuth angles $\alpha_0 < ... < \alpha_s$, where $\alpha_0 = 0$ and $\alpha_s = 2\pi$, such that for any $1 \leq k \leq s$ we have that the coordinates and elevation values of $S(\alpha_{k-1},\alpha_k)$ fit in one third of the main memory. Note that this can be done without accessing the elevation grid: the algorithm only needs to know the size of the grid and the location of the viewpoint in order to be able to divide the full grid into memory-sized sectors. We then scan the elevation grid and distribute the grid points based on their \textsc{enter}\xspace azimuth angle into lists: one list $E_k$ for each sector $S(\alpha_{k-1},\alpha_k)$. (Cells straight right of the viewpoint need to be entered at the beginning of the sweep and are additionally put in a separate list $E_0$.) In the second phase we do the radial sweep as before, sector by sector, with two modifications: (i) whenever we enter a new sector $S(\alpha_{k-1},\alpha_k)$, we load the complete list $E_k$ into memory and sort it by the azimuth angle of the \textsc{enter}\xspace events; (ii) we do not keep a list of visibility values per sector, but instead we write the row and column coordinates of the points that are found to be visible to a single list $L$. \begin{figure}[t] \centering \includegraphics[height=2.5in]{sectors.pdf} \caption{A radial sweep in sectors.} \label{fig:sector-radial-sweep} \end{figure} Finally, in the third phase we sort $L$ and scan it to produce a visibility map of the full grid. Thus the full algorithm is as follows: \newpage \addvspace{.5\baselineskip} \noindent \textbf{Algorithm} \textsc{SectoredSweep}:\\[.25\baselineskip] \textit{First phase---distribution:}\\ Compute sector boundaries $\alpha_0,...,\alpha_s$ analytically such that each sector $S(\alpha_{k-1},\alpha_k)$ fits in one third of the memory.\\ \textbf{for }$k \leftarrow 0$ \textbf{to} $s$\\ \textbf{do }initialise empty list $E_k$\\ \textbf{for }all points $p(i,j)$ in row-by-row order (except $v$)\\ \textbf{do }compute $k$ s.t. $\alpha_{k-1} \leq \textsc{enter}\xspace(i,j) < \alpha_k$\\ \hphantom{\textbf{do }}read $Z_{ij}$ and write $\langle i,j,Z_{ij}\rangle$ to $E_k$\\ \textbf{for }all points $p(i,j)$ from $v$ (excl.)
straight to the right\\ \textbf{do }read $Z_{ij}$ and write $\langle i,j,Z_{ij}\rangle$ to $E_0$\\[.25\baselineskip] \textit{Second phase---sweep:}\\ initialise empty active structure $A$ and priority queue $Q$\\ initialise empty output list $L$\\ \textbf{for }$j \leftarrow 1$ \textbf{to} $m$\\ \textbf{do }insert $\langle \textsc{enter}\xspace (0,j)-2\pi, \textsc{enter}\xspace, 0,j \rangle$ into $Q$\\ $k \leftarrow 0$; load $E_0$ in memory and sort it by $\textsc{enter}\xspace(i,j)$\\ \textbf{while} not all visibility values have been computed\\ \textbf{do }$\langle \alpha, \mathit{type}, i, j \rangle \leftarrow Q$.deleteMin()\\ \hphantom{\textbf{do }}\textbf{if }$\mathit{type} = \textsc{enter}\xspace$\\ \hphantom{\textbf{do }}\textbf{then }\textbf{if }$E_k$ contains no more unread elements\\ \hphantom{\textbf{do then }}\textbf{then }delete $E_k$; $k \leftarrow k + 1$; load $E_k$ in memory\\ \hphantom{\textbf{do then then }}sort $E_k$ and set read pointer at beginning\\ \hphantom{\textbf{do then }}$z \leftarrow$ read the next unread value from $E_k$\quad\quad (=$Z_{ij}$) \\ \hphantom{\textbf{do then }}$\beta \leftarrow \arctan(z/\mathrm{Dist}(i,j))$\quad\quad($= \mathrm{ElevAngle}(p(i,j))$)\\ \hphantom{\textbf{do then }}insert $\langle \mathrm{Dist}(i,j), \beta\rangle$ into $A$\\ \hphantom{\textbf{do then }}insert $\langle \textsc{center}\xspace(i,j), \textsc{center}\xspace, i,j\rangle$ in $Q$\\ \hphantom{\textbf{do then }}insert $\langle \textsc{exit}\xspace(i,j), \textsc{exit}\xspace, i,j\rangle$ into $Q$\\ \hphantom{\textbf{do then }}\textbf{if }$|i| < j$ or $i = j > 0$\quad\quad\textit{(next cell is north)}\\ \hphantom{\textbf{do then }}\textbf{then }insert $\langle \textsc{enter}\xspace (i-1,j), \textsc{enter}\xspace, i-1,j \rangle$ in $Q$\\ \hphantom{\textbf{do then }}[... \textit{similar for west, south, and east} ...]\\ \hphantom{\textbf{do }}\textbf{else if }$\mathit{type} = \textsc{center}\xspace$\\ \hphantom{\textbf{do }}\textbf{then }query $A$ if element with key $\mathrm{Dist}(i,j)$ is visible;\\ \hphantom{\textbf{do then }}if yes, write $\langle i,j\rangle$ to $L$\\ \hphantom{\textbf{do }}\textbf{else }($\mathit{type} = \textsc{exit}\xspace$)\\ \hphantom{\textbf{do then }}delete element with key $\mathrm{Dist}(i,j)$ from $A$\\[.25\baselineskip] \textit{Third phase---produce visibility map:} \\ Sort $L$ lexicographically by row, column\\ Set read pointer of $L$ at the beginning\\ \textbf{for }all points $p(i,j)$ in row-by-row order\\ \textbf{do if }next element of $L$ is $(i,j)$\\ \hphantom{\textbf{do }}\textbf{then }$V_{ij} \leftarrow 1$; advance read pointer of $L$\\ \hphantom{\textbf{do }}\textbf{else }$V_{ij} \leftarrow 0$\\ \vspace{\baselineskip} {\bf Efficiency analysis.} We will now briefly argue that the above algorithm computes a visibility map in $O(n \log n)$ time and $O(\mathsf{scan}(n) + \mathsf{sort}(t))$ I/Os\xspace, where $t$ is the number of visible grid points, under the assumption that the input grid is square and $M^2/B \geq c n$ for a sufficiently large constant $c$. The first phase of the algorithm reads the elevation grid once and writes elevation values to $O(n/M) = O(M/B)$ sector lists. Therefore we can keep, for each sector, one block of size $\Theta(B)$ in memory as a write buffer, and thus the first phase produces the sector lists in $O(\mathsf{scan}(n))$ I/Os\xspace. The running time of the first phase is $\Theta(n)$. During the second phase, we read the sector lists one by one, in $O(\mathsf{scan}(n))$ I/Os\xspace in total. The priority queue and the active structure can be maintained in memory by the arguments given in the previous section. Creating and sorting $L$ takes $O(\mathsf{sort}(t))$ I/Os\xspace, after which it is scanned to produce a visibility map. Thus the algorithm runs in $O(n \log n)$ time and $O(\mathsf{scan}(n) + \mathsf{sort}(t))$ I/Os\xspace.
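Both radial sweeps interact with the active structure through just three operations: insert a cell keyed by its distance, delete it, and ask at a \textsc{center}\xspace event whether the cell is visible. The following Python class is a deliberately naive in-memory stand-in for this structure (the query is a linear prefix scan, so it is quadratic overall); the analysis above assumes a balanced search tree augmented with subtree maxima, as in Van Kreveld's algorithm, which supports all three operations in logarithmic time. The names and the strict-inequality convention in the visibility test are ours.

\begin{verbatim}
import bisect

class ActiveStructure:
    # Naive stand-in for the active structure: cells currently intersected
    # by the sweep ray, keyed by their distance from the viewpoint.
    def __init__(self):
        self._keys = []     # distances, in increasing order
        self._angles = []   # elevation angle stored with each key

    def insert(self, dist, elev_angle):
        k = bisect.bisect_left(self._keys, dist)
        self._keys.insert(k, dist)
        self._angles.insert(k, elev_angle)

    def delete(self, dist):
        k = bisect.bisect_left(self._keys, dist)
        del self._keys[k]
        del self._angles[k]

    def is_visible(self, dist):
        # Visible if no strictly closer cell on the ray reaches a larger
        # elevation angle (a prefix-maximum query in the real structure).
        k = bisect.bisect_left(self._keys, dist)
        own = self._angles[k]
        return all(a <= own for a in self._angles[:k])
\end{verbatim}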
\vspace{\baselineskip} {\bf An algorithm for very large inputs.} When the assumption $M^2/B \geq c n$ does not hold, a radial sweep based on distribution into sectors is still possible: one can use the recursive distribution sweep algorithm from our previous work~\cite{havertoma:visibility-journal} and apply the ideas described above to reduce the size of the event stream. The result is an algorithm that runs in $O(n \log n)$ time and $O(\mathsf{sort}(n))$ I/Os\xspace. We sketch the main ideas below. First we note that, for large $n$, the ``diagonal'' of the grid is larger than $M$ and does not fit in a sector. The splitter values for sectors can still be computed without any I/O\xspace because they depend solely on the position $(i,j)$ of the points with respect to $v$, and not on their elevation. For example, one could do a pass through the points in $\textsc{enter}\xspace(i,j)$ order using an I/O-efficient priority queue, in the same way as during the sweep, but without accessing the elevation. Using a counter we can keep track of the number of points processed, and output every $\Theta(M)$-th one as a splitter. Given the splitters, we can proceed recursively: first distribute the grid into $\Theta(M/B)$ sectors, and then distribute each sector recursively until each sector has size $\Theta(M)$. This takes $\Theta(\log_{M/B} n)$ passes over the grid. Thus, distribution into $\Theta(M)$-sized sectors can be performed in $\Theta(\mathsf{sort}(n))$ I/Os\xspace. If $n > M^2$, the active structure does not fit in memory and the sweep of the sectors with a common active structure does not work, even though each sector has size $\Theta(M)$. We need to refine the distribution to carefully handle cells that span more than one sector, so that we can process each sector individually. This can be done I/O-efficiently in $\Theta(\mathsf{sort}(n))$ I/Os\xspace and we refer to our previous algorithm for details~\cite{havertoma:visibility-journal}. \section{A centrifugal sweep algorithm}\label{sec:conc-sweep} In this section we describe our third algorithm for computing the visibility map. It uses a complementary approach to the radial sweeps of the previous sections: it sweeps the terrain centrifugally, by growing a region $R$ around the viewpoint. This region is kept \emph{star-shaped} around~$v$: for any point $u$ inside $R$, the line segment from $(u_x,u_y)$ to $(v_x,v_y)$ lies entirely inside~$R$. The idea is to grow $R$ point by point until it covers the complete grid, while maintaining the horizon $\sigma_R$ of $R$. Recall that the horizon $\sigma_R$ is a function from azimuth angles to elevation angles, such that $\sigma_R(\alpha)$ is the maximum elevation angle of any point on the intersection of $R$ with the ray that extends from $v$ in direction $\alpha$. Whenever a new point $u$ is added to $R$, we decide whether it is visible. The star shape of $R$ guarantees that all points along the line of sight from $v$ to $u$ have already been added, so we can in fact decide whether $u$ is visible by determining whether $u$ is visible above the horizon of $R$ just before adding $u$ (see Figure~\ref{fig:centrifugal}).
The key to a good performance is to have a way of growing $R$ that results in an efficient disk access pattern, and to have an efficient way of maintaining the horizon structure. Below we explain how to do this, given an elevation grid with a fixed number of bytes per grid point. For ease of description, we remind the reader of our notation: we assume that the viewpoint $v$ is in the center of the grid at coordinates $(0,0,0)$ and the grid has size $(2m + 1) \times (2m + 1)$, where $m = (\sqrt n - 1)/2$. The elevations of the grid points are given in a two-dimensional array $Z$ that is ordered row by row, with rows numbered from $-m$ to $m$ from north to south and columns numbered from $- m$ to $m$ from west to east. By $p(i,j)$ we denote the grid point $q = (q_x,q_y,q_z)$ in row $i$ and column $j$ with coordinates $q_x = j$, $q_y = -i$ and $q_z = Z_{ij}$; by \emph{cell $(i,j)$} we denote the square $D(p(i,j))$. To maintain the horizon efficiently, we represent it by a grid model as well: more precisely, it is maintained in an array $S$ of $32m$ slots, where slot $k$ stores the highest elevation angle in $R$ that occurs within the azimuth angle range from $k\cdot 2\pi /(32m)$ to $(k+1)\cdot 2\pi / (32m)$. We grow the region $R$ cache-obliviously using a recursive algorithm. Initially we call this algorithm with the smallest square that contains the full grid and whose width is a power of two. When called on a square of size larger than one, it makes recursive calls on each of the four quadrants of the square, in order of increasing distance of the quadrants from~$v$. For a square tile with upper left corner $(i,j)$ and width $s$, this distance $\mathrm{TileDist}(i,j,s)$ is the distance from $v$ to the closest point of the tile. This is determined as follows. Let $v_i, v_j$ be the row and column of the viewpoint. \begin{itemize} \item when $i \leq v_i < i + s$ and $j \leq v_j < j + s$, then the tile contains~$v$, and $\mathrm{TileDist}(i,j,s) = 0$; \item otherwise, when $i \leq v_i < i + s$, the tile intersects the row that contains $v$, and $\mathrm{TileDist}(i,j,s) = \min(|j - v_j|, |j + s - 1 - v_j|)$; \item otherwise, when $j \leq v_j < j + s$, the tile intersects the column that contains $v$, and $\mathrm{TileDist}(i,j,s) = \min(|i - v_i|, |i + s - 1 - v_i|)$; \item otherwise, $\mathrm{TileDist}(i,j,s) = \min(|i - v_i| + |j - v_j|, |i - v_i| + |j + s - 1 - v_j|, |i + s - 1 - v_i| + |j - v_j|, |i + s - 1 - v_i| + |j + s - 1 - v_j|)$. \end{itemize} When called on a square of size~1, that is, a square that contains only a single grid point~$p(i,j)$, we proceed as follows. We retrieve the elevation $Z_{ij}$ of~$p(i,j)$ from the input file and compute its azimuth angle $\mathrm{AzimAngle}(p(i,j))$ and its elevation angle $\mathrm{ElevAngle}(p(i,j))$. Then we check if $p(i,j)$ is visible: this is the case if and only if $p(i,j)$ appears above the current horizon in the direction of $p(i,j)$; that is, if and only if $\mathrm{ElevAngle}(p(i,j)) > S[\lfloor \mathrm{AzimAngle}(p(i,j))/2\pi \cdot 32m\rfloor]$. The visibility of $p(i,j)$ is recorded in the output grid $V$. Next we update the horizon to reflect the inclusion of $p(i,j)$ in $R$. To this end we check all slots in the horizon array whose azimuth angle range intersects the azimuth angle range of cell $(i,j)$; let ${\cal A}(p(i,j))$ denote this set of slots.
For each slot of ${\cal A}(p(i,j))$ that currently stores an elevation angle lower than $\mathrm{ElevAngle}(p(i,j))$, we raise the elevation angle to $\mathrm{ElevAngle}(p(i,j))$. We thus have the following algorithm: \addvspace{\baselineskip} \noindent \textbf{Algorithm} \textsc{CentrifugalSweep}:\\[.25\baselineskip] create horizon array $S[0..32m-1]$\\ \textbf{for }$k \leftarrow 0$ \textbf{to} $32m-1$ \textbf{do} $S[k] \leftarrow -\infty$\\ $s \leftarrow$ smallest power of two $\geq 2m+1$\\ $\textsc{SweepRecursively}(\mathit{-m,-m,s})$ \addvspace{\baselineskip} \noindent \textbf{Algorithm} $\textsc{SweepRecursively}(i,j,s)$:\\ \textit{(Recursively computes visibility for the tile with upper left cell $(i,j)$ and width $s$)} \\[.25\baselineskip] \textbf{if }$s = 1$\\ \textbf{then }$\alpha \leftarrow \mathrm{AzimAngle}(p(i,j))$\\ \hphantom{\textbf{then }}$\beta \leftarrow \arctan(Z_{ij}/\mathrm{Dist}(i,j))$\hfill($=\mathrm{ElevAngle}(p(i,j))$)\\ \hphantom{\textbf{then }}\textbf{if} $\beta > S[\lfloor \alpha/2\pi \cdot 32m\rfloor]$ \textbf{then} $V_{ij} \leftarrow 1$ \textbf{else} $V_{ij} \leftarrow 0$\\ \hphantom{\textbf{then }}$\alpha^{-} \leftarrow$ smallest azimuth of any corner of cell $(i,j)$\\ \hphantom{\textbf{then }}$\alpha^{+} \leftarrow$ largest azimuth of any corner of cell $(i,j)$\\ \hphantom{\textbf{then }}\textbf{for }$k \leftarrow \lfloor \alpha^{-}/2\pi \cdot 32m\rfloor$ \textbf{to} $\lceil \alpha^{+}/2\pi \cdot 32m\rceil-1$\\ \hphantom{\textbf{then }}\textbf{do }$S[k] \leftarrow \max(S[k], \beta)$\\ \textbf{else }\textit{Let $Q$ be the four subquadrants:}\\ \hphantom{\textbf{then }}$s \leftarrow s/2$\\ \hphantom{\textbf{then }}$Q \leftarrow \{\langle i,j,s\rangle,\langle i+s,j,s\rangle,\langle i,j+s,s\rangle,\langle i+s,j+s,s\rangle\}$\\ \hphantom{\textbf{then }}sort the elements $\langle i,j,s\rangle$ of $Q$ by incr. $\mathrm{TileDist}(i,j,s)$\\ \hphantom{\textbf{then }}\textbf{for }$\langle i,j,s\rangle \in Q$\\ \hphantom{\textbf{then }}\textbf{do }$\textsc{SweepRecursively}(i,j,s)$ \begin{figure}[t] \centering{ \includegraphics[height=2in]{centrifugal.pdf} \caption{$R$ (dark shade) is a star-shaped region of the terrain (light shade) around $v$. The horizon of $R$ is maintained in array $S$. When $u$ is added to $R$, the elevation angles in the black slots of $S$ are updated. } \label{fig:centrifugal} } \end{figure} \subsection{Accuracy of the centrifugal sweep} Note that when the algorithm updates the horizon array, the elevation angle of $p(i,j)$ may be used to raise the elevation angles of a set of horizon array slots ${\cal A}(p(i,j))$, of which the total azimuth range may be slightly larger than that of the cell corresponding to $p(i,j)$---this is due to the rounding of the azimuth angles $\alpha^{-}$ and $\alpha^{+}$ in the algorithm. However, this is not a problem: the azimuth angles of grid points that lie next to each other (as seen from the viewpoint) differ by at least roughly $1/m$. The size of the horizon array is chosen such that its horizontal resolution is more than four times finer: it divides the full range of azimuth angles from $0$ to $2\pi$ over $32m$ slots, each of which covers an azimuth angle range of $2\pi / (32m) < 1/(4m)$. Therefore, if the resolution of the horizon array were insufficient, then the resolution of the original elevation grid would surely be insufficient as well.
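The base case of \textsc{SweepRecursively}---the visibility test against the horizon array followed by the slot update---can be sketched in a few lines of Python. This is an illustrative in-memory fragment rather than our implementation: $Z$ and $V$ are assumed to be mappings from grid positions $(i,j)$ to elevation and visibility values, the function names are ours, and cells straight right of the viewpoint (whose azimuth range wraps around $0$) would need special treatment.

\begin{verbatim}
import math

def azimuth(x, y):
    return math.atan2(y, x) % (2.0 * math.pi)

def process_point(i, j, Z, V, S, m):
    # One base case of the centrifugal sweep: decide visibility of p(i, j)
    # against the horizon array S (32*m slots), then raise the slots that
    # cell (i, j) covers.  Z and V map (i, j) to elevation / visibility.
    nslots = 32 * m
    alpha = azimuth(j, -i)                            # AzimAngle(p(i, j))
    beta = math.atan2(Z[(i, j)], math.hypot(i, j))    # ElevAngle(p(i, j))
    V[(i, j)] = 1 if beta > S[int(alpha / (2 * math.pi) * nslots)] else 0
    corners = [azimuth(j + dx, -i + dy)
               for dx in (-0.5, 0.5) for dy in (-0.5, 0.5)]
    lo = int(math.floor(min(corners) / (2 * math.pi) * nslots))
    hi = int(math.ceil(max(corners) / (2 * math.pi) * nslots)) - 1
    for k in range(lo, hi + 1):     # slots whose range intersects the cell
        S[k] = max(S[k], beta)
\end{verbatim}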
\subsection{Efficiency of the centrifugal sweep} The number of recursive calls made by the region-growing algorithm is $O(n)$. The only part of any recursive call that takes more than constant time is the updating of the horizon. We analyse this layer by layer, where this time layer $l$ is defined as the cells $(i,j)$ such that $|i| + |j| = l$. There are $O(\sqrt n)$ layers, and on each layer, each of the $O(\sqrt n)$ slots of the horizon array is updated at most twice. Thus the total time for updating the horizon is $O(n)$, and the complete algorithm runs in $O(n)$ time. The number of I/Os under the tall-cache assumption ($M = \Omega(B^2)$) can be analysed as follows. Let $w$ be the largest power of two such that the elevation and visibility values of a square tile of $w \times w$ points fit in half of the main memory. There are $O(n/w^2) = O(n/M)$ recursive calls on tiles of this size, and for each of them the relevant blocks of the input and output files can be loaded in $O(w(w/B + 1)) = O(w^2/B + w) = O(M/B + \sqrt M) = O(M/B)$ I/Os. Thus all I/O for reading and writing blocks of the input and output files can be done in $O(n/M \cdot M/B) = O(\mathsf{scan}(n))$ I/Os in total. It remains to discuss the I/Os that are caused by swapping parts of the horizon array in and out of memory. To this end we distinguish (i) recursive calls on tiles of size $w \times w$ at distance at least $c \cdot \sqrt{n/M}$ from the viewpoint (for a suitable constant~$c$), and (ii) calls on the remaining tiles around the viewpoint. For case (i), observe that each tile $G$ of size $w \times w$ at distance at least $c \cdot \sqrt{n/M}$ from the viewpoint has an azimuth range of $O(w/\sqrt{n/M}) = O(M/\sqrt{n})$; since the horizon array has $O(\sqrt n)$ slots, $G$ spans $O(M/\sqrt{n} \cdot \sqrt n) = O(M)$ slots of the horizon array. Therefore, when $c$ is sufficiently large, the part of the horizon array that is relevant to the call on $G$ can be read into the remaining half of the main memory at once, using $O(\mathsf{scan}(M))$ I/Os. In total we get $O(n/M) \cdot O(\mathsf{scan}(M)) = O(\mathsf{scan}(n))$ I/Os for reading and writing the horizon array in instances of case~(i). For case (ii), note that we access the horizon array $O(n)$ times in total (as shown in our running time analysis above). Because the tiles of case (ii) contain only $O(n/M)$ grid points in total, the accesses to the horizon array are organised in $O(n/M)$ runs of consecutive horizon array slots. The total number of I/O-operations induced by these accesses is therefore $O(n/B + n/M) = O(\mathsf{scan}(n))$. Adding it all up, we find that the centrifugal sweep algorithm runs in $O(n)$ time and $O(\mathsf{scan}(n))$ I/Os. The algorithm does not use or control $M$ and $B$ in any way: it is cache-oblivious. The I/O-efficiency analysis for the maintenance of the horizon array is purely theoretical as far as disk I/O is concerned: the complete horizon array easily fits in main memory for files up to several trillion grid points. However, the I/O-efficiency analysis also applies to the transfer of data between main memory and smaller caches. \section{An IO-efficient algorithm using linear interpolation} \label{sec:algorithms} In this section we describe our last two algorithms for computing viewsheds, \textsc{vis-iter}\xspace and \textsc{vis-dac}\xspace. These algorithms use linear interpolation to evaluate the intersection of the line-of-sight (LOS) with the grid lines, and constitute an improved, IO-efficient version of Franklin's \texttt{R3}\xspace algorithm.
{\bf Notation.} Recall that the horizon $H_v$ (with respect to the viewpoint $v$) is the upper rim of the terrain as it appears to a viewer at $v$. Suppose we recenter our coordinate system such that $v = (0,0,0)$, and consider a \emph{view screen} around the viewer that consists of the Cartesian product of the vertical axis and the square with vertices $(1, 0)$, $(0, 1)$, $(-1, 0)$ and $(0, -1)$. The projection of a point $p = (p_x, p_y, p_z)$ towards $v$ onto the view screen has coordinates: $p / (|p_x| + |p_y|)$. Note that any line segment that does not cross the north-south or east-west axis through $v$ will appear as a line segment in the projection onto the view screen. We now define the horizon of the terrain as it appears in the projection. More precisely, for $t \in [0,2]$, we define the horizon $H_v(t)$ as the maximum value of $p_z / (|p_x| + |p_y|)$ over all terrain points $p$ such that $p_x / (|p_x| + |p_y|) = 1 - t$ and $p_y \leq 0$ (this defines the horizon of the terrain south of the viewpoint). For $t \in [2,4]$, we define the horizon $H_v(t)$ as the maximum value of $p_z / (|p_x| + |p_y|)$ over all terrain points $p$ such that $p_x / (|p_x| + |p_y|) = t - 3$ and $p_y \geq 0$ (this defines the horizon of the terrain north of the viewpoint). {\bf The model.} We consider two models, shown in Figure~\ref{fig:model}: In the \emph{gridlines} model the grid points are connected by vertical and horizontal lines in a grid, and visibility is determined by evaluating the intersections of the LOS with the grid lines. The gridlines model is the model used by \texttt{R3}\xspace. We also consider a slightly different model, the \emph{layers} model, in which the grid points are connected in concentric layers around the viewpoint and visibility is determined by evaluating the intersections of the LOS with the layers. The layers model is a relaxation of the gridlines model because it considers only a subset of the intersections (obstacles) considered by the gridlines model; any point visible from $v$ in the gridlines model is also visible in the layers model (but not necessarily the other way around), and the viewshed generated by the gridlines model is a subset of the viewshed generated with the layers model. \begin{figure}[tb] \centering{ \begin{tabular}{c c} \includegraphics[width=3.5cm]{model-grid.pdf} & \includegraphics[width=3.5cm]{model-layers.pdf} \\ (a) & (b)\\ \end{tabular} \caption{\small{(a) The gridlines model: visibility is determined by the intersections of the LOS with all the grid lines. (b) The layers model: visibility is determined by the intersections of the LOS with the layers. }} \label{fig:model} } \end{figure} {\bf General idea and comparison to previous algorithms.} Our algorithms \textsc{vis-dac}\xspace and \textsc{vis-iter}\xspace use an overall approach that is a combination of our radial sweep algorithm (Section~\ref{sec:radial-layers}) which partitions the grid into bands, and our centrifugal sweep algorithm (Section~\ref{sec:conc-sweep}) which traverses the grid in layers around the viewpoint and maintains the horizon of the region traversed so far. \begin{itemize} \item Recall that the radial sweep algorithm from Section~\ref{sec:radial-layers} consists of three phases: (1) partition the elevation grid in bands; (2) rotate the ray and compute visibility bands; (3) sort the visibility bands into a visibility grid. Phase 2 accesses data sequentially from all bands while the ray rotates around the viewpoint. The width of a band is chosen as $w = \Theta(\sqrt M)$.
Our algorithms \textsc{vis-iter}\xspace and \textsc{vis-dac}\xspace have the same first and third phase, but in phase (2) they process the bands one at a time. The size of a band is set so that a band fits fully in memory. \item The centrifugal sweep algorithm from Section~\ref{sec:conc-sweep} uses horizons which are stored discretized in an array. Our algorithms \textsc{vis-iter}\xspace and \textsc{vis-dac}\xspace use linear interpolation and therefore horizons are piecewise-linear functions and are stored as a list of $\{\mathit{azimuth}, \mathit{zenith}\}$ pairs with full precision. \end{itemize} We start by describing how to compute viewsheds in the layers model in Sections~\ref{sec:visiter} and~\ref{sec:visdac}; in Section~\ref{sec:gridlines} we show how our algorithms can be extended to the gridlines model while maintaining the same worst-case complexity. \subsection{An iterative algorithm: \textsc{vis-iter}\xspace } \label{sec:visiter} This section describes our first viewshed algorithm in the layers model, \textsc{vis-iter}\xspace. The main idea of \textsc{vis-iter}\xspace is to traverse the grid in layers around the viewpoint, one layer at a time, while maintaining the horizon of the region traversed so far. The horizon is maintained as a sequence of points $(t,H_v(t))$, sorted by $t$-coordinate, between which we interpolate linearly. When traversing a point $p$, the algorithm uses the maintained horizon to determine if $p$ is visible or invisible. In order to do this IO-efficiently, it divides the grid into bands around the viewpoint and processes one band at a time. The output visibility grid is generated band by band, and is sorted into a grid in the final phase of the algorithm. The size of a band is chosen such that it fits in memory. Below we explain these steps in more detail. {\bf Notation.} The notation is the same as before and we review it for clarity: we assume that the viewpoint $v$ is in the center of the grid at coordinates $(0,0,0)$ and the grid has size $(2m+1) \times (2m+1)$, where $m = (\sqrt n -1) /2$. The elevations of the grid points are given in a two-dimensional matrix $Z$ that is ordered row by row, with rows numbered from $-m$ to $m$ from north to south, and columns numbered from $-m$ to $m$ from west to east. By $p(i,j)$ we denote the grid point $q=(q_x, q_y, q_z)$ in row $i$ and column $j$ with coordinates $q_x = j, q_y = -i$ and $q_z= Z_{ij}$. For $l\ge 0$, let \emph{layer} $l$ of the grid, denoted $L_l$, be the set of grid points whose $L_\infty$-distance from the viewpoint, measured in the horizontal plane, is $l$. By definition, $L_0$ consists of only one point, $v$. We divide the grid in concentric bands around the viewpoint: For $k>0$, band $k$, denoted $B_k$, contains all points in layers $w_{k-1}$ (inclusive) to $w_k$ (exclusive), where the $w_k$ ($k \ge 0$) denote the indices of the layers corresponding to the band boundaries. Thus band $B_1$ contains all points in layers $w_0=1$ up to (but not including) $w_1$, and so on. The algorithm starts with a preprocessing step which, given a parameter $K$, computes the band boundaries $w_k$ ($k \ge 1$) such that a band has size $\Theta(K)$ as follows: it cycles through each layer $l$ in the grid, computes (analytically) the number of points in that layer, and checks whether including this layer in the current band makes the band go over $K$ points. If yes, then layer $l$ marks the start of the next band. Otherwise, it adds the points in layer $l$ to the current band and continues.
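This preprocessing step can be sketched directly in Python (an illustrative sketch only; it uses the fact that layer $l$ of the grid contains exactly $8l$ points, with the parameter $K$ as introduced above, and the function name is ours):

\begin{verbatim}
def band_boundaries(m, K):
    # Greedy band formation: add whole layers (layer l has 8*l points)
    # until adding the next layer would make the band exceed K points.
    # Returns [w_0, w_1, ...]; band k spans layers w_{k-1} .. w_k - 1.
    boundaries = [1]        # w_0 = 1: band 1 starts at layer 1
    current = 0             # points in the band under construction
    for l in range(1, m + 1):
        points_in_layer = 8 * l
        if current > 0 and current + points_in_layer > K:
            boundaries.append(l)        # layer l starts the next band
            current = 0
        current += points_in_layer
    boundaries.append(m + 1)            # close the last band
    return boundaries
\end{verbatim}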
The maximum size $K$ of a band is chosen such that a band fills roughly a constant fraction of memory, and each band is at least one layer wide. More precisely, we choose $K = c_1 M$ and assume $\sqrt n \leq c_1 M$, for a sufficiently small constant $c_1$ which will be defined more precisely later. Thus the number of bands, $N_{bands}$, is $O(n/M)$. Once the band boundaries are set, the algorithm proceeds in three phases. The first phase is to generate, for each band $B_k$, a list $E_k$ containing the elevations of all points in the band. It does this by scanning the grid in row-column order: for each point $p(i,j)$, it calculates the index $k$ of the band that contains the point and writes $Z_{ij}$ to $E_k$. We note that the first phase writes the lists $E_k$ sequentially, and thus list $E_k$ contains the points in the order in which they are encountered during the (row-by-row) scan of the grid. The algorithm is given below. \noindent \textbf{Algorithm} \textsc{BuildBands}:\\[.25\baselineskip] load list containing band boundaries in memory \\ \textbf{for }$k \leftarrow 1$ \textbf{to} $N_{bands}$\\ \textbf{do }initialize empty list $E_k$\\ \textbf{for } $i\leftarrow -m$ \textbf {to} $m$\\ \textbf{do } \textbf{for } $j\leftarrow -m$ \textbf {to} $m$\\ \hphantom{\textbf{do }} \textbf{do} read next elevation $Z_{ij}$ from grid \\ \hphantom{\textbf{do }\textbf{do }} $k \leftarrow \textrm{band containing point } (i,j)$\\ \hphantom{\textbf{do }\textbf{do } }append $Z_{ij}$ to $E_k$ Given the lists $E_k$, the second phase of the algorithm computes which points are visible. To do this it traverses the grid one band at a time, reading the list $E_k$ into memory. Once a band is in memory, it traverses it layer by layer from the viewpoint outward, counter-clockwise in each layer. The output of the second phase is a set of lists $V_k$ with visibility values, one list for each band. While traversing the grid in this fashion the algorithm maintains the horizon of the region encompassed so far. More precisely, let $L_{1,i}$ ($i\ge 1$) denote the set of points in layers $L_1$ through $L_i$. Before traversing the next layer $L_{i+1}$, the algorithm knows the horizon $H_{1,i}$ of $L_{1,i}$. While traversing the points in $L_{i+1}$, the algorithm determines for each point $p$ if it is above or below the horizon $H_{1,i}$ and records this in $V_k$. At the same time it updates $H_{1,i}$ on the fly to obtain $H_{1,i+1}$. To do so, the algorithm computes, for each point $p$, the projection $h$ onto the view screen of the line segment that connects $p$ to the previous point in the same layer, computes the intersection of $h$ with the current horizon as represented by $H_{1,i}$, and then updates $H_{1,i}$ to represent the upper envelope of the current horizon and $h$. After traversing the entire grid in this manner, the set of points that have been marked visible during the traversal constitutes the viewshed of $v$.
The algorithm is given below only for the first octant; the other octants are handled similarly: \noindent \textbf{Algorithm} \textsc{VisBands-ITER}:\\[.25\baselineskip] $H_{1,0} \leftarrow \emptyset$\\ \textbf{for }$k \leftarrow 1$ \textbf{to} $N_{bands}$ \\ \textbf{do } load list $E_k$ in memory\\ \hphantom{\textbf{do }} create list $V_k$ in memory and initialize it as all invisible\\ \hphantom{\textbf{do }} \textbf{for } $l \leftarrow w_{k-1}$ to $w_k - 1$ //for each layer in the band \\ \hphantom{\textbf{do }} \textbf{do } //traverse layer $l$ in ccw order\\ \hphantom{\textbf{do } \textbf{do }} \textbf{for } $r \leftarrow 0$ to $-l$ //first octant\\ \hphantom{\textbf{do } \textbf{do }} \textbf{do } get elevation $Z_{rl}$ of $p(r,l)$ from $E_k$\\ \hphantom{\textbf{do } \textbf{do } \textbf{do }} determine if $Z_{rl}$ is above $H_{1,l-1}$\\ \hphantom{\textbf{do } \textbf{do } \textbf{do }} if visible, set value $V_{rl}$ in $V_k$ as visible\\ \hphantom{\textbf{do } \textbf{do } \textbf{do }} $h \leftarrow$ projection of $p(r-1,l)p(r,l)$\\ \hphantom{\textbf{do } \textbf{do } \textbf{do }} merge $h$ into horizon $H_{1,l-1}$\\ \hphantom{\textbf{do } \textbf{do }} $H_{1,l} \leftarrow H_{1,l-1}$\\ The third and final phase of the algorithm creates the visibility grid $V$ from the lists $V_k$. We note that in phase~2 the lists $V_k$ are stored in the same order as $E_k$, that is, the order in which the points in the band are encountered during a row-by-row scan of the grid; keeping points in this order is convenient because it saves an additional sort, and at the same time it is precisely the order in which they are needed by phase 3. Phase 3 is the reverse of phase 1: for each point $(i,j)$ in the grid in row-major order, it computes the band $k$ where it falls, accesses list $V_k$ to retrieve the visibility value of point $(i,j)$, and writes this value to the output grid $V$. The crux of this phase is that it simply reads the lists $V_k$ sequentially. The algorithm is given below: \noindent \textbf{Algorithm} \textsc{CollectBands}:\\[.25\baselineskip] load list containing band boundaries in memory\\ initialize empty list $V$\\ \textbf{for } $i\leftarrow -m$ \textbf {to} $m$\\ \textbf{do } \textbf{for } $j\leftarrow -m$ \textbf {to} $m$\\ \hphantom{\textbf{do }} \textbf{do} $k \leftarrow \textrm{band containing point } (i,j) $\\ \hphantom{\textbf{do }\textbf{do }} get value $V_{ij}$ of point $(i,j)$ from $V_k$\\ \hphantom{\textbf{do }\textbf{do } }append $V_{ij}$ to list $V$
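Before analyzing the algorithm, we make the two geometric primitives of phase~2 concrete: projecting a terrain point onto the view screen, and testing a projected point against the piecewise-linear horizon. The Python fragment below is an illustrative sketch only (the names and the handling of points outside the recorded horizon range are ours); the actual implementation performs the test and the merge of the projected segments during the same counter-clockwise pass over the horizon, as described in the efficiency analysis below.

\begin{verbatim}
import bisect

def project_to_screen(px, py, pz):
    # Projection of a terrain point (coordinates relative to the viewpoint)
    # onto the view screen: returns (t, height), with t in [0, 4) the
    # horizontal screen parameter used in the definition of H_v.
    s = abs(px) + abs(py)
    x, height = px / s, pz / s
    t = (1.0 - x) if py <= 0 else (3.0 + x)
    return t, height

def above_horizon(horizon, t, height):
    # True if (t, height) lies strictly above the piecewise-linear horizon,
    # given as a list of (t, z) vertices with strictly increasing t.
    k = bisect.bisect_right(horizon, (t, float("inf"))) - 1
    if k < 0:
        return True                     # nothing recorded to the left of t
    if k == len(horizon) - 1:
        return height > horizon[k][1]   # at or beyond the last vertex
    (t0, z0), (t1, z1) = horizon[k], horizon[k + 1]
    z = z0 + (z1 - z0) * (t - t0) / (t1 - t0)
    return height > z
\end{verbatim}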
{\bf Efficiency analysis of \textsc{vis-iter}\xspace.} We now analyze each phase in \textsc{vis-iter}\xspace under the assumption that $n \leq c M^2$ for a sufficiently small constant $c$. The pre-processing phase runs in $O(n)$ time and uses no I/O\xspace (it does not access the grid). The output of this step is a list of $O(N_{bands}) = O(n/M)$ band boundaries, which fits in memory assuming that $n \leq c M^2$ for a sufficiently small constant $c$. The first phase, \textsc{BuildBands}, reads the points of the elevation grid in row-column order, which takes $O(n)$ time and $O(\mathsf{scan}(n))$ I/Os\xspace. With the list of band boundaries in memory, the band containing a point $(i,j)$ can be computed with, for example, binary search in $O(\lg (n/M)) = O(\lg n)$ time and no I/O\xspace. The lists $E_k$ are written to in sequential order. If one block from each band fits in memory, which happens when $n \leq c M^2 / B$ for a sufficiently small constant $c$ (so that $N_{bands} = O(n/M) = O(M/B)$), then writing the lists $E_k$ directly takes $O(\mathsf{scan}(n)) = O(\mathsf{sort}(n))$ I/Os\xspace (note that $O(\mathsf{sort}(n))$ and $O(\mathsf{scan}(n))$ are equal if $n = O(M^2/B)$). If we cannot keep one block of each band in memory, that is, $n > c M^2/B$, then we perform a hierarchical distribution as follows: we group the $N_{bands}$ bands in $O(M/B)$ super-bands, keep a write buffer of one block for each super-band in memory, distribute the points in the grid to these super-bands, and recurse on the super-bands to distribute the grid points to individual bands. A pass takes $O(\mathsf{scan}(n))$ I/Os\xspace, overall there are $O(\log_{M/B} N_{bands}) = O(\log_{M/B} (n/M))$ passes, and thus the first phase has I/O-complexity $O(\mathsf{sort}(n))$. In total, the first phase takes $O(n \lg n)$ time and $O(\mathsf{sort}(n))$ I/Os. The second phase, \textsc{VisBands-ITER}, takes as input the lists $E_k$ and computes the visibility bands $V_k$. We choose $K = c_1 M$ such that the elevations $E_k$ and the visibility map $V_k$ of any band $B_k$ of size $K$ fit in two thirds of the memory; the remaining third of the memory is saved for the horizon structure. While processing a band $B_k$ in the second phase, the points in $E_k$ and $V_k$ are not accessed sequentially. However, given the band boundaries, the location of any point in a band can be determined analytically, and thus the value (elevation or visibility) of any point in a band can be accessed in constant time, without any search structure, and without any I/O\xspace. Let us denote by $H_{tot}$ the total cumulative size of all partial horizons $H_{1,l}$: $H_{tot} = \sum_{l=1}^{\sqrt n} |H_{1,l}|$. The horizon $H_{1,l}$ is maintained as a list $\{ (t, h)\}$ of horizontal and vertical coordinates on the view screen, sorted counter-clockwise (ccw) around the viewpoint. As the algorithm traverses a layer $l$ in ccw order, it also traverses $H_{1,l-1}$ in ccw order, and constructs $H_{1,l}$ in ccw order. To determine whether a point is above the horizon, it is compared with the last segment in the horizon; if the point is above the horizon, it is added to the horizon. Thus the traversal of a layer $l$ runs in $O(|L_l| + |H_{1,l-1}| + |H_{1,l}|)$ time. Over the entire grid, phase~2 runs in $O(\sum_l(|L_l| + |H_{1,l}|)) = O(n + H_{tot})$~time. We now turn to the I/O-complexity of the second phase: the algorithm reads $E_k$ into memory, and writes $V_k$ to disk at the end. Over all the bands this takes $O(\mathsf{scan}(n))$ I/Os\xspace. If the horizon $H_{1,l}$ is small enough so that it fits in memory (for any $l$), then accessing the horizon does not use any I/O\xspace. If the horizon does not fit in memory, we need to add the cost of traversing the horizon in ccw order, for every layer, which is $O(\mathsf{scan}(\sum_{l=1}^{\sqrt n} |H_{1,l}|)) = O(\mathsf{scan}(H_{tot}))$ I/Os\xspace. Finally, the third phase, \textsc{CollectBands}, takes as input the lists $V_k$ and the list of band boundaries and writes the visibility map. For $n \leq c M^2$, the list of band boundaries fits in memory. For any point $(i,j)$ the band containing it can be computed in $O(\lg n)$ time and no I/O\xspace. The bands $V_k$ store the visibility values in the order in which they are encountered in a (row-column) traversal of the grid.
Thus, once the index $k$ of the band that contains point $(i,j)$ is computed, the visibility value of this point is simply the next value in $V_k$. As with step 1, we distinguish two cases: if the number of bands is such that one block from each band fits in memory, then this step runs in $O(n)$ time and $O(\mathsf{sort}(n)) = O(\mathsf{scan}(n))$ I/Os\xspace. Otherwise, this step first performs a multi-level $M/B$-way merge of the bands into $O(M/B)$ super-bands so that one block from each can reside in main memory; in this case, the complexity of the step is $O(n \lg n)$ time and $O(\mathsf{sort}(n))$ I/Os\xspace. Putting everything together, we have the following: \begin{theorem} \label{th:visiter} The algorithm \textsc{vis-iter}\xspace computes viewsheds in the layers model in $O(n \lg n + H_{tot})$ time and $O(\mathsf{sort}(n) + \mathsf{scan}(H_{tot}))$ I/Os\xspace, provided that $n \leq c M^2$ for a sufficiently small constant $c$. \end{theorem} Furthermore, if $n= O(M^2/B)$ and the partial horizons $H_{1,l}$ are small enough to fit in memory for any $l$, the overall I/O-complexity becomes $O(\mathsf{scan}(n))$~I/Os\xspace. We note that when $n = \Omega(M^2)$ the algorithm can be adapted using standard techniques to run in the same bounds as in Theorem~\ref{th:visiter}; we do not elaborate on this because it has no relevance in practice. {\bf Discussion.} Phases 1 and 3 of the algorithm are very simple and perform a scanning pass over the grid and the bands, provided that $n \leq c M^2/B$: Phase 1 reads the input elevation grid sequentially and writes the elevation bands sequentially; Phase~3 reads the visibility bands sequentially and writes the visibility grid sequentially. We found this condition to be true in practice on our largest test grid ($28$\,GB) and with as little as $0.5$\,GB of RAM. With a more realistic value of $M=8$\,GB (and $B=16$\,KB), the condition is true for $n$ up to $10^{15}$ points. Thus, handling the sub-case $n \leq c M^2/B$ separately in the algorithm provides a simplification and a speed-up without restricting generality. Phase 2, which scans the partial horizons $H_{1,l}$ for every layer, runs in $O(n + H_{tot})$ time and $O(\mathsf{scan}(n + H_{tot}))$ I/Os\xspace. As we will prove below, in the worst case $|H_{1,l}| = \Theta(l^2)$, and the running time of the second phase could be as high as $O(H_{tot}) = O(\sum_{l=1}^{O(\sqrt n)} \Theta(l^2)) = O(n \sqrt n)$, with handling the horizon dominating the running time. The worst-case complexity is high but, on the other hand, if the $H_{1,l}$ are small, they fit in memory and the algorithm is fast. In particular, if $H_{tot}$ is $O(n)$, then phase 2 is linear. This seems to be the case on all terrains and all viewpoints that we tried and may be a feature of realistic terrains. In Section~\ref{sec:results} we will discuss our empirical findings in more detail. \newcommand{\xwd}[1]{\mathrm{width}(#1)} {\bf Worst-case complexity of the horizon.} Since the horizon is the upper envelope of the projections of grid line segments onto the view screen, its complexity is at most $O(n \alpha(n))$, where $n$ is the number of line segments~\cite{HS86,WS88}. We will now show that a better bound can be proven for our setting. Let the \emph{width} $\xwd{s}$ of a line segment $s$ be the length of its projection on a horizontal line. We need the following lemma. \begin{figure} \centering \includegraphics[width=0.8\hsize]{boundedwidth.pdf} \caption{Illustration of the proof of Lemma~\ref{lem:boundedwidth}.
The white subsegments take the charge for $s_{b,q}$; together they are at least as wide as $s_b$, since they stick out from under $s_b$ on both sides.} \label{fig:boundedwidth} \end{figure} \begin{lemma}\label{lem:boundedwidth} If $S$ is a set of $n$ line segments in the plane, such that the widths of the line segments of $S$ do not differ by more than a factor $d$, then the upper envelope of $S$ has complexity $O(dn)$. \end{lemma} \begin{proof} Let $s_1,...,s_n$ be the segments of $S$. Each segment $s_i$ consists of a number of maximal subsegments such that the interior of each subsegment lies either entirely on or entirely below the upper envelope. Let the subsegments of $s_i$ be indexed by $s_{i,j}$, such that the subsegments of $s_i$ from left to right are indexed by consecutive values of $j$, and such that $s_{i,j}$ is part of the upper envelope if and only if $j$ is odd. Let $u_1,...,u_m$ be the line segments of the upper envelope. We consider two categories of line segments on the upper envelope: (i) segments that have at least one endpoint that is an endpoint of a segment of $S$; (ii) segments neither of whose endpoints is an endpoint of a segment in $S$. Clearly, there can be only $O(n)$ segments of category (i), one segment to the left of each endpoint of a segment in $S$ and one segment to the right of each endpoint. We analyze the number of segments of category (ii) with the following charging scheme. Given a segment $u_h = s_{b,q}$ of category (ii), let $s_{a,p}$ be the segment $u_{h-1}$ and let $s_{c,r}$ be the segment $u_{h+1}$. We charge $u_h$ to $s_{a,p+1}$ and $s_{c,r-1}$. Observe that with this scheme, each segment $s_{i,j}$ can only be charged twice, namely by the successor of $s_{i,j-1}$ on the upper envelope and by the predecessor of $s_{i,j+1}$ on the upper envelope. Since each segment $s_i$ has only one leftmost and only one rightmost subsegment, and each is charged at most twice (in fact, once), there are at most $O(n)$ segments of category (ii) that put charges on leftmost or rightmost subsegments. If neither $s_{a,p+1}$ is the rightmost subsegment of $s_a$ nor $s_{c,r-1}$ is the leftmost subsegment of $s_c$, then $s_a$ must appear on the upper envelope again somewhere to the right of the right end of $s_b$, and $s_c$ must appear on the upper envelope again somewhere to the left of the left end of $s_b$ (see Figure~\ref{fig:boundedwidth}). Therefore $\xwd{s_{a,p+1}} + \xwd{s_{c,r-1}} > \xwd{s_b} \geq \frac{1}{dn}\sum_{i=1}^n \xwd{s_i}$. Since each subsegment is charged at most twice, the total width of the subsegments charged is at most $2\sum_{i=1}^n \xwd{s_i}$. Thus there are fewer than $2dn$ segments of category (ii) that put charges on subsegments that are not leftmost or rightmost. \end{proof} Note that the widths of the projections of the edges of layer $l$ on the view screen vary between $1/(l+1)$ and $1/(4l-2)$. Therefore, the widths of the projections of the edges of the $l/2$ outermost layers in a square region of $l$ layers around $v$ differ by less than a factor 8. Thus, from Lemma~\ref{lem:boundedwidth} we get: \begin{corollary}\label{cor:outerhorizon} If $S$ consists of the $O(l^2)$ edges of the $l/2$ outermost layers in a square region of $l$ layers around $v$, then the horizon of $S$ has complexity $O(l^2)$. \end{corollary} \begin{lemma}\label{lem:combine} If $S$ and $T$ are two $x$-monotone polylines of $m$ and $n$ vertices, respectively, then the upper envelope of $S$ and $T$ has at most $2(m+n)$ vertices.
\end{lemma} \begin{proof} There are two types of vertices on the upper envelope: vertices of $S$ or $T$, and intersection points between edges of $S$ and $T$. Clearly, there are at most $m + n$ vertices of the first type. Between any two consecutive vertices of the second type, there must be a vertex of the first type. Thus there are at most $m + n - 1$ vertices of the second type. \end{proof} \begin{theorem}\label{thm:gridhorizon} If $S$ consists of $l$ layers in a square region around $v$, then the horizon of $S$ has complexity $O(l^2)$ in the worst case. \end{theorem} \begin{proof} Let $T(l)$ be the complexity of the horizon of the innermost $l$ layers around $v$. By Lemma~\ref{lem:combine}, $T(l)$ is at most twice the complexity of the horizon of the innermost $l/2$ layers, plus twice the complexity of the horizon of the remaining $l/2$ layers. By Corollary~\ref{cor:outerhorizon}, the latter is $O(l^2)$, and therefore we have $T(l) \leq 2T(l/2) + O(l^2)$. This solves to $T(l) = O(l^2)$. \end{proof} \subsection{A refined algorithm: \textsc{vis-dac}\xspace} \label{sec:visdac} This section describes our second algorithm for computing viewsheds in the layers model, \textsc{vis-dac}\xspace. \textsc{vis-dac}\xspace is a divide-and-conquer refinement of \textsc{vis-iter}\xspace and uses the same general steps: it splits the grid into bands, computes visibility one band at a time, and creates the visibility grid from the bands. The first phase (\textsc{BuildBands}) and last phase (\textsc{CollectBands}) are the same as in \textsc{vis-iter}\xspace; the only phase that is different is computing visibility in a band, \textsc{VisBands-DAC}, which aims to improve the time to merge horizons in a band using divide-and-conquer. Similar to \textsc{VisBands-ITER}, \textsc{VisBands-DAC} processes the bands one at a time: for each band $k$ it loads list $E_k$ in memory, creates a visibility list $V_k$ and initializes it as all visible. It then marks as invisible all points that are below $H_{prev}$, where $H_{prev}$ represents the horizon of the first $k-1$ bands (more on this below). The bulk of the work in \textsc{VisBands-DAC} is done by the recursive function \textsc{Dac-Band}, which computes and returns the horizon $H$ of $E_k$, and updates $V_k$ with all the points that are invisible inside $E_k$. This is described in detail below. Finally, the horizon $H$ is merged with $H_{prev}$, setting it up for the next band. In order to mark as invisible the points in band $k$ that are below $H_{prev}$ we first sort the points in the band by azimuth angle and then scan them in this order while also scanning $H_{prev}$ (recall that $H_{prev}$ is stored in ccw order). Let $(a_1=0, h_1), (a_2, h_2)$ be the first two points in the horizon $H_{prev}$. For every point $p=(a,h)$ in $E_k$ with azimuth angle $a \in [a_1, a_2]$, we check whether its height $h$ is above or below the height of segment $(a_1,h_1)(a_2,h_2)$ in $H_{prev}$. When we encounter a point in $E_k$ with $a > a_2$, we proceed to the next point in $H_{prev}$ and repeat. The recursive algorithm \textsc{Dac-Band} takes as arguments an elevation band $E_k$, a visibility band $V_k$, and the indices $i$ and $j$ of two layers in this band ($w_{k-1} \le i \le j < w_k$). It computes visibility for the points in layers $i$ through $j$ (inclusive) in this band, and marks in $V_k$ the points that are determined to be invisible during this process. In this process it also computes and returns the horizon of layers $i$ through $j$ in this band.
\textsc{Dac-Band} uses divide-and-conquer in a straightforward way: first it computes a ``middle'' layer $m$, $i \le m \le j$, between $i$ and $j$ that splits the points in layers $i$ through $j$ approximately in half. Then it computes visibility and the horizon recursively on each side of $m$; marks as invisible all points in the second half that fall below the horizon of the first half; and finally, merges the two horizons on the two sides and returns the result. \noindent \textbf{Algorithm} \textsc{Dac-Band}($E_k, V_k, i,j$):\\ [.25\baselineskip] \textbf{if } $i==j$\\ \hphantom{\textbf{if }} $h \leftarrow$ compute-layer-horizon(i)\\ \hphantom{\textbf{if }} return $h$\\ \textbf{else } \\ \hphantom{\textbf{else }} $m \leftarrow$ middle layer between $i$ and $j$\\ \hphantom{\textbf{else }} $h_1 \leftarrow \textsc{Dac-Band}(E_k, V_k, i,m)$\\ \hphantom{\textbf{else }} $h_2 \leftarrow \textsc{Dac-Band}(E_k, V_k, m+1,j)$\\ \hphantom{\textbf{else }} mark invisible all points in $L_{m+1, j}$ that fall below $h_1$\\ \hphantom{\textbf{else }} $h \leftarrow$ merge($h_1,h_2$)\\ \hphantom{\textbf{else }} return $h$

{\bf Efficiency analysis of \textsc{vis-dac}\xspace.} The analysis of the first and last phase of \textsc{vis-dac}\xspace, \textsc{BuildBands} and \textsc{CollectBands}, is the same as in Section~\ref{sec:visiter}. We now analyze \textsc{VisBands-DAC}. Recall that we can assume that $E_k$ and $V_k$ both fit in memory during this phase (see Section~\ref{sec:visiter}). The elevation and visibility of any point in a band can be accessed in $O(1)$ time, without any search structure and without any I/O. We denote by $H^B_{1,i}$ the horizon of (the points in) the first $i$ bands, and by $H^B_{tot} = \sum_{i=1}^{N_{bands}} |H^B_{1,i}|$ the total size of these horizons. \begin{itemize} \item Marking as invisible the points in $E_k$ that are below $H_{prev}$ (here $H_{prev}$ represents $H^B_{1,k-1}$): this can be done by first sorting $E_k$ and then scanning $H^B_{1,k-1}$ and $E_k$ in sync. Over the entire grid, this takes $O(n \lg n + H^B_{tot})$ CPU time and $O(\mathsf{scan}(n) + \mathsf{scan}(H^B_{tot}))$ I/Os\xspace. \item Merging horizons: After \textsc{Dac-Band} is called in a band, the returned horizon is merged with $H_{prev}$. Two horizons can be merged in linear time and I/Os\xspace. Over the entire grid this is $O(H^B_{tot})$ time and $O(\mathsf{scan}(H^B_{tot}))$~I/Os\xspace. \item \textsc{Dac-Band}: This is a recursive function, with the running time given by the recurrence $T(k) = 2T(k/2) + \texttt{merge cost + update cost}$, where $k$ is the number of points in the slice between layers $i$ and $j$ given as input. The base case computes the horizon of a layer $l$, which takes time linear in the number of points in the layer. Summed over all the layers in the slice, the base case takes $O(\sum_{l=i}^{j}|L_l|) = O(k)$ time and no I/O\xspace (the band is in memory). \item The update time in \textsc{Dac-Band} represents the time to mark as invisible all points in the second half that fall below the horizon $h_1$ of the first half. Recall that a band fits in memory and thus an input slice in \textsc{Dac-Band} fits in memory. If the band is sorted, the update can be done as above in $O(k + |h_1|) = O(k)$ time (by Theorem~\ref{thm:gridhorizon} we have $|h_1| = O(k)$). \item The merge time in \textsc{Dac-Band} represents the time to merge the horizons $h_1$ and $h_2$ of the first and second half of the slice, respectively. This takes $O(|h_1| + |h_2|) = O(k)$ time.
\item Putting it all together in the recurrence relation we get $T(k) = 2T(k/2) + O(k)$, which solves to $O(k \lg k)$ time. Summed over all bands in the grid, \textsc{Dac-Band} runs in $O(n \lg n)$ time and $O(\mathsf{scan}(n))$ I/Os\xspace. \end{itemize} Overall we have the following: \begin{theorem} The algorithm \textsc{vis-dac}\xspace computes viewsheds in the layers model in $O(n \lg n + H^B_{tot})$ time and $O(\mathsf{sort}(n) + \mathsf{scan}(H^B_{tot}))$ I/Os\xspace, provided that $n \le cM^2$, for a sufficiently small constant $c$. \end{theorem} {\bf Discussion:} The worst-case complexity of $H^B_{tot}$ is $\sum_{i=1}^{N_{bands}} |H^B_{1,i}| = O(N_{bands} \cdot n) = O(n^2/M)$; this is an improvement over $O(n \sqrt n)$ (provided that $n \le cM^2$). Consider a band that extends from layer $L_i$ to layer $L_j$ and contains $k$ points. The algorithm \textsc{Dac-Band} runs in $O(k \lg k)$ time, while the iterative algorithm \textsc{VisBands-Iter} scans iteratively through all cumulative horizons $H_{1,i}, H_{1,i+1}, \dots, H_{1,j}$ of the layers in the band and runs in $O(k + |H_{1, i}| + |H_{1, i+1}| + \dots +|H_{1,j}|)$ time. When the horizons are small, \textsc{vis-iter}\xspace runs in $O(k)$ time and is faster than \textsc{vis-dac}\xspace. The divide-and-conquer merging is not justified unless the horizons are large enough to benefit from it. \subsection{The gridlines model} \label{sec:gridlines} The algorithms \textsc{vis-dac}\xspace and \textsc{vis-iter}\xspace described in Sections~\ref{sec:visdac} and \ref{sec:visiter} above compute viewsheds in the layers model. Let $X_i$ denote the line segments connecting points at distance $i-1$ with points at distance $i$ (Figure~\ref{fig:gridlines}). The set $X_i$ represents the additional ``obstacles'' in the $i$th layer that could intersect the LOS in the gridlines model. With this notation the horizon of the $i$th layer in the gridlines model is $H(L_i) \cup H(X_i)$. The algorithms \textsc{vis-iter}\xspace and \textsc{vis-dac}\xspace can be extended to compute viewsheds in the gridlines model---the only difference is that they compute the horizon of a layer as $H(L_i) \cup H(X_i)$ instead of $H(L_i)$. Since $|X_i| = \Theta(|L_i|)$, the analysis and the bounds of the algorithms are the same in both models up to a constant factor. Our algorithms \textsc{vis-iter}\xspace and \textsc{vis-dac}\xspace, when using the gridlines model, compute the same viewshed as \texttt{R3}\xspace~\cite{franklin:sdh94}. \textsc{vis-dac}\xspace's upper bound of $O(n \lg n + n^2/M)$ is an improvement over \texttt{R3}\xspace's bound of $O(n \sqrt n)$, provided that $n \le c M^2$. The results on the worst-case complexity of the horizon in the layers model extend to the gridlines model. The extension is not entirely straightforward, because the differences in width in the projection between non-layer edges are larger than between layer edges. We defer the proof to the journal version of this paper. \begin{figure}[t] \centering{ \includegraphics[width=3.5cm]{layers-grid.pdf} } \caption{The segments contributing to a layer's horizon in the gridlines model} \label{fig:gridlines} \end{figure} \section{Experimental results} \label{sec:results} In this section we describe the implementation details and the results of the experiments with our algorithms.
We implemented the five algorithms described above: \textsc{io-radial2}\xspace is the layered radial sweep algorithm described in Section~\ref{sec:radial-layers}; \textsc{io-radial3}\xspace is the radial sweep algorithm from Section~\ref{sec:radial-sectors}; \textsc{io-centrifugal}\xspace is the centrifugal sweep algorithm from~Section~\ref{sec:conc-sweep}; \textsc{vis-iter}\xspace and \textsc{vis-dac}\xspace are the two algorithms described in Sections~\ref{sec:visiter} and \ref{sec:visdac}, respectively. We use as reference the algorithm from our previous work, \textsc{io-radial1}\xspace~\cite{havertoma:visibility-journal}, which is also based on Van Kreveld's model. \subsection{Implementation details} We start by reviewing the implementation details of our algorithms. \textsc{io-radial1}\xspace scans the elevation grid, creates 3 events for each cell and writes them to the event stream; with 12 bytes per event, the event stream is $36n$ bytes. It then sorts the event stream by azimuth angle. To sweep, it scans the event stream in order while using an active structure to keep track of the events that intersect the sweep line. During the sweep, the cells that are visible are written to a file. In the end, the file is sorted by location and written to the output grid. \textsc{io-radial1}\xspace can function in a recursive mode if it determines that the active structure does not fit in memory. However, in all datasets the active structure is small ($<$30~MiB) and completely fits in memory~\cite{havertoma:visibility-journal}. \textsc{io-radial2}\xspace performs 2 passes over the elevation grid. It first maps the elevation grid file in (virtual) memory and creates the sorted bands $E_k$. During this phase, the elevation grid and the sorted arrays $E_k$ are kept in memory, and \textsc{io-radial2}\xspace relies on the virtual memory manager (VMM) to page in blocks from the elevation grid as necessary when accessing the points in band $k$. Note that the accesses to the elevation grid are not sequential (although they amount to $O(\mathsf{scan}(n))$ I/Os\xspace). To help the VMM we implemented the following strategy: whenever the current band needs to access an elevation from the grid, we load an entire square tile of $\Theta(M)$ points in memory, and keep track of the two most recent tiles. Once all $E_k$ are computed, the elevation grid is freed. During the second phase (the sweep) the elevations are accessed sequentially from the bands $E_k$ and the output grid is kept in (virtual) memory as a bitmap grid. \textsc{io-radial3}\xspace also performs 2 (sequential) passes over the elevation grid. The first pass scans over the elevation grid and places each point $(i,j, Z_{ij})$ in its sector. Sectors are stored as streams on disk. The second pass sorts and sweeps the points in one sector at a time by $\textsc{enter}\xspace(i,j)$. The output grid is kept in (virtual) memory as a bitmap grid. Except for the output grid, \textsc{io-radial3}\xspace does not use the VMM. \textsc{io-radial1}\xspace, \textsc{io-radial2}\xspace and \textsc{io-radial3}\xspace use the same data structures: a heap as a priority queue; a red-black tree for the active structure~\cite{cormen:introduction}; and the same in-memory sorting (optimized quicksort). For the largest terrains the priority queue and the active structure are at most 30~MiB and fit in memory. \textsc{io-centrifugal}\xspace is implemented in one (non-sequential) pass over the elevation grid.
The implementation is recursive, as described in Section~\ref{sec:conc-sweep}. Theoretically the algorithm could run completely cache-obliviously with the help of the VMM, but this turned out to be slow. Therefore we implemented a cache-aware version: whenever the recursion enters a tile $G$ of the largest size that fits in memory, we load the elevation values for the entire tile into memory; when the algorithm returns from the recursive call on $G$, the visibility values for $G$ are written to disk. \textsc{vis-iter}\xspace performs two sequential passes over the elevation grid, and one over the visibility grid. The first pass reads the elevation grid and creates the bands, and the second pass loads the bands one by one in memory and computes the visibility bands. The horizon is maintained as an array of (azimuth, zenith) pairs, and is accessed sequentially; in all our experiments it never exceeded 200,000 points. \textsc{vis-dac}\xspace implements a divide-and-conquer refinement of \textsc{vis-iter}\xspace. When a band is loaded in memory, \textsc{vis-dac}\xspace computes and merges the horizons of the layers in the band in a way similar to mergesort, which leads to an improved upper bound for its time complexity. \textsc{vis-dac}\xspace and \textsc{vis-iter}\xspace share the same code. The user can switch between the two by turning on or off a flag that triggers the divide-and-conquer. There is another flag to select the model (gridlines or layers). The implementations of all of our algorithms avoid taking square roots and arctangents, and do not store any angles. Instead of elevation angles, they use the signed squared tangents of elevation angles, and instead of azimuth angles, they use tangents of azimuth angles relative to the nearest axis direction (north, east, south, or west). \subsection{Platform} The algorithms are implemented in C and compiled with gcc/g++ 4.1.2 (with optimization level -O3). All experiments were run on HP 220 blade servers, with an Intel 2.83 GHz processor and a 5400 rpm SATA hard drive (the HP blade servers come only with this HD option). The machine is quad-core, but only one CPU was used. We ran experiments with the machine booted with 512~MiB and with 1~GiB of RAM. These sizes do not reflect current technology, and have been chosen in order to emphasize the scalability of the algorithms to a volume of data that is much larger than the amount of RAM available. \subsection{Datasets} The algorithms were tested on real terrains ranging up to over 7.6 billion elements; see Table~\ref{tbl:datasets} for some examples. The largest datasets are SRTM1 data, at 30m resolution, available at \texttt{http://www2.jpl.nasa.gov/srtm/}. We selected SRTM data because it is easily and freely available and the grids are large, which served our goal of comparing the algorithms. In practice it does not make sense to compute a viewshed on a very large area at low resolution; instead we would want to use a grid corresponding to a relatively small area at high resolution. On all datasets up to 4~GiB (\texttt{Washington}), viewshed timings were obtained by selecting several viewpoints uniformly on each terrain and taking the average time. For the larger datasets we chose a viewpoint approximately in the middle of the terrain.
This gives a good indication of the algorithms' performance: for all our algorithms the majority of the running time is spent handling the bands, so we expect the total running time to vary only insignificantly with the position of the viewpoint. \begin{table*}[t] \centering \begin{tabular}{|r|r@{ $\times$}rr|} \hline Dataset & \multicolumn{3}{c|}{Size} \\ & cols & rows & GiB \\ \hline {Cumberlands} & 8\,704 & 7\,673 & 0.25 \\ {USA DEM 6} & 13\,500 & 18\,200 & 0.92 \\ {USA DEM 2} & 11\,000 & 25\,500 & 1.04 \\ {Washington} & 31\,866 & 33\,454 & 3.97 \\ {SRTM1-region03} & 50\,401 & 43\,201 & 8.11 \\ {SRTM1-region04} & 82\,801 & 36\,001 & 11.10 \\ {SRTM1-region06} & 68\,401 &111\,601 & 28.44 \\ \hline \end{tabular} \caption{Sample datasets} \label{tbl:datasets} \end{table*} \subsection{Results} Figure~\ref{fig:512mb} shows the total running times of all our algorithms with 512~MiB RAM. First, we note that all our algorithms, being based on I/O-efficient approaches, are scalable to data that is more than sixty times larger than the memory of the machine. This is in contrast with the performance of an internal-memory algorithm, which would start thrashing and could not handle terrains moderately larger than memory, as shown in~\cite{havertoma:visibility-journal}. \textsc{io-radial1}\xspace, \textsc{io-radial2}\xspace and \textsc{io-radial3}\xspace are all based on radial sweeps of the terrain, and theoretically they all use $\Theta(\mathsf{sort}(n))$~I/Os\xspace. In practice, however, both our new algorithms are significantly faster than \textsc{io-radial1}\xspace. On \texttt{Washington} (3.97~GiB), \textsc{io-radial1}\xspace runs in 32,364 seconds with 16\% CPU\footnote{The numbers for \textsc{io-radial1}\xspace are different from the ones reported in~\cite{havertoma:visibility-journal} because the current platform has a slower disk.}, while \textsc{io-radial2}\xspace runs in 13,780 seconds (22\% CPU), and \textsc{io-radial3}\xspace in 3,009 seconds (89\% CPU). This is a speed-up factor of more than 10. \begin{figure}[t] \centering { \includegraphics[height=1.7in]{all-512.pdf} \includegraphics[height=1.7in]{all-peritem-512.pdf} } \caption{Running times with 512~MiB RAM. (a) Total time (log scale) with dataset size (log scale). (b) Total time per point.} \label{fig:512mb} \end{figure} Both \textsc{io-radial2}\xspace and \textsc{io-radial3}\xspace perform two passes over the elevation grid; however, \textsc{io-radial3}\xspace is much faster. On a machine with 512~MiB RAM, on \texttt{SRTM1-region03} (8.11~GiB), \textsc{io-radial2}\xspace takes 37,982 seconds (16\% CPU), while \textsc{io-radial3}\xspace runs in 6,644 seconds (81\% CPU). Overall, for \textsc{io-radial2}\xspace roughly 20\% of the time is CPU time, while for \textsc{io-radial3}\xspace the CPU utilization is 80\% or more. The difference may be explained by the fact that the first pass of \textsc{io-radial2}\xspace is non-sequential (although it performs $O(n/B)$ I/Os\xspace), while both passes of \textsc{io-radial3}\xspace are sequential. Another difference is that \textsc{io-radial2}\xspace uses the VMM more than \textsc{io-radial3}\xspace. Our third algorithm, \textsc{io-centrifugal}\xspace, is the fastest. It finishes a 28.4~GiB terrain (\texttt{SRTM1-region06}) in 12,186 seconds (203 minutes), while \textsc{io-radial2}\xspace takes 26,193 seconds (437 minutes). For \textsc{io-radial3}\xspace, 61\% of this time is CPU time, while for \textsc{io-centrifugal}\xspace only 18\%.
The reason is that \textsc{io-centrifugal}\xspace does a single pass through the elevation grid. For any grid point $u$, the highest elevation angle of the $O(\sqrt n)$ cells that may be on the line of sight from $v$ to $u$ is retrieved from the horizon array in $O(1)$ time, and the horizon array is maintained in $O(1)$ time per point on average. As a result, \textsc{io-centrifugal}\xspace is CPU-light and the bottleneck is loading the blocks of data into memory. \textsc{io-radial3}\xspace, on the other hand, is more computationally intensive---the highest elevation angle on the line of sight to $u$ needs to be retrieved from a red-black tree in $O(\log n)$ time, and that tree is maintained in $O(\log n)$ time per point. In addition, \textsc{io-radial3}\xspace needs time to sort events. One of our findings is that relying purely on the VMM, even for theoretically I/O-efficient data access, is slow. The analysis of the I/O-efficiency of both \textsc{io-radial2}\xspace and \textsc{io-centrifugal}\xspace is based on the assumption that the VMM will automatically load tiles of size $\Theta(M)$ into memory in the optimal way, and that in practice the performance will not be very different (the theoretical foundations for this assumption were given by~\cite{prokop:cob}). In practice this did not work out so well: a fully cache-oblivious, VMM-based implementation of \textsc{io-centrifugal}\xspace and \textsc{io-radial2}\xspace turned out to be slow. By telling the algorithms explicitly when to load a memory-size block (and not using the VMM), we obtained significant speedups (without sacrificing I/O-efficiency for the levels of caching of which the algorithm remained oblivious, and without sacrificing CPU-efficiency). We believe that we could further improve the running time of \textsc{io-centrifugal}\xspace by having it manage the caching in memory of the blocks of data that it needs to access from the grid on disk (that is, write its own block manager with an LRU policy). Interestingly, our linear interpolation algorithms \textsc{vis-iter}\xspace and \textsc{vis-dac}\xspace are faster than \textsc{io-radial3}\xspace, and slightly slower than \textsc{io-centrifugal}\xspace. For all datasets we found that $n \leq cM^2/B$ for a sufficiently small constant $c$, which means that \textsc{BuildBands} and \textsc{CollectBands} run in a single pass over the data. Thus both algorithms perform two passes over the elevation grid, and a pass over the visibility grid to assemble the visibility bands. The total running time is split fairly evenly between the three phases. The actual visibility calculation runs at 100\% CPU and represents $<$ 25\% of the running time. More than 75\% of the total time is spent reading or writing the bands. In all our tests we found that the iterative algorithm \textsc{vis-iter}\xspace is consistently 10--20\% faster than \textsc{vis-dac}\xspace. To understand this we investigated the size of the horizons computed by \textsc{vis-dac}\xspace and \textsc{vis-iter}\xspace: $H_i$, the horizon of layer $i$; and $H_{1,i}$, the horizon of the points in the first $i$ layers. Note that the number of grid points on layer $i$ is $8i$, and the total number of points on layers $1$ through $i$ (together with the viewpoint) is $4i^2 + 4i+1 = \Theta(i^2)$. We know that $|H_i| = O(i) = O(\sqrt n)$, and $|H_{1,i}| = O(i^2) = O(n)$ (Theorem~\ref{thm:gridhorizon}). We recorded $|H_i|$ and $|H_{1,i}|$ for each layer $i$ during the execution of \textsc{vis-iter}\xspace.
Figure~\ref{fig:horizons} shows the results for two datasets; the results for the other datasets look similar. \begin{figure*}[htb] \centering{ \includegraphics[width=6cm]{wash_horizongrowth.pdf} \includegraphics[width=6cm]{region05_horizongrowth.pdf} } \caption{Horizon growth for a viewshed computation on datasets \texttt{Washington} and \texttt{srtm1.region05} } \label{fig:horizons} \end{figure*} We see that $|H_i|$ is very close to its theoretical bound of $8i = \Theta(i)$. As $i$ gets larger, the $i$th layer starts to fit only partially in the grid, which causes $|H_i|$ to drop and to vary steeply. The main finding is that for all datasets $|H_{1,i}|$ stays very small, far below its theoretical upper bound of $\Theta(i^2)$. $|H_{1,i}|$ grows fast initially and then flattens out; for example, on the \texttt{Washington} dataset (approx.\ 1 billion points), $|H_{1,i}|$ flattens at 13,452 points, and on \texttt{srtm1.region05} (approx.\ 2.5 billion points), $|H_{1,i}|$ flattens at 460 points. For all SRTM datasets, $|H_{1,O(\sqrt n)}|$ is between 132 and 32,689 points. \begin{figure*}[t] \centering{ \includegraphics[width=6cm]{final_horizon_valid.pdf} \includegraphics[width=6cm]{horizonsum_valid.pdf} } \caption{(a) Size of final grid horizon $ |H_{1,O(\sqrt n)}|$ with dataset size. (b) $\sum_{i=1}^{O(\sqrt n)} |H_{1,i}|$ and $\sum_{i=1}^{O(\sqrt n)} |H_i|$ with dataset size.} \label{fig:finalhorizon} \end{figure*} Given a dataset, we refer to the horizon $ H_{1,O(\sqrt n)}$ as its \emph{final} horizon. Figure~\ref{fig:finalhorizon}(a) shows the size of the final horizon for each dataset as a function of the number of \emph{valid} points in the grid --- this excludes the points in the grid that are labeled as \emph{nodata}, which are used, for example, to mark water/ocean; these points do not affect the size of the horizon, as chains of \emph{nodata} points are compressed into a single horizon segment. We see that the final horizon (1) varies a lot, especially for the larger SRTM datasets, jumping from low to high values, which is likely due to the position of the viewpoint and possibly the topography of the terrain; and (2) stays small, below $\sqrt n$ for all datasets, far below its worst-case bound of $O(n)$. Figure~\ref{fig:finalhorizon}(b) shows the cumulative sums, $\sum_{i=1}^{O(\sqrt n)} |H_{1,i}|$ and $\sum_{i=1}^{O(\sqrt n)} |H_i|$, for each dataset, as a function of the number of valid points in the grid; we recorded these sums because they come up in the analysis of \textsc{vis-iter}\xspace and can shed light on its performance. In Figure~\ref{fig:finalhorizon} we see that $\sum_{i=1}^{O(\sqrt n)} |H_i|$ indeed grows linearly with the number of valid points in the grid. The sum $\sum_{i=1}^{O(\sqrt n)} |H_{1,i}|$ has a lot of variability, similar to the final horizon shown in Figure~\ref{fig:finalhorizon}(a), and for all datasets it stays far below its worst-case upper bound of $O(n \sqrt n)$. We note that Figures~\ref{fig:horizons} and \ref{fig:finalhorizon} are based on a single viewpoint, but we expect the results to carry over to other viewpoints. We also compare with the work of Ferreira et al.~\cite{ferreira:vis}. Their algorithm, \textsc{TiledVS}, also consists of three passes: convert the grid to Morton order, compute visibility using the R2 algorithm, and convert the output grid from Morton order back to row-major order.
They report on the order of 5,000 seconds with \textsc{TiledVS} for \texttt{SRTM1.region06}, using a platform similar to ours and additional optimizations such as data compression. Assuming that this time includes all three passes, and modulo variations in setup, it is approx.\ 2.5 times faster than \textsc{vis-iter}\xspace. We note that \textsc{TiledVS} uses a different model and we believe our algorithms and their analysis are of independent interest. \section{Conclusion}\label{sec:discussion} In this paper we described new I/O-efficient algorithms for computing the visibility map of a point on a grid terrain using several different models. The algorithms are provably efficient in terms of the asymptotic growth behaviour of the number of I/Os, but at the same time are designed to exploit the fact that the terrain model is a grid. This leads to much improved running times compared to our previous work~\cite{havertoma:visibility-journal}. On the largest terrains, using as little as 512~MiB of memory, our algorithms perform at most two passes through the input data, and one pass through the output grid. We were able to compute viewsheds on a terrain of 28.4~GiB in 203 minutes with a laptop-speed hard drive. The algorithms that compute (what is considered to be) the exact viewshed have inferior worst-case upper bounds, but in practice are faster than the radial-sweep algorithms due to the small size of horizons. We conclude that horizon-based algorithms emerge as a fast approach for computing viewsheds. As avenues for future research we mention the problem of proving a sub-linear bound for the expected complexity of a horizon, and obtaining an output-sensitive viewshed algorithm. \section{Acknowledgments} The authors thank DJ Merill for setting up and administering the platform used for the experiments, and former Bowdoin students Jeremy Fishman and Bob PoFang Wei for their work on earlier versions of this paper. \bibliographystyle{acmtrans}
\section{Introduction}\label{ch:intro} Beyond the success of cryptocurrencies, blockchain has recently emerged as a technology platform that offers secure, decentralized, consistent transaction ledgers and has powered innovations across domains including financial systems, supply chains and health care. Despite the high demand for distributed ledger technology~\cite{bcbook15}, commercialization has been hindered by long processing times for consensus and high power consumption. These issues have been addressed in consensus algorithms such as \cite{algorand16, algorand17, sompolinsky2016spectre, PHANTOM08}. Distributed database systems often address \emph{Byzantine} fault tolerance~\cite{Lamport82}, in which up to just under one-third of the participant nodes may be compromised. Consensus algorithms ensure the integrity of transactions between participants over a distributed network~\cite{Lamport82}, and achieving consensus is equivalent to the proof of \emph{Byzantine} fault tolerance in distributed database systems~\cite{randomized03, paxos01}. Byzantine consensus cannot be guaranteed for a deterministic, completely asynchronous system with unbounded delays~\cite{flp}, but achieving consensus with probability one is feasible for non-deterministic systems. There are several approaches to consensus in distributed systems. The original Nakamoto consensus protocol in Bitcoin uses Proof of Work (PoW), which requires participants to perform large amounts of computational work to generate blocks~\cite{bitcoin08}. Alternative schemes such as Proof of Stake (PoS) and its delegated variant~\cite{ppcoin12,dpos14} have been proposed; they use participants' stakes and delegated stakes, respectively, to generate blocks. Another approach utilizes directed acyclic graphs (DAG)~\cite{dagcoin15, sompolinsky2016spectre, PHANTOM08, PARSEC18, conflux18} to facilitate consensus. Examples of DAG-based consensus algorithms include Tangle~\cite{tangle17}, Byteball~\cite{byteball16}, and Hashgraph~\cite{hashgraph16}. Tangle selects the blocks to connect in the network using the accumulated weight of nonces and Markov Chain Monte Carlo (MCMC) sampling. Byteball generates a main chain from the DAG and reaches consensus through index information of the chain. Hashgraph connects each block from a node to another random node. Hashgraph searches whether 2/3 of the members can reach each block and provides a proof of Byzantine fault tolerance via graph search. \subsection{Motivation} Practical Byzantine Fault Tolerance (pBFT) allows all nodes to successfully reach an agreement for a block (information) when a Byzantine node exists \cite{Castro99}. In pBFT, consensus is reached once a created block is shared with other participants and this sharing information is in turn shared with others again \cite{zyzzyva07, honey16}. After consensus is achieved, the block is added to the participants' chains~\cite{Castro99, Blockmania18}. Currently, pBFT requires $O(N^4)$ communication. HashGraph~\cite{hashgraph16} proposes ``gossip about gossip'' and virtual voting to reach consensus. There are several limitations with HashGraph. First, the algorithm operates on a known network, which needs full awareness of all authoritative participants. Second, gossip propagation is slow and latency increases to $O(n)$ with $n$ participants. Third, it remains unclear whether virtual voting is faster than the chain-weight (longest chain, proof-of-work) approach. These issues are gossip problems and not consensus problems.
We are interested in a new approach to address the aforementioned issues in pBFT approaches \cite{Castro99,zyzzyva07, honey16} and HashGraph~\cite{hashgraph16}. Specifically, we propose a new consensus algorithm that addresses the following questions: (1) Can we reach local consensus in a $k$-cluster faster for some $k$? (2) Can we make gossip faster, for example by using a broadcast-based gossip subset? (3) Can concurrent common knowledge be used for consensus decisions with high probability? (4) Can complex decisions be reduced to binary value consensus? In this paper, we propose a new approach that can quickly search for Byzantine nodes within the block DAG. In particular, we introduce a new class of consensus protocols, namely the Lachesis protocol, denoted by $\mathcal{L}$. The core idea of Lachesis is to use a new DAG structure, the OPERA chain, which allows faster path search for consensus. We then propose an example of the Lachesis protocol class, which is called the Lachesis protocol $L_0$. \subsection{Generic framework of $\mathcal{L}$ Protocols} We introduce a generic framework of Lachesis protocols, called $\mathcal{L}$. A Lachesis protocol is a DAG-based asynchronous non-deterministic protocol that guarantees pBFT. We propose the OPERA chain --- a new DAG structure for faster consensus. A Lachesis protocol generates each block asynchronously, and the Lachesis algorithm achieves consensus by confirming, using the OPERA chain, how many nodes know each block. Figure~\ref{fig:operachain} shows an example of an OPERA chain constructed through a Lachesis protocol. \begin{figure}[h] \centering \includegraphics[height=7cm, width=1.0\columnwidth]{lachesis_output} \caption{An Example of OPERA Chain} \label{fig:operachain} \end{figure} The main concepts of Lachesis are given as follows: \begin{description} \item[$\bullet$ Event block] All nodes can create event blocks at time $t$. The structure of an event block includes the signature, generation time, transaction history, and hash references to earlier event blocks. The information of the referenced event blocks can be copied by each node. The first event block of each node is called a \emph{leaf event}. \item[$\bullet$ Lachesis protocol] The Lachesis protocol is the rule set by which nodes communicate. When a node creates an event block, it determines which other nodes to broadcast to. Node selection can be random or via some cost function. \item[$\bullet$ Happened-before] Happened-before is the relation between event blocks. If there is a path from an event block $x$ to $y$, then $x$ happened-before $y$. ``$x$ happened-before $y$'' means that the node creating $y$ knows event block $x$. \item[$\bullet$ Root] An event block is called a \emph{root} if either (1) it is the first generated event block of a node, or (2) it can reach more than two-thirds of other roots. Every root can be a candidate for Clotho. \item[$\bullet$ Root set] The root set ($R_s$) is the set of all roots in a frame. Its cardinality satisfies $2n/3 < |R_s| \leq n$, where $n$ is the number of all nodes. \item[$\bullet$ Frame] Frame $f$ is a natural number that separates root sets. The frame number increases by 1 when a root of the new set ($f+1$) appears, and all event blocks between the previous root set and the new one are included in frame $f$. \item[$\bullet$ Flag table] The flag table stores the reachability from an event block to the roots.
The sum of all values in the flag table is the number of roots reachable from the event block. \item[$\bullet$ Lamport timestamps] For topological ordering, the Lamport timestamps algorithm uses the happened-before relation to determine a partial order over all event blocks based on logical clocks. \item[$\bullet$ Clotho] A Clotho is a root that is known by more than $2n/3$ nodes, and such that more than $2n/3$ nodes know that it is known. A Clotho can be a candidate for an Atropos. \item[$\bullet$ Atropos] An Atropos is a Clotho that has been assigned a consensus time through the Lachesis consensus algorithm; it is used for determining the order between event blocks. Atropos blocks form the Main-chain, which allows time consensus ordering and responses to attacks. \item[$\bullet$ Reselection] To solve the Byzantine agreement problem, each node reselects a consensus time for a Clotho, based on the consensus times collected in the root set of the previous frame. When the consensus time reaches Byzantine agreement, the Clotho is confirmed as an Atropos and is then used for time consensus ordering. \item[$\bullet$ OPERA chain] The OPERA chain is the local view of the DAG held by each node; this local view is used to determine the topological ordering, select Clothos, and establish time consensus through Atropos selection. \item[$\bullet$ Main-Chain] The Main-chain is a core subset of the OPERA chain, composed of Atropos event blocks. The OPERA chain uses the Main-chain to find a rapid ordering between event blocks; in the OPERA chain, each event block is assigned a proper consensus position. \end{description} \begin{figure}[h] \centering \includegraphics[height=7cm, width=1.0\columnwidth]{pBFTtoPath} \caption{Consensus Method through Path Search in a DAG (combining the chain with the consensus process of pBFT)} \label{fig:pBFTtoPath} \end{figure} As a motivating example, Figure~\ref{fig:pBFTtoPath} illustrates how consensus is reached through path search in the OPERA chain. In the figure, the leaf set, denoted by $R_{s0}$, consists of the first event blocks created by the individual participant nodes. $V$ is the set of event blocks that belong neither to $R_{s0}$ nor to any root set $R_{si}$. Given a vertex $v$ in $V \cup R_{si}$, there exists a path from $v$ that can reach a leaf vertex $u$ in $R_{s0}$. Let $r_1$ and $r_2$ be root event blocks in root sets $R_{s1}$ and $R_{s2}$, respectively. $r_1$ is a block such that a quorum or more of blocks lie on paths from it to leaf event blocks. Every path from $r_1$ to a leaf vertex will contain a vertex in $V_1$. Thus, if there exists a vertex $r$ in $V_1$ such that $r$ is created by more than a quorum of participants, then $r$ is already included in $R_{s1}$. Likewise, $r_2$ is a block that can reach the root set $R_{s1}$, including $r_1$, through blocks made by a quorum of participants. All leaf event blocks that can be reached from $r_1$ are shared with more than a quorum of participants through the presence of $r_1$. The existence of the root $r_2$ shows that the information of $r_1$ is shared with more than a quorum. This kind of path search allows the chain to reach consensus in a manner similar to the pBFT consensus process. It is essential to keep track of the blocks satisfying the pBFT consensus process for quicker path search; our OPERA chain and Main-chain keep track of these blocks. \subsection{Lachesis protocol $L_0$} We now introduce a new specific Lachesis consensus protocol, called $L_0$.
The new protocol $L_0$ is a DAG-based asynchronous non-deterministic protocol that guarantees pBFT. $L_0$ generates each block asynchronously and uses the OPERA chain for faster consensus by checking how many nodes know the blocks. For this $L_0$ protocol, we propose several algorithms. In particular, we introduce an algorithm by which a node can distinguish lazy participants from cost-effective peers --- say, its $k$ peers. We must stress that a generic Lachesis protocol does not depend on any particular $k$-peer selection algorithm; each node can choose its $k$ peers randomly. Each message created by a node is then signed by the creating node and its $k$ peers. We also introduce a flag table data structure that stores connection information of event blocks. The flag table allows us to quickly traverse the OPERA chain to find reachability between event blocks. The OPERA chain can be used to optimize path search. By using certain event blocks (root, Clotho, and Atropos), the Main chain --- a core subgraph of the OPERA chain --- can maintain reliable information between event blocks and reach consensus. As event blocks are generated via the Lachesis protocol, the OPERA chain and Main chain are updated frequently and can respond robustly to attacks such as forking and parasite attacks. Further, using the flag table over the OPERA chain, consensus can be reached quickly, and the ordering between specific event blocks can be determined. \subsection{Contributions} In summary, this paper makes the following contributions. \begin{itemize} \item We propose a new family $\mathcal{L}$ of Lachesis protocols. We introduce the OPERA chain and Main-chain for faster consensus. \item We define a topological ordering of nodes and event blocks in the OPERA chain. By using Lamport timestamps, the ordering is more intuitive and reliable in distributed systems. We introduce a flag table at each block to improve root detection. \item We present proof of how a DAG-based protocol can implement concurrent common knowledge for consistent cuts. \item The Lachesis protocols allow for faster node synchronization with $k$-neighbor broadcasts. \item A specific Lachesis protocol $L_0$ is then introduced with specific algorithms. The benefits of Lachesis protocol $L_0$ include (1) a root selection algorithm via the flag table; (2) an algorithm to build the Main-chain; (3) an algorithm for $k$-peer selection via a cost function; (4) faster consensus selection via $k$-peer broadcasts; (5) data pruning via root creation. \end{itemize} The rest of this paper is organised as follows. Section~\ref{se:Previous} gives an overview of blockchain-related work as well as existing DAG-based protocols. Section~\ref{se:protocol} describes our new Lachesis protocol. Section~\ref{se:lca} presents the Lachesis consensus algorithm. Several discussions about Lachesis protocols are presented in Section~\ref{se:discuss}. Section~\ref{se:con} concludes with some future work. Supplementary material is given in the appendix (Section~\ref{se:appendix}): the proof of Byzantine fault tolerance is described in Section~\ref{se:proof}, and in Section~\ref{se:ra} we present responses to certain attacks with the Lachesis protocol and consensus algorithm. \section{Related work}\label{se:Previous} \subsection{Lamport timestamps} Lamport~\cite{lamport1978time} defines the ``happened before'' relation between any pair of events in a distributed system of machines. The happened-before relation, denoted by $\rightarrow$, is defined without using physical clocks to give a partial ordering of events in the system.
The relation ``$\rightarrow$'' satisfies the following three conditions: (1) If $b$ and $b'$ are events in the same process, and $b$ comes before $b'$, then $b \rightarrow b'$. (2) If $b$ is the sending of a message by one process and $b'$ is the receipt of the same message by another process, then $b \rightarrow b'$. (3) If $b \rightarrow b'$ and $b' \rightarrow b''$ then $b \rightarrow b''$. Two distinct events $b$ and $b'$ are said to be concurrent if $b \nrightarrow b'$ and $b' \nrightarrow b$. The happened-before relation can be viewed as expressing causality: $b \rightarrow b'$ implies that event $b$ may causally affect event $b'$. Two events are concurrent if neither can causally affect the other. Lamport introduces logical clocks, which are a way of assigning a number to an event. A clock $C_i$ for each process $P_i$ is a function which assigns a number $C_i(b)$ to any event $b \in P_i$. The entire system of clocks is represented by the function $C$, which assigns to any event $b$ the number $C(b)$, where $C(b) = C_j(b)$ if $b$ is an event in process $P_j$. The Clock Condition states that for any events $b$, $b'$: if $b \rightarrow b'$ then $C(b)$ $<$ $C(b')$. To satisfy the Clock Condition, the clocks must satisfy two conditions. First, each process $P_i$ increments $C_i$ between any two successive events. Second, we require that each message $m$ contains a timestamp $T_m$, which equals the time at which the message was sent. Upon receiving a message timestamped $T_m$, a process must advance its clock to be later than $T_m$. Given any arbitrary total ordering $\prec$ of the processes, the total ordering $\Rightarrow$ is defined as follows: if $b$ is an event in process $P_i$ and $b'$ is an event in process $P_j$, then $b \Rightarrow b'$ if and only if either (i) $C_i(b) < C_j(b')$ or (ii) $C_i(b) = C_j(b')$ and $P_i \prec P_j$. The Clock Condition implies that if $b \rightarrow b'$ then $b \Rightarrow b'$. \subsection{Concurrent common knowledge}\label{se:cck} The concurrent common knowledge (CCK) paper~\cite{cck92} defines a model to reason about concurrent common knowledge in asynchronous, distributed systems. A system is composed of a set of processes that can communicate only by sending messages along a fixed set of channels. The network is not necessarily completely connected. The system is asynchronous in the sense that there is no global clock in the system, the relative speeds of processes are independent, and the delivery time of messages is finite but unbounded. A local state of a process is denoted by $s^j_i$. Actions are state transformers; an action is a function from local states to local states. An action can be either a send(m) action where m is a message, a receive(m) action, or an internal action. A local history, $h_i$, of process $i$, is a (possibly infinite) sequence of alternating local states---beginning with a distinguished initial state---and actions. We write such a sequence as follows: $h_i = s_i^0 \xrightarrow{ \alpha_i^1 } s_i^1 \xrightarrow{\alpha_i^2} s_i^2 \xrightarrow{\alpha_i^3} ...$ The notation $s^j_i$ ($\alpha^j_i$) refers to the $j$-th state (action) in process $i$'s local history. An event is a tuple $\langle s , \alpha, s' \rangle$ consisting of a state, an action, and a state. The $j$th event in process $i$'s history is $e^j_i$, denoting $\langle s^{j-1}_i , \alpha^j_i, s^j_{i} \rangle$. An asynchronous system consists of the following sets.
\begin{enumerate} \item A set $P$ = \{1,...,$N$\} of process identifiers, where $N$ is the total number of processes in the system. \item A set $C \subseteq \{(i,j) \mbox{ s.t. } i,j \in P\}$ of channels. The occurrence of $(i,j)$ in $C$ indicates that process $i$ can send messages to process $j$. \item A set $H_i$ of possible local histories for each process $i$ in $P$. \item A set $A$ of asynchronous runs. Each asynchronous run is a vector of local histories, one per process, indexed by process identifiers. Thus, we use the notation $a = \langle h_1,h_2,h_3,...h_N \rangle$. Constraints on the set $A$ are described throughout this section. \item A set $M$ of messages. A message is a triple $\langle i,j,B \rangle$ where $i \in P$ is the sender of the message, $j \in P$ is the message recipient, and $B$ is the body of the message. $B$ can be either a special value (e.g.\ a tag to denote a special-purpose message), or some proposition about the run (e.g.\ ``$i$ has reset variable $X$ to zero''), or both. We assume, for ease of exposition only, that messages are unique. \end{enumerate} The set of channels $C$ and our assumptions about their behavior induce two constraints on the runs in $A$. First, $i$ cannot send a message to $j$ unless $(i,j)$ is a channel. Second, if the reception of a message $m$ is in the run, then the sending of $m$ must also be in that run; this implies that the network cannot introduce spurious messages or alter messages. The CCK model of an asynchronous system does not mention time. Events are ordered based on Lamport's happened-before relation. The authors use Lamport's theory to describe global states of an asynchronous system. A global state of run $a$ is an $n$-vector of prefixes of local histories of $a$, one prefix per process. The happened-before relation can be used to define a consistent global state, often termed a consistent cut, as follows. \begin{defn}[Consistent cut] A consistent cut of a run is any global state such that if $e^x_i \rightarrow e^y_j$ and $e^y_j$ is in the global state, then $e^x_i$ is also in the global state. \end{defn} A message chain of an asynchronous run is a sequence of messages $m_1$, $m_2$, $m_3$, $\dots$, such that, for all $i$, $receive(m_i)$ $\rightarrow$ $send(m_{i+1})$. Consequently, $send(m_1)$ $\rightarrow$ $receive(m_1)$ $\rightarrow$ $send(m_2)$ $\rightarrow$ $receive(m_2)$ $\rightarrow$ $send(m_3)$ $\dots$. \subsection{Consensus algorithms} In a consensus algorithm, all participant nodes of a distributed network share transactions and agree on the integrity of the shared transactions~\cite{Lamport82}. This is equivalent to the proof of Byzantine fault tolerance in distributed database systems~\cite{randomized03, paxos01}. Practical Byzantine Fault Tolerance (pBFT) allows all nodes to successfully reach an agreement for a block when a Byzantine node exists~\cite{Castro99}. Numerous consensus algorithms have been proposed~\cite{algorand16, algorand17}. Proof of Work (PoW) requires large amounts of computational work to generate the blocks~\cite{bitcoin08}. Proof of Stake and Delegated Proof of Stake~\cite{ppcoin12,dpos14} use participants' stakes and delegated stakes, respectively, to generate the blocks. Alternative schemes have been proposed that improve these algorithms using directed acyclic graphs (DAG)~\cite{dagcoin15}. These DAG-based approaches utilize the graph structures to decide consensus; blocks and connections are considered as vertices and edges, respectively.
\subsection{DAG-based Approaches} IOTA~\cite{tangle17} published a DAG-based technology called Tangle. The tips concept was used to address scalability issues under the constraints of the Internet of Things. In addition, transaction consensus is achieved through a nonce with an accumulated weight level, whose difficulty is set per user. To address the double-spending problem and parasite attacks, Tangle uses the Markov Chain Monte Carlo (MCMC) tip selection algorithm, which randomly selects tips based on the size of the accumulated transaction weights. However, if a transaction conflicts with another, there is still a need to examine all past transaction history to find the conflict. Byteball~\cite{byteball16} uses an internal payment system called bytes, which is used to pay for adding data to the distributed database. Storage units are linked to each other; each unit includes one or more hashes of earlier storage units. In particular, the consensus ordering is established by selecting a single Main Chain, which is determined as the chain consisting of the most roots. A majority of roots detects double-spend attempts through the consensus time of the Main Chain. Fees are charged according to the size of the data in bytes, and the list of all units must be searched and updated in the process of determining the roots. RaiBlocks~\cite{raiblock17} was developed to address high fees and slow transaction processing. It obtains consensus through balance-weighted votes on conflicting transactions. Each node participating in the network acts as a principal and manages its data history locally. However, since RaiBlocks generates transactions in a way similar to a PoW-based anti-spam tool, all nodes must communicate to create transactions. In terms of scalability, there is a need for steps to verify the entire history of transactions when a new node is added. Hashgraph~\cite{hashgraph16} is an asynchronous DAG-based distributed ledger. Each block is connected to its own ancestor, and nodes randomly communicate known events through a gossip protocol. Famous blocks can then be determined by the \textit{see} and \textit{strongly see} relations at each round to reach consensus quickly. The authors state that if more than 2/3 of the nodes reach consensus for an event, it will be assigned a consensus position. Conflux~\cite{conflux18} is a DAG-based Nakamoto consensus protocol. Conflux is a fast, scalable and decentralized blockchain system that optimistically processes concurrent blocks without discarding any as forks. The Conflux protocol achieves consensus on a total order of the blocks. The total order of the transactions is decided by all participants of the network. Conflux can tolerate up to half of the network being malicious, while BFT-based approaches can only tolerate up to one third of the nodes being malicious. Parsec~\cite{PARSEC18} proposes an algorithm for reaching consensus in the presence of Byzantine faults in a randomly synchronous network. Like Hashgraph~\cite{hashgraph16}, it has no leaders, no round robin, no proof-of-work, and reaches eventual consensus with probability one. Unlike Hashgraph, it can provide high speed even in the presence of faults. The Parsec algorithm reaches BFT consensus with very weak synchrony assumptions. Messages are delivered with random delays, such that the average delay is finite. It tolerates up to one third of the nodes exhibiting Byzantine (arbitrary) failures. Phantom~\cite{PHANTOM08} is a PoW-based protocol for a permissionless ledger that generalizes Nakamoto's blockchain to a DAG of blocks.
PHANTOM includes a parameter $k$ to adjust the tolerance level of the protocol to blocks that were created concurrently, which can be set to accommodate higher throughput. It thus avoids the security-scalability tradeoff of Satoshi's protocol. PHANTOM uses a greedy algorithm on the DAG to distinguish between blocks by honest nodes and those by non-cooperating nodes. This distinction gives PHANTOM a robust total order of the blocks that is eventually agreed upon by all honest nodes. Similar to PHANTOM, the GHOSTDAG protocol selects a $k$-cluster, which induces a colouring of the blocks as Blues (blocks in the selected cluster) and Reds (blocks outside the cluster). However, instead of searching for the largest $k$-cluster, GHOSTDAG finds a cluster using a greedy algorithm. Spectre~\cite{sompolinsky2016spectre} is a new protocol for the consensus core of cryptocurrencies. SPECTRE, which is a PoW-based protocol, relies on a data structure that generalizes Nakamoto's blockchain into a DAG. It remains secure from attackers with up to 50\% of the computational power even under high throughput and fast confirmation times. The SPECTRE protocol satisfies weaker properties than classic consensus requires. In SPECTRE, the order between any two transactions can be decided from transactions performed by honest users. This is different from the conventional paradigm in which the order must be decided by all non-corrupt nodes. Blockmania~\cite{Blockmania18} is a mechanism to achieve consensus with several advantages over the more traditional pBFT protocol and its variants. In Blockmania, nodes in a quorum only emit blocks linking to other blocks, irrespective of the consensus state machine. The resulting directed acyclic graph of blocks (block DAG) is later interpreted to ensure consensus safety, finality and liveness. The resulting system has communication complexity $O(N^2)$ even in the worst case, and low constant factors --- as compared to $O(N^4)$ for pBFT. \section{Generic framework of Lachesis Protocols}\label{se:protocol} This section describes the key concepts of our new family of Lachesis protocols. \subsection{OPERA chain} The core idea of Lachesis protocols is to use a DAG-based structure, called the OPERA chain, for our consensus algorithm. In a Lachesis protocol, a (participant) node is a server (machine) of the distributed system. Each node can create messages, send messages to and receive messages from other nodes. The communication between nodes is asynchronous. A Lachesis protocol operates on event blocks, which include user information, and on the edges between event blocks. In a Lachesis protocol, event blocks are created by a node after the node communicates the information of the OPERA chain with another node. The OPERA chain is comprised of event blocks as vertices and block communication as edges. Let $n$ be the number of participant nodes. For consensus, the algorithm examines whether an event block is \emph{shared} with more than $2n/3$ nodes. Sharing an event block with more than $2n/3$ nodes means that more than two-thirds of all nodes in the OPERA chain know the event block. \subsection{Main-chain} For faster consensus, we introduce the \emph{Main-chain}, which is a special sub-graph of the OPERA chain. To improve path search, we propose to use a local hash table structure as a cache that is used to quickly determine the closest root to an event block. In the OPERA chain, an event block is called a \emph{root} if the event block is linked to more than two-thirds of previous roots.
A leaf vertex is also a root itself. With root event blocks, we can keep track of ``vital'' blocks that $2n/3$ of the network agree on. The Main chain --- a core subgraph of the OPERA chain --- plays an important role in ordering the event blocks. The Main chain stores shortcuts connecting the Atropos blocks. After the topological ordering is computed over all event blocks through the Lachesis protocol, Atropos blocks are determined and form the Main chain. Figure~\ref{fig:mainchain} shows an example of a Main chain composed of Atropos event blocks. In particular, the Main chain consists of Atropos blocks, which are derived from root blocks and so are agreed upon by $2n/3$ of the network nodes. This guarantees that at least $2n/3$ of the nodes have come to consensus on the Main chain. \begin{figure} [H] \centering \includegraphics[height=8cm, width=1.0\columnwidth]{mainchain} \caption{An Example of Main-chain} \label{fig:mainchain} \end{figure} Each participant node has a copy of the Main chain and can search for the consensus positions of its own event blocks. Each event block can compute its own consensus position by checking the nearest Atropos event block. Assigning and searching consensus positions are introduced in the consensus time selection section. The Main chain provides quick access to the previous transaction history to efficiently process newly arriving event blocks. From the Main chain, information about unknown participants or attackers can be easily viewed. The Main chain can be used efficiently in transaction information management by providing quick access to new event blocks that have been agreed on by the majority of nodes. In short, the Main-chain gives the following advantages: \begin{itemize} \item Event blocks and nodes do not need to store all information, which makes data management efficient. \item Access to previous information is efficient and fast. \end{itemize} Based on these advantages, the OPERA chain can process transactions efficiently and respond robustly to attacks through its Main-chain. \begin{algorithm} \caption{Main Procedure}\label{al:main} \begin{algorithmic}[1] \Procedure{Main Procedure}{} \State\hskip-\ALG@thistlm \emph{loop}: \State A, B = $k$-node Selection algorithm() \State Request sync to node A and B \State Sync all known events by Lachesis protocol \State Event block creation \State (optional) Broadcast out the message \State Root selection \State Clotho selection \State Atropos time consensus \State\hskip-\ALG@thistlm \emph{loop}: \State Request sync from a node \State Sync all known events by Lachesis protocol \EndProcedure \end{algorithmic} \end{algorithm} \subsection{Lachesis Consensus Algorithm (LCA)} We now present our Lachesis consensus algorithm (LCA). LCA is a consensus algorithm for solving the Byzantine agreement problem. In LCA, the OPERA chain uses root, Clotho and Atropos blocks to find the consensus time for event blocks. Algorithm~\ref{al:main} shows the pseudo-code of the main procedure over the OPERA chain. The algorithm consists of two parts, which run in parallel. In the first part, each node requests synchronization and creates an event block. In line 3, a node runs the Node Selection Algorithm. The Node Selection Algorithm returns the IDs of $k$ other nodes to communicate with. In lines 4 and 5, the node synchronizes the OPERA chain with the other nodes. Line 6 runs the event block creation, at which step the node creates an event block and checks whether it is a root. Then the node broadcasts the created event block to other nodes in line 7; this step is optional.
In lines 8 and 9, the Clotho selection and Atropos time consensus algorithms are invoked. These algorithms determine whether the specified root can be a Clotho, assign the consensus time, and then confirm the Atropos.

The second part is to respond to synchronization requests. In lines 10 and 11, the node receives a synchronization request and then sends its response about the OPERA chain.

\subsection{Node Structure}
This section gives an overview of the node structure in Lachesis. Each node has a signature stamp, a height vector, an in-degree vector, a flag table, a root hash list, and a Main-chain. The signature stamp is the data structure storing the hash value that identifies the event block most recently created by the node; we call this block the top event block. The flag table is an $n$-dimensional vector. If an event block $e$ created by the $i^{th}$ node can reach the $j^{th}$ root, then the $j^{th}$ value in the flag table of $e$ is 1 (otherwise 0). Each node only maintains the flag table of its top event block.

\begin{figure} \centering \includegraphics[width=.4\textwidth]{node_structure} \caption{An Example of Node Structure} \label{fig:node} \end{figure}

Figure~\ref{fig:node} shows an example of the node structure of a node $A$. In the figure, $signature_A$ stores the hash value of the top event block of $A$. Each value in the height vector is the number of event blocks created by the corresponding node; that is, the value $h_i$ is the number of event blocks created by the $i^{th}$ node. Each value in the in-degree vector is the number of edges from event blocks created by other nodes to the top event block. The root hash list is the data structure storing the hash values of the roots. The Main-chain is a data structure storing the hash values of the Atropos blocks; it is used to find event blocks with complete consensus. The root, Clotho and Atropos selection algorithms are introduced in Section~\ref{se:lca}.

\subsection{Event block creation}
In a Lachesis protocol, every node can create an event block. Each event block refers to other event blocks; a reference means that the event block stores the hash values of the other event blocks. In a Lachesis protocol, an event block refers to $k$ neighbour event blocks under the following conditions:
\begin{enumerate}
\item The referenced event blocks are top event blocks.
\item One reference is made to a self-parent.
\item The event block refers to the top event blocks of at least $k$ other nodes.
\end{enumerate}

\subsection{Topological ordering of events using Lamport timestamps}
Every node has a physical clock and needs physical time to create an event block. However, for consensus, Lachesis protocols rely on a logical clock for each node. For this purpose, we use \textit{Lamport timestamps}~\cite{lamport1978time} to determine the time ordering between event blocks in an asynchronous distributed system.

The Lamport timestamps algorithm works as follows:
\begin{enumerate}
\item Each node increments its counter value before creating an event block (i.e., before each local event).
\item When a node sends a message (or a synchronization response), it includes its current counter value.
\item When a node receives a message, it sets its counter to the maximum of its own counter and the received counter value, and then increments it.
\end{enumerate}
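The following minimal Python sketch restates these counter update rules; the class and method names are illustrative and are not part of the protocol specification.

\begin{verbatim}
class LamportClock:
    # Minimal sketch of the Lamport timestamp rules above (illustrative only).
    def __init__(self):
        self.counter = 0

    def on_create_event(self):
        # Rule 1: increment before creating a local event block.
        self.counter += 1
        return self.counter

    def on_send(self):
        # Rule 2: the outgoing message carries the current counter value.
        return self.counter

    def on_receive(self, received_counter):
        # Rule 3: take the maximum of both counters, then increment.
        self.counter = max(self.counter, received_counter) + 1
        return self.counter
\end{verbatim}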
\begin{figure}\centering \includegraphics[width=0.9\columnwidth]{Lamport_timestamps.pdf} \caption{An example of Lamport timestamps} \label{fig:Lamport} \end{figure}

We use Lamport's algorithm to enforce a topological ordering of event blocks and use it in the Atropos selection algorithm. Since an event block is created based on logical time, the sequence between event blocks is immediately determined. Because the Lamport timestamps algorithm gives a partial order of all events, the whole time ordering process can be used for Byzantine fault tolerance.

\subsection{Topological consensus ordering}
The sequential order of the event blocks is an important aspect of Byzantine fault tolerance. In order to determine the pre-and-post sequence between all event blocks, we use the Atropos consensus time, the Lamport timestamp algorithm, and the hash value of the event block. First, when a node creates an event block, the block obtains a logical timestamp based on the Lamport timestamp; this yields a partial ordering between the relevant event blocks. Second, each Clotho obtains a consensus time when it becomes an Atropos; this consensus time is computed from the logical times nominated by the other nodes at the time of the $2n/3$ agreement. In the LCA, event blocks are ordered according to the following three rules:
\begin{enumerate}
\item If there is more than one Atropos with different consensus times on the same frame, the event block with the smaller consensus time has higher priority.
\item If there is more than one Atropos with the same consensus time on the same frame, the order is determined by the Lamport timestamps of the event blocks.
\item If both the consensus time and the Lamport timestamp are the same, the event block with the smaller hash value is given priority.
\end{enumerate}

\begin{figure}[h] \centering \includegraphics[height=8cm, width=1.0\columnwidth]{topological_consensus_ordering.pdf} \caption{An example of topological consensus ordering} \label{fig:topological consensus ordering} \end{figure}

Figure~\ref{fig:topological consensus ordering} depicts an example of topological consensus ordering.

\begin{figure} \centering \includegraphics[height=8cm, width=1.0\columnwidth]{sequence_operachain} \caption{An Example of time ordering of event blocks in OPERA chain} \label{fig:sequence of operachain} \end{figure}

Figure~\ref{fig:sequence of operachain} shows a part of the OPERA chain in which the final consensus order is determined based on these three rules. The number shown on each event block is its logical time based on the Lamport timestamp. The final topological consensus order contains the event blocks ordered according to their agreed Atropos; event blocks are colored differently depending on the Atropos they belong to.

\subsection{Peer selection algorithm}
In order to create an event block, a node needs to select $k$ other nodes. Lachesis protocols do not depend on how the peer nodes are selected. One simple approach is a random selection from the pool of $n$ nodes. Another approach is to define some criteria or a cost function to select the peers of a node. Within a distributed system, a node can select peers with low communication costs, low network latency, high bandwidth and high successful transaction throughput.

\section{Lachesis Consensus Protocol $L_0$}\label{se:lca}
This section presents our new Lachesis Consensus Protocol $L_0$, which is a specific example of the Lachesis class.
We describe the main ideas and algorithms used in the protocol.

\subsection{Root Selection}
All nodes can create event blocks, and an event block can become a root when it satisfies specific conditions; not all event blocks can be roots. First, the earliest created event blocks are themselves roots. These leaf event blocks form the first root set $R_{S_1}$; if there are $n$ nodes in total and each of them creates a leaf event block, then the cardinality of the first root set is $|R_{S_1}| = n$. Second, if an event block $e$ can reach at least $2n/3$ roots, then $e$ is called a root. Such an event block $e$ does not belong to $R_{S_1}$, but to the next root set $R_{S_2}$. Thus, excluding the first root set, the cardinality of a root set $R_{S_k}$ lies in the range $2n/3 < |R_{S_k}| \leq n$. The event blocks created from $R_{S_k}$ up to (but excluding) $R_{S_{k+1}}$ belong to the frame $f_k$. The roots in $R_{S_{k+1}}$ do not belong to the frame $f_k$; they are included in the frame $f_{k+1}$ once a root belonging to $R_{S_{k+2}}$ occurs.

We introduce the use of a flag table to quickly determine whether a new event block becomes a root or not. Each node maintains the flag table of its top event block. Every newly created event block stores $k$ hashes for its $k$ parent event blocks, and we apply an $OR$ operation on the flag tables of the parent event blocks. Figure~\ref{fig:ex_ft} shows an example of how to use flag tables to determine a root. In this example, $r_1$ is the most recently created event block. We apply an $OR$ operation on the flag tables of $r_1$'s $k$ parent event blocks; the result is the flag table of $r_1$. If $r_1$'s flag table has more than $2n/3$ set bits, $r_1$ is a root. In this example, the number of set bits is 4, which is greater than $2n/3$ (with $n=5$); thus, $r_1$ becomes a root. The root selection algorithm is as follows:

\begin{figure} [t] \centering \includegraphics[height=8cm, width=1.0\columnwidth]{flagtable} \caption{An Example of Flag Table Calculation} \label{fig:ex_ft} \end{figure}

\begin{enumerate}
\item The first event blocks are considered to be roots.
\item When a new event block is added to the OPERA chain, we check whether it is a root by applying an $OR$ operation on the flag tables connected to it. If the number of set bits in the flag table of the new event block is more than $2n/3$, the new event block becomes a root.
\item When a new root appears on the OPERA chain, nodes update their root hash lists: if one of the new event blocks becomes a root, all nodes that share it add its hash value to their root hash lists.
\item A new root set is created if the cardinality of the previous root set $R_{S_p}$ is more than $2n/3$ and the new event block can reach $2n/3$ roots in $R_{S_p}$.
\item When the new root set $R_{S_{k+1}}$ is created, the event blocks from the previous root set $R_{S_k}$ up to (but excluding) $R_{S_{k+1}}$ belong to the frame $f_k$.
\end{enumerate}
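The flag-table root check above can be restated as the following minimal Python sketch; the list-of-bits representation and the function names are illustrative assumptions rather than a reference implementation.

\begin{verbatim}
def merge_flag_tables(parent_flag_tables, n):
    # OR the parent flag tables: entry j is 1 if some parent reaches root j.
    merged = [0] * n
    for table in parent_flag_tables:
        merged = [a | b for a, b in zip(merged, table)]
    return merged

def is_root(parent_flag_tables, n):
    # A new event block is a root if its merged flag table has
    # more than 2n/3 set bits (it reaches more than 2n/3 previous roots).
    merged = merge_flag_tables(parent_flag_tables, n)
    return sum(merged) > 2 * n / 3

# Example similar to the one above (n = 5): 4 set bits > 10/3, so it is a root.
# is_root([[1, 1, 0, 0, 0], [0, 1, 1, 1, 0]], 5)  -> True
\end{verbatim}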
\subsection{Clotho Selection}
A Clotho is a root that satisfies the Clotho creation conditions: more than $2n/3$ nodes know the root, and a root knows this information. Figure~\ref{fig:Clotho} shows an example of a Clotho. Circles labelled $r_i$ (or $c$) represent root (or Clotho) event blocks. If there are three other root sets and there exists one root after the most recent Clotho set, then one of the roots in the first root set becomes a Clotho.

\begin{figure} [t] \centering \includegraphics[height=8cm, width=1.0\columnwidth]{Clotho_example} \caption{An Example of Clotho} \label{fig:Clotho} \end{figure}

The Clotho selection algorithm checks whether the root event blocks in the root hash list satisfy the Clotho condition. If a root satisfies the Clotho condition, it becomes a Clotho and produces a candidate time for an Atropos. After a root is confirmed as a Clotho, the Atropos consensus time selection algorithm is triggered.

For a root $r$, we denote by $frame(i, r)$ the root $r$ in the $i$-th frame; for example, $frame(1, r)$ is a root belonging to the frame $f_1$. Algorithm~\ref{al:acs} shows the pseudo code for Clotho selection. The algorithm takes a root $r$ as input. Lines 4 and 5 set $c.is\_clotho$ and $c.yes$ to $nil$ and 0, respectively. Lines 6--8 count the roots $c'$ in $frame(i-2,r)$ that share $c$, where $i$ is the current frame. In lines 9--10, if the number of roots in $frame(i-2,r)$ that share $c$ is more than $2n/3$, the root $c$ is set as a Clotho. The time complexity of Algorithm~\ref{al:acs} is $O(n^{2})$, where $n$ is the number of nodes.

\begin{algorithm} \caption{Clotho Selection}\label{al:acs} \begin{algorithmic}[1] \Procedure{Clotho Selection}{} \State \textbf{Input}: a root $r$ \For{$c$ $\in$ $frame(i-3, r)$} \State$c.is\_clotho$ $\leftarrow$ $nil$ \State$c.yes$ $\leftarrow$ 0 \For{$c'$ $\in$ $frame(i-2, r)$} \If{$c'$ shares $c$} \State c.yes $\leftarrow$ c.yes + 1 \EndIf \EndFor \If{$c.yes > 2n/3$} \State $c.is\_clotho$ $\leftarrow$ $yes$ \EndIf \EndFor \EndProcedure \end{algorithmic} \end{algorithm}

\subsection{Atropos Selection}
The Atropos selection algorithm is the process in which the candidate time generated by the Clotho selection is shared with other nodes, and each root reselects a candidate time repeatedly until all nodes have the same candidate time for a Clotho. After a Clotho is nominated, each node computes a candidate time for the Clotho. If more than two-thirds of the nodes compute the same candidate time, that time value is recorded. Otherwise, each node reselects a candidate time from the candidate times it has collected. Through this reselection process, each node reaches time consensus for the candidate time of a Clotho as the OPERA chain grows. The candidate time that reaches consensus is called the Atropos consensus time. After the Atropos consensus time is computed, the Clotho is promoted to an Atropos and each node stores the hash value of the Atropos and its consensus time in the Main-chain. The Main-chain is used to determine the time order between event blocks. The proof of the Atropos consensus time selection is given in Section~\ref{se:proof}.
\begin{algorithm}[H] \caption{Atropos Consensus Time Selection}\label{al:atc} \begin{algorithmic}[1] \Procedure{Atropos Consensus Time Selection}{} \State \textbf{Input}: $c.Clotho$ in frame $f_i$ \State$c.consensus\_time$ $\leftarrow$ $nil$ \State$m$ $\leftarrow$ the index of the last frame $f_m$ \For{d from 3 to (m-i)} \State $R$ $\leftarrow$ the Root set $R_{S_{i+d}}$ in frame $f_{i+d}$ \For{$r$ $\in$ $R$} \If{d is 3} \If{$r$ confirms $c$ as Clotho} \State $r.time(c)$ $\leftarrow$ $r.lamport\_time$ \EndIf \ElsIf{d $>$ 3} \State s $\leftarrow$ the set of Root in $f_{j-1}$ that $r$ can share \State t $\leftarrow$ RESELECTION(s, $c$) \State k $\leftarrow$ the number of root having $t$ in $s$ \If{d mod $h$ $>$ 0} \If{$k$ $>$ 2n/3} \State $c.consensus\_time$ $\leftarrow$ $t$ \State $r.time(c)$ $\leftarrow$ $t$ \Else \State $r.time(c)$ $\leftarrow$ $t$ \EndIf \Else \State $r.time(c)$ $\leftarrow$ the minimum value in $s$ \EndIf \EndIf \EndFor \EndFor \EndProcedure \end{algorithmic} \end{algorithm}

Algorithms~\ref{al:atc} and~\ref{al:resel} show the pseudo code of the Atropos consensus time selection and of the consensus time reselection. In Algorithm~\ref{al:atc}, at line 6, $d$ stores the frame difference between the root set of $c$ and that of $r$. Thus, line 8 means that $r$ is one of the elements of the root set of the frame $f_{i+3}$, where the frame $f_i$ includes $c$. In line 10, each root in the frame $f_j$ selects its own Lamport timestamp as the candidate time of $c$ when it confirms the root $c$ as a Clotho. In lines 12, 13, and 14, $s$, $t$, and $k$ store the set of roots in $f_{j-1}$ that $r$ can share, the result of the $RESELECTION$ function, and the number of roots in $s$ having $t$, respectively. Line 15 checks whether the frame difference $d$ is a multiple of $h$, where $h$ is a constant that determines how often the minimum selection rule is applied: if it is not, lines 16--20 apply; if it is, line 22 applies. Lines 16--20 check whether more than two-thirds of the roots in the frame $f_{j-1}$ nominate the same candidate time; if so, the root $c$ is assigned the consensus time $t$. Line 22 handles the minimum selection frame, in which the minimum candidate time is selected in order to reach Byzantine agreement. Algorithm~\ref{al:resel} is invoked within Algorithm~\ref{al:atc}. In Algorithm~\ref{al:resel}, the input is a root set $R$ and the output is a reselected candidate time. Lines 4--5 compute the frequency of each candidate time over all the roots in $R$. In lines 6--11, the smallest time among the most nominated candidate times is selected. The time complexity of Algorithm~\ref{al:resel} is $O(n)$, where $n$ is the number of nodes. Since Algorithm~\ref{al:atc} includes Algorithm~\ref{al:resel}, the time complexity of Algorithm~\ref{al:atc} is $O(n^2)$, where $n$ is the number of nodes.
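The reselection rule just described can also be restated as the following minimal Python sketch; the function name and the list-based input are illustrative assumptions.

\begin{verbatim}
from collections import Counter

def reselect_candidate_time(candidate_times):
    # candidate_times: the times r.time(c) collected from the roots of
    # the previous frame (illustrative representation).
    counts = Counter(candidate_times)       # frequency of each candidate time
    max_count = max(counts.values())        # highest nomination count
    # Return the smallest time among the most nominated candidate times.
    return min(t for t, cnt in counts.items() if cnt == max_count)

# Example: [5, 7, 5, 7, 9] -> 5 and 7 are both nominated twice; 5 is returned.
\end{verbatim}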
\begin{algorithm} [H] \caption{Consensus Time Reselection}\label{al:resel} \begin{algorithmic}[1] \Function{Reselection}{} \funclabel{alg:a} \State \textbf{Input}: Root set $R$, and Clotho $c$ \State \textbf{Output}: candidate time $t$ \State $\tau$ $\leftarrow$ set of all $t_i = r.time(c)$ for all $r$ in $R$ \State $D$ $\leftarrow$ set of tuples $(t_i, c_i)$ computed from $\tau$, where $c_i= count(t_i)$ \State $max\_count$ $\leftarrow$ $max(c_i)$ \State $t$ $\leftarrow$ $infinite$ \For{tuple $(t_i, c_i)$ $\in$ $D$} \If{$max\_count$ $==$ $c_i$ $\&\&$ $t_i$ $<$ $t$} \State $t$ $\leftarrow$ $t_i$ \EndIf \EndFor \State \textbf{return} $t$ \EndFunction \end{algorithmic} \end{algorithm}

In the Atropos consensus time selection algorithm, nodes reach agreement on the candidate time of a Clotho without additional communication (i.e., without explicitly exchanging candidate times). Since each node communicates with the others through the Lachesis protocol, the OPERA chains of all nodes grow into the same shape. This allows each node to know the candidate times of the other nodes based on its own OPERA chain and to reach agreement. The proof that the agreement based on the OPERA chain becomes agreement in action is given in Section~\ref{se:proof}.

\subsection{Peer selection algorithm via Cost function}
We define three versions of the cost function ($C_F$). Version one focuses on sharing up-to-date information and is discussed below. The other two versions focus on root creation and consensus facilitation; they will be discussed in a follow-up paper.

We define a cost function ($C_F$) for preventing the creation of lazy nodes. A lazy node is a node that contributes a lower share of work to the OPERA chain. When a node creates an event block, it selects other nodes with low cost function values and refers to the top event blocks of those reference nodes. The cost function $C_F$ is defined in equation~(\ref{eq1}):
\begin{equation}\label{eq1} C_{F} =I/H \end{equation}
where $I$ and $H$ denote the values of the in-degree vector and the height vector, respectively. If the number of nodes with the lowest $C_F$ is greater than $k$, one of them is selected at random. The reason for preferring nodes with a high $H$ is that a high $H$ indicates that the node has had more communication opportunities than nodes with a low $H$, and hence a higher chance of creating a root. Conversely, nodes with a high $C_F$ (the case $I > H$) have generated fewer event blocks than nodes with a low $C_F$, so we judge such nodes to be lazy. If we can detect lazy nodes via the cost function, we can replace them with other participants or remove them.

\begin{figure}[htp] \centering \includegraphics[height=6cm]{fig_cfEx1} \caption{An Example of Cost Function 1} \label{fig:cfEx1} \end{figure}

Figure~\ref{fig:cfEx1} shows an example of node selection based on the cost function after the creation of leaf events by all nodes. In this example, there are five nodes and each node has created a leaf event; all nodes know the other leaf events. Node $A$ creates an event block $v_1$ and calculates the cost function. Step 2 in Figure~\ref{fig:cfEx1} shows the results of the cost function based on the height and in-degree vectors of node $A$. In this initial step, all values in the vectors are the same because all nodes have only leaf events. Node $A$ therefore randomly selects $k$ nodes and connects $v_1$ to the leaf events of the selected nodes.
In this example, we set $k=3$ and assume that node $A$ selects nodes $B$ and $C$.

\begin{figure} \centering \includegraphics[height=7cm]{fig_cfEx2} \caption{An Example of Cost Function 2} \label{fig:cfEx2} \end{figure}

Figure~\ref{fig:cfEx2} shows an example of node selection after a few steps of the simulation in Figure~\ref{fig:cfEx1}. In Figure~\ref{fig:cfEx2}, the most recent event block is $v_5$, created by node $A$. Node $A$ calculates the cost function and selects the two other nodes that have the lowest cost function values. In this example, node $B$ has a value of 0.5 and the other nodes share the same value. Because of this, node $A$ first selects node $B$ and then randomly selects further nodes among nodes $C$, $D$, and $E$. The height of node $D$ in the current OPERA chain of the example is 2 (the leaf event and the event block $v_4$). On the other hand, the height of node $D$ in the node structure of $A$ is 1: node $A$ is not yet aware of the presence of the event block $v_4$, which means that there is no path from the event blocks created by node $A$ to the event block $v_4$. Thus, node $A$ records 1 as the height of node $D$.

Algorithm~\ref{al:ns} shows the algorithm for selecting reference nodes. The algorithm is run by each node to select communication partners from the other nodes. Lines 4 and 5 set min\_cost and $s_{ref}$ to their initial states. Line 7 calculates the cost function $c_f$ for each node. In lines 8, 9, and 10, we find the minimum value of the cost function and set min\_cost and $s_{ref}$ to $c_f$ and the ID of the corresponding node, respectively. Lines 11 and 12 append the ID of a node to $s_{ref}$ if min\_cost equals $c_f$. Finally, line 13 randomly selects $k$ node IDs from $s_{ref}$ as communication partners. The time complexity of Algorithm~\ref{al:ns} is $O(n)$, where $n$ is the number of nodes.

\begin{algorithm} \caption{$k$-neighbor Node Selection}\label{al:ns} \begin{algorithmic}[1] \Procedure{$k$-node Selection}{} \State \textbf{Input:} Height Vector $H$, In-degree Vector $I$ \State \textbf{Output:} reference node $ref$ \State min\_cost $\leftarrow$ $INF$ \State $s_{ref}$ $\leftarrow$ None \For{$k \in Node\_Set$} \State $c_f$ $\leftarrow$ $\frac{I_k}{H_k}$ \If {min\_cost $>$ $c_f$} \State min\_cost $\leftarrow$ $c_f$ \State $s_{ref}$ $\leftarrow$ {k} \ElsIf {min\_cost equals $c_f$} \State $s_{ref}$ $\leftarrow$ $s_{ref}$ $\cup$ $k$ \EndIf \EndFor \State $ref$ $\leftarrow$ random select in $s_{ref}$ \EndProcedure \end{algorithmic} \end{algorithm}

After the reference nodes are selected, the nodes communicate and share all event blocks known to them. A node creates an event block by referring to the top event blocks of the reference nodes. The Lachesis protocol communicates asynchronously; this allows a node to create an event block even while another node is creating an event block. However, simultaneous communication sessions with the same node are not allowed.

\begin{figure} \centering \includegraphics[height=5cm]{fig_nsEx.pdf} \caption{An Example of Node Selection} \label{fig:nsEx} \end{figure}

Figure~\ref{fig:nsEx} shows an example of node selection in the Lachesis protocol. In this example, there are five nodes ($A, B, C, D,$ and $E$) and each node generates its first event block, called a leaf event. All nodes share the leaf events with each other. In the first step, node $A$ generates a new event block $v_1$ (\textit{blue}). Then node $A$ calculates the cost function in order to connect to other nodes.
In this initial situation, all nodes have a single event block (their leaf event), so the height vector and the in-degree vector in node $A$ have the same values for all nodes; in other words, the height of each node is 1 and the in-degrees are 0. For this reason, node $A$ randomly selects two other nodes and connects $v_1$ to their top event blocks. Step 2 shows the situation after these connections. In this example, node $A$ selects nodes $B$ and $C$, and the event block $v_1$ is connected to the top event blocks of nodes $B$ and $C$. Node $A$ only knows the situation of step 2. After that, node $B$ generates a new event block $v_2$ (\textit{green}) and also calculates the cost function. $B$ randomly selects two other nodes, $A$ and $D$, since $B$ only has information about the leaf events. Node $B$ requests $A$ and $D$ to connect $v_2$; nodes $A$ and $D$ then send information about their top event blocks to node $B$ as a response. The top event block of node $A$ is $v_1$, and that of node $D$ is its leaf event. The event block $v_2$ is connected to $v_1$ and to the leaf event of node $D$. Step 4 shows these connections.

\section{Discussions }\label{se:discuss}
This section presents several discussions on our Lachesis protocol.

\subsection{Lamport timestamps}
This section discusses the topological ordering of event blocks in DAG-based Lachesis protocols using Lamport timestamps~\cite{lamport1978time}. Our Lachesis protocols rely on Lamport timestamps to define a topological ordering of the event blocks in the OPERA chain. The ``happened before'' relation, denoted by $\rightarrow$, gives a partial ordering of events in a distributed system of nodes. Given $n$ nodes, they are represented by $n$ processes $P = (P_0, P_1, \dots, P_{n-1})$.

For a pair of event blocks $b$ and $b'$, the relation ``$\rightarrow$'' satisfies: (1) If $b$ and $b'$ are events of process $P_i$, and $b$ comes before $b'$, then $b \rightarrow b'$. (2) If $b$ is the send($m$) by one process and $b'$ is the receive($m$) by another process, then $b \rightarrow b'$. (3) If $b \rightarrow b'$ and $b' \rightarrow b''$ then $b \rightarrow b''$. Two distinct events $b$ and $b'$ are said to be concurrent if $b \nrightarrow b'$ and $b' \nrightarrow b$.

For an arbitrary total ordering $\prec$ of the processes, a relation $\Rightarrow$ is defined as follows: if $b$ is an event in process $P_i$ and $b'$ is an event in process $P_j$, then $b \Rightarrow b'$ if and only if either (i) $C_i(b) < C_j(b')$ or (ii) $C_i(b) = C_j(b')$ and $P_i \prec P_j$. This defines a total ordering, and the Clock Condition implies that if $b \rightarrow b'$ then $b \Rightarrow b'$. We use this total ordering in our Lachesis algorithms. By using Lamport timestamps, we do not rely on physical clocks to determine a partial ordering of events.

\subsection{Semantics of Lachesis protocols}
This section discusses the possible use of concurrent common knowledge, described in Section~\ref{se:cck}, to understand DAG-based consensus protocols.

Let $G=(V, E)$ denote a directed acyclic graph (DAG), where $V$ is a set of vertices and $E$ is a set of edges. A DAG is a directed graph with no cycles; that is, there is no path whose source and destination are the same vertex. A path is a sequence of vertices ($v_1$, $v_2$, ..., $v_{k-1}$, $v_k$) that uses no edge more than once. An asynchronous system consists of the following sets.
\begin{enumerate}
\item A set $P$ = \{1,...,$n$\} of process identifiers, where $n$ is the total number of processes $P_i$ in the system.
\item A set $C$ $\subseteq$ \{($i$,$j$) s.t. $i,j \in P$\} of channels. If $(i,j) \in C$, process $i$ can send messages to process $j$.
\item A set $H_i$ of possible local histories for each process $i$ in $P$.
\item A set $A$ of asynchronous runs. Each asynchronous run is a vector of local histories, denoted by $a = \langle h_1,h_2,h_3,\dots,h_n \rangle$. Each process has a single run. Histories are indexed by process identifiers.
\item A set $M$ of messages. A message is a triple $\langle i,j,B \rangle$, where $i \in P$ is the sender of the message, $j \in P$ is the message recipient, and $B$ is the message body.
\end{enumerate}

In the Lachesis protocol, each node selects $k$ other nodes as peers. For certain gossip protocols, nodes may be constrained to gossip only with their $k$ peers. In such a case, the set of channels $C$ can be modelled as follows: if node $i$ selects node $j$ as a peer, then $(i,j) \in C$. In general, one can express the history of each node in the Lachesis protocol in the same manner as in the CCK paper~\cite{cck92}. Thus, a proof of consensus can be formalized via consistent cuts.

\section{Conclusion}\label{se:con}
In order to realize the distributed ledger technology, we have proposed a new family of asynchronous DAG-based consensus protocols, namely $\mathcal{L}$. We introduce the OPERA chain and the Main-chain for faster consensus. By using Lamport timestamps, the topological ordering of event blocks in the OPERA chain and the Main chain is more intuitive and reliable in a distributed system. We introduce a flag table at each block to improve root detection.

Further, we have presented a specific Lachesis consensus protocol, called $L_0$, as an example of $\mathcal{L}$. The $L_0$ protocol uses a flag table in each block as a shortcut to check reachability from an event block to a root along the OPERA chain. The path search is used as a proof of pBFT consensus. In terms of effectiveness, using the flag table in the $L_0$ protocol is more effective for consensus than path searching approaches. To ensure the distribution of participating nodes, the Lachesis protocol defines a new cost function and an algorithm that efficiently and quickly selects peers. We also propose new algorithms for root selection and Clotho block selection based on the flag table, and for Atropos selection by weight after time consensus ordering.

Based on the $L_0$ protocol and the new consensus algorithm, the OPERA chain can protect against malicious attacks such as forks, double spending, parasite chains, and network control. These protections guarantee the safety of the OPERA chain. We can also verify the existence of Atropos blocks in the OPERA chain, from which we conclude that the OPERA chain reaches consensus and guarantees liveness. Finally, the time ordering is guaranteed by the weight values in the flag table. Based on these properties, the LCA provides a fair, transparent, and effective consensus algorithm.

\subsection{Future work}
There are a number of directions for future work:
\begin{itemize}
\item With the Lachesis protocols, we are investigating a fast node synchronization algorithm with $k$-neighbor broadcasts. With the OPERA chain and $k$-peer selection, it is possible to achieve a faster gossip broadcast.
We are interested in comparing the performance of different gossip strategies, such as randomized gossip, broadcast gossip and the collection tree protocol for distributed averaging in wireless sensor networks.
\item We are also investigating the semantics of DAG-based protocols in general and of Lachesis protocols in particular. We aim to give a formal proof of pBFT using concurrent common knowledge via consistent cuts.
\end{itemize}

\newpage
\section{Appendix}\label{se:appendix}
\subsection{Proof of Lachesis Consensus Algorithm}\label{se:proof}
In this section, we provide a proof of liveness and safety in the OPERA chain and show Byzantine fault tolerance. To establish Byzantine fault tolerance, we assume that more than two-thirds of the participants are reliable nodes. Based on this assumption, we provide some definitions, lemmas and theorems, and then eventually validate the Byzantine fault tolerance.

\subsubsection{Preliminaries}
Let $G=(V, E)$ denote a directed acyclic graph (DAG), where $V$ is a set of vertices and $E$ is a set of edges. A DAG is a directed graph with no cycles; that is, there is no path whose source and destination are the same vertex. A path is a sequence $P$ of vertices ($v_1$, $v_2$, ..., $v_{k-1}$, $v_k$) that uses no edge more than once. Suppose that we have a current vertex $v_c$ (equivalently, a current event block $e_c$). A vertex $v_p$ is a parent of $v_c$ if there is a path from $v_c$ to $v_p$ of length 1. A vertex $v_a$ is an ancestor of $v_c$ if there is a path from $v_c$ to $v_a$ of length greater than or equal to 1.

\subsubsection{Proof of Byzantine Fault Tolerance for Lachesis Consensus Algorithm}
\begin{defn}[node] The machine that participates in the OPERA chain and creates event blocks. The total number of nodes is $n$. \end{defn}
\begin{defn}[event block] In the OPERA chain, we call a vertex an event block. \end{defn}
\begin{defn}[self parent] An event block $v_s$ is a self parent of an event block $v_c$ if $v_s$ is a parent of $v_c$ and both event blocks have the same signature. \end{defn}
\begin{defn}[self ancestor] An event block $v_a$ is a self ancestor of an event block $v_c$ if $v_a$ is an ancestor of $v_c$ and both event blocks have the same signature. \end{defn}
\begin{defn}[Happened-Before] An event block $v_x$ Happened-Before an event block $v_y$ if there is a path from $v_x$ to $v_y$. \end{defn}
\begin{defn}[Root] The first created event blocks (leaf events) are roots; furthermore, an event block $v$ that can reach more than $2n/3$ other roots becomes a root. \end{defn}
\begin{defn}[Root set] All first event blocks (leaf events) are elements of the root set $R_1$ ($|R_1| = n$). The root set $R_k$ is a set of roots such that each $r_i \in R_k$ cannot reach more than $2n/3$ other roots in $R_k$ $(k > 1)$. \end{defn}
\begin{defn}[Frame] A frame $f$ is a natural number that separates root sets. \end{defn}
\begin{defn}[Clotho] A root $r_k$ in the frame $f_{a+3}$ can nominate a root $r_a$ as a Clotho if more than $2n/3$ roots in the frame $f_{a+1}$ Happened-Before $r_a$ and $r_k$ Happened-Before the roots in the frame $f_{a+1}$. \end{defn}
\begin{defn}[Atropos] If the consensus time of a Clotho is validated, the Clotho becomes an Atropos. \end{defn}
\begin{prop} \label{prop:seen} At least $2n/3$ roots in the frame $f_i$ Happened-Before at least $2n/3$ roots in the frame $f_{i+1}$. \end{prop}
\begin{proof} The number of roots in each root set is more than $2n/3$.
Since a root in the frame $f_{i+1}$ Happened-Before more than $2n/3$ roots in the frame $f_i$, when the cardinalities of the root sets in the frames $f_i$ and $f_{i+1}$ are $n$ and $2n/3$ respectively, the number of paths from the root set in the frame $f_{i+1}$ to the root set in the frame $f_{i}$ is at least $(2n/3)^{2}$. The average and the maximum number of paths from the root set in the frame $f_{i+1}$ to a root in the frame $f_{i}$ are $4n/9$ and $2n/3$, respectively. Thus, at least $2n/3$ roots in the frame $f_{i}$ Happened-Before at least $n/3$ roots in the frame $f_{i+1}$. \end{proof}
\begin{prop} \label{prop:share} If a root in the frame $f_{i}$ Happened-Before more than $n/3$ roots in the frame $f_{i+1}$, then the root Happened-Before all roots in the frame $f_{i+2}$. \end{prop}
\begin{proof} Based on the definition of a root, each root can reach more than $2n/3$ other roots in the previous frame. This means that a root in the frame $f_{i+2}$ has more than $2n/3$ paths to roots in the frame $f_{i+1}$. Thus, if a root $r$ in the frame $f_{i}$ Happened-Before more than $n/3$ roots in the frame $f_{i+1}$, all roots in the frame $f_{i+2}$ have a path to the root $r$. \end{proof}
\begin{lem}[Sharing] \label{lem:share} If a root $r_a$ in the frame $f_{a+3}$ is created, that root knows that more than $2n/3$ roots in the frame $f_{a}$ have become known by more than $2n/3$ nodes. \end{lem}
\begin{proof} Based on Propositions~\ref{prop:seen} and~\ref{prop:share}, the root in the frame $f_{a+3}$ knows that more than $2n/3$ roots in the frame $f_{a}$ have become known by more than $2n/3$ nodes. \end{proof}
\begin{lem}[Fork] \label{lem:fork} If the pair of event blocks ($x, y$) is a fork, then roots Happened-Before at least one of the forked blocks in the OPERA chain. Therefore, the fork is known before the roots become Clothos. \end{lem}
\begin{proof} Suppose that a node creates two event blocks ($x, y$) and the event blocks are a fork. To create two Clothos that can reach each event block of the pair, the event blocks would have to be shared with more than $2n/3$ nodes. Therefore, if fork event blocks exist, the OPERA chain can structurally detect the fork before the roots become Clothos. \end{proof}
\begin{thm} \label{thm:same} The OPERA chains of all nodes grow into the same shape. \end{thm}
\begin{proof} Suppose that two nodes $A$ and $B$ have OPERA chains of different shapes (structures). Then there are two event blocks $x$ and $y$ that are in both $OPERA(A)$ and $OPERA(B)$, but the path between $x$ and $y$ in $OPERA(A)$ is not equal to that in $OPERA(B)$. If two nodes have different paths between the same pair of event blocks, the difference must be the result of a fork attack. Based on Lemma~\ref{lem:fork}, if an attacker forks an event block, the OPERA chain can detect and remove it before a Clotho is generated. This contradicts our assumption. For this reason, any two nodes have consistent OPERA chains. \end{proof}
\begin{lem} \label{lem:root} For any root set $R$, all nodes nominate the same roots as Clothos. \end{lem}
\begin{proof} Based on Theorem~\ref{thm:same}, each node nominates a root as a Clotho via the flag table. If all nodes have an OPERA chain of the same shape, the values in the flag tables are equal across the OPERA chain. Thus, all nodes nominate the same roots as Clothos, since the OPERA chains of all nodes have the same shape. \end{proof}
\begin{lem} \label{lem:resel} In the Reselection algorithm, for any Clotho, every root in the OPERA chain selects the same consensus time candidate.
\end{lem}
\begin{proof} Based on Theorem~\ref{thm:same}, if all nodes have an OPERA chain of the same (partial) shape, then any root in the OPERA chain selects the same consensus time candidate through the Reselection algorithm. \end{proof}
\begin{thm} \label{thm:ct} The Lachesis consensus algorithm is guaranteed to reach agreement on the consensus time. \end{thm}
\begin{proof} For any root set $R$ in the frame $f_{i}$, the time consensus algorithm checks whether more than $2n/3$ roots in the frame $f_{i-1}$ select the same value. Each node selects one of the values collected from the root set of the previous frame through the time consensus algorithm and the Reselection process. Based on the Reselection process, the time consensus algorithm can reach agreement. However, there is a possibility that the candidate times do not reach agreement~\cite{Fischer85}. To solve this problem, the time consensus algorithm includes a minimum selection frame every $h$ frames. In the minimum selection frame, each root selects the minimum value among the values collected from the previous root set. Thus, the consensus time eventually reaches consensus through the time consensus algorithm. \end{proof}
\begin{thm} \label{thm:bft} If the number of reliable nodes is more than $2n/3$, event blocks created by reliable nodes must be assigned a position in the consensus order. \end{thm}
\begin{proof} In the OPERA chain, since reliable nodes continuously try to create event blocks by communicating with every other node, reliable nodes will share an event block $x$ with each other. Based on Proposition~\ref{prop:seen}, if a root $y$ in the frame $f_{i}$ Happened-Before the event block $x$ and more than $2n/3$ roots in the frame $f_{i+1}$ Happened-Before the root $y$, the root $y$ will be nominated as a Clotho and then an Atropos. Thus, the event block $x$ and the root $y$ will be assigned a consensus time $t$.

For an event block, being assigned a consensus time means that the validated event block has been shared by more than $2n/3$ nodes. Therefore, a malicious node cannot mount an attack after event blocks have been assigned a consensus time. Once the event block $x$ has consensus time $t$, no new event block with an earlier consensus time than $t$ can appear. There are two conditions for a new event block to be assigned a consensus time earlier than $t$: first, a root $r$ in the frame $f_{i}$ must be able to share the new event block; second, more than $2n/3$ roots in the frame $f_{i+1}$ must be able to share $r$. Even if the first condition is satisfied by malicious nodes (e.g., via a parasite chain), the second condition cannot be satisfied, since at least $2n/3$ roots in the frame $f_{i+1}$ are already created and cannot be changed. Therefore, after an event block is validated, no new event block can obtain an earlier consensus time in the OPERA chain. \end{proof}

\subsection{Response to Attacks}\label{se:ra}
Like all other decentralized blockchain technologies, the OPERA chain will likely be subject to attacks by adversaries aiming to gain financial profit or to damage the system. Here we describe several possible attack scenarios and how the OPERA chain intends to take preventive measures.

\subsubsection{Transaction Flooding}
A malicious participant may run a large number of valid transactions from accounts under their control with the purpose of overloading the network. In order to prevent such a case, the chain intends to impose a minimum transaction fee. Since there is a transaction fee, a malicious user cannot sustain such attacks indefinitely.
Participants who run nodes are rewarded, and those who contribute to the ecosystem, for example by submitting transactions, are continuously rewarded. Such rewards are expected to be adequate for running transactions for legitimate purposes. Since abnormal attacks would require a tremendous cost, it would be difficult for a malicious attacker to mount transaction flooding.

\subsubsection{Parasite chain attack}
In a DAG-based protocol, a parasite chain can be created with malicious intent, attempting to connect to the DAG while appearing to consist of legitimate event blocks. When the Main chain is created, verification of each event block is performed. In the verification process, any event block that is not connected to the Main chain is deemed invalid and is ignored, as in the case of double spending.

We suppose that less than one-third of the nodes are malicious, and that these malicious nodes create a parasite chain. By the root definition, roots require awareness by more than $2n/3$ of the nodes, whereas a parasite chain is only shared with the malicious nodes, which are less than one-third of the participating nodes. A parasite chain is therefore unable to generate roots or obtain a shared consensus time.

\subsubsection{Double Spending}
A double spend attack is when a malicious entity attempts to spend their funds twice. Suppose entity $A$ has 10 tokens and sends 10 tokens to $B$ via node $n_A$ and 10 tokens to $C$ via node $n_Z$. Both node $n_A$ and node $n_Z$ agree that the transaction is valid, since $A$ has the funds to send to $B$ (according to $n_A$) and to $C$ (according to $n_Z$). Consensus is a mechanism whereby multiple distributed parties can reach agreement on the order and state of a sequence of events. Consider the following three transactions:
\begin{itemize}
\item Transaction $tx_A$: $A$ (starting balance of 10) transfers 10 to $B$
\item Transaction $tx_B$: $B$ (starting balance of 0) transfers 10 to $C$
\item Transaction $tx_C$: $C$ (starting balance of 0) transfers 10 to $D$
\end{itemize}
Suppose node $n_A$ receives the transactions in the order $tx_A$, $tx_B$, $tx_C$; the resulting state of node $n_A$ is $A{:}0$, $B{:}0$, $C{:}0$, $D{:}10$. Now suppose node $n_B$ receives them in the order $tx_C$, $tx_B$, $tx_A$; the resulting state of node $n_B$ is $A{:}0$, $B{:}10$, $C{:}0$, $D{:}0$. A small sketch of this order dependence is given at the end of this subsection. Consensus ordering gives us a single agreed sequence of events.

If a pair of event blocks $(x, y)$ contains a double spending transaction, the chain can structurally detect the double spend and delay action on the event blocks until they are assigned a time ordering. Suppose that the pair of event blocks $(x, y)$ belongs to the same frame $f$. Then all nodes must detect the two event blocks before frame $f+2$. By the root definition, each root Happened-Before more than $2n/3$ previous roots. For this reason, when two roots in $f+1$ are selected, they must have Happened-Before more than one-third of the roots in $f$. This means that more than $2n/3$ roots in $f+1$ share the two roots that include the pair. With the root definition and the previous explanation, all roots in $f+2$ share both blocks of the pair. Thus, all nodes detect the double spending event blocks at frame $f+2$ or earlier.
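The order dependence in the example above can be reproduced with the following minimal Python sketch; the balance map and the rule of skipping underfunded transfers are illustrative assumptions.

\begin{verbatim}
def transfer(balances, sender, receiver, amount):
    # Apply a transfer only if the sender has a sufficient balance.
    if balances.get(sender, 0) >= amount:
        balances[sender] -= amount
        balances[receiver] = balances.get(receiver, 0) + amount

txs = {"tx_A": ("A", "B", 10), "tx_B": ("B", "C", 10), "tx_C": ("C", "D", 10)}

def run(order):
    balances = {"A": 10, "B": 0, "C": 0, "D": 0}
    for name in order:
        transfer(balances, *txs[name])
    return balances

print(run(["tx_A", "tx_B", "tx_C"]))  # {'A': 0, 'B': 0, 'C': 0, 'D': 10}
print(run(["tx_C", "tx_B", "tx_A"]))  # {'A': 0, 'B': 10, 'C': 0, 'D': 0}
\end{verbatim}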
\subsubsection{Long-range attack}
In blockchains, an adversary can create another chain. If this chain is longer than the original, the network will accept the longer chain; this mechanism exists to identify which chain has had more work (or stake) involved in its creation. In our setting, $2n/3$ participating nodes are required to create a new chain. To accomplish a long-range attack, an adversary would therefore first need to create more than $2n/3$ participating malicious nodes in order to create the new chain.

\subsubsection{Bribery attack}
An adversary could bribe nodes to validate conflicting transactions. Since $2n/3$ participating nodes are required, this would require the adversary to bribe more than $n/3$ of all nodes to begin a bribery attack.

\subsubsection{Denial of Service}
LCA is a leaderless system requiring $2n/3$ participation. An adversary would have to deny service to more than $n/3$ of the participants in order to successfully mount a DDoS attack.

\subsubsection{Sybil}
Each participating node must stake a minimum amount of FTM to participate in the network. Acquiring $2n/3$ of the total stake would be prohibitively expensive.

\clearpage
\section{References}\label{se:ref}
\renewcommand\refname{\vskip -1cm}
\bibliographystyle{unsrt}
\section{Introduction}
Approximately 3.6 billion diagnostic radiological examinations, such as radiographs (x-rays), are performed globally every year \cite{report}. Chest radiographs are performed to evaluate the lungs, heart and thoracic viscera, and they are crucial for diagnosing various lung disorders at all levels of health care. Computer-aided diagnostic (CAD) tools play an important role in assisting radiologists with the growing number of chest radiographs.

Accurate segmentation of anatomical structures in chest radiographs is essential for many analysis tasks in CAD. For example, segmentation of the lung fields can help detect lung diseases and shape irregularities; segmentation of the heart outline can help predict cardiomegaly; and segmentation of the clavicles can improve the diagnosis of pathologies near the apex of the lung.

Evaluating a chest radiograph is a challenging task due to the high variability between patients, unclear and overlapping organ borders, and image artifacts; a clear and high quality radiograph is not easy to acquire. This challenge has drawn many researchers over the years to improve the segmentation of anatomical structures in chest radiographs \cite{novikov,park,ibragimov,yang}. An open benchmark dataset provided by Ginneken et al. \cite{ginneken_scr} has facilitated an objective comparison between the different segmentation methods over the years. Classic approaches include active shape and appearance models, pixel classification methods, hybrid models and landmark based models. More recently, deep learning approaches have been suggested \cite{novikov,park}, following the successful employment of convolutional neural networks (CNNs) in various detection and segmentation tasks in the medical imaging domain \cite{dl_overview}.

CNN architectures for semantic segmentation usually incorporate encoder and decoder networks \cite{unet_olaf,FCN} that reduce the resolution of the image to capture the most important details and then restore the resolution of the image. Another semantic segmentation approach is to keep the resolution of the network by incorporating dilated convolutions \cite{DRN}, which enlarge the global receptive field of the CNN to capture larger context. In both approaches, the CNN can output single-class or multi-class segmentation masks, whose resolution is the same as that of the input radiograph image. The training process of each CNN is affected by several factors: one is the choice of the loss function that guides the optimization during training (different loss functions affect the final segmentation performance differently); another is the initialization of the network weights, either random initialization or weights transferred from a network trained on a different task (transfer learning).

In this paper, we explore the segmentation of anatomical structures in chest radiographs, namely the lung fields, the heart and the clavicles, using a set of the most advanced CNN architectures for multi-class semantic segmentation. We propose an improved encoder-decoder style CNN with pre-trained weights of the encoder network and show its superiority over other state-of-the-art CNN architectures. We further examine the use of multiple loss functions for training the best selected network and the effect of multi-class vs. single-class training. We present qualitative and quantitative comparisons on a common benchmark dataset, based on the JSRT database \cite{JSRT}.
Our best performing model, the U-Net with an ImageNet pre-trained encoder, outperformed the current state-of-the-art segmentation methods for all anatomical structures.

\section{Methods}
\subsection{Fully Convolutional Neural Network Architectures} \label{FCN_arch}
Fully convolutional networks (FCN) are extensively used for semantic segmentation tasks. In this study, four different state-of-the-art architectures have been tested, as follows:

\textbf{FCN} - The first FCN architecture that we used in this work is based on the FCN-8s net that uses the VGG-16 layer net \cite{FCN,vgg}. The VGG-16 net is converted into an FCN by decapitating the final classification layer and converting the fully connected layers into convolutions. Deconvolution layers are then used to upsample the coarse outputs to pixel-dense outputs. Skip connections are used to merge outputs from previous pooling layers in the network, which was shown to improve the segmentation quality \cite{FCN}.

\textbf{Fully convolutional DenseNet} - The second network architecture that was tested is based on the fully convolutional DenseNet shown in \cite{fc_densenet_tiramisu}. The DenseNet architecture \cite{densenet} proposes intensive layer fusion. Each dense block consists of a set of convolution layers at the same scale, where each convolution layer processes the concatenation of all its previous layers, thus enabling the fusion of numerous representation levels. For the fully convolutional DenseNet architecture, a decoding path is added to generate the segmentation output. The fusion between different layers consists of intra-dense-block layer fusion as well as the concatenation of the preceding high-level feature maps with the ones coming from the encoding block at the same scale.

\textbf{Dilated residual networks} - The dilated residual network (DRN) \cite{DRN} uses dilated convolutions \cite{atrous} to increase the resolution of output feature maps without reducing the receptive field of individual neurons. It was shown to improve the performance compared to the standard residual networks presented in \cite{resnet}. We have implemented the DRN-C-26 as described in \cite{DRN}.

\textbf{U-Net with VGG-16 encoder} - The U-Net architecture \cite{unet_olaf} has been extensively used for different image-to-image tasks in computer vision, with a major contribution to the image segmentation task. The U-Net includes a contracting path (the encoder) with several layers of convolution and pooling for down-sampling. The second half of the network includes an expansion path (the decoder) that uses up-sampling and convolution layers sequentially to generate an output with a similar size to the input image. Additionally, the U-Net architecture combines the encoder features with the decoder features at different levels of the network using skip connections. Iglovikov et al. \cite{TernausNet} proposed to use a VGG11 network \cite{vgg} pre-trained on the ImageNet dataset \cite{Imagenet} as an encoder, and showed that it can improve the standard U-Net performance in binary segmentation of buildings in aerial images. A similar concept is used in the current study with the more advanced VGG16 \cite{vgg} as an encoder. Figure \ref{fig:architecture} shows a diagram of our proposed network. The chest X-ray image is duplicated to obtain an input image with 3 channels, similar to the RGB images that are used as input to the VGG-16 net (which is the encoder in the proposed architecture).
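To make the encoder-decoder layout concrete, the following PyTorch-style sketch shows one possible way to assemble a U-Net around an ImageNet pre-trained VGG-16 encoder. The decoder widths, the up-sampling blocks and all helper names are our own illustrative assumptions and do not reproduce the exact configuration evaluated in this paper.

\begin{verbatim}
import torch
import torch.nn as nn
from torchvision import models

class UNetVGG16(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        # ImageNet pre-trained VGG-16 convolutional part as the encoder
        # (older torchvision: models.vgg16(pretrained=True).features).
        vgg = models.vgg16(weights="IMAGENET1K_V1").features
        self.enc1 = vgg[:4]     # 64 ch,  full resolution
        self.enc2 = vgg[4:9]    # 128 ch, 1/2 resolution
        self.enc3 = vgg[9:16]   # 256 ch, 1/4 resolution
        self.enc4 = vgg[16:23]  # 512 ch, 1/8 resolution
        self.enc5 = vgg[23:30]  # 512 ch, 1/16 resolution

        def up(in_ch, skip_ch, out_ch):
            return nn.ModuleDict({
                "up": nn.ConvTranspose2d(in_ch, out_ch, 2, stride=2),
                "conv": nn.Sequential(
                    nn.Conv2d(out_ch + skip_ch, out_ch, 3, padding=1),
                    nn.ReLU(inplace=True),
                    nn.Conv2d(out_ch, out_ch, 3, padding=1),
                    nn.ReLU(inplace=True))})

        self.dec4, self.dec3 = up(512, 512, 256), up(256, 256, 128)
        self.dec2, self.dec1 = up(128, 128, 64), up(64, 64, 32)
        self.head = nn.Conv2d(32, num_classes, 1)   # one score map per class

    def _decode(self, block, x, skip):
        x = block["up"](x)
        return block["conv"](torch.cat([x, skip], dim=1))

    def forward(self, x):
        e1 = self.enc1(x); e2 = self.enc2(e1); e3 = self.enc3(e2)
        e4 = self.enc4(e3); e5 = self.enc5(e4)
        d4 = self._decode(self.dec4, e5, e4)
        d3 = self._decode(self.dec3, d4, e3)
        d2 = self._decode(self.dec2, d3, e2)
        d1 = self._decode(self.dec1, d2, e1)
        return torch.sigmoid(self.head(d1))  # sigmoid activation per class

# The single-channel radiograph is duplicated to 3 channels before the pass:
#   x = radiograph.repeat(1, 3, 1, 1)  # (1, 1, 224, 224) -> (1, 3, 224, 224)
\end{verbatim}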
\begin{figure} \centering \includegraphics[height=5.0 cm]{Unet_vgg_architecture.png} \caption{The proposed U-Net architecture with a VGG-16 based encoder.} \label{fig:architecture}\end{figure}

\subsection{Objective loss functions}
The loss function guides the training process of a convolutional network by measuring the compatibility between the network prediction and the ground truth label. Let us denote by $S$ the estimated segmentation mask and by $G$ the ground truth mask. In a multi-class semantic segmentation task with $C = \{c_1,...,c_m\}$ classes, the total loss (TL) between $S$ and $G$ is defined as the sum of the losses over all classes:
\begin{equation} TL(S,G)=\sum_{c=1}^{m}L_c(S,G) \end{equation}
In this study we explore the influence of using different loss functions in the FCN training process. The Dice similarity coefficient (DSC) and the Jaccard similarity coefficient (JSC) are two well known measures in segmentation and can be used as objective loss functions in training. These segmentation measures between $S$ and $G$ are defined as:
\begin{equation} DSC(S,G)=2\frac{|SG|}{|S|+|G|} \end{equation}
\begin{equation} JSC(S,G)=\frac{|SG|}{|S|+|G|-|SG|} \end{equation}
When used as a loss in training, both measures weight false positive (FP) and false negative (FN) detections equally. The Tversky loss \cite{Tversky} introduces weighting into the loss function for highly imbalanced data, where we want to segment small objects. The Tversky index is defined as:
\begin{equation} Tversky(S,G;\alpha,\beta)=\frac{|SG|}{|SG|+\alpha|S \setminus G|+\beta|G \setminus S|} \end{equation}
where $\alpha$ and $\beta$ control the magnitude of the penalties for FPs and FNs, respectively. In our study we used $\alpha=0.3$ and $\beta=0.7$. An additional loss function tested is the binary cross-entropy (BCE), calculated separately for each class segmentation map. For each pixel $s_i\in S$ and pixel $g_i\in G$ that share the same pixel position $i$, the loss is averaged over all $N$ pixels as follows:
\begin{equation} BCE(S,G)=-\frac{1}{N}\sum_{i=1}^N \left[ g_i\log(s_i) + (1-g_i)\log(1-s_i) \right] \end{equation}
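The loss functions above can be written compactly; the following sketch shows soft (differentiable) versions of the Dice and Tversky losses and the per-class summation of the total loss TL. The smoothing constant and tensor shapes are illustrative assumptions, not the exact implementation used in our experiments.

\begin{verbatim}
import torch

def dice_loss(pred, target, eps=1e-6):
    # 1 - DSC for one class; pred and target hold values in [0, 1].
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-6):
    # 1 - Tversky index with alpha = 0.3 and beta = 0.7 as in this study.
    inter = (pred * target).sum()
    fp = (pred * (1.0 - target)).sum()    # soft |S \ G|
    fn = ((1.0 - pred) * target).sum()    # soft |G \ S|
    return 1.0 - (inter + eps) / (inter + alpha * fp + beta * fn + eps)

def total_loss(pred, target, per_class_loss=dice_loss):
    # Sum of per-class losses over (batch, classes, H, W) tensors (total loss TL).
    return sum(per_class_loss(pred[:, c], target[:, c])
               for c in range(pred.shape[1]))
\end{verbatim}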
\section{Segmentation of Anatomical Structures}
\subsection{Dataset}
Evaluation of the chest anatomical structure segmentation was done on chest radiographs from the JSRT database \cite{JSRT}. This public database includes 247 posterior-anterior (PA) chest radiograph images of size $2048\times2048$ pixels, with 0.175 mm pixel spacing and 12-bit gray levels. Ginneken et al. \cite{ginneken_scr} released the Segmentation in Chest Radiographs (SCR) database, a benchmark set of segmentation masks for the lung fields, heart and clavicles (see Figure \ref{fig:data_sample}). The annotations were made by two human observers and a radiologist consultant. The segmentations of the first observer serve as the ground-truth segmentation masks, and those of the second observer as the human observer results. The benchmark data is split into two folds of 124 and 123 cases, each containing an equal amount of normal cases and cases with lung nodules. Following the suggested instructions for comparison between segmentation results, images in one fold were used for training and images from the other fold were used for testing, and vice versa. The final evaluation is defined as the average performance over the two folds.

\begin{figure} [H] \centering \includegraphics[height=3.0 cm]{data_sample.png} \caption{Data sample from \cite{ginneken_scr}: (a) chest radiograph image; (b) clavicles segmentation mask; (c) lung segmentation mask; (d) heart segmentation mask.} \label{fig:data_sample} \end{figure}

For training, we resize the images to $224\times 224$ pixels and normalize each image by its mean and standard deviation. The networks are trained using the Adam optimizer with an initial learning rate of $10^{-5}$ and default parameters for 100 epochs. We use augmentations of scaling, translation and small rotations. In testing, we threshold the output score maps with $threshold = 0.25$ to generate binary segmentation masks of each anatomical structure.

\subsection{Performance Measures}
To measure the performance of the proposed architectures and compare them to state-of-the-art results, we use well-accepted metrics for segmentation: the Dice similarity coefficient, the Jaccard index (also known as intersection over union) and the mean absolute contour distance (MACD). MACD is a measure of distance between two contours. For each point on contour A, the closest point on contour B is computed by the Euclidean distance $d(a_{i},B) = \min_{b_{j}\in B}\norm{b_{j} - a_{i}}$. The distance values are then averaged over all points. Since distances from A to B are not the same as from B to A, we take the average of the two directions as follows:
\begin{equation} MACD(A,B)=\frac{1}{2}(\frac{\sum_{i=1}^{n} d(a_{i},B)}{n} + \frac{\sum_{i=1}^{m} d(b_{i},A)}{m}) \end{equation}
Because the MACD measure is given in millimeters, we multiply the original pixel spacing by a factor of $2048/224$ to match the target image resolution.

\subsection{Experimental Results}
Table \ref{tabel:arch_compare} compares the segmentation performance of the four state-of-the-art fully convolutional networks for semantic segmentation listed in Section \ref{FCN_arch}. All models are trained for multi-class segmentation into three classes: lung fields, heart and clavicles. We use the sigmoid activation function after the last layer of each network, with Dice as the loss function. An additional column in Table \ref{tabel:arch_compare} shows whether the network is fine-tuned (FT) from a pre-trained network. The results show that the best performing architecture for the segmentation of all anatomical structures in chest radiographs is the U-Net with the VGG16 encoder pre-trained on ImageNet. This architecture achieved the highest segmentation overlap scores (Jaccard) of 0.961, 0.906 and 0.855 for the lung fields, heart and clavicles, respectively. It is noticeable that, among all four architectures, the fine-tuned networks performed better than the networks trained from scratch.

\begin{table}[t] \caption{Segmentation results of four compared architectures trained with multi-class Dice loss showing the Dice (D), Jaccard (J) and MACD metrics.
Fine-tuned (FT) architectures include a pre-trained VGG16 as an initial encoder.} \label{tabel:arch_compare} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \cline{3-11} \multicolumn{2}{c}{} &\multicolumn{3}{|c|}{Lungs} & \multicolumn{3}{c|}{Heart} & \multicolumn{3}{c|}{Clavicles} \\ \hline Architecture & FT & D & J & MACD & D & J & MACD & D & J & MACD \\ \hline \hline FCN & v & 0.976 & 0.953 & 1.341 & 0.944 & 0.895 & 3.099 & 0.884 & 0.795 & 1.277 \\ \hline U-Net (VGG16) & v & \textbf{0.980} & \textbf{0.961} & \textbf{1.121} & \textbf{0.950} & \textbf{0.906} & \textbf{2.569} & \textbf{0.921} & \textbf{0.855} & \textbf{0.871} \\ \hline FC DenseNet & {} & 0.973 & 0.947 & 1.511 & 0.934 & 0.879 & 3.396 & 0.884 & 0.796 & 1.349 \\ \hline DRN & {} & 0.966 & 0.935 & 1.842 & 0.936 & 0.881 & 3.365 & 0.840 & 0.727 & 1.860 \\ \hline \end{tabular} \end{table} For the top-performing architecture, the U-Net based network, we further analyzed several training choices. Table \ref{tabel:loss_compare} summarizes the multi-class segmentation performance using different objective loss functions. It is evident that structures with a smaller pixel area, like the clavicles, benefit from loss functions with pixel weighting, such as the Tversky loss. We also tested the performance of training a single-class network for each of the three classes versus the multi-class training. For the lungs, the single-class training did not result in a significant improvement. However, for the heart and clavicles, the Dice and Jaccard scores in single-class training each improved by 1\% compared to the multi-class training. A final improvement in the multi-class segmentation performance was achieved with post-processing consisting of small object removal and hole filling. While the Dice and Jaccard metrics were not improved, the MACD metric improved from 1.121, 2.569 and 0.871 [mm] for the lungs, heart and clavicles to 1.019, 2.549 and 0.856 [mm], respectively. Figure \ref{fig:examples} shows a few segmentation examples of our best-performing model. A comparison of our U-Net based model trained with the multi-class Dice loss to existing state-of-the-art methods and to the human observer, evaluated on the same chest radiograph benchmark, is presented in Table \ref{tabel:state_of_the_art_compare}. \begin{table}[t] \caption{Multi-class segmentation results using different loss functions including DSC, JSC, Tversky and BCE (rows).
The Dice(D), Jaccard (J) and MACD are used as metrics (columns) for each anatomical structure.} \label{tabel:loss_compare} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \cline{2-10} \multicolumn{1}{c}{} &\multicolumn{3}{|c|}{Lungs} & \multicolumn{3}{c|}{Heart} & \multicolumn{3}{c|}{Clavicles} \\ \hline Loss Function & D & J & MACD & D & J & MACD & D & J & MACD \\ \hline \hline DSC & \textbf{0.980} & \textbf{0.961} & 1.121 & \textbf{0.950} & \textbf{0.906} & \textbf{2.569} & 0.921 & 0.855 & \textbf{0.871} \\ \hline JSC & 0.979 & 0.960 & \textbf{1.082} & 0.949 & 0.905 & 2.602 & 0.921 & 0.855 & 0.920 \\ \hline Tversky & 0.979 & 0.960 & 1.139 & \textbf{0.950} & 0.905 & 2.581 & \textbf{0.923} & \textbf{0.858} & 0.987 \\ \hline BCE & \textbf{0.980} & \textbf{0.961} & 1.119 & \textbf{0.950} & \textbf{0.906} & 2.592 & 0.911 & 0.838 & 1.145 \\ \hline \end{tabular} \end{table} \begin{figure}[h] \centering \includegraphics[height=6.5 cm]{seg_examples4.png} \caption{Segmentation results of our best performing architecture with Jaccard score above each image for the Lungs(L), Heart(H) and Clavicles(C); Ground-truth segmentation is shown in blue, CNN segmentation in red and the overlap (true detections) in green.} \label{fig:examples} \end{figure} \begin{table}[h] \caption{Our best performing architecture compared to state-of-the-art models; "-" means that the score was not reported; (*) used different data split than suggested in SCR benchmark} \label{tabel:state_of_the_art_compare} \centering \begin{tabular}{p{4cm} p{2.5cm} p{2.5cm} p{2.5cm}} \hline {} & Dice & Jaccard & MACD (mm) \\ \hline \textit{Lungs} & {} & {} & {} \\ Human observer \cite{ginneken_scr} & - & $0.946\pm 0.018$ & $1.64\pm 0.69$ \\ Hybrid voting \cite{ginneken_scr} & - & $0.949\pm 0.020$ & $1.62\pm 0.66$ \\ Ibragimov et al. \cite{ibragimov} & - & $0.953\pm 0.020$ & $1.43\pm 0.85$ \\ Hwang and Park \cite{park} & $0.980\pm 0.008$ & $0.961\pm 0.015$ & $1.237\pm 0.702$ \\ Novikov et al. \cite{novikov}(*) & $0.974$ & $0.950$ & - \\ Yang et al. \cite{yang} & $0.975\pm 0.001$ & $0.952\pm 0.018$ & $1.37\pm 0.67$ \\ U-Net (VGG16) & $0.980\pm 0.008$ & $0.961\pm 0.014$ & $1.019\pm 0.564$ \\ \\[-0.7em] \textit{Heart} & {} & {} & {} \\ Human observer \cite{ginneken_scr} & - & $0.878\pm 0.054$ & $3.78\pm 1.82$ \\ Hybrid voting \cite{ginneken_scr} & - & $0.860\pm 0.056$ & $4.24\pm 1.87$ \\ Novikov et al. \cite{novikov}(*) & $0.937$ & $0.882$ & - \\ U-Net (VGG16) & $0.950\pm 0.021$ & $0.906\pm 0.038$ & $2.549\pm 1.126$ \\ \\[-0.7em] \textit{Clavicles} & {} & {} & {} \\ Human observer \cite{ginneken_scr} & - & $0.896\pm 0.037$ & $0.68\pm 0.26$ \\ Hybrid voting \cite{ginneken_scr} & - & $0.736\pm 0.106$ & $1.88\pm 0.93$ \\ Novikov et al. \cite{novikov}(*) & $0.929$ & $0.868$ & - \\ U-Net (VGG16) & $0.921\pm 0.027$ & $0.855\pm 0.045$ & $0.855\pm 0.322$ \\ \hline \end{tabular} \end{table} \section{Discussion and Conclusion} Segmentation of anatomical structures in chest radiographs is a challenging task that attracted considerable interest over the years. The advantages of newly introduced CNN architectures, together with the public benchmark dataset provided in \cite{ginneken_scr} on the JSRT images, motivated further studies in this field. Some of the recent studies focused only on the problem of lung segmentation, and a few have also dealt with the problem of heart and clavicles segmentation. 
In this paper, we employed and evaluated the segmentation performance of four leading FCN architectures for semantic segmentation \cite{fc_densenet_tiramisu,FCN,DRN,TernausNet} on all three anatomical structures, using a multi-class Dice loss. The network architectures presented in this study are well known and have shown promising results in many computer vision semantic segmentation tasks. The FCN \cite{FCN} and the U-Net \cite{unet_olaf} are considered classical approaches, while the FC DenseNet and the DRN are more advanced and relatively new approaches for semantic segmentation. Hence, it was interesting to see in Table \ref{tabel:arch_compare} that the classic U-Net and FCN showed superior segmentation performance over the more advanced approaches. The advantage of using pre-trained networks for medical imaging tasks has already been shown in several studies \cite{dl_overview}, and even though only the encoder part of the FCN and U-Net (VGG16 encoder) networks was pre-trained on the ImageNet database in our case, it proved to be advantageous. The best segmentation performance was obtained using the proposed U-Net based architecture with the pre-trained VGG16 encoder (Table \ref{tabel:arch_compare}). Next, we explored the effect of training the multi-class segmentation model with different loss functions (Table \ref{tabel:loss_compare}). We demonstrated that small structures such as the clavicles can benefit from weighted loss functions such as the Tversky loss, while the larger structures (lungs and heart) achieved the best segmentation results using the Dice or Binary Cross-Entropy loss functions. Applying additional minor post-processing resulted in a further decrease of the MACD measure, with cleaner and more precise segmentations for all three structures, as displayed in Figure \ref{fig:examples}. Table \ref{tabel:state_of_the_art_compare} presents the final comparison of our selected model, the multi-class U-Net VGG16 with Dice loss, to state-of-the-art methods \cite{ginneken_scr,ibragimov,park,novikov,yang} and to human observer segmentations \cite{ginneken_scr}. Our model outperformed all state-of-the-art methods tested in this study and the human observer for the lungs and heart segmentation. For the clavicles segmentation, fewer studies were conducted. Novikov et al. \cite{novikov} reported results on a different data split than the one recommended by the benchmark, so a direct comparison is not possible. However, our proposed network outperformed the other top reported method \cite{ginneken_scr}. In conclusion, we presented an experimental study in which four top segmentation architectures and several losses were compared for the task of segmenting anatomical structures in chest X-ray images. Results were evaluated quantitatively, together with qualitative examples of our best-performing model. Improving the segmentation of the lung fields, heart and clavicles is the foundation for better CAD tools and for the development of new applications for medical thoracic image analysis.
\section{Introduction} Over the last decade, the scattering approach \cite{Lambrecht2006,Rahi2009} to the Casimir effect~\cite{Casimir1948} has allowed for the derivation of exact results for a number of non-trivial geometries, including the ones most often investigated experimentally: the plane-sphere \cite{Emig2008,MaiaNeto2008,CanaguierDurand2009,CanaguierDurand2010,CanaguierDurand2010B, Zandi2010,Hartmann2017,Hartmann2018} and sphere-sphere \cite{Emig2007,Rodriguez-Lopez2011,Umrath2015} geometries. Within the scattering approach, the Casimir effect arises from the recurrent multiple scattering of electromagnetic fluctuations between the interacting surfaces~\cite{Jaekel1991}. In the particular case of two homogeneous half-spaces separated by a layer of empty space or of a third homogeneous medium, one recovers the standard Lifshitz \cite{Lifshitz1956} and Dzyaloshinskii-Lifshitz-Pitaevskii (DLP)~\cite{DLP1961} results, respectively. The unretarded van der Waals interaction is obtained in the limit of short distances as a particular case \cite{Genet2004}. Applications of the scattering approach in colloid sciences and biophysics require the inclusion of the screening caused by ions dissolved in a polar liquid (water in most cases). As in the double-layer interaction between prescribed charged surfaces \cite{Israelachvili2011,Butt2010}, movable ions could indeed be expected to screen slowly fluctuating charges. Alternatively, in the language of the scattering approach, the electrolyte solution displays a nonlocal electric response (spatial dispersion) allowing for the existence of longitudinal modes \cite{Davies1972} in addition to the standard transverse ones. The complementary case in which the intervening medium is local, whereas the interacting half-spaces are nonlocal, has been extensively analyzed in connection with the anomalous skin depth effect in metals. Indeed, free electrons exhibit a nonlocal response that modifies the Casimir interaction between metallic plates \cite{Kats1977,Esquivel2003,Esquivel-Sirvent2004,Contreras-Reyes2005,Esquivel-Sirvent2006}. The nonlocal response of metals has been recently considered in connection with quantum friction \cite{Reiche2019}. The unretarded van der Waals interaction between nonlocal half-spaces across a local medium has been derived in the context of metals \cite{Barton1979} and electrolytes \cite{Davies1972}. In the case of electrolytes, the common view is that only the Matsubara zero-frequency contribution as given by the DLP result~\cite{DLP1961} is modified by screening (see for instance~\cite{Woods2016} for a recent review), since the plasma frequency associated with the presence of ions is always much smaller than $k_BT/\hbar$ (where $T$ is the temperature). In other words, fluctuations at all nonzero Matsubara frequencies are too fast to be screened by ions in solution. The zero-frequency contribution is then usually considered apart from the nonzero Matsubara terms, and results are derived from the linear Poisson-Boltzmann equation, either by considering its Green function~\cite{Mitchell1974}, or by analyzing the zero-point energy of surface modes \cite{MahantyNinham1976,Parsegian2006} as in Ref.~\cite{VanKampen1968}. For the interaction between metallic plates, the zero frequency contribution is relevant only at very long distances, in the micrometer range and beyond.
Screening is hence negligible in the experiments probing the Casimir force between metallic surfaces across ethanol \cite{Munday2008,LeCunuder2018} at distances up to $\sim 10^2\,{\rm nm}.$ On the other hand, the zero frequency contribution provides a sizable fraction of the total interaction energy even at distances in the nanometer range if the electric permittivities of the interacting and intervening media are approximately matched in the infrared spectral range. An important example, given its applications in cell biology, is that of lipid layers interacting across an aqueous medium \cite{Parsegian1970,Parsegian1971}, particularly on account of the very large dielectric constant of water at zero frequency. The screening of the van der Waals force was inferred from measurements of the distance between lipid membranes (in the nanometer range) as a function of salt concentration \cite{Petrache2006}. Given the complexity of such systems, comparison with an ab-initio theoretical model for the van der Waals interaction appears to be a daunting task. Simpler configurations, more amenable to theoretical descriptions, could provide a testing ground for investigating the salt screening effect. Indeed, the zero frequency contribution dominates the Casimir interaction between polystyrene surfaces across a layer of water even at distances as low as $\sim 10^2\,{\rm nm}$ \cite{Parsegian1975,Russel1989,Ether2015}. Unfortunately, in this range the overall attractive signal is weak and a comparison with theory is difficult to implement~\cite{Bevan1999}, in part also because of surface roughness effects~\cite{MaiaNeto2005,vanZwol2008}. Recent force measurements with polystyrene microspheres for distances up to $\sim 20\,{\rm nm}$ are not sensitive to screening \cite{Elzbieciak-Wodka2014} (see also \cite{Trefalt2016} for a review). Very weak double-layer forces between polystyrene microspheres, of the order of $10\,{\rm fN},$ were recently measured with the help of optical tweezers \cite{Ether2015}. Optical tweezers are ideally suited to probe the zero-frequency contribution to the Casimir interaction, and hence its reduction by salt screening, since trapping with a single laser beam requires a condition of near index matching at the laser wavelength (typically in the near infrared). Such an experiment should allow for a comparison with theoretical models built on scattering theory. In this paper, we develop the scattering theory of the Casimir interaction between two parallel planar surfaces separated by a layer of an electrolyte solution. We take the non-local response of the electrolyte into account and analyze the propagation of longitudinal and transverse modes and their coupling by reflection at the surface of the (local) dielectric medium. The Casimir interaction energy is then derived from the matrix describing a round-trip of the electromagnetic waves propagating between the two interacting surfaces. When taking typical values for the salt concentration, we find that the presence of ions in solution does not change the contribution of nonzero Matsubara frequencies. For the zero-frequency case, we recover the result of Refs. \cite{Mitchell1974,MahantyNinham1976,Parsegian2006}, which is now reinterpreted as the screened contribution of longitudinal modes written in terms of the corresponding reflection coefficient. We also find an additional term, accounting for the contribution of transverse magnetic (TM) modes at zero frequency.
Our model is based on macroscopic Maxwell equations and constitutive equations for the different materials involved. We take the constitutive equation of a bulk electrolyte to describe its nonlocal response. Note, however, that the surfaces bounding the electrolyte modify the constitutive equations and the derived reflection coefficients \cite{Agarwal1971A,Agarwal1971B,Agarwal1971C,Maradudin1973}, which should alter our results at distances smaller than the characteristic Debye screening length. Results for the van der Waals interaction beyond the bulk approximation were derived in Ref.~\cite{Gorelkin1974}. A more microscopic theory, built on the analysis of charge fluctuations as in Refs.~\cite{Sarabadani2010,Dean2013,Dean2014}, would be required to take into account ion-specific effects and density correlations \cite{Dean2014B}, which could play a role at very short distances. The paper is organized in the following way. In Sec.~II we derive the reflection matrix describing the coupling between longitudinal and transverse modes propagating in the electrolyte solution. This matrix is the building block for developing the scattering formalism of the Casimir interaction in Sec.~III. Numerical results for the case of two polystyrene surfaces interacting across an aqueous medium are presented in Sec.~IV. Sec.~V contains concluding remarks. \section{Reflection matrix for the electrolyte-dielectric interface} The key ingredient for the scattering theory to be developed in the next section is the reflection matrix for the interface between the electrolyte solution and the local medium, describing the coupling between longitudinal and transverse magnetic waves. We start by reviewing the hydrodynamical model for 1:1 electrolytes in the bulk approximation~\cite{Davies1972}. In Fourier space, the constitutive equation for the ionic current is \begin{equation}\label{J} {\bf J}({\bf K},\omega) = \sigma_{\ell}(K,\omega){\bf E}_{\ell}({\bf K},\omega) + \sigma_{t}(\omega){\bf E}_{t}({\bf K},\omega), \end{equation} where ${\bf E}_{\ell}({\bf K})$ and ${\bf E}_{t}({\bf K})$ are the longitudinal and transverse components of the electric field, respectively. The transverse conductivity is local and given by the usual Drude-like model, whereas the longitudinal conductivity is nonlocal: \begin{eqnarray} \sigma_t(\omega) &=&\frac{\omega_P^2}{\gamma-i\omega},\\ \label{longsigma} \sigma_{\ell}(K,\omega)& =& \frac{\omega_P^2}{\gamma-i\omega+iv_{\rm th}^2\frac{K^2}{\omega}}. \end{eqnarray} We have taken $\epsilon_0=\mu_0=1.$ The plasma frequency is $\omega_P= \sqrt{N e^2/m}$, where $N$ is the number of free charge carriers per volume, $e$ is the electric charge of cations and $m$ is the mass of both cations and anions (assumed to be equal). The nonlocal behavior in real space translates into the $K$-dependence (spatial dispersion) in (\ref{longsigma}), which is controlled by the parameter \( v_{\rm th}=\sqrt{k_BT/m} \) representing the thermal average velocity of the ions in solution ($k_B=$ Boltzmann constant).
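To give a feeling for the magnitudes involved, the following Python sketch (not part of the derivation) restores SI units, which the text sets to $\epsilon_0=\mu_0=1$, and evaluates $\omega_P$ and $v_{\rm th}$; the salt concentration and the effective ion mass are illustrative assumptions, and $N$ is taken as the total number density of cations plus anions.
\begin{verbatim}
import numpy as np

# Physical constants in SI units (the text sets eps_0 = mu_0 = 1)
k_B  = 1.380649e-23      # J/K
hbar = 1.054571817e-34   # J s
e    = 1.602176634e-19   # C
eps0 = 8.8541878128e-12  # F/m
N_A  = 6.02214076e23     # 1/mol
amu  = 1.66053906660e-27 # kg

T      = 293.0           # K
c_salt = 0.09            # mol/L, illustrative 90 mM monovalent salt
m_ion  = 23.0 * amu      # kg, assumed effective ion mass (sodium-like)

N = 2.0 * c_salt * 1e3 * N_A                  # cations + anions per m^3
omega_P = np.sqrt(N * e**2 / (m_ion * eps0))  # plasma frequency (SI form)
v_th = np.sqrt(k_B * T / m_ion)               # thermal ion velocity

print(f"omega_P    = {omega_P:.2e} rad/s")
print(f"k_B T/hbar = {k_B * T / hbar:.2e} rad/s")
print(f"v_th       = {v_th:.0f} m/s")
# omega_P comes out roughly an order of magnitude below k_B*T/hbar,
# consistent with the statement that only the slowest (zero-frequency)
# fluctuations can be screened by the ions.
\end{verbatim}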
The electrolyte dielectric functions for transverse and longitudinal waves follow from the conductivities discussed above: \begin{eqnarray} \label{Deff2} \epsilon_1(\omega)& =&\epsilon_{ b}(\omega)-\frac{\omega_P^2}{\omega(\omega+i\gamma)} \\ \label{epsl} \epsilon_{\ell}({\bf K},\omega)& =& \epsilon_{b}(\omega)-\left(\frac{\omega(\omega+i\gamma)}{\omega_P^2}-\frac{\lambda_D^2}{\epsilon_{b}{}_0}K^2\right)^{-1} \end{eqnarray} where ${\epsilon}_b$ is the dielectric function of pure water at zero ionic concentration. We have introduced the Debye screening length in terms of the electrostatic permittivity of the medium $\epsilon_b{}_0:$ \begin{eqnarray}\label{def_lambdaD} \lambda_D= \sqrt{\epsilon_b{}_0}\,\frac{v_{\rm th}}{\omega_P}= \sqrt{\frac{\epsilon_{b}{}_0k_BT}{Ne^2}}, \end{eqnarray} which can be tuned by changing the salt concentration~$N.$ The spatial dispersion explicit in Eq.~(\ref{epsl}) allows for the propagation of longitudinal waves satisfying the dispersion relation \begin{equation} \label{longwaves} \epsilon_{\ell}({\bf K}_{\ell},\omega)=0. \end{equation} We write the wave-vector ${\bf K}_{\ell}= k_{\ell}\,\mathbf{\hat z}+{\bf k}$ in terms of its projection ${\bf k}$ on the $xy$ plane. Transverse waves satisfy the standard dispersion relation \( \epsilon_1(\omega){\omega^2}/{c^2}= k_1^2+k^2, \) with ${\bf K}_{t}= k_1\,\mathbf{\hat z}+{\bf k}.$ We now consider the reflection of longitudinal and transverse waves propagating in the electrolyte by a planar interface perpendicular to the $z$-axis. For a general oblique incidence, TM and longitudinal waves become coupled by reflection, while transverse electric (TE) waves are reflected following the standard Fresnel formula. The frequency $\omega$ and wavevector projection parallel to the surface $\bf k$ are conserved by reflection. In Appendix \ref{sec:appendixA}, we derive the reflected fields for a general incident wave propagating from the electrolyte. In addition to the usual boundary conditions for the tangential electric and magnetic fields, we take the condition for the ionic current $J_z=0$ at the interface at $z=0$~\cite{Davies1972}. We use the indices $s,p$ to represent TE and TM polarizations, respectively. We cast the results in terms of the block-diagonal reflection matrix ${\cal R}$ giving the reflected fields as a linear combination of incident tranverse and longitudinal waves: \[ \pmatrix{ E_{s}^{(r)} \cr { E}_{p}^{(r)} \cr{ E}_{\ell}^{(r)} \cr}= {\cal R} \pmatrix{{ E}_{s}^{\rm in} \cr { E}_{p}^{\rm in} \cr{ E}_{\ell}^{\rm in} \cr} \] \begin{equation}\label{Rmatrix3x3} {\cal R}= \pmatrix{r_{ss}&0&0 \cr 0 & r_{pp}&r_{p \ell} \cr 0 & r_{\ell p}&r_{\ell\ell} \cr} \end{equation} $r_{ss}$ is the standard Fresnel coefficient for TE polarization, which is not modified by the presence of ions: \begin{equation}\label{RTE} r_{ss}=\frac{k_1-k_2}{k_1+k_2}. 
\end{equation} The Fresnel coefficient for TM polarization $r_{pp}$ is modified by the coupling with longitudinal waves: \begin{eqnarray} \label{Rtt} r_{pp}= \frac{\epsilon_2k_1-\epsilon_1k_2+\frac{k^2}{k_{\ell}}\frac{\epsilon_2}{\epsilon_b}\left(\epsilon_1-\epsilon_b\right)} {\epsilon_2k_1+\epsilon_1k_2-\frac{k^2}{k_{\ell}}\frac{\epsilon_2}{\epsilon_b}\left(\epsilon_1-\epsilon_b\right)} \end{eqnarray} and the diagonal element for longitudinal waves in Eq.~(\ref{Rmatrix3x3}) is given by \begin{eqnarray} \label{Rll} r_{\ell\ell}= \frac{\epsilon_2k_1+\epsilon_1k_2+\frac{\epsilon_2}{\epsilon_b}\frac{k^2}{k_{\ell}}\left(\epsilon_1-\epsilon_b\right)} {\epsilon_2k_1+\epsilon_1k_2-\frac{\epsilon_2}{\epsilon_b}\frac{k^2}{k_{\ell}}\left(\epsilon_1-\epsilon_b\right)} \end{eqnarray} The nondiagonal matrix elements describe the conversion between TM-polarized and longitudinal waves: \begin{eqnarray} \label{Rtilde} r_{\ell p}&=& \frac{2\frac{k}{k_{\ell}}k_1 \frac{\epsilon_2}{\epsilon_b}\left(\epsilon_1-\epsilon_b\right)} {\epsilon_2k_1+\epsilon_1k_2-\frac{k^2}{k_{\ell}}\frac{\epsilon_2}{\epsilon_b}\left(\epsilon_1-\epsilon_b\right)}\, \frac{\sqrt{k_{\ell}^2+k^2}}{\sqrt{\epsilon_1}\,\omega/c}\\ \label{Rtilde2} r_{p \ell}&=& \frac{2\epsilon_2 k} {\epsilon_2k_1+\epsilon_1k_2-\frac{k^2}{k_{\ell}}\frac{\epsilon_2}{\epsilon_b}\left(\epsilon_1-\epsilon_b\right)} \, \frac{\sqrt{\epsilon_1}\,\omega/c}{\sqrt{k_{\ell}^2+k^2}} \end{eqnarray} At normal incidence ($k=0$), the reflection matrix is diagonal, and $r_{pp}$ coincides with the standard Fresnel coefficient $r_{\rm TM}$ for TM polarization as expected. We also recover the standard TM Fresnel coefficient at frequencies $\omega\gg \omega_P,$ since $\epsilon_1\approx\epsilon_b$ according to Eq.~(\ref{Deff2}) in this case. The ions are too slow to couple transverse and longitudinal waves at such large field frequencies, and then the reflection matrix is approximately diagonal. Such property entails that the contribution of nonzero Matsubara frequencies are nearly unaffected by the presence of ions in solution, as discussed in the next section. \section{Round-Trip Matrix and the Casimir interaction energy} In this section, we derive the Casimir interaction energy between two local dielectric half-spaces separated by a layer (thickness $L$) of a (non-local) electrolyte solution, as depicted in Fig.~1. For simplicity, we assume that the local media on both sides have the same electromagnetic properties. Then the reflection matrices at the two interfaces are identical. \begin{figure}[htbp] \begin{center} \includegraphics[width=7.5cm]{scheme} \end{center} \caption{Casimir interaction across a layer of thickness $L$ containing a non-local electrolyte solution, with dielectric functions $\epsilon_1$ and $\epsilon_{\ell}$ for transverse and longitudinal waves, respectively. For simplicity, we assume that the interacting (local) half-spaces share the same dielectric function $\epsilon_2.$ The Casimir energy is computed from the matrix ${\cal R}$ describing the coupling between longitudinal and transverse waves by reflection at the interface. } \end{figure} After a Wick rotation, the Casimir free energy is written as a sum over the Matsubara frequencies $\xi_n=2\pi\, n\,k_B T/\hbar,$ with $n$ integer. 
The (nonlocal) longitudinal dielectric constant (\ref{epsl}) is then written as \begin{equation}\label{epsl_imag} \epsilon_{\ell}({\bf K},|\xi_n|) = \epsilon_{b}(i|\xi_n|)+ \frac{\omega_P^2}{|\xi_n|(|\xi_n|+\gamma) +v_{\rm th}^2 K^2} \end{equation} Eq.~(\ref{epsl_imag}) shows that $\epsilon_{\ell}({\bf K},|\xi_n|)$ is a Lorentzian of width $\approx|\xi_n|/v_{\rm th}$ for any nonzero Matsubara frequency. In real space, the displacement field $\bf D$ is then given in terms of the electric field $\bf E$ by a convolution integral with an exponential kernel, corresponding to the characteristic length scale \begin{equation} \label{sn} \lambda_n\equiv v_{\rm th}/|\xi_n|= \lambda_{\rm dB}/[(2\pi)^{3/2}|n|], \end{equation} where $\lambda_{\rm dB}=\left(\frac{2\pi \hbar^2}{mk_BT}\right)^{\frac12}$ is the thermal de Broglie wavelength of the ions at room temperature. Since this length is extremely small, we conclude that the electrolyte behaves approximately as a local medium for all nonzero Matsubara frequencies, as discussed in further detail below. On the other hand, for the zero frequency (\ref{epsl_imag}) yields \( \epsilon_{\ell}({\bf K},0) = \epsilon_{b0}(1+ 1/(\lambda_D K)^2), \) so that the scale of variation with $K$ is now controlled by the Debye screening length $\lambda_D$ instead of the de Broglie wavelength. Thus, we expect strong nonlocal effects and the contribution of longitudinal modes as far as the zero-frequency contribution is concerned. The Casimir free energy is written in terms of the round-trip operator ${\cal M}(|\xi_n|)$ describing the scattering of longitudinal and transverse waves between the interacting surfaces of area $A$: \begin{eqnarray}\label{main_sum} {\cal F} &=& \sum_{n=-\infty}^{\infty}\,{\cal F}_n\\ \frac{{\cal F}_n}{A} &=& \frac{k_B T}{2}\,\int\frac{d^2k}{(2\pi)^2}\, \ln \det [1-{\cal M}(|\xi_n|)] \label{mainF} \end{eqnarray} The round-trip matrix ${\cal M}(|\xi_n|)$ is given by \begin{equation} {\cal M}(|\xi_n|)={\cal R}\,e^{-{\cal K} L}\,{\cal R}\,e^{-{\cal K} L} \end{equation} The reflection matrix ${\cal R}$ for the electrolyte-dielectric interface was derived in the previous section. We replace $\omega$ by $i|\xi_n|$ and the axial wave-vector components for the transverse waves in medium $m=1,2$ by $i\kappa_{m}=i\sqrt{k^2 + \epsilon_m \xi_n^2/c^2}.$ The longitudinal waves in the electrolyte correspond to \[ i \kappa_{\ell}=i\sqrt{k^2 + \frac{1}{v_{\rm th}^2}\left( |\xi_n|(|\xi_n|+\gamma)+\frac{\omega_P^2}{\epsilon_b(i|\xi_n|)}\right)}. \] The propagation matrix $e^{-{\cal K} L}$ is diagonal: \begin{equation} e^{-{\cal K} L}= \pmatrix{e^{-\kappa_1 L}&0&0 \cr 0 & e^{-\kappa_1 L}&0 \cr 0 & 0&e^{-\kappa_{\ell} L} \cr} \end{equation} When writing the scattering formula (\ref{mainF}), we have assumed a condition of full thermodynamical equilibrium for all scattering channels, including the longitudinal modes associated to the presence of movable ions. In addition, our derivation is based on the bulk model for the non-local response reviewed in the previous section. Given such assumptions, we are not allowed to take the limit $L\ll \lambda_D,$ which would eventually suppress the longitudinal channel and introduce the effect of the interfaces already at the level of the constitutive equations. 
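Before evaluating the determinant, it is instructive to check numerically how small the nonlocal length scale $\lambda_n$ of Eq.~(\ref{sn}) actually is. The short Python sketch below does so for the first few Matsubara orders at room temperature; the ion mass is an assumed illustrative value.
\begin{verbatim}
import numpy as np

k_B  = 1.380649e-23      # J/K
hbar = 1.054571817e-34   # J s
amu  = 1.66053906660e-27 # kg

T     = 293.0            # K
m_ion = 23.0 * amu       # assumed effective ion mass
v_th  = np.sqrt(k_B * T / m_ion)

for n in range(1, 4):
    xi_n = 2.0 * np.pi * n * k_B * T / hbar   # Matsubara frequency
    lambda_n = v_th / xi_n                    # nonlocal length scale
    print(f"n={n}: xi_n = {xi_n:.2e} rad/s, lambda_n = {lambda_n*1e12:.2f} pm")
# lambda_1 is of order 1e-12 m, well below the Angstrom scale, so the
# electrolyte indeed responds locally at all nonzero Matsubara frequencies.
\end{verbatim}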
The determinant in Eq.~(\ref{mainF}) is evaluated explicitly: \begin{equation} \det [1-{\cal M}(|\xi_n|)]= \left(1-r_{ss}^2e^{-2\kappa_1 L}\right)\frac{a_0+a_1\,\Delta + a_2\,\Delta^2}{(1+\Delta)^2} \label{finalgeral} \end{equation} with $r_{ss}$ defined by Eq.~(\ref{RTE}). The parameter $\Delta$ quantifies the strength of the coupling between TM and longitudinal waves: \begin{eqnarray} \Delta &=& \frac{\epsilon_2}{\epsilon_b} \frac{k^2(\epsilon_1-\epsilon_b)}{\kappa_{\ell}(\epsilon_2\kappa_1+\epsilon_1\kappa_2)} \end{eqnarray} and also \begin{eqnarray} a_0&=&(1-e^{-2\kappa_{\ell} L}) \left[1-\left(\frac{\epsilon_2\kappa_1-\epsilon_1\kappa_2}{\epsilon_2\kappa_1+\epsilon_1\kappa_2}\right)^2e^{-2\kappa_1 L}\right]\nonumber\\ a_1&=&2(1+e^{-2\kappa_{\ell} L}) \left(1+\frac{\epsilon_2\kappa_1-\epsilon_1\kappa_2}{\epsilon_2\kappa_1+\epsilon_1\kappa_2}e^{-2\kappa_1 L}\right)\nonumber\\ && +\frac{8\epsilon_2\kappa_1} {\epsilon_2\kappa_1+\epsilon_1\kappa_2}e^{-(\kappa_1+\kappa_{\ell}) L}\nonumber\\ a_2&=&(1-e^{-2\kappa_{\ell} L})(1-e^{-2\kappa_1 L}) \end{eqnarray} Since $k_BT\gg \hbar \omega_P$ for electrolytes, for nonzero Matsubara frequencies we have $\epsilon_1-\epsilon_b\ll 1$ and then $\Delta\ll 1.$ Thus, the round-trip matrix is approximately diagonal leading to independent contributions from the three polarizations: \begin{eqnarray}\label{Lifshitz} \frac{{\cal F}_n}{A}& \approx & \frac{k_B T}{2}\,\int\frac{d^2k}{(2\pi)^2}\, \left[\sum_{\sigma=s,p}\ln(1-r_{\sigma\sigma}^2e^{-2\kappa_1 L})\right.\label{Deltaequals0}\\ &&\;\;\;+\,\ln(1-e^{-2\kappa_{\ell} L})\Biggr]\;\;\;\;\;\;\;(n\neq 0)\nonumber \end{eqnarray} As $\epsilon_1\approx\epsilon_b,$ we can ignore the presence of ions when computing the TM Fresnel reflection coefficients $r_{pp}$ in (\ref{Lifshitz}) and use instead the standard result for local dielectric media \[ r_{pp} \approx \frac{\epsilon_2\kappa_1-\epsilon_1\kappa_2}{\epsilon_2\kappa_1+\epsilon_1\kappa_2}\;\;\;\;\;\;\;\;\; (n\neq 0) \] In addition, we can approximate $\kappa_{\ell}\approx \sqrt{k^2+ \frac{\xi_n^2}{ v_{\rm th}^2}}$ when computing the contribution of longitudinal modes, corresponding to the second term in the rhs of (\ref{Deltaequals0}): \[ \frac{{\cal F}_{n}^{\rm long}}{A} \approx -\frac{k_BT}{8\pi \lambda_n \, L} \exp\left(-\frac{2L}{\lambda_n}\right),\;\;(n\neq 0) \] where $\lambda_n,$ defined by Eq.~(\ref{sn}), is the characteristic nonlocal length at nonzero Matsubara frequencies. Since $\lambda_n$ is smaller than the thermal de Broglie wavelength of the ions and is below the \r{A}ngstr\"om range, ${\cal F}_{n}^{\rm long}$ is exponentially suppressed. Thus, we recover the standard DLP result for local materials \cite{DLP1961} for all nonzero Matsubara frequencies. For the zero frequency contribution, we find \[ a_0=a_2= (1-e^{-2\sqrt{k^2+1/\lambda_D^2}\, L})(1-e^{-2k L}) \] \begin{equation} a_1= 2(1+e^{-2\sqrt{k^2+1/\lambda_D^2}\, L})(1-e^{-2k L}). 
\end{equation} The expression given by (\ref{finalgeral}) then simplifies, and Eq.~(\ref{mainF}) leads to \begin{eqnarray}\label{zero_dielectric_material} \frac{{\cal F}_{0}}{A} = & \frac{k_B T}{2}& \Biggl[-\frac{\zeta(3)}{8\pi L^2}\\ & &+ \int\frac{d^2k}{(2\pi)^2}\ln\left(1-r_{\ell\ell}^2 e^{-2\sqrt{k^2+1/\lambda_D^2} L}\right)\Biggr] \nonumber \end{eqnarray} with the longitudinal reflection coefficient obtained from (\ref{Rll}) by taking $\xi=0:$ \begin{eqnarray}\label{Rllzero} r_{\ell\ell}=\frac{\epsilon_b\sqrt{k^2+1/\lambda_D^2}-\epsilon_2k}{\epsilon_b\sqrt{k^2+1/\lambda_D^2}+\epsilon_2k} \;\;\;\;(n=0) \end{eqnarray} The first term in the rhs of (\ref{zero_dielectric_material}) accounts for the contribution of TM modes and is written in terms of the value of the Riemann zeta function $\zeta(3)\approx 1.202.$ The result (\ref{zero_dielectric_material}) can also be obtained directly from the reflection matrix ${\cal R}$ by noting that $r_{p \ell}r_{\ell p}\rightarrow 0$ and $r_{pp}\rightarrow -1$ when $\xi\rightarrow 0.$ In conclusion, we find that the modification of the nonzero Matsubara frequency contributions on account of the movable ions is very small. For the zero-frequency case, on the other hand, we find a (screened) contribution from longitudinal waves, the second term in the r.-h.-s. of (\ref{zero_dielectric_material}), which coincides with the result of Refs.~\cite{Mitchell1974,MahantyNinham1976,Parsegian2006}. Within the scattering approach, such contribution is written in terms of the coefficient $r_{\ell \ell}$ describing the reflection of longitudinal waves at the limit of zero frequency. According to Eq.~(\ref{zero_dielectric_material}), the screened contribution of longitudinal waves is accompanied by an unscreened contribution from the TM-polarized modes, which is not suppressed even in the limit of strong screening. \section{A numerical example: polystyrene surfaces interacting across an aqueous solution} In this section, we apply the formal expressions derived previously to the important example of polystyrene half-spaces interacting across a layer of an aqueous solution. We take $T=293\,{\rm K}$ and the Lorentz model with the parameters given by Ref.~\cite{vanZwol2010} to describe the required dielectric functions. Similar results are obtained by taking the models proposed in Ref.~\cite{Russel1989}. It is convenient to define the Hamaker coefficient \cite{Russel1989} \begin{equation}\label{def_Hamaker} H(L) = -12 \pi\, L^2\, \frac{{\cal F}(L)}{A}. \end{equation} In Fig.~2, we plot $H$ (in units of $k_B T$) as a function of $L/\lambda_D.$ We consider two different values for the monovalent salt concentration: $90\,{\rm mM}$ yielding $\lambda_D= 1\,{\rm nm}$ and $0.9\,{\rm mM}$ corresponding to $\lambda_D= 10\,{\rm nm}.$ They are represented by the black and blue (dark grey) lines in Fig.~2, respectively, which are calculated by combining (\ref{main_sum})-(\ref{mainF}) with the full exact expression (\ref{finalgeral}). As discussed in connection with Eq.~(\ref{Lifshitz}), the contribution from nonzero frequencies is well approximated by the DLP standard result \cite{DLP1961} neglecting the presence of ions in solution. For the examples shown in Fig.~2, the relative difference between the exact and DLP results for the nonzero-frequency contribution is of the order of $10^{-5}.$ The red (light grey) line in Fig.~2 corresponds to the separate zero-frequency contribution as computed from Eq.~(\ref{zero_dielectric_material}). 
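As an illustration, Eq.~(\ref{zero_dielectric_material}) can be evaluated by direct numerical integration with a few lines of Python, shown below with representative input values. The static permittivities are illustrative assumptions rather than the Lorentz-model parameters of Ref.~\cite{vanZwol2010} used for the figures.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

k_B, T = 1.380649e-23, 293.0       # J/K, K

# Illustrative static permittivities (assumed representative values):
eps_b0 = 80.0                       # water
eps_2 = 2.5                         # polystyrene

def zero_freq_free_energy(L, lam_D):
    """Zero-frequency free energy per area, Eq. (zero_dielectric_material)."""
    def integrand(u):               # u = k * L (dimensionless)
        k = u / L
        kl = np.sqrt(k**2 + 1.0 / lam_D**2)
        r_ll = (eps_b0 * kl - eps_2 * k) / (eps_b0 * kl + eps_2 * k)
        return (k / (2.0 * np.pi)) * np.log(1.0 - r_ll**2
                                            * np.exp(-2.0 * kl * L)) / L
    integral, _ = quad(integrand, 0.0, 50.0)   # exp(-2kL) cuts off large u
    tm_term = -zeta(3) / (8.0 * np.pi * L**2)  # unscreened TM modes
    return 0.5 * k_B * T * (tm_term + integral)

L, lam_D = 5e-9, 10e-9
F0_over_A = zero_freq_free_energy(L, lam_D)
H0 = -12.0 * np.pi * L**2 * F0_over_A          # Eq. (def_Hamaker) at n = 0
print(f"H_0 = {H0 / (k_B * T):.2f} k_B T")
# For L >> lam_D the longitudinal term is screened away and H_0 approaches
# the TM-only value (3/4)*zeta(3)*k_B*T, roughly 0.9 k_B T.
\end{verbatim}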
The resulting contribution to the Hamaker coefficient is a universal function of $L/\lambda_D$ exhibiting two well-defined plateaus, with a crossover at $L/\lambda_D\sim 1.$ At short distances, $L\ll \lambda_D,$ we add the contribution of longitudinal channels, whose magnitude is controlled by the reflection coefficient $r_{\ell \ell}$ given by (\ref{Rllzero}), to the universal constant value $H_0^{\rm TM} = \frac{3}{4}\zeta(3)k_BT\approx 0.9\, k_BT$ arising from TM-polarized modes. As the distance increases, the former is suppressed by screening, while the latter defines the asymptotic limit of the total Hamaker coefficient at long distances. Indeed, as the distance approaches the thermal wavelength $\hbar c /(k_B T),$ the nonzero-frequency contribution is also exponentially suppressed, and then the Hamaker coefficient goes to the zero-frequency asymptotic value $H_0^{\rm TM}.$ Such behaviour is indicated in Fig.~2, particularly for the blue (dark grey) curve corresponding to $\lambda_D= 10\,{\rm nm},$ since larger distances are shown in this case. The contribution of nonzero frequencies is maximized at short distances. When added to the zero-frequency value, it defines the unretarded Hamaker `constant' $H(0)$ corresponding to the short-distance plateau for the black and blue (dark grey) lines. The former, corresponding to $\lambda_D=1\,{\rm nm},$ develops a second plateau at intermediate distances such that $\lambda_D\ll L \ll \lambda_0,$ with $\lambda_0$ representing the typical scale for the resonance wavelengths of water and polystyrene. In this range, the longitudinal zero-frequency term is suppressed by ionic screening, but the nonzero-frequency contribution is still approximately unaffected by electrodynamical retardation. On the other hand, when considering $\lambda_D=10\,{\rm nm},$ both screening and retardation take place in approximately the same distance range, leading to the more steady decay of the Hamaker coefficient shown by the blue (dark grey) line in Fig.~2. \begin{figure}[htbp] \begin{center} \includegraphics[width=7.5cm]{plot_Hamaker} \end{center} \caption{ Variation of the Hamaker coefficient (in units of $k_B T$) with distance (in units of the Debye screening length $\lambda_D$). We consider two polystyrene surfaces interacting across an aqueous solution. Black: $\lambda_D=1\,{\rm nm};$ blue (dark grey): $\lambda_D=10\,{\rm nm}.$ The red (light grey) line represents the zero-frequency contribution, which is a universal function of $L/\lambda_D.$ } \label{Hamaker} \end{figure} The results obtained in this section can be directly applied to the interaction between two polystyrene microspheres across an aqueous solution, provided that their radii $R_1$ and $R_2$ are much larger than the distance. In this case, we can take the proximity force approximation (PFA), also known as the Derjaguin approximation \cite{Derjaguin1934}, in order to derive the attractive Casimir force $F_{\rm SS}$ between the two spheres from the free energy for parallel planar surfaces taken at the distance of closest approach $L:$ \begin{equation} F_{\rm SS} = 2\pi R_{\rm eff} \, \frac{{\cal F}(L)}{A}\label{PFA} \end{equation} with $R_{\rm eff}= R_1R_2/(R_1+R_2).$ In Fig.~\ref{spheres}, we plot $|F_{\rm SS}|/R_{\rm eff}$ versus distance taking $\lambda_D=10\,{\rm nm}.$ The solid line corresponds to the scattering theory and is obtained from the results for the Hamaker coefficient displayed in Fig.~\ref{Hamaker} combined with Eqs.~(\ref{def_Hamaker}) and (\ref{PFA}).
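Combining Eqs.~(\ref{def_Hamaker}) and (\ref{PFA}) gives the simple conversion used for the solid curve; a minimal sketch with purely illustrative numbers is shown below.
\begin{verbatim}
# Sphere-sphere force within the PFA, from Eqs. (def_Hamaker) and (PFA):
# F_SS = 2*pi*R_eff * F(L)/A = -R_eff * H(L) / (6 * L**2)
k_B, T = 1.380649e-23, 293.0

def pfa_force(H, L, R1, R2):
    """Attractive sphere-sphere Casimir force from the Hamaker coefficient H(L)."""
    R_eff = R1 * R2 / (R1 + R2)
    return -R_eff * H / (6.0 * L**2)

# Illustrative values only: H of order k_B*T, micron-sized spheres.
F = pfa_force(H=1.0 * k_B * T, L=100e-9, R1=1.5e-6, R2=1.5e-6)
print(f"|F_SS| ~ {abs(F)*1e15:.0f} fN")
\end{verbatim}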
The dashed line shows the results calculated from the linear Poisson-Boltzmann equation for the zero-frequency contribution \cite{Mitchell1974,MahantyNinham1976,Parsegian2006}, combined with the DLP formalism \cite{DLP1961} for the nonzero frequencies. The numerical difference between the two curves arises from the contribution of TM modes in the limit $\xi\rightarrow 0,$ which is taken into account within the scattering theory but not by the approach of Refs.~\cite{Mitchell1974,MahantyNinham1976,Parsegian2006}. At short distances, the zero-frequency TM contribution is a relatively small fraction of the total interaction energy. Thus, the two curves shown in Fig.~\ref{spheres} are close to each other for $L\stackrel{<}{\scriptscriptstyle\sim}10\,{\rm nm},$ which is the typical range probed with polystyrene colloids \cite{Elzbieciak-Wodka2014,Trefalt2016}. Specifically, the relative discrepancy increases from $35\%$ at $L=1\,{\rm nm}$ to $48\%$ at $L=10\,{\rm nm}.$ On the other hand, Fig.~\ref{spheres} shows that the two models deviate strongly from each other as $L$ increases past the length scales $\lambda_D$ and $\lambda_0,$ due to the suppression of the longitudinal and nonzero-frequency terms as discussed in connection with Fig.~\ref{Hamaker}. An order-of-magnitude discrepancy is found at $L=100\,{\rm nm}.$ Such a range of distances might be reachable by employing optical tweezers \cite{Ether2015}, which allow for the measurement of very weak interactions. \begin{figure}[htbp] \begin{center} \includegraphics[width=7.5cm]{force_screened} \end{center} \caption{ Variation of the Casimir force $F_{\rm SS}$ between two polystyrene microspheres with distance $L.$ The microspheres of radii $R_1$ and $R_2$ interact across an aqueous solution with $\lambda_D=10\,{\rm nm}.$ We assume that $R_1,R_2\gg L$ and calculate the force within the proximity force approximation, which is proportional to the effective radius $R_{\rm eff}=R_1R_2/(R_1+R_2).$ The dashed line is calculated by considering longitudinal channels at zero frequency alongside the nonzero frequency modes, whereas the solid line also adds TM channels in the limit of zero frequency, in accordance with the scattering approach. } \label{spheres} \end{figure} \section{Conclusion} We have developed the scattering approach to the Casimir interaction across an electrolyte solution. The key ingredient in our formalism is the matrix describing the reflection of transverse and longitudinal waves at the interfaces between the electrolyte and the interacting dielectric materials. Our derivation considers arbitrary frequencies, and the zero-frequency contribution is obtained by taking the limit $\xi\rightarrow 0$ at the very end. As expected, we find that the ions in solution do not modify, to a very good approximation, the contributions at nonzero Matsubara frequencies. At zero frequency, we find a screened contribution of longitudinal scattering channels which agrees with the result of previous derivations based on the linear Poisson-Boltzmann equation~\cite{Mitchell1974,MahantyNinham1976,Parsegian2006}. Within the scattering approach, such a screened contribution is cast in terms of the reflection amplitude $r_{\ell \ell}$ for longitudinal waves. Our derivation provides new insight into the nature of the screened contribution to the Casimir force, and paves the way for generalizations to other setups and geometries.
In addition to the contribution of longitudinal channels, we find a second contribution at zero frequency, associated to TM-polarized modes, which is not screened by the presence of ions and as a consequence defines the asymptotic behavior of the interaction at long distances. Our results are based on the bulk model for the electromagnetic response of the electrolyte. Hence they should be valid for distances $L\stackrel{>}{\scriptscriptstyle\sim} \lambda_D.$ This condition overlaps with the distance range that allows for the suppression of the double-layer interaction between charged dielectric surfaces. Thus, our derivation is well adapted to experimental conditions aiming at isolating the Casimir interaction from electrostatic force signals. {\bf Acknowledgements.} This work has been supported by Centre National de la Recherche Scientifique (CNRS) and Sorbonne Universit\'e through their collaboration programs Projet International de Coop\'eration Scientifique (PICS) and Convergence International, respectively. We also acknowledge partial financial support by the Brazilian agencies Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'ogico (CNPq), Coordena\c c\~ao de Aperfei\c coamento de Pessoal de N\'{\i}vel Superior (CAPES), Funda\c c\~ao de Amparo \`a Pesquisa do Estado de Minas Gerais (FAPEMIG), Funda\c c\~ao Carlos Chagas Filho de Amparo \`a Pesquisa do Estado do Rio de Janeiro (FAPERJ) and Funda\c c\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo (FAPESP). {\bf Author contribution statement} All authors contributed to the development of the theoretical model and the interpretation of the results.
\section{Introduction} \IEEEPARstart{S}{mart} Meters (SMs) are a cornerstone for the development of smart electrical grids. These devices are able to report the power consumption measurements of a house to a utility provider every hour or even every few minutes. This feature generates a considerable amount of useful data, which enables several applications in almost real time, such as power quality monitoring, timely fault detection, demand response, energy theft prevention, etc.~\cite{alahakoon2016,wang2019,depuru2011}. However, this fine-grained power consumption monitoring poses a threat to consumers' privacy. In fact, it has been shown that simple algorithms, known in general as Non-Intrusive Load Monitoring (NILM) methods, can readily be used to infer the types of appliances being used at a home at a given time, even without any prior knowledge about the household \cite{molina2010}. Since these features are highly correlated with the presence of people at the dwelling and their personal habits \cite{giaconi2018privacy}, this induces serious privacy concerns, which can have an impact on the acceptance and deployment pace of SMs \cite{mckenna2012,cuijpers2013}. The natural challenge raised here is: how can privacy be enhanced while preserving the utility of the data? Although this problem has been widely studied in the context of data science \cite{jain2016}, the time series structure of SMs data requires a particular treatment \cite{asghar2017}. For further details, the reader is referred to~\cite{giaconi2018privacy}. Simple approaches to privacy preservation in the context of SMs include data aggregation and encryption \cite{li2010,rottondi2013}, the use of pseudonyms rather than the real identities of users \cite{efthymiou2010smart}, downsampling of the data \cite{mashima2015authenticated,eibl2015} and random noise addition \cite{barbosa2016}. However, these methods often restrict the potential applications of the SMs data. For instance, downsampling of the data may incur time delays in detecting critical events, while data aggregation degrades the positioning and accuracy of the power measurements. A formal approach to the privacy problem has been presented in \cite{sankar2013smart} from an information-theoretic perspective, where it has been proposed to assess privacy by the Mutual Information (MI) between the sensitive and released variables. More specifically, the authors model the power measurements of SMs with a hidden Markov model in which the distribution of the measurements is controlled by the state of the appliances, and for each particular state the distribution of power consumption is modeled as Gaussian. This model is then used to obtain the privacy-utility trade-off using tools from rate-distortion theory \cite{cover2006elements}. Although this approach is very appealing, it has two important limitations for its application to real-time scenarios with actual data. First, the privacy measure does not capture the causal time dependence and processing of the data, which is an essential feature of the problem. Second, the Gaussian model is quite restrictive in practice. The first limitation has been addressed in \cite{erdogdu2015}, where it is shown that, for an online scenario, the privacy measure should be based on Directed Information (DI) \cite{massey1990causality}, which is the privacy measure that we will adopt in the present work.
We will further address the second limitation by taking a data-based approach in which no explicit constraints on the distributions or statistics of the involved variables are assumed. A more sophisticated approach considers the use of Rechargeable Batteries (RBs) and Renewable Energy Sources (RES) in homes in order to modify the actual energy consumption of users with the goal of hiding the sensitive information \cite{kalogridis2010,tan2013,acs2012,zhao2014achieving,li2018information,giaconi2018,erdemir2019privacy}. Many of these works borrow ideas from the well-known principle of differential privacy \cite{dwork2008}, which seems to be better suited for fixed databases than for time series data \cite{asghar2017}. The main motivation to introduce the use of physical resources into the privacy problem comes from the observation that this approach does not require any distortion in the actual SMs measurements, which means that there is no loss in terms of utility. However, the incorporation of physical resources not only makes the problem more complex and limited in scope, but can also generate a significant cost to users due to the faster wear of the RBs as a consequence of the increased charging/discharging rate \cite{giaconi2018privacy}. On the other hand, the required level of distortion for a specific privacy goal in a realistic scenario, in which the attacker threatening privacy has only partial information, is still an open question. Thus, the need for and convenience of these solutions is questionable. As a matter of fact, in this work, we show that under some conditions the privacy-utility trade-off may be much less severe than expected. However, it is important to note that these approaches are complementary to the ones based on distorting the power measurements rather than alternatives to them. Thus, for simplicity, we assume that no RBs and/or RESs are available and distortion is the only means to achieve a desired privacy level. The use of neural networks to model an attacker has been considered in \cite{wang2017}. However, a more powerful formulation of the problem assumes that both the releaser (or privatizer) and the attacker are deep neural networks (DNNs) that are trained simultaneously based on a minimax game, an idea that is inspired by the well-known Generative Adversarial Networks (GANs) \cite{goodfellow2014}. This concept can be referred to as Generative Adversarial Privacy (GAP) \cite{huang2018} and is the basis for our approach. It should be mentioned that the concept of GAP has been studied for different applications related to images \cite{tripathy2019,feutry2018} but, to the best of our knowledge, not in the context of SMs time series data. In these works, the authors consider i.i.d. data and deep feed-forward neural networks for the releaser and attacker, while in this paper we consider deep Recurrent Neural Networks (RNNs) to capture and exploit the time correlation. The idea of time-series generation with an adversarial approach has been considered in \cite{esteban2018} for medical data, based on the principle of differential privacy. As we mentioned previously, our approach is instead based on DI, an information-theoretic measure of privacy. In summary, the main contributions of this paper are the following: \begin{enumerate}[(i)] \item We apply DI as a privacy measure, similarly to \cite{erdogdu2015}.
However, unlike this and previous works, we impose no explicit assumptions on the generating model of the power measurements, but take a more versatile data-driven approach. \item We study different possible distortion measures, which provide more flexibility to control the specific features to be preserved in the released signals, that is, the relevant features for the target applications. \item For the sake of computational tractability, we propose a loss function for training the privacy-preserving releaser based on an upper bound on DI. Then, considering an attacker that minimizes a standard cross-entropy loss, we show that this leads to an adversarial framework based on two RNNs to train the releaser. \item We perform an extensive statistical study with actual data from three different data sets and frameworks motivated by real-world scenarios to characterize the utility-privacy trade-offs and the nature of the distortion generated by the releaser network. \item We investigate the data mismatch problem in the context of SMs privacy, which occurs when the data available to the attacker is not the same as that used for training the releaser mechanism, and show that it has an important impact on the privacy-utility trade-off. \end{enumerate} The rest of the paper is organized as follows. In Section \ref{sec:formulation}, we present the theoretical formulation of the problem that motivates the loss functions for the releaser and attacker. Then, in Section \ref{sec:model}, the privacy-preserving adversarial framework is introduced along with the training algorithm. Extensive results are presented and discussed in Section \ref{sec:results}. Finally, some concluding remarks are presented in Section \ref{sec:conclusion}. \subsection*{Notation and conventions} \begin{itemize} \item $X^T = (X_1, \ldots, X_T)$: A sequence of random variables, or a time series, of length $T$; \item $x^T = (x_1, x_2, \ldots, x_T)$: A realization of $X^T$; \item $x^{(i)T} = (x^{(i)}_1, x^{(i)}_2, \ldots, x^{(i)}_T)$: The $i^{\text{th}}$ sample in a minibatch used for training; \item $\E[X]$: The expectation of a random variable $X$; \item $p_X$: The distribution of $X$; \item $I(X;Y)$: Mutual information between random variables $X$ and $Y$ \cite{cover2006elements}; \item $H(X)$: Entropy of random variable $X$; \item $I(X^T \to Y^T)$: Directed information between two time series $X^T$ and $Y^T$; \item $H(X^T||Y^T)$: Causally conditional entropy of $X^T$ given $Y^T$ \cite{kramer2003}; \item ${X -\!\!\!\!\minuso\!\!\!\!- Y -\!\!\!\!\minuso\!\!\!\!- Z}$: Markov chain among $X$, $Y$ and $Z$. \end{itemize} \section{Problem Formulation and Training Loss} \label{sec:formulation} \subsection{Main definitions} Consider the private time series $X^T$, the useful process $Y^T$, and the observed signal $W^T$. We assume that $X_t$ takes values in a fixed discrete alphabet $\mathcal{X}$, for each $t \in \{ 1, \ldots, T \}$. A releaser $\mathcal{R}_{\theta}$ (this notation is used to denote that the releaser is controlled by a vector of parameters $\theta$) produces the release process $Z_t$ based on the observations $W^t$, for each time $t \in \{ 1, \ldots, T \}$, while an attacker $\mathcal{A}_{\phi}$ attempts to infer $X_t$ based on $Z^t$ by finding an approximation of $p_{X^T|Z^T}$, which we shall denote by $p_{\hat{X}^T|Z^T}$. Thus, the Markov chain $(X^t,Y^t) -\!\!\!\!\minuso\!\!\!\!- W^t -\!\!\!\!\minuso\!\!\!\!- Z^t -\!\!\!\!\minuso\!\!\!\!- \hat{X}^t$ holds for all $t \in \{ 1, \ldots, T \}$.
In addition, due to causality, the distribution $p_{Z^T\hat{X}^T|W^T}$ can be decomposed as follows: \begin{equation} p_{Z^T\hat{X}^T|W^T}(z^T,\hat{x}^T|w^T) = \prod_{t=1}^{T} p_{Z_t|W^t}(z_t|w^t) p_{\hat{X}_t|Z^t}(\hat{x}_t|z^t). \end{equation} The goal of the releaser $\mathcal{R}_\theta$ is to minimize the flow of information from the sensitive process $X^T$ to $\hat{X}^T$ while simultaneously keeping the distortion between the release time series $Z^T$ and the useful signal $Y^T$ small. On the other hand, the goal of the attacker $\mathcal{A}_{\phi}$ (again, this notation is used to denote that the attacker is controlled by a vector of parameters $\phi$) is to learn the optimal decision rule based on the distribution $p_{X_t|Z^t}$, for each $t$, as accurately as possible. Note that after the approximation $p_{\hat{X}_t|Z^t}$ is obtained, the attacker can estimate the realization $x^T$ corresponding to $z^T$ in an online fashion, by solving the following $T$ problems: \begin{equation} \underset{\hat{x}_t \in \mathcal{X}}{\text{argmax}} \; p_{\hat{X}_t|Z^t}(\hat{x}_t|z^t), \qquad t \in \{1,\ldots,T\}. \end{equation} Thus, the attacker can be interpreted as a hypothesis test, as stated in \cite{li2019}. However, in the present case, we consider the more realistic scenario in which the statistical test is suboptimal due to the fact that the attacker has no access to the actual conditional distributions $p_{X_t|Z^t}$ but only to $p_{\hat{X}_t|Z^t}$, i.e., an inference of them. In order to take into account the causal relation between $X^T$ and $\hat{X}^T$, the flow of information is quantified by DI~\cite{massey1990causality}: \begin{equation} \label{eqtr1} I\big(X^T\rightarrow \hat{X}^T\big)\triangleq \sum_{t=1}^{T} I(X^t;\hat{X}_t|\hat{X}^{t-1}), \end{equation} where $I(X^t;\hat{X}_t|\hat{X}^{t-1})$ is the conditional mutual information between $X^t$ and $\hat{X}_t$ conditioned on $\hat{X}^{t-1}$~\cite{cover2006elements}. The normalized expected distortion between $Z^T$ and $Y^T$ is defined as: \begin{equation} \label{Distortion} \mathcal{D}(Z^T,Y^T) \triangleq \frac{\E[d(Z^T,Y^T)]}{T}, \end{equation} where $d : \R^T \times \R^T \to \R$ is any distortion function (i.e., a metric on $\R^T$). To ensure the quality of the release, it is natural to impose the following constraint: $\mathcal{D}(Z^T,Y^T) \le \varepsilon$ for some given $\varepsilon \ge 0$. In previous works, the normalized squared error was considered as a distortion function (e.g., \cite{sankar2013smart}). Besides this, other distortion measures can be relevant within the framework of SMs. For instance, demand response programs usually require an accurate knowledge of peak power consumption, so a distortion function closer to the infinity norm would be more meaningful for those particular applications. Thus, for the sake of generality and to keep the distortion function simple, we propose to use an $\ell_p$ distance: \begin{equation} \label{eq:p_distortion} d(z^T,y^T) \triangleq \| z^T - y^T \|_p = \left( \sum_{t=1}^T |z_t - y_t|^p \right)^{1/p}, \end{equation} where $p \ge 2$ is a fixed parameter. Note that this distortion function reduces to the usual squared-error-based case for $p = 2$, while it converges to the maximum error between the components of $z^T$ and $y^T$ as $p \to \infty$.
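As an illustration, the normalized distortion of Eqs.~(\ref{Distortion}) and (\ref{eq:p_distortion}) can be estimated from a minibatch of sequences with a few lines of Python. The minibatch dimension and the helper name are implementation details rather than part of the formulation above.
\begin{verbatim}
import numpy as np

def lp_distortion(z, y, p=2):
    """Empirical normalized l_p distortion between released and useful sequences.

    z, y : arrays of shape (batch, T) holding Z^T and Y^T realizations.
    p    : norm order (p = 2 recovers the Euclidean case; large p
           approaches the maximum per-sample error).
    """
    T = z.shape[1]
    per_sequence = np.linalg.norm(z - y, ord=p, axis=1)  # ||z^T - y^T||_p
    return per_sequence.mean() / T                       # estimate of E[d]/T

# Example with random data (illustrative only)
rng = np.random.default_rng(0)
z = rng.normal(size=(32, 48))
y = rng.normal(size=(32, 48))
print(lp_distortion(z, y, p=2), lp_distortion(z, y, p=10))
\end{verbatim}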
Therefore, the problem of finding an optimal releaser $\mathcal{R}_\theta$ subject to the aforementioned attacker $\mathcal{A}_{\phi}$ and distortion constraint can be formally written as follows: \begin{align} \label{eqtr2} & \underset{\theta}{\text{min}} \; \quad I\left(X^T\rightarrow \hat{X}^T\right), \nonumber \\ & \text{s.t. } \quad \mathcal{D}(Z^T,Y^T) \le \varepsilon. \end{align} Note that the solution to this optimization problem depends on $p_{\hat{X}^T|Z^T}$, i.e., the conditional distributions that represent the attacker $\mathcal{A}_{\phi}$. Thus, a joint optimization of the releaser $\mathcal{R}_{\theta}$ and the attacker $\mathcal{A}_{\phi}$ is required. \subsection{A novel training loss} The optimization problem in \eqref{eqtr2} can be exploited to motivate an objective function for $\mathcal{R}_\theta$. However, note that the cost of computing the DI term is $O(|\mathcal{X}|^T)$, where $|\mathcal{X}|$ is the size of $\mathcal{X}$. Thus, for the sake of tractability, DI will be replaced with the following surrogate upper bound: \begin{align} \label{eqtr5} I\left(X^T\rightarrow \hat{X}^T\right) & = \sum_{t=1}^{T}\left[H(\hat{X}_t|\hat{X}^{t-1})-H(\hat{X}_t|\hat{X}^{t-1},X^t)\right] \nonumber \\ &\overset{\text{(i)}}{\leq}\sum_{t=1}^{T}\left[H(\hat{X}_t|\hat{X}^{t-1})-H(\hat{X}_t|\hat{X}^{t-1},X^t,Z^t)\right] \nonumber \\ &\overset{\text{(ii)}}{=} \sum_{t=1}^{T} \left[ H(\hat{X}_t|\hat{X}^{t-1})-H(\hat{X}_t|Z^t)\right]\nonumber \\ & \overset{\text{(iii)}}{\leq} T \log|\mathcal{X}| - \sum_{t=1}^{T}H(\hat{X}_t|Z^t)\nonumber \\ & \overset{\text{(iv)}}{=} \textrm{constant} - H\big(\hat{X}^T \| Z^T \big), \end{align} where (i) is due to the fact that conditioning reduces entropy; equality (ii) is due to the Markov chains $X^t -\!\!\!\!\minuso\!\!\!\!- Z^t -\!\!\!\!\minuso\!\!\!\!- \hat{X}^t$ and $\hat{X}^{t-1} -\!\!\!\!\minuso\!\!\!\!- Z^t -\!\!\!\!\minuso\!\!\!\!- \hat{X}^t$; (iii) is due to the trivial bound $H(\hat{X}_t|\hat{X}^{t-1}) \le H(\hat{X}_t) \le \log |\mathcal{X}|$; and (iv) follows from the definition of the \emph{causally} conditional entropy~\cite{kramer2003} and the Markov chain $\hat{X}^{t-1} -\!\!\!\!\minuso\!\!\!\!- Z^t -\!\!\!\!\minuso\!\!\!\!- \hat{X}^t$. Therefore, the loss function for $\mathcal{R}_\theta$ can be written as: \begin{equation} \label{eq:releaser_loss} \mathcal{L}_{\mathcal{R}}(\theta, \phi,\lambda) = \mathcal{D}(Z^T,Y^T) - \frac{\lambda}{T} H\big(\hat{X}^T \| Z^T \big), \end{equation} where $\lambda \ge 0$ controls the privacy-utility trade-off and the factor $1/T$ has been introduced for normalization purposes. It should be noted that, for $\lambda = 0$, the loss function $\mathcal{L}_{\mathcal{R}}(\theta,\phi,\lambda)$ reduces to the expected distortion and is thus independent of the attacker $\mathcal{A}_{\phi}$. In such a scenario, $\mathcal{R}_\theta$ offers no privacy guarantees. Conversely, for very large values of $\lambda$, the loss function $\mathcal{L}_{\mathcal{R}}(\theta,\phi,\lambda)$ is dominated by the upper bound on DI, so that privacy is the only goal of $\mathcal{R}_\theta$. In this regime, we expect the attacker $\mathcal{A}_{\phi}$ to fail completely to infer $X^T$, i.e., to approach random-guessing performance.
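As an illustration of how \eqref{eq:releaser_loss} can be evaluated in practice, the sketch below (Python/NumPy; a plain numerical illustration assuming the attacker's per-step output distributions are available as an array, whereas in the actual framework these are tensors in the computation graph so that gradients can propagate to $\theta$) estimates the causally conditional entropy term as the summed per-step entropy of the attacker's predictions and combines it with the distortion:
\begin{verbatim}
import numpy as np

def releaser_loss(z, y, attacker_probs, lam, p=2):
    """Sketch of L_R = D(Z,Y) - (lambda/T) H(X_hat^T || Z^T).

    z, y           : (B, T) released and useful sequences.
    attacker_probs : (B, T, A) attacker distributions p(x_hat | z^t), A = |X|.
    lam            : trade-off parameter lambda >= 0.
    """
    B, T = z.shape
    distortion = (np.sum(np.abs(z - y) ** p, axis=1) ** (1.0 / p)).mean() / T
    # Per-step entropy of the attacker's prediction, averaged over the batch;
    # summing over t estimates the causally conditional entropy H(X_hat^T||Z^T).
    eps = 1e-12
    step_entropy = -np.sum(attacker_probs * np.log(attacker_probs + eps), axis=2)
    causal_entropy = step_entropy.sum(axis=1).mean()
    return distortion - (lam / T) * causal_entropy
\end{verbatim}
For $\lambda=0$ only the distortion term survives, while for large $\lambda$ the loss pushes the attacker's predictions towards maximum entropy, i.e., random guessing.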
On the other hand, the attacker $\mathcal{A}_{\phi}$ is a classifier which optimizes the following cross-entropy loss: \begin{equation} \label{eq:attacker_loss} \mathcal{L}_{\mathcal{A}}(\phi) \triangleq \frac{1}{T} \sum_{t=1}^T \E\left[- \log p_{\hat{X}_t|Z^t}(X_t|Z^t) \right], \end{equation} where the expectation should be understood w.r.t. $p_{X_tZ^t}$. It is important to note that \begin{align} \label{eq:loss_a_bound} \mathcal{L}_{\mathcal{A}}(\phi) & = \frac{1}{T} \sum_{t=1}^T \E\left[- \log p_{\hat{X}_t|Z^t}(\hat{X}_t|Z^t) \right] + \E\left[ \log \frac{p_{\hat{X}_t|Z^t}(\hat{X}_t|Z^t)}{p_{\hat{X}_t|Z^t}(X_t|Z^t)} \right] \nonumber \\ & \ge \frac{1}{T} H\big(\hat{X}^T \| Z^T \big), \end{align} since the second term in \eqref{eq:loss_a_bound} is a Kullback-Leibler divergence, which is non-negative. Thus, by minimizing $\mathcal{L}_{\mathcal{R}}$, the releaser is preventing the attacker from inferring the sensitive process while also minimizing the distortion between the useful and released processes. This shows that $\mathcal{R}_{\theta}$ and $\mathcal{A}_{\phi}$ are indeed trained in an adversarial fashion. It should be noted that $\mathcal{A}_{\phi}$ here is an artificial attacker used for training $\mathcal{R}_{\theta}$. Once the training is complete, and $\mathcal{R}_{\theta}$ is fixed, a new attacker should be trained from scratch, using the loss \eqref{eq:attacker_loss}, in order to assess the privacy-utility trade-off in an unbiased way. \section{Privacy-Preserving Adversarial Learning} \label{sec:model} Based on the previous theoretical formulation, an adversarial modeling framework consisting of two RNNs, a releaser $\mathcal{R}_{\theta}$ and an attacker $\mathcal{A}_{\phi}$, is considered (see Fig. \ref{had1}). Note that independent noise $U^T$ is appended to $W^T$ in order to randomize the released variables $Z^T$, which is a popular approach in privacy-preserving methods. In addition, the available theoretical results show that, for Gaussian distributions, the optimal release contains such a noise component \cite{sankar2013smart,tripathy2017privacy}. For both networks, an LSTM architecture is selected (see Fig. \ref{had2}), which was shown to be successful in several problems dealing with sequences of data (e.g., see \cite{goodfellow2016} and references therein for more details). Training in the suggested framework is performed using Algorithm \ref{Al1}, which requires $k$ gradient steps to train $\mathcal{A}_\phi$ followed by one gradient step to train $\mathcal{R}_\theta$. It is worth emphasizing that $k$ should be larger than $1$ in order to ensure that $\mathcal{A}_\phi$ represents a strong attacker. However, if $k$ is too large, this could lead to overfitting and thus a poor attacker. \begin{figure}[t!] \centering \includegraphics[width=0.7\linewidth]{Model.png} \caption{Privacy-preserving framework. The seed noise $U^T$ is generated from i.i.d. samples according to a uniform distribution: $U_t \sim U[0,1]$.} \label{had1} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.7\linewidth]{LSTM.png} \caption{LSTM recurrent network cell diagram. The cell includes four gating units to control the flow of information. All the gating units have a sigmoid activation function ($\sigma$) except for the input unit, which uses a hyperbolic tangent activation function ($\tanh$) by default. The parameters $b$, $V$, and $K$ are, respectively, the biases, input weights, and recurrent weights.
In the LSTM architecture, the forget gate $f_t = \sigma(b^f + K^fh_{t-1} + V^fw_t)$ uses the output of the previous cell (which is called the hidden state $h_{t-1}$) to control the cell state $C_t$ and remove irrelevant information. On the other hand, the input gate $g_t = \sigma(b^g + K^g h_{t-1} + V^g w_t)$ and the input unit add new information to $C_t$ from the current input. Finally, the output gate $o_t = \sigma(b^o + K^oh_{t-1} + V^ow_t)$ generates the output of the cell from the current input and cell state.} \label{had2} \end{figure} \begin{algorithm*} \caption{Algorithm for training the privacy-preserving data releaser neural network.} \label{Al1} \textbf{Input:} Data set (which includes sample sequences of useful data $y^T$ and sensitive data $x^T$); seed noise samples $u^T$; seed noise dimension $m$; batch size $B$; number of steps to apply to the attacker $k$; gradient clipping value $C$; $\ell_2$ recurrent regularization parameter $\beta$. \\ \textbf{Output:} Releaser network $\mathcal{R}_\theta$. \begin{algorithmic}[1] \FOR {number of training iterations} \FOR {$k$ steps} \STATE Sample minibatch of $B$ examples: $\mathcal{B} = \Big\{w^{(b)T}=\Big(x^{(b)T},y^{(b)T},u^{(b)T}\Big); \; b=1,2,\ldots,B\Big\}$. \STATE Compute the gradient of $\mathcal{L}_{\mathcal{A}}(\phi)$, approximated with the minibatch $\mathcal{B}$, w.r.t. $\phi$. \STATE Update the attacker by applying the RMSprop optimizer with clipping value $C$. \ENDFOR \STATE Sample minibatch of $B$ examples: $\mathcal{B} = \Big\{w^{(b)T}=\Big(x^{(b)T},y^{(b)T},u^{(b)T}\Big); \; b=1,2,\ldots,B\Big\}$. \STATE Compute the gradient of $\mathcal{L}_{\mathcal{R}}(\theta,\phi,\lambda)$, approximated with the minibatch $\mathcal{B}$, w.r.t. $\theta$. \STATE Apply ridge ($\ell_2$) recurrent regularization with parameter $\beta$ and update the releaser by applying the RMSprop optimizer with clipping value $C$. \ENDFOR \end{algorithmic} \end{algorithm*} \section{Results and Discussion} \label{sec:results} \subsection{Description of data sets} \label{sec:dataset} Three different data sets are considered: \begin{itemize} \item The Electricity Consumption \& Occupancy (ECO) data set, collected and published by \cite{beckel2014eco}, which includes 1 Hz power consumption measurements and occupancy information of five houses in Switzerland over a period of $8$ months. In this study, we re-sampled the data to obtain hourly samples. \item The Pecan Street data set, which contains hourly SMs data of houses in Austin, Texas, and was collected by Pecan Street Inc. \cite{street2019dataport}. The Pecan Street project is a smart grid demonstration research program which provides electricity, water, natural gas, and solar energy generation measurements for over $1000$ houses in Austin, Texas. \item The Low Carbon London (LCL) data set, which includes half-hourly energy consumption for more than $5000$ households over the period $2011-2014$ in London \cite{LCLdatastore}. Each household is allocated to a CACI Acorn group \cite{caci1989acorn}, which includes three categories: affluent, comfortable and adversity. \end{itemize} We model the time dependency over each day, so the data sets were reshaped into sample sequences of length $24$ for ECO and Pecan Street (data rate of 1 sample per hour), while sample sequences of length $48$ were used for the LCL data set (data rate of 1 sample per half-hour). For the ECO, Pecan Street, and LCL data sets, a total number of $11225$, $9120$, and $19237$ sample sequences were collected, respectively.
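For reproducibility, the following sketch illustrates the kind of preprocessing described above (hourly resampling and reshaping into daily sequences of length 24, followed by the split used below); the column names, the resampling rules and the file layout are placeholders and not the actual format of the data sets:
\begin{verbatim}
import numpy as np
import pandas as pd

def make_daily_sequences(df, value_col="power", label_col="occupancy",
                         samples_per_day=24):
    """Resample to hourly values and reshape into (num_days, 24) sequences.

    df is assumed to have a DatetimeIndex; value_col holds the useful signal
    and label_col the sensitive label (both names are placeholders).
    """
    hourly = df.resample("1H").agg({value_col: "mean", label_col: "max"})
    n_days = len(hourly) // samples_per_day
    y = hourly[value_col].to_numpy()[:n_days * samples_per_day]
    x = hourly[label_col].to_numpy()[:n_days * samples_per_day]
    return (x.reshape(n_days, samples_per_day),   # sensitive sequences x^T
            y.reshape(n_days, samples_per_day))   # useful sequences y^T

def split(x, y, test_frac=0.15, val_frac=0.10, seed=0):
    """Roughly 85:15 train/test split with 10% of the training part
    held out for validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_test = int(test_frac * len(x))
    test, rest = idx[:n_test], idx[n_test:]
    n_val = int(val_frac * len(rest))
    val, train = rest[:n_val], rest[n_val:]
    return (x[train], y[train]), (x[val], y[val]), (x[test], y[test])
\end{verbatim}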
The data sets were split into train and test sets with a ratio of roughly \mbox{85:15}, while $10 \%$ of the training data was used as the validation set. The network architectures and hyperparameters used for training and testing in the different applications are summarized in Table \ref{tab:hyperparameters}. \begin{table*}[htbp] \centering \caption{Model architectures and hyperparameter values used for each application.} \begin{adjustbox}{width=0.95\textwidth} \begin{tabular}{c c c c c c c} \toprule \midrule \textbf{SMs Application}& \textbf{\makecell{Releaser}}& \textbf{\makecell{Training Attacker}}& \textbf{\makecell{Test Attacker}}& \textbf{\makecell{$B$}}& \textbf{\makecell{$k$}}& \textbf{\makecell{$m$}}\\ \midrule[0.5pt] \multicolumn{1}{l}{\textbf{Inference of households occupancy}} &\makecell{$4$ LSTM layers each\\with 64 cells and $\beta=1.5$} &\makecell{$2$ LSTM layers each\\ with 32 cells}& \makecell{$3$ LSTM layers each\\ with 32 cells}&128 &4&8 \\ \midrule[0.1pt] \multicolumn{1}{l}{\textbf{Inference of households identity}}&\makecell{$6$ LSTM layers each\\ with 128 cells and $\beta=2$} &\makecell{$4$ LSTM layers each\\ with 32 cells}& \makecell{$4$ LSTM layers each\\ with 32 cells}&128 &5&3\\ \midrule[0.1pt] \multicolumn{1}{l}{\textbf{Inference of households acorn type}} &\makecell{$5$ LSTM layers each\\ with 100 cells and $\beta=0.1$} &\makecell{$3$ LSTM layers each\\ with 32 cells}& \makecell{$4$ LSTM layers each\\ with 32 cells}&128 &7&3 \\ \midrule \bottomrule \end{tabular} \end{adjustbox} \label{tab:hyperparameters} \end{table*} To assess the distortion with respect to the actual power consumption measurements, we define the Normalized Error (NE) for the different $\ell_p$ distortion functions as follows: \begin{equation} \text{NE}_p \triangleq \frac{\E\left[ \| Y^T - Z^T \|_p \right]}{\E\left[ \| Y^T \|_p \right]}. \end{equation} \subsection{$\ell_2$ Distortion} \label{subsec:L2norm} First, the $\ell_2$ distortion function is considered (i.e., $p = 2$ in \eqref{eq:p_distortion}). In the following subsections, three different privacy applications are studied (one for each of the data sets presented in Section \ref{sec:dataset}). \subsubsection{Inference of households occupancy} \label{sec:household_occupancy} The first practical case study regarding privacy preservation in time series data is the concern of inferring the presence/absence of residents at home from the total power consumption collected by SMs \cite{kleiminger2015household,jia2014human}. For this application, the electricity consumption measurements from the ECO data set are considered as the useful data, while occupancy labels are defined as the private data. Therefore, in this case, the releaser attempts to minimize a trade-off between the distortion incurred on the total electricity consumption and the probability of inferring the presence of an individual at home from the released signal. Note from Table \ref{tab:hyperparameters} that a stronger attacker composed of 3 LSTM layers is used for the test. In Fig. \ref{ECOTradeoff} we show the empirically found privacy-utility trade-off for this application. Note that, by adding distortion, the accuracy of the attacker changes from more than $80 \%$ (no privacy) to $50 \%$ (full privacy), which corresponds to the performance of a random-guessing classifier. \begin{figure}[htbp] \centering \includegraphics[width=.45\linewidth]{ECO_Tradeoff.png} \caption{Privacy-utility trade-off for the house occupancy inference application.
Since in this application the attacker is a binary classifier, the random guessing (balanced) accuracy is 50$\%$. The fitted curve is based on an exponential function and is included only for illustration purposes.} \label{ECOTradeoff} \end{figure} In order to provide more insights about the release mechanism, the Power Spectral Density (PSD) of the input signal and the PSD of the error signal for three different cases along the privacy-utility trade-off curve of Fig.~\ref{ECOTradeoff} are estimated using Welch's method \cite{stoica2005spectral}. For each case, we use 10 released signals and average the PSD estimates to reduce the variance of the estimator. Results are shown in Fig. \ref{ECONoise}. Looking at the PSD of the input signal (useful data), some harmonics are clearly visible. The PSDs of the error signals show that the model controls the privacy-utility trade-off by modifying the noise floor and the distortion on these harmonics. \begin{figure}[htbp] \centering \includegraphics[width=0.65\linewidth]{ECO_Noise.png} \caption{PSD of the actual electricity consumption and error signals for the house occupancy inference application.} \label{ECONoise} \end{figure} It should be mentioned that two stationarity tests, the Augmented Dickey-Fuller test \cite{dickey1979distribution} and the Kwiatkowski, Phillips, Schmidt, and Shin (KPSS) test \cite{kwiatkowski1992testing}, were applied to the data. This confirmed that there is enough evidence to consider the data stationary, thus supporting our PSD analysis. \subsubsection{Inference of household identity} \label{sec:household_identity} The second practical case study regarding privacy preservation in SMs measurements is identity recognition from the total power consumption of households \cite{efthymiou2010smart}. It is assumed that both the releaser and the attacker have access to the total power consumption of different households in a region (training data), and then the attacker attempts to determine the identity of a house using the released data obtained from the test data. Thus, our model aims at generating released data of the total power consumption of households in a way that prevents the attacker from performing identity recognition while keeping the distortion on the total power minimized. For this task, the total power consumption of five houses is used. The empirical privacy-utility trade-off curve obtained for this application is presented in Fig. \ref{PEECANTradeoff}. Comparing Fig. \ref{PEECANTradeoff} with Fig. \ref{ECOTradeoff}, we see that a high level of privacy requires a high level of distortion. For instance, in order to obtain an attacker accuracy of 30 $\%$, NE$_2$ should be approximately equal to 0.30. This is attributed to the fact that this task is harder from the learning viewpoint than the one considered in Section \ref{sec:household_occupancy}. \begin{figure}[htbp] \centering \includegraphics[width=0.45\linewidth]{PEECAN_Five_Houses.png} \caption{Privacy-utility trade-off for the house identity inference application. Since in this application the attacker is a five-class classifier, the random guessing (balanced) accuracy is 20$\%$.
The fitted curve is based on an exponential function and is included only for illustration purposes.} \label{PEECANTradeoff} \end{figure} \subsubsection{Inference of households acorn type} \label{sec:household_acorntype} As the third practical case study, we consider household acorn type identification, which can reveal the household's economic status to any third party having access to the SMs data \cite{thorve2018}. Thus, for this application, the SMs power consumption is used as the useful data while the acorn type is considered as the private data. The empirical privacy-utility trade-off curve obtained for this application is presented in Fig. \ref{LondonTradeoff}. Once again, we see a large variation in the accuracy of the attacker as the distortion is modified. \begin{figure}[htbp] \centering \includegraphics[width=0.45\linewidth]{London_Tradeoff.png} \caption{Privacy-utility trade-off for the acorn type inference application. Since in this application the attacker is a three-class classifier, the random guessing (balanced) accuracy is 33$\%$. The fitted curve is based on an exponential function and is included only for illustration purposes.} \label{LondonTradeoff} \end{figure} It should be noted that the PSD analysis for this application and the previous one leads to results similar to those of the first application and is therefore not reported. To assess the quality of the released signal, utility providers may be interested in several different indicators. These include, for instance, the mean, skewness, kurtosis, standard deviation to mean ratio, and maximum to mean ratio \cite{shen2019}. Thus, for completeness, we present these indicators in Table \ref{tab:tab1} for three different cases along the privacy-utility trade-off curve. These results show that, in general, the error in these indicators is small when the privacy constraints are lax and increases as they become stricter. However, no simple relation should be expected between NE$_2$ and the values of the corresponding indicators. \begin{table*} \centering \caption{Errors in power quality indicators for three applications along the privacy-utility trade-off.} \begin{adjustbox}{width=0.9\textwidth} \renewcommand{\arraystretch}{0.55} \begin{tabular}{c c c c c c c c} \toprule \midrule \multirow{2}[4]{*}{SMs Application}&\multirow{2}[4]{*}{\textbf{NE$_2$}}&\multirow{2}[4]{*}{\textbf{Accuracy (\%)}} & \multicolumn{5}{c}{\textbf{Absolute relative error of quality indicators (\%)}}\\ \cmidrule(rl){4-8} & & & \textit{Mean} & \textit{Skewness}& \textit{Kurtosis}& \textit{Std. Dev./Mean}& \textit{Max./Mean} \\ \midrule \multirow{3}[4]{*}{\textbf{Inference of households occupancy}}&0.04& 78 &1.42&1.06&0.70&0.67&0.46\\ &0.12&65&9.69&4.32&5.81&4.58&4.92\\ &0.18&57& 13.26&12.83&2.57&16.44&13.89\\ \midrule \multirow{3}[4]{*}{\textbf{Inference of households identity}}&0.05&54 &3.42&2.22&2.01&3.50&2.51\\ &0.17&39&4.63&3.18&1.79&15.74&9.32\\ &0.36&29 &12.49&6.71&1.44&19.12&9.98\\ \midrule \multirow{3}[4]{*}{\textbf{Inference of households acorn type}}&0.03&85&1.86&0.66&0.44&0.02&0.02\\ &0.29&47 &2.49&9.46&14.54&24.97&13.24\\ &0.60&35& 13.21&45.92&24.03&55.38&41.68\\ \midrule \bottomrule \end{tabular} \end{adjustbox} \label{tab:tab1} \end{table*} \subsection{$\ell_p$ Distortion} As already discussed in Section \ref{sec:formulation}, the distortion function should be properly matched to the intended application of the released variables $Z^T$ in order to preserve the characteristics of the target variables $Y^T$ that are considered essential.
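Before turning to higher-order norms, we note that the indicators reported in Table \ref{tab:tab1} above are straightforward to estimate; a small sketch is given below (Python, using scipy.stats for the skewness and kurtosis; the function and its interface are illustrative only):
\begin{verbatim}
import numpy as np
from scipy.stats import kurtosis, skew

def quality_indicator_errors(y, z):
    """Absolute relative errors (%) of the power quality indicators of
    Table tab:tab1, computed from the actual (y) and released (z) signals."""
    def indicators(s):
        s = np.ravel(s)
        return np.array([
            s.mean(),               # mean
            skew(s),                # skewness
            kurtosis(s),            # kurtosis
            s.std() / s.mean(),     # standard deviation to mean ratio
            s.max() / s.mean(),     # maximum to mean ratio
        ])
    iy, iz = indicators(y), indicators(z)
    return 100.0 * np.abs(iz - iy) / np.abs(iy)
\end{verbatim}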
In this section, we consider the $\ell_p$ distortion \eqref{eq:p_distortion} with $p=4,5$ as an alternative to the $\ell_2$ distortion function of the previous subsection and study their potential benefits. The privacy-utility trade-off curve for the inference of households occupancy application is shown in Fig. \ref{LossesEco}. As a first observation, it appears clear that the choice of the distortion measure has a non-negligible impact on the privacy-utility trade-off curve. In fact, it can be seen that, for a given amount of distortion, the releasers trained with the $\ell_4$ and $\ell_5$ distortion measures achieve a higher level of privacy than the one trained with the $\ell_2$ distortion function. It should be mentioned that we also considered other norms, such as the $\ell_{10}$, and the privacy-utility trade-off was observed to be similar to, but slightly better than, the one corresponding to the $\ell_4$ norm. \begin{figure}[htbp] \centering \includegraphics[width=0.8\linewidth]{L4L5_tradeoff.png} \caption{Privacy-utility trade-off for the house occupancy inference application based on different $\ell_p$ distortion functions. For each figure, the dashed line, shown for comparison purposes, is the fitted curve found in Fig. \ref{ECOTradeoff} for the $\ell_2$ distortion function.} \label{LossesEco} \end{figure} As discussed in Section \ref{sec:formulation}, in demand response programs, the utilities are mostly interested in the peak power consumption of the customers. It is also expected that higher-order $\ell_p$ norms are better at preserving these signal characteristics than the $\ell_2$ norm. To verify this notion, we considered 60 random days of the ECO data set in a full privacy scenario (i.e., with an attacker accuracy very close to $50 \%$) and plotted the actual power consumption along with the corresponding released signals for both the $\ell_4$ and $\ell_2$ distortion functions. Results are shown in Fig. \ref{TimeL4L2ECO}, which clearly indicates that the number of peaks preserved by the releaser trained with the $\ell_4$ distortion function is much higher than that preserved by the releaser trained with the $\ell_2$ distortion function. This suggests that, for the demand response application, higher-order $\ell_p$ distortion functions should be considered. \begin{figure}[htbp] \centering \includegraphics[width=0.65\linewidth]{Time_L24.png} \caption{Example of the released power consumption in the time domain compared with the actual power consumption over 60 random days with almost full privacy for the $\ell_4$ and $\ell_2$ distortion functions.} \label{TimeL4L2ECO} \end{figure} \subsection{Attacker with Data Mismatch Problem} All the previous results are based on the assumption that the attacker has access to exactly the same training data set used by the releaser. This case should be considered as a worst-case analysis of the performance of the privacy-preserving networks. However, this assumption may not hold in practice. To examine this scenario, we revisit the application of Section \ref{sec:household_occupancy} in two different cases. In the first case, we assume that, out of the data set of five houses (ECO data set), the releaser uses the data of houses $\{1,2,4,5\}$ for training while the attacker has access only to the data of house $3$. In the second case, we assume that the releaser is trained on the data of all houses but only the data of houses $1$ and $3$ are available to the attacker.
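A minimal sketch of how the two mismatch scenarios can be assembled from per-house data is given below (the house numbering follows the text; the dictionary layout and function name are illustrative assumptions):
\begin{verbatim}
import numpy as np

def mismatch_splits(house_data, case=1):
    """house_data: dict mapping house id (1..5) to a tuple (x, y) of
    sensitive and useful sequence arrays for that house.

    Case 1: releaser trains on houses {1, 2, 4, 5}, attacker only on house 3.
    Case 2: releaser trains on all houses, attacker only on houses {1, 3}.
    """
    if case == 1:
        releaser_houses, attacker_houses = [1, 2, 4, 5], [3]
    else:
        releaser_houses, attacker_houses = [1, 2, 3, 4, 5], [1, 3]

    def stack(houses, idx):
        return np.concatenate([house_data[h][idx] for h in houses])

    releaser_set = (stack(releaser_houses, 0), stack(releaser_houses, 1))
    attacker_set = (stack(attacker_houses, 0), stack(attacker_houses, 1))
    return releaser_set, attacker_set
\end{verbatim}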
These scenarios try to capture different degrees of the data mismatch problem, which could have an impact on the privacy-utility trade-off due to the different generalization errors. The results are presented in Fig. \ref{DiffDataset} along with the worst-case scenario. This clearly shows how the overlap between the training data sets of the releaser and the attacker affects the performance of the model. In fact, in the case where the attacker does not have access to the full data set of the releaser but only to a portion of it, the performance of the attacker degrades considerably, which means that a target level of privacy requires much less distortion. In the extreme case where the attacker has no access to the releaser training data set, a very high level of privacy can be achieved with negligible distortion. This should be considered as a best-case scenario. It should be mentioned that we repeated this experiment with different shufflings of the houses and similar results were obtained. \begin{figure}[htbp] \centering \includegraphics[width=.45\linewidth]{drawing13.png} \caption{Privacy-utility trade-off for the house occupancy inference application when an attacker (trained separately to infer the private data from the released data) does not have full access to the releaser training data set.} \label{DiffDataset} \end{figure} \section{Summary and Discussion} \label{sec:conclusion} Privacy problems associated with smart meter measurements are an important concern in society, which can have an impact on their deployment pace and the advancement of smart grid technologies. Thus, it is essential to understand the real privacy risks associated with them in order to provide an adequate solution to this problem. In this paper, we proposed to measure privacy based on the Directed Information (DI) between the sensitive time series and its inference by a potential attacker optimized for that task. DI captures the causal time dependencies present in the time series data and its processing. Unlike previous approaches, we impose no explicit assumption on the statistics or distributions of the involved random variables. We believe that this data-driven approach can provide a more accurate assessment of the information leakage in practice than purely theoretical studies based on worst-case assumptions. We considered a privacy-preserving adversarial learning framework that balances the trade-off between privacy and distortion on the released data. More precisely, we defined a tractable training objective (or loss) based on an upper bound on DI and a general distortion measure. The desired releaser is then trained in an adversarial framework using RNNs to optimize this objective, while an artificial attacker is trained with the opposite goal. After convergence, a new attacker is trained to test the level of privacy achieved by the releaser. A detailed study of different applications, including inference of households occupancy (ECO data set), inference of household identity (Pecan Street data set), and inference of household acorn type (LCL data set), shows that the privacy-utility trade-off is strongly dependent upon the considered application and distortion measure. We showed that the usual $\ell_p$-norm based distortion measure for $p=2$ can have a worse privacy-utility trade-off than for $p>2$. In addition, we showed that the $\ell_4$ distortion measure generates a release that preserves most of the power consumption peaks even under a full privacy regime, which is not the case for the $\ell_2$ distortion function.
This result is of considerable importance for demand response applications. Finally, we studied the impact of the data mismatch problem in this application, which occurs when the training data set of the releaser is not the same as the one used by the attacker. Results show that this effect can greatly affect the privacy-utility trade-off. Since this phenomenon is expected in practice, at least to some degree, these findings suggest that the level of required distortion to achieve desired privacy targets may not be too significant in several cases of interest. In such scenarios, our approach may offer a simpler and more general solution than the ones offered by methods based on rechargeable batteries and renewable energy sources. \bibliographystyle{ieeetr}
\def\subsection{\@startsection{subsection}{3}% \z@{.4\linespacing\@plus.0\linespacing}{.1\linespacing}% {\normalfont\bfseries}} \def\subsubsection{\@startsection{subsubsection}{3}% \z@{.4\linespacing\@plus.0\linespacing}{.1\linespacing}% {\normalfont\bfseries}} \makeatother \makeatletter \renewcommand{\paragraph}{\@startsection{paragraph}{4}{0ex}% {-3.25ex plus -1ex minus -0.2ex}% {1.5ex plus 0.2ex}% {\normalfont\normalsize\bfseries}} \makeatother \stepcounter{secnumdepth} \begin{document} \bibliographystyle{plainnat} \linespread{1.6} \abovedisplayskip=0pt \abovedisplayshortskip=0pt \belowdisplayskip=0pt \belowdisplayshortskip=0pt \abovecaptionskip=0pt \belowcaptionskip=0pt \newcounter{Lcount} \newcommand{\squishlisttwo}{ \begin{list}{\alph{Lcount}. } { \usecounter{Lcount} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \setlength{\topsep}{0pt} \setlength{\partopsep}{0pt} \setlength{\leftmargin}{2em} \setlength{\labelwidth}{1.5em} \setlength{\labelsep}{0.5em} } } \newcommand{\squishend}{ \end{list} } \newtheorem{lemma}{Lemma}[subsection] \newtheorem{definition}{Definition}[section] \newtheorem{remarks}{Remark}[section] \newtheorem{remark}{Remark}[subsection] \newtheorem{theorem}{Theorem}[subsection] \newtheorem{corollary}[lemma]{Corollary} \newtheorem{proposition}[lemma]{Proposition} \numberwithin{equation}{subsection} \newtheorem{axiom}[lemma]{Axiom} \newtheorem*{acknowledgements}{Acknowledgments} \newcommand{\R}{\mbox{$\Bbb R$}} \newcommand{\C}{\mbox{$\Bbb C$}} \newcommand{\F}{\mbox{$\Bbb F$}} \newcommand{\N}{\mbox{$\Bbb N$}} \newcommand{\Z}{\mbox{$\Bbb Z$}} \def\g{\mathfrak{g}} \def\q{\mathfrak{q}} \def\h{\mathfrak{h}} \def\c{\mathfrak{c}} \def\d{\mathfrak{d}} \def\m{\mathfrak{m}} \def\n{\mathfrak{n}} \def\i{\mathfrak{i}} \def\l{\mathfrak{l}} \def\s{\mathfrak{s}} \mbox{} \vskip 1cm \title[Solvable extensions of quasi-filiform algebras]{Solvable extensions of the naturally graded quasi-filiform Leibniz algebra of second type $\mathcal{L}^4$}\maketitle \begin{center} {A. Shabanskaya} Department of Mathematics, Adrian College, 110 S Madison St., Adrian, MI 49221, USA [email protected] \end{center} \begin{abstract} For a sequence of the naturally graded quasi-filiform Leibniz algebra of second type $\mathcal{L}^4$ introduced by Camacho, G\'{o}mez, Gonz\'{a}lez and Omirov, all the possible right solvable indecomposable extensions over the field $\C$ are constructed. \end{abstract} AMS Subject Classification: 17A30, 17A32, 17A36, 17A60, 17B30\\ Keywords: Leibniz algebra, solvability, nilpotency, nilradical, nil-independence, derivation. \vskip 6cm \newpage \section{Introduction} Leibniz algebras were discovered by Bloch in 1965 \cite{B}, who called them $D$-algebras. Later on they were considered by Loday and Cuvier \cite{C, Lo, L, LP} as a non-antisymmetric analogue of Lie algebras. Hence every Lie algebra is a Leibniz algebra, but the converse is not true. It was Loday who named them Leibniz algebras after Gottfried Wilhelm Leibniz.
Since then many analogues of important theorems in Lie theory were found to be true for Leibniz algebras, such as the analogue of Levi's theorem, which was proved by Barnes \cite{Ba}. He showed that any finite-dimensional complex Leibniz algebra is decomposed into a semidirect sum of the solvable radical and a semisimple Lie algebra. Therefore the biggest challenge in the classification problem of finite-dimensional complex Leibniz algebras is to study the solvable part. To classify solvable Leibniz algebras, we need nilpotent Leibniz algebras as their nilradicals, just as in the case of Lie algebras \cite{Mub3}. Every Leibniz algebra satisfies a generalized version of the Jacobi identity called the Leibniz identity. There are two Leibniz identities: the left and the right. We call Leibniz algebras right Leibniz algebras if they satisfy the right Leibniz identity and left Leibniz algebras if they satisfy the left one. A left Leibniz algebra is not necessarily a right Leibniz algebra \cite{D}. Leibniz algebras inherit an important property of Lie algebras, which is that the right (left) multiplication operator of a right (left) Leibniz algebra is a derivation \cite{CL}. Besides, the algebra of right (left) multiplication operators is endowed with the structure of a Lie algebra by means of the commutator \cite{CL}. Also, the quotient of a Leibniz algebra by the two-sided ideal generated by its square elements is a Lie algebra \cite{O}; this ideal is minimal, abelian and, in the case of non-Lie Leibniz algebras, always nontrivial. It is possible to find solvable Leibniz algebras in any finite dimension by working with a sequence of nilpotent Leibniz algebras in any finite dimension and their ``nil-independent'' derivations. This method for Lie algebras is based on what was shown by Mubarakzyanov in \cite{Mub3}: the dimension of the complementary vector space to the nilradical does not exceed the number of nil-independent derivations of the nilradical. This result was extended to Leibniz algebras by Casas, Ladra, Omirov and Karimjanov \cite{CLOK} with the help of \cite{AO}. Besides, similarly to the case of Lie algebras, for a solvable Leibniz algebra $L$ we also have the inequality $\dim\mathfrak{nil}(L)\geq\frac{1}{2}\dim L$ \cite{Mub3}. The following work has been performed over a field of characteristic zero using this method: Casas, Ladra, Omirov and Karimjanov classified solvable Leibniz algebras with null-filiform nilradical \cite{CLOK}; Omirov and his colleagues Casas, Khudoyberdiyev, Ladra, Karimjanov, Camacho and Masutova classified solvable Leibniz algebras whose nilradicals are a direct sum of null-filiform algebras \cite{KL}, naturally graded filiform \cite{CL, LadraMO}, triangular \cite{KK} and finally filiform \cite{COM}. Bosko-Dunbar, Dunbar, Hird and Stagg attempted to classify left solvable Leibniz algebras with a Heisenberg nilradical \cite{BDHS}. Left and right solvable extensions of $\mathcal{R}_{18}$ \cite{AOR}, $\mathcal{L}^1,\mathcal{L}^2$ and $\mathcal{L}^3$ \cite{CGGO} over the field of real numbers were found by Shabanskaya in \cite{ShA, Sha, ShabA}. The starting point of the present article is a naturally graded quasi-filiform non-Lie Leibniz algebra of the second type $\mathcal{L}^4,(n\geq4)$ in the notation of \cite{CGGO}. This algebra is both a left and a right Leibniz algebra at the same time, and it is associative when $n=4$.
Naturally graded quasi-filiform Leibniz algebras in any finite dimension over $\C$ were studied by Camacho, G\'{o}mez, Gonz\'{a}lez and Omirov \cite{CGGO}. They found six such algebras of the first type, two of which depend on a parameter, and eight algebras of the second type, one of which depends on a parameter. This paper continues the work on finding all solvable extensions of quasi-filiform Leibniz algebras over the field of complex numbers. For the sequence $\mathcal{L}^4$, such extensions of codimension at most two are possible. The paper is organized as follows: in Section \ref{Pr} we give some basic definitions, and in Section \ref{Con} we show what constructing solvable Leibniz algebras with a given nilradical involves. In Section \ref{NilL4} we describe the nilpotent sequence $\mathcal{L}^4$ and give a summary of the results stated in the theorems, which can be found in the remaining Section \ref{Classification}. As regards notation, we use $\langle e_1,e_2,...,e_r \rangle$ to denote the $r$-dimensional subspace generated by $e_1,e_2,...,e_r,$ where $r\in\mbox{$\Bbb N$}$. Besides, $\mathfrak{g}$ and $l$ are used to denote solvable right and left Leibniz algebras, respectively. Throughout the paper, all the algebras are finite-dimensional over the field of complex numbers, and if a bracket is not given, then it is assumed to be zero, except for the brackets of the nilradical, which most of the time are not given (see Remark \ref{remark5.1}) to save space. In the tables an ordered triple is a shorthand notation for a derivation property of the multiplication operators, which is either $\mathscr{R}_z\left([x,y]\right)= [\mathscr{R}_z(x),y]+[x,\mathscr{R}_z(y)]$ or $\mathscr{L}_z\left([x,y]\right)= [\mathscr{L}_z(x),y]+[x,\mathscr{L}_z(y)]$. We also assign $\mathscr{R}_{e_{n+1}}:=\mathscr{R}$ and $\mathscr{L}_{e_{n+1}}:=\mathscr{L}.$ We use the Maple software to compute the Leibniz identities, the ``absorption'' (see \cite{Shab, ST} and Section \ref{Solvable left Leibniz algebras}), and the changes of basis for solvable Leibniz algebras in some particular dimensions, which are then generalized and proved in an arbitrary finite dimension. \section{Preliminaries}\label{Pr} We give some basic definitions encountered when working with Leibniz algebras. \begin{definition} \begin{enumerate}[noitemsep, topsep=0pt] \item[1.] A vector space L over a field F with a bilinear operation\\ $[-,-]:\,L\times L\rightarrow L$ is called a Leibniz algebra if for any $x,\,y,\,z\in\,L$ the Leibniz identity \begin{equation}\nonumber [[x,y],z]=[[x,z],y]+[x,[y,z]]\end{equation} holds. This Leibniz identity is known as the right Leibniz identity, and we call $L$ in this case a right Leibniz algebra. \item[2.] There is a corresponding version, the left Leibniz identity, \begin{equation}\nonumber [[x,y],z]=[x,[y,z]]-[y,[x,z]], \end{equation} and in this case the Leibniz algebra $L$ is called a left Leibniz algebra. \end{enumerate} \end{definition} \begin{remarks} In addition, if $L$ satisfies $[x,x]=0$ for every $x\in L$, then it is a Lie algebra. Therefore every Lie algebra is a Leibniz algebra, but the converse is not true. \end{remarks} \begin{definition} The two-sided ideal $C(L)=\{x\in L:\,[x,y]=[y,x]=0\,\,\forall\, y\in L\}$ is said to be the center of $L.$ \end{definition} \begin{definition} A linear map $d:\,L\rightarrow L$ of a Leibniz algebra $L$ is a derivation if for all $x,\,y\in L$ \begin{equation} \nonumber d([x,y])=[d(x),y]+[x,d(y)].
\end{equation} \end{definition} If $L$ is a right Leibniz algebra and $x\in L,$ then the right multiplication operator $\mathscr{R}_x:\,L\rightarrow L$ defined as $\mathscr{R}_x(y)=[y,x],\,y\in L$ is a derivation (for a left Leibniz algebra $L$ with $x\in L,$ the left multiplication operator $\mathscr{L}_x:\,L\rightarrow L,\,\mathscr{L}_x(y)=[x,y],\,y\in L$ is a derivation). Any right Leibniz algebra $L$ is associated with the algebra of right multiplications $\mathscr{R}(L)=\{\mathscr{R}_x\,|\,x\in L\}$ endowed with the structure of a Lie algebra by means of the commutator $[\mathscr{R}_x,\mathscr{R}_y]=\mathscr{R}_x\mathscr{R}_y-\mathscr{R}_y\mathscr{R}_x=\mathscr{R}_{[y,x]},$ which defines an antihomomorphism between $L$ and $\mathscr{R}(L).$ For a left Leibniz algebra $L,$ the corresponding algebra of left multiplications $\mathscr{L}(L)=\{\mathscr{L}_x\,|\,x\in L\}$ is endowed with the structure of a Lie algebra by means of the commutator as well, $[\mathscr{L}_x,\mathscr{L}_y]=\mathscr{L}_x\mathscr{L}_y-\mathscr{L}_y\mathscr{L}_x=\mathscr{L}_{[x,y]}.$ In this case we have a homomorphism between $L$ and $\mathscr{L}(L)$. \begin{definition} Let $d_1,d_2,...,d_n$ be derivations of a Leibniz algebra $L.$ The derivations $d_1,d_2,...,d_n$ are said to be ``nil-independent'' if $\alpha_1d_1+\alpha_2d_2+\alpha_3d_3+\ldots+\alpha_nd_n$ is not nilpotent for any scalars $\alpha_1,\alpha_2,...,\alpha_n\in F$ not all equal to zero; otherwise they are ``nil-dependent''. \end{definition} \begin{definition} For a given Leibniz algebra $L,$ we define the sequence of two-sided ideals as follows: \begin{equation} \nonumber L^0=L,\,L^{k+1}=[L^k,L],\,(k\geq0)\qquad\qquad L^{(0)}=L,\,L^{(k+1)}=[L^{(k)},L^{(k)}],\,(k\geq0), \end{equation} which are the lower central series and the derived series of $L,$ respectively. A Leibniz algebra $L$ is said to be nilpotent (solvable) if there exists $m\in\mbox{$\Bbb N$}$ such that $L^m=0$ ($L^{(m)}=0$). The minimal such number $m$ is said to be the index of nilpotency (solvability). \end{definition} \begin{definition} A Leibniz algebra $L$ is called quasi-filiform if $L^{n-3}\neq0$ and $L^{n-2}=0,$ where $\dim(L)=n.$ \end{definition} \section{Constructing solvable Leibniz algebras with a given nilradical}\label{Con} Every solvable Leibniz algebra $L$ contains a unique maximal nilpotent ideal called the nilradical and denoted $\mathfrak{nil}(L)$ such that $\dim\mathfrak{nil}(L)\geq\frac{1}{2}\dim(L)$ \cite{Mub3}. Let us consider the problem of constructing solvable Leibniz algebras $L$ with a given nilradical $N=\mathfrak{nil}(L)$. Suppose $\{e_1,e_2,e_3,e_4,...,e_n\}$ is a basis for the nilradical and $\{e_{n+1},...,e_{p}\}$ is a basis for a subspace complementary to the nilradical. If $L$ is a solvable Leibniz algebra \cite{AO}, then \begin{equation}\label{Nil}[L,L]\subseteq N \end{equation} and we have the following structure equations \begin{equation}\label{Algebra}[e_i, e_j ] =C_{ij}^ke_k, [e_a, e_i] =A^k_{ai}e_k, [e_i,e_a]=A^k_{ia}e_k, [e_a, e_b] =B^k_{ab} e_k, \end{equation} where $1\leq i, j, k, m\leq n$ and $n+1\leq a, b\leq p$.
\subsection{Solvable right Leibniz algebras} Calculations show that satisfying the right Leibniz identity is equivalent to the following conditions: \begin{equation}\label{RLeibniz} A_{ai}^kC_{kj}^m=A_{aj}^kC_{ki}^m+C_{ij}^kA_{ak}^m,\,A_{ia}^kC_{kj}^m=C_{ij}^kA_{ka}^m+A_{aj}^kC_{ik}^m,\, C_{ij}^kA_{ka}^m=A_{ia}^kC_{kj}^m+A_{ja}^kC_{ik}^m, \end{equation} \begin{equation}\label{RRLeibniz} B_{ab}^kC_{ki}^m=A_{ai}^kA_{kb}^m+A_{bi}^kA_{ak}^m,\,A_{ai}^kA_{kb}^m=B_{ab}^kC_{ki}^m+A_{ib}^kA_{ak}^m,\, B_{ab}^kC_{ik}^m=A_{kb}^mA_{ia}^k-A_{ka}^mA_{ib}^k. \end{equation} Then the entries of the matrices $A_a,$ which are $\left(A_a\right)_i^k$, must satisfy the equations $(\ref{RLeibniz})$ obtained from all the possible Leibniz identities between the triples $\{e_a,e_i,e_j\}.$ Since $N$ is the nilradical of $L$, no nontrivial linear combination of the matrices $A_a,\,(n+1\leq a\leq p)$ is nilpotent, i.e., the matrices $A_a$ must be ``nil-independent'' \cite{CLOK, Mub3}. Let us now consider the right multiplication operator $\mathscr{R}_{e_a}$ and restrict it to $N$, $(n+1\leq a\leq p).$ We shall get outer derivations of the nilradical $N=\mathfrak{nil}(L)$ \cite{CLOK}. Then finding the matrices $A_a$ is the same as finding outer derivations $\mathscr{R}_{e_a}$ of $N.$ Further, the commutators $[\mathscr{R}_{e_b},\mathscr{R}_{e_a}]=\mathscr{R}_{[e_a,e_b]},\,(n+1\leq a,b\leq p)$ are, due to $(\ref{Nil})$, inner derivations of $N$. So those commutators give the structure constants $B_{ab}^k$ as shown in the last equation of $(\ref{RRLeibniz})$, but only up to the elements in the center of the nilradical $N$, because if $e_i,\,(1\leq i\leq n)$ is in the center of $N,$ then $\left(\mathscr{R}_{e_i}\right)_{|_N}=0,$ where $\left(\mathscr{R}_{e_i}\right)_{|_{N}}$ is an inner derivation of the nilradical. \subsection{Solvable left Leibniz algebras}\label{Solvable left Leibniz algebras} Satisfying the left Leibniz identity, we have \begin{equation}\label{LLeibniz} A_{ai}^kC_{jk}^m=A_{ja}^kC_{ki}^m+C_{ji}^kA_{ak}^m,\,A_{ia}^kC_{jk}^m=C_{ji}^kA_{ka}^m+A_{ja}^kC_{ik}^m,\, C_{ij}^kA_{ak}^m=A_{ai}^kC_{kj}^m+A_{aj}^kC_{ik}^m, \end{equation} \begin{equation}\label{LLLeibniz} B_{ab}^kC_{ik}^m=A_{ia}^kA_{kb}^m+A_{ib}^kA_{ak}^m,\,A_{ib}^kA_{ak}^m=B_{ab}^kC_{ik}^m+A_{ai}^kA_{kb}^m,\, B_{ab}^kC_{ki}^m=A_{ak}^mA_{bi}^k-A_{bk}^mA_{ai}^k, \end{equation} and the entries of the matrices $A_a,$ which are $(A_a)_i^k$, must satisfy the equations $(\ref{LLeibniz})$. Similarly, $A_a=\left(\mathscr{L}_{e_a}\right)_{|_{N}},\,(n+1\leq a\leq p)$ are outer derivations, and the commutators $[\mathscr{L}_{e_a},\mathscr{L}_{e_b}]=\mathscr{L}_{[e_a,e_b]}$ give the structure constants $B_{ab}^k,$ but only up to the elements in the center of the nilradical $N$. Once the left or right Leibniz identities are satisfied in the most general possible way and the outer derivations are found: \begin{enumerate}[noitemsep, topsep=0pt] \item[(i)] We can carry out the technique of ``absorption'' \cite{Shab, ST}, which means we can simplify a solvable Leibniz algebra without affecting the nilradical in $(\ref{Algebra})$ by applying the transformation \begin{equation} \nonumber e^{\prime}_i=e_i,\,(1\leq i\leq n),\,e^{\prime}_a=e_a+\sum_{k=1}^nd^ke_k,\,(n+1\leq a\leq p). \end{equation} \item[(ii)] We change the basis without affecting the nilradical in $(\ref{Algebra})$ to remove all the possible parameters and simplify the algebra.
\end{enumerate} \section{The nilpotent sequence $\mathcal{L}^4$}\label{NilL4} In $\mathcal{L}^4,(n\geq4)$ the positive integer $n$ denotes the dimension of the algebra. The center of this algebra is $C(\mathcal{L}^4)=\langle e_{2},e_n \rangle$. $\mathcal{L}^4$ can be described explicitly as follows: in the basis $\{e_1,e_2,e_3,e_4,\ldots,e_n\}$ it has only the following non-zero brackets: \begin{equation} \begin{array}{l} \displaystyle [e_1,e_1]=e_{2},[e_i,e_1]=e_{i+1},(3\leq i\leq n-1),[e_1,e_3]=2e_2-e_4,[e_3,e_3]=e_2,\\ \displaystyle [e_1,e_j]=-e_{j+1},(4\leq j\leq n-1,n\geq4). \end{array} \label{L4} \end{equation} The dimensions of the ideals in the characteristic series are \begin{equation}\nonumber DS=[n,n-2,0], LS=[n,n-2,n-4,n-5,n-6,...,0],(n\geq4).\end{equation} A quasi-filiform Leibniz algebra $\mathcal{L}^4$ was introduced by Camacho, G\'{o}mez, Gonz\'{a}lez and Omirov in \cite{CGGO}. This algebra serves as the nilradical for the left and right solvable indecomposable extensions we construct in this paper. It is shown below that solvable right (left) Leibniz algebras with the nilradical $\mathcal{L}^4$ only exist for $\dim\mathfrak{g}=n+1$ and $\dim\mathfrak{g}=n+2$ ($\dim l=n+1$ and $\dim l=n+2$). Right solvable extensions with a codimension one nilradical $\mathcal{L}^4$ are found following the steps in Theorems \ref{TheoremRL4}, \ref{TheoremRL4Absorption} and \ref{RL4(Change of Basis)} with the main result summarized in Theorem \ref{RL4(Change of Basis)}, where it is shown that there are eight such algebras: $\mathfrak{g}_{n+1,i},(1\leq i\leq 4,n\geq4),$ $\mathfrak{g}_{5,5},\mathfrak{g}_{5,6},\mathfrak{g}_{5,7}$ and $\mathfrak{g}_{5,8}$. Note that $\mathfrak{g}_{n+1,1}$ is left as well when $a=0,$ $\mathfrak{g}_{5,5}$ is left when $b=-1$ and $\mathfrak{g}_{n+1,4},\mathfrak{g}_{5,7},\mathfrak{g}_{5,8}$ are right and left Leibniz algebras at the same time. There are four solvable indecomposable right Leibniz algebras with a codimension two nilradical: $\mathfrak{g}_{n+2,1},(n\geq5),\mathfrak{g}_{6,2},\mathfrak{g}_{6,3}$ and $\mathfrak{g}_{6,4}$ stated in Theorem \ref{RCodim2L4}, where none of them is left. We follow the steps in Theorems \ref{TheoremLL4}, \ref{TheoremLL4Absorption} and \ref{LL4(Change of Basis)} to find codimension one left solvable extensions. We notice in Theorem \ref{LL4(Change of Basis)} that there are eight of them as well: $l_{n+1,1},l_{n+1,2},l_{n+1,3},$ $\mathfrak{g}_{n+1,4},l_{5,5},l_{5,6},\mathfrak{g}_{5,7}$ and $\mathfrak{g}_{5,8},$ such that $l_{n+1,1}$ is right when $a=0$ and $l_{5,5}$ is right when $b=-1.$ We find four solvable indecomposable left Leibniz algebras with a codimension two nilradical as well: $l_{n+2,1},(n\geq5),l_{6,2},l_{6,3}$ and $l_{6,4}$ stated in Theorem \ref{LCodim2L4}, where none of them is right. \section{Classification of solvable indecomposable Leibniz algebras with a nilradical $\mathcal{L}^4$}\label{Classification} Our goal in this section is to find all possible right and left solvable indecomposable extensions of the nilpotent Leibniz algebra $\mathcal{L}^4,$ which serves as the nilradical of the extended algebra.
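As a quick sanity check on the brackets $(\ref{L4})$, the following short script (a numerical sketch in Python, independent of the Maple computations used for the classification itself) builds the structure constants of $\mathcal{L}^4$ for a given $n$ and verifies the right Leibniz identity on all basis triples; the left identity can be checked analogously:
\begin{verbatim}
import numpy as np
from itertools import product

def structure_constants(n):
    """Structure constants of L^4: C[i-1, j-1] is the coordinate vector of
    [e_i, e_j] in the basis e_1, ..., e_n, following the brackets (L4)."""
    C = np.zeros((n, n, n))
    def e(k):                                # basis vector e_k (1-based)
        v = np.zeros(n); v[k - 1] = 1.0; return v
    C[0, 0] = e(2)                           # [e1, e1] = e2
    for i in range(3, n):                    # [e_i, e1] = e_{i+1}, 3 <= i <= n-1
        C[i - 1, 0] = e(i + 1)
    C[0, 2] = 2 * e(2) - e(4)                # [e1, e3] = 2 e2 - e4
    C[2, 2] = e(2)                           # [e3, e3] = e2
    for j in range(4, n):                    # [e1, e_j] = -e_{j+1}, 4 <= j <= n-1
        C[0, j - 1] = -e(j + 1)
    return C

def bracket(C, u, v):
    # [u, v] for coordinate vectors u, v.
    return np.einsum('i,j,ijk->k', u, v, C)

def check_right_leibniz(n):
    C = structure_constants(n)
    I = np.eye(n)
    for x, y, z in product(range(n), repeat=3):
        lhs = bracket(C, bracket(C, I[x], I[y]), I[z])
        rhs = (bracket(C, bracket(C, I[x], I[z]), I[y])
               + bracket(C, I[x], bracket(C, I[y], I[z])))
        assert np.allclose(lhs, rhs), (x + 1, y + 1, z + 1)
    return True

print(check_right_leibniz(7))   # expected output: True
\end{verbatim}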
\begin{remarks}\label{remark5.1} It is assumed throughout this section that solvable indecomposable right and left Leibniz algebras have the nilradical $\mathcal{L}^4$; however, most of the time, the brackets of the nilradical will be omitted. \end{remarks} \subsection{Solvable indecomposable right Leibniz algebras with a nilradical $\mathcal{L}^4$} \subsubsection{Codimension one solvable extensions of $\mathcal{L}^4$} The nilpotent Leibniz algebra $\mathcal{L}^4$ is defined in $(\ref{L4})$. Suppose $\{e_{n+1}\}$ is in the complementary subspace to the nilradical $\mathcal{L}^4$ and $\mathfrak{g}} \def\q{\mathfrak{q}$ is the corresponding solvable right Leibniz algebra. Since $[\mathfrak{g}} \def\q{\mathfrak{q},\mathfrak{g}} \def\q{\mathfrak{q}]\subseteq \mathcal{L}^4,$ we have the following: \begin{equation} \left\{ \begin{array}{l} \displaystyle [e_1,e_1]=e_{2},[e_i,e_1]=e_{i+1},(3\leq i\leq n-1),[e_1,e_3]=2e_2-e_4,[e_3,e_3]=e_2,\\ \displaystyle [e_1,e_j]=-e_{j+1},(4\leq j\leq n-1),[e_r,e_{n+1}]=\sum_{s=1}^na_{s,r}e_s,[e_{n+1},e_k]=\sum_{s=1}^nb_{s,k}e_s,\\ \displaystyle (1\leq k\leq n,1\leq r\leq n+1,n\geq4). \end{array} \right. \label{BRLeibniz} \end{equation} \begin{theorem}\label{TheoremRL4} We set $a_{1,1}:=a$ and $a_{3,3}:=b$ in $(\ref{BRLeibniz})$. To satisfy the right Leibniz identity, there are the following cases based on the conditions involving parameters, each gives a continuous family of solvable Leibniz algebras: \begin{enumerate}[noitemsep, topsep=0pt] \item[(1)] If $a_{1,3}=0, b\neq-a,a\neq0,b\neq0,(n=4)$ or $b\neq(3-n)a,a\neq0,b\neq0,(n\geq5),$ then we have the following brackets for the algebra: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=ae_1+A_{2,1}e_2+(b-a)e_3+\sum_{k=4}^n{a_{k,1}e_k},[e_2,e_{n+1}]=2be_2, [e_3,e_{n+1}]=a_{2,3}e_2+be_3+\\ \displaystyle \sum_{k=4}^n{a_{k,3}e_k},[e_4,e_{n+1}]=(b-a)e_2+(a+b)e_4+\sum_{k=5}^n{a_{k-1,3}e_k}, [e_{i},e_{n+1}]=\left((i-3)a+b\right)e_{i}+\\ \displaystyle \sum_{k=i+1}^n{a_{k-i+3,3}e_k}, [e_{n+1},e_{n+1}]=a_{2,n+1}e_2,[e_{n+1},e_1]=-ae_1+b_{2,1}e_2+(a-b)e_3-\sum_{k=4}^n{a_{k,1}e_k},\\ \displaystyle [e_{n+1},e_3]=(a_{2,3}+2a_{4,3})e_2-be_3-\sum_{k=4}^n{a_{k,3}e_k},[e_{n+1},e_4]=(a+b)(e_2-e_4)-\sum_{k=5}^n{a_{k-1,3}e_k},\\ \displaystyle [e_{n+1},e_i]=\left((3-i)a-b\right)e_i-\sum_{k=i+1}^n{a_{k-i+3,3}e_k},(5\leq i\leq n),\\ \displaystyle where\,\,A_{2,1}:=\frac{(2b-a)b_{2,1}-2b\cdot a_{4,1}+2(a-b)(a_{2,3}+a_{4,3})}{a}. \end{array} \right. \end{equation} \item[(2)] If $a_{1,3}=0,b:=-a,a\neq0,(n=4)$ or $b:=(3-n)a,a\neq0,(n\geq5),$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=ae_1+A_{2,1}e_2+(2-n)ae_3+\sum_{k=4}^n{a_{k,1}e_k},[e_2,e_{n+1}]=2(3-n)ae_2,\\ \displaystyle [e_3,e_{n+1}]=a_{2,3}e_2+(3-n)ae_3+\sum_{k=4}^n{a_{k,3}e_k},[e_4,e_{n+1}]=(2-n)ae_2+(4-n)ae_4+\\ \displaystyle \sum_{k=5}^n{a_{k-1,3}e_k},[e_{i},e_{n+1}]=(i-n)ae_{i}+\sum_{k=i+1}^n{a_{k-i+3,3}e_k}, [e_{n+1},e_{n+1}]=a_{2,n+1}e_2+a_{n,n+1}e_n,\\ \displaystyle [e_{n+1},e_1]=-ae_1+b_{2,1}e_2+(n-2)ae_3-\sum_{k=4}^n{a_{k,1}e_k},[e_{n+1},e_3]=(a_{2,3}+2a_{4,3})e_2+\\ \displaystyle (n-3)ae_3-\sum_{k=4}^n{a_{k,3}e_k},[e_{n+1},e_4]=(4-n)ae_2+(n-4)ae_4-\sum_{k=5}^n{a_{k-1,3}e_k},\\ \displaystyle [e_{n+1},e_i]=(n-i)ae_i-\sum_{k=i+1}^n{a_{k-i+3,3}e_k},(5\leq i\leq n-1),\\ \displaystyle where\,\,A_{2,1}:=(5-2n)b_{2,1}+2(n-3)a_{4,1}+2(n-2)(a_{2,3}+a_{4,3}). \end{array} \right. 
\end{equation} \item[(3)] If $a_{1,3}=0,a=0,b\neq0,(n=4)$ or $a=0$ and $b\neq0,(n\geq5),$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=a_{2,1}e_2+be_3+\sum_{k=4}^n{a_{k,1}e_k},[e_2,e_{n+1}]=2be_2, [e_3,e_{n+1}]=a_{2,3}e_2+be_3+\sum_{k=4}^n{a_{k,3}e_k},\\ \displaystyle [e_4,e_{n+1}]=b(e_2+e_4)+\sum_{k=5}^n{a_{k-1,3}e_k}, [e_{i},e_{n+1}]=be_{i}+\sum_{k=i+1}^n{a_{k-i+3,3}e_k},\\ \displaystyle [e_{n+1},e_{n+1}]=a_{2,n+1}e_2, [e_{n+1},e_1]=\left(a_{2,3}+a_{4,1}+a_{4,3}\right)e_2-be_3-\sum_{k=4}^n{a_{k,1}e_k},\\ \displaystyle [e_{n+1},e_3]=(a_{2,3}+2a_{4,3})e_2-be_3-\sum_{k=4}^n{a_{k,3}e_k},[e_{n+1},e_4]=b(e_2-e_4)-\sum_{k=5}^n{a_{k-1,3}e_k},\\ \displaystyle [e_{n+1},e_i]=-be_i-\sum_{k=i+1}^n{a_{k-i+3,3}e_k},(5\leq i\leq n). \end{array} \right. \end{equation} \allowdisplaybreaks \item[(4)] If $a_{1,3}=0,b=0,a\neq0,(n=4)$ or $b=0,a\neq0,(n\geq5),$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=ae_1+\left(a_{2,3}-b_{2,1}+b_{2,3}\right)e_2-ae_3+\sum_{k=4}^n{a_{k,1}e_k}, [e_3,e_{n+1}]=a_{2,3}e_2+\sum_{k=4}^n{a_{k,3}e_k},\\ \displaystyle [e_4,e_{n+1}]=-ae_2+ae_4+\sum_{k=5}^n{a_{k-1,3}e_k}, [e_{i},e_{n+1}]=(i-3)ae_{i}+\sum_{k=i+1}^n{a_{k-i+3,3}e_k},\\ \displaystyle [e_{n+1},e_{n+1}]=a_{2,n+1}e_2,[e_{n+1},e_1]=-ae_1+b_{2,1}e_2+ae_3-\sum_{k=4}^n{a_{k,1}e_k},[e_{n+1},e_3]=b_{2,3}e_2-\\ \displaystyle \sum_{k=4}^n{a_{k,3}e_k},[e_{n+1},e_4]=ae_2-ae_4-\sum_{k=5}^n{a_{k-1,3}e_k},[e_{n+1},e_i]=(3-i)ae_i-\sum_{k=i+1}^n{a_{k-i+3,3}e_k},\\ \displaystyle (5\leq i\leq n). \end{array} \right. \end{equation} In the remaining cases $a_{1,3}:=c.$ \item[(5)] If $b\neq-a,a\neq0,b\neq-c,c\neq0,$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=ae_1+A_{2,1}e_2+(b+c-a)e_3+a_{4,1}e_4,[e_2,e_{5}]=2(b+c)e_2, [e_3,e_{5}]=ce_1+a_{2,3}e_2+\\ \displaystyle be_3+A_{4,3}e_4,[e_4,e_{5}]=(2c+b-a)e_2+(a+b)e_4,[e_{5},e_{5}]=a_{2,5}e_2,[e_{5},e_1]=-ae_1+b_{2,1}e_2+\\ \displaystyle (a-b-c)e_3-a_{4,1}e_4,[e_{5},e_3]=-ce_1+b_{2,3}e_2-be_3-A_{4,3}e_4,[e_{5},e_4]=(a+b)e_2-(a+b)e_4,\\ \displaystyle where\,\,A_{2,1}:=a_{2,3}-b_{2,1}+b_{2,3}-\frac{(b+c)(a_{2,3}-2b_{2,1}+2a_{4,1}+b_{2,3})}{a}\,\,and \\ \displaystyle A_{4,3}:=\frac{(c-a)a_{2,3}+(a+c)b_{2,3}-2c(b_{2,1}-a_{4,1})}{2a}, \end{array} \right. \end{equation} $\mathscr{R}_{e_{5}}=\left[\begin{smallmatrix} a & 0 & c & 0\\ A_{2,1} & 2(b+c) & a_{2,3}& 2c+b-a \\ b+c-a & 0 & b & 0\\ a_{4,1} & 0 & A_{4,3} & a+b \end{smallmatrix}\right].$ \item[(6)] If $b:=-a,a\neq0,a\neq c,c\neq0,$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=ae_1+A_{2,1}e_2+(c-2a)e_3+a_{4,1}e_4,[e_2,e_{5}]=2(c-a)e_2, [e_3,e_{5}]=ce_1+a_{2,3}e_2-\\ \displaystyle ae_3+A_{4,3}e_4,[e_4,e_{5}]=2(c-a)e_2,[e_{5},e_{5}]=a_{2,5}e_2+a_{4,5}e_4,[e_{5},e_1]=-ae_1+b_{2,1}e_2+\\ \displaystyle (2a-c)e_3-a_{4,1}e_4,[e_{5},e_3]=-ce_1+b_{2,3}e_2+ae_3-A_{4,3}e_4,\\ \displaystyle where\,\,A_{2,1}:=2a_{2,3}-3b_{2,1}+2b_{2,3}+2a_{4,1}-\frac{c(a_{2,3}-2b_{2,1}+2a_{4,1}+b_{2,3})}{a}\,\,and \\ \displaystyle A_{4,3}:=\frac{(c-a)a_{2,3}+(a+c)b_{2,3}-2c(b_{2,1}-a_{4,1})}{2a}, \end{array} \right. 
\end{equation} $\mathscr{R}_{e_{5}}=\left[\begin{smallmatrix} a & 0 & c & 0\\ A_{2,1} & 2(c-a) & a_{2,3}& 2(c-a) \\ c-2a & 0 &-a & 0\\ a_{4,1} & 0 & A_{4,3} &0 \end{smallmatrix}\right].$ \item[(7)] If $b:=-c,c\neq0,a\neq c,a\neq0,$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=ae_1+\left(a_{2,3}-b_{2,1}+b_{2,3}\right)e_2-ae_3+a_{4,1}e_4, [e_3,e_{5}]=ce_1+a_{2,3}e_2-ce_3+a_{4,3}e_4,\\ \displaystyle [e_4,e_{5}]=(c-a)e_2+(a-c)e_4,[e_{5},e_{5}]=a_{2,5}e_2,[e_{5},e_1]=-ae_1+b_{2,1}e_2+ae_3-a_{4,1}e_4,\\ \displaystyle [e_{5},e_3]=-ce_1+b_{2,3}e_2+ce_3-a_{4,3}e_4,[e_{5},e_4]=(a-c)e_2+(c-a)e_4, \end{array} \right. \end{equation} $\mathscr{R}_{e_{5}}=\left[\begin{smallmatrix} a & 0 & c & 0\\ a_{2,3}-b_{2,1}+b_{2,3} & 0 & a_{2,3}& c-a \\ -a & 0 & -c & 0\\ a_{4,1} & 0 & a_{4,3} & a-c \end{smallmatrix}\right].$ \item[(8)] If $c:=a,b:=-a,a\neq0,$ then\footnote{The outer derivation $\mathscr{R}_{e_{5}}$ is nilpotent, so we do not consider this case any further.} \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=ae_1+\left(a_{2,3}-b_{2,1}+b_{2,3}\right)e_2-ae_3+a_{4,1}e_4, [e_3,e_{5}]=ae_1+a_{2,3}e_2-ae_3+a_{4,3}e_4,\\ \displaystyle [e_{5},e_{5}]=a_{2,5}e_2+a_{4,5}e_4,[e_{5},e_1]=-ae_1+b_{2,1}e_2+ae_3-\left(a_{4,1}-a_{4,3}-b_{4,3}\right)e_4,\\ \displaystyle [e_{5},e_3]=-ae_1+b_{2,3}e_2+ae_3+b_{4,3}e_4, \end{array} \right. \end{equation} $\mathscr{R}_{e_{5}}=\left[\begin{smallmatrix} a & 0 & a & 0\\ a_{2,3}-b_{2,1}+b_{2,3}& 0& a_{2,3}& 0 \\ -a & 0 & -a & 0\\ a_{4,1} & 0 & a_{4,3} & 0 \end{smallmatrix}\right].$ \item[(9)] If $a=0,b=0,c\neq0,$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=\left(3b_{2,1}-4a_{4,1}-2a_{2,3}-2a_{4,3}\right)e_2+ce_3+a_{4,1}e_4,[e_2,e_{5}]=2ce_2, [e_3,e_{5}]=ce_1+\\ \displaystyle a_{2,3}e_2+a_{4,3}e_4,[e_4,e_{5}]=2ce_2,[e_{5},e_{5}]=a_{2,5}e_2+a_{4,5}e_4,[e_{5},e_1]=b_{2,1}e_2-ce_3-a_{4,1}e_4,\\ \displaystyle [e_{5},e_3]=-ce_1+\left(2b_{2,1}-2a_{4,1}-a_{2,3}\right)e_2-a_{4,3}e_4, \end{array} \right. \end{equation} $\mathscr{R}_{e_{5}}=\left[\begin{smallmatrix} 0 & 0 & c & 0\\ 3b_{2,1}-4a_{4,1}-2a_{2,3}-2a_{4,3} & 2c & a_{2,3}& 2c\\ c & 0 & 0 & 0\\ a_{4,1} & 0 & a_{4,3} & 0 \end{smallmatrix}\right].$ \item[(10)] If $a=0,b\neq0,b\neq-c,c\neq0,$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=a_{2,1}e_2+(b+c)e_3+a_{4,1}e_4,[e_2,e_{5}]=2(b+c)e_2, [e_3,e_{5}]=ce_1+a_{2,3}e_2+be_3+\\ \displaystyle A_{4,3}e_4,[e_4,e_{5}]=(2c+b)e_2+be_4,[e_{5},e_{5}]=a_{2,5}e_2,[e_{5},e_1]=\left(a_{4,1}+\frac{a_{2,3}+b_{2,3}}{2}\right)e_2-\\ \displaystyle (b+c)e_3-a_{4,1}e_4,[e_{5},e_3]=-ce_1+b_{2,3}e_2-be_3-A_{4,3}e_4,[e_{5},e_4]=b(e_2-e_4),\\ \displaystyle where\,\,A_{4,3}:=\frac{(3c+2b)b_{2,3}-(2b+c)a_{2,3}-2c(a_{2,1}+a_{4,1})}{4(b+c)}, \end{array} \right. \end{equation} $\mathscr{R}_{e_{5}}=\left[\begin{smallmatrix} 0 & 0 & c & 0\\ a_{2,1} & 2(b+c) & a_{2,3}& 2c+b \\ b+c & 0 & b & 0\\ a_{4,1} & 0 & A_{4,3} & b \end{smallmatrix}\right].$ \item[(11)] If $a=0,b:=-c,c\neq0,$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=a_{2,1}e_2+a_{4,1}e_4, [e_3,e_{5}]=ce_1+a_{2,3}e_2-ce_3+a_{4,3}e_4,[e_4,e_{5}]=ce_2-ce_4,\\ \displaystyle [e_{5},e_{5}]=a_{2,5}e_2,[e_{5},e_1]=\left(a_{2,3}+b_{2,3}-a_{2,1}\right)e_2-a_{4,1}e_4,[e_{5},e_3]=-ce_1+b_{2,3}e_2+ce_3-\\ \displaystyle a_{4,3}e_4,[e_{5},e_4]=-ce_2+ce_4, \end{array} \right. 
\end{equation} $\mathscr{R}_{e_{5}}=\left[\begin{smallmatrix} 0 & 0 & c & 0\\ a_{2,1} & 0 & a_{2,3}& c \\ 0 & 0 &-c & 0\\ a_{4,1} & 0 & a_{4,3} & -c \end{smallmatrix}\right].$ \end{enumerate} \end{theorem} \vskip 5pt \begin{proof} \begin{enumerate}[noitemsep, topsep=0pt] \item[(1)] For $(n\geq5)$ the proof is given in Table \ref{Right(L4)}. For $(n=4)$ we work out the following identities: $1.,3.-5.,7.,8.,10.,12.,13.$ (or $15.$), $16.-19.$ \item[(2)] For $(n\geq5),$ we apply the same identities given in Table \ref{Right(L4)}, except $17.$ For $(n=4)$, the identities are as follows: $1.,3.-5.,7.,8.,10.,12.,15.,16.-19.$ \item[(3)] For $(n\geq5),$ we apply the same identities except $18.$ and $19.,$ applying instead $\mathscr{R}_{e_3}\left([e_{n+1},e_{n+1}]\right)=[\mathscr{R}_{e_3}(e_{n+1}),e_{n+1}]+[e_{n+1},\mathscr{R}_{e_3}(e_{n+1})]$ and $\mathscr{R}[e_{n+1},e_{1}]=[\mathscr{R}(e_{n+1}),e_{1}]+[e_{n+1},\mathscr{R}(e_{1})]$. For $(n=4)$, we apply the same identities as in case $(1),$ except $18.$ and $19.,$ applying instead the two identities given above. \item[(4)] Same identities as in case $(1).$ \item[(5)] We apply the following identities: $1.,3.-5.,7.,8.,10.,12.,13.,16.,17., \mathscr{R}_{e_3}\left([e_5,e_5]\right)=[\mathscr{R}_{e_3}(e_5),e_5]+[e_5,\mathscr{R}_{e_3}(e_5)],18.,19.$ \item[(6)] We apply the same identities as in $(5)$, except $17.$ and $18.$ \item[(7)] Same identities as in $(5),$ except $19.$ \item[(8)] We apply the same identities as in $(5),$ except $17.,18.$ and $19.$ \item[(9)] Same identities as in $(5),$ except $17.$ and $18.$ \item[(10)] We apply the following identities: $1.,3.-5.,7.,8.,10.,12.,13.,16.,17.,19., \mathscr{R}_{e_5}\left([e_5,e_1]\right)=[\mathscr{R}_{e_5}(e_5),e_1]+[e_5,\mathscr{R}_{e_5}(e_1)], \mathscr{R}_{e_3}\left([e_5,e_5]\right)=[\mathscr{R}_{e_3}(e_5),e_5]+[e_5,\mathscr{R}_{e_3}(e_5)].$ \item[(11)] Same identities as in $(5),$ except $19.$ \end{enumerate} \end{proof} \begin{table}[h!]
\caption{Right Leibniz identities in case $(1)$ in Theorem \ref{TheoremRL4}, ($n\geq5$).} \label{Right(L4)} \begin{tabular}{lp{2.4cm}p{12cm}} \hline \scriptsize Steps &\scriptsize Ordered triple &\scriptsize Result\\ \hline \scriptsize $1.$ &\scriptsize $\mathscr{R}[e_1,e_{1}]$ &\scriptsize $a_{1,2}=0,a_{3,1}:=\frac{1}{2}a_{2,2}-a,a_{k,2}=0,(3\leq k\leq n)$ $\implies$ $[e_1,e_{n+1}]=ae_1+a_{2,1}e_2+\left(\frac{1}{2}a_{2,2}-a\right)e_3+\sum_{k=4}^n{a_{k,1}e_k},[e_2,e_{n+1}]=a_{2,2}e_2.$\\ \hline \scriptsize $2.$ &\scriptsize $\mathscr{R}[e_{3},e_{i}]$ &\scriptsize $a_{1,3}=a_{1,i}=a_{3,i}=0$ $\implies$ $[e_3,e_{n+1}]=a_{2,3}e_2+be_3+\sum_{k=4}^n{a_{k,3}e_k},[e_i,e_{n+1}]=a_{2,i}e_2+\sum_{k=4}^n{a_{k,i}e_k},(4\leq i\leq n-1).$\\ \hline \scriptsize $3.$ &\scriptsize $\mathscr{R}[e_{3},e_{n}]$ &\scriptsize $a_{1,n}=a_{3,n}=0$ $\implies$ $[e_{n},e_{n+1}]=a_{2,n}e_2+\sum_{k=4}^n{a_{k,n}e_k}.$\\ \hline \scriptsize $4.$ &\scriptsize $\mathscr{R}[e_{3},e_{3}]$ &\scriptsize $a_{2,2}:=2b$ $\implies$ $[e_{1},e_{n+1}]=ae_1+a_{2,1}e_2+(b-a)e_3+\sum_{k=4}^n{a_{k,1}e_k}, [e_2,e_{n+1}]=2be_2.$\\ \hline \scriptsize $5.$ &\scriptsize $\mathscr{R}[e_1,e_{3}]$ &\scriptsize $a_{2,4}:=b-a,a_{4,4}:=a+b,a_{k,4}:=a_{k-1,3},(5\leq k\leq n)$ $\implies$ $[e_4,e_{n+1}]=(b-a)e_2+(a+b)e_4+\sum_{k=5}^n{a_{k-1,3}e_k}.$\\ \hline \scriptsize $6.$ &\scriptsize $\mathscr{R}[e_{1},e_{i}]$ &\scriptsize $a_{2,i+1}=a_{4,i+1}=0,a_{i+1,i+1}:=(i-2)a+b,a_{k,i+1}:=a_{k-1,i},(5\leq k\leq n,k\neq i+1,4\leq i\leq n-1),$ where $i$ is fixed $\implies$ $[e_{j},e_{n+1}]=\left((j-3)a+b\right)e_j+\sum_{k=j+1}^n{a_{k-j+3,3}e_k},(5\leq j\leq n).$\\ \hline \scriptsize $7.$ &\scriptsize $\mathscr{R}_{e_1}\left([e_{n+1},e_{1}]\right)$ &\scriptsize $[e_{n+1},e_2]=0$ $\implies$ $b_{k,2}=0,(1\leq k\leq n).$\\ \hline \scriptsize $8.$ &\scriptsize $\mathscr{R}_{e_1}\left([e_3,e_{n+1}]\right)$ &\scriptsize $b_{1,1}:=-a,b_{3,1}:=a-b$ $\implies$ $[e_{n+1},e_1]=-ae_1+b_{2,1}e_2+(a-b)e_3+\sum_{k=4}^{n}b_{k,1}e_k.$\\ \hline \scriptsize $9.$ &\scriptsize $\mathscr{R}_{e_1}\left([e_{1},e_{n+1}]\right)$ &\scriptsize $b_{k-1,1}:=-a_{k-1,1},(5\leq k\leq n,n\geq5)$ $\implies$ $[e_{n+1},e_1]=-ae_1+b_{2,1}e_2+(a-b)e_3-\sum_{k=4}^{n-1}{a_{k,1}e_k}+b_{n,1}e_n.$ \\ \hline \scriptsize $10.$ &\scriptsize $\mathscr{R}_{e_3}\left([e_{1},e_{n+1}]\right)$ &\scriptsize $b_{1,3}=0,b_{3,3}:=-b,b_{k-1,3}:=-a_{k-1,3},(5\leq k\leq n)$ $\implies$ $[e_{n+1},e_{3}]=b_{2,3}e_2-be_3-\sum_{k=4}^{n-1}{a_{k,3}e_k}+b_{n,3}e_n.$\\ \hline \scriptsize $11.$ &\scriptsize $\mathscr{R}_{e_i}\left([e_{1},e_{n+1}]\right)$ &\scriptsize $b_{3,i}=0\implies b_{1,i}=0;b_{i,i}:=(3-i)a-b, b_{k-1,i}=0,(5\leq k\leq i),$ $b_{k-1,i}:=-a_{k-i+2,3},(i+2\leq k\leq n),$ where $i$ is fixed. 
$\implies$ $[e_{n+1},e_i]=b_{2,i}e_2+\left((3-i)a-b\right)e_i-\sum_{k=i+1}^{n-1}{a_{k-i+3,3}e_k}+b_{n,i}e_n,(4\leq i\leq n-1).$ \\ \hline \scriptsize $12.$ &\scriptsize $\mathscr{R}_{e_{n}}\left([e_{1},e_{n+1}]\right)$ &\scriptsize $b_{3,n}=0\implies b_{1,n}=0;b_{k-1,n}=0,(5\leq k\leq n)$ $\implies$ $[e_{n+1},e_{n}]=b_{2,n}e_2+b_{n,n}e_n.$\\ \hline \scriptsize $13.$ &\scriptsize $\mathscr{R}_{e_{1}}\left([e_{n+1},e_{3}]\right)$ &\scriptsize $b_{2,4}:=a+b,b_{n,4}:=-a_{n-1,3}$ $\implies$ $[e_{n+1},e_{4}]=(a+b)e_2-(a+b)e_4-\sum_{k=5}^n{a_{k-1,3}e_k}.$\\ \hline \scriptsize $14.$ &\scriptsize $\mathscr{R}_{e_{1}}\left([e_{n+1},e_{i}]\right)$ &\scriptsize $b_{2,i+1}=0,b_{n,i+1}:=-a_{n-i+2,3},(4\leq i\leq n-2)$ $\implies$ $[e_{n+1},e_{j}]=\left((3-j)a-b\right)e_j-\sum_{k=j+1}^n{a_{k-j+3,3}e_k},(5\leq j\leq n-1).$\\ \hline \scriptsize $15.$ &\scriptsize $\mathscr{R}_{e_{1}}\left([e_{n+1},e_{n-1}]\right)$ &\scriptsize $b_{2,n}=0,b_{n,n}:=(3-n)a-b$ $\implies$ $[e_{n+1},e_{n}]=\left((3-n)a-b\right)e_n.$ Combining with $14.,$ $[e_{n+1},e_{i}]=\left((3-i)a-b\right)e_i-\sum_{k=i+1}^n{a_{k-i+3,3}e_k},(5\leq i\leq n).$\\ \hline \scriptsize $16.$ &\scriptsize $\mathscr{R}[e_{1},e_{n+1}]$ &\scriptsize $a_{3,n+1}=0\implies a_{1,n+1}=0; a_{k-1,n+1}=0,(5\leq k\leq n)$ $\implies$ $[e_{n+1},e_{n+1}]=a_{2,n+1}e_2+a_{n,n+1}e_n.$\\ \hline \scriptsize $17.$ &\scriptsize $\mathscr{R}[e_{n+1},e_{n+1}]$ &\scriptsize $a_{n,n+1}=0,(b\neq(3-n)a)$ $\implies$ $[e_{n+1},e_{n+1}]=a_{2,n+1}e_2.$\\ \hline \scriptsize $18.$ &\scriptsize $\mathscr{R}[e_{n+1},e_{3}]$ &\scriptsize $b_{n,3}:=-a_{n,3},(a\neq0),b_{2,3}:=a_{2,3}+2a_{4,3},(b\neq0)$ $\implies$ $[e_{n+1},e_{3}]=\left(a_{2,3}+2a_{4,3}\right)e_2-be_3-\sum_{k=4}^{n}{a_{k,3}e_k}.$\\ \hline \scriptsize $19.$ &\scriptsize $\mathscr{R}_{e_1}\left([e_{n+1},e_{n+1}]\right)$ &\scriptsize $b_{n,1}:=-a_{n,1},(a\neq0),A_{2,1}:=\frac{(2b-a)b_{2,1}-2b\cdot a_{4,1}+2(a-b)(a_{2,3}+a_{4,3})}{a}\implies$ $[e_{n+1},e_1]=-ae_1+b_{2,1}e_2+(a-b)e_3-\sum_{k=4}^{n}{a_{k,1}e_k}, [e_{1},e_{n+1}]=ae_1+A_{2,1}e_2+(b-a)e_3+\sum_{k=4}^n{a_{k,1}e_k}.$ \\ \hline \end{tabular} \end{table} \begin{theorem}\label{TheoremRL4Absorption} Applying the technique of ``absorption'' (see Section \ref{Solvable left Leibniz algebras}), we can further simplify the algebras in each of the cases in Theorem \ref{TheoremRL4} as follows: \begin{enumerate}[noitemsep, topsep=0pt] \allowdisplaybreaks \item[(1)] If $a_{1,3}=0, b\neq-a,a\neq0,b\neq0,(n=4)$ or $b\neq(3-n)a,a\neq0,b\neq0,(n\geq5),$ then we have the following brackets for the algebra: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=ae_1+\mathcal{A}_{2,1}e_2+(b-a)e_3,[e_2,e_{n+1}]=2be_2,[e_3,e_{n+1}]=a_{2,3}e_2+be_3+\sum_{k=5}^n{a_{k,3}e_k},\\ \displaystyle [e_4,e_{n+1}]=(b-a)e_2+(a+b)e_4+\sum_{k=6}^n{a_{k-1,3}e_k},[e_{i},e_{n+1}]=\left((i-3)a+b\right)e_{i}+\\ \displaystyle \sum_{k=i+2}^n{a_{k-i+3,3}e_k},[e_{n+1},e_1]=-ae_1+b_{2,1}e_2+(a-b)e_3,[e_{n+1},e_3]=a_{2,3}e_2-be_3-\sum_{k=5}^n{a_{k,3}e_k},\\ \displaystyle [e_{n+1},e_4]=(a+b)\left(e_2-e_4\right)-\sum_{k=6}^n{a_{k-1,3}e_k},[e_{n+1},e_i]=\left((3-i)a-b\right)e_i-\sum_{k=i+2}^n{a_{k-i+3,3}e_k},\\ \displaystyle (5\leq i\leq n); where\,\,\mathcal{A}_{2,1}:=\frac{(2b-a)b_{2,1}+2(a-b)a_{2,3}}{a}. \end{array} \right. 
\end{equation} \item[(2)] If $a_{1,3}=0,b:=-a,a\neq0,(n=4)$ or $b:=(3-n)a,a\neq0,(n\geq5),$ then the brackets for the algebra are \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=ae_1+\mathcal{A}_{2,1}e_2+(2-n)ae_3,[e_2,e_{n+1}]=2(3-n)ae_2,[e_3,e_{n+1}]=a_{2,3}e_2+\\ \displaystyle (3-n)ae_3+\sum_{k=5}^n{a_{k,3}e_k},[e_4,e_{n+1}]=(2-n)ae_2+(4-n)ae_4+\sum_{k=6}^n{a_{k-1,3}e_k},\\ \displaystyle [e_{i},e_{n+1}]=(i-n)ae_{i}+\sum_{k=i+2}^n{a_{k-i+3,3}e_k},[e_{n+1},e_{n+1}]=a_{n,n+1}e_n,\\ \displaystyle [e_{n+1},e_1]=-ae_1+b_{2,1}e_2+(n-2)ae_3,[e_{n+1},e_3]=a_{2,3}e_2+(n-3)ae_3-\sum_{k=5}^n{a_{k,3}e_k},\\ \displaystyle[e_{n+1},e_4]=(4-n)a(e_2-e_4)-\sum_{k=6}^n{a_{k-1,3}e_k},[e_{n+1},e_i]=(n-i)ae_i-\sum_{k=i+2}^n{a_{k-i+3,3}e_k},\\ \displaystyle(5\leq i\leq n);where\,\,\mathcal{A}_{2,1}:=(5-2n)b_{2,1}+2(n-2)a_{2,3}. \end{array} \right. \end{equation} \item[(3)] If $a_{1,3}=0,a=0,b\neq0,(n=4)$ or $a=0$ and $b\neq0,(n\geq5),$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=a_{2,1}e_2+be_3,[e_2,e_{n+1}]=2be_2, [e_3,e_{n+1}]=a_{2,3}e_2+be_3+\sum_{k=5}^n{a_{k,3}e_k},\\ \displaystyle [e_4,e_{n+1}]=b\left(e_2+e_4\right)+\sum_{k=6}^n{a_{k-1,3}e_k}, [e_{i},e_{n+1}]=be_{i}+\sum_{k=i+2}^n{a_{k-i+3,3}e_k},\\ \displaystyle [e_{n+1},e_1]=a_{2,3}e_2-be_3,[e_{n+1},e_3]=a_{2,3}e_2-be_3-\sum_{k=5}^n{a_{k,3}e_k},\\ \displaystyle [e_{n+1},e_4]=b\left(e_2-e_4\right)-\sum_{k=6}^n{a_{k-1,3}e_k},[e_{n+1},e_i]=-be_i-\sum_{k=i+2}^n{a_{k-i+3,3}e_k},(5\leq i\leq n). \end{array} \right. \end{equation} \allowdisplaybreaks \item[(4)] If $a_{1,3}=0,b=0,a\neq0,(n=4)$ or $b=0,a\neq0,(n\geq5),$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=ae_1+\left(a_{2,3}-b_{2,1}+b_{2,3}\right)e_2-ae_3, [e_3,e_{n+1}]=a_{2,3}e_2+\sum_{k=5}^n{a_{k,3}e_k},\\ \displaystyle [e_4,e_{n+1}]=-a(e_2-e_4)+\sum_{k=6}^n{a_{k-1,3}e_k}, [e_{i},e_{n+1}]=(i-3)ae_{i}+\sum_{k=i+2}^n{a_{k-i+3,3}e_k},\\ \displaystyle [e_{n+1},e_{n+1}]=a_{2,n+1}e_2,[e_{n+1},e_1]=-ae_1+b_{2,1}e_2+ae_3,[e_{n+1},e_3]=b_{2,3}e_2-\sum_{k=5}^n{a_{k,3}e_k},\\ \displaystyle [e_{n+1},e_4]=a(e_2-e_4)-\sum_{k=6}^n{a_{k-1,3}e_k},[e_{n+1},e_i]=(3-i)ae_i-\sum_{k=i+2}^n{a_{k-i+3,3}e_k},(5\leq i\leq n). \end{array} \right. \end{equation} In the remaining cases $a_{1,3}:=c.$ \item[(5)] If $b\neq-a,a\neq0,b\neq-c,c\neq0,$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=ae_1+\mathcal{A}_{2,1}e_2+(b+c-a)e_3,[e_2,e_{5}]=2(b+c)e_2, [e_3,e_{5}]=ce_1+a_{2,3}e_2+be_3,\\ \displaystyle [e_4,e_{5}]=(2c+b-a)e_2+(a+b)e_4,[e_{5},e_1]=-ae_1+\mathcal{B}_{2,1}e_2+(a-b-c)e_3,\\ \displaystyle [e_{5},e_3]=-ce_1+\mathcal{B}_{2,3}e_2-be_3,[e_{5},e_4]=(a+b)(e_2-e_4),\\ \displaystyle where\,\,\mathcal{A}_{2,1}:=\frac{(3a-2b-3c)a_{2,3}+(a-2b-3c)b_{2,3}}{2a},\,\,\mathcal{B}_{2,1}:=\frac{(a-c)a_{2,3}-(a+c)b_{2,3}}{2a},\\ \displaystyle \mathcal{B}_{2,3}:=\frac{(a-c)a_{2,3}-c\cdot b_{2,3}}{a}, \end{array} \right. 
\end{equation} $\mathscr{R}_{e_{5}}=\left[\begin{smallmatrix} a & 0 & c & 0\\ \mathcal{A}_{2,1} & 2(b+c) & a_{2,3}& 2c+b-a \\ b+c-a & 0 & b & 0\\ 0 & 0 & 0 & a+b \end{smallmatrix}\right].$ \item[(6)] If $b:=-a,a\neq0,a\neq c,c\neq0,$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=ae_1+\mathcal{A}_{2,1}e_2+(c-2a)e_3,[e_2,e_{5}]=2(c-a)e_2, [e_3,e_{5}]=ce_1+a_{2,3}e_2-ae_3,\\ \displaystyle [e_4,e_{5}]=2(c-a)e_2,[e_{5},e_{5}]=a_{4,5}e_4,[e_{5},e_1]=-ae_1+\mathcal{B}_{2,1}e_2+(2a-c)e_3,\\ \displaystyle [e_{5},e_3]=-ce_1+\mathcal{B}_{2,3}e_2+ae_3;where\,\,\mathcal{A}_{2,1}:=\frac{(5a-3c)a_{2,3}+(3a-3c)b_{2,3}}{2a},\\ \displaystyle \mathcal{B}_{2,1}:=\frac{(a-c)a_{2,3}-(a+c)b_{2,3}}{2a},\mathcal{B}_{2,3}:=\frac{(a-c)a_{2,3}-c\cdot b_{2,3}}{a}, \end{array} \right. \end{equation} $\mathscr{R}_{e_{5}}=\left[\begin{smallmatrix} a & 0 & c & 0\\ \mathcal{A}_{2,1} & 2(c-a) &a_{2,3}& 2(c-a) \\ c-2a & 0 &-a & 0\\ 0 & 0 & 0 &0 \end{smallmatrix}\right].$ \item[(7)] If $b:=-c,c\neq0,a\neq c,a\neq0,$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=ae_1+\left(a_{2,3}-b_{2,1}+b_{2,3}\right)e_2-ae_3, [e_3,e_{5}]=ce_1+a_{2,3}e_2-ce_3,\\ \displaystyle [e_4,e_{5}]=(c-a)\left(e_2-e_4\right),[e_{5},e_{5}]=a_{2,5}e_2,[e_{5},e_1]=-ae_1+b_{2,1}e_2+ae_3,\\ \displaystyle [e_{5},e_3]=-ce_1+b_{2,3}e_2+ce_3,[e_{5},e_4]=(a-c)\left(e_2-e_4\right), \end{array} \right. \end{equation} $\mathscr{R}_{e_{5}}=\left[\begin{smallmatrix} a & 0 & c & 0\\ a_{2,3}-b_{2,1}+b_{2,3} & 0 &a_{2,3}& c-a \\ -a & 0 & -c & 0\\ 0 & 0 & 0 & a-c \end{smallmatrix}\right].$ \item[(8)] If $a=0,b=0,c\neq0,$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=\left(3b_{2,1}-2a_{2,3}\right)e_2+ce_3,[e_2,e_{5}]=2ce_2, [e_3,e_{5}]=ce_1+a_{2,3}e_2,[e_4,e_{5}]=2ce_2,\\ \displaystyle [e_{5},e_{5}]=a_{4,5}e_4,[e_{5},e_1]=b_{2,1}e_2-ce_3, [e_{5},e_3]=-ce_1+\left(2b_{2,1}-a_{2,3}\right)e_2, \end{array} \right. \end{equation} $\mathscr{R}_{e_{5}}=\left[\begin{smallmatrix} 0 & 0 & c & 0\\ 3b_{2,1}-2a_{2,3}& 2c & a_{2,3}& 2c\\ c & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{smallmatrix}\right].$ \item[(9)] If $a=0,b\neq0,b\neq-c,c\neq0,$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=\mathcal{A}_{2,1}e_2+(b+c)e_3,[e_2,e_{5}]=2(b+c)e_2, [e_3,e_{5}]=ce_1+a_{2,3}e_2+be_3,\\ \displaystyle [e_4,e_{5}]=(2c+b)e_2+be_4,[e_{5},e_1]=\mathcal{B}_{2,1}e_2-(b+c)e_3,[e_{5},e_3]=-ce_1+\mathcal{B}_{2,3}e_2-be_3,\\ \displaystyle [e_{5},e_4]=b(e_2-e_4); where\,\,\mathcal{A}_{2,1}:=\frac{(2b+c)a_{2,3}-(2b+3c)b_{2,3}}{4(b+c)},\\ \displaystyle \mathcal{B}_{2,1}:=\frac{(4b+3c)a_{2,3}-c\cdot b_{2,3}}{4(b+c)},\mathcal{B}_{2,3}:=\frac{(2b+c)a_{2,3}-c\cdot b_{2,3}}{2(b+c)}, \end{array} \right. \end{equation} $\mathscr{R}_{e_{5}}=\left[\begin{smallmatrix} 0 & 0 & c & 0\\ \mathcal{A}_{2,1}& 2(b+c) & a_{2,3}& 2c+b \\ b+c & 0 & b & 0\\ 0 & 0 & 0 & b \end{smallmatrix}\right].$ \item[(10)] If $a=0,b:=-c,c\neq0,$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=a_{2,1}e_2, [e_3,e_{5}]=ce_1+a_{2,3}e_2-ce_3,[e_4,e_{5}]=c(e_2-e_4),[e_{5},e_{5}]=a_{2,5}e_2,\\ \displaystyle [e_{5},e_1]=\left(a_{2,3}+b_{2,3}-a_{2,1}\right)e_2,[e_{5},e_3]=-ce_1+b_{2,3}e_2+ce_3,[e_{5},e_4]=-c(e_2-e_4), \end{array} \right. 
\end{equation} $\mathscr{R}_{e_{5}}=\left[\begin{smallmatrix} 0 & 0 & c & 0\\ a_{2,1} & 0 & a_{2,3}& c \\ 0 & 0 &-c & 0\\ 0 & 0 & 0 & -c \end{smallmatrix}\right].$ \end{enumerate} \end{theorem} \begin{proof} \begin{enumerate}[noitemsep, topsep=0pt] \item[(1)] The right (a derivation) and left (not a derivation) multiplication operators restricted to the nilradical are given below: $$\mathscr{R}_{e_{n+1}}=\left[\begin{smallmatrix} a & 0 & 0 & 0&0&&\cdots &0& \cdots&0 & 0&0 \\ A_{2,1} & 2b & a_{2,3}& b-a &0& & \cdots &0&\cdots & 0& 0&0\\ b-a & 0 & b & 0 & 0& &\cdots &0&\cdots &0& 0&0 \\ a_{4,1} & 0 & a_{4,3} & a+b &0 & &\cdots&0&\cdots &0 & 0&0\\ a_{5,1} & 0 & a_{5,3} & a_{4,3} & 2a+b &&\cdots &0 &\cdots&0 & 0&0 \\ \boldsymbol{\cdot} & \boldsymbol{\cdot} & \boldsymbol{\cdot} & a_{5,3} & a_{4,3} &\ddots& &\vdots &&\vdots & \vdots&\vdots \\ \vdots & \vdots & \vdots &\vdots &\vdots &\ddots& \ddots&\vdots &&\vdots & \vdots&\vdots \\ a_{i,1} & 0 & a_{i,3} & a_{i-1,3} & a_{i-2,3} &\cdots&a_{4,3}& (i-3)a+b&\cdots&0 & 0&0\\ \vdots & \vdots & \vdots &\vdots &\vdots&&\vdots &\vdots &&\vdots & \vdots&\vdots \\ a_{n-1,1} & 0 & a_{n-1,3}& a_{n-2,3}& a_{n-3,3}&\cdots &a_{n-i+3,3} &a_{n-i+2,3}&\cdots&a_{4,3} &(n-4)a+b& 0\\ a_{n,1} & 0 & a_{n,3}& a_{n-1,3}& a_{n-2,3}&\cdots &a_{n-i+4,3} &a_{n-i+3,3}&\cdots&a_{5,3} &a_{4,3}& (n-3)a+b \end{smallmatrix}\right],$$ $$\mathscr{L}_{e_{n+1}}=\left[\begin{smallmatrix} -a & 0 & 0 & 0&0&&\cdots &0& \cdots&0 & 0&0 \\ b_{2,1} & 0 & a_{2,3}+2a_{4,3}& a+b &0& & \cdots &0&\cdots & 0& 0&0\\ a-b & 0 & -b & 0 & 0& &\cdots &0&\cdots &0& 0&0 \\ -a_{4,1} & 0 & -a_{4,3} & -a-b &0 & &\cdots&0&\cdots &0 & 0&0\\ -a_{5,1} & 0 & -a_{5,3} & -a_{4,3} & -2a-b &&\cdots &0 &\cdots&0 & 0&0 \\ \boldsymbol{\cdot} & \boldsymbol{\cdot} & \boldsymbol{\cdot} & -a_{5,3} & -a_{4,3} &\ddots& &\vdots &&\vdots & \vdots&\vdots \\ \vdots & \vdots & \vdots &\vdots &\vdots &\ddots& \ddots&\vdots &&\vdots & \vdots&\vdots \\ -a_{i,1} & 0 & -a_{i,3} & -a_{i-1,3} & -a_{i-2,3} &\cdots&-a_{4,3}& (3-i)a-b&\cdots&0 & 0&0\\ \vdots & \vdots & \vdots &\vdots &\vdots&&\vdots &\vdots &&\vdots & \vdots&\vdots \\ -a_{n-1,1} & 0 & -a_{n-1,3}& -a_{n-2,3}& -a_{n-3,3}&\cdots &-a_{n-i+3,3} &-a_{n-i+2,3}&\cdots&-a_{4,3} &(4-n)a-b& 0\\ -a_{n,1} & 0 & -a_{n,3}& -a_{n-1,3}& -a_{n-2,3}&\cdots &-a_{n-i+4,3} &-a_{n-i+3,3}&\cdots&-a_{5,3} &-a_{4,3}& (3-n)a-b \end{smallmatrix}\right].$$ \allowdisplaybreaks \begin{itemize} \item The transformation $e^{\prime}_k=e_k,(1\leq k\leq n),e^{\prime}_{n+1}=e_{n+1}-a_{4,3}e_1$ removes $a_{4,3}$ in $\mathscr{R}_{e_{n+1}}$ and $-a_{4,3}$ in $\mathscr{L}_{e_{n+1}}$ from the $(i,i-1)^{st}$ positions, where $(4\leq i\leq n),$ but it affects other entries as well, such as the entry in the $(2,1)^{st}$ position in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}},$ which we change to $A_{2,1}-a_{4,3}$ and $b_{2,1}-a_{4,3},$ respectively. It also changes the entry in the $(2,3)^{rd}$ position in $\mathscr{L}_{e_{n+1}}$ to $a_{2,3}.$ At the same time, it affects the coefficient in front of $e_2$ in the bracket $[e_{n+1},e_{n+1}],$ which we change back to $a_{2,n+1}$. 
\item Applying the transformation $e^{\prime}_i=e_i,(1\leq i\leq n),e^{\prime}_{n+1}=e_{n+1}+\sum_{k=3}^{n-1}a_{k+1,1}e_{k},$ we remove $a_{k+1,1}$ in $\mathscr{R}_{e_{n+1}}$ and $-a_{k+1,1}$ in $\mathscr{L}_{e_{n+1}}$ from the entries in the $(k+1,1)^{st}$ positions, where $(3\leq k\leq n-1).$ It changes the entry in the $(2,1)^{st}$ position in $\mathscr{R}_{e_{n+1}}$ to $A_{2,1}+2a_{4,1}-a_{4,3},$ the entries in the $(2,3)^{rd}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}}$ to $a_{2,3}+a_{4,1}.$ It also affects the coefficient in front of $e_2$ in $[e_{n+1},e_{n+1}],$ which we rename back by $a_{2,n+1}.$ We assign $a_{2,3}+a_{4,1}:=a_{2,3}$ and $b_{2,1}-a_{4,3}:=b_{2,1}.$ Then $A_{2,1}+2a_{4,1}-a_{4,3}:=\frac{(2b-a)b_{2,1}+2(a-b)a_{2,3}}{a}.$ \item The transformation $e^{\prime}_j=e_j,(1\leq j\leq n),e^{\prime}_{n+1}=e_{n+1}-\frac{a_{2,n+1}}{2b}e_2$ removes the coefficient $a_{2,n+1}$ in front of $e_2$ in $[e_{n+1},e_{n+1}]$ and we prove the result. \end{itemize} \item[(2)] The right (a derivation) and left (not a derivation) multiplication operators restricted to the nilradical are as follows: $$\mathscr{R}_{e_{n+1}}=\left[\begin{smallmatrix} a & 0 & 0 & 0&0&&\cdots &0& \cdots&0 & 0&0 \\ A_{2,1} & 2(3-n)a & a_{2,3}& (2-n)a &0& & \cdots &0&\cdots & 0& 0&0\\ (2-n)a & 0 & (3-n)a & 0 & 0& &\cdots &0&\cdots &0& 0&0 \\ a_{4,1} & 0 & a_{4,3} & (4-n)a &0 & &\cdots&0&\cdots &0 & 0&0\\ a_{5,1} & 0 & a_{5,3} & a_{4,3} & (5-n)a &&\cdots &0 &\cdots&0 & 0&0 \\ \boldsymbol{\cdot} & \boldsymbol{\cdot} & \boldsymbol{\cdot} & a_{5,3} & a_{4,3} &\ddots& &\vdots &&\vdots & \vdots&\vdots \\ \vdots & \vdots & \vdots &\vdots &\vdots &\ddots& \ddots&\vdots &&\vdots & \vdots&\vdots \\ a_{i,1} & 0 & a_{i,3} & a_{i-1,3} & a_{i-2,3} &\cdots&a_{4,3}& (i-n)a&\cdots&0 & 0&0\\ \vdots & \vdots & \vdots &\vdots &\vdots&&\vdots &\vdots &&\vdots & \vdots&\vdots \\ a_{n-1,1} & 0 & a_{n-1,3}& a_{n-2,3}& a_{n-3,3}&\cdots &a_{n-i+3,3} &a_{n-i+2,3}&\cdots&a_{4,3} &-a& 0\\ a_{n,1} & 0 & a_{n,3}& a_{n-1,3}& a_{n-2,3}&\cdots &a_{n-i+4,3} &a_{n-i+3,3}&\cdots&a_{5,3} &a_{4,3}& 0 \end{smallmatrix}\right],$$ $$\mathscr{L}_{e_{n+1}}=\left[\begin{smallmatrix} -a & 0 & 0 & 0&0&&\cdots &0& \cdots&0 & 0&0 \\ b_{2,1} & 0 & a_{2,3}+2a_{4,3}& (4-n)a &0& & \cdots &0&\cdots & 0& 0&0\\ (n-2)a & 0 & (n-3)a & 0 & 0& &\cdots &0&\cdots &0& 0&0 \\ -a_{4,1} & 0 & -a_{4,3} & (n-4)a &0 & &\cdots&0&\cdots &0 & 0&0\\ -a_{5,1} & 0 & -a_{5,3} & -a_{4,3} & (n-5)a &&\cdots &0 &\cdots&0 & 0&0 \\ \boldsymbol{\cdot} & \boldsymbol{\cdot} & \boldsymbol{\cdot} & -a_{5,3} & -a_{4,3} &\ddots& &\vdots &&\vdots & \vdots&\vdots \\ \vdots & \vdots & \vdots &\vdots &\vdots &\ddots& \ddots&\vdots &&\vdots & \vdots&\vdots \\ -a_{i,1} & 0 & -a_{i,3} & -a_{i-1,3} & -a_{i-2,3} &\cdots&-a_{4,3}& (n-i)a&\cdots&0 & 0&0\\ \vdots & \vdots & \vdots &\vdots &\vdots&&\vdots &\vdots &&\vdots & \vdots&\vdots \\ -a_{n-1,1} & 0 & -a_{n-1,3}& -a_{n-2,3}& -a_{n-3,3}&\cdots &-a_{n-i+3,3} &-a_{n-i+2,3}&\cdots&-a_{4,3} &a& 0\\ -a_{n,1} & 0 & -a_{n,3}& -a_{n-1,3}& -a_{n-2,3}&\cdots &-a_{n-i+4,3} &-a_{n-i+3,3}&\cdots&-a_{5,3} &-a_{4,3}& 0 \end{smallmatrix}\right].$$ \begin{itemize} \item The transformation $e^{\prime}_k=e_k,(1\leq k\leq n),e^{\prime}_{n+1}=e_{n+1}-a_{4,3}e_1$ removes $a_{4,3}$ in $\mathscr{R}_{e_{n+1}}$ and $-a_{4,3}$ in $\mathscr{L}_{e_{n+1}}$ from the $(i,i-1)^{st}$ positions, where $(4\leq i\leq n),$ but it affects other entries as well, such as the entry in the $(2,1)^{st}$ position in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}},$ which we change 
to $A_{2,1}-a_{4,3}$ and $b_{2,1}-a_{4,3},$ respectively. It also changes the entry in the $(2,3)^{rd}$ position in $\mathscr{L}_{e_{n+1}}$ to $a_{2,3}.$ At the same time, it affects the coefficient in front of $e_2$ in the bracket $[e_{n+1},e_{n+1}],$ which we change back to $a_{2,n+1}$. \item Then we apply the transformation $e^{\prime}_i=e_i,(1\leq i\leq n),e^{\prime}_{n+1}=e_{n+1}+\sum_{k=3}^{n-1}a_{k+1,1}e_{k}$ to remove $a_{k+1,1}$ in $\mathscr{R}_{e_{n+1}}$ and $-a_{k+1,1}$ in $\mathscr{L}_{e_{n+1}}$ from the entries in the $(k+1,1)^{st}$ positions, where $(3\leq k\leq n-1).$ It changes the entry in the $(2,1)^{st}$ position in $\mathscr{R}_{e_{n+1}}$ to $A_{2,1}+2a_{4,1}-a_{4,3},$ the entries in the $(2,3)^{rd}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}}$ to $a_{2,3}+a_{4,1}.$ It also affects the coefficient in front of $e_2$ in $[e_{n+1},e_{n+1}],$ which we rename back by $a_{2,n+1}.$ We assign $a_{2,3}+a_{4,1}:=a_{2,3}$ and $b_{2,1}-a_{4,3}:=b_{2,1}.$ Then $A_{2,1}+2a_{4,1}-a_{4,3}:= (5-2n)b_{2,1}+2(n-2)a_{2,3},$ which we set to be $\mathcal{A}_{2,1}.$ \item Applying the transformation $e^{\prime}_j=e_j,(1\leq j\leq n),e^{\prime}_{n+1}=e_{n+1}+\frac{a_{2,n+1}}{2(n-3)a}e_2,$ we remove the coefficient $a_{2,n+1}$ in front of $e_2$ in $[e_{n+1},e_{n+1}]$ and prove the result. \end{itemize} \item[(3)] The right (a derivation) and left (not a derivation) multiplication operators restricted to the nilradical are $$\mathscr{R}_{e_{n+1}}=\left[\begin{smallmatrix} 0 & 0 & 0 & 0&0&&\cdots &0& \cdots&0 & 0&0 \\ a_{2,1} & 2b & a_{2,3}& b &0& & \cdots &0&\cdots & 0& 0&0\\ b & 0 & b & 0 & 0& &\cdots &0&\cdots &0& 0&0 \\ a_{4,1} & 0 & a_{4,3} & b &0 & &\cdots&0&\cdots &0 & 0&0\\ a_{5,1} & 0 & a_{5,3} & a_{4,3} & b &&\cdots &0 &\cdots&0 & 0&0 \\ \boldsymbol{\cdot} & \boldsymbol{\cdot} & \boldsymbol{\cdot} & a_{5,3} & a_{4,3} &\ddots& &\vdots &&\vdots & \vdots&\vdots \\ \vdots & \vdots & \vdots &\vdots &\vdots &\ddots& \ddots&\vdots &&\vdots & \vdots&\vdots \\ a_{i,1} & 0 & a_{i,3} & a_{i-1,3} & a_{i-2,3} &\cdots&a_{4,3}& b&\cdots&0 & 0&0\\ \vdots & \vdots & \vdots &\vdots &\vdots&&\vdots &\vdots &&\vdots & \vdots&\vdots \\ a_{n-1,1} & 0 & a_{n-1,3}& a_{n-2,3}& a_{n-3,3}&\cdots &a_{n-i+3,3} &a_{n-i+2,3}&\cdots&a_{4,3} &b& 0\\ a_{n,1} & 0 & a_{n,3}& a_{n-1,3}& a_{n-2,3}&\cdots &a_{n-i+4,3} &a_{n-i+3,3}&\cdots&a_{5,3} &a_{4,3}& b \end{smallmatrix}\right],$$ $$\mathscr{L}_{e_{n+1}}=\left[\begin{smallmatrix} 0 & 0 & 0 & 0&0&&\cdots &0& \cdots&0 & 0&0 \\ a_{2,3}+a_{4,1}+a_{4,3} & 0 & a_{2,3}+2a_{4,3}& b &0& & \cdots &0&\cdots & 0& 0&0\\ -b & 0 & -b & 0 & 0& &\cdots &0&\cdots &0& 0&0 \\ -a_{4,1} & 0 & -a_{4,3} & -b &0 & &\cdots&0&\cdots &0 & 0&0\\ -a_{5,1} & 0 & -a_{5,3} & -a_{4,3} & -b &&\cdots &0 &\cdots&0 & 0&0 \\ \boldsymbol{\cdot} & \boldsymbol{\cdot} & \boldsymbol{\cdot} & -a_{5,3} & -a_{4,3} &\ddots& &\vdots &&\vdots & \vdots&\vdots \\ \vdots & \vdots & \vdots &\vdots &\vdots &\ddots& \ddots&\vdots &&\vdots & \vdots&\vdots \\ -a_{i,1} & 0 & -a_{i,3} & -a_{i-1,3} & -a_{i-2,3} &\cdots&-a_{4,3}& -b&\cdots&0 & 0&0\\ \vdots & \vdots & \vdots &\vdots &\vdots&&\vdots &\vdots &&\vdots & \vdots&\vdots \\ -a_{n-1,1} & 0 & -a_{n-1,3}& -a_{n-2,3}& -a_{n-3,3}&\cdots &-a_{n-i+3,3} &-a_{n-i+2,3}&\cdots&-a_{4,3} &-b& 0\\ -a_{n,1} & 0 & -a_{n,3}& -a_{n-1,3}& -a_{n-2,3}&\cdots &-a_{n-i+4,3} &-a_{n-i+3,3}&\cdots&-a_{5,3} &-a_{4,3}&-b \end{smallmatrix}\right].$$ \begin{itemize} \item The transformation $e^{\prime}_k=e_k,(1\leq k\leq n),e^{\prime}_{n+1}=e_{n+1}-a_{4,3}e_1$ removes $a_{4,3}$ in 
$\mathscr{R}_{e_{n+1}}$ and $-a_{4,3}$ in $\mathscr{L}_{e_{n+1}}$ from the $(i,i-1)^{st}$ positions, where $(4\leq i\leq n),$ but it affects other entries as well, such as the entry in the $(2,1)^{st}$ position in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}},$ which we change to $a_{2,1}-a_{4,3}$ and $a_{2,3}+a_{4,1},$ respectively. It also changes the entry in the $(2,3)^{rd}$ position in $\mathscr{L}_{e_{n+1}}$ to $a_{2,3}.$ At the same time, it affects the coefficient in front of $e_2$ in the bracket $[e_{n+1},e_{n+1}],$ which we change back to $a_{2,n+1}$. \item Applying the transformation $e^{\prime}_i=e_i,(1\leq i\leq n),e^{\prime}_{n+1}=e_{n+1}+\sum_{k=3}^{n-1}a_{k+1,1}e_{k},$ we remove $a_{k+1,1}$ in $\mathscr{R}_{e_{n+1}}$ and $-a_{k+1,1}$ in $\mathscr{L}_{e_{n+1}}$ from the entries in the $(k+1,1)^{st}$ positions, where $(3\leq k\leq n-1).$ It changes the entry in the $(2,1)^{st}$ position in $\mathscr{R}_{e_{n+1}}$ to $a_{2,1}+2a_{4,1}-a_{4,3},$ the entries in the $(2,3)^{rd}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}}$ to $a_{2,3}+a_{4,1}.$ It also affects the coefficient in front of $e_2$ in $[e_{n+1},e_{n+1}],$ which we rename back by $a_{2,n+1}.$ Then we assign $a_{2,1}+2a_{4,1}-a_{4,3}:=a_{2,1}$ and $a_{2,3}+a_{4,1}:=a_{2,3}.$ \item The transformation $e^{\prime}_j=e_j,(1\leq j\leq n),e^{\prime}_{n+1}=e_{n+1}-\frac{a_{2,n+1}}{2b}e_2$ removes the coefficient $a_{2,n+1}$ in front of $e_2$ in $[e_{n+1},e_{n+1}]$ and we prove the result. \end{itemize} \item[(4)] The right (a derivation) and left (not a derivation) multiplication operators restricted to the nilradical are given below: $$\mathscr{R}_{e_{n+1}}=\left[\begin{smallmatrix} a & 0 & 0 & 0&0&&\cdots &0& \cdots&0 & 0&0 \\ a_{2,3}-b_{2,1}+b_{2,3} & 0 & a_{2,3}& -a &0& & \cdots &0&\cdots & 0& 0&0\\ -a & 0 & 0 & 0 & 0& &\cdots &0&\cdots &0& 0&0 \\ a_{4,1} & 0 & a_{4,3} & a &0 & &\cdots&0&\cdots &0 & 0&0\\ a_{5,1} & 0 & a_{5,3} & a_{4,3} & 2a &&\cdots &0 &\cdots&0 & 0&0 \\ \boldsymbol{\cdot} & \boldsymbol{\cdot} & \boldsymbol{\cdot} & a_{5,3} & a_{4,3} &\ddots& &\vdots &&\vdots & \vdots&\vdots \\ \vdots & \vdots & \vdots &\vdots &\vdots &\ddots& \ddots&\vdots &&\vdots & \vdots&\vdots \\ a_{i,1} & 0 & a_{i,3} & a_{i-1,3} & a_{i-2,3} &\cdots&a_{4,3}& (i-3)a&\cdots&0 & 0&0\\ \vdots & \vdots & \vdots &\vdots &\vdots&&\vdots &\vdots &&\vdots & \vdots&\vdots \\ a_{n-1,1} & 0 & a_{n-1,3}& a_{n-2,3}& a_{n-3,3}&\cdots &a_{n-i+3,3} &a_{n-i+2,3}&\cdots&a_{4,3} &(n-4)a& 0\\ a_{n,1} & 0 & a_{n,3}& a_{n-1,3}& a_{n-2,3}&\cdots &a_{n-i+4,3} &a_{n-i+3,3}&\cdots&a_{5,3} &a_{4,3}&(n-3)a \end{smallmatrix}\right],$$ $$\mathscr{L}_{e_{n+1}}=\left[\begin{smallmatrix} -a & 0 & 0 & 0&0&&\cdots &0& \cdots&0 & 0&0 \\ b_{2,1} & 0 & b_{2,3}& a &0& & \cdots &0&\cdots & 0& 0&0\\ a & 0 & 0 & 0 & 0& &\cdots &0&\cdots &0& 0&0 \\ -a_{4,1} & 0 & -a_{4,3} & -a &0 & &\cdots&0&\cdots &0 & 0&0\\ -a_{5,1} & 0 & -a_{5,3} & -a_{4,3} & -2a &&\cdots &0 &\cdots&0 & 0&0 \\ \boldsymbol{\cdot} & \boldsymbol{\cdot} & \boldsymbol{\cdot} & -a_{5,3} & -a_{4,3} &\ddots& &\vdots &&\vdots & \vdots&\vdots \\ \vdots & \vdots & \vdots &\vdots &\vdots &\ddots& \ddots&\vdots &&\vdots & \vdots&\vdots \\ -a_{i,1} & 0 & -a_{i,3} & -a_{i-1,3} & -a_{i-2,3} &\cdots&-a_{4,3}& (3-i)a&\cdots&0 & 0&0\\ \vdots & \vdots & \vdots &\vdots &\vdots&&\vdots &\vdots &&\vdots & \vdots&\vdots \\ -a_{n-1,1} & 0 & -a_{n-1,3}& -a_{n-2,3}& -a_{n-3,3}&\cdots &-a_{n-i+3,3} &-a_{n-i+2,3}&\cdots&-a_{4,3} &(4-n)a& 0\\ -a_{n,1} & 0 & -a_{n,3}& -a_{n-1,3}& -a_{n-2,3}&\cdots &-a_{n-i+4,3} 
&-a_{n-i+3,3}&\cdots&-a_{5,3} &-a_{4,3}&(3-n)a \end{smallmatrix}\right].$$ \begin{itemize} \item The transformation $e^{\prime}_k=e_k,(1\leq k\leq n),e^{\prime}_{n+1}=e_{n+1}-a_{4,3}e_1$ removes $a_{4,3}$ in $\mathscr{R}_{e_{n+1}}$ and $-a_{4,3}$ in $\mathscr{L}_{e_{n+1}}$ from the entries in the $(i,i-1)^{st}$ positions, where $(4\leq i\leq n),$ but it affects other entries as well, such as the entry in the $(2,1)^{st}$ position in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}},$ which we change to $a_{2,3}-b_{2,1}-a_{4,3}+b_{2,3}$ and $b_{2,1}-a_{4,3},$ respectively. It also changes the entry in the $(2,3)^{rd}$ position in $\mathscr{L}_{e_{n+1}}$ to $b_{2,3}-2a_{4,3}.$ At the same time, it affects the coefficient in front of $e_2$ in the bracket $[e_{n+1},e_{n+1}],$ which we change back to $a_{2,n+1}$. \item Applying the transformation $e^{\prime}_i=e_i,(1\leq i\leq n),e^{\prime}_{n+1}=e_{n+1}+\sum_{k=3}^{n-1}a_{k+1,1}e_{k},$ we remove $a_{k+1,1}$ in $\mathscr{R}_{e_{n+1}}$ and $-a_{k+1,1}$ in $\mathscr{L}_{e_{n+1}}$ from the entries in the $(k+1,1)^{st}$ positions, where $(3\leq k\leq n-1).$ It changes the entry in the $(2,1)^{st}$ position in $\mathscr{R}_{e_{n+1}}$ to $2a_{4,1}+a_{2,3}-b_{2,1}-a_{4,3}+b_{2,3},$ the entries in the $(2,3)^{rd}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}}$ to $a_{2,3}+a_{4,1}$ and $b_{2,3}+a_{4,1}-2a_{4,3},$ respectively. It also affects the coefficient in front of $e_2$ in $[e_{n+1},e_{n+1}],$ which we rename back by $a_{2,n+1}.$ We assign $a_{2,3}+a_{4,1}:=a_{2,3},b_{2,1}-a_{4,3}:=b_{2,1}$ and $b_{2,3}+a_{4,1}-2a_{4,3}:=b_{2,3}.$ Then $2a_{4,1}+a_{2,3}-b_{2,1}-a_{4,3}+b_{2,3}:=a_{2,3}-b_{2,1}+b_{2,3}.$ \end{itemize} \item[(5)] We apply the transformation $e^{\prime}_i=e_i,(1\leq i\leq 4),e^{\prime}_5=e_5-A_{4,3}e_1+\frac{1}{2(b+c)}(A_{2,1}A_{4,3}+2a_{4,1}A_{4,3}+b_{2,1}A_{4,3}-A^2_{4,3}-a_{2,3}a_{4,1}-a^2_{4,1}-a_{4,1}b_{2,3}-a_{2,5})e_2+a_{4,1}e_3$ and then we assign $a_{2,3}+a_{4,1}:=a_{2,3}$ and $b_{2,3}-2b_{2,1}+a_{4,1}:=b_{2,3}.$ \item[(6)] The transformation we apply is $e^{\prime}_i=e_i,(1\leq i\leq 4),e^{\prime}_5=e_5-A_{4,3}e_1+\frac{1}{2(c-a)}(A_{2,1}A_{4,3}+2a_{4,1}A_{4,3}+b_{2,1}A_{4,3}-A^2_{4,3}-a_{2,3}a_{4,1}-a^2_{4,1}-a_{4,1}b_{2,3}-a_{2,5})e_2+a_{4,1}e_3.$ We assign $a_{2,3}+a_{4,1}:=a_{2,3}$ and $b_{2,3}-2b_{2,1}+a_{4,1}:=b_{2,3}.$ \item[(7)] We apply the transformation $e^{\prime}_i=e_i,(1\leq i\leq 4),e^{\prime}_5=e_5-a_{4,3}e_1+a_{4,1}e_3$ and rename the coefficient in front of $e_2$ in $[e_5,e_5]$ back by $a_{2,5}.$ Then we assign $a_{2,3}+a_{4,1}:=a_{2,3},b_{2,1}-a_{4,3}:=b_{2,1}$ and $b_{2,3}+a_{4,1}-2a_{4,3}:=b_{2,3}.$ \item[(8)] We apply $e^{\prime}_i=e_i,(1\leq i\leq 4),e^{\prime}_5=e_5-a_{4,3}e_1-\frac{1}{2c}(2a_{2,3}a_{4,3}-a^2_{4,1}+2a_{4,1}a_{4,3}+2a_{4,1}b_{2,1}+3a_{4,3}^2-4a_{4,3}b_{2,1}+a_{2,5})e_2+a_{4,1}e_3.$ We assign $a_{2,3}+a_{4,1}:=a_{2,3}$ and $b_{2,1}-a_{4,3}:=b_{2,1}.$ \item[(9)] The transformation is $e^{\prime}_i=e_i,(1\leq i\leq 4),$ $e^{\prime}_5=e_5-A_{4,3}e_1+\frac{1}{4(b+c)}(2a_{2,1}A_{4,3}-2a_{2,3}a_{4,1}+a_{2,3}A_{4,3}-2a^2_{4,1}+6a_{4,1}A_{4,3}-2a_{4,1}b_{2,3}-2A^2_{4,3}+b_{2,3}A_{4,3}-2a_{2,5})e_2+a_{4,1}e_3.$ Then we assign $a_{2,3}+a_{4,1}:=a_{2,3}$ and $b_{2,3}-2a_{2,1}-3a_{4,1}:=b_{2,3}.$ \item[(10)] We apply the transformation $e^{\prime}_i=e_i,(1\leq i\leq 4),e^{\prime}_5=e_5-a_{4,3}e_1+a_{4,1}e_3$ and rename the coefficient in front of $e_2$ in $[e_5,e_5]$ back by $a_{2,5}.$ We assign $a_{2,3}+a_{4,1}:=a_{2,3},a_{2,1}+2a_{4,1}-a_{4,3}:=a_{2,1}$ and 
$b_{2,3}+a_{4,1}-2a_{4,3}:=b_{2,3}.$ \end{enumerate} \end{proof} \allowdisplaybreaks \begin{theorem}\label{RL4(Change of Basis)} There are eight solvable indecomposable right Leibniz algebras up to isomorphism with a codimension one nilradical $\mathcal{L}^4,(n\geq4),$ which are given below: \begin{equation} \begin{array}{l} \displaystyle \nonumber (i)\,\,\, \mathfrak{g}_{n+1,1}: [e_1,e_{n+1}]=e_1+(a-1)e_3,[e_2,e_{n+1}]=2ae_2,[e_3,e_{n+1}]=ae_3,\\ \displaystyle [e_4,e_{n+1}]=(a-1)e_2+(a+1)e_4,[e_{i},e_{n+1}]=\left(a+i-3\right)e_{i},[e_{n+1},e_1]=-e_1-(a-1)e_3,\\ \displaystyle[e_{n+1},e_3]=-ae_3,[e_{n+1},e_4]=(a+1)\left(e_2-e_4\right),[e_{n+1},e_i]=\left(3-i-a\right)e_i,(5\leq i\leq n),\\ \displaystyle \nonumber(ii)\,\,\, \mathfrak{g}_{n+1,2}:[e_1,e_{n+1}]=e_1+(2-n)e_3,[e_2,e_{n+1}]=2(3-n)e_2, [e_3,e_{n+1}]=(3-n)e_3,\\ \displaystyle [e_4,e_{n+1}]=(2-n)e_2+(4-n)e_4,[e_{i},e_{n+1}]=(i-n)e_{i},[e_{n+1},e_{n+1}]=e_n,\\ \displaystyle [e_{n+1},e_1]=-e_1+(n-2)e_3,[e_{n+1},e_3]=(n-3)e_3,[e_{n+1},e_4]=(4-n)\left(e_2-e_4\right),\\ \displaystyle [e_{n+1},e_i]=(n-i)e_i,(5\leq i\leq n),\\ \displaystyle (iii)\,\,\,\mathfrak{g}_{n+1,3}: [e_1,e_{n+1}]=e_3,[e_2,e_{n+1}]=2e_2,[e_4,e_{n+1}]=e_2,[e_{i},e_{n+1}]=e_{i}+\epsilon e_{i+2}+\\ \displaystyle \sum_{k=i+3}^n{b_{k-i-2}e_k},[e_{n+1},e_1]=-e_3,[e_{n+1},e_4]=e_2,[e_{n+1},e_i]=-e_i-\epsilon e_{i+2}-\sum_{k=i+3}^n{b_{k-i-2}e_k},\\ \displaystyle (\epsilon=0,1,3\leq i\leq n),\\ \displaystyle \nonumber(iv)\,\,\,\mathfrak{g}_{n+1,4}: [e_1,e_{n+1}]=e_1-e_3, [e_3,e_{n+1}]=de_2,[e_4,e_{n+1}]=e_4-e_2,[e_{i},e_{n+1}]=(i-3)e_{i},\\ \displaystyle [e_{n+1},e_{n+1}]=\epsilon e_2,[e_{n+1},e_1]=-e_1+\left(d+f\right)e_2+e_3,[e_{n+1},e_3]=fe_2, [e_{n+1},e_4]=e_2-e_4,\\ \displaystyle [e_{n+1},e_i]=(3-i)e_i,(5\leq i\leq n;\epsilon=0,1; if\,\,\epsilon=0,\,\,then\,\,d^2+f^2\neq0),\\ \displaystyle \nonumber(v)\,\,\,\mathfrak{g}_{5,5}: [e_1,e_{5}]=ae_1+(b-a+1)e_3,[e_2,e_{5}]=2(b+1)e_2, [e_3,e_{5}]=e_1+be_3,\\ \displaystyle [e_4,e_{5}]=(b-a+2)e_2+(a+b)e_4,[e_{5},e_1]=-ae_1+(a-b-1)e_3, [e_{5},e_3]=-e_1-be_3,\\ \displaystyle [e_{5},e_4]=(a+b)(e_2-e_4),(if\,\,b=-1,then\,\,a\neq1),\\ \displaystyle \nonumber(vi)\,\,\mathfrak{g}_{5,6}: [e_1,e_{5}]=ae_1+(1-2a)e_3,[e_2,e_{5}]=2(1-a)e_2, [e_3,e_{5}]=e_1-ae_3,\\ \displaystyle [e_4,e_{5}]=2(1-a)e_2,[e_{5},e_{5}]=e_4,[e_{5},e_1]=-ae_1+(2a-1)e_3,[e_{5},e_3]=ae_3-e_1, (a\neq1),\\ \displaystyle \nonumber(vii)\,\,\mathfrak{g}_{5,7}:[e_1,e_{5}]=a(e_1-e_3), [e_3,e_{5}]=e_1+fe_2-e_3,[e_4,e_{5}]=(1-a)\left(e_2-e_4\right),\\ \displaystyle [e_{5},e_{5}]=\epsilon e_2,[e_{5},e_1]=-ae_1+\left(d+f\right)e_2+ae_3,[e_{5},e_3]=-e_1+de_2+e_3,\\ \displaystyle [e_{5},e_4]=(a-1)\left(e_2-e_4\right),(\epsilon=0,1;if\,\,\epsilon=0,then\,\,d^2+f^2\neq0;a\neq1),\\ \displaystyle \nonumber(viii)\,\mathfrak{g}_{5,8}: [e_1,e_{5}]=ce_2, [e_3,e_{5}]=e_1-e_3,[e_4,e_{5}]=e_2-e_4,[e_{5},e_{5}]=\epsilon e_2,\\ \displaystyle [e_{5},e_1]=\left(c+d\right)e_2,[e_{5},e_3]=-e_1+\left(d+2c\right)e_2+e_3,[e_{5},e_4]=e_4-e_2,(c\neq0,\epsilon=0,1). \end{array} \end{equation} \end{theorem} \vskip 20pt \begin{proof} One applies the change of basis transformations keeping the nilradical $\mathcal{L}^4$ given in $(\ref{L4})$ unchanged.
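All of the reductions below are of the same mechanical kind: a change of basis fixing the nilradical acts on the structure constants, and after each step the right Leibniz identity $[[x,y],z]=[[x,z],y]+[x,[y,z]]$ can be re-checked directly. Purely as an illustrative aside (this is not part of the argument), the following minimal sketch shows how such bookkeeping could be automated; the array convention, the helper names and the toy two-dimensional algebra used in it are hypothetical and are not taken from this paper.
\begin{verbatim}
import numpy as np

def bracket(C, x, y):
    # [x,y]^k = sum_{i,j} x_i * y_j * C[i,j,k]
    return np.einsum('i,j,ijk->k', x, y, C)

def is_right_leibniz(C, tol=1e-10):
    # Check [[x,y],z] = [[x,z],y] + [x,[y,z]] on all basis triples.
    n = C.shape[0]
    E = np.eye(n)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                lhs = bracket(C, bracket(C, E[i], E[j]), E[k])
                rhs = (bracket(C, bracket(C, E[i], E[k]), E[j])
                       + bracket(C, E[i], bracket(C, E[j], E[k])))
                if np.max(np.abs(lhs - rhs)) > tol:
                    return False
    return True

def change_of_basis(C, P):
    # Columns of P are the new basis vectors written in the old basis:
    # C'[i,j,k] = sum_{a,b,c} P[a,i] * P[b,j] * C[a,b,c] * inv(P)[k,c]
    Pinv = np.linalg.inv(P)
    return np.einsum('ai,bj,abc,kc->ijk', P, P, C, Pinv)

# Toy (hypothetical) 2-dimensional example: [e_1,e_1] = e_2, all other brackets zero.
C = np.zeros((2, 2, 2))
C[0, 0, 1] = 1.0
P = np.array([[2.0, 0.0], [0.0, 4.0]])  # e'_1 = 2 e_1, e'_2 = 4 e_2
print(is_right_leibniz(C), is_right_leibniz(change_of_basis(C, P)))  # True True
\end{verbatim}
Here $C[i,j,k]$ stores the coefficient of $e_{k+1}$ in $[e_{i+1},e_{j+1}]$, so each transformation $e^{\prime}_i$ used in the proof below corresponds to a choice of the matrix $P$.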
\begin{enumerate}[noitemsep, topsep=1pt] \allowdisplaybreaks \item[(1)] We have the right (a derivation) and the left (not a derivation) multiplication operators restricted to the nilradical given below: $$\mathscr{R}_{e_{n+1}}=\left[\begin{smallmatrix} a & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ \mathcal{A}_{2,1} & 2b & a_{2,3}& b-a &0&0 & \cdots &&0 & 0& 0\\ b-a & 0 & b & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & a+b &0 &0 &\cdots&&0 &0 & 0\\ 0 & 0 & a_{5,3} & 0 & 2a+b &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & a_{5,3} & 0 &3a+b&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & a_{n-2,3}& a_{n-3,3}& \cdots&\cdots &a_{5,3}&0&(n-5)a+b &0& 0\\ 0 & 0 & a_{n-1,3}& a_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&a_{5,3}&0 &(n-4)a+b& 0\\ 0 & 0 & a_{n,3}& a_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&a_{5,3} &0& (n-3)a+b \end{smallmatrix}\right],$$ $$\mathscr{L}_{e_{n+1}}=\left[\begin{smallmatrix} -a & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ b_{2,1} & 0 & a_{2,3}& a+b &0&0 & \cdots &&0 & 0& 0\\ a-b & 0 & -b & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & -a-b &0 &0 &\cdots&&0 &0 & 0\\ 0 & 0 & -a_{5,3} & 0 & -2a-b &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & -a_{5,3} & 0 &-3a-b&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & -a_{n-2,3}& -a_{n-3,3}& \cdots&\cdots &-a_{5,3}&0&(5-n)a-b &0& 0\\ 0 & 0 & -a_{n-1,3}& -a_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&-a_{5,3}&0 &(4-n)a-b& 0\\ 0 & 0 & -a_{n,3}& -a_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&-a_{5,3} &0& (3-n)a-b \end{smallmatrix}\right].$$ \begin{itemize}[noitemsep, topsep=0pt] \allowdisplaybreaks \item We apply the transformation $e^{\prime}_1=e_1,e^{\prime}_2=e_2,e^{\prime}_i=e_i-\frac{a_{k-i+3,3}}{(k-i)a}e_k,(3\leq i\leq n-2, i+2\leq k\leq n,n\geq5),e^{\prime}_{j}=e_{j},(n-1\leq j\leq n+1),$ where $k$ is fixed, renaming all the affected entries back. This transformation removes $a_{5,3},a_{6,3},...,a_{n,3}$ in $\mathscr{R}_{e_{n+1}}$ and $-a_{5,3},-a_{6,3},...,-a_{n,3}$ in $\mathscr{L}_{e_{n+1}}.$ Besides it introduces the entries in the $(5,1)^{st},(6,1)^{st},...,(n,1)^{st}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}},$ which we set to be $a_{5,1},a_{6,1},...,a_{n,1}$ and $-a_{5,1},-a_{6,1},...,-a_{n,1},$ respectively. \noindent $(I)$ Suppose $b\neq\frac{a}{2}.$ \item The transformation $e^{\prime}_1=e_1+\frac{1}{a-2b}\left(\mathcal{A}_{2,1}+\frac{(b-a)a_{2,3}}{b}\right)e_2,e^{\prime}_2=e_2, e^{\prime}_3=e_3-\frac{a_{2,3}}{b}e_2,e^{\prime}_{i}=e_{i}, (4\leq i\leq n,n\geq4),e^{\prime}_{n+1}=e_{n+1}-a_{5,1}e_2+\sum_{k=4}^{n-1}a_{k+1,1}e_{k}$ removes $\mathcal{A}_{2,1}$ and $b_{2,1}$ from the $(2,1)^{st}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}},$ respectively. It also removes $a_{2,3}$ from the $(2,3)^{rd}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}}$ as well as $a_{k+1,1}$ in $\mathscr{R}_{e_{n+1}}$ and $-a_{k+1,1}$ in $\mathscr{L}_{e_{n+1}}$ from the entries in the $(k+1,1)^{st}$ positions, where $(4\leq k\leq n-1)$. 
\begin{remark}\label{Remark{a_{5,1}}} If $n=4,$ then $a_{5,1}=0$ and the same is in case $(II).$ \end{remark} \item Then we scale $a$ to unity applying the transformation $e^{\prime}_i=e_i,(1\leq i\leq n,n\geq4),e^{\prime}_{n+1}=\frac{e_{n+1}}{a}.$ Renaming $\frac{b}{a}$ by $b,$ we obtain a continuous family of Leibniz algebras: \begin{equation} \left\{ \begin{array}{l} \displaystyle [e_1,e_{n+1}]=e_1+(b-1)e_3,[e_2,e_{n+1}]=2be_2,[e_3,e_{n+1}]=be_3,\\ \displaystyle [e_4,e_{n+1}]=(b-1)e_2+(b+1)e_4,[e_{i},e_{n+1}]=\left(b+i-3\right)e_{i},\\ \displaystyle [e_{n+1},e_1]=-e_1-(b-1)e_3,[e_{n+1},e_3]=-be_3,[e_{n+1},e_4]=(b+1)e_2-\\ \displaystyle (b+1)e_4,[e_{n+1},e_i]=\left(3-i-b\right)e_i,(5\leq i\leq n),\\ \displaystyle(b\neq0,b\neq\frac{1}{2},b\neq3-n). \end{array} \right. \label{g_{n+1,1}} \end{equation} \noindent $(II)$ Suppose $b:=\frac{a}{2}.$ We have that $\mathcal{A}_{2,1}=a_{2,3}.$ \item The transformation $e^{\prime}_1=e_1-\frac{b_{2,1}+a_{2,3}}{a}e_2,e^{\prime}_2=e_2, e^{\prime}_3=e_3-\frac{2a_{2,3}}{a}e_2,e^{\prime}_{i}=e_{i}, (4\leq i\leq n,n\geq4),e^{\prime}_{n+1}=e_{n+1}-a_{5,1}e_2+\sum_{k=4}^{n-1}a_{k+1,1}e_{k}$ removes $a_{2,3}$ from the $(2,1)^{st},(2,3)^{rd}$ positions in $\mathscr{R}_{e_{n+1}}$ and from the $(2,3)^{rd}$ position in $\mathscr{L}_{e_{n+1}}.$ It also removes $b_{2,1}$ from the $(2,1)^{st}$ position in $\mathscr{L}_{e_{n+1}}$ as well as $a_{k+1,1}$ in $\mathscr{R}_{e_{n+1}}$ and $-a_{k+1,1}$ in $\mathscr{L}_{e_{n+1}}$ from the entries in the $(k+1,1)^{st}$ positions, where $(4\leq k\leq n-1)$. \item To scale $a$ to unity, we apply the transformation $e^{\prime}_i=e_i,(1\leq i\leq n,n\geq4),e^{\prime}_{n+1}=\frac{e_{n+1}}{a}$ and obtain a limiting case of $(\ref{g_{n+1,1}})$ with $b=\frac{1}{2}$ given below: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=e_1-\frac{e_3}{2},[e_2,e_{n+1}]=e_2,[e_3,e_{n+1}]=\frac{e_3}{2},[e_4,e_{n+1}]=-\frac{e_2}{2}+\frac{3e_4}{2},\\ \displaystyle [e_{i},e_{n+1}]=\left(i-\frac{5}{2}\right)e_{i},[e_{n+1},e_1]=-e_1+\frac{e_3}{2},[e_{n+1},e_3]=-\frac{e_3}{2},\\ \displaystyle [e_{n+1},e_4]=\frac{3}{2}\left(e_2-e_4\right),[e_{n+1},e_i]=\left(\frac{5}{2}-i\right)e_i,(5\leq i\leq n). \end{array} \right. 
\end{equation} \end{itemize} \item[(2)] The right (a derivation) and the left (not a derivation) multiplication operators restricted to the nilradical are as follows: $$\mathscr{R}_{e_{n+1}}=\left[\begin{smallmatrix} a & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ \mathcal{A}_{2,1} & 2(3-n)a & a_{2,3}& (2-n)a &0&0 & \cdots &&0 & 0& 0\\ (2-n)a & 0 & (3-n)a & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & (4-n)a &0 &0 &\cdots&&0 &0 & 0\\ 0 & 0 & a_{5,3} & 0 & (5-n)a &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & a_{5,3} & 0 &(6-n)a&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & a_{n-2,3}& a_{n-3,3}& \cdots&\cdots &a_{5,3}&0&-2a &0& 0\\ 0 & 0 & a_{n-1,3}& a_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&a_{5,3}&0 &-a& 0\\ 0 & 0 & a_{n,3}& a_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&a_{5,3} &0&0 \end{smallmatrix}\right],$$ $$\mathscr{L}_{e_{n+1}}=\left[\begin{smallmatrix} -a & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ b_{2,1} & 0 & a_{2,3}& (4-n)a &0&0 & \cdots &&0 & 0& 0\\ (n-2)a & 0 & (n-3)a & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & (n-4)a &0 &0 &\cdots&&0 &0 & 0\\ 0 & 0 & -a_{5,3} & 0 & (n-5)a &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & -a_{5,3} & 0 &(n-6)a&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & -a_{n-2,3}& -a_{n-3,3}& \cdots&\cdots &-a_{5,3}&0&2a &0& 0\\ 0 & 0 & -a_{n-1,3}& -a_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&-a_{5,3}&0 &a& 0\\ 0 & 0 & -a_{n,3}& -a_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&-a_{5,3} &0&0 \end{smallmatrix}\right].$$ \begin{itemize}[noitemsep, topsep=0pt] \allowdisplaybreaks \item We apply the transformation $e^{\prime}_1=e_1,e^{\prime}_2=e_2,e^{\prime}_i=e_i-\frac{a_{k-i+3,3}}{(k-i)a}e_k,(3\leq i\leq n-2, i+2\leq k\leq n,n\geq5),e^{\prime}_{j}=e_{j},(n-1\leq j\leq n+1),$ where $k$ is fixed, renaming all the affected entries back. This transformation removes $a_{5,3},a_{6,3},...,a_{n,3}$ in $\mathscr{R}_{e_{n+1}}$ and $-a_{5,3},-a_{6,3},...,-a_{n,3}$ in $\mathscr{L}_{e_{n+1}}.$ Besides, it introduces the entries in the $(5,1)^{st},(6,1)^{st},...,(n,1)^{st}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}},$ which we set to be $a_{5,1},a_{6,1},...,a_{n,1}$ and $-a_{5,1},-a_{6,1},...,-a_{n,1},$ respectively. \item The transformation $e^{\prime}_1=e_1+\frac{1}{(2n-5)a}\left(\mathcal{A}_{2,1}+\frac{(n-2)a_{2,3}}{n-3}\right)e_2,e^{\prime}_2=e_2, e^{\prime}_3=e_3+\frac{a_{2,3}}{(n-3)a}e_2,e^{\prime}_{i}=e_{i}, (4\leq i\leq n,n\geq4),e^{\prime}_{n+1}=e_{n+1}-a_{5,1}e_2+\sum_{k=4}^{n-1}a_{k+1,1}e_{k}$ removes $\mathcal{A}_{2,1}$ and $b_{2,1}$ from the $(2,1)^{st}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}},$ respectively. It also removes $a_{2,3}$ from the $(2,3)^{rd}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}};$ $a_{k+1,1}$ and $-a_{k+1,1}$ from the entries in the $(k+1,1)^{st}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}}$, respectively, where $(4\leq k\leq n-1)$. (See Remark \ref{Remark{a_{5,1}}}).
\item To scale $a$ to unity, we apply the transformation $e^{\prime}_i=e_i,(1\leq i\leq n),e^{\prime}_{n+1}=\frac{e_{n+1}}{a}$ renaming the coefficient $\frac{a_{n,n+1}}{a^2}$ in front of $e_n$ in $[e_{n+1},e_{n+1}]$ back to $a_{n,n+1}.$ We obtain a Leibniz algebra: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=e_1+(2-n)e_3,[e_2,e_{n+1}]=2(3-n)e_2, [e_3,e_{n+1}]=(3-n)e_3,\\ \displaystyle [e_4,e_{n+1}]=(2-n)e_2+(4-n)e_4,[e_{i},e_{n+1}]=(i-n)e_{i},[e_{n+1},e_{n+1}]=a_{n,n+1}e_n,\\ \displaystyle [e_{n+1},e_1]=-e_1+(n-2)e_3,[e_{n+1},e_3]=(n-3)e_3,[e_{n+1},e_4]=(4-n)\left(e_2-e_4\right),\\ \displaystyle [e_{n+1},e_i]=(n-i)e_i,(5\leq i\leq n). \end{array} \right. \end{equation} \end{itemize} If $a_{n,n+1}=0,$ then we have a limiting case of (\ref{g_{n+1,1}}) with $b=3-n$. If $a_{n,n+1}\neq0,$ then $a_{n,n+1}=re^{i\phi}$ and we apply the transformation $e^{\prime}_j=\left(re^{i\phi}\right)^{\frac{j}{n-2}}e_j,(1\leq j\leq 2),e^{\prime}_k=\left(re^{i\phi}\right)^{\frac{k-2}{n-2}}e_k,(3\leq k\leq n), e^{\prime}_{n+1}=e_{n+1}$ to scale $a_{n,n+1}$ to $1$. We have the algebra $\mathfrak{g}_{n+1,2}$ given below: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=e_1+(2-n)e_3,[e_2,e_{n+1}]=2(3-n)e_2, [e_3,e_{n+1}]=(3-n)e_3,\\ \displaystyle [e_4,e_{n+1}]=(2-n)e_2+(4-n)e_4,[e_{i},e_{n+1}]=(i-n)e_{i},[e_{n+1},e_{n+1}]=e_n,\\ \displaystyle [e_{n+1},e_1]=-e_1+(n-2)e_3,[e_{n+1},e_3]=(n-3)e_3,[e_{n+1},e_4]=(4-n)\left(e_2-e_4\right),\\ \displaystyle [e_{n+1},e_i]=(n-i)e_i,(5\leq i\leq n). \end{array} \label{g_{n+1,2}} \right. \end{equation} \item[(3)] The right (a derivation) and the left (not a derivation) multiplication operators restricted to the nilradical are as follows: $$\mathscr{R}_{e_{n+1}}=\left[\begin{smallmatrix} 0 & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ a_{2,1} & 2b & a_{2,3}& b &0&0 & \cdots &&0 & 0& 0\\ b & 0 & b & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & b &0 &0 &\cdots&&0 &0 & 0\\ 0 & 0 & a_{5,3} & 0 & b &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & a_{5,3} & 0 &b&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & a_{n-2,3}& a_{n-3,3}& \cdots&\cdots &a_{5,3}&0&b &0& 0\\ 0 & 0 & a_{n-1,3}& a_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&a_{5,3}&0 &b& 0\\ 0 & 0 & a_{n,3}& a_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&a_{5,3} &0& b \end{smallmatrix}\right],$$ $$\mathscr{L}_{e_{n+1}}=\left[\begin{smallmatrix} 0 & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ a_{2,3} & 0 & a_{2,3}& b &0&0 & \cdots &&0 & 0& 0\\ -b & 0 & -b & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & -b &0 &0 &\cdots&&0 &0 & 0\\ 0 & 0 & -a_{5,3} & 0 & -b &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & -a_{5,3} & 0 &-b&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & -a_{n-2,3}& -a_{n-3,3}& \cdots&\cdots &-a_{5,3}&0&-b &0& 0\\ 0 & 0 & -a_{n-1,3}& -a_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&-a_{5,3}&0 &-b& 0\\ 0 & 0 & -a_{n,3}& -a_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&-a_{5,3} &0&-b \end{smallmatrix}\right].$$ \begin{itemize}[noitemsep, topsep=0pt] \allowdisplaybreaks \item Applying the transformation $e^{\prime}_1=e_1-\frac{a_{2,1}+a_{2,3}}{2b}e_2,e^{\prime}_2=e_2,
e^{\prime}_3=e_3-\frac{a_{2,3}}{b}e_2,e^{\prime}_{i}=e_{i}, (4\leq i\leq n+1),$ we remove $a_{2,1}$ from the $(2,1)^{st}$ position in $\mathscr{R}_{e_{n+1}}$ and $a_{2,3}$ from the $(2,1)^{st},$ the $(2,3)^{rd}$ positions in $\mathscr{L}_{e_{n+1}}$ and from the $(2,3)^{rd}$ position in $\mathscr{R}_{e_{n+1}}$ keeping other entries unchanged. \item To scale $b$ to unity, we apply the transformation $e^{\prime}_i=e_i,(1\leq i\leq n),e^{\prime}_{n+1}=\frac{e_{n+1}}{b}.$ Then we rename $\frac{a_{5,3}}{b},\frac{a_{6,3}}{b},...,\frac{a_{n,3}}{b}$ by $a_{5,3},a_{6,3},...,a_{n,3},$ respectively. We obtain a Leibniz algebra: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=e_3,[e_2,e_{n+1}]=2e_2,[e_4,e_{n+1}]=e_2,[e_{i},e_{n+1}]=e_{i}+\sum_{k=i+2}^n{a_{k-i+3,3}e_k},\\ \displaystyle [e_{n+1},e_1]=-e_3,[e_{n+1},e_4]=e_2,[e_{n+1},e_i]=-e_i-\sum_{k=i+2}^n{a_{k-i+3,3}e_k},(3\leq i\leq n). \end{array} \right. \end{equation} If $a_{5,3}\neq0,(n\geq5),$ then $a_{5,3}=re^{i\phi}$ and applying the transformation $e^{\prime}_j=\left(re^{i\phi}\right)^{\frac{j}{2}}e_j,(1\leq j\leq 2),e^{\prime}_k=\left(re^{i\phi}\right)^{\frac{k-2}{2}}e_k, (3\leq k\leq n),e^{\prime}_{n+1}=e_{n+1},$ we scale $a_{5,3}$ to $1.$ We also rename all the affected entries back and then we rename $a_{6,3},...,a_{n,3}$ by $b_1,...,b_{n-5},$ respectively. We combine with the case when $a_{5,3}=0$ and obtain a Leibniz algebra $\mathfrak{g}_{n+1,3}$ given below: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=e_3,[e_2,e_{n+1}]=2e_2,[e_4,e_{n+1}]=e_2,[e_{i},e_{n+1}]=e_{i}+\epsilon e_{i+2}+\sum_{k=i+3}^n{b_{k-i-2}e_k},\\ \displaystyle [e_{n+1},e_1]=-e_3,[e_{n+1},e_4]=e_2,[e_{n+1},e_i]=-e_i-\epsilon e_{i+2}-\sum_{k=i+3}^n{b_{k-i-2}e_k},\\ \displaystyle (\epsilon=0,1,3\leq i\leq n). \end{array} \right.
\end{equation} \begin{remark} If $n=4,$ then $\epsilon=0.$ \end{remark} \end{itemize} \item[(4)] We have the right (a derivation) and the left (not a derivation) multiplication operators restricted to the nilradical are as follows: $$\mathscr{R}_{e_{n+1}}=\left[\begin{smallmatrix} a & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ a_{2,3}-b_{2,1}+b_{2,3} & 0 & a_{2,3}& -a &0&0 & \cdots &&0 & 0& 0\\ -a & 0 & 0 & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & a &0 &0 &\cdots&&0 &0 & 0\\ 0 & 0 & a_{5,3} & 0 & 2a &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & a_{5,3} & 0 &3a&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & a_{n-2,3}& a_{n-3,3}& \cdots&\cdots &a_{5,3}&0&(n-5)a &0& 0\\ 0 & 0 & a_{n-1,3}& a_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&a_{5,3}&0 &(n-4)a& 0\\ 0 & 0 & a_{n,3}& a_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&a_{5,3} &0& (n-3)a \end{smallmatrix}\right],$$ $$\mathscr{L}_{e_{n+1}}=\left[\begin{smallmatrix} -a & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ b_{2,1}& 0 & b_{2,3}& a &0&0 & \cdots &&0 & 0& 0\\ a & 0 & 0 & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & -a &0 &0 &\cdots&&0 &0 & 0\\ 0 & 0 & -a_{5,3} & 0 & -2a &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & -a_{5,3} & 0 &-3a&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & -a_{n-2,3}& -a_{n-3,3}& \cdots&\cdots &-a_{5,3}&0&(5-n)a &0& 0\\ 0 & 0 & -a_{n-1,3}& -a_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&-a_{5,3}&0 &(4-n)a& 0\\ 0 & 0 & -a_{n,3}& -a_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&-a_{5,3} &0& (3-n)a \end{smallmatrix}\right].$$ \begin{itemize}[noitemsep, topsep=0pt] \allowdisplaybreaks \item We apply the transformation $e^{\prime}_1=e_1,e^{\prime}_2=e_2,e^{\prime}_i=e_i-\frac{a_{k-i+3,3}}{(k-i)a}e_k,(3\leq i\leq n-2, i+2\leq k\leq n,n\geq5),e^{\prime}_{j}=e_{j},(n-1\leq j\leq n+1),$ where $k$ is fixed, renaming all the affected entries back. This transformation removes $a_{5,3},a_{6,3},...,a_{n,3}$ in $\mathscr{R}_{e_{n+1}}$ and $-a_{5,3},-a_{6,3},...,-a_{n,3}$ in $\mathscr{L}_{e_{n+1}}.$ Besides it introduces the entries in the $(5,1)^{st},(6,1)^{st},...,(n,1)^{st}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}},$ which we set to be $a_{5,1},a_{6,1},...,a_{n,1}$ and $-a_{5,1},-a_{6,1},...,-a_{n,1},$ respectively. \item The transformation $e^{\prime}_1=e_1+\frac{a_{2,3}-b_{2,1}+b_{2,3}}{a}e_2,e^{\prime}_{i}=e_{i}, (2\leq i\leq n,n\geq4),e^{\prime}_{n+1}=e_{n+1}+\sum_{k=4}^{n-1}a_{k+1,1}e_{k}$ removes $a_{2,3}-b_{2,1}+b_{2,3}$ from the $(2,1)^{st}$ position in $\mathscr{R}_{e_{n+1}}$. It changes the entry in the $(2,1)^{st}$ position in $\mathscr{L}_{e_{n+1}}$ to $a_{2,3}+b_{2,3}.$ It also removes $a_{k+1,1}$ and $-a_{k+1,1}$ from the entries in the $(k+1,1)^{st}$ positions, where $(4\leq k\leq n-1)$ in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}},$ respectively. 
\item We assign $a_{2,3}:=d$ and $b_{2,3}:=f$ and then we scale $a$ to unity applying the transformation $e^{\prime}_i=e_i,(1\leq i\leq n,n\geq4),e^{\prime}_{n+1}=\frac{e_{n+1}}{a}.$ Renaming $\frac{d}{a},\frac{f}{a}$ and $\frac{a_{2,n+1}}{a^2}$ by $d,f$ and $a_{2,n+1},$ respectively, we obtain a right and left Leibniz algebra: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=e_1-e_3, [e_3,e_{n+1}]=de_2,[e_4,e_{n+1}]=e_4-e_2,[e_{i},e_{n+1}]=(i-3)e_{i},\\ \displaystyle [e_{n+1},e_{n+1}]=a_{2,n+1}e_2,[e_{n+1},e_1]=-e_1+\left(d+f\right)e_2+e_3,[e_{n+1},e_3]=fe_2,\\ \displaystyle [e_{n+1},e_4]=e_2-e_4,[e_{n+1},e_i]=(3-i)e_i,(5\leq i\leq n), \end{array} \right. \end{equation} which is a limiting case of (\ref{g_{n+1,1}}) with $b=0,$ when $d=f=a_{2,n+1}=0$. Altogether (\ref{g_{n+1,1}}) and all its limiting cases after replacing $b$ with $a$ give us a Leibniz algebra $\mathfrak{g}_{n+1,1}$ given below: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=e_1+(a-1)e_3,[e_2,e_{n+1}]=2ae_2,[e_3,e_{n+1}]=ae_3,\\ \displaystyle [e_4,e_{n+1}]=(a-1)e_2+(a+1)e_4,[e_{i},e_{n+1}]=\left(a+i-3\right)e_{i},[e_{n+1},e_1]=-e_1-\\ \displaystyle (a-1)e_3,[e_{n+1},e_3]=-ae_3,[e_{n+1},e_4]=(a+1)\left(e_2-e_4\right),[e_{n+1},e_i]=\left(3-i-a\right)e_i,\\ \displaystyle(5\leq i\leq n). \end{array} \right. \end{equation} It remains to consider a continuous family of Leibniz algebras given below and scale any nonzero entries as much as possible. \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=e_1-e_3, [e_3,e_{n+1}]=de_2,[e_4,e_{n+1}]=e_4-e_2,[e_{i},e_{n+1}]=(i-3)e_{i},\\ \displaystyle [e_{n+1},e_{n+1}]=a_{2,n+1}e_2,[e_{n+1},e_1]=-e_1+\left(d+f\right)e_2+e_3,[e_{n+1},e_3]=fe_2,\\ \displaystyle [e_{n+1},e_4]=e_2-e_4,[e_{n+1},e_i]=(3-i)e_i,(a_{2,n+1}^2+d^2+f^2\neq0,5\leq i\leq n), \end{array} \right. \end{equation} If $a_{2,n+1}\neq0,$ then $a_{2,n+1}=re^{i\phi}$ and applying the transformation $e^{\prime}_j=\left(re^{i\phi}\right)^{\frac{j}{2}}e_j,(1\leq j\leq 2),e^{\prime}_k=\left(re^{i\phi}\right)^{\frac{k-2}{2}}e_k,$ $(3\leq k\leq n,n\geq4),e^{\prime}_{n+1}=e_{n+1},$ we scale it to $1.$ We also rename all the affected entries back. Then we combine with the case when $a_{2,n+1}=0$ and obtain a right and left Leibniz algebra $\mathfrak{g}_{n+1,4}$ given below: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=e_1-e_3, [e_3,e_{n+1}]=de_2,[e_4,e_{n+1}]=e_4-e_2,[e_{i},e_{n+1}]=(i-3)e_{i},\\ \displaystyle [e_{n+1},e_{n+1}]=\epsilon e_2,[e_{n+1},e_1]=-e_1+\left(d+f\right)e_2+e_3,[e_{n+1},e_3]=fe_2,\\ \displaystyle [e_{n+1},e_4]=e_2-e_4,[e_{n+1},e_i]=(3-i)e_i,\\ \displaystyle(5\leq i\leq n;\epsilon=0,1; if\,\,\epsilon=0,\,\,then\,\,d^2+f^2\neq0). \end{array} \right. \end{equation} \end{itemize} \item[(5)] Applying the transformation $e^{\prime}_1=e_1+\frac{(b+2c-2a)a_{2,3}+(b+2c)b_{2,3}}{2a(b+c)}e_2,e^{\prime}_2=e_2,$ $e^{\prime}_3=e_3+\frac{(c-2a)a_{2,3}+c\cdot b_{2,3}}{2a(b+c)}e_2, e^{\prime}_4=e_4,e^{\prime}_5=\frac{e_5}{c}$ and renaming $\frac{a}{c}$ and $\frac{b}{c}$ by $a$ and $b,$ respectively, we obtain a continuous family of Leibniz algebras given below: \begin{equation} \left\{ \begin{array}{l} \displaystyle [e_1,e_{5}]=ae_1+(b-a+1)e_3,[e_2,e_{5}]=2(b+1)e_2, [e_3,e_{5}]=e_1+be_3,\\ \displaystyle [e_4,e_{5}]=(b-a+2)e_2+(a+b)e_4,[e_{5},e_1]=-ae_1+(a-b-1)e_3,\\ \displaystyle [e_{5},e_3]=-e_1-be_3,[e_{5},e_4]=(a+b)(e_2-e_4),(b\neq-a,a\neq0,b\neq-1).
\end{array} \right. \label{g_{5,5}} \end{equation} \item[(6)] We apply the transformation $e^{\prime}_1=e_1+\frac{(3a-2c)a_{2,3}+(a-2c)b_{2,3}}{2a(a-c)}e_2, e^{\prime}_2=e_2,$ $e^{\prime}_3=e_3+\frac{(2a-c)a_{2,3}-c\cdot b_{2,3}}{2a(a-c)}e_2,e^{\prime}_4=e_4,e^{\prime}_5=\frac{e_5}{c}$ and rename $\frac{a}{c}$ and $\frac{a_{4,5}}{c^2}$ by $a$ and $a_{4,5},$ respectively, to obtain a Leibniz algebra given below: \begin{equation} \left\{ \begin{array}{l} \displaystyle [e_1,e_{5}]=ae_1+(1-2a)e_3,[e_2,e_{5}]=2(1-a)e_2, [e_3,e_{5}]=e_1-ae_3,\\ \displaystyle [e_4,e_{5}]=2(1-a)e_2,[e_{5},e_{5}]=a_{4,5}e_4,[e_{5},e_1]=-ae_1+(2a-1)e_3,\\ \displaystyle [e_{5},e_3]=ae_3-e_1,(a\neq0,a\neq1), \end{array} \label{general1} \right. \end{equation} which is a limiting case of $(\ref{g_{5,5}})$ with $b:=-a$ if $a_{4,5}=0.$ If $a_{4,5}\neq0,$ then $a_{4,5}=re^{i\phi}$. To scale $a_{4,5}$ to $1,$ we apply the transformation $e^{\prime}_1=\sqrt{r}e^{i\frac{\phi}{2}}e_1,e^{\prime}_2=re^{i\phi}e_2,e^{\prime}_3=\sqrt{r}e^{i\frac{\phi}{2}}e_3,e^{\prime}_4=re^{i\phi}e_4,e^{\prime}_{5}=e_{5}$ and obtain a Leibniz algebra given below: \begin{equation} \left\{ \begin{array}{l} \displaystyle [e_1,e_{5}]=ae_1+(1-2a)e_3,[e_2,e_{5}]=2(1-a)e_2, [e_3,e_{5}]=e_1-ae_3,\\ \displaystyle [e_4,e_{5}]=2(1-a)e_2,[e_{5},e_{5}]=e_4,[e_{5},e_1]=-ae_1+(2a-1)e_3,\\ \displaystyle [e_{5},e_3]=-e_1+ae_3,(a\neq0,a\neq1). \end{array} \right. \label{g_{5,6}} \end{equation} \item[(7)] We apply the transformation $e^{\prime}_1=e_1+\frac{a_{2,3}-b_{2,1}+b_{2,3}}{a}e_2,e^{\prime}_i=e_i,(2\leq i\leq 5)$ and assign $d:=\frac{a\cdot b_{2,3}+c\left(a_{2,3}-b_{2,1}+b_{2,3}\right)}{a},$ $f:=\frac{a\cdot a_{2,3}-c\left(a_{2,3}-b_{2,1}+b_{2,3}\right)}{a}$ to obtain a continuous family of Leibniz algebras: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=a(e_1-e_3), [e_3,e_{5}]=ce_1+fe_2-ce_3,[e_4,e_{5}]=(c-a)(e_2-e_4),[e_{5},e_{5}]=a_{2,5}e_2,\\ \displaystyle [e_{5},e_1]=-ae_1+\left(d+f\right)e_2+ae_3,[e_{5},e_3]=-ce_1+de_2+ce_3,[e_{5},e_4]=(a-c)(e_2-e_4). \end{array} \right. \end{equation} Then we continue with the transformation $e^{\prime}_i=e_i,(1\leq i\leq 4),e^{\prime}_5=\frac{e_5}{c}$ renaming $\frac{d}{c},\frac{f}{c},\frac{a}{c}$ and $\frac{a_{2,5}}{c^2}$ by $d,f,a$ and $a_{2,5},$ respectively, to obtain a Leibniz algebra: \begin{equation} \left\{ \begin{array}{l} \displaystyle [e_1,e_{5}]=a(e_1-e_3), [e_3,e_{5}]=e_1+fe_2-e_3,[e_4,e_{5}]=(1-a)\left(e_2-e_4\right),\\ \displaystyle [e_{5},e_{5}]=a_{2,5}e_2,[e_{5},e_1]=-ae_1+\left(d+f\right)e_2+ae_3,[e_{5},e_3]=-e_1+de_2+e_3,\\ \displaystyle [e_{5},e_4]=(a-1)\left(e_2-e_4\right),(a\neq0,a\neq 1). \end{array} \right. \label{general} \end{equation} If $d=f=a_{2,5}=0,$ then we have a limiting case of $(\ref{g_{5,5}})$ with $b=-1.$ If $a_{2,5}\neq0,$ then we apply the same transformation as we did to scale $a_{4,5}$ to $1.$ We also rename all the affected entries back. Then we combine with the case when $a_{2,5}=0$ and obtain a Leibniz algebra given below: \begin{equation} \left\{ \begin{array}{l} \displaystyle [e_1,e_{5}]=a(e_1-e_3), [e_3,e_{5}]=e_1+fe_2-e_3,[e_4,e_{5}]=(1-a)\left(e_2-e_4\right),\\ \displaystyle [e_{5},e_{5}]=\epsilon e_2,[e_{5},e_1]=-ae_1+\left(d+f\right)e_2+ae_3,[e_{5},e_3]=-e_1+de_2+e_3,\\ \displaystyle [e_{5},e_4]=(a-1)\left(e_2-e_4\right),(\epsilon=0,1;if\,\,\epsilon=0,then\,\,d^2+f^2\neq0),\\ \displaystyle (a\neq0,a\neq 1). \end{array} \label{g_{5,7}} \right. 
\end{equation} \begin{remark}\label{Remark{g_{5,7}}} We notice by applying the transformation $e^{\prime}_1=e_1+fe_2,e^{\prime}_i=e_i,(2\leq i\leq 5)$ that this algebra $(\ref{g_{5,7}})$ is isomorphic to \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=ae_1-afe_2-ae_3, [e_3,e_{5}]=e_1-e_3,[e_4,e_{5}]=(1-a)\left(e_2-e_4\right),[e_{5},e_{5}]=\epsilon e_2,\\ \displaystyle [e_{5},e_1]=-ae_1+\left(af+d+f\right)e_2+ae_3,[e_{5},e_3]=-e_1+(d+f)e_2+e_3,\\ \displaystyle [e_{5},e_4]=(a-1)\left(e_2-e_4\right),(\epsilon=0,1;if\,\,\epsilon=0,then\,\,d^2+f^2\neq0),(a\neq0,a\neq 1). \end{array} \right. \end{equation} \end{remark} \item[(8)] Applying the transformation $e^{\prime}_1=e_1+\frac{a_{2,3}-2b_{2,1}}{c}e_2,e^{\prime}_2=e_2, e^{\prime}_3=e_3-\frac{b_{2,1}}{c}e_2,e^{\prime}_4=e_4,e^{\prime}_5=\frac{e_5}{c}$ and renaming $\frac{a_{4,5}}{c^2}$ back by $a_{4,5},$ we obtain a Leibniz algebra: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=e_3,[e_2,e_{5}]=2e_2, [e_3,e_{5}]=e_1,[e_4,e_{5}]=2e_2,[e_{5},e_{5}]=a_{4,5}e_4,[e_{5},e_1]=-e_3,\\ \displaystyle [e_{5},e_3]=-e_1, \end{array} \right. \end{equation} which is a limiting case of $(\ref{general1})$ with $a=0$ and at the same time a limiting case of $(\ref{g_{5,5}})$ with $b:=-a$ if $a_{4,5}=0.$ Therefore we only consider the case when $a_{4,5}\neq0.$ One applies the transformation below $(\ref{general1})$ to scale $a_{4,5}$ to $1$ and we have a limiting case of $(\ref{g_{5,6}})$ with $a=0.$ Altogether $(\ref{g_{5,6}})$ and all its limiting cases give us the algebra $g_{5,6}$ in full generality: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=ae_1+(1-2a)e_3,[e_2,e_{5}]=2(1-a)e_2, [e_3,e_{5}]=e_1-ae_3,[e_4,e_{5}]=2(1-a)e_2,\\ \displaystyle [e_{5},e_{5}]=e_4,[e_{5},e_1]=-ae_1+(2a-1)e_3,[e_{5},e_3]=-e_1+ae_3, \end{array} \right. \end{equation} where $a\neq1,$ otherwise nilpotent. \item[(9)] If $a=0,b\neq0,b\neq-c,c\neq0,(n=4),$ then we apply the transformation $e^{\prime}_1=e_1+\frac{(b+2c)b_{2,3}-(3b+2c)a_{2,3}}{(b+c)^2}e_2, e^{\prime}_2=e_2,e^{\prime}_3=e_3-\frac{\mathcal{B}_{2,1}}{b+c}e_2,e^{\prime}_4=e_4, e^{\prime}_5=\frac{e_5}{c}.$ We rename $\frac{b}{c}$ by $b$ and obtain a Leibniz algebra: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=(b+1)e_3,[e_2,e_{5}]=2(b+1)e_2, [e_3,e_{5}]=e_1+be_3,[e_4,e_{5}]=(b+2)e_2+be_4,\\ \displaystyle [e_{5},e_1]=(-b-1)e_3,[e_{5},e_3]=-e_1-be_3,[e_{5},e_4]=b(e_2-e_4),(b\neq0,b\neq-1), \end{array} \right. \end{equation} which is a limiting case of $(\ref{g_{5,5}})$ with $a=0.$ Altogether $(\ref{g_{5,5}})$ and all its limiting cases give us the algebra $g_{5,5}$ in full generality: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=ae_1+(b-a+1)e_3,[e_2,e_{5}]=2(b+1)e_2, [e_3,e_{5}]=e_1+be_3,\\ \displaystyle [e_4,e_{5}]=(b-a+2)e_2+(a+b)e_4,[e_{5},e_1]=-ae_1+(a-b-1)e_3,[e_{5},e_3]=-e_1-be_3,\\ \displaystyle [e_{5},e_4]=(a+b)(e_2-e_4), \end{array} \right. \end{equation} where if $b=-1,$ then $a\neq1,$ otherwise nilpotent. 
\item[(10)] We apply the transformation $e^{\prime}_1=e_1+\frac{a_{2,3}}{c}e_2,e^{\prime}_i=e_i,(2\leq i\leq 4),e^{\prime}_5=\frac{e_5}{c}$ and rename $\frac{a_{2,1}}{c},\frac{a_{2,3}}{c},\frac{b_{2,3}}{c}$ and $\frac{a_{2,5}}{c^2}$ by $a_{2,1},a_{2,3},b_{2,3}$ and $a_{2,5},$ respectively, to obtain a Leibniz algebra: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=a_{2,1}e_2, [e_3,e_{5}]=e_1-e_3,[e_4,e_{5}]=e_2-e_4,[e_{5},e_{5}]=a_{2,5}e_2,\\ \displaystyle [e_{5},e_1]=\left(a_{2,3}+b_{2,3}-a_{2,1}\right)e_2,[e_{5},e_3]=-e_1+\left(a_{2,3}+b_{2,3}\right)e_2+e_3,[e_{5},e_4]=e_4-e_2. \end{array} \right. \end{equation} We scale $a_{2,5}$ to $1$ if nonzero. Combining with the case when $a_{2,5}=0$ and renaming all the affected entries back, we have the algebra: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=a_{2,1}e_2, [e_3,e_{5}]=e_1-e_3,[e_4,e_{5}]=e_2-e_4,[e_{5},e_{5}]=\epsilon e_2,\\ \displaystyle [e_{5},e_1]=\left(a_{2,3}+b_{2,3}-a_{2,1}\right)e_2,[e_{5},e_3]=-e_1+\left(a_{2,3}+b_{2,3}\right)e_2+e_3,[e_{5},e_4]=e_4-e_2,\\ \displaystyle (\epsilon=0,1). \end{array} \right. \end{equation} If $a_{2,1}=0,$ then we obtain a limiting case of $(\ref{general})$ and $(\ref{g_{5,7}})$ with $a=0$ according to Remark \ref{Remark{g_{5,7}}}. Altogether $(\ref{g_{5,7}})$ and all its limiting cases, give us the algebra $g_{5,7}$ in full generality: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=a(e_1-e_3), [e_3,e_{5}]=e_1+fe_2-e_3,[e_4,e_{5}]=(1-a)\left(e_2-e_4\right),[e_{5},e_{5}]=\epsilon e_2,\\ \displaystyle [e_{5},e_1]=-ae_1+\left(d+f\right)e_2+ae_3,[e_{5},e_3]=-e_1+de_2+e_3,[e_{5},e_4]=(a-1)\left(e_2-e_4\right),\\ \displaystyle (\epsilon=0,1;if\,\,\epsilon=0,then\,\,d^2+f^2\neq0), \end{array} \right. \end{equation} where $a\neq1,$ otherwise nilpotent. If $c:=a_{2,1}\neq0$ and we assign $d:=a_{2,3}+b_{2,3}-2a_{2,1},$ then we obtain a Leibniz algebra $g_{5,8}$ given below: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=ce_2, [e_3,e_{5}]=e_1-e_3,[e_4,e_{5}]=e_2-e_4,[e_{5},e_{5}]=\epsilon e_2,[e_{5},e_1]=\left(c+d\right)e_2,\\ \displaystyle [e_{5},e_3]=-e_1+\left(d+2c\right)e_2+e_3,[e_{5},e_4]=e_4-e_2,(c\neq0,\epsilon=0,1). \end{array} \right. 
\end{equation} \end{enumerate} \end{proof} \subsubsection{Codimension two and three solvable extensions of $\mathcal{L}^4$}\label{Two&Three} The non-zero inner derivations of $\mathcal{L}^4,(n\geq4)$ are given by
\[ \mathscr{R}_{e_1}=\left[\begin{smallmatrix} 0&0 & 0 & 0 & \cdots & 0 & 0 & 0 \\ 1&0 & 0 & 0 & \cdots & 0 & 0 & 0 \\ 0&0 & 0 & 0 & \cdots & 0 & 0 & 0 \\ 0&0 & 1 & 0 & \cdots & 0 & 0 & 0 \\ 0& 0 & 0 & 1 & \cdots & 0 & 0 & 0 \\ \vdots& \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\ 0& 0 & 0 & 0&\cdots & 1 & 0 &0\\ 0& 0 & 0 & 0&\cdots & 0 & 1 &0 \end{smallmatrix}\right],\mathscr{R}_{e_3}=\left[\begin{smallmatrix} 0&0 & 0 & 0 & \cdots & 0 \\ 2&0 & 1 & 0 & \cdots & 0 \\ 0&0 & 0 & 0 & \cdots & 0 \\ -1&0 & 0 & 0 & \cdots & 0\\ 0& 0 & 0 & 0 & \cdots & 0\\ \vdots& \vdots & \vdots & \vdots & & \vdots\\ 0& 0 & 0 & 0&\cdots & 0 \end{smallmatrix}\right], \mathscr{R}_{e_i}=-E_{i+1,1}=\left[\begin{smallmatrix} 0 & 0&0&\cdots & 0 \\ 0& 0&0&\cdots & 0 \\ 0 & 0&0&\cdots & 0 \\ 0 & 0&0&\cdots & 0 \\ \vdots &\vdots &\vdots& & \vdots\\ -1 &0&0& \cdots & 0\\ \vdots &\vdots &\vdots& & \vdots\\ \boldsymbol{\cdot} & 0&0&\cdots & 0 \end{smallmatrix}\right]\,(4\leq i\leq n-1),\]
where $E_{i+1,1}$ is the $n\times n$ matrix that has $1$ in the $(i+1,1)^{st}$ position and all other entries are zero. \begin{remark}\label{NumberOuterDerivations} If we have four or more outer derivations, then they are ``nil-dependent'' (see Section \ref{Pr}). Therefore the solvable algebras we are constructing can have codimension at most three. (As Section \ref{Codim3} shows, there are in fact at most two outer derivations.) \end{remark} \begin{center}\textit{General approach to finding solvable right Leibniz algebras with a codimension two nilradical $\mathcal{L}^4$.}\footnote{When we work with left Leibniz algebras, we first replace the right multiplication operators with left multiplication operators everywhere and replace the right Leibniz identity with the left Leibniz identity in step $(iii).$ We also interchange $s$ and $r$ on the very left in steps $(i)$ and $(ii).$ } \end{center} \begin{enumerate}[noitemsep, topsep=0pt] \item[(i)] We consider $\mathscr{R}_{[e_r,e_s]}=[\mathscr{R}_{e_s},\mathscr{R}_{e_r}]=\mathscr{R}_{e_s}\mathscr{R}_{e_r}-\mathscr{R}_{e_r}\mathscr{R}_{e_s},\,(n+1\leq r\leq n+2,\,1\leq s\leq n+2)$ and compare it with $c_1\mathscr{R}_{e_1}+\sum_{k=3}^{n-1}c_k\mathscr{R}_{e_k}$ (because $e_{2}$ and $e_n$ are in the center of $\mathcal{L}^4,\,(n\geq4)$ defined in $(\ref{L4})$) in order to find all the unknown commutators. \item[(ii)] We write down $[e_{r},e_{s}],\,(n+1\leq r\leq n+2,\,1\leq s\leq n+2)$ including a linear combination of $e_{2}$ and $e_n$ as well. We add the brackets of the nilradical $\mathcal{L}^4$ and outer derivations $\mathscr{R}_{e_{n+1}}$ and $\mathscr{R}_{e_{n+2}}.$ \item[(iii)] We satisfy the right Leibniz identity: $[[e_r,e_s],e_t]=[[e_r,e_t],e_s]+[e_r,[e_s,e_t]]$ or, equivalently, $\mathscr{R}_{e_t}\left([e_r,e_s]\right)=[\mathscr{R}_{e_t}(e_r),e_s]+[e_r,\mathscr{R}_{e_t}(e_s)],\,(1\leq r,s,t\leq n+2)$ for all the brackets obtained in step $(ii).$ \item[(iv)] Then we carry out the technique of ``absorption'' (see Section \ref{Solvable left Leibniz algebras}) to remove some parameters and simplify the algebra. \item[(v)] We apply the change of basis transformations without affecting the nilradical to remove as many parameters as possible.
\end{enumerate} \paragraph{Codimension two solvable extensions of $\mathcal{L}^4,(n=4)$}\label{Twodim(n=4)} There are the following cases based on the conditions involving parameters $a,b$ and $c$: \begin{enumerate} \item[(1)] $b\neq-a,a\neq0,b\neq-c,$ \item[(2)] $b:=-a,a\neq0,a\neq c,$ \item[(3)] $b:=-c,a\neq c,a\neq0,$ \item[(4)] $a=0,b\neq0,b\neq-c.$ \end{enumerate} \noindent (1) (a) One could set $\left( \begin{array}{c} a^1 \\ b^1\\ c^1 \end{array}\right)=\left( \begin{array}{c} 1\\ a\\ 0 \end{array}\right)$ and $\left( \begin{array}{c} a^2 \\ b^2\\ c^2 \end{array}\right)=\left( \begin{array}{c} 1\\ b\\ 1 \end{array}\right),(a\neq-1,a\neq0,b\neq-1).$ Therefore the vector space of outer derivations as $4\times 4$ matrices is as follows: $$\mathscr{R}_{e_{5}}=\begin{array}{llll} \left[\begin{matrix} 1 & 0 & 0 & 0\\ \frac{3-2a}{2}a_{2,3}+\frac{1-2a}{2}b_{2,3} & 2a & a_{2,3}& a-1\\ a-1 & 0 & a & 0 \\ 0 & 0 & 0 & a+1 \end{matrix}\right] \end{array},$$ $$\mathscr{R}_{e_{6}}=\begin{array}{llll} \left[\begin{matrix} 1 & 0 & 1 & 0\\ -b\cdot \alpha_{2,3}-(1+b)\beta_{2,3} & 2(b+1) & \alpha_{2,3}& b+1\\ b & 0 & b & 0 \\ 0 & 0 & 0 & b+1 \end{matrix}\right] \end{array}.$$ \noindent $(i)$ Considering $\mathscr{R}_{[e_5,e_6]},$ we obtain that $a:=1$ and $\alpha_{2,3}:=\left(b+\frac{3}{2}\right)a_{2,3}+\frac{b_{2,3}}{2}.$ Since $b\neq-1$, it follows that $\beta_{2,3}:=\frac{b_{2,3}}{2}-\left(b+\frac{1}{2}\right)a_{2,3}$ and $\mathscr{R}_{[e_5,e_6]}=0.$ As a result, $$\mathscr{R}_{e_{5}}=\begin{array}{llll} \left[\begin{matrix} 1 & 0 & 0 & 0\\ \frac{a_{2,3}-b_{2,3}}{2} & 2 & a_{2,3}& 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 2 \end{matrix}\right] \end{array},$$ $$\mathscr{R}_{e_{6}}=\begin{array}{llll} \left[\begin{matrix} 1 & 0 & 1 & 0\\ \frac{a_{2,3}}{2}-\left(b+\frac{1}{2}\right)b_{2,3} & 2(b+1) & \left(b+\frac{3}{2}\right)a_{2,3}+\frac{b_{2,3}}{2}& b+1\\ b & 0 & b & 0 \\ 0 & 0 & 0 & b+1 \end{matrix}\right] \end{array},(b\neq-1).$$ Further, we find the following commutators: \allowdisplaybreaks \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber \mathscr{R}_{[e_{5},e_{1}]}=-\mathscr{R}_{e_1},\mathscr{R}_{[e_{5},e_2]}=0,\mathscr{R}_{[e_{5},e_3]}=-\mathscr{R}_{e_3},\mathscr{R}_{[e_{5},e_i]}=0,\mathscr{R}_{[e_{6},e_1]}=-\mathscr{R}_{e_1}- b\mathscr{R}_{e_3},\\ \displaystyle \mathscr{R}_{[e_{6},e_2]}=0,\mathscr{R}_{[e_{6},e_{3}]}=-\mathscr{R}_{e_1}-b\mathscr{R}_{e_3}, \mathscr{R}_{[e_{6},e_i]}=0,(b\neq-1,4\leq i\leq 6). \end{array} \right. \end{equation} \noindent $(ii)$ We include a linear combination of $e_2$ and $e_4$: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_{5},e_{1}]=-e_1+c_{2,1}e_2+c_{4,1}e_4,[e_{5},e_2]=c_{2,2}e_2+c_{4,2}e_4, [e_{5},e_3]=c_{2,3}e_2-e_3+c_{4,3}e_4,\\ \displaystyle [e_{5},e_i]=c_{2,i}e_2+c_{4,i}e_4,[e_{6},e_1]=-e_1+d_{2,1}e_2- be_3+d_{4,1}e_4,[e_{6},e_2]=d_{2,2}e_2+d_{4,2}e_4,\\ \displaystyle [e_{6},e_{3}]=-e_1+d_{2,3}e_2-be_3+d_{4,3}e_4, [e_{6},e_i]=d_{2,i}e_2+d_{4,i}e_4,(b\neq-1,4\leq i\leq 6). \end{array} \right. \end{equation} Besides we have the brackets from $\mathcal{L}^4$ and from outer derivations $\mathscr{R}_{e_{5}}$ and $\mathscr{R}_{e_{6}}$ as well. \noindent $(iii)$ We satisfy the right Leibniz identity, which is shown in Table \ref{RightCodimTwo(L4,(n=4))}. \begin{table}[h!] 
\caption{Right Leibniz identities in case (1) (a) with a nilradical $\mathcal{L}^4,(n=4)$.} \label{RightCodimTwo(L4,(n=4))} \begin{tabular}{lp{2.4cm}p{12cm}} \hline \scriptsize Steps &\scriptsize Ordered triple &\scriptsize Result\\ \hline \scriptsize $1.$ &\scriptsize $\mathscr{R}_{e_1}\left([e_{5},e_{1}]\right)$ &\scriptsize $[e_{5},e_2]=0$ $\implies$ $c_{2,2}=c_{4,2}=0.$\\ \hline \scriptsize $2.$ &\scriptsize $\mathscr{R}_{e_1}\left([e_{6},e_{1}]\right)$ &\scriptsize $[e_{6},e_2]=0$ $\implies$ $d_{2,2}=d_{4,2}=0.$\\ \hline \scriptsize $3.$ &\scriptsize $\mathscr{R}_{e_3}\left([e_{5},e_{1}]\right)$ &\scriptsize $c_{2,4}:=2,c_{4,4}:=-2$ $\implies$ $[e_{5},e_4]=2\left(e_2-e_4\right).$ \\ \hline \scriptsize $4.$ &\scriptsize $\mathscr{R}_{e_3}\left([e_{6},e_{1}]\right)$ &\scriptsize $d_{2,4}:=1+b,d_{4,4}:=-1-b$ $\implies$ $[e_{6},e_4]=(b+1)\left(e_2-e_4\right).$ \\ \hline \scriptsize $5.$ &\scriptsize $\mathscr{R}_{e_{5}}\left([e_{5},e_{5}]\right)$ &\scriptsize $c_{4,5}=0$ $\implies$ $[e_{5},e_{5}]=c_{2,5}e_{2}.$\\ \hline \scriptsize $6.$ &\scriptsize $\mathscr{R}_{e_6}\left([e_{5},e_{6}]\right)$ &\scriptsize $d_{4,6}=0$ $\implies$ $[e_{6},e_6]=d_{2,6}e_2.$ \\ \hline \scriptsize $7.$ &\scriptsize $\mathscr{R}_{e_5}\left([e_{5},e_{3}]\right)$ &\scriptsize $c_{2,3}:=a_{2,3},c_{4,3}=0$ $\implies$ $[e_{5},e_3]=a_{2,3}e_2-e_3.$ \\ \hline \scriptsize $8.$ &\scriptsize $\mathscr{R}_{e_{6}}\left([e_{5},e_{3}]\right)$ &\scriptsize $c_{2,1}:=\frac{a_{2,3}-b_{2,3}}{2},$ $c_{4,1}=0$ $\implies$ $[e_{5},e_{1}]=-e_{1}+\frac{a_{2,3}-b_{2,3}}{2}e_2.$\\ \hline \scriptsize $9.$ &\scriptsize $\mathscr{R}_{e_{1}}\left([e_{5},e_{6}]\right)$ &\scriptsize $d_{4,1}=0$ $\implies$ $[e_{6},e_{1}]=-e_1+d_{2,1}e_2-be_3.$ \\ \hline \scriptsize $10.$ &\scriptsize $\mathscr{R}_{e_{3}}\left([e_{5},e_{6}]\right)$ &\scriptsize $d_{4,3}=0$ $\implies$ $[e_{6},e_{3}]=-e_1+d_{2,3}e_2-be_3.$ \\ \hline \scriptsize $11.$ &\scriptsize $\mathscr{R}_{e_5}\left([e_{5},e_{6}]\right)$ &\scriptsize $d_{4,5}:=-c_{4,6}$ $\implies$ $c_{4,6}:=(b+1)c_{2,5}-c_{2,6}$ $\implies$ $[e_{5},e_6]=c_{2,6}e_2+\left((b+1)c_{2,5}-c_{2,6}\right)e_4, [e_6,e_5]=d_{2,5}e_2+\left(c_{2,6}-(b+1)c_{2,5}\right)e_4$\\ \hline \scriptsize $12.$ &\scriptsize $\mathscr{R}_{e_{5}}\left([e_{6},e_{3}]\right)$ &\scriptsize $d_{2,3}:=\left(b+\frac{1}{2}\right)a_{2,3}-\frac{b_{2,3}}{2}$ $\implies$ $[e_{6},e_{3}]=-e_1+\left(\left(b+\frac{1}{2}\right)a_{2,3}-\frac{b_{2,3}}{2}\right)e_2-be_3.$ \\ \hline \scriptsize $13.$ &\scriptsize $\mathscr{R}_{e_{6}}\left([e_{6},e_{3}]\right)$ &\scriptsize $d_{2,1}:=\left(b+\frac{1}{2}\right)a_{2,3}-\frac{b_{2,3}}{2}$ $\implies$ $[e_{6},e_{1}]=-e_1+\left(\left(b+\frac{1}{2}\right)a_{2,3}-\frac{b_{2,3}}{2}\right)e_2-be_3.$ \\ \hline \scriptsize $14.$ &\scriptsize $\mathscr{R}_{e_{6}}\left([e_{6},e_{5}]\right)$ &\scriptsize $d_{2,6}:=(b+1)\left(c_{2,6}+d_{2,5}\right)-(b+1)^2c_{2,5}$ $\implies$ $[e_{6},e_6]=\left((b+1)\left(c_{2,6}+d_{2,5}\right)-(b+1)^2c_{2,5}\right)e_2.$\\ \hline \end{tabular} \end{table} We obtain that $\mathscr{R}_{e_5}$ and $\mathscr{R}_{e_6}$ restricted to the nilradical do not change, but the remaining brackets are as follows: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_{5},e_{1}]=-e_1+\frac{a_{2,3}-b_{2,3}}{2}e_2, [e_{5},e_3]=a_{2,3}e_2-e_3,[e_{5},e_4]=2\left(e_2-e_4\right),[e_5,e_5]=c_{2,5}e_2,\\ \displaystyle [e_5,e_6]=c_{2,6}e_2+\left((b+1)c_{2,5}-c_{2,6}\right)e_4, [e_{6},e_1]=-e_1+\left(\left(b+\frac{1}{2}\right)a_{2,3}-\frac{b_{2,3}}{2}\right)e_2- be_3,\\ \displaystyle 
[e_{6},e_{3}]=-e_1+\left(\left(b+\frac{1}{2}\right)a_{2,3}-\frac{b_{2,3}}{2}\right)e_2- be_3, [e_{6},e_4]=(b+1)\left(e_2-e_4\right),\\ \displaystyle [e_6,e_5]=d_{2,5}e_2+\left(c_{2,6}-(b+1)c_{2,5}\right)e_4,[e_{6},e_6]=\left((b+1)\left(c_{2,6}+d_{2,5}\right)-(b+1)^2c_{2,5}\right)e_2,\\ \displaystyle (b\neq-1). \end{array} \right. \end{equation} Altogether the nilradical $\mathcal{L}^4$ $(\ref{L4}),$ the outer derivations $\mathscr{R}_{e_{5}}$ and $\mathscr{R}_{e_{6}}$ written in the bracket notation and the remaining brackets given above define a continuous family of Leibniz algebras depending on the parameters. \noindent $(iv)\&(v)$ We apply the following transformation: $e^{\prime}_1=e_1+\frac{b_{2,3}-a_{2,3}}{2}e_2, e^{\prime}_2=e_2,e^{\prime}_3=e_3-a_{2,3}e_2,e^{\prime}_4=e_4,e^{\prime}_5=e_5-\frac{c_{2,5}}{2}e_2, e^{\prime}_6=e_6-\frac{d_{2,5}}{2}e_2+\left(\frac{b+1}{2}c_{2,5}-\frac{c_{2,6}}{2}\right)e_4$ and obtain a Leibniz algebra $\mathfrak{g}} \def\q{\mathfrak{q}_{6,2}$ given below: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_5]=e_1,[e_2,e_5]=2e_2,[e_3,e_5]=e_3,[e_4,e_5]=2e_4,[e_{5},e_{1}]=-e_1,[e_{5},e_3]=-e_3,\\ \displaystyle [e_{5},e_4]=2\left(e_2-e_4\right), [e_1,e_6]=e_1+be_3,[e_2,e_6]=2(b+1)e_2,[e_3,e_6]=e_1+be_3,\\ \displaystyle[e_4,e_6]=(b+1)\left(e_2+e_4\right), [e_{6},e_1]=-e_1-be_3,[e_{6},e_{3}]=-e_1-be_3,\\ \displaystyle [e_{6},e_4]=(b+1)\left(e_2-e_4\right),(b\neq-1). \end{array} \right. \end{equation} \begin{remark} We notice that if $b=-1$, then the outer derivation $\mathscr{R}_{e_6}$ is nilpotent. \end{remark} \noindent (1) (b) We set $\left( \begin{array}{c} a^1 \\ b^1\\ c^1 \end{array}\right)=\left( \begin{array}{c} 1\\ 2\\ c \end{array}\right)$ and $\left( \begin{array}{c} a^2 \\ b^2\\ c^2 \end{array}\right)=\left( \begin{array}{c} 1\\ 1\\ d \end{array}\right),(c\neq-2,d\neq-1).$ Therefore the vector space of outer derivations as $4\times 4$ matrices is as follows: $$\mathscr{R}_{e_{5}}=\begin{array}{llll} \left[\begin{matrix} 1 & 0 & c & 0\\ -\frac{1+3c}{2}a_{2,3}-\frac{3+3c}{2}b_{2,3} & 2(c+2) & a_{2,3}& 2c+1\\ c+1 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{matrix}\right] \end{array},$$ $$\mathscr{R}_{e_{6}}=\begin{array}{llll} \left[\begin{matrix} 1 & 0 & d & 0\\ \frac{1-3d}{2}\alpha_{2,3}-\frac{1+3d}{2}\beta_{2,3} & 2(d+1) & \alpha_{2,3}& 2d\\ d & 0 & 1 & 0 \\ 0 & 0 & 0 & 2 \end{matrix}\right] \end{array}.$$ \noindent $(i)$ Considering $\mathscr{R}_{[e_5,e_6]},$ we obtain that $d=0$ and we have the system of equations: $$\left\{ \begin{array}{ll} a_{2,3}:=\left(\frac{3}{2}c+2\right)\alpha_{2,3}+\frac{c}{2}\beta_{2,3} {,} \\ \left(\frac{3}{2}+c\right)\beta_{2,3}-\frac{1}{2}\alpha_{2,3}-\left(\frac{3}{2}c+\frac{1}{2}\right)a_{2,3}-\left(\frac{3}{2}c+\frac{3}{2}\right)b_{2,3}=0{.} \end{array} \right. 
$$ There are the following two cases: \begin{enumerate}[noitemsep, topsep=0pt] \item[(I)] If $c\neq-1,$ then $b_{2,3}:=-\frac{3c+2}{2}\alpha_{2,3}-\frac{c-2}{2}\beta_{2,3}$ and $\mathscr{R}_{[e_5,e_6]}=0.$ As a result, $$\mathscr{R}_{e_{5}}=\begin{array}{llll} \left[\begin{matrix} 1 & 0 & c & 0\\ \frac{\alpha_{2,3}}{2}-\frac{2c+3}{2}\beta_{2,3} & 2(c+2) & \left(\frac{3c}{2}+2\right)\alpha_{2,3}+\frac{c}{2}\beta_{2,3}& 2c+1\\ c+1 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{matrix}\right] \end{array},$$ $$\mathscr{R}_{e_{6}}=\begin{array}{llll} \left[\begin{matrix} 1 & 0 & 0 & 0\\ \frac{\alpha_{2,3}-\beta_{2,3}}{2} & 2 & \alpha_{2,3}& 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 2 \end{matrix}\right] \end{array},(c\neq-2).$$ \noindent Further, we find the following: \allowdisplaybreaks \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber \mathscr{R}_{[e_{5},e_{1}]}=-\mathscr{R}_{e_1}-(c+1)\mathscr{R}_{e_3},\mathscr{R}_{[e_{5},e_2]}=0,\mathscr{R}_{[e_{5},e_3]}=-c\mathscr{R}_{e_1}-2\mathscr{R}_{e_3},\mathscr{R}_{[e_{5},e_i]}=0,\\ \displaystyle \mathscr{R}_{[e_{6},e_1]}=-\mathscr{R}_{e_1},\mathscr{R}_{[e_{6},e_2]}=0,\mathscr{R}_{[e_{6},e_{3}]}=-\mathscr{R}_{e_3}, \mathscr{R}_{[e_{6},e_i]}=0,(c\neq-2,4\leq i\leq 6). \end{array} \right. \end{equation} \item[(II)] If $c=-1,$ then $a_{2,3}:=\frac{\alpha_{2,3}-\beta_{2,3}}{2}$ and $\mathscr{R}_{[e_5,e_6]}=0.$ It follows that $$\mathscr{R}_{e_{5}}=\begin{array}{llll} \left[\begin{matrix} 1 & 0 & -1 & 0\\ \frac{\alpha_{2,3}-\beta_{2,3}}{2} & 2& \frac{\alpha_{2,3}-\beta_{2,3}}{2}& -1\\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{matrix}\right] \end{array}, \mathscr{R}_{e_{6}}=\begin{array}{llll} \left[\begin{matrix} 1 & 0 & 0 & 0\\ \frac{\alpha_{2,3}-\beta_{2,3}}{2} & 2 & \alpha_{2,3}& 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 2 \end{matrix}\right] \end{array}$$ and we have the following commutators: \allowdisplaybreaks \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber \mathscr{R}_{[e_{5},e_{1}]}=-\mathscr{R}_{e_1},\mathscr{R}_{[e_{5},e_2]}=0,\mathscr{R}_{[e_{5},e_3]}=\mathscr{R}_{e_1}-2\mathscr{R}_{e_3},\mathscr{R}_{[e_{5},e_i]}=0,\\ \displaystyle \mathscr{R}_{[e_{6},e_1]}=-\mathscr{R}_{e_1},\mathscr{R}_{[e_{6},e_2]}=0,\mathscr{R}_{[e_{6},e_{3}]}=-\mathscr{R}_{e_3}, \mathscr{R}_{[e_{6},e_i]}=0,(4\leq i\leq 6). \end{array} \right. \end{equation} \end{enumerate} \noindent $(ii)$ We combine cases (I) and (II) together, include a linear combination of $e_2$ and $e_4$, and have the following brackets: \allowdisplaybreaks \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_{5},e_{1}]=-e_1+a_{2,1}e_2-(c+1)e_3+a_{4,1}e_4,[e_{5},e_2]=a_{2,2}e_2+a_{4,2}e_4,[e_{5},e_3]=-ce_1+\\ \displaystyle a_{2,3}e_2-2e_3+a_{4,3}e_4,[e_{5},e_i]=a_{2,i}e_2+a_{4,i}e_4,[e_{6},e_1]=-e_1+b_{2,1}e_2+b_{4,1}e_4,\\ \displaystyle [e_{6},e_2]=b_{2,2}e_2+b_{4,2}e_4,[e_{6},e_{3}]=b_{2,3}e_2-e_3+b_{4,3}e_4,[e_{6},e_i]=b_{2,i}e_2+b_{4,i}e_4,\\ \displaystyle(c\neq-2,4\leq i\leq 6). \end{array} \right. \end{equation} Besides we have the brackets from $\mathcal{L}^4$ and from outer derivations $\mathscr{R}_{e_{5}}$ and $\mathscr{R}_{e_{6}}$ as well. \noindent $(iii)$ To satisfy the right Leibniz identity, we refer to the identities given in Table \ref{RightCodimTwo(L4,(n=4))}. as much as possible. 
As a result, the identities we apply are the following: $1.-6.,8.,$ $\mathscr{R}_{e_6}{\left([e_5,e_1]\right)}=[\mathscr{R}_{e_6}(e_5),e_1]+[e_5,\mathscr{R}_{e_6}(e_1)],$ $9.-11.,$ $\mathscr{R}_{e_6}\left([e_6,e_1]\right)=[\mathscr{R}_{e_6}(e_6),e_1]+[e_6,\mathscr{R}_{e_6}(e_1)],$ $13.$ and $14.$ We obtain that $\mathscr{R}_{e_5}$ and $\mathscr{R}_{e_6}$ restricted to the nilradical do not change, but the remaining brackets are the following: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_{5},e_{1}]=-e_1+\left(\left(c+\frac{3}{2}\right)\alpha_{2,3}-\frac{\beta_{2,3}}{2}\right)e_2-(c+1)e_3,\\ \displaystyle [e_{5},e_3]=-ce_1+\left(\left(\frac{c}{2}+2\right)\alpha_{2,3}-\frac{c}{2}\beta_{2,3}\right)e_2-2e_3,[e_{5},e_4]=3\left(e_2-e_4\right),\\ \displaystyle [e_5,e_5]=(c+2)\left(a_{2,6}+a_{4,6}\right)e_2,[e_5,e_6]=a_{2,6}e_2+a_{4,6}e_4,[e_{6},e_1]=-e_1+\frac{\alpha_{2,3}-\beta_{2,3}}{2}e_2,\\ \displaystyle [e_{6},e_{3}]=\alpha_{2,3}e_2-e_3, [e_{6},e_4]=2\left(e_2-e_4\right),[e_6,e_5]=\left((c+2)b_{2,6}+a_{4,6}\right)e_2-a_{4,6}e_4,\\ \displaystyle [e_{6},e_6]=b_{2,6}e_2,(c\neq-2). \end{array} \right. \end{equation} Altogether the nilradical $\mathcal{L}^4$ $(\ref{L4}),$ the outer derivations $\mathscr{R}_{e_{5}}$ and $\mathscr{R}_{e_{6}}$ written in the bracket notation and the remaining brackets given above define a continuous family of Leibniz algebras depending on the parameters. \noindent $(iv)\&(v)$ We apply the transformation: $e^{\prime}_1=e_1-\frac{\alpha_{2,3}-\beta_{2,3}}{2}e_2, e^{\prime}_2=e_2,e^{\prime}_3=e_3-\alpha_{2,3}e_2,e^{\prime}_4=e_4,e^{\prime}_5=e_5-\frac{a_{2,6}}{2}e_2-\frac{a_{4,6}}{2}e_4, e^{\prime}_6=e_6-\frac{b_{2,6}}{2}e_2$ and obtain a Leibniz algebra $\mathfrak{g}} \def\q{\mathfrak{q}_{6,3}$: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_5]=e_1+(c+1)e_3,[e_2,e_5]=2(c+2)e_2,[e_3,e_5]=ce_1+2e_3,[e_4,e_5]=(2c+1)e_2+3e_4,\\ \displaystyle [e_{5},e_{1}]=-e_1-(c+1)e_3,[e_{5},e_3]=-ce_1-2e_3,[e_{5},e_4]=3\left(e_2-e_4\right),[e_1,e_6]=e_1,\\ \displaystyle [e_2,e_6]=2e_2,[e_3,e_6]=e_3,[e_4,e_6]=2e_4, [e_{6},e_1]=-e_1,[e_{6},e_{3}]=-e_3,[e_{6},e_4]=2\left(e_2-e_4\right),\\ \displaystyle (c\neq-2). \end{array} \right. \end{equation} \noindent (1) (c) We set $\left( \begin{array}{c} a^1 \\ b^1\\ c^1 \end{array}\right)=\left( \begin{array}{c} a\\ 1\\ 0 \end{array}\right)$ and $\left( \begin{array}{c} a^2 \\ b^2\\ c^2 \end{array}\right)=\left( \begin{array}{c} b\\ 0\\ 1 \end{array}\right),(a,b\neq0,a\neq-1).$ Therefore the vector space of outer derivations as $4\times 4$ matrices is as follows: $$\mathscr{R}_{e_{5}}=\begin{array}{llll} \left[\begin{matrix} a & 0 & 0 & 0\\ \frac{(3a-2)a_{2,3}+(a-2)b_{2,3}}{2a} & 2 & a_{2,3}& 1-a\\ 1-a & 0 & 1 & 0 \\ 0 & 0 & 0 & a+1 \end{matrix}\right] \end{array},\mathscr{R}_{e_{6}}=\begin{array}{llll} \left[\begin{matrix} b & 0 & 1 & 0\\ \frac{(3b-3)\alpha_{2,3}+(b-3)\beta_{2,3}}{2b} & 2 & \alpha_{2,3}& 2-b\\ 1-b & 0 & 0 & 0 \\ 0 & 0 & 0 & b \end{matrix}\right] \end{array}.$$ \noindent $(i)$ Considering $\mathscr{R}_{[e_5,e_6]},$ we obtain that $a:=1$ and we have the system of equations: $$\left\{ \begin{array}{ll} \alpha_{2,3}:=\frac{3a_{2,3}+b_{2,3}}{2} {,} \\ b^2a_{2,3}+(b^2-2b)b_{2,3}+(3-3b)\alpha_{2,3}+(3-b)\beta_{2,3}=0{.} \end{array} \right. 
$$ There are the following two cases: \begin{enumerate}[noitemsep, topsep=0pt] \item[(I)] If $b\neq3,$ then $\beta_{2,3}:=\frac{(2b-3)a_{2,3}+(2b-1)b_{2,3}}{2}$ and $\mathscr{R}_{[e_5,e_6]}=0.$ Consequently, $$\mathscr{R}_{e_{5}}=\begin{array}{llll} \left[\begin{matrix} 1 & 0 & 0 & 0\\ \frac{a_{2,3}-b_{2,3}}{2} & 2 & a_{2,3}& 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 2 \end{matrix}\right] \end{array},\mathscr{R}_{e_{6}}=\begin{array}{llll} \left[\begin{matrix} b & 0 & 1 & 0\\ \frac{b\cdot a_{2,3}+(b-2)b_{2,3}}{2} & 2 &\frac{3a_{2,3}+b_{2,3}}{2}& 2-b\\ 1-b & 0 & 0 & 0 \\ 0 & 0 & 0 & b \end{matrix}\right] \end{array},(b\neq0).$$ \noindent Further, we find the commutators: \allowdisplaybreaks \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber \mathscr{R}_{[e_{5},e_{1}]}=-\mathscr{R}_{e_1},\mathscr{R}_{[e_{5},e_2]}=0,\mathscr{R}_{[e_{5},e_3]}=-\mathscr{R}_{e_3},\mathscr{R}_{[e_{5},e_i]}=0,\mathscr{R}_{[e_{6},e_1]}=-b\mathscr{R}_{e_1}+(b-1)\mathscr{R}_{e_3},\\ \displaystyle \mathscr{R}_{[e_{6},e_2]}=0,\mathscr{R}_{[e_{6},e_{3}]}=-\mathscr{R}_{e_1}, \mathscr{R}_{[e_{6},e_i]}=0,(b\neq0,4\leq i\leq 6). \end{array} \right. \end{equation} \item[(II)] If $b:=3,$ then $\mathscr{R}_{[e_5,e_6]}=0$ and $\mathscr{R}_{e_5},\mathscr{R}_{e_6}$ are as follows: $$\mathscr{R}_{e_{5}}=\begin{array}{llll} \left[\begin{matrix} 1 & 0 & 0 & 0\\ \frac{a_{2,3}-b_{2,3}}{2} & 2& a_{2,3}& 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 2 \end{matrix}\right] \end{array}, \mathscr{R}_{e_{6}}=\begin{array}{llll} \left[\begin{matrix} 3 & 0 & 1 & 0\\ \frac{3a_{2,3}+b_{2,3}}{2} & 2 & \frac{3a_{2,3}+b_{2,3}}{2}& -1\\ -2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 3 \end{matrix}\right] \end{array}.$$ We have the following commutators: \allowdisplaybreaks \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber \mathscr{R}_{[e_{5},e_{1}]}=-\mathscr{R}_{e_1},\mathscr{R}_{[e_{5},e_2]}=0,\mathscr{R}_{[e_{5},e_3]}=-\mathscr{R}_{e_3},\mathscr{R}_{[e_{5},e_i]}=0,\mathscr{R}_{[e_{6},e_1]}=-3\mathscr{R}_{e_1}+2\mathscr{R}_{e_3},\\ \displaystyle \mathscr{R}_{[e_{6},e_2]}=0,\mathscr{R}_{[e_{6},e_{3}]}=-\mathscr{R}_{e_1}, \mathscr{R}_{[e_{6},e_i]}=0,(4\leq i\leq 6). \end{array} \right. \end{equation} \end{enumerate} \noindent $(ii)$ We combine cases (I) and (II) together and include a linear combination of $e_2$ and $e_4:$ \allowdisplaybreaks \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_{5},e_{1}]=-e_1+c_{2,1}e_2+c_{4,1}e_4,[e_{5},e_2]=c_{2,2}e_2+c_{4,2}e_4,[e_{5},e_3]=c_{2,3}e_2-e_3+c_{4,3}e_4,\\ \displaystyle [e_{5},e_i]=c_{2,i}e_2+c_{4,i}e_4,[e_{6},e_1]=-be_1+d_{2,1}e_2+(b-1)e_3+d_{4,1}e_4,[e_{6},e_2]=d_{2,2}e_2+d_{4,2}e_4,\\ \displaystyle [e_{6},e_{3}]=-e_1+d_{2,3}e_2+d_{4,3}e_4,[e_{6},e_i]=d_{2,i}e_2+d_{4,i}e_4,(b\neq0,4\leq i\leq 6). \end{array} \right. \end{equation} We also have the brackets from $\mathcal{L}^4$ and from outer derivations $\mathscr{R}_{e_{5}}$ and $\mathscr{R}_{e_{6}}$ as well. \noindent $(iii)$ To satisfy the right Leibniz identity, we apply the same identities as in Table \ref{RightCodimTwo(L4,(n=4))}. 
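For instance, the first identity of that table can be spelled out explicitly; the following is only an illustrative check, using the nilradical bracket $[e_1,e_1]=e_2$ together with the ansatz $[e_{5},e_2]=c_{2,2}e_2+c_{4,2}e_4$ from step $(ii)$:
\begin{equation}
\nonumber
\mathscr{R}_{e_1}\left([e_{5},e_{1}]\right)=[\mathscr{R}_{e_1}(e_5),e_1]+[e_5,\mathscr{R}_{e_1}(e_1)]
\Longleftrightarrow
[[e_{5},e_{1}],e_1]=[[e_{5},e_{1}],e_1]+[e_5,e_2].
\end{equation}
Hence $[e_{5},e_2]=c_{2,2}e_2+c_{4,2}e_4=0,$ so $c_{2,2}=c_{4,2}=0,$ exactly as recorded in the first row of Table \ref{RightCodimTwo(L4,(n=4))}.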
We have that $\mathscr{R}_{e_5}$ and $\mathscr{R}_{e_6}$ restricted to the nilradical do not change, but the remaining brackets are the following: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_{5},e_{1}]=-e_1+\frac{a_{2,3}-b_{2,3}}{2}e_2, [e_{5},e_3]=a_{2,3}e_2-e_3,[e_{5},e_4]=2\left(e_2-e_4\right),[e_5,e_5]=c_{2,5}e_2,\\ \displaystyle [e_5,e_6]=c_{2,6}e_2+\left(c_{2,5}-c_{2,6}\right)e_4,[e_{6},e_1]=-be_1+\left(\frac{2-b}{2}a_{2,3}-\frac{b}{2}b_{2,3}\right)e_2+ (b-1)e_3,\\ \displaystyle [e_{6},e_{3}]=-e_1+\frac{a_{2,3}-b_{2,3}}{2}e_2, [e_{6},e_4]=b\left(e_2-e_4\right), [e_6,e_5]=d_{2,5}e_2+\left(c_{2,6}-c_{2,5}\right)e_4,\\ \displaystyle [e_{6},e_6]=\left(d_{2,5}-c_{2,5}+c_{2,6}\right)e_2,(b\neq0). \end{array} \right. \end{equation} Altogether the nilradical $\mathcal{L}^4$ $(\ref{L4}),$ the outer derivations $\mathscr{R}_{e_{5}}$ and $\mathscr{R}_{e_{6}}$ written in the bracket notation and the remaining brackets given above define a continuous family of Leibniz algebras depending on the parameters. \noindent $(iv)\&(v)$ We apply the transformation: $e^{\prime}_1=e_1+\frac{b_{2,3}-a_{2,3}}{2}e_2, e^{\prime}_2=e_2,e^{\prime}_3=e_3-a_{2,3}e_2,e^{\prime}_4=e_4,e^{\prime}_5=e_5-\frac{c_{2,5}}{2}e_2, e^{\prime}_6=e_6-\frac{d_{2,5}}{2}e_2+\frac{c_{2,5}-c_{2,6}}{2}e_4$ and obtain a Leibniz algebra $\mathfrak{g}} \def\q{\mathfrak{q}_{6,4}$: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_5]=e_1,[e_2,e_5]=2e_2,[e_3,e_5]=e_3,[e_4,e_5]=2e_4,[e_{5},e_{1}]=-e_1,[e_{5},e_3]=-e_3,\\ \displaystyle [e_{5},e_4]=2\left(e_2-e_4\right),[e_1,e_6]=be_1+(1-b)e_3,[e_2,e_6]=2e_2,[e_3,e_6]=e_1,\\ \displaystyle [e_4,e_6]=(2-b)e_2+be_4,[e_{6},e_1]=-be_1+(b-1)e_3,[e_{6},e_{3}]=-e_1,[e_{6},e_4]=b\left(e_2-e_4\right),\\ \displaystyle (b\neq0). \end{array} \right. \end{equation} \noindent $(2)$ In this case we consider $\mathscr{R}_{[e_5,e_6]}$ and compare with $c_1\mathscr{R}_{e_1}+c_3\mathscr{R}_{e_3},$ where $$\mathscr{R}_{e_{5}}=\begin{array}{llll} \left[\begin{matrix} a & 0 & c & 0\\ \frac{(5a-3c)a_{2,3}+(3a-3c)b_{2,3}}{2a} & 2(c-a) &a_{2,3}& 2(c-a) \\ c-2a & 0 &-a & 0\\ 0 & 0 & 0 &0 \end{matrix}\right] \end{array},$$ $$\mathscr{R}_{e_{6}}=\begin{array}{llll} \left[\begin{matrix} \alpha & 0 & \gamma & 0\\ \frac{(5\alpha-3\gamma)\alpha_{2,3}+(3\alpha-3\gamma)\beta_{2,3}}{2\alpha} & 2(\gamma-\alpha) &\alpha_{2,3}& 2(\gamma-\alpha) \\ \gamma-2\alpha & 0 &-\alpha & 0\\ 0 & 0 & 0 &0 \end{matrix}\right] \end{array}.$$ We obtain that $a\gamma-\alpha c=0,$ where $a,\alpha\neq0,$ which is impossible, because $\left( \begin{array}{c} a\\ c \end{array}\right)$ and $\left(\begin{array}{c} \alpha\\ \gamma \end{array}\right)$ are supposed to be linearly independent. \noindent $(3)$ Considering $\mathscr{R}_{[e_5,e_6]}$ and comparing with $c_1\mathscr{R}_{e_1}+c_3\mathscr{R}_{e_3},$ where $$\mathscr{R}_{e_{5}}=\begin{array}{llll} \left[\begin{matrix} a & 0 & c & 0\\ a_{2,3}-b_{2,1}+b_{2,3} & 0 &a_{2,3}& c-a\\ -a & 0 &-c & 0\\ 0 & 0 & 0 &a-c \end{matrix}\right] \end{array},\mathscr{R}_{e_{6}}=\begin{array}{llll} \left[\begin{matrix} \alpha & 0 & \gamma & 0\\ \alpha_{2,3}-\beta_{2,1}+\beta_{2,3} & 0 &\alpha_{2,3}& \gamma-\alpha\\ -\alpha & 0 &-\gamma & 0\\ 0 & 0 & 0 &\alpha-\gamma \end{matrix}\right] \end{array},$$ we have to have that $a\gamma-\alpha c=0,$ where $a,\alpha\neq0,$ which is impossible. \noindent $(4)$ Finally we consider $\mathscr{R}_{[e_5,e_6]}$ and compare with $c_1\mathscr{R}_{e_1}+c_3\mathscr{R}_{e_3}$ as well. 
We obtain that $b\gamma-\beta c=0,$ where $b,\beta\neq0,$ which is impossible. \paragraph{Codimension two solvable extensions of $\mathcal{L}^4,(n\geq5)$} By taking a linear combination of $\mathscr{R}_{e_{n+1}}$ and $\mathscr{R}_{e_{n+2}}$ and keeping in mind that no nontrivial linear combination of the matrices $\mathscr{R}_{e_{n+1}}$ and $\mathscr{R}_{e_{n+2}}$ can be a nilpotent matrix, one could set $\left( \begin{array}{c} a \\ b \end{array}\right)=\left( \begin{array}{c} 1\\ 2 \end{array}\right)$ and $\left( \begin{array}{c} \alpha \\ \beta \end{array}\right)=\left( \begin{array}{c} 1\\ 1 \end{array}\right).$ Therefore the vector space of outer derivations as $n \times n$ matrices is as follows: { $$\mathscr{R}_{e_{n+1}}=\left[\begin{smallmatrix} 1 & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ 3b_{2,1}-2a_{2,3} & 4 & a_{2,3}& 1 &0&0 & \cdots &&0 & 0& 0\\ 1 & 0 & 2 & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & 3 &0 &0 &\cdots&&0 &0 & 0\\ 0 & 0 & a_{5,3} & 0 & 4 &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & a_{5,3} & 0 &5&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & a_{n-2,3}& a_{n-3,3}& \cdots&\cdots &a_{5,3}&0&n-3 &0& 0\\ 0 & 0 & a_{n-1,3}& a_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&a_{5,3}&0 &n-2& 0\\ 0 & 0 & a_{n,3}& a_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&a_{5,3} &0& n-1 \end{smallmatrix}\right],$$} { $$ \mathscr{R}_{e_{n+2}}=\left[\begin{smallmatrix} 1 & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ \beta_{2,1} & 2 & \alpha_{2,3}& 0 &0&0 & \cdots &&0 & 0& 0\\ 0 & 0 & 1 & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & 2 &0 &0 &\cdots&&0 &0 & 0\\ 0 & 0 & \alpha_{5,3} & 0 & 3 &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \alpha_{5,3} & 0 &4&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & \alpha_{n-2,3}& \alpha_{n-3,3}& \cdots&\cdots &\alpha_{5,3}&0&n-4 &0& 0\\ 0 & 0 & \alpha_{n-1,3}& \alpha_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&\alpha_{5,3}&0 &n-3& 0\\ 0 & 0 & \alpha_{n,3}& \alpha_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&\alpha_{5,3} &0& n-2 \end{smallmatrix}\right].$$} \noindent $(i)$ Considering $\mathscr{R}_{[e_{n+1},e_{n+2}]},$ which is the same as $[\mathscr{R}_{e_{n+2}},\mathscr{R}_{e_{n+1}}],$ we deduce that $\alpha_{i,3}:=a_{i,3},(5\leq i\leq n),$ $\alpha_{2,3}:=\frac{a_{2,3}}{2}$ and $\beta_{2,1}:=b_{2,1}-\frac{a_{2,3}}{2}.$ As a result, the outer derivation $\mathscr{R}_{e_{n+2}}$ changes as follows: $$\mathscr{R}_{e_{n+2}}=\left[\begin{smallmatrix} 1 & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ b_{2,1}-\frac{a_{2,3}}{2} & 2 & \frac{a_{2,3}}{2}& 0 &0&0 & \cdots &&0 & 0& 0\\ 0 & 0 & 1 & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & 2 &0 &0 &\cdots&&0 &0 & 0\\ 0 & 0 & a_{5,3} & 0 & 3 &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & a_{5,3} & 0 &4&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & a_{n-2,3}& a_{n-3,3}& \cdots&\cdots &a_{5,3}&0&n-4 &0& 0\\ 0 & 0 & a_{n-1,3}& a_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&a_{5,3}&0 &n-3& 0\\ 0 & 0 & a_{n,3}& a_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&a_{5,3} &0& n-2 \end{smallmatrix}\right].$$ Altogether we find the following 
commutators: \allowdisplaybreaks \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber \mathscr{R}_{[e_{n+1},e_{1}]}=-\mathscr{R}_{e_1}-\mathscr{R}_{e_3},\mathscr{R}_{[e_{n+1},e_2]}=0,\mathscr{R}_{[e_{n+1},e_i]}=(1-i)\mathscr{R}_{e_i}-\sum_{k=i+2}^{n-1}{a_{k-i+3,3}\mathscr{R}_{e_k}},\\ \displaystyle \mathscr{R}_{[e_{n+1},e_j]}=0,(n\leq j\leq n+1),\mathscr{R}_{[e_{n+1},e_{n+2}]}=-\sum_{k=4}^{n-1}{a_{k+1,3}\mathscr{R}_{e_{k}}},\mathscr{R}_{[e_{n+2},e_1]}=-\mathscr{R}_{e_1},\\ \displaystyle \mathscr{R}_{[e_{n+2},e_2]}=0,\mathscr{R}_{[e_{n+2},e_{i}]}=(2-i)\mathscr{R}_{e_i}-\sum_{k=i+2}^{n-1}{a_{k-i+3,3}\mathscr{R}_{e_k}},(3\leq i\leq n-1),\\ \displaystyle \mathscr{R}_{[e_{n+2},e_n]}=0,\mathscr{R}_{[e_{n+2},e_{n+1}]}=\sum_{k=4}^{n-1}{a_{k+1,3}\mathscr{R}_{e_{k}}},\mathscr{R}_{[e_{n+2},e_{n+2}]}=0. \end{array} \right. \end{equation} \noindent $(ii)$ We include a linear combination of $e_2$ and $e_n:$ \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_{n+1},e_{1}]=-e_1+c_{2,1}e_2-e_3+c_{n,1}e_n,[e_{n+1},e_2]=c_{2,2}e_2+c_{n,2}e_n,[e_{n+1},e_i]=c_{2,i}e_2+\\ \displaystyle(1-i)e_i-\sum_{k=i+2}^{n-1}{a_{k-i+3,3}e_k}+c_{n,i}e_n,[e_{n+1},e_j]=c_{2,j}e_2+c_{n,j}e_n,(n\leq j\leq n+1),\\ \displaystyle [e_{n+1},e_{n+2}]=c_{2,n+2}e_2-\sum_{k=4}^{n-1}{a_{k+1,3}e_{k}}+ c_{n,n+2}e_n,[e_{n+2},e_1]=-e_1+d_{2,1}e_2+d_{n,1}e_n,\\ \displaystyle [e_{n+2},e_2]=d_{2,2}e_2+d_{n,2}e_n, [e_{n+2},e_{i}]=d_{2,i}e_2+(2-i)e_i-\sum_{k=i+2}^{n-1}{a_{k-i+3,3}e_k}+d_{n,i}e_n,\\ \displaystyle (3\leq i\leq n-1),[e_{n+2},e_n]=d_{2,n}e_2+d_{n,n}e_n,[e_{n+2},e_{n+1}]=d_{2,n+1}e_2+\sum_{k=4}^{n-1}{a_{k+1,3}e_{k}}+\\ \displaystyle d_{n,n+1}e_n, [e_{n+2},e_{n+2}]=d_{2,n+2}e_2+d_{n,n+2}e_n. \end{array} \right. \end{equation} Besides we have the brackets from $\mathcal{L}^4$ and from outer derivations $\mathscr{R}_{e_{n+1}}$ and $\mathscr{R}_{e_{n+2}}$ as well. \noindent $(iii)$ We satisfy the right Leibniz identity shown in Table \ref{RightCodimTwo(L4)}. We notice that $\mathscr{R}_{e_{n+1}}$ and $\mathscr{R}_{e_{n+2}}$ restricted to the nilradical do not change, but the remaining brackets are as follows: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_{n+1},e_{1}]=-e_1+b_{2,1}e_2-e_3,[e_{n+1},e_3]=a_{2,3}e_2-2e_3-\sum_{k=5}^n{a_{k,3}e_k}, [e_{n+1},e_4]=3\left(e_2-e_4\right)-\\ \displaystyle \sum_{k=6}^n{a_{k-1,3}e_k}, [e_{n+1},e_i]=(1-i)e_i-\sum_{k=i+2}^{n}{a_{k-i+3,3}e_k},[e_{n+1},e_{n+1}]=\left(2c_{2,n+2}-2a_{5,3}\right)e_2,\\ \displaystyle [e_{n+1},e_{n+2}]=c_{2,n+2}e_2-\sum_{k=4}^{n-1}{a_{k+1,3}e_{k}}+ c_{n,n+2}e_n,[e_{n+2},e_1]=-e_1+\left(b_{2,1}-\frac{a_{2,3}}{2}\right)e_2,\\ \displaystyle [e_{n+2},e_3]=\frac{a_{2,3}}{2}e_2-e_3-\sum_{k=5}^n{a_{k,3}e_k}, [e_{n+2},e_4]=2\left(e_2-e_4\right)-\sum_{k=6}^n{a_{k-1,3}e_k},\\ \displaystyle [e_{n+2},e_{i}]=(2-i)e_i-\sum_{k=i+2}^{n}{a_{k-i+3,3}e_k}, (5\leq i\leq n),[e_{n+2},e_{n+1}]=d_{2,n+1}e_2+\sum_{k=4}^{n-1}{a_{k+1,3}e_{k}}-\\ \displaystyle c_{n,n+2}e_n, [e_{n+2},e_{n+2}]=\frac{d_{2,n+1}+a_{5,3}}{2}e_2. \end{array} \right. \end{equation} \begin{table}[h!] 
\caption{Right Leibniz identities in the codimension two nilradical $\mathcal{L}^4,(n\geq5)$.} \label{RightCodimTwo(L4)} \begin{tabular}{lp{2.4cm}p{12cm}} \hline \scriptsize Steps &\scriptsize Ordered triple &\scriptsize Result\\ \hline \scriptsize $1.$ &\scriptsize $\mathscr{R}_{e_1}\left([e_{n+1},e_{1}]\right)$ &\scriptsize $[e_{n+1},e_2]=0$ $\implies$ $c_{2,2}=c_{n,2}=0.$\\ \hline \scriptsize $2.$ &\scriptsize $\mathscr{R}_{e_1}\left([e_{n+2},e_{1}]\right)$ &\scriptsize $[e_{n+2},e_2]=0$ $\implies$ $d_{2,2}=d_{n,2}=0.$\\ \hline \scriptsize $3.$ &\scriptsize $\mathscr{R}_{e_3}\left([e_{n+1},e_{1}]\right)$ &\scriptsize $c_{2,4}:=3,c_{n,4}:=-a_{n-1,3},$ where $a_{4,3}=0$ $\implies$ $[e_{n+1},e_4]=3(e_2-e_4)-\sum_{k=6}^{n}{a_{k-1,3}e_k}.$ \\ \hline \scriptsize $4.$ &\scriptsize $\mathscr{R}_{e_i}\left([e_{n+1},e_{1}]\right)$ &\scriptsize $c_{2,i+1}=0,c_{n,i+1}:=-a_{n-i+2,3},(4\leq i\leq n-2),$ where $a_{4,3}=0$ $\implies$ $[e_{n+1},e_j]=\left(1-j\right)e_j-\sum_{k=j+2}^{n}{a_{k-j+3,3}e_k},(5\leq j\leq n-1).$ \\ \hline \scriptsize $5.$ &\scriptsize $\mathscr{R}_{e_{n-1}}\left([e_{n+1},e_{1}]\right)$ &\scriptsize $c_{2,n}=0,$ $c_{n,n}:=1-n$ $\implies$ $[e_{n+1},e_{n}]=\left(1-n\right)e_{n}.$ Altogether with $4.,$ $[e_{n+1},e_i]=\left(1-i\right)e_i-\sum_{k=i+2}^{n}{a_{k-i+3,3}e_k},(5\leq i\leq n).$ \\ \hline \scriptsize $6.$ &\scriptsize $\mathscr{R}_{e_3}\left([e_{n+2},e_{1}]\right)$ &\scriptsize $d_{2,4}:=2,d_{n,4}:=-a_{n-1,3},$ where $a_{4,3}=0$ $\implies$ $[e_{n+2},e_4]=2(e_2-e_4)-\sum_{k=6}^{n}{a_{k-1,3}e_k}.$ \\ \hline \scriptsize $7.$ &\scriptsize $\mathscr{R}_{e_i}\left([e_{n+2},e_{1}]\right)$ &\scriptsize $d_{2,i+1}=0,d_{n,i+1}:=-a_{n-i+2,3},(4\leq i\leq n-2),$ where $a_{4,3}=0$ $\implies$ $[e_{n+2},e_j]=\left(2-j\right)e_j-\sum_{k=j+2}^{n}{a_{k-j+3,3}e_k},(5\leq j\leq n-1).$ \\ \hline \scriptsize $8.$ &\scriptsize $\mathscr{R}_{e_{n-1}}\left([e_{n+2},e_{1}]\right)$ &\scriptsize $d_{2,n}=0,$ $d_{n,n}:=2-n$ $\implies$ $[e_{n+2},e_{n}]=\left(2-n\right)e_{n}.$ Combining with $7.,$ $[e_{n+2},e_i]=\left(2-i\right)e_i-\sum_{k=i+2}^{n}{a_{k-i+3,3}e_k},(5\leq i\leq n).$ \\ \hline \scriptsize $9.$ &\scriptsize $\mathscr{R}_{e_{n+1}}\left([e_{n+1},e_{n+1}]\right)$ &\scriptsize $c_{n,n+1}=0$ $\implies$ $[e_{n+1},e_{n+1}]=c_{2,n+1}e_2.$ \\ \hline \scriptsize $10.$ &\scriptsize $\mathscr{R}_{e_{n+2}}\left([e_{n+1},e_{n+2}]\right)$ &\scriptsize $d_{n,n+2}=0$ $\implies$ $[e_{n+2},e_{n+2}]=d_{2,n+2}e_2.$ \\ \hline \scriptsize $11.$ &\scriptsize $\mathscr{R}_{e_{n+1}}\left([e_{n+1},e_{3}]\right)$ &\scriptsize $c_{2,3}:=a_{2,3}, c_{n,3}:=-a_{n,3}$ $\implies$ $[e_{n+1},e_3]=a_{2,3}e_2-2e_3-\sum_{k=5}^n{a_{k,3}e_k}.$\\ \hline \scriptsize $12.$ &\scriptsize $\mathscr{R}_{e_{n+2}}\left([e_{n+2},e_{3}]\right)$ &\scriptsize $d_{2,3}:=\frac{a_{2,3}}{2},$ $d_{n,3}:=-a_{n,3}$ $\implies$ $[e_{n+2},e_{3}]=\frac{a_{2,3}}{2}e_2-e_3-\sum_{k=5}^n{a_{k,3}e_k}.$ \\ \hline \scriptsize $13.$ &\scriptsize $\mathscr{R}_{e_{n+2}}\left([e_{n+1},e_{1}]\right)$ &\scriptsize $c_{2,1}:=b_{2,1},c_{n,1}=0$ $\implies$ $[e_{n+1},e_{1}]=-e_1+b_{2,1}e_2-e_3$. 
\\ \hline \scriptsize $14.$ &\scriptsize $\mathscr{R}_{e_{n+2}}\left([e_{n+2},e_{1}]\right)$ &\scriptsize $d_{2,1}:=b_{2,1}-\frac{a_{2,3}}{2},d_{n,1}=0$ $\implies$ $[e_{n+2},e_1]=-e_1+\left(b_{2,1}-\frac{a_{2,3}}{2}\right)e_2.$ \\ \hline \scriptsize $15.$ &\scriptsize $\mathscr{R}_{e_{n+1}}\left([e_{n+1},e_{n+2}]\right)$ &\scriptsize $c_{2,n+1}:=2c_{2,n+2}-2a_{5,3},d_{n,n+1}:=-c_{n,n+2}$ $\implies$ $[e_{n+1},e_{n+1}]=\left(2c_{2,n+2}-2a_{5,3}\right)e_2, [e_{n+2},e_{n+1}]=d_{2,n+1}e_2+\sum_{k=4}^{n-1}a_{k+1,3}e_k-c_{n,n+2}e_n.$ \\ \hline \scriptsize $16.$ &\scriptsize $\mathscr{R}_{e_{n+2}}\left([e_{n+2},e_{n+1}]\right)$ &\scriptsize $d_{2,n+2}:=\frac{d_{2,n+1}+a_{5,3}}{2}$ $\implies$ $[e_{n+2},e_{n+2}]=\frac{d_{2,n+1}+a_{5,3}}{2}e_2.$\\ \hline \end{tabular} \end{table} Altogether the nilradical $\mathcal{L}^4$ $(\ref{L4}),$ the outer derivations $\mathscr{R}_{e_{n+1}}$ and $\mathscr{R}_{e_{n+2}}$ and the remaining brackets given above define a continuous family of solvable right Leibniz algebras depending on the parameters. Then we apply the technique of ``absorption'' according to step $(iv)$. \begin{itemize}[noitemsep, topsep=0pt] \item First we apply the transformation $e^{\prime}_i=e_i,(1\leq i\leq n,n\geq5),e^{\prime}_{n+1}=e_{n+1}-\frac{c_{2,n+2}}{2}e_2, e^{\prime}_{n+2}=e_{n+2}-\frac{d_{2,n+1}+a_{5,3}}{4}e_2$ to remove the coefficients $c_{2,n+2}$ and $\frac{d_{2,n+1}+a_{5,3}}{2}$ in front of $e_2$ in $[e_{n+1},e_{n+2}]$ and $[e_{n+2},e_{n+2}],$ respectively. This transformation changes the coefficient in front of $e_2$ in $[e_{n+1},e_{n+1}]$ and $[e_{n+2},e_{n+1}]$ to $-2a_{5,3}$ and $-a_{5,3},$ respectively. \item Then we apply $e^{\prime}_i=e_i,(1\leq i\leq n,n\geq5),e^{\prime}_{n+1}=e_{n+1}+\frac{a_{5,3}}{2}e_4, e^{\prime}_{n+2}=e_{n+2}$ to remove the coefficients $-a_{5,3}$ and $-2a_{5,3}$ in front of $e_2$ in $[e_{n+2},e_{n+1}]$ and $[e_{n+1},e_{n+1}]$, respectively. Besides this transformation removes $a_{5,3}$ and $-a_{5,3}$ in front of $e_4$ in $[e_{n+2},e_{n+1}]$ and $[e_{n+1},e_{n+2}],$ respectively, and changes the coefficients in front of $e_{k},(6\leq k\leq n-1)$ in $[e_{n+2},e_{n+1}]$ and $[e_{n+1},e_{n+2}]$ to $a_{k+1,3}-\frac{a_{5,3}}{2}a_{k-1,3}$ and $\frac{a_{5,3}}{2}a_{k-1,3}-a_{k+1,3},$ respectively. It also affects the coefficients in front $e_n,(n\geq6)$ in $[e_{n+1},e_{n+2}]$ and $[e_{n+2},e_{n+1}],$ which we rename back by $c_{n,n+2}$ and $-c_{n,n+2},$ respectively. The following entries are introduced by the transformation: $-\frac{a_{5,3}}{2}$ and $\frac{a_{5,3}}{2}$ in the $(5,1)^{st},(n\geq5)$ position in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}},$ respectively. \item Applying the transformation $e^{\prime}_j=e_j,(1\leq j\leq n+1,n\geq6), e^{\prime}_{n+2}=e_{n+2}-\sum_{k=5}^{n-1}{\frac{A_{k+1,3}}{k-1}e_k},$ where $A_{6,3}:=a_{6,3}$ and $A_{k+1,3}:=a_{k+1,3}-\frac{1}{2}a_{5,3}a_{k-1,3}-\sum_{i=7}^k{\frac{A_{i-1,3}a_{k-i+5,3}}{i-3}},$ $(6\leq k\leq n-1,n\geq7),$ we remove the coefficients $a_{6,3}$ and $-a_{6,3}$ in front of $e_5$ in $[e_{n+2},e_{n+1}]$ and $[e_{n+1},e_{n+2}],$ respectively. We also remove $a_{k+1,3}-\frac{a_{5,3}}{2}a_{k-1,3}$ and $\frac{a_{5,3}}{2}a_{k-1,3}-a_{k+1,3}$ in front of $e_k,(6\leq k\leq n-1)$ in $[e_{n+2},e_{n+1}]$ and $[e_{n+1},e_{n+2}],$ respectively. 
This transformation introduces $\frac{a_{6,3}}{4}$ and $-\frac{a_{6,3}}{4}$ in the $(6,1)^{st},(n\geq6)$ position as well as $\frac{A_{k+1,3}}{k-1}$ and $\frac{A_{k+1,3}}{1-k}$ in the $(k+1,1)^{st},(6\leq k\leq n-1)$ in $\mathscr{R}_{e_{n+2}}$ and $\mathscr{L}_{e_{n+2}},$ respectively. It also affects the coefficients in front $e_n,(n\geq7)$ in $[e_{n+1},e_{n+2}]$ and $[e_{n+2},e_{n+1}],$ which we rename back by $c_{n,n+2}$ and $-c_{n,n+2},$ respectively. \item Finally applying the transformation $e^{\prime}_i=e_i,(1\leq i\leq n+1,n\geq5), e^{\prime}_{n+2}=e_{n+2}+\frac{c_{n,n+2}}{n-1}e_n,$ we remove $c_{n,n+2}$ and $-c_{n,n+2}$ in front of $e_n$ in $[e_{n+1},e_{n+2}]$ and $[e_{n+2},e_{n+1}],$ respectively, without affecting other entries. We obtain that $\mathscr{R}_{e_{n+1}}$ and $\mathscr{R}_{e_{n+2}}$ are as follows: \end{itemize} {$$\mathscr{R}_{e_{n+1}}=\left[\begin{smallmatrix} 1 & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ 3b_{2,1}-2a_{2,3} & 4 & a_{2,3}& 1 &0&0 & \cdots &&0 & 0& 0\\ 1 & 0 & 2 & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & 3 &0 &0 &\cdots&&0 &0 & 0\\ -\frac{a_{5,3}}{2} & 0 & a_{5,3} & 0 & 4 &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & a_{5,3} & 0 &5&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & a_{n-2,3}& a_{n-3,3}& \cdots&\cdots &a_{5,3}&0&n-3 &0& 0\\ 0 & 0 & a_{n-1,3}& a_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&a_{5,3}&0 &n-2& 0\\ 0 & 0 & a_{n,3}& a_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&a_{5,3} &0& n-1 \end{smallmatrix}\right],$$} $$\mathscr{R}_{e_{n+2}}=\left[\begin{smallmatrix} 1 & 0 & 0 & 0&0&0&0&\cdots && 0&0 & 0\\ b_{2,1}-\frac{a_{2,3}}{2} & 2 & \frac{a_{2,3}}{2}& 0 &0&0 & 0&\cdots &&0 & 0& 0\\ 0 & 0 & 1 & 0 & 0&0 &0&\cdots &&0 &0& 0\\ 0 & 0 & 0 & 2 &0 &0 &0&\cdots&&0 &0 & 0\\ 0 & 0 & a_{5,3} & 0 & 3 &0&0&\cdots & &0&0 & 0\\ \frac{a_{6,3}}{4} & 0 &\boldsymbol{\cdot} & a_{5,3} & 0 &4&0&\cdots & &0&0 & 0\\ \frac{A_{7,3}}{5} & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & a_{5,3}&0 &5& &&\vdots&\vdots &\vdots\\ \frac{A_{8,3}}{6} & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} &\boldsymbol{\cdot} &a_{5,3}&0 &\ddots&&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &&\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ \frac{A_{n-2,3}}{n-4} & 0 & a_{n-2,3}& a_{n-3,3}& \cdots&\cdots&\cdots &a_{5,3}&0&n-4 &0& 0\\ \frac{A_{n-1,3}}{n-3} & 0 & a_{n-1,3}& a_{n-2,3}& \cdots&\cdots&\cdots &\boldsymbol{\cdot}&a_{5,3}&0 &n-3& 0\\ \frac{A_{n,3}}{n-2} & 0 & a_{n,3}&a_{n-1,3}& \cdots&\cdots &\cdots&\boldsymbol{\cdot}&\boldsymbol{\cdot}&a_{5,3} &0& n-2 \end{smallmatrix}\right],(n\geq5).$$ The remaining brackets are given below: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_{n+1},e_{1}]=-e_1+b_{2,1}e_2-e_3+\frac{a_{5,3}}{2}e_5,[e_{n+1},e_3]=a_{2,3}e_2-2e_3-\sum_{k=5}^n{a_{k,3}e_k},\\ \displaystyle [e_{n+1},e_4]=3\left(e_2-e_4\right)-\sum_{k=6}^n{a_{k-1,3}e_k}, [e_{n+1},e_i]=(1-i)e_i-\sum_{k=i+2}^{n}{a_{k-i+3,3}e_k},\\ \displaystyle [e_{n+2},e_1]=-e_1+\left(b_{2,1}-\frac{a_{2,3}}{2}\right)e_2-\sum_{k=6}^n{\frac{A_{k,3}}{k-2}e_k},[e_{n+2},e_3]=\frac{a_{2,3}}{2}e_2-e_3-\sum_{k=5}^n{a_{k,3}e_k},\\ \displaystyle [e_{n+2},e_4]=2\left(e_2-e_4\right)-\sum_{k=6}^n{a_{k-1,3}e_k},[e_{n+2},e_{i}]=(2-i)e_i-\sum_{k=i+2}^{n}{a_{k-i+3,3}e_k},(5\leq i\leq n). \end{array} \right. 
\end{equation} \noindent $(v)$ Finally we apply the following two change of basis transformations: \begin{itemize}[noitemsep, topsep=0pt] \item $e^{\prime}_1=e_1-\left(b_{2,1}-\frac{a_{2,3}}{2}\right)e_2,e^{\prime}_2=e_2, e^{\prime}_3=e_3-\frac{a_{2,3}}{2}e_2, e^{\prime}_i=e_i-\sum_{k=i+2}^n{\frac{B_{k-i+3,3}}{k-i}e_k},$ $(3\leq i\leq n-2),e^{\prime}_{n-1}=e_{n-1},e^{\prime}_{n}=e_{n},e^{\prime}_{n+1}=e_{n+1},e^{\prime}_{n+2}=e_{n+2},$ where $B_{j,3}:=a_{j,3}-\sum_{k=7}^j{\frac{B_{k-2,3}a_{j-k+5,3}}{k-5}},(5\leq j\leq n)$.\\ This transformation removes $a_{5,3},a_{6,3},...,a_{n,3}$ in $\mathscr{R}_{e_{n+1}},$ $\mathscr{R}_{e_{n+2}}$ and $-a_{5,3},-a_{6,3},...,-a_{n,3}$ in $\mathscr{L}_{e_{n+1}},\mathscr{L}_{e_{n+2}}.$ It also removes $-\frac{a_{5,3}}{2}$ and $\frac{a_{5,3}}{2}$ from the $(5,1)^{st}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}},$ respectively. Besides it removes $3b_{2,1}-2a_{2,3}$ and $b_{2,1}$ from the $(2,1)^{st}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}},$ respectively, as well as $b_{2,1}-\frac{a_{2,3}}{2}$ from the $(2,1)^{st}$ positions in $\mathscr{R}_{e_{n+2}}$ and $\mathscr{L}_{e_{n+2}}.$ The transformation also removes $a_{2,3}$ from the $(2,3)^{rd}$ positions in $\mathscr{R}_{e_{n+1}},$ $\mathscr{L}_{e_{n+1}}$ and $\frac{a_{2,3}}{2}$ from the same positions in $\mathscr{R}_{e_{n+2}},$ $\mathscr{L}_{e_{n+2}}.$ This transformation introduces the entries in the $(i,1)^{st}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}},$ which we set to be $a_{i,1}$ and $-a_{i,1},(6\leq i\leq n),$ respectively. The transformation affects the entries in the $(j,1)^{st},(8\leq j\leq n)$ positions in $\mathscr{R}_{e_{n+2}}$ and $\mathscr{L}_{e_{n+2}},$ but we rename all the entries in the $(i,1)^{st}$ positions by $\frac{i-3}{i-2}a_{i,1}$ in $\mathscr{R}_{e_{n+2}}$ and by $\frac{3-i}{i-2}a_{i,1},(6\leq i\leq n)$ in $\mathscr{L}_{e_{n+2}}.$ \item Finally applying the transformation $e_k^{\prime}=e_k,(1\leq k\leq n),e^{\prime}_{n+1}=e_{n+1}+\sum_{k=5}^{n-1}{a_{k+1,1}e_k},$ $e^{\prime}_{n+2}=e_{n+2}+\sum_{k=5}^{n-1}{\frac{k-2}{k-1}a_{k+1,1}e_k},$ we remove $a_{i,1}$ in $\mathscr{R}_{e_{n+1}}$ and $-a_{i,1}$ in $\mathscr{L}_{e_{n+1}}$ as well as $\frac{i-3}{i-2}a_{i,1}$ and $\frac{3-i}{i-2}a_{i,1},(6\leq i\leq n)$ in $\mathscr{R}_{e_{n+2}}$ and $\mathscr{L}_{e_{n+2}},$ respectively. \end{itemize} We obtain a Leibniz algebra $\mathfrak{g}} \def\q{\mathfrak{q}_{n+2,1}$ given below: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=e_1+e_3,[e_2,e_{n+1}]=4e_2,[e_3,e_{n+1}]=2e_3, [e_4,e_{n+1}]=e_2+3e_4,\\ \displaystyle [e_{i},e_{n+1}]=(i-1)e_i,[e_{n+1},e_{1}]=-e_1-e_3,[e_{n+1},e_3]=-2e_3, [e_{n+1},e_4]=3(e_2-e_4),\\ \displaystyle [e_{n+1},e_{i}]=(1-i)e_i,[e_{1},e_{n+2}]=e_1,[e_2,e_{n+2}]=2e_2,[e_j,e_{n+2}]=(j-2)e_j,(3\leq j\leq n),\\ \displaystyle [e_{n+2},e_1]=-e_1,[e_{n+2},e_3]=-e_3,[e_{n+2},e_4]=2(e_2-e_4),[e_{n+2},e_i]=(2-i)e_i,(5\leq i\leq n).\end{array} \right. 
\end{equation} We summarize the result in the following theorem: \begin{theorem}\label{RCodim2L4} There are four solvable indecomposable right Leibniz algebras up to isomorphism with a codimension two nilradical $\mathcal{L}^4,(n\geq4),$ which are given below:
\begin{equation} \begin{array}{l} \displaystyle \nonumber (i)\,\mathfrak{g}_{n+2,1}: [e_1,e_{n+1}]=e_1+e_3,[e_2,e_{n+1}]=4e_2,[e_3,e_{n+1}]=2e_3, [e_4,e_{n+1}]=e_2+3e_4,\\ \displaystyle [e_{i},e_{n+1}]=(i-1)e_i,[e_{n+1},e_{1}]=-e_1-e_3,[e_{n+1},e_3]=-2e_3, [e_{n+1},e_4]=3(e_2-e_4),\\ \displaystyle [e_{n+1},e_{i}]=(1-i)e_i,[e_{1},e_{n+2}]=e_1,[e_2,e_{n+2}]=2e_2,[e_j,e_{n+2}]=(j-2)e_j,(3\leq j\leq n),\\ \displaystyle [e_{n+2},e_1]=-e_1,[e_{n+2},e_3]=-e_3,[e_{n+2},e_4]=2(e_2-e_4),[e_{n+2},e_i]=(2-i)e_i,(5\leq i\leq n),\\ \displaystyle (n\geq5),\\ \displaystyle (ii)\,\mathfrak{g}_{6,2}: [e_1,e_5]=e_1,[e_2,e_5]=2e_2,[e_3,e_5]=e_3,[e_4,e_5]=2e_4,[e_{5},e_{1}]=-e_1,[e_{5},e_3]=-e_3,\\ \displaystyle [e_{5},e_4]=2\left(e_2-e_4\right), [e_1,e_6]=e_1+be_3,[e_2,e_6]=2(b+1)e_2,[e_3,e_6]=e_1+be_3,\\ \displaystyle[e_4,e_6]=(b+1)\left(e_2+e_4\right), [e_{6},e_1]=-e_1-be_3,[e_{6},e_{3}]=-e_1-be_3, [e_{6},e_4]=(b+1)\left(e_2-e_4\right),\\ \displaystyle (b\neq-1),\\ \displaystyle (iii)\, \mathfrak{g}_{6,3}: [e_1,e_5]=e_1+(c+1)e_3,[e_2,e_5]=2(c+2)e_2,[e_3,e_5]=ce_1+2e_3,\\ \displaystyle [e_4,e_5]=(2c+1)e_2+3e_4,[e_{5},e_{1}]=-e_1-(c+1)e_3,[e_{5},e_3]=-ce_1-2e_3,[e_{5},e_4]=3\left(e_2-e_4\right),\\ \displaystyle [e_1,e_6]=e_1,[e_2,e_6]=2e_2,[e_3,e_6]=e_3,[e_4,e_6]=2e_4, [e_{6},e_1]=-e_1,[e_{6},e_{3}]=-e_3,\\ \displaystyle [e_{6},e_4]=2\left(e_2-e_4\right),(c\neq-2),\\ \displaystyle (iv)\,\mathfrak{g}_{6,4}: [e_1,e_5]=e_1,[e_2,e_5]=2e_2,[e_3,e_5]=e_3,[e_4,e_5]=2e_4,[e_{5},e_{1}]=-e_1,[e_{5},e_3]=-e_3,\\ \displaystyle [e_{5},e_4]=2\left(e_2-e_4\right),[e_1,e_6]=be_1+(1-b)e_3,[e_2,e_6]=2e_2,[e_3,e_6]=e_1,[e_4,e_6]=(2-b)e_2+\\ \displaystyle be_4,[e_{6},e_1]=-be_1+(b-1)e_3,[e_{6},e_{3}]=-e_1,[e_{6},e_4]=b\left(e_2-e_4\right),(b\neq0). \end{array} \end{equation} \end{theorem} \paragraph{Codimension three solvable extensions of $\mathcal{L}^4,(n=4)$}\label{Codim3} Considering $\mathscr{R}_{[e_5,e_6]},\mathscr{R}_{[e_5,e_7]}$ and $\mathscr{R}_{[e_6,e_7]}$ and comparing with $c_1\mathscr{R}_{e_1}+c_3\mathscr{R}_{e_3},$ where $$\mathscr{R}_{e_{i}}=\begin{array}{llll} \left[\begin{matrix} a_i & 0 & c_i & 0\\ \frac{(3a_i-2b_i-3c_i){a_i}_{2,3}+(a_i-2b_i-3c_i){b_i}_{2,3}}{2a_i} & 2(b_i+c_i) &{a_i}_{2,3}& 2c_i+b_i-a_i \\ b_i+c_i-a_i & 0 &b_i & 0\\ 0 & 0 & 0 &a_i+b_i \end{matrix}\right] \end{array},(5\leq i\leq7),$$ we must have, respectively, that \begin{equation}\nonumber a_1c_2-a_2c_1-b_1c_2+b_2c_1=0, a_1c_3-a_3c_1-b_1c_3+b_3c_1=0, a_2c_3-a_3c_2-b_2c_3+b_3c_2=0.
\end{equation} However, this means that $\left( \begin{array}{c} a_1\\ b_1\\ c_1 \end{array}\right),\left( \begin{array}{c} a_2\\ b_2\\ c_2 \end{array}\right)$ and $\left( \begin{array}{c} a_3\\ b_3\\ c_3 \end{array}\right)$ are linearly dependent: indeed, by the relations above, $\det\begin{pmatrix} a_1 & a_2 &a_3 \\ b_1 & b_2 &b_3\\ c_1 &c_2 &c_3 \end{pmatrix}=a_1\left(b_2c_3-b_3c_2\right)-a_2\left(b_1c_3-b_3c_1\right)+a_3\left(b_1c_2-b_2c_1\right)= a_1\left(a_2c_3-a_3c_2\right)-a_2\left(a_1c_3-a_3c_1\right)+a_3\left(a_1c_2-a_2c_1\right)=0.$ \subsection{Solvable indecomposable left Leibniz algebras with a nilradical $\mathcal{L}^4$} \subsubsection{Codimension one solvable extensions of $\mathcal{L}^4$} The classification follows the same steps as before, and the theorems split into the same cases at each step. \begin{theorem}\label{TheoremLL4} We set $a_{1,1}:=a$ and $a_{3,3}:=b$ in $(\ref{BRLeibniz})$. To satisfy the left Leibniz identity, there are the following cases based on the conditions involving the parameters, each of which gives a continuous family of solvable Leibniz algebras: \begin{enumerate}[noitemsep, topsep=0pt] \item[(1)] If $a_{1,3}=0, b\neq-a,a\neq0,b\neq0,(n=4)$ or $b\neq(3-n)a,a\neq0,b\neq0,(n\geq5),$ then we have the following brackets for the algebra:
\begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=ae_1+a_{2,1}e_2+(b-a)e_3+\sum_{k=4}^n{a_{k,1}e_k}, [e_3,e_{n+1}]=a_{2,3}e_2+be_3+ \sum_{k=4}^n{a_{k,3}e_k},\\ \displaystyle[e_4,e_{n+1}]=(a+b)(e_4-e_2)+\sum_{k=5}^n{a_{k-1,3}e_k}, [e_{i},e_{n+1}]=\left((i-3)a+b\right)e_{i}+\sum_{k=i+1}^n{a_{k-i+3,3}e_k},\\ \displaystyle [e_{n+1},e_{n+1}]=a_{2,n+1}e_2,[e_{n+1},e_1]=-ae_1+B_{2,1}e_2+(a-b)e_3-\sum_{k=4}^n{a_{k,1}e_k},\\ \displaystyle [e_{n+1},e_2]=-2be_2,[e_{n+1},e_3]=(a_{2,3}+2a_{4,3})e_2-be_3-\sum_{k=4}^n{a_{k,3}e_k},[e_{n+1},e_4]=(a-b)e_2-\\ \displaystyle (a+b)e_4-\sum_{k=5}^n{a_{k-1,3}e_k},[e_{n+1},e_i]=\left((3-i)a-b\right)e_i-\sum_{k=i+1}^n{a_{k-i+3,3}e_k},(5\leq i\leq n),\\ \displaystyle where\,\,B_{2,1}:=\frac{(2b-a)a_{2,1}+2b\cdot a_{4,1}+2(a-b)(a_{2,3}+a_{4,3})}{a}, \end{array} \right.
\end{equation} $$\mathscr{L}_{e_{n+1}}=\left[\begin{smallmatrix} -a & 0 & 0 & 0&0&&\cdots &0& \cdots&0 & 0&0 \\ B_{2,1} & -2b & a_{2,3}+2a_{4,3}& a-b &0& & \cdots &0&\cdots & 0& 0&0\\ a-b & 0 & -b & 0 & 0& &\cdots &0&\cdots &0& 0&0 \\ -a_{4,1} & 0 & -a_{4,3} & -a-b &0 & &\cdots&0&\cdots &0 & 0&0\\ -a_{5,1} & 0 & -a_{5,3} & -a_{4,3} & -2a-b &&\cdots &0 &\cdots&0 & 0&0 \\ \boldsymbol{\cdot} & \boldsymbol{\cdot} & \boldsymbol{\cdot} & -a_{5,3} & -a_{4,3} &\ddots& &\vdots &&\vdots & \vdots&\vdots \\ \vdots & \vdots & \vdots &\vdots &\vdots &\ddots& \ddots&\vdots &&\vdots & \vdots&\vdots \\ -a_{i,1} & 0 & -a_{i,3} & -a_{i-1,3} & -a_{i-2,3} &\cdots&-a_{4,3}& (3-i)a-b&\cdots&0 & 0&0\\ \vdots & \vdots & \vdots &\vdots &\vdots&&\vdots &\vdots &&\vdots & \vdots&\vdots \\ -a_{n-1,1} & 0 & -a_{n-1,3}& -a_{n-2,3}& -a_{n-3,3}&\cdots &-a_{n-i+3,3} &-a_{n-i+2,3}&\cdots&-a_{4,3} &(4-n)a-b& 0\\ -a_{n,1} & 0 & -a_{n,3}& -a_{n-1,3}& -a_{n-2,3}&\cdots &-a_{n-i+4,3} &-a_{n-i+3,3}&\cdots&-a_{5,3} &-a_{4,3}& (3-n)a-b \end{smallmatrix}\right].$$ \item[(2)] If $a_{1,3}=0,b:=-a,a\neq0,(n=4)$ or $b:=(3-n)a,a\neq0,(n\geq5),$ then the brackets for the algebra are \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=ae_1+a_{2,1}e_2-(n-2)ae_3+\sum_{k=4}^n{a_{k,1}e_k}, [e_3,e_{n+1}]=a_{2,3}e_2+(3-n)ae_3+\\ \displaystyle \sum_{k=4}^n{a_{k,3}e_k},[e_4,e_{n+1}]=(n-4)a(e_2-e_4)+\sum_{k=5}^n{a_{k-1,3}e_k}, [e_{i},e_{n+1}]=\left(i-n\right)ae_{i}+\\ \displaystyle \sum_{k=i+1}^n{a_{k-i+3,3}e_k}, [e_{n+1},e_{n+1}]=a_{2,n+1}e_2+a_{n,n+1}e_n,[e_{n+1},e_1]=-ae_1+B_{2,1}e_2+\\ \displaystyle (n-2)ae_3-\sum_{k=4}^n{a_{k,1}e_k},[e_{n+1},e_2]=(2n-6)ae_2,[e_{n+1},e_3]=(a_{2,3}+2a_{4,3})e_2+\\ \displaystyle (n-3)ae_3-\sum_{k=4}^n{a_{k,3}e_k},[e_{n+1},e_4]=(n-2)ae_2+(n-4)ae_4-\sum_{k=5}^n{a_{k-1,3}e_k},\\ \displaystyle [e_{n+1},e_i]=\left(n-i\right)ae_i-\sum_{k=i+1}^n{a_{k-i+3,3}e_k},(5\leq i\leq n),\\ \displaystyle where\,\,B_{2,1}:=(5-2n)a_{2,1}+(6-2n)a_{4,1}+2(n-2)(a_{2,3}+a_{4,3}), \end{array} \right. 
\end{equation} $$\mathscr{L}_{e_{n+1}}=\left[\begin{smallmatrix} -a & 0 & 0 & 0&0&&\cdots &0& \cdots&0 & 0&0 \\ B_{2,1} & (2n-6)a & a_{2,3}+2a_{4,3}& (n-2)a &0& & \cdots &0&\cdots & 0& 0&0\\ (n-2)a & 0 & (n-3)a & 0 & 0& &\cdots &0&\cdots &0& 0&0 \\ -a_{4,1} & 0 & -a_{4,3} & (n-4)a &0 & &\cdots&0&\cdots &0 & 0&0\\ -a_{5,1} & 0 & -a_{5,3} & -a_{4,3} & (n-5)a &&\cdots &0 &\cdots&0 & 0&0 \\ \boldsymbol{\cdot} & \boldsymbol{\cdot} & \boldsymbol{\cdot} & -a_{5,3} & -a_{4,3} &\ddots& &\vdots &&\vdots & \vdots&\vdots \\ \vdots & \vdots & \vdots &\vdots &\vdots &\ddots& \ddots&\vdots &&\vdots & \vdots&\vdots \\ -a_{i,1} & 0 & -a_{i,3} & -a_{i-1,3} & -a_{i-2,3} &\cdots&-a_{4,3}& (n-i)a&\cdots&0 & 0&0\\ \vdots & \vdots & \vdots &\vdots &\vdots&&\vdots &\vdots &&\vdots & \vdots&\vdots \\ -a_{n-1,1} & 0 & -a_{n-1,3}& -a_{n-2,3}& -a_{n-3,3}&\cdots &-a_{n-i+3,3} &-a_{n-i+2,3}&\cdots&-a_{4,3} &a& 0\\ -a_{n,1} & 0 & -a_{n,3}& -a_{n-1,3}& -a_{n-2,3}&\cdots &-a_{n-i+4,3} &-a_{n-i+3,3}&\cdots&-a_{5,3} &-a_{4,3}&0 \end{smallmatrix}\right].$$ \item[(3)] If $a_{1,3}=0,a=0,b\neq0,(n=4)$ or $a=0$ and $b\neq0,(n\geq5),$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=\left(a_{2,3}+a_{4,3}-a_{4,1}\right)e_2+be_3+\sum_{k=4}^n{a_{k,1}e_k}, [e_3,e_{n+1}]=a_{2,3}e_2+be_3+\sum_{k=4}^n{a_{k,3}e_k},\\ \displaystyle [e_4,e_{n+1}]=b\left(e_4-e_2\right)+\sum_{k=5}^n{a_{k-1,3}e_k}, [e_{i},e_{n+1}]=be_{i}+\sum_{k=i+1}^n{a_{k-i+3,3}e_k},\\ \displaystyle [e_{n+1},e_{n+1}]=a_{2,n+1}e_2, [e_{n+1},e_1]=b_{2,1}e_2-be_3-\sum_{k=4}^n{a_{k,1}e_k},[e_{n+1},e_2]=-2be_2,\\ \displaystyle [e_{n+1},e_3]=(a_{2,3}+2a_{4,3})e_2-be_3-\sum_{k=4}^n{a_{k,3}e_k},[e_{n+1},e_4]=-b\left(e_2+e_4\right)-\sum_{k=5}^n{a_{k-1,3}e_k},\\ \displaystyle [e_{n+1},e_i]=-be_i-\sum_{k=i+1}^n{a_{k-i+3,3}e_k},(5\leq i\leq n), \end{array} \right. 
\end{equation} $$\mathscr{L}_{e_{n+1}}=\left[\begin{smallmatrix} 0 & 0 & 0 & 0&0&&\cdots &0& \cdots&0 & 0&0 \\ b_{2,1} & -2b & a_{2,3}+2a_{4,3}&-b &0& & \cdots &0&\cdots & 0& 0&0\\ -b & 0 & -b & 0 & 0& &\cdots &0&\cdots &0& 0&0 \\ -a_{4,1} & 0 & -a_{4,3} & -b &0 & &\cdots&0&\cdots &0 & 0&0\\ -a_{5,1} & 0 & -a_{5,3} & -a_{4,3} & -b &&\cdots &0 &\cdots&0 & 0&0 \\ \boldsymbol{\cdot} & \boldsymbol{\cdot} & \boldsymbol{\cdot} & -a_{5,3} & -a_{4,3} &\ddots& &\vdots &&\vdots & \vdots&\vdots \\ \vdots & \vdots & \vdots &\vdots &\vdots &\ddots& \ddots&\vdots &&\vdots & \vdots&\vdots \\ -a_{i,1} & 0 & -a_{i,3} & -a_{i-1,3} & -a_{i-2,3} &\cdots&-a_{4,3}& -b&\cdots&0 & 0&0\\ \vdots & \vdots & \vdots &\vdots &\vdots&&\vdots &\vdots &&\vdots & \vdots&\vdots \\ -a_{n-1,1} & 0 & -a_{n-1,3}& -a_{n-2,3}& -a_{n-3,3}&\cdots &-a_{n-i+3,3} &-a_{n-i+2,3}&\cdots&-a_{4,3} &-b& 0\\ -a_{n,1} & 0 & -a_{n,3}& -a_{n-1,3}& -a_{n-2,3}&\cdots &-a_{n-i+4,3} &-a_{n-i+3,3}&\cdots&-a_{5,3} &-a_{4,3}&-b \end{smallmatrix}\right].$$ \allowdisplaybreaks \item[(4)] If $a_{1,3}=0,b=0,a\neq0,(n=4)$ or $b=0,a\neq0,(n\geq5),$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=ae_1+a_{2,1}e_2-ae_3+\sum_{k=4}^n{a_{k,1}e_k}, [e_3,e_{n+1}]=a_{2,3}e_2+\sum_{k=4}^n{a_{k,3}e_k},\\ \displaystyle[e_4,e_{n+1}]=a(e_4-e_2)+\sum_{k=5}^n{a_{k-1,3}e_k}, [e_{i},e_{n+1}]=(i-3)ae_{i}+\sum_{k=i+1}^n{a_{k-i+3,3}e_k},\\ \displaystyle [e_{n+1},e_{n+1}]=a_{2,n+1}e_2,[e_{n+1},e_1]=-ae_1+\left(a_{2,3}-a_{2,1}+b_{2,3}\right)e_2+ae_3-\sum_{k=4}^n{a_{k,1}e_k},\\ \displaystyle [e_{n+1},e_3]=b_{2,3}e_2-\sum_{k=4}^n{a_{k,3}e_k},[e_{n+1},e_4]=a(e_2-e_4)-\sum_{k=5}^n{a_{k-1,3}e_k},\\ \displaystyle [e_{n+1},e_i]=(3-i)ae_i-\sum_{k=i+1}^n{a_{k-i+3,3}e_k},(5\leq i\leq n), \end{array} \right. \end{equation} $$\mathscr{L}_{e_{n+1}}=\left[\begin{smallmatrix} -a & 0 & 0 & 0&0&&\cdots &0& \cdots&0 & 0&0 \\ a_{2,3}-a_{2,1}+b_{2,3} & 0 & b_{2,3}& a &0& & \cdots &0&\cdots & 0& 0&0\\ a & 0 & 0 & 0 & 0& &\cdots &0&\cdots &0& 0&0 \\ -a_{4,1} & 0 & -a_{4,3} & -a &0 & &\cdots&0&\cdots &0 & 0&0\\ -a_{5,1} & 0 & -a_{5,3} & -a_{4,3} & -2a &&\cdots &0 &\cdots&0 & 0&0 \\ \boldsymbol{\cdot} & \boldsymbol{\cdot} & \boldsymbol{\cdot} & -a_{5,3} & -a_{4,3} &\ddots& &\vdots &&\vdots & \vdots&\vdots \\ \vdots & \vdots & \vdots &\vdots &\vdots &\ddots& \ddots&\vdots &&\vdots & \vdots&\vdots \\ -a_{i,1} & 0 & -a_{i,3} & -a_{i-1,3} & -a_{i-2,3} &\cdots&-a_{4,3}& (3-i)a&\cdots&0 & 0&0\\ \vdots & \vdots & \vdots &\vdots &\vdots&&\vdots &\vdots &&\vdots & \vdots&\vdots \\ -a_{n-1,1} & 0 & -a_{n-1,3}& -a_{n-2,3}& -a_{n-3,3}&\cdots &-a_{n-i+3,3} &-a_{n-i+2,3}&\cdots&-a_{4,3} &(4-n)a& 0\\ -a_{n,1} & 0 & -a_{n,3}& -a_{n-1,3}& -a_{n-2,3}&\cdots &-a_{n-i+4,3} &-a_{n-i+3,3}&\cdots&-a_{5,3} &-a_{4,3}& (3-n)a \end{smallmatrix}\right].$$ In the remaining cases $a_{1,3}:=c.$ \item[(5)] If $b\neq-a,a\neq0,b\neq-c,c\neq0,$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=ae_1+a_{2,1}e_2+(b+c-a)e_3+a_{4,1}e_4, [e_3,e_{5}]=ce_1+a_{2,3}e_2+be_3+A_{4,3}e_4,\\ \displaystyle [e_4,e_{5}]=(a+b)(e_4-e_2),[e_{5},e_{5}]=a_{2,5}e_2,[e_{5},e_1]=-ae_1+B_{2,1}e_2+(a-b-c)e_3-a_{4,1}e_4,\\ \displaystyle [e_5,e_{2}]=-2(b+c)e_2,[e_{5},e_3]=-ce_1+b_{2,3}e_2-be_3-A_{4,3}e_4,[e_{5},e_4]=(a-2c-b)e_2-\\ \displaystyle (a+b)e_4; B_{2,1}:=\frac{(2b-a+2c)a_{2,1}+2(b+c)a_{4,1}+(a-b-c)(a_{2,3}+b_{2,3})}{a}\,\,and \\ \displaystyle A_{4,3}:=\frac{2c(a_{2,1}+a_{4,1})-(a+c)a_{2,3}+(a-c)b_{2,3}}{2a}, \end{array} \right. 
\end{equation} $\mathscr{L}_{e_{5}}=\left[\begin{smallmatrix} -a & 0 & -c & 0\\ B_{2,1} & -2(b+c) & b_{2,3}& a-2c-b \\ a-b-c & 0 & -b & 0\\ -a_{4,1} & 0 & -A_{4,3} & -a-b \end{smallmatrix}\right].$ \item[(6)] If $b:=-a,a\neq0,a\neq c,c\neq0,$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=ae_1+a_{2,1}e_2+(c-2a)e_3+a_{4,1}e_4, [e_3,e_{5}]=ce_1+a_{2,3}e_2-ae_3+A_{4,3}e_4,\\ \displaystyle[e_{5},e_{5}]=a_{2,5}e_2+a_{4,5}e_4,[e_{5},e_1]=-ae_1+B_{2,1}e_2+(2a-c)e_3-a_{4,1}e_4,\\ \displaystyle [e_5,e_{2}]=2(a-c)e_2,[e_{5},e_3]=-ce_1+b_{2,3}e_2+ae_3-A_{4,3}e_4,[e_{5},e_4]=2(a-c)e_2,\\ \displaystyle where\,\, B_{2,1}:=\frac{(2c-3a)a_{2,1}+2(c-a)a_{4,1}+(2a-c)(a_{2,3}+b_{2,3})}{a}\,\,and \\ \displaystyle A_{4,3}:=\frac{2c(a_{2,1}+a_{4,1})-(a+c)a_{2,3}+(a-c)b_{2,3}}{2a}, \end{array} \right. \end{equation} $\mathscr{L}_{e_{5}}=\left[\begin{smallmatrix} -a & 0 & -c & 0\\ B_{2,1} & 2(a-c) & b_{2,3}& 2(a-c) \\ 2a-c & 0 & a & 0\\ -a_{4,1} & 0 & -A_{4,3} & 0 \end{smallmatrix}\right].$ \item[(7)] If $b:=-c,c\neq0,a\neq c,a\neq0,$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=ae_1+a_{2,1}e_2-ae_3+a_{4,1}e_4, [e_3,e_{5}]=ce_1+a_{2,3}e_2-ce_3+a_{4,3}e_4,\\ \displaystyle [e_4,e_{5}]=(c-a)(e_2-e_4),[e_{5},e_{5}]=a_{2,5}e_2,[e_{5},e_1]=-ae_1+\left(a_{2,3}-a_{2,1}+b_{2,3}\right)e_2+ae_3-\\ \displaystyle a_{4,1}e_4,[e_{5},e_3]=-ce_1+b_{2,3}e_2+ce_3-a_{4,3}e_4,[e_{5},e_4]=(c-a)(e_4-e_2), \end{array} \right. \end{equation} $\mathscr{L}_{e_{5}}=\left[\begin{smallmatrix} -a & 0 & -c & 0\\ a_{2,3}-a_{2,1}+b_{2,3} & 0 & b_{2,3}& a-c\\ a & 0 & c & 0\\ -a_{4,1} & 0 & -a_{4,3} & c-a \end{smallmatrix}\right].$ \item[(8)] If $c:=a,b:=-a,a\neq0,$ then\footnote{The outer derivation $\mathscr{L}_{e_5}$ is nilpotent, so we remove this case from further consideration.} \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=ae_1+a_{2,1}e_2-ae_3+\left(a_{4,3}+b_{4,3}-b_{4,1}\right)e_4, [e_3,e_{5}]=ae_1+a_{2,3}e_2-ae_3+a_{4,3}e_4,\\ \displaystyle [e_{5},e_{5}]=a_{2,5}e_2+a_{4,5}e_4,[e_{5},e_1]=-ae_1+\left(a_{2,3}-a_{2,1}+b_{2,3}\right)e_2+ae_3+b_{4,1}e_4,\\ \displaystyle [e_{5},e_3]=-ae_1+b_{2,3}e_2+ae_3+b_{4,3}e_4, \end{array} \right. \end{equation} $\mathscr{L}_{e_{5}}=\left[\begin{smallmatrix} -a & 0 & -a & 0\\ a_{2,3}-a_{2,1}+b_{2,3}& 0& b_{2,3}& 0 \\ a & 0 & a & 0\\ b_{4,1} & 0 & b_{4,3} & 0 \end{smallmatrix}\right].$ \item[(9)] If $a=0,b=0,c\neq0,$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=a_{2,1}e_2+ce_3+a_{4,1}e_4, [e_3,e_{5}]=ce_1+\left(2a_{2,1}+2a_{4,1}-b_{2,3}\right)e_2+a_{4,3}e_4,\\ \displaystyle [e_{5},e_{5}]=a_{2,5}e_2+a_{4,5}e_4,[e_{5},e_1]=\left(3a_{2,1}+4a_{4,1}-2b_{2,3}+2a_{4,3}\right)e_2-ce_3-a_{4,1}e_4,\\ \displaystyle [e_5,e_{2}]=-2ce_2,[e_{5},e_3]=-ce_1+b_{2,3}e_2-a_{4,3}e_4,[e_5,e_{4}]=-2ce_2, \end{array} \right. 
\end{equation} $\mathscr{L}_{e_{5}}=\left[\begin{smallmatrix} 0 & 0 & -c & 0\\ 3a_{2,1}+4a_{4,1}-2b_{2,3}+2a_{4,3} & -2c & b_{2,3}& -2c\\ -c & 0 & 0 & 0\\ -a_{4,1} & 0 & -a_{4,3} & 0 \end{smallmatrix}\right].$ \item[(10)] If $a=0,b\neq0,b\neq-c,c\neq0,$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=\left(\frac{a_{2,3}+b_{2,3}}{2}-a_{4,1}\right)e_2+(b+c)e_3+a_{4,1}e_4, [e_3,e_{5}]=ce_1+a_{2,3}e_2+be_3+A_{4,3}e_4,\\ \displaystyle [e_4,e_{5}]=b(e_4-e_2),[e_{5},e_{5}]=a_{2,5}e_2,[e_{5},e_1]=b_{2,1}e_2-(b+c)e_3-a_{4,1}e_4,\\ \displaystyle [e_5,e_{2}]=-2(b+c)e_2,[e_{5},e_3]=-ce_1+b_{2,3}e_2-be_3-A_{4,3}e_4,[e_{5},e_4]=-(2c+b)e_2-be_4,\\ \displaystyle where\,\,A_{4,3}:=\frac{(2b+c)b_{2,3}-(2b+3c)a_{2,3}+2c(b_{2,1}-a_{4,1})}{4(b+c)}, \end{array} \right. \end{equation} $\mathscr{L}_{e_{5}}=\left[\begin{smallmatrix} 0 & 0 & -c & 0\\ b_{2,1} & -2(b+c) & b_{2,3}& -2c-b \\ -b-c & 0 & -b & 0\\ -a_{4,1} & 0 & -A_{4,3} & -b \end{smallmatrix}\right].$ \item[(11)] If $a=0,b:=-c,c\neq0,$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=\left(a_{2,3}+b_{2,3}-b_{2,1}\right)e_2+a_{4,1}e_4, [e_3,e_{5}]=ce_1+a_{2,3}e_2-ce_3+a_{4,3}e_4,\\ \displaystyle [e_4,e_{5}]=c(e_2-e_4),[e_{5},e_{5}]=a_{2,5}e_2,[e_{5},e_1]=b_{2,1}e_2-a_{4,1}e_4,[e_{5},e_3]=-ce_1+b_{2,3}e_2+\\ \displaystyle ce_3-a_{4,3}e_4,[e_{5},e_4]=c(e_4-e_2), \end{array} \right. \end{equation} $\mathscr{L}_{e_{5}}=\left[\begin{smallmatrix} 0 & 0 & -c & 0\\ b_{2,1} & 0 & b_{2,3}& -c \\ 0 & 0 &c & 0\\ -a_{4,1} & 0 & -a_{4,3} & c \end{smallmatrix}\right].$ \end{enumerate} \end{theorem} \begin{proof} \begin{enumerate}[noitemsep, topsep=0pt] \item[(1)] The proof is off-loaded to Table \ref{Left(L4)}, when $(n\geq5)$. For $(n=4),$ we recalculate the applicable identities, which are $1.,2.,4.,6.-8.,10.-15.$ \item[(2)] We repeat case $(1),$ except the identity $13.$ \item[(3)] If $(n\geq5),$ then we apply the identities given in Table \ref{Left(L4)}, except $14.$ and $15.$ applying instead: $\mathscr{L}_{e_3}[e_{n+1},e_{n+1}]=[\mathscr{L}_{e_3}(e_{n+1}),e_{n+1}]+[e_{n+1},\mathscr{L}_{e_3}(e_{n+1})]$ and $\mathscr{L}[e_1,e_{n+1}]=[\mathscr{L}(e_1),e_{n+1}]+[e_1,\mathscr{L}(e_{n+1})].$ For $(n=4),$ we apply the same identities as in case $(1)$ except $14.$ and $15.$ applying two identities given above. \item[(4)] We apply the same identities as in case $(1).$ \item[(5)] We apply the following identities: $1.,2.,4.,6.-8.,10.-14.,\mathscr{L}_{e_3}[e_5,e_5]=[\mathscr{L}_{e_3}(e_5),e_5]+[e_5,\mathscr{L}_{e_3}(e_5)],15.$ \item[(6)] We apply the same identities as in case $(2)$ for $(n=4).$ \item[(7)] We apply the same identities as in case $(1)$ for $(n=4).$ \item[(8)] We apply the same identities as in case $(1)$ for $(n=4),$ except $13.$ and $15.$ \item[(9)] We apply the same identities as in case $(2)$ for $(n=4).$ \item[(10)] The same identities as in case $(5).$ \item[(11)] The same identities as in case $(5),$ except the identity $15.$ \end{enumerate} \end{proof} \newpage \begin{table}[h!] 
\caption{Left Leibniz identities in case $(1)$ in Theorem \ref{TheoremLL4}, ($n\geq5$).} \label{Left(L4)} \begin{tabular}{lp{2.4cm}p{12cm}} \hline \scriptsize Steps &\scriptsize Ordered triple &\scriptsize Result\\ \hline \scriptsize $1.$ &\scriptsize $\mathscr{L}_{e_1}\left([e_1,e_{n+1}]\right)$ &\scriptsize $[e_{2},e_{n+1}]=0$ $\implies$ $a_{k,2}=0,(1\leq k\leq n).$\\ \hline \scriptsize $2.$ &\scriptsize $\mathscr{L}_{e_1}\left([e_{3},e_{n+1}]\right)$ &\scriptsize $a_{1,4}=0,a_{k,4}:=a_{k-1,3},(5\leq k\leq n),a_{3,1}:=a_{2,4}+a_{1,3}+2b,a_{3,4}=0,a_{4,4}:=a+b$ $\implies$ $[e_1,e_{n+1}]=ae_1+a_{2,1}e_2+(a_{2,4}+a_{1,3}+2b)e_3+\sum_{k=4}^n{a_{k,1}e_k},[e_{4},e_{n+1}]=a_{2,4}e_2+(a+b)e_4+\sum_{k=5}^n{a_{k-1,3}e_k}.$ \\ \hline \scriptsize $3.$ &\scriptsize $\mathscr{L}_{e_1}\left([e_{i},e_{n+1}]\right)$ &\scriptsize $a_{1,i+1}=a_{3,i+1}=0\,\,and\,\,we\,\,had\,\,that\,\,a_{1,4}=a_{3,4}=0\implies a_{2,i+1}=a_{4,i+1}=0;a_{i+1,i+1}:=(i-2)a+b,a_{k,i+1}:=a_{k-1,i},(5\leq k\leq n,k\neq i+1,4\leq i\leq n-1)$ $\implies$ $[e_{j},e_{n+1}]=\left((j-3)a+b\right)e_j+\sum_{k=j+1}^n{a_{k-j+3,3}e_k},(5\leq j\leq n).$ \\ \hline \scriptsize $4.$ &\scriptsize $\mathscr{L}_{e_3}\left([e_{n+1},e_3]\right)$ &\scriptsize $b_{1,2}=b_{3,2}=0,b_{k,2}=0,(5\leq k\leq n),b_{3,3}:=2a_{1,3}+b+b_{2,2},b_{4,2}:=a_{1,3}+b_{1,3}$ $\implies$ $[e_{n+1},e_{2}]=b_{2,2}e_2+(a_{1,3}+b_{1,3})e_4,[e_{n+1},e_3]=b_{1,3}e_1+b_{2,3}e_2+(2a_{1,3}+b+b_{2,2})e_3+\sum_{k=4}^n{b_{k,3}e_k}.$ \\ \hline \scriptsize $5.$ &\scriptsize $\mathscr{L}_{e_3}\left([e_{n+1},e_{i}]\right)$ &\scriptsize $b_{1,i}=b_{3,i}=0,a_{1,3}=0$ $\implies$ $[e_1,e_{n+1}]=ae_1+a_{2,1}e_2+(a_{2,4}+2b)e_3+\sum_{k=4}^n{a_{k,1}e_k}, [e_3,e_{n+1}]=a_{2,3}e_2+be_3+\sum_{k=4}^n{a_{k,3}e_k}, [e_{n+1},e_2]=b_{2,2}e_2+b_{1,3}e_4, [e_{n+1},e_3]=b_{1,3}e_1+b_{2,3}e_2+(b+b_{2,2})e_3+\sum_{k=4}^n{b_{k,3}e_k}, [e_{n+1},e_{i}]=b_{2,i}e_2+\sum_{k=4}^n{b_{k,i}e_k},(4\leq i\leq n-1).$ \\ \hline \scriptsize $6.$ &\scriptsize $\mathscr{L}_{e_3}\left([e_{n+1},e_{n}]\right)$ &\scriptsize $b_{1,n}=b_{3,n}=0$ $\implies$ $[e_{n+1},e_{n}]=b_{2,n}e_2+\sum_{k=4}^n{b_{k,n}e_k}.$\\ \hline \scriptsize $7.$ &\scriptsize $\mathscr{L}[e_{3},e_{3}]$ &\scriptsize $b_{1,3}=0\implies b_{2,2}:=-2b$ $\implies$ $[e_{n+1},e_{2}]=-2be_2,[e_{n+1},e_3]=b_{2,3}e_2-be_3+\sum_{k=4}^n{b_{k,3}e_k}.$\\ \hline \scriptsize $8.$ &\scriptsize $\mathscr{L}_{e_1}\left([e_{n+1},e_{3}]\right)$ &\scriptsize $b_{2,4}:=a_{2,4}+2a,b_{4,4}:=-a-b,b_{k,4}:=b_{k-1,3},(5\leq k\leq n)$ $\implies$ $[e_{n+1},e_{4}]=\left(a_{2,4}+2a\right)e_2-\left(a+b\right)e_4+\sum_{k=5}^n{b_{k-1,3}e_k}.$ \\ \hline \scriptsize $9.$ &\scriptsize $\mathscr{L}_{e_1}\left([e_{n+1},e_{i}]\right)$ &\scriptsize $b_{2,i+1}=b_{4,i+1}=0,b_{i+1,i+1}:=(2-i)a-b,b_{k,i+1}:=b_{k-1,i},(5\leq k\leq n,k\neq i+1,4\leq i\leq n-1)$ $\implies$ $[e_{n+1},e_j]=\left((3-j)a-b\right)e_j+\sum_{k=j+1}^n{b_{k-j+3,3}e_k},(5\leq j\leq n).$ \\ \hline \scriptsize $10.$ &\scriptsize $\mathscr{L}_{e_3}\left([e_{n+1},e_{1}]\right)$ &\scriptsize $b_{1,1}:=-a,b_{3,1}:=a_{2,4}+2a,b_{k-1,3}:=-a_{k-1,3},(5\leq k\leq n)$ $\implies$ $[e_{n+1},e_{1}]=-ae_1+b_{2,1}e_2+(a_{2,4}+2a)e_3+\sum_{k=4}^n{b_{k,1}e_k}, [e_{n+1},e_{3}]=b_{2,3}e_2-be_3-\sum_{k=4}^{n-1}{a_{k,3}e_k}+b_{n,3}e_n, [e_{n+1},e_{4}]=(a_{2,4}+2a)e_2-(a+b)e_4-\sum_{k=5}^n{a_{k-1,3}e_k}, [e_{n+1},e_i]=\left((3-i)a-b\right)e_i-\sum_{k=i+1}^n{a_{k-i+3,3}e_k},(5\leq i\leq n).$\\ \hline \scriptsize $11.$ &\scriptsize $\mathscr{L}_{e_1}\left([e_{n+1},e_{1}]\right)$ &\scriptsize $a_{2,4}:=-a-b,b_{k-1,1}:=-a_{k-1,1},(5\leq k\leq n)$ $\implies$ 
$[e_{1},e_{n+1}]=ae_1+a_{2,1}e_2+(b-a)e_3+\sum_{k=4}^n{a_{k,1}e_k}, [e_{4},e_{n+1}]=(a+b)(e_4-e_2)+\sum_{k=5}^n{a_{k-1,3}e_k}, [e_{n+1},e_{1}]=-ae_1+b_{2,1}e_2+(a-b)e_3-\sum_{k=4}^{n-1}{a_{k,1}e_k}+b_{n,1}e_n, [e_{n+1},e_{4}]=(a-b)e_2-(a+b)e_4-\sum_{k=5}^n{a_{k-1,3}e_k}.$ \\ \hline \scriptsize $12.$ &\scriptsize $\mathscr{L}[e_{n+1},e_{1}]$ &\scriptsize $a_{1,n+1}=0,a_{k,n+1}=0,(3\leq k\leq n-1)$ $\implies$ $[e_{n+1},e_{n+1}]=a_{2,n+1}e_2+a_{n,n+1}e_n.$ \\ \hline \scriptsize $13.$ &\scriptsize $\mathscr{L}[e_{n+1},e_{n+1}]$ &\scriptsize $a_{n,n+1}=0,(because\,\,b\neq(3-n)a)$ $\implies$ $[e_{n+1},e_{n+1}]=a_{2,n+1}e_2.$ \\ \hline \scriptsize $14.$ &\scriptsize $\mathscr{L}[e_{3},e_{n+1}]$ &\scriptsize $b_{n,3}:=-a_{n,3},(because\,\,a\neq0),b_{2,3}:=a_{2,3}+2a_{4,3},(because\,\,b\neq0)$ $\implies$ $[e_{n+1},e_{3}]=(a_{2,3}+2a_{4,3})e_2-be_3-\sum_{k=4}^{n}{a_{k,3}e_k}.$ \\ \hline \scriptsize $15.$ &\scriptsize $\mathscr{L}_{e_1}\left([e_{n+1},e_{n+1}]\right)$ &\scriptsize $b_{n,1}:=-a_{n,1},B_{2,1}:=\frac{(2b-a)a_{2,1}+2b\cdot a_{4,1}+2(a-b)(a_{2,3}+a_{4,3})}{a},(because\,\,a\neq0)$ $\implies$ $[e_{n+1},e_{1}]=-ae_1+B_{2,1}e_2+(a-b)e_3-\sum_{k=4}^{n}{a_{k,1}e_k}.$ \\ \hline \end{tabular} \end{table} \begin{theorem}\label{TheoremLL4Absorption} Applying the technique of ``absorption'' (see Section \ref{Solvable left Leibniz algebras}), we can further simplify the algebras in each of the cases in Theorem \ref{TheoremLL4} as follows: \begin{enumerate}[noitemsep, topsep=0pt] \allowdisplaybreaks \item[(1)] If $a_{1,3}=0, b\neq-a,a\neq0,b\neq0,(n=4)$ or $b\neq(3-n)a,a\neq0,b\neq0,(n\geq5),$ then we have the following brackets for the algebra: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=ae_1+a_{2,1}e_2+(b-a)e_3, [e_3,e_{n+1}]=a_{2,3}e_2+be_3+\sum_{k=5}^n{a_{k,3}e_k},\\ \displaystyle[e_4,e_{n+1}]=(a+b)(e_4-e_2)+\sum_{k=6}^n{a_{k-1,3}e_k}, [e_{i},e_{n+1}]=\left((i-3)a+b\right)e_{i}+\sum_{k=i+2}^n{a_{k-i+3,3}e_k},\\ \displaystyle [e_{n+1},e_1]=-ae_1+\mathcal{B}_{2,1}e_2+(a-b)e_3,[e_{n+1},e_2]=-2be_2,[e_{n+1},e_3]=a_{2,3}e_2-be_3-\\ \displaystyle \sum_{k=5}^n{a_{k,3}e_k},[e_{n+1},e_4]=(a-b)e_2-(a+b)e_4-\sum_{k=6}^n{a_{k-1,3}e_k}, [e_{n+1},e_i]=\left((3-i)a-b\right)e_i-\\ \displaystyle \sum_{k=i+2}^n{a_{k-i+3,3}e_k},(5\leq i\leq n);where\,\,\mathcal{B}_{2,1}:=\frac{(2b-a)a_{2,1}+2(a-b)a_{2,3}}{a}, \end{array} \right. 
\end{equation} $$\mathscr{L}_{e_{n+1}}=\left[\begin{smallmatrix} -a & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ \mathcal{B}_{2,1} & -2b & a_{2,3}& a-b &0&0 & \cdots &&0 & 0& 0\\ a-b & 0 & -b & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & -a-b &0 &0 &\cdots&&0 &0 & 0\\ 0 & 0 & -a_{5,3} & 0 & -2a-b &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & -a_{5,3} & 0 &-3a-b&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & -a_{n-2,3}& -a_{n-3,3}& \cdots&\cdots &-a_{5,3}&0&(5-n)a-b &0& 0\\ 0 & 0 & -a_{n-1,3}& -a_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&-a_{5,3}&0 &(4-n)a-b& 0\\ 0 & 0 & -a_{n,3}& -a_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&-a_{5,3} &0& (3-n)a-b \end{smallmatrix}\right].$$ \item[(2)] If $a_{1,3}=0,b:=-a,a\neq0,(n=4)$ or $b:=(3-n)a,a\neq0,(n\geq5),$ then the brackets for the algebra are as follows: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=ae_1+a_{2,1}e_2-(n-2)ae_3, [e_3,e_{n+1}]=a_{2,3}e_2+(3-n)ae_3+ \sum_{k=5}^n{a_{k,3}e_k},\\ \displaystyle[e_4,e_{n+1}]=(n-4)a(e_2-e_4)+\sum_{k=6}^n{a_{k-1,3}e_k}, [e_{i},e_{n+1}]=\left(i-n\right)ae_{i}+\sum_{k=i+2}^n{a_{k-i+3,3}e_k},\\ \displaystyle [e_{n+1},e_{n+1}]=a_{n,n+1}e_n,[e_{n+1},e_1]=-ae_1+\mathcal{B}_{2,1}e_2+(n-2)ae_3,[e_{n+1},e_2]=(2n-6)ae_2,\\ \displaystyle[e_{n+1},e_3]=a_{2,3}e_2+(n-3)ae_3-\sum_{k=5}^n{a_{k,3}e_k},[e_{n+1},e_4]=(n-2)ae_2+(n-4)ae_4-\\ \displaystyle \sum_{k=6}^n{a_{k-1,3}e_k},[e_{n+1},e_i]=\left(n-i\right)ae_i-\sum_{k=i+2}^n{a_{k-i+3,3}e_k},(5\leq i\leq n),\\ \displaystyle where\,\,\mathcal{B}_{2,1}:=(5-2n)a_{2,1}+2(n-2)a_{2,3}, \end{array} \right. \end{equation} $$\mathscr{L}_{e_{n+1}}=\left[\begin{smallmatrix} -a & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ \mathcal{B}_{2,1} & (2n-6)a & a_{2,3}& (n-2)a &0&0 & \cdots &&0 & 0& 0\\ (n-2)a & 0 & (n-3)a & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & (n-4)a &0 &0 &\cdots&&0 &0 & 0\\ 0 & 0 & -a_{5,3} & 0 & (n-5)a &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & -a_{5,3} & 0 &(n-6)a&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & -a_{n-2,3}& -a_{n-3,3}& \cdots&\cdots &-a_{5,3}&0&2a &0& 0\\ 0 & 0 & -a_{n-1,3}& -a_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&-a_{5,3}&0 &a& 0\\ 0 & 0 & -a_{n,3}& -a_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&-a_{5,3} &0& 0 \end{smallmatrix}\right].$$ \item[(3)] If $a_{1,3}=0,a=0,b\neq0,(n=4)$ or $a=0$ and $b\neq0,(n\geq5),$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=a_{2,3}e_2+be_3, [e_3,e_{n+1}]=a_{2,3}e_2+be_3+\sum_{k=5}^n{a_{k,3}e_k},[e_4,e_{n+1}]=b\left(e_4-e_2\right)+\\ \displaystyle \sum_{k=6}^n{a_{k-1,3}e_k}, [e_{i},e_{n+1}]=be_{i}+\sum_{k=i+2}^n{a_{k-i+3,3}e_k},[e_{n+1},e_1]=b_{2,1}e_2-be_3,[e_{n+1},e_2]=-2be_2,\\ \displaystyle [e_{n+1},e_3]=a_{2,3}e_2-be_3-\sum_{k=5}^n{a_{k,3}e_k},[e_{n+1},e_4]=-b\left(e_2+e_4\right)-\sum_{k=6}^n{a_{k-1,3}e_k},\\ \displaystyle [e_{n+1},e_i]=-be_i-\sum_{k=i+2}^n{a_{k-i+3,3}e_k},(5\leq i\leq n), \end{array} \right. 
\end{equation} $$\mathscr{L}_{e_{n+1}}=\left[\begin{smallmatrix} 0 & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ b_{2,1} & -2b & a_{2,3}& -b &0&0 & \cdots &&0 & 0& 0\\ -b & 0 & -b & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & -b &0 &0 &\cdots&&0 &0 & 0\\ 0 & 0 & -a_{5,3} & 0 & -b &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & -a_{5,3} & 0 &-b&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & -a_{n-2,3}& -a_{n-3,3}& \cdots&\cdots &-a_{5,3}&0&-b &0& 0\\ 0 & 0 & -a_{n-1,3}& -a_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&-a_{5,3}&0 &-b& 0\\ 0 & 0 & -a_{n,3}& -a_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&-a_{5,3} &0&-b \end{smallmatrix}\right].$$ \allowdisplaybreaks \item[(4)] If $a_{1,3}=0,b=0,a\neq0,(n=4)$ or $b=0,a\neq0,(n\geq5),$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=ae_1+a_{2,1}e_2-ae_3, [e_3,e_{n+1}]=a_{2,3}e_2+\sum_{k=5}^n{a_{k,3}e_k},[e_4,e_{n+1}]=a(e_4-e_2)+\\ \displaystyle\sum_{k=6}^n{a_{k-1,3}e_k}, [e_{i},e_{n+1}]=(i-3)ae_{i}+\sum_{k=i+2}^n{a_{k-i+3,3}e_k},[e_{n+1},e_{n+1}]=a_{2,n+1}e_2,\\ \displaystyle [e_{n+1},e_1]=-ae_1+\left(a_{2,3}-a_{2,1}+b_{2,3}\right)e_2+ae_3,[e_{n+1},e_3]=b_{2,3}e_2-\sum_{k=5}^n{a_{k,3}e_k},\\ \displaystyle [e_{n+1},e_4]=a(e_2-e_4)-\sum_{k=6}^n{a_{k-1,3}e_k},[e_{n+1},e_i]=(3-i)ae_i-\sum_{k=i+2}^n{a_{k-i+3,3}e_k},(5\leq i\leq n), \end{array} \right. \end{equation} $$\mathscr{L}_{e_{n+1}}=\left[\begin{smallmatrix} -a & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ a_{2,3}-a_{2,1}+b_{2,3} & 0 & b_{2,3}& a &0&0 & \cdots &&0 & 0& 0\\ a & 0 &0 & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & -a &0 &0 &\cdots&&0 &0 & 0\\ 0 & 0 & -a_{5,3} & 0 & -2a &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & -a_{5,3} & 0 &-3a&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & -a_{n-2,3}& -a_{n-3,3}& \cdots&\cdots &-a_{5,3}&0&(5-n)a &0& 0\\ 0 & 0 & -a_{n-1,3}& -a_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&-a_{5,3}&0 &(4-n)a& 0\\ 0 & 0 & -a_{n,3}& -a_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&-a_{5,3} &0& (3-n)a \end{smallmatrix}\right].$$ In the remaining cases $a_{1,3}:=c.$ \item[(5)] If $b\neq-a,a\neq0,b\neq-c,c\neq0,$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=ae_1+\mathcal{A}_{2,1}e_2+(b+c-a)e_3, [e_3,e_{5}]=ce_1+a_{2,3}e_2+be_3,[e_{4},e_5]=(a+b)e_4-\\ \displaystyle (a+b)e_2,[e_{5},e_1]=-ae_1+\mathcal{B}_{2,1}e_2+(a-b-c)e_3,[e_5,e_{2}]=-2(b+c)e_2,[e_{5},e_3]=-ce_1+\\ \displaystyle \mathcal{B}_{2,3}e_2-be_3,[e_5,e_{4}]=(a-2c-b)e_2-(a+b)e_4;where\,\,\mathcal{A}_{2,1}:=\frac{(a+c)a_{2,3}+(c-a)b_{2,3}}{2a},\\ \displaystyle \mathcal{B}_{2,1}:=\frac{(3a-2b-c)a_{2,3}+(a-2b-c)b_{2,3}}{2a}\,\,and\,\, \mathcal{B}_{2,3}:=\frac{(a+c)a_{2,3}+c\cdot b_{2,3}}{a}, \end{array} \right. 
\end{equation} $\mathscr{L}_{e_{5}}=\left[\begin{smallmatrix} -a & 0 & -c & 0\\ \mathcal{B}_{2,1} & -2(b+c) & \mathcal{B}_{2,3}& a-2c-b \\ a-b-c & 0 & -b & 0\\ 0 & 0 & 0 & -a-b \end{smallmatrix}\right].$ \item[(6)] If $b:=-a,a\neq0,a\neq c,c\neq0,$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=ae_1+\mathcal{A}_{2,1}e_2+(c-2a)e_3, [e_3,e_{5}]=ce_1+a_{2,3}e_2-ae_3,[e_{5},e_{5}]=a_{4,5}e_4,\\ \displaystyle [e_{5},e_1]=-ae_1+\mathcal{B}_{2,1}e_2+(2a-c)e_3,[e_5,e_2]=2(a-c)e_2,[e_{5},e_3]=-ce_1+\mathcal{B}_{2,3}e_2+ae_3,\\ \displaystyle [e_5,e_{4}]=2(a-c)e_2;where\,\,\mathcal{A}_{2,1}:=\frac{(a+c)a_{2,3}+(c-a)b_{2,3}}{2a},\mathcal{B}_{2,3}:=\frac{(a+c)a_{2,3}+c\cdot b_{2,3}}{a},\\ \displaystyle \mathcal{B}_{2,1}:=\frac{(5a-c)a_{2,3}+(3a-c)b_{2,3}}{2a}, \end{array} \right. \end{equation} $\mathscr{L}_{e_{5}}=\left[\begin{smallmatrix} -a & 0 & -c & 0\\ \mathcal{B}_{2,1} & 2(a-c) &\mathcal{B}_{2,3}& 2(a-c) \\ 2a-c & 0 &a & 0\\ 0 & 0 & 0 &0 \end{smallmatrix}\right].$ \item[(7)] If $b:=-c,c\neq0,a\neq c,a\neq0,$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=ae_1+a_{2,1}e_2-ae_3, [e_3,e_{5}]=ce_1+a_{2,3}e_2-ce_3,[e_4,e_{5}]=(c-a)(e_2-e_4),\\ \displaystyle [e_{5},e_{5}]=a_{2,5}e_2,[e_{5},e_1]=-ae_1+\left(a_{2,3}-a_{2,1}+b_{2,3}\right)e_2+ae_3,[e_{5},e_3]=-ce_1+b_{2,3}e_2+ce_3,\\ \displaystyle [e_{5},e_4]=(c-a)(e_4-e_2), \end{array} \right. \end{equation} $\mathscr{L}_{e_{5}}=\left[\begin{smallmatrix} -a & 0 & -c & 0\\ a_{2,3}-a_{2,1}+b_{2,3} & 0 & b_{2,3}& a-c\\ a & 0 & c & 0\\ 0& 0 & 0 & c-a \end{smallmatrix}\right].$ \item[(8)] If $a=0,b=0,c\neq0,$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=a_{2,1}e_2+ce_3, [e_3,e_{5}]=ce_1+\left(2a_{2,1}-b_{2,3}\right)e_2,[e_{5},e_{5}]=a_{4,5}e_4,\\ \displaystyle [e_{5},e_1]=\left(3a_{2,1}-2b_{2,3}\right)e_2-ce_3,[e_5,e_{2}]=-2ce_2,[e_{5},e_3]=-ce_1+b_{2,3}e_2,[e_5,e_{4}]=-2ce_2, \end{array} \right. \end{equation} $\mathscr{L}_{e_{5}}=\left[\begin{smallmatrix} 0 & 0 & -c & 0\\ 3a_{2,1}-2b_{2,3} & -2c & b_{2,3}& -2c\\ -c & 0 & 0 & 0\\ 0& 0 & 0 & 0 \end{smallmatrix}\right].$ \item[(9)] If $a=0,b\neq0,b\neq-c,c\neq0,$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=\mathcal{A}_{2,1}e_2+(b+c)e_3, [e_3,e_{5}]=ce_1+a_{2,3}e_2+be_3,[e_4,e_{5}]=b(e_4-e_2),\\ \displaystyle [e_{5},e_1]=\mathcal{B}_{2,1}e_2-(b+c)e_3,[e_5,e_{2}]=-2(b+c)e_2,[e_{5},e_3]=-ce_1+\mathcal{B}_{2,3}e_2-be_3,\\ \displaystyle [e_{5},e_4]=-(2c+b)e_2-be_4;where\,\,\mathcal{A}_{2,1}:=\frac{(4b+5c)a_{2,3}+c\cdot b_{2,3}}{4(b+c)},\\ \displaystyle \mathcal{B}_{2,1}:=\frac{(2b+3c)a_{2,3}-(2b+c)b_{2,3}}{4(b+c)}\,\,and\,\,\mathcal{B}_{2,3}:=\frac{(2b+3c)a_{2,3}+c\cdot b_{2,3}}{2(b+c)}, \end{array} \right. \end{equation} $\mathscr{L}_{e_{5}}=\left[\begin{smallmatrix} 0 & 0 & -c & 0\\ \mathcal{B}_{2,1} & -2(b+c) & \mathcal{B}_{2,3}& -2c-b \\ -b-c & 0 & -b & 0\\ 0 & 0 &0 & -b \end{smallmatrix}\right].$ \item[(10)] If $a=0,b:=-c,c\neq0,$ then \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=\left(a_{2,3}+b_{2,3}-b_{2,1}\right)e_2, [e_3,e_{5}]=ce_1+a_{2,3}e_2-ce_3,[e_4,e_{5}]=c(e_2-e_4),\\ \displaystyle [e_{5},e_{5}]=a_{2,5}e_2,[e_{5},e_1]=b_{2,1}e_2,[e_{5},e_3]=-ce_1+b_{2,3}e_2+ce_3,[e_{5},e_4]=c(e_4-e_2), \end{array} \right. 
\end{equation} $\mathscr{L}_{e_{5}}=\left[\begin{smallmatrix} 0 & 0 & -c & 0\\ b_{2,1} & 0 & b_{2,3}& -c \\ 0 & 0 &c & 0\\ 0 & 0 & 0 & c \end{smallmatrix}\right].$ \end{enumerate} \end{theorem} \begin{proof} \begin{enumerate}[noitemsep, topsep=0pt] \item[(1)] The right (not a derivation) and the left (a derivation) multiplication operators restricted to the nilradical are given below: $$\mathscr{R}_{e_{n+1}}=\left[\begin{smallmatrix} a & 0 & 0 & 0&0&&\cdots &0& \cdots&0 & 0&0 \\ a_{2,1} & 0 & a_{2,3}& -a-b &0& & \cdots &0&\cdots & 0& 0&0\\ b-a & 0 & b & 0 & 0& &\cdots &0&\cdots &0& 0&0 \\ a_{4,1} & 0 & a_{4,3} & a+b &0 & &\cdots&0&\cdots &0 & 0&0\\ a_{5,1} & 0 & a_{5,3} & a_{4,3} & 2a+b &&\cdots &0 &\cdots&0 & 0&0 \\ \boldsymbol{\cdot} & \boldsymbol{\cdot} & \boldsymbol{\cdot} & a_{5,3} & a_{4,3} &\ddots& &\vdots &&\vdots & \vdots&\vdots \\ \vdots & \vdots & \vdots &\vdots &\vdots &\ddots& \ddots&\vdots &&\vdots & \vdots&\vdots \\ a_{i,1} & 0 & a_{i,3} & a_{i-1,3} & a_{i-2,3} &\cdots&a_{4,3}& (i-3)a+b&\cdots&0 & 0&0\\ \vdots & \vdots & \vdots &\vdots &\vdots&&\vdots &\vdots &&\vdots & \vdots&\vdots \\ a_{n-1,1} & 0 & a_{n-1,3}& a_{n-2,3}& a_{n-3,3}&\cdots &a_{n-i+3,3} &a_{n-i+2,3}&\cdots&a_{4,3} &(n-4)a+b& 0\\ a_{n,1} & 0 & a_{n,3}& a_{n-1,3}& a_{n-2,3}&\cdots &a_{n-i+4,3} &a_{n-i+3,3}&\cdots&a_{5,3} &a_{4,3}& (n-3)a+b \end{smallmatrix}\right],$$ $$\mathscr{L}_{e_{n+1}}=\left[\begin{smallmatrix} -a & 0 & 0 & 0&0&&\cdots &0& \cdots&0 & 0&0 \\ B_{2,1} & -2b & a_{2,3}+2a_{4,3}& a-b &0& & \cdots &0&\cdots & 0& 0&0\\ a-b & 0 & -b & 0 & 0& &\cdots &0&\cdots &0& 0&0 \\ -a_{4,1} & 0 & -a_{4,3} & -a-b &0 & &\cdots&0&\cdots &0 & 0&0\\ -a_{5,1} & 0 & -a_{5,3} & -a_{4,3} & -2a-b &&\cdots &0 &\cdots&0 & 0&0 \\ \boldsymbol{\cdot} & \boldsymbol{\cdot} & \boldsymbol{\cdot} & -a_{5,3} & -a_{4,3} &\ddots& &\vdots &&\vdots & \vdots&\vdots \\ \vdots & \vdots & \vdots &\vdots &\vdots &\ddots& \ddots&\vdots &&\vdots & \vdots&\vdots \\ -a_{i,1} & 0 & -a_{i,3} & -a_{i-1,3} & -a_{i-2,3} &\cdots&-a_{4,3}& (3-i)a-b&\cdots&0 & 0&0\\ \vdots & \vdots & \vdots &\vdots &\vdots&&\vdots &\vdots &&\vdots & \vdots&\vdots \\ -a_{n-1,1} & 0 & -a_{n-1,3}& -a_{n-2,3}& -a_{n-3,3}&\cdots &-a_{n-i+3,3} &-a_{n-i+2,3}&\cdots&-a_{4,3} &(4-n)a-b& 0\\ -a_{n,1} & 0 & -a_{n,3}& -a_{n-1,3}& -a_{n-2,3}&\cdots &-a_{n-i+4,3} &-a_{n-i+3,3}&\cdots&-a_{5,3} &-a_{4,3}& (3-n)a-b \end{smallmatrix}\right].$$ \allowdisplaybreaks \begin{itemize} \item We start with applying the transformation $e^{\prime}_k=e_k,(1\leq k\leq n),e^{\prime}_{n+1}=e_{n+1}-a_{4,3}e_1$ to remove $a_{4,3}$ in $\mathscr{R}_{e_{n+1}}$ and $-a_{4,3}$ in $\mathscr{L}_{e_{n+1}}$ from the $(i,i-1)^{st}$ positions, where $(4\leq i\leq n),$ but it affects other entries as well, such as the entry in the $(2,1)^{st}$ position in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}}$ that we change to $a_{2,1}-a_{4,3}$ and $B_{2,1}-a_{4,3},$ respectively. It also changes the entry in the $(2,3)^{rd}$ position in $\mathscr{L}_{e_{n+1}}$ to $a_{2,3}.$ At the same time, it affects the coefficient in front of $e_2$ in the bracket $[e_{n+1},e_{n+1}],$ which we change back to $a_{2,n+1}$. 
\item Then we apply the transformation $e^{\prime}_i=e_i,(1\leq i\leq n),e^{\prime}_{n+1}=e_{n+1}+\sum_{k=3}^{n-1}a_{k+1,1}e_{k}$ to remove $a_{k+1,1}$ in $\mathscr{R}_{e_{n+1}}$ and $-a_{k+1,1}$ in $\mathscr{L}_{e_{n+1}}$ from the entries in the $(k+1,1)^{st}$ positions, where $(3\leq k\leq n-1).$ The transformation changes the entry in the $(2,1)^{st}$ position in $\mathscr{R}_{e_{n+1}}$ to $2a_{4,1}+a_{2,1}-a_{4,3},$ the entries in the $(2,3)^{rd}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}}$ to $a_{2,3}+a_{4,1}.$ It also affects the coefficient in front of $e_2$ in $[e_{n+1},e_{n+1}],$ which we rename back by $a_{2,n+1}.$ We assign $2a_{4,1}+a_{2,1}-a_{4,3}:=a_{2,1}$ and $a_{2,3}+a_{4,1}:=a_{2,3}.$ Then $B_{2,1}-a_{4,3}:=\frac{(2b-a)a_{2,1}+2(a-b)a_{2,3}}{a}.$ \item Applying the transformation $e^{\prime}_j=e_j,(1\leq j\leq n),e^{\prime}_{n+1}=e_{n+1}+\frac{a_{2,n+1}}{2b}e_2,$ we remove the coefficient $a_{2,n+1}$ in front of $e_2$ in $[e_{n+1},e_{n+1}].$ \end{itemize} \item[(2)] The right (not a derivation) and left (a derivation) multiplication operators restricted to the nilradical are as follows: $$\mathscr{R}_{e_{n+1}}=\left[\begin{smallmatrix} a & 0 & 0 & 0&0&&\cdots &0& \cdots&0 & 0&0 \\ a_{2,1} & 0 & a_{2,3}& (n-4)a &0& & \cdots &0&\cdots & 0& 0&0\\ (2-n)a & 0 & (3-n)a & 0 & 0& &\cdots &0&\cdots &0& 0&0 \\ a_{4,1} & 0 & a_{4,3} & (4-n)a &0 & &\cdots&0&\cdots &0 & 0&0\\ a_{5,1} & 0 & a_{5,3} & a_{4,3} & (5-n)a &&\cdots &0 &\cdots&0 & 0&0 \\ \boldsymbol{\cdot} & \boldsymbol{\cdot} & \boldsymbol{\cdot} & a_{5,3} & a_{4,3} &\ddots& &\vdots &&\vdots & \vdots&\vdots \\ \vdots & \vdots & \vdots &\vdots &\vdots &\ddots& \ddots&\vdots &&\vdots & \vdots&\vdots \\ a_{i,1} & 0 & a_{i,3} & a_{i-1,3} & a_{i-2,3} &\cdots&a_{4,3}& (i-n)a&\cdots&0 & 0&0\\ \vdots & \vdots & \vdots &\vdots &\vdots&&\vdots &\vdots &&\vdots & \vdots&\vdots \\ a_{n-1,1} & 0 & a_{n-1,3}& a_{n-2,3}& a_{n-3,3}&\cdots &a_{n-i+3,3} &a_{n-i+2,3}&\cdots&a_{4,3} &-a& 0\\ a_{n,1} & 0 & a_{n,3}& a_{n-1,3}& a_{n-2,3}&\cdots &a_{n-i+4,3} &a_{n-i+3,3}&\cdots&a_{5,3} &a_{4,3}& 0 \end{smallmatrix}\right],$$ $$\mathscr{L}_{e_{n+1}}=\left[\begin{smallmatrix} -a & 0 & 0 & 0&0&&\cdots &0& \cdots&0 & 0&0 \\ B_{2,1} & (2n-6)a & a_{2,3}+2a_{4,3}& (n-2)a &0& & \cdots &0&\cdots & 0& 0&0\\ (n-2)a & 0 & (n-3)a & 0 & 0& &\cdots &0&\cdots &0& 0&0 \\ -a_{4,1} & 0 & -a_{4,3} & (n-4)a &0 & &\cdots&0&\cdots &0 & 0&0\\ -a_{5,1} & 0 & -a_{5,3} & -a_{4,3} & (n-5)a &&\cdots &0 &\cdots&0 & 0&0 \\ \boldsymbol{\cdot} & \boldsymbol{\cdot} & \boldsymbol{\cdot} & -a_{5,3} & -a_{4,3} &\ddots& &\vdots &&\vdots & \vdots&\vdots \\ \vdots & \vdots & \vdots &\vdots &\vdots &\ddots& \ddots&\vdots &&\vdots & \vdots&\vdots \\ -a_{i,1} & 0 & -a_{i,3} & -a_{i-1,3} & -a_{i-2,3} &\cdots&-a_{4,3}& (n-i)a&\cdots&0 & 0&0\\ \vdots & \vdots & \vdots &\vdots &\vdots&&\vdots &\vdots &&\vdots & \vdots&\vdots \\ -a_{n-1,1} & 0 & -a_{n-1,3}& -a_{n-2,3}& -a_{n-3,3}&\cdots &-a_{n-i+3,3} &-a_{n-i+2,3}&\cdots&-a_{4,3} &a& 0\\ -a_{n,1} & 0 & -a_{n,3}& -a_{n-1,3}& -a_{n-2,3}&\cdots &-a_{n-i+4,3} &-a_{n-i+3,3}&\cdots&-a_{5,3} &-a_{4,3}&0 \end{smallmatrix}\right].$$ \begin{itemize} \item We apply the transformation $e^{\prime}_k=e_k,(1\leq k\leq n),e^{\prime}_{n+1}=e_{n+1}-a_{4,3}e_1$ to remove $a_{4,3}$ in $\mathscr{R}_{e_{n+1}}$ and $-a_{4,3}$ in $\mathscr{L}_{e_{n+1}}$ from the $(i,i-1)^{st}$ positions, where $(4\leq i\leq n),$ but it affects other entries as well, such as the entry in the $(2,1)^{st}$ position in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}}$ 
that we change to $a_{2,1}-a_{4,3}$ and $B_{2,1}-a_{4,3},$ respectively. It also changes the entry in the $(2,3)^{rd}$ position in $\mathscr{L}_{e_{n+1}}$ to $a_{2,3}.$ At the same time, it affects the coefficient in front of $e_2$ in the bracket $[e_{n+1},e_{n+1}],$ which we change back to $a_{2,n+1}$. \item Then we apply the transformation $e^{\prime}_i=e_i,(1\leq i\leq n),e^{\prime}_{n+1}=e_{n+1}+\sum_{k=3}^{n-1}a_{k+1,1}e_{k}$ to remove $a_{k+1,1}$ in $\mathscr{R}_{e_{n+1}}$ and $-a_{k+1,1}$ in $\mathscr{L}_{e_{n+1}}$ from the entries in the $(k+1,1)^{st}$ positions, where $(3\leq k\leq n-1).$ It changes the entry in the $(2,1)^{st}$ position in $\mathscr{R}_{e_{n+1}}$ to $2a_{4,1}+a_{2,1}-a_{4,3},$ the entries in the $(2,3)^{rd}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}}$ to $a_{2,3}+a_{4,1}.$ It also affects the coefficient in front of $e_2$ in $[e_{n+1},e_{n+1}],$ which we rename back by $a_{2,n+1}.$ We assign $2a_{4,1}+a_{2,1}-a_{4,3}:=a_{2,1}$ and $a_{2,3}+a_{4,1}:=a_{2,3}.$ Then $B_{2,1}-a_{4,3}:=(5-2n)a_{2,1}+2(n-2)a_{2,3}.$ \item Finally the transformation $e^{\prime}_j=e_j,(1\leq j\leq n),e^{\prime}_{n+1}=e_{n+1}-\frac{a_{2,n+1}}{2(n-3)a}e_2$ removes the coefficient $a_{2,n+1}$ in front of $e_2$ in $[e_{n+1},e_{n+1}].$ \end{itemize} \item[(3)] The right (not a derivation) and left (a derivation) multiplication operators restricted to the nilradical are given below: $$\mathscr{R}_{e_{n+1}}=\left[\begin{smallmatrix} 0 & 0 & 0 & 0&0&&\cdots &0& \cdots&0 & 0&0 \\ a_{2,3}+a_{4,3}-a_{4,1} & 0 & a_{2,3}& -b &0& & \cdots &0&\cdots & 0& 0&0\\ b & 0 & b & 0 & 0& &\cdots &0&\cdots &0& 0&0 \\ a_{4,1} & 0 & a_{4,3} & b &0 & &\cdots&0&\cdots &0 & 0&0\\ a_{5,1} & 0 & a_{5,3} & a_{4,3} & b &&\cdots &0 &\cdots&0 & 0&0 \\ \boldsymbol{\cdot} & \boldsymbol{\cdot} & \boldsymbol{\cdot} & a_{5,3} & a_{4,3} &\ddots& &\vdots &&\vdots & \vdots&\vdots \\ \vdots & \vdots & \vdots &\vdots &\vdots &\ddots& \ddots&\vdots &&\vdots & \vdots&\vdots \\ a_{i,1} & 0 & a_{i,3} & a_{i-1,3} & a_{i-2,3} &\cdots&a_{4,3}& b&\cdots&0 & 0&0\\ \vdots & \vdots & \vdots &\vdots &\vdots&&\vdots &\vdots &&\vdots & \vdots&\vdots \\ a_{n-1,1} & 0 & a_{n-1,3}& a_{n-2,3}& a_{n-3,3}&\cdots &a_{n-i+3,3} &a_{n-i+2,3}&\cdots&a_{4,3} &b& 0\\ a_{n,1} & 0 & a_{n,3}& a_{n-1,3}& a_{n-2,3}&\cdots &a_{n-i+4,3} &a_{n-i+3,3}&\cdots&a_{5,3} &a_{4,3}& b \end{smallmatrix}\right],$$ $$\mathscr{L}_{e_{n+1}}=\left[\begin{smallmatrix} 0 & 0 & 0 & 0&0&&\cdots &0& \cdots&0 & 0&0 \\ b_{2,1} & -2b & a_{2,3}+2a_{4,3}&-b &0& & \cdots &0&\cdots & 0& 0&0\\ -b & 0 & -b & 0 & 0& &\cdots &0&\cdots &0& 0&0 \\ -a_{4,1} & 0 & -a_{4,3} & -b &0 & &\cdots&0&\cdots &0 & 0&0\\ -a_{5,1} & 0 & -a_{5,3} & -a_{4,3} & -b &&\cdots &0 &\cdots&0 & 0&0 \\ \boldsymbol{\cdot} & \boldsymbol{\cdot} & \boldsymbol{\cdot} & -a_{5,3} & -a_{4,3} &\ddots& &\vdots &&\vdots & \vdots&\vdots \\ \vdots & \vdots & \vdots &\vdots &\vdots &\ddots& \ddots&\vdots &&\vdots & \vdots&\vdots \\ -a_{i,1} & 0 & -a_{i,3} & -a_{i-1,3} & -a_{i-2,3} &\cdots&-a_{4,3}& -b&\cdots&0 & 0&0\\ \vdots & \vdots & \vdots &\vdots &\vdots&&\vdots &\vdots &&\vdots & \vdots&\vdots \\ -a_{n-1,1} & 0 & -a_{n-1,3}& -a_{n-2,3}& -a_{n-3,3}&\cdots &-a_{n-i+3,3} &-a_{n-i+2,3}&\cdots&-a_{4,3} &-b& 0\\ -a_{n,1} & 0 & -a_{n,3}& -a_{n-1,3}& -a_{n-2,3}&\cdots &-a_{n-i+4,3} &-a_{n-i+3,3}&\cdots&-a_{5,3} &-a_{4,3}&-b \end{smallmatrix}\right].$$ \begin{itemize} \item Applying the transformation $e^{\prime}_k=e_k,(1\leq k\leq n),e^{\prime}_{n+1}=e_{n+1}-a_{4,3}e_1,$ we remove $a_{4,3}$ in $\mathscr{R}_{e_{n+1}}$ 
and $-a_{4,3}$ in $\mathscr{L}_{e_{n+1}}$ from the $(i,i-1)^{st}$ positions, where $(4\leq i\leq n),$ but the transformation affects other entries, such as the entry in the $(2,1)^{st}$ position in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}},$ that we change to $a_{2,3}-a_{4,1}$ and $b_{2,1}-a_{4,3},$ respectively. It also changes the entry in the $(2,3)^{rd}$ position in $\mathscr{L}_{e_{n+1}}$ to $a_{2,3}.$ At the same time, it affects the coefficient in front of $e_2$ in the bracket $[e_{n+1},e_{n+1}],$ which we change back to $a_{2,n+1}$. \item Then we apply the transformation $e^{\prime}_i=e_i,(1\leq i\leq n),e^{\prime}_{n+1}=e_{n+1}+\sum_{k=3}^{n-1}a_{k+1,1}e_{k}$ to remove $a_{k+1,1}$ in $\mathscr{R}_{e_{n+1}}$ and $-a_{k+1,1}$ in $\mathscr{L}_{e_{n+1}}$ from the entries in the $(k+1,1)^{st}$ positions, where $(3\leq k\leq n-1).$ This transformation changes the entry in the $(2,1)^{st}$ position in $\mathscr{R}_{e_{n+1}}$ as well as the entries in the $(2,3)^{rd}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}}$ to $a_{2,3}+a_{4,1}.$ It also affects the coefficient in front of $e_2$ in $[e_{n+1},e_{n+1}],$ which we rename back by $a_{2,n+1}.$ We assign $a_{2,3}+a_{4,1}:=a_{2,3}$ and $b_{2,1}-a_{4,3}:=b_{2,1}.$ \item Finally applying the transformation $e^{\prime}_j=e_j,(1\leq j\leq n),e^{\prime}_{n+1}=e_{n+1}+\frac{a_{2,n+1}}{2b}e_2,$ we remove the coefficient $a_{2,n+1}$ in front of $e_2$ in $[e_{n+1},e_{n+1}].$ \end{itemize} \item[(4)] The right (not a derivation) and left (a derivation) multiplication operators restricted to the nilradical are as follows: $$\mathscr{R}_{e_{n+1}}=\left[\begin{smallmatrix} a & 0 & 0 & 0&0&&\cdots &0& \cdots&0 & 0&0 \\ a_{2,1} & 0 & a_{2,3}& -a &0& & \cdots &0&\cdots & 0& 0&0\\ -a & 0 & 0 & 0 & 0& &\cdots &0&\cdots &0& 0&0 \\ a_{4,1} & 0 & a_{4,3} & a &0 & &\cdots&0&\cdots &0 & 0&0\\ a_{5,1} & 0 & a_{5,3} & a_{4,3} & 2a &&\cdots &0 &\cdots&0 & 0&0 \\ \boldsymbol{\cdot} & \boldsymbol{\cdot} & \boldsymbol{\cdot} & a_{5,3} & a_{4,3} &\ddots& &\vdots &&\vdots & \vdots&\vdots \\ \vdots & \vdots & \vdots &\vdots &\vdots &\ddots& \ddots&\vdots &&\vdots & \vdots&\vdots \\ a_{i,1} & 0 & a_{i,3} & a_{i-1,3} & a_{i-2,3} &\cdots&a_{4,3}& (i-3)a&\cdots&0 & 0&0\\ \vdots & \vdots & \vdots &\vdots &\vdots&&\vdots &\vdots &&\vdots & \vdots&\vdots \\ a_{n-1,1} & 0 & a_{n-1,3}& a_{n-2,3}& a_{n-3,3}&\cdots &a_{n-i+3,3} &a_{n-i+2,3}&\cdots&a_{4,3} &(n-4)a& 0\\ a_{n,1} & 0 & a_{n,3}& a_{n-1,3}& a_{n-2,3}&\cdots &a_{n-i+4,3} &a_{n-i+3,3}&\cdots&a_{5,3} &a_{4,3}&(n-3)a \end{smallmatrix}\right],$$ $$\mathscr{L}_{e_{n+1}}=\left[\begin{smallmatrix} -a & 0 & 0 & 0&0&&\cdots &0& \cdots&0 & 0&0 \\ a_{2,3}-a_{2,1}+b_{2,3} & 0 & b_{2,3}& a &0& & \cdots &0&\cdots & 0& 0&0\\ a & 0 & 0 & 0 & 0& &\cdots &0&\cdots &0& 0&0 \\ -a_{4,1} & 0 & -a_{4,3} & -a &0 & &\cdots&0&\cdots &0 & 0&0\\ -a_{5,1} & 0 & -a_{5,3} & -a_{4,3} & -2a &&\cdots &0 &\cdots&0 & 0&0 \\ \boldsymbol{\cdot} & \boldsymbol{\cdot} & \boldsymbol{\cdot} & -a_{5,3} & -a_{4,3} &\ddots& &\vdots &&\vdots & \vdots&\vdots \\ \vdots & \vdots & \vdots &\vdots &\vdots &\ddots& \ddots&\vdots &&\vdots & \vdots&\vdots \\ -a_{i,1} & 0 & -a_{i,3} & -a_{i-1,3} & -a_{i-2,3} &\cdots&-a_{4,3}& (3-i)a&\cdots&0 & 0&0\\ \vdots & \vdots & \vdots &\vdots &\vdots&&\vdots &\vdots &&\vdots & \vdots&\vdots \\ -a_{n-1,1} & 0 & -a_{n-1,3}& -a_{n-2,3}& -a_{n-3,3}&\cdots &-a_{n-i+3,3} &-a_{n-i+2,3}&\cdots&-a_{4,3} &(4-n)a& 0\\ -a_{n,1} & 0 & -a_{n,3}& -a_{n-1,3}& -a_{n-2,3}&\cdots &-a_{n-i+4,3} &-a_{n-i+3,3}&\cdots&-a_{5,3} 
&-a_{4,3}& (3-n)a \end{smallmatrix}\right].$$ \begin{itemize} \item We continue with the transformation $e^{\prime}_k=e_k,(1\leq k\leq n),e^{\prime}_{n+1}=e_{n+1}-a_{4,3}e_1$ to remove $a_{4,3}$ in $\mathscr{R}_{e_{n+1}}$ and $-a_{4,3}$ in $\mathscr{L}_{e_{n+1}}$ from the $(i,i-1)^{st}$ positions, where $(4\leq i\leq n),$ but other entries are affected as well, such as the entry in the $(2,1)^{st}$ position in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}},$ that we change to $a_{2,1}-a_{4,3}$ and $a_{2,3}-a_{2,1}-a_{4,3}+b_{2,3},$ respectively. The transformation also changes the entry in the $(2,3)^{rd}$ position in $\mathscr{L}_{e_{n+1}}$ to $b_{2,3}-2a_{4,3}$ and affects the coefficient in front of $e_2$ in the bracket $[e_{n+1},e_{n+1}],$ which we change back to $a_{2,n+1}$. \item Applying the transformation $e^{\prime}_i=e_i,(1\leq i\leq n),e^{\prime}_{n+1}=e_{n+1}+\sum_{k=3}^{n-1}a_{k+1,1}e_{k},$ we remove $a_{k+1,1}$ in $\mathscr{R}_{e_{n+1}}$ and $-a_{k+1,1}$ in $\mathscr{L}_{e_{n+1}}$ from the entries in the $(k+1,1)^{st}$ positions, where $(3\leq k\leq n-1).$ This transformation changes the entry in the $(2,1)^{st}$ position in $\mathscr{R}_{e_{n+1}}$ to $2a_{4,1}+a_{2,1}-a_{4,3},$ the entries in the $(2,3)^{rd}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}}$ to $a_{2,3}+a_{4,1}$ and $a_{4,1}+b_{2,3}-2a_{4,3},$ respectively. It also affects the coefficient in front of $e_2$ in $[e_{n+1},e_{n+1}],$ that we rename back by $a_{2,n+1}.$ We assign $2a_{4,1}+a_{2,1}-a_{4,3}:=a_{2,1},a_{2,3}+a_{4,1}:=a_{2,3}$ and $a_{4,1}+b_{2,3}-2a_{4,3}:=b_{2,3}.$ \end{itemize} \item[(5)] We apply the transformation $e^{\prime}_i=e_i,(1\leq i\leq 4),e^{\prime}_5=e_5-A_{4,3}e_1- \frac{1}{2(b+c)}(B_{2,1}A_{4,3}+a_{2,1}A_{4,3}+2a_{4,1}A_{4,3}-A^2_{4,3}-a_{2,3}a_{4,1}-a^2_{4,1}-a_{4,1}b_{2,3}-a_{2,5})e_2+a_{4,1}e_3.$ Then we assign $a_{2,3}+a_{4,1}:=a_{2,3}$ and $b_{2,3}-2a_{2,1}-3a_{4,1}:=b_{2,3}.$ \item[(6)] One applies the transformation $e^{\prime}_i=e_i,(1\leq i\leq 4),e^{\prime}_5=e_5-A_{4,3}e_1+ \frac{1}{2(a-c)}(B_{2,1}A_{4,3}+a_{2,1}A_{4,3}+2a_{4,1}A_{4,3}-A^2_{4,3}-a_{2,3}a_{4,1}-a^2_{4,1}-a_{4,1}b_{2,3}-a_{2,5})e_2+a_{4,1}e_3.$ Then we assign $a_{2,3}+a_{4,1}:=a_{2,3}$ and $b_{2,3}-2a_{2,1}-3a_{4,1}:=b_{2,3}.$ \item[(7)] We apply the transformation $e^{\prime}_i=e_i,(1\leq i\leq 4),e^{\prime}_5=e_5-a_{4,3}e_1+a_{4,1}e_3$ and rename the coefficient in front of $e_2$ in $[e_5,e_5]$ back by $a_{2,5}.$ Then we assign $a_{2,1}+2a_{4,1}-a_{4,3}:=a_{2,1},a_{2,3}+a_{4,1}:=a_{2,3}$ and $b_{2,3}+a_{4,1}-2a_{4,3}:=b_{2,3}.$ \item[(8)] The transformation is as follows: $e^{\prime}_i=e_i,(1\leq i\leq 4),e^{\prime}_5=e_5-a_{4,3}e_1+ \frac{1}{2c}(2a_{2,1}a_{4,1}-4a_{2,1}a_{4,3}+3a^2_{4,1}-6a_{4,1}a_{4,3}-a^2_{4,3}+2a_{4,3}b_{2,3}+a_{2,5})e_2+a_{4,1}e_3.$ We assign $a_{2,1}+2a_{4,1}-a_{4,3}:=a_{2,1}$ and $b_{2,3}+a_{4,1}-2a_{4,3}:=b_{2,3}.$ \item[(9)] The transformation is $e^{\prime}_i=e_i,(1\leq i\leq 4),$ $e^{\prime}_5=e_5-A_{4,3}e_1+ \frac{1}{4(b+c)}(2A^2_{4,3}-a_{2,3}A_{4,3}-2a_{4,1}A_{4,3}-2b_{2,1}A_{4,3}-b_{2,3}A_{4,3}+2a_{2,3}a_{4,1}+2a^2_{4,1}+2a_{4,1}b_{2,3}+2a_{2,5})e_2+a_{4,1}e_3.$ We assign $a_{2,3}+a_{4,1}:=a_{2,3}$ and $b_{2,3}+a_{4,1}-2b_{2,1}:=b_{2,3}.$ \item[(10)] We apply the transformation $e^{\prime}_i=e_i,(1\leq i\leq 4),e^{\prime}_5=e_5-a_{4,3}e_1+a_{4,1}e_3$ and rename the coefficient in front of $e_2$ in $[e_5,e_5]$ back by $a_{2,5}.$ We assign $a_{2,3}+a_{4,1}:=a_{2,3},b_{2,1}-a_{4,3}:=b_{2,1}$ and $b_{2,3}+a_{4,1}-2a_{4,3}:=b_{2,3}.$ \end{enumerate} \end{proof} 
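\begin{remark}
For completeness, we record why case $(8)$ of Theorem \ref{TheoremLL4} does not reappear in Theorem \ref{TheoremLL4Absorption}: as noted in the footnote to that case, its outer derivation is nilpotent. Indeed, a direct computation with the matrix $\mathscr{L}_{e_5}$ given there yields
$$\mathscr{L}^2_{e_{5}}=\left[\begin{smallmatrix} 0 & 0 & 0 & 0\\ a\left(a_{2,1}-a_{2,3}\right) & 0 & a\left(a_{2,1}-a_{2,3}\right) & 0 \\ 0 & 0 & 0 & 0\\ a\left(b_{4,3}-b_{4,1}\right) & 0 & a\left(b_{4,3}-b_{4,1}\right) & 0 \end{smallmatrix}\right],\qquad \mathscr{L}^3_{e_{5}}=0,$$
so $\mathscr{L}_{e_5}$ is nilpotent and that case is excluded from further consideration.
\end{remark}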
\allowdisplaybreaks \begin{theorem}\label{LL4(Change of Basis)} There are eight solvable indecomposable left Leibniz algebras up to isomorphism with a codimension one nilradical $\mathcal{L}^4,(n\geq4),$ which are given below: \begin{equation} \begin{array}{l} \displaystyle \nonumber (i)\,\,\, l_{n+1,1}: [e_1,e_{n+1}]=e_1+(a-1)e_3, [e_3,e_{n+1}]=ae_3,[e_4,e_{n+1}]=(a+1)(e_4-e_2),\\ \displaystyle[e_{i},e_{n+1}]=\left(a+i-3\right)e_{i},[e_{n+1},e_1]=-e_1-(a-1)e_3,[e_{n+1},e_2]=-2ae_2, [e_{n+1},e_3]=-ae_3,\\ \displaystyle [e_{n+1},e_4]=(1-a)e_2-(a+1)e_4, [e_{n+1},e_i]=\left(3-i-a\right)e_i,(5\leq i\leq n),\\ \displaystyle \nonumber(ii)\,\,\, l_{n+1,2}:[e_1,e_{n+1}]=e_1+(2-n)e_3, [e_3,e_{n+1}]=(3-n)e_3,[e_4,e_{n+1}]=(n-4)(e_2-e_4),\\ \displaystyle[e_{i},e_{n+1}]=\left(i-n\right)e_{i},[e_{n+1},e_{n+1}]=e_n,[e_{n+1},e_1]=-e_1+(n-2)e_3,[e_{n+1},e_2]=(2n-6)e_2,\\ \displaystyle [e_{n+1},e_3]=(n-3)e_3,[e_{n+1},e_4]=(n-2)e_2+(n-4)e_4,[e_{n+1},e_i]=\left(n-i\right)e_i,\\ \displaystyle (5\leq i\leq n),\\ \displaystyle (iii)\,\,\,l_{n+1,3}: [e_1,e_{n+1}]=e_3,[e_4,e_{n+1}]=-e_2,[e_{i},e_{n+1}]=e_{i}+\epsilon e_{i+2}+\sum_{k=i+3}^n{b_{k-i-2}e_k},\\ \displaystyle [e_{n+1},e_1]=-e_3,[e_{n+1},e_2]=-2e_2,[e_{n+1},e_4]=-e_2,[e_{n+1},e_i]=-e_i-\epsilon e_{i+2}-\sum_{k=i+3}^n{b_{k-i-2}e_k},\\ \displaystyle (\epsilon=0,1,3\leq i\leq n),\\ \displaystyle \nonumber(iv)\,\,\,\mathfrak{g}} \def\q{\mathfrak{q}_{n+1,4}: [e_1,e_{n+1}]=e_1-e_3, [e_3,e_{n+1}]=fe_2,[e_4,e_{n+1}]=-e_2+e_4,[e_{i},e_{n+1}]=(i-3)e_{i},\\ \displaystyle [e_{n+1},e_{n+1}]=\epsilon e_2,[e_{n+1},e_1]=-e_1+\left(d+f\right)e_2+e_3,[e_{n+1},e_3]=de_2,\\ \displaystyle [e_{n+1},e_4]=e_2-e_4,[e_{n+1},e_i]=(3-i)e_i,(5\leq i\leq n;\epsilon=0,1; if\,\,\epsilon=0,\,\,then\,\,d^2+f^2\neq0),\\ \displaystyle \nonumber(v)\,\,\,l_{5,5}:[e_1,e_{5}]=ae_1+(b-a+1)e_3, [e_3,e_{5}]=e_1+be_3,[e_{4},e_5]=(a+b)(e_4-e_2),\\ \displaystyle [e_{5},e_1]=-ae_1+(a-b-1)e_3,[e_5,e_{2}]=-2(b+1)e_2,[e_{5},e_3]=-e_1-be_3,\\ \displaystyle [e_5,e_4]=(a-b-2)e_2-(a+b)e_4,(if\,\,b=-1,then\,\,a\neq1),\\ \displaystyle \nonumber(vi)\,\,l_{5,6}: [e_1,e_{5}]=ae_1+(1-2a)e_3, [e_3,e_{5}]=e_1-ae_3,[e_{5},e_{5}]=e_4,[e_{5},e_1]=-ae_1+\\ \displaystyle (2a-1)e_3,[e_5,e_{2}]=2(a-1)e_2,[e_{5},e_3]=ae_3-e_1,[e_5,e_4]=2(a-1)e_2,(a\neq1),\\ \displaystyle \nonumber(vii)\,\,\mathfrak{g}} \def\q{\mathfrak{q}_{5,7}:[e_1,e_{5}]=a(e_1-e_3), [e_3,e_{5}]=e_1+fe_2-e_3,[e_4,e_{5}]=(1-a)\left(e_2-e_4\right),\\ \displaystyle [e_{5},e_{5}]=\epsilon e_2,[e_{5},e_1]=-ae_1+\left(d+f\right)e_2+ae_3,[e_{5},e_3]=-e_1+de_2+e_3,\\ \displaystyle [e_{5},e_4]=(a-1)\left(e_2-e_4\right),(\epsilon=0,1;if\,\,\epsilon=0,then\,\,d^2+f^2\neq0;a\neq1),\\ \displaystyle \nonumber(viii)\,\mathfrak{g}} \def\q{\mathfrak{q}_{5,8}: [e_1,e_{5}]=ce_2, [e_3,e_{5}]=e_1-e_3,[e_4,e_{5}]=e_2-e_4,[e_{5},e_{5}]=\epsilon e_2,[e_{5},e_1]=\left(c+d\right)e_2,\\ \displaystyle [e_{5},e_3]=-e_1+\left(d+2c\right)e_2+e_3,[e_{5},e_4]=e_4-e_2,(c\neq0,\epsilon=0,1). \end{array} \end{equation} \end{theorem} \vskip 20pt \begin{proof} One applies the change of basis transformations keeping the nilradical $\mathcal{L}^4$ given in $(\ref{L4})$ unchanged. 
\begin{enumerate}[noitemsep, topsep=1pt] \allowdisplaybreaks \item[(1)] We have the right (not a derivation) and the left (a derivation) multiplication operators restricted to the nilradical are as follows: $\mathscr{R}_{e_{n+1}}=\left[\begin{smallmatrix} a & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ a_{2,1} & 0 & a_{2,3}& -a-b &0&0 & \cdots &&0 & 0& 0\\ b-a & 0 & b & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & a+b &0 &0 &\cdots&&0 &0 & 0\\ 0 & 0 & a_{5,3} & 0 & 2a+b &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & a_{5,3} & 0 &3a+b&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & a_{n-2,3}& a_{n-3,3}& \cdots&\cdots &a_{5,3}&0&(n-5)a+b &0& 0\\ 0 & 0 & a_{n-1,3}& a_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&a_{5,3}&0 &(n-4)a+b& 0\\ 0 & 0 & a_{n,3}& a_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&a_{5,3} &0& (n-3)a+b \end{smallmatrix}\right],$ $\mathscr{L}_{e_{n+1}}=\left[\begin{smallmatrix} -a & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ \mathcal{B}_{2,1} & -2b & a_{2,3}& a-b &0&0 & \cdots &&0 & 0& 0\\ a-b & 0 & -b & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & -a-b &0 &0 &\cdots&&0 &0 & 0\\ 0 & 0 & -a_{5,3} & 0 & -2a-b &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & -a_{5,3} & 0 &-3a-b&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & -a_{n-2,3}& -a_{n-3,3}& \cdots&\cdots &-a_{5,3}&0&(5-n)a-b &0& 0\\ 0 & 0 & -a_{n-1,3}& -a_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&-a_{5,3}&0 &(4-n)a-b& 0\\ 0 & 0 & -a_{n,3}& -a_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&-a_{5,3} &0& (3-n)a-b \end{smallmatrix}\right].$ \begin{itemize}[noitemsep, topsep=0pt] \allowdisplaybreaks \item We apply the transformation $e^{\prime}_1=e_1,e^{\prime}_2=e_2,e^{\prime}_i=e_i-\frac{a_{k-i+3,3}}{(k-i)a}e_k,(3\leq i\leq n-2, i+2\leq k\leq n,n\geq5),e^{\prime}_{j}=e_{j},(n-1\leq j\leq n+1),$ where $k$ is fixed, renaming all the affected entries back. This transformation removes $a_{5,3},a_{6,3},...,a_{n,3}$ in $\mathscr{R}_{e_{n+1}}$ and $-a_{5,3},-a_{6,3},...,-a_{n,3}$ in $\mathscr{L}_{e_{n+1}}.$ Besides it introduces the entries in the $(5,1)^{st},(6,1)^{st},...,(n,1)^{st}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}},$ which we set to be $a_{5,1},a_{6,1},...,a_{n,1}$ and $-a_{5,1},-a_{6,1},...,-a_{n,1},$ respectively. \noindent $(I)$ Suppose $b\neq\frac{a}{2}.$ \item Applying the transformation $e^{\prime}_1=e_1+\frac{1}{2b-a}\left(\mathcal{B}_{2,1}+\frac{(b-a)a_{2,3}}{b}\right)e_2,e^{\prime}_2=e_2, e^{\prime}_3=e_3+\frac{a_{2,3}}{b}e_2,e^{\prime}_{i}=e_{i}, (4\leq i\leq n,n\geq4),e^{\prime}_{n+1}=e_{n+1}-a_{5,1}e_2+\sum_{k=4}^{n-1}a_{k+1,1}e_{k},$ we remove $a_{2,1}$ and $\mathcal{B}_{2,1}$ from the $(2,1)^{st}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}},$ respectively. This transformation also removes $a_{2,3}$ from the $(2,3)^{rd}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}}$ as well as $a_{k+1,1}$ in $\mathscr{R}_{e_{n+1}}$ and $-a_{k+1,1}$ in $\mathscr{L}_{e_{n+1}}$ from the entries in the $(k+1,1)^{st}$ positions, where $(4\leq k\leq n-1)$. (For $n=4,$ we direct to the Remark \ref{Remark{a_{5,1}}}.) 
\item Then we scale $a$ to unity applying the transformation $e^{\prime}_i=e_i,(1\leq i\leq n,n\geq4),e^{\prime}_{n+1}=\frac{e_{n+1}}{a}.$ Renaming $\frac{b}{a}$ by $b,$ we obtain a continuous family of Leibniz algebras: \begin{equation} \left\{ \begin{array}{l} \displaystyle [e_1,e_{n+1}]=e_1+(b-1)e_3, [e_3,e_{n+1}]=be_3,[e_4,e_{n+1}]=(b+1)(e_4-e_2),\\ \displaystyle[e_{i},e_{n+1}]=\left(b+i-3\right)e_{i},[e_{n+1},e_1]=-e_1-(b-1)e_3,\\ \displaystyle [e_{n+1},e_2]=-2be_2,[e_{n+1},e_3]=-be_3,[e_{n+1},e_4]=(1-b)e_2-(b+1)e_4, \\ \displaystyle [e_{n+1},e_i]=\left(3-i-b\right)e_i,(5\leq i\leq n),(b\neq0,b\neq\frac{1}{2},b\neq3-n). \end{array} \right. \label{l_{n+1,1}} \end{equation} \noindent $(II)$ Suppose $b:=\frac{a}{2}.$ We have that $\mathcal{B}_{2,1}=a_{2,3}.$ \item We apply the transformation $e^{\prime}_1=e_1+\frac{a_{2,1}+a_{2,3}}{a}e_2,e^{\prime}_2=e_2, e^{\prime}_3=e_3+\frac{2a_{2,3}}{a}e_2,e^{\prime}_{i}=e_{i}, (4\leq i\leq n,n\geq4),e^{\prime}_{n+1}=e_{n+1}-a_{5,1}e_2+\sum_{k=4}^{n-1}a_{k+1,1}e_{k}$ to remove $a_{2,3}$ from the $(2,1)^{st},(2,3)^{rd}$ positions in $\mathscr{L}_{e_{n+1}}$ and from the $(2,3)^{rd}$ position in $\mathscr{R}_{e_{n+1}}.$ This transformation also removes $a_{2,1}$ from the $(2,1)^{st}$ position in $\mathscr{R}_{e_{n+1}}$ as well as $a_{k+1,1}$ and $-a_{k+1,1}$ from the entries in the $(k+1,1)^{st}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}},$ respectively, where $(4\leq k\leq n-1)$. (For $n=4,$ we refer to Remark \ref{Remark{a_{5,1}}}.) \item To scale $a$ to unity, we apply the transformation $e^{\prime}_i=e_i,(1\leq i\leq n,n\geq4),e^{\prime}_{n+1}=\frac{e_{n+1}}{a}$ and obtain a limiting case of $(\ref{l_{n+1,1}})$ with $b=\frac{1}{2}$ given below: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=e_1-\frac{e_3}{2}, [e_3,e_{n+1}]=\frac{e_3}{2},[e_4,e_{n+1}]=\frac{3}{2}(e_4-e_2),[e_{i},e_{n+1}]=\left(i-\frac{5}{2}\right)e_{i},\\ \displaystyle[e_{n+1},e_1]=-e_1+\frac{e_3}{2},[e_{n+1},e_2]=-e_2,[e_{n+1},e_3]=-\frac{e_3}{2},[e_{n+1},e_4]=\frac{e_2}{2}-\frac{3e_4}{2},\\ \displaystyle [e_{n+1},e_i]=\left(\frac{5}{2}-i\right)e_i,(5\leq i\leq n). \end{array} \right. 
\end{equation} \end{itemize} \item[(2)] We have the right (not a derivation) and the left (a derivation) multiplication operators restricted to the nilradical are as follows: $$\mathscr{R}_{e_{n+1}}\left[\begin{smallmatrix} a & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ a_{2,1} & 0 & a_{2,3}& (n-4)a &0&0 & \cdots &&0 & 0& 0\\ (2-n)a & 0 & (3-n)a & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & (4-n)a &0 &0 &\cdots&&0 &0 & 0\\ 0 & 0 & a_{5,3} & 0 & (5-n)a &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & a_{5,3} & 0 &(6-n)a&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & a_{n-2,3}& a_{n-3,3}& \cdots&\cdots &a_{5,3}&0&-2a &0& 0\\ 0 & 0 & a_{n-1,3}& a_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&a_{5,3}&0 &-a& 0\\ 0 & 0 & a_{n,3}& a_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&a_{5,3} &0&0 \end{smallmatrix}\right],$$ $$\mathscr{L}_{e_{n+1}}=\left[\begin{smallmatrix} -a & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ \mathcal{B}_{2,1} & (2n-6)a & a_{2,3}& (n-2)a &0&0 & \cdots &&0 & 0& 0\\ (n-2)a & 0 & (n-3)a & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & (n-4)a &0 &0 &\cdots&&0 &0 & 0\\ 0 & 0 & -a_{5,3} & 0 & (n-5)a &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & -a_{5,3} & 0 &(n-6)a&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & -a_{n-2,3}& -a_{n-3,3}& \cdots&\cdots &-a_{5,3}&0&2a &0& 0\\ 0 & 0 & -a_{n-1,3}& -a_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&-a_{5,3}&0 &a& 0\\ 0 & 0 & -a_{n,3}& -a_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&-a_{5,3} &0& 0 \end{smallmatrix}\right].$$ \begin{itemize}[noitemsep, topsep=0pt] \allowdisplaybreaks \item We apply the transformation $e^{\prime}_1=e_1,e^{\prime}_2=e_2,e^{\prime}_i=e_i-\frac{a_{k-i+3,3}}{(k-i)a}e_k,(3\leq i\leq n-2, i+2\leq k\leq n,n\geq5),e^{\prime}_{j}=e_{j},(n-1\leq j\leq n+1),$ where $k$ is fixed, renaming all the affected entries back. This transformation removes $a_{5,3},a_{6,3},...,a_{n,3}$ in $\mathscr{R}_{e_{n+1}}$ and $-a_{5,3},-a_{6,3},...,-a_{n,3}$ in $\mathscr{L}_{e_{n+1}}.$ Moreover it introduces the entries in the $(5,1)^{st},(6,1)^{st},...,(n,1)^{st}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}},$ which we set to be $a_{5,1},a_{6,1},...,a_{n,1}$ and $-a_{5,1},-a_{6,1},...,-a_{n,1},$ respectively. \item Applying the transformation $e^{\prime}_1=e_1+\frac{1}{(5-2n)a}\left(\mathcal{B}_{2,1}+\frac{(2-n)a_{2,3}}{3-n}\right)e_2,e^{\prime}_2=e_2, e^{\prime}_3=e_3+\frac{a_{2,3}}{(3-n)a}e_2,e^{\prime}_{i}=e_{i}, (4\leq i\leq n,n\geq4),e^{\prime}_{n+1}=e_{n+1}-a_{5,1}e_2+\sum_{k=4}^{n-1}a_{k+1,1}e_{k},$ we remove $a_{2,1}$ and $\mathcal{B}_{2,1}$ from the $(2,1)^{st}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}},$ respectively. We also remove $a_{2,3}$ from the $(2,3)^{rd}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}}$ as well as $a_{k+1,1}$ in $\mathscr{R}_{e_{n+1}}$ and $-a_{k+1,1}$ in $\mathscr{L}_{e_{n+1}}$ from the entries in the $(k+1,1)^{st}$ positions, where $(4\leq k\leq n-1)$. (See Remark \ref{Remark{a_{5,1}}}.) 
\item To scale $a$ to unity, we apply the transformation $e^{\prime}_i=e_i,(1\leq i\leq n),e^{\prime}_{n+1}=\frac{e_{n+1}}{a}$ renaming the coefficient $\frac{a_{n,n+1}}{a^2}$ in front of $e_n$ in $[e_{n+1},e_{n+1}]$ back by $a_{n,n+1}.$ We obtain a Leibniz algebra given below: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=e_1+(2-n)e_3, [e_3,e_{n+1}]=(3-n)e_3,[e_4,e_{n+1}]=(n-4)(e_2-e_4),\\ \displaystyle [e_{i},e_{n+1}]=\left(i-n\right)e_{i},[e_{n+1},e_{n+1}]=a_{n,n+1}e_n,[e_{n+1},e_1]=-e_1+(n-2)e_3,\\ \displaystyle [e_{n+1},e_2]=(2n-6)e_2,[e_{n+1},e_3]=(n-3)e_3,[e_{n+1},e_4]=(n-2)e_2+(n-4)e_4,\\ \displaystyle [e_{n+1},e_i]=\left(n-i\right)e_i,(5\leq i\leq n), \end{array} \right. \end{equation} \end{itemize} If $a_{n,n+1}=0,$ then we have a limiting case of (\ref{l_{n+1,1}}) with $b=3-n$. If $a_{n,n+1}\neq0,$ then we scale it to $1.$ It gives us the algebra $l_{n+1,2}.$ \item[(3)] We have the right (not a derivation) and the left (a derivation) multiplication operators restricted to the nilradical are as follows: $$\mathscr{R}_{e_{n+1}}=\left[\begin{smallmatrix} 0 & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ a_{2,3} & 0 & a_{2,3}& -b &0&0 & \cdots &&0 & 0& 0\\ b & 0 & b & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & b &0 &0 &\cdots&&0 &0 & 0\\ 0 & 0 & a_{5,3} & 0 & b &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & a_{5,3} & 0 &b&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & a_{n-2,3}& a_{n-3,3}& \cdots&\cdots &a_{5,3}&0&b &0& 0\\ 0 & 0 & a_{n-1,3}& a_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&a_{5,3}&0 &b& 0\\ 0 & 0 & a_{n,3}& a_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&a_{5,3} &0& b \end{smallmatrix}\right],$$ $$\mathscr{L}_{e_{n+1}}=\left[\begin{smallmatrix} 0 & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ b_{2,1} & -2b & a_{2,3}& -b &0&0 & \cdots &&0 & 0& 0\\ -b & 0 & -b & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & -b &0 &0 &\cdots&&0 &0 & 0\\ 0 & 0 & -a_{5,3} & 0 & -b &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & -a_{5,3} & 0 &-b&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & -a_{n-2,3}& -a_{n-3,3}& \cdots&\cdots &-a_{5,3}&0&-b &0& 0\\ 0 & 0 & -a_{n-1,3}& -a_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&-a_{5,3}&0 &-b& 0\\ 0 & 0 & -a_{n,3}& -a_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&-a_{5,3} &0&-b \end{smallmatrix}\right].$$ \begin{itemize}[noitemsep, topsep=0pt] \allowdisplaybreaks \item Applying the transformation $e^{\prime}_1=e_1+\frac{b_{2,1}+a_{2,3}}{2b}e_2,e^{\prime}_2=e_2, e^{\prime}_3=e_3+\frac{a_{2,3}}{b}e_2,e^{\prime}_{i}=e_{i}, (4\leq i\leq n+1),$ we remove $b_{2,1}$ from the $(2,1)^{st}$ position in $\mathscr{L}_{e_{n+1}}$ and $a_{2,3}$ from the $(2,1)^{st}$ position in $\mathscr{R}_{e_{n+1}}$ and from the $(2,3)^{rd}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}}$ keeping other entries unchanged. \item To scale $b$ to unity, we apply the transformation $e^{\prime}_i=e_i,(1\leq i\leq n),e^{\prime}_{n+1}=\frac{e_{n+1}}{b}.$ Then we rename $\frac{a_{5,3}}{b},\frac{a_{6,3}}{b},...,\frac{a_{n,3}}{b}$ by $a_{5,3},a_{6,3},...,a_{n,3},$ respectively. 
We obtain a Leibniz algebra \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=e_3,[e_4,e_{n+1}]=-e_2,[e_{i},e_{n+1}]=e_{i}+\sum_{k=i+2}^n{a_{k-i+3,3}e_k},[e_{n+1},e_1]=-e_3,\\ \displaystyle [e_{n+1},e_2]=-2e_2,[e_{n+1},e_4]=-e_2, [e_{n+1},e_i]=-e_i-\sum_{k=i+2}^n{a_{k-i+3,3}e_k},(3\leq i\leq n), \end{array} \right. \end{equation} If $a_{5,3}\neq0,(n\geq5),$ then we scale it to $1.$ We also rename all the affected entries back and then we rename $a_{6,3},...,a_{n,3}$ by $b_1,...,b_{n-5},$ respectively. We combine with the case when $a_{5,3}=0$ and obtain a Leibniz algebra $l_{n+1,3}.$ \begin{remark} If $n=4,$ then $\epsilon=0.$ \end{remark} \end{itemize} \item[(4)] We have the right (not a derivation) and the left (a derivation) multiplication operators restricted to the nilradical are as follows: $$\mathscr{R}_{e_{n+1}}=\left[\begin{smallmatrix} a & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ a_{2,1} & 0 & a_{2,3}& -a &0&0 & \cdots &&0 & 0& 0\\ -a & 0 & 0 & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & a &0 &0 &\cdots&&0 &0 & 0\\ 0 & 0 & a_{5,3} & 0 & 2a &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & a_{5,3} & 0 &3a&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & a_{n-2,3}& a_{n-3,3}& \cdots&\cdots &a_{5,3}&0&(n-5)a &0& 0\\ 0 & 0 & a_{n-1,3}& a_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&a_{5,3}&0 &(n-4)a& 0\\ 0 & 0 & a_{n,3}& a_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&a_{5,3} &0& (n-3)a \end{smallmatrix}\right],$$ $$\mathscr{L}_{e_{n+1}}=\left[\begin{smallmatrix} -a & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ a_{2,3}-a_{2,1}+b_{2,3} & 0 & b_{2,3}& a &0&0 & \cdots &&0 & 0& 0\\ a & 0 &0 & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & -a &0 &0 &\cdots&&0 &0 & 0\\ 0 & 0 & -a_{5,3} & 0 & -2a &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & -a_{5,3} & 0 &-3a&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & -a_{n-2,3}& -a_{n-3,3}& \cdots&\cdots &-a_{5,3}&0&(5-n)a &0& 0\\ 0 & 0 & -a_{n-1,3}& -a_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&-a_{5,3}&0 &(4-n)a& 0\\ 0 & 0 & -a_{n,3}& -a_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&-a_{5,3} &0& (3-n)a \end{smallmatrix}\right].$$ \begin{itemize}[noitemsep, topsep=0pt] \allowdisplaybreaks \item We apply the transformation $e^{\prime}_1=e_1,e^{\prime}_2=e_2,e^{\prime}_i=e_i-\frac{a_{k-i+3,3}}{(k-i)a}e_k,(3\leq i\leq n-2, i+2\leq k\leq n,n\geq5),e^{\prime}_{j}=e_{j},(n-1\leq j\leq n+1),$ where $k$ is fixed renaming all the affected entries back. This transformation removes $a_{5,3},a_{6,3},...,a_{n,3}$ in $\mathscr{R}_{e_{n+1}}$ and $-a_{5,3},-a_{6,3},...,-a_{n,3}$ in $\mathscr{L}_{e_{n+1}}.$ Besides it introduces the entries in the $(5,1)^{st},(6,1)^{st},...,(n,1)^{st}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}},$ which we set to be $a_{5,1},a_{6,1},...,a_{n,1}$ and $-a_{5,1},-a_{6,1},...,-a_{n,1},$ respectively. \item Applying the transformation $e^{\prime}_1=e_1+\frac{a_{2,1}}{a}e_2,e^{\prime}_{i}=e_{i}, (2\leq i\leq n,n\geq4),e^{\prime}_{n+1}=e_{n+1}+\sum_{k=4}^{n-1}a_{k+1,1}e_{k},$ we remove $a_{2,1}$ from the $(2,1)^{st}$ position in $\mathscr{R}_{e_{n+1}}$. 
This transformation changes the entry in the $(2,1)^{st}$ position in $\mathscr{L}_{e_{n+1}}$ to $a_{2,3}+b_{2,3}.$ It also removes $a_{k+1,1}$ and $-a_{k+1,1}$ from the entries in the $(k+1,1)^{st}$ positions, where $(4\leq k\leq n-1)$ in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}},$ respectively.\footnote{Except this transformation it is the same as case $(4)$ for the right Leibniz algebras.} \item We assign $a_{2,3}:=d$ and $b_{2,3}:=f$ and then we scale $a$ to unity applying the transformation $e^{\prime}_i=e_i,(1\leq i\leq n,n\geq4),e^{\prime}_{n+1}=\frac{e_{n+1}}{a}.$ Renaming $\frac{d}{a},\frac{f}{a}$ and $\frac{a_{2,n+1}}{a^2}$ by $d,f$ and $a_{2,n+1},$ respectively, we obtain a Leibniz algebra, which is right and left at the same time and a limiting case of (\ref{l_{n+1,1}}) with $b=0,$ when $d=f=a_{2,n+1}=0$: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=e_1-e_3, [e_3,e_{n+1}]=de_2,[e_4,e_{n+1}]=e_4-e_2,[e_{i},e_{n+1}]=(i-3)e_{i},\\ \displaystyle [e_{n+1},e_{n+1}]=a_{2,n+1}e_2,[e_{n+1},e_1]=-e_1+\left(d+f\right)e_2+e_3,[e_{n+1},e_3]=fe_2,\\ \displaystyle [e_{n+1},e_4]=e_2-e_4,[e_{n+1},e_i]=(3-i)e_i,(5\leq i\leq n). \end{array} \right. \end{equation} Altogether (\ref{l_{n+1,1}}) and all its limiting cases after replacing $b$ with $a$ give us a Leibniz algebra $l_{n+1,1}.$ It remains to consider a continuous family of Leibniz algebras given below and scale any nonzero entries as much as possible. \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=e_1-e_3, [e_3,e_{n+1}]=de_2,[e_4,e_{n+1}]=e_4-e_2,[e_{i},e_{n+1}]=(i-3)e_{i},\\ \displaystyle [e_{n+1},e_{n+1}]=a_{2,n+1}e_2,[e_{n+1},e_1]=-e_1+\left(d+f\right)e_2+e_3,[e_{n+1},e_3]=fe_2,\\ \displaystyle [e_{n+1},e_4]=e_2-e_4,[e_{n+1},e_i]=(3-i)e_i,(a_{2,n+1}^2+d^2+f^2\neq0,5\leq i\leq n), \end{array} \right. \end{equation} If $a_{2,n+1}\neq0,$ then we scale it to $1.$ We also rename all the affected entries back. Then we combine with the case when $a_{2,n+1}=0$ and obtain a right and left Leibniz algebra $\mathfrak{g}} \def\q{\mathfrak{q}_{n+1,4}.$ \end{itemize} \item[(5)] Applying the transformation $e^{\prime}_1=e_1+\frac{(2a-b)a_{2,3}-b\cdot b_{2,3}}{2a(b+c)}e_2,e^{\prime}_2=e_2,$ $e^{\prime}_3=e_3+\frac{(2a+c)a_{2,3}+c\cdot b_{2,3}}{2a(b+c)}e_2,$\\ $e^{\prime}_4=e_4,e^{\prime}_5=\frac{e_5}{c}$ and renaming $\frac{a}{c}$ and $\frac{b}{c}$ by $a$ and $b,$ respectively, we obtain a continuous family of Leibniz algebras given below: \begin{equation} \left\{ \begin{array}{l} \displaystyle [e_1,e_{5}]=ae_1+(b-a+1)e_3, [e_3,e_{5}]=e_1+be_3,[e_{4},e_5]=(a+b)(e_4-e_2),\\ \displaystyle [e_{5},e_1]=-ae_1+(a-b-1)e_3,[e_5,e_{2}]=-2(b+1)e_2,[e_{5},e_3]=-e_1-be_3,\\ \displaystyle [e_5,e_4]=(a-b-2)e_2-(a+b)e_4,(b\neq-a,a\neq0,b\neq-1). \end{array} \right. \label{l_{5,5}} \end{equation} \item[(6)] We apply the transformation $e^{\prime}_1=e_1+\frac{3a_{2,3}+b_{2,3}}{2(c-a)}e_2, e^{\prime}_2=e_2,$ $e^{\prime}_3=e_3+\frac{(2a+c)a_{2,3}+c\cdot b_{2,3}}{2a(c-a)}e_2,$\\ $e^{\prime}_4=e_4,e^{\prime}_5=\frac{e_5}{c}$ and rename $\frac{a}{c}$ and $\frac{a_{4,5}}{c^2}$ by $a$ and $a_{4,5},$ respectively, to obtain a Leibniz algebra given below: \begin{equation} \left\{ \begin{array}{l} \displaystyle [e_1,e_{5}]=ae_1+(1-2a)e_3, [e_3,e_{5}]=e_1-ae_3,[e_{5},e_{5}]=a_{4,5}e_4,\\ \displaystyle [e_{5},e_1]=-ae_1+(2a-1)e_3,[e_5,e_{2}]=2(a-1)e_2,[e_{5},e_3]=ae_3-e_1,\\ \displaystyle [e_5,e_4]=2(a-1)e_2,(a\neq0,a\neq1), \end{array} \label{g1} \right. 
\end{equation} which is a limiting case of $(\ref{l_{5,5}})$ with $b:=-a$ when $a_{4,5}=0.$ If $a_{4,5}\neq0,$ then we scale it to $1$ and obtain a continuous family of Leibniz algebras: \begin{equation} \left\{ \begin{array}{l} \displaystyle [e_1,e_{5}]=ae_1+(1-2a)e_3, [e_3,e_{5}]=e_1-ae_3,[e_{5},e_{5}]=e_4,\\ \displaystyle [e_{5},e_1]=-ae_1+(2a-1)e_3,[e_5,e_{2}]=2(a-1)e_2,[e_{5},e_3]=ae_3-e_1,\\ \displaystyle [e_5,e_4]=2(a-1)e_2,(a\neq0,a\neq1). \end{array} \right. \label{l_{5,6}} \end{equation} \item[(7)] We apply the transformation $e^{\prime}_1=e_1+\frac{a_{2,1}}{a}e_2,e^{\prime}_i=e_i,(2\leq i\leq 5)$ and assign $d:=\frac{a\cdot b_{2,3}+c\cdot a_{2,1}}{a},$ $f:=\frac{a\cdot a_{2,3}-c\cdot a_{2,1}}{a}.$ Then this case becomes the same as case $(7)$ of Theorem \ref{RL4(Change of Basis)}. \item[(8)] Applying the transformation $e^{\prime}_1=e_1+\frac{2a_{2,1}-b_{2,3}}{c}e_2,e^{\prime}_2=e_2, e^{\prime}_3=e_3+\frac{a_{2,1}}{c}e_2,e^{\prime}_4=e_4,e^{\prime}_5=\frac{e_5}{c}$ and renaming $\frac{a_{4,5}}{c^2}$ back by $a_{4,5},$ we obtain a Leibniz algebra: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=e_3, [e_3,e_{5}]=e_1,[e_{5},e_{5}]=a_{4,5}e_4,[e_{5},e_1]=-e_3,[e_5,e_{2}]=-2e_2,[e_{5},e_3]=-e_1,\\ \displaystyle [e_5,e_{4}]=-2e_2, \end{array} \right. \end{equation} which is a limiting case of $(\ref{g1})$ with $a=0.$ If $a_{4,5}\neq0,$ then we scale $a_{4,5}$ to $1$ and obtain a limiting case of $(\ref{l_{5,6}})$ with $a=0.$ Altogether $(\ref{l_{5,6}})$ and all its limiting cases give us the algebra $l_{5,6}.$ \item[(9)] One applies the transformation $e^{\prime}_1=e_1+\frac{(3b+4c)a_{2,3}-b\cdot b_{2,3}}{4(b+c)^2}e_2, e^{\prime}_2=e_2,e^{\prime}_3=e_3+\frac{(4b+5c)a_{2,3}+c\cdot b_{2,3}}{4(b+c)^2}e_2,e^{\prime}_4=e_4, e^{\prime}_5=\frac{e_5}{c}.$ Renaming $\frac{b}{c}$ by $b,$ we obtain a Leibniz algebra given below: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{5}]=(b+1)e_3, [e_3,e_{5}]=e_1+be_3,[e_{4},e_5]=b(e_4-e_2),[e_{5},e_1]=(-b-1)e_3,\\ \displaystyle [e_5,e_{2}]=-2(b+1)e_2,[e_{5},e_3]=-e_1-be_3,[e_5,e_{4}]=-(b+2)e_2-be_4,(b\neq0,b\neq-1), \end{array} \right. \end{equation} which is a limiting case of $(\ref{l_{5,5}})$ with $a=0.$ Altogether $(\ref{l_{5,5}})$ and all its limiting cases give us the algebra $l_{5,5}.$ \item[(10)] We apply the transformation $e^{\prime}_1=e_1+\frac{a_{2,3}}{c}e_2,e^{\prime}_i=e_i,(2\leq i\leq 4),e^{\prime}_5=\frac{e_5}{c}$ and rename $\frac{b_{2,1}}{c},\frac{b_{2,3}}{c},\frac{a_{2,3}}{c}$ and $\frac{a_{2,5}}{c^2}$ by $b_{2,1},b_{2,3},a_{2,3}$ and $a_{2,5},$ respectively. Then we assign $a_{2,1}:=a_{2,3}+b_{2,3}-b_{2,1}$ and this case becomes the same as case $(10)$ of Theorem \ref{RL4(Change of Basis)}. 
\end{enumerate} \end{proof} \subsubsection{Codimension two and three solvable extensions of $\mathcal{L}^4$} The non-zero inner derivations of $\mathcal{L}^4,(n\geq4)$ are given by \[ \mathscr{L}_{e_1}=\left[\begin{smallmatrix} 0&0 & 0 & 0 & \cdots & 0 & 0 & 0 \\ 1&0 & 2 & 0 & \cdots & 0 & 0 & 0 \\ 0&0 & 0 & 0 & \cdots & 0 & 0 & 0 \\ 0&0 & -1 & 0 & \cdots & 0 & 0 & 0 \\ 0& 0 & 0 & -1 & \cdots & 0 & 0 & 0 \\ \vdots& \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\ 0& 0 & 0 & 0&\cdots & -1 & 0 &0\\ 0& 0 & 0 & 0&\cdots & 0 & -1 &0 \end{smallmatrix}\right],\mathscr{L}_{e_3}=\left[\begin{smallmatrix} 0&0 & 0 & 0 & \cdots & 0 \\ 0&0 & 1 & 0 & \cdots & 0 \\ 0&0 & 0 & 0 & \cdots & 0 \\ 1&0 & 0 & 0 & \cdots & 0\\ 0& 0 & 0 & 0 & \cdots & 0\\ \vdots& \vdots & \vdots & \vdots & & \vdots\\ 0& 0 & 0 & 0&\cdots & 0 \end{smallmatrix}\right], \mathscr{L}_{e_i}=E_{i+1,1}=\left[\begin{smallmatrix} 0 & 0&0&\cdots & 0 \\ 0& 0&0&\cdots & 0 \\ 0 & 0&0&\cdots & 0 \\ 0 & 0&0&\cdots & 0 \\ \vdots &\vdots &\vdots& & \vdots\\ 1 &0&0& \cdots & 0\\ \vdots &\vdots &\vdots& & \vdots\\ \boldsymbol{\cdot} & 0&0&\cdots & 0 \end{smallmatrix}\right]\,(4\leq i\leq n-1),\] where $E_{i+1,1}$ is the $n\times n$ matrix that has $1$ in the $(i+1,1)^{st}$ position and all other entries are zero. According to Remark \ref{NumberOuterDerivations}, we have at most two outer derivations. \paragraph{Codimension two solvable extensions of $\mathcal{L}^4,(n=4)$} We consider the same cases as in Section \ref{Twodim(n=4)} and follow the General approach given in Section \ref{Two&Three}. \noindent (1) (a) One could set $\left( \begin{array}{c} a^1 \\ b^1\\ c^1 \end{array}\right)=\left( \begin{array}{c} 1\\ a\\ 0 \end{array}\right)$ and $\left( \begin{array}{c} a^2 \\ b^2\\ c^2 \end{array}\right)=\left( \begin{array}{c} 1\\ b\\ 1 \end{array}\right),(a\neq-1,a\neq0,b\neq-1).$ Therefore the vector space of outer derivations as $4\times 4$ matrices is as follows: $$\mathscr{L}_{e_{5}}=\begin{array}{llll} \left[\begin{matrix} -1 & 0 & 0 & 0\\ \frac{3-2a}{2}a_{2,3}+\frac{1-2a}{2}b_{2,3} & -2a & a_{2,3}& 1-a\\ 1-a & 0 & -a & 0 \\ 0 & 0 & 0 & -1-a \end{matrix}\right] \end{array},$$ $$\mathscr{L}_{e_{6}}=\begin{array}{llll} \left[\begin{matrix} -1 & 0 & -1 & 0\\ (1-b)\alpha_{2,3}-b\cdot\beta_{2,3} & -2(b+1) & 2\alpha_{2,3}+\beta_{2,3}& -1-b\\ -b & 0 & -b & 0 \\ 0 & 0 & 0 & -1-b \end{matrix}\right] \end{array}.$$ \noindent $(i)$ Considering $\mathscr{L}_{[e_5,e_6]},$ we obtain that $a:=1.$ Since $b\neq-1,$ we have that $\alpha_{2,3}:=\left(b+\frac{1}{2}\right)a_{2,3}-\frac{b_{2,3}}{2}$ and it follows that $\beta_{2,3}:=\left(\frac{1}{2}-b\right)a_{2,3}+\frac{3}{2}b_{2,3}.$ It implies that $\mathscr{L}_{[e_5,e_6]}=0.$ As a result, $$\mathscr{L}_{e_{5}}=\begin{array}{llll} \left[\begin{matrix} -1 & 0 & 0 & 0\\ \frac{a_{2,3}-b_{2,3}}{2} & -2 & a_{2,3}& 0\\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -2 \end{matrix}\right] \end{array},$$ $$\mathscr{L}_{e_{6}}=\begin{array}{llll} \left[\begin{matrix} -1 & 0 & -1 & 0\\ \frac{a_{2,3}}{2}-\left(b+\frac{1}{2}\right)b_{2,3} & -2(b+1) & \left(b+\frac{3}{2}\right)a_{2,3}+\frac{b_{2,3}}{2}& -1-b\\ -b & 0 & -b & 0 \\ 0 & 0 & 0 & -1-b \end{matrix}\right] \end{array},(b\neq-1)\footnote{We notice that $\mathscr{L}_{e_{6}}$ is nilpotent if $b=-1.$}.$$ Further, we find the following commutators: \allowdisplaybreaks \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber 
\mathscr{L}_{[e_{1},e_{5}]}=\mathscr{L}_{e_1},\mathscr{L}_{[e_{2},e_5]}=0,\mathscr{L}_{[e_{3},e_5]}=\mathscr{L}_{e_3},\mathscr{L}_{[e_{i},e_5]}=0,\mathscr{L}_{[e_{1},e_6]}=\mathscr{L}_{e_1}+ b\mathscr{L}_{e_3},\\
\displaystyle \mathscr{L}_{[e_{2},e_6]}=0,\mathscr{L}_{[e_{3},e_{6}]}=\mathscr{L}_{e_1}+b\mathscr{L}_{e_3}, \mathscr{L}_{[e_{i},e_6]}=0,(b\neq-1,4\leq i\leq 6). \end{array} \right. \end{equation}
\noindent $(ii)$ We include a linear combination of $e_2$ and $e_4$: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_{1},e_{5}]=e_1+c_{2,1}e_2+c_{4,1}e_4,[e_{2},e_5]=c_{2,2}e_2+c_{4,2}e_4, [e_{3},e_5]=c_{2,3}e_2+e_3+c_{4,3}e_4,\\
\displaystyle [e_{i},e_5]=c_{2,i}e_2+c_{4,i}e_4,[e_{1},e_6]=e_1+d_{2,1}e_2+ be_3+d_{4,1}e_4,[e_{2},e_6]=d_{2,2}e_2+d_{4,2}e_4,\\
\displaystyle [e_{3},e_{6}]=e_1+d_{2,3}e_2+be_3+d_{4,3}e_4, [e_{i},e_6]=d_{2,i}e_2+d_{4,i}e_4,(b\neq-1,4\leq i\leq 6). \end{array} \right. \end{equation}
Besides we have the brackets from $\mathcal{L}^4$ and from outer derivations $\mathscr{L}_{e_{5}}$ and $\mathscr{L}_{e_{6}}$ as well.
\noindent $(iii)$ We satisfy the left Leibniz identity shown in Table \ref{LeftCodimTwo(L4,(n=4))}.
\begin{table}[h!] \caption{Left Leibniz identities in case (1) (a) with a nilradical $\mathcal{L}^4,(n=4)$.} \label{LeftCodimTwo(L4,(n=4))} \begin{tabular}{lp{2.4cm}p{12cm}} \hline \scriptsize Steps &\scriptsize Ordered triple &\scriptsize Result\\ \hline
\scriptsize $1.$ &\scriptsize $\mathscr{L}_{e_1}\left([e_{1},e_{5}]\right)$ &\scriptsize $[e_{2},e_5]=0$ $\implies$ $c_{2,2}=c_{4,2}=0.$\\ \hline
\scriptsize $2.$ &\scriptsize $\mathscr{L}_{e_1}\left([e_{1},e_{6}]\right)$ &\scriptsize $[e_{2},e_6]=0$ $\implies$ $d_{2,2}=d_{4,2}=0.$\\ \hline
\scriptsize $3.$ &\scriptsize $\mathscr{L}_{e_1}\left([e_{3},e_{5}]\right)$ &\scriptsize $c_{2,4}:=-2,c_{4,4}:=2$ $\implies$ $[e_{4},e_5]=2\left(e_4-e_2\right).$ \\ \hline
\scriptsize $4.$ &\scriptsize $\mathscr{L}_{e_1}\left([e_{3},e_{6}]\right)$ &\scriptsize $d_{2,4}:=-b-1,d_{4,4}:=b+1$ $\implies$ $[e_{4},e_6]=(b+1)\left(e_4-e_2\right).$ \\ \hline
\scriptsize $5.$ &\scriptsize $\mathscr{L}_{e_{5}}\left([e_{5},e_{5}]\right)$ &\scriptsize $c_{4,5}=0$ $\implies$ $[e_{5},e_{5}]=c_{2,5}e_{2}.$\\ \hline
\scriptsize $6.$ &\scriptsize $\mathscr{L}_{e_6}\left([e_{6},e_{6}]\right)$ &\scriptsize $b\neq-1\implies$ $d_{4,6}=0$ $\implies$ $[e_{6},e_6]=d_{2,6}e_2.$ \\ \hline
\scriptsize $7.$ &\scriptsize $\mathscr{L}_{e_3}\left([e_{5},e_{5}]\right)$ &\scriptsize $c_{2,3}:=a_{2,3},c_{4,3}=0$ $\implies$ $[e_{3},e_5]=a_{2,3}e_2+e_3.$ \\ \hline
\scriptsize $8.$ &\scriptsize $\mathscr{L}_{e_{1}}\left([e_{5},e_{5}]\right)$ &\scriptsize $c_{2,1}:=\frac{a_{2,3}-b_{2,3}}{2},$ $c_{4,1}=0$ $\implies$ $[e_{1},e_{5}]=e_{1}+\frac{a_{2,3}-b_{2,3}}{2}e_2.$\\ \hline
\scriptsize $9.$ &\scriptsize $\mathscr{L}_{e_{1}}\left([e_{5},e_{6}]\right)$ &\scriptsize $d_{2,1}:=\left(b+\frac{1}{2}\right)a_{2,3}-\frac{b_{2,3}}{2},d_{4,1}=0$ $\implies$ $[e_{1},e_{6}]=e_1+\left(\left(b+\frac{1}{2}\right)a_{2,3}-\frac{b_{2,3}}{2}\right)e_2+be_3.$ \\ \hline
\scriptsize $10.$ &\scriptsize $\mathscr{L}_{e_{5}}\left([e_{3},e_{6}]\right)$ &\scriptsize $d_{2,3}:=\left(b+\frac{1}{2}\right)a_{2,3}-\frac{b_{2,3}}{2},d_{4,3}=0$ $\implies$ $[e_{3},e_{6}]=e_1+\left(\left(b+\frac{1}{2}\right)a_{2,3}-\frac{b_{2,3}}{2}\right)e_2+be_3.$ \\ \hline
\scriptsize $11.$ &\scriptsize $\mathscr{L}_{e_5}\left([e_{6},e_{5}]\right)$ &\scriptsize $d_{4,5}:=-c_{4,6}$ $\implies$ $c_{4,6}:=(b+1)c_{2,5}-c_{2,6}$ $\implies$
$[e_5,e_6]=d_{2,5}e_2+\left(c_{2,6}-(b+1)c_{2,5}\right)e_4,[e_{6},e_5]=c_{2,6}e_2+\left((b+1)c_{2,5}-c_{2,6}\right)e_4$\\ \hline \scriptsize $12.$ &\scriptsize $\mathscr{L}_{e_{5}}\left([e_{6},e_{6}]\right)$ &\scriptsize $d_{2,6}:=(b+1)\left(c_{2,6}+d_{2,5}\right)-(b+1)^2c_{2,5}$ $\implies$ $[e_{6},e_6]=\left((b+1)\left(c_{2,6}+d_{2,5}\right)-(b+1)^2c_{2,5}\right)e_2.$\\ \hline \end{tabular} \end{table} We obtain that $\mathscr{L}_{e_5}$ and $\mathscr{L}_{e_6}$ restricted to the nilradical do not change, but the remaining brackets are as follows: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_{1},e_{5}]=e_1+\frac{a_{2,3}-b_{2,3}}{2}e_2, [e_{3},e_5]=a_{2,3}e_2+e_3,[e_{4},e_5]=2\left(e_4-e_2\right),[e_5,e_5]=c_{2,5}e_2,\\ \displaystyle [e_6,e_5]=c_{2,6}e_2+\left((b+1)c_{2,5}-c_{2,6}\right)e_4, [e_{1},e_6]=e_1+\left(\left(b+\frac{1}{2}\right)a_{2,3}-\frac{b_{2,3}}{2}\right)e_2+ be_3,\\ \displaystyle [e_{3},e_{6}]=e_1+\left(\left(b+\frac{1}{2}\right)a_{2,3}-\frac{b_{2,3}}{2}\right)e_2+ be_3, [e_{4},e_6]=(b+1)\left(e_4-e_2\right),\\ \displaystyle [e_5,e_6]=d_{2,5}e_2+\left(c_{2,6}-(b+1)c_{2,5}\right)e_4,[e_{6},e_6]=\left((b+1)\left(c_{2,6}+d_{2,5}\right)-(b+1)^2c_{2,5}\right)e_2,\\ \displaystyle (b\neq-1). \end{array} \right. \end{equation} Altogether the nilradical $\mathcal{L}^4$ $(\ref{L4}),$ the outer derivations $\mathscr{L}_{e_{5}}$ and $\mathscr{L}_{e_{6}}$ and the remaining brackets given above define a continuous family of Leibniz algebras depending on the parameters. \noindent $(iv)\&(v)$ We apply the following transformation: $e^{\prime}_1=e_1+\frac{a_{2,3}-b_{2,3}}{2}e_2, e^{\prime}_2=e_2,e^{\prime}_3=e_3+a_{2,3}e_2,e^{\prime}_4=e_4,e^{\prime}_5=e_5+\frac{c_{2,5}}{2}e_2, e^{\prime}_6=e_6+\frac{d_{2,5}}{2}e_2+\frac{c_{2,6}-(b+1)c_{2,5}}{2}e_4$ and obtain a Leibniz algebra $l_{6,2}$ given below: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_5]=e_1,[e_3,e_5]=e_3,[e_4,e_5]=2(e_4-e_2),[e_{5},e_{1}]=-e_1,[e_5,e_2]=-2e_2,[e_{5},e_3]=-e_3,\\ \displaystyle [e_{5},e_4]=-2e_4, [e_1,e_6]=e_1+be_3,[e_3,e_6]=e_1+be_3,[e_4,e_6]=(b+1)\left(e_4-e_2\right),\\ \displaystyle [e_{6},e_1]=-e_1-be_3,[e_6,e_2]=-2(b+1)e_2,[e_{6},e_{3}]=-e_1-be_3,[e_{6},e_4]=-(b+1)\left(e_2+e_4\right),\\ \displaystyle (b\neq-1). \end{array} \right. \end{equation} \begin{remark} We notice that if $b=-1$ in the algebra, then the outer derivation $\mathscr{L}_{e_6}$ is nilpotent. 
\end{remark} \noindent (1) (b) We set $\left( \begin{array}{c} a^1 \\ b^1\\ c^1 \end{array}\right)=\left( \begin{array}{c} 1\\ 2\\ c \end{array}\right)$ and $\left( \begin{array}{c} a^2 \\ b^2\\ c^2 \end{array}\right)=\left( \begin{array}{c} 1\\ 1\\ d \end{array}\right),(c\neq-2,d\neq-1).$ Therefore the vector space of outer derivations as $4\times 4$ matrices is as follows: $$\mathscr{L}_{e_{5}}=\begin{array}{llll} \left[\begin{matrix} -1 & 0 & -c & 0\\ -\frac{c+1}{2}a_{2,3}-\frac{c+3}{2}b_{2,3} & -2(c+2) & (c+1)a_{2,3}+c\cdot b_{2,3}& -1-2c\\ -c-1 & 0 & -2 & 0 \\ 0 & 0 & 0 & -3 \end{matrix}\right] \end{array},$$ $$\mathscr{L}_{e_{6}}=\begin{array}{llll} \left[\begin{matrix} -1 & 0 & -d & 0\\ \frac{1-d}{2}\alpha_{2,3}-\frac{d+1}{2}\beta_{2,3} & -2(d+1) & (d+1)\alpha_{2,3}+d\cdot\beta_{2,3}& -2d\\ -d & 0 & -1 & 0 \\ 0 & 0 & 0 & -2 \end{matrix}\right] \end{array}.$$ \noindent $(i)$ Considering $\mathscr{L}_{[e_5,e_6]},$ we obtain that $d=0$ and we have the system of equations: $$\left\{ \begin{array}{ll} b_{2,3}:=\left(\frac{c}{2}+1\right)\beta_{2,3}-\left(\frac{c}{2}+1\right)\alpha_{2,3} {,} \\ \left(2c+3\right)\beta_{2,3}-\alpha_{2,3}-\left(c+1\right)a_{2,3}-\left(c+3\right)b_{2,3}=0{.} \end{array} \right. $$ There are the following two cases: \begin{enumerate}[noitemsep, topsep=0pt] \item[(I)] If $c\neq-1,$ then $a_{2,3}:=\frac{c+4}{2}\alpha_{2,3}-\frac{c}{2}\beta_{2,3}$ and $\mathscr{L}_{[e_5,e_6]}=0.$ Consequently, $$\mathscr{L}_{e_{5}}=\begin{array}{llll} \left[\begin{matrix} -1 & 0 & -c & 0\\ \frac{\alpha_{2,3}}{2}-\frac{2c+3}{2}\beta_{2,3} & -2(c+2) & \frac{3c+4}{2}\alpha_{2,3}+\frac{c}{2}\beta_{2,3}& -2c-1\\ -c-1 & 0 & -2 & 0 \\ 0 & 0 & 0 & -3 \end{matrix}\right] \end{array},$$ $$\mathscr{L}_{e_{6}}=\begin{array}{llll} \left[\begin{matrix} -1 & 0 & 0 & 0\\ \frac{\alpha_{2,3}-\beta_{2,3}}{2} & -2 & \alpha_{2,3}& 0\\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -2 \end{matrix}\right] \end{array},(c\neq-2)$$ \noindent and we find the commutators given below: \allowdisplaybreaks \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber \mathscr{L}_{[e_{1},e_{5}]}=\mathscr{L}_{e_1}+(c+1)\mathscr{L}_{e_3},\mathscr{L}_{[e_{2},e_5]}=0,\mathscr{L}_{[e_{3},e_5]}=c\mathscr{L}_{e_1}+2\mathscr{L}_{e_3},\mathscr{L}_{[e_{i},e_5]}=0,\\ \displaystyle \mathscr{L}_{[e_{1},e_6]}=\mathscr{L}_{e_1},\mathscr{L}_{[e_{2},e_6]}=0,\mathscr{L}_{[e_{3},e_{6}]}=\mathscr{L}_{e_3}, \mathscr{L}_{[e_{i},e_6]}=0,(c\neq-2,4\leq i\leq 6). \end{array} \right. \end{equation} \item[(II)] If $c=-1,$ then $b_{2,3}:=\frac{\beta_{2,3}-\alpha_{2,3}}{2}$ and $\mathscr{L}_{[e_5,e_6]}=0.$ As a result, $$\mathscr{L}_{e_{5}}=\begin{array}{llll} \left[\begin{matrix} -1 & 0 & 1 & 0\\ \frac{\alpha_{2,3}-\beta_{2,3}}{2} & -2& \frac{\alpha_{2,3}-\beta_{2,3}}{2}& 1\\ 0 & 0 & -2 & 0 \\ 0 & 0 & 0 & -3 \end{matrix}\right] \end{array}, \mathscr{L}_{e_{6}}=\begin{array}{llll} \left[\begin{matrix} -1 & 0 & 0 & 0\\ \frac{\alpha_{2,3}-\beta_{2,3}}{2} & -2 & \alpha_{2,3}& 0\\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -2 \end{matrix}\right] \end{array}$$ and we have the following commutators: \allowdisplaybreaks \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber \mathscr{L}_{[e_{1},e_{5}]}=\mathscr{L}_{e_1},\mathscr{L}_{[e_{2},e_5]}=0,\mathscr{L}_{[e_{3},e_5]}=2\mathscr{L}_{e_3}-\mathscr{L}_{e_1},\mathscr{L}_{[e_{i},e_5]}=0,\\ \displaystyle \mathscr{L}_{[e_{1},e_6]}=\mathscr{L}_{e_1},\mathscr{L}_{[e_{2},e_6]}=0,\mathscr{L}_{[e_{3},e_{6}]}=\mathscr{L}_{e_3}, \mathscr{L}_{[e_{i},e_6]}=0,(4\leq i\leq 6). \end{array} \right. 
\end{equation} \end{enumerate} \noindent $(ii)$ We combine cases (I) and (II), include a linear combination of $e_2$ and $e_4$, and obtain the following: \allowdisplaybreaks \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_{1},e_{5}]=e_1+a_{2,1}e_2+(c+1)e_3+a_{4,1}e_4,[e_{2},e_5]=a_{2,2}e_2+a_{4,2}e_4,[e_{3},e_5]=ce_1+a_{2,3}e_2+\\ \displaystyle 2e_3+a_{4,3}e_4,[e_{i},e_5]=a_{2,i}e_2+a_{4,i}e_4,[e_{1},e_6]=e_1+b_{2,1}e_2+b_{4,1}e_4,[e_{2},e_6]=b_{2,2}e_2+b_{4,2}e_4,\\ \displaystyle [e_{3},e_{6}]=b_{2,3}e_2+e_3+b_{4,3}e_4,[e_{i},e_6]=b_{2,i}e_2+b_{4,i}e_4,(c\neq-2,4\leq i\leq 6). \end{array} \right. \end{equation} Besides we have the brackets from $\mathcal{L}^4$ and from outer derivations $\mathscr{L}_{e_{5}}$ and $\mathscr{L}_{e_{6}}$ as well. \noindent $(iii)$ To satisfy the left Leibniz identity, we refer to the identities given in Table \ref{LeftCodimTwo(L4,(n=4))}. as much as possible. The identities we apply are the following: $1.-6.,$ $\mathscr{L}_{e_3}{\left([e_6,e_6]\right)}=[\mathscr{L}_{e_3}(e_6),e_6]+[e_6,\mathscr{L}_{e_3}(e_6)],$ $\mathscr{L}_{e_1}\left([e_6,e_6]\right)=[\mathscr{L}_{e_1}(e_6),e_6]+[e_6,\mathscr{L}_{e_1}(e_6)],$ $\mathscr{L}_{e_1}\left([e_6,e_5]\right)=[\mathscr{L}_{e_1}(e_6),e_5]+[e_6,\mathscr{L}_{e_1}(e_5)], \mathscr{L}_{e_3}\left([e_6,e_5]\right)=[\mathscr{L}_{e_3}(e_6),e_5]+[e_6,\mathscr{L}_{e_3}(e_5)],$ $11.$ and $12.$ We have that $\mathscr{L}_{e_5}$ and $\mathscr{L}_{e_6}$ restricted to the nilradical do not change, but the remaining brackets are as follows: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_{1},e_{5}]=e_1+\left(\left(c+\frac{3}{2}\right)\alpha_{2,3}-\frac{\beta_{2,3}}{2}\right)e_2+(c+1)e_3,\\ \displaystyle [e_{3},e_5]=ce_1+\left(\left(\frac{c}{2}+2\right)\alpha_{2,3}-\frac{c}{2}\beta_{2,3}\right)e_2+2e_3,[e_{4},e_5]=3\left(e_4-e_2\right),\\ \displaystyle [e_5,e_5]=(c+2)\left(a_{2,6}+a_{4,6}\right)e_2,[e_6,e_5]=a_{2,6}e_2+a_{4,6}e_4,[e_{1},e_6]=e_1+\frac{\alpha_{2,3}-\beta_{2,3}}{2}e_2,\\ \displaystyle [e_{3},e_{6}]=\alpha_{2,3}e_2+e_3, [e_{4},e_6]=2\left(e_4-e_2\right),[e_5,e_6]=\left((c+2)b_{2,6}+a_{4,6}\right)e_2-a_{4,6}e_4,\\ \displaystyle [e_{6},e_6]=b_{2,6}e_2,(c\neq-2). \end{array} \right. \end{equation} Altogether the nilradical $\mathcal{L}^4$ $(\ref{L4}),$ the outer derivations $\mathscr{L}_{e_{5}}$ and $\mathscr{L}_{e_{6}}$ and the remaining brackets given above define a continuous family of Leibniz algebras. \noindent $(iv)\&(v)$ Applying the transformation: $e^{\prime}_1=e_1+\frac{\alpha_{2,3}-\beta_{2,3}}{2}e_2, e^{\prime}_2=e_2,e^{\prime}_3=e_3+\alpha_{2,3}e_2,e^{\prime}_4=e_4,e^{\prime}_5=e_5+\frac{a_{2,6}}{2}e_2+\frac{a_{4,6}}{2}e_4, e^{\prime}_6=e_6+\frac{b_{2,6}}{2}e_2,$ we obtain a Leibniz algebra $l_{6,3}$: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_5]=e_1+(c+1)e_3,[e_3,e_5]=ce_1+2e_3,[e_{4},e_5]=3\left(e_4-e_2\right),[e_{5},e_{1}]=-e_1-(c+1)e_3,\\ \displaystyle [e_5,e_2]=-2(c+2)e_2,[e_{5},e_3]=-ce_1-2e_3,[e_5,e_4]=(-2c-1)e_2-3e_4,[e_1,e_6]=e_1,\\ \displaystyle [e_3,e_6]=e_3, [e_{4},e_6]=2\left(e_4-e_2\right),[e_{6},e_1]=-e_1,[e_6,e_2]=-2e_2,[e_{6},e_{3}]=-e_3,\\ \displaystyle [e_6,e_4]=-2e_4,(c\neq-2). \end{array} \right. 
\end{equation} \noindent (1) (c) We set $\left( \begin{array}{c} a^1 \\ b^1\\ c^1 \end{array}\right)=\left( \begin{array}{c} a\\ 1\\ 0 \end{array}\right)$ and $\left( \begin{array}{c} a^2 \\ b^2\\ c^2 \end{array}\right)=\left( \begin{array}{c} b\\ 0\\ 1 \end{array}\right),(a,b\neq0,a\neq-1).$ Therefore the vector space of outer derivations as $4\times 4$ matrices is as follows: $$\mathscr{L}_{e_{5}}=\begin{array}{llll} \left[\begin{matrix} -a & 0 & 0 & 0\\ \frac{(3a-2)a_{2,3}+(a-2)b_{2,3}}{2a} & -2 & a_{2,3}& a-1\\ a-1 & 0 & -1 & 0 \\ 0 & 0 & 0 & -a-1 \end{matrix}\right] \end{array},$$ $$\mathscr{L}_{e_{6}}=\begin{array}{llll} \left[\begin{matrix} -b & 0 & -1 & 0\\ \frac{(3b-1)\alpha_{2,3}+(b-1)\beta_{2,3}}{2b} & -2 & \frac{(b+1)\alpha_{2,3}+\beta_{2,3}}{b}& b-2\\ b-1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -b \end{matrix}\right] \end{array}.$$ \noindent $(i)$ Considering $\mathscr{L}_{[e_5,e_6]},$ we obtain that $a:=1$ and we have the system of equations: $$\left\{ \begin{array}{ll} \beta_{2,3}:=\frac{3b\cdot a_{2,3}+b\cdot b_{2,3}}{2}-(b+1)\alpha_{2,3} {,} \\ (b-3)\alpha_{2,3}+\frac{b-3}{2}\left(b_{2,3}-a_{2,3}\right)=0{.} \end{array} \right. $$ There are the following two cases: \begin{enumerate}[noitemsep, topsep=0pt] \item[(I)] If $b\neq3,$ then $\alpha_{2,3}:=\frac{a_{2,3}-b_{2,3}}{2}\implies$ $\beta_{2,3}:=\frac{(2b-1)a_{2,3}+(2b+1)b_{2,3}}{2}$ and $\mathscr{L}_{[e_5,e_6]}=0.$ As a result, $$\hspace*{-0.6cm}\mathscr{L}_{e_{5}}=\begin{array}{llll} \left[\begin{matrix} -1 & 0 & 0 & 0\\ \frac{a_{2,3}-b_{2,3}}{2} & -2 & a_{2,3}& 0\\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -2 \end{matrix}\right] \end{array},\mathscr{L}_{e_{6}}=\begin{array}{llll} \left[\begin{matrix} -b & 0 & -1 & 0\\ \frac{b\cdot a_{2,3}+(b-2)b_{2,3}}{2} & -2 & \frac{3a_{2,3}+b_{2,3}}{2}& b-2\\ b-1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -b \end{matrix}\right] \end{array},(b\neq0).$$ \noindent Further, we find the following: \allowdisplaybreaks \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber \mathscr{L}_{[e_{1},e_{5}]}=\mathscr{L}_{e_1},\mathscr{L}_{[e_{2},e_5]}=0,\mathscr{L}_{[e_{3},e_5]}=\mathscr{L}_{e_3},\mathscr{L}_{[e_{i},e_5]}=0,\mathscr{L}_{[e_{1},e_6]}=b\mathscr{L}_{e_1}+(1-b)\mathscr{L}_{e_3},\\ \displaystyle \mathscr{L}_{[e_{2},e_6]}=0,\mathscr{L}_{[e_{3},e_{6}]}=\mathscr{L}_{e_1}, \mathscr{L}_{[e_{i},e_6]}=0,(b\neq0,4\leq i\leq 6). \end{array} \right. \end{equation} \item[(II)] If $b:=3,$ then $\beta_{2,3}:=\frac{9a_{2,3}+3b_{2,3}}{2}-4\alpha_{2,3}$ and $\mathscr{L}_{[e_5,e_6]}=0$ as well as $\mathscr{L}_{e_5},\mathscr{L}_{e_6}$ are as follows: $$\mathscr{L}_{e_{5}}=\begin{array}{llll} \left[\begin{matrix} -1 & 0 & 0 & 0\\ \frac{a_{2,3}-b_{2,3}}{2} & -2 & a_{2,3}& 0\\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -2 \end{matrix}\right] \end{array},\mathscr{L}_{e_{6}}=\begin{array}{llll} \left[\begin{matrix} -3 & 0 & -1 & 0\\ \frac{3a_{2,3}+b_{2,3}}{2} & -2 & \frac{3a_{2,3}+b_{2,3}}{2}& 1\\ 2 & 0 & 0 & 0 \\ 0 & 0 & 0 & -3 \end{matrix}\right] \end{array}.$$ We have the following commutators: \allowdisplaybreaks \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber \mathscr{L}_{[e_{1},e_{5}]}=\mathscr{L}_{e_1},\mathscr{L}_{[e_{2},e_5]}=0,\mathscr{L}_{[e_{3},e_5]}=\mathscr{L}_{e_3},\mathscr{L}_{[e_{i},e_5]}=0,\mathscr{L}_{[e_{1},e_6]}=3\mathscr{L}_{e_1}-2\mathscr{L}_{e_3},\\ \displaystyle \mathscr{L}_{[e_{2},e_6]}=0,\mathscr{L}_{[e_{3},e_{6}]}=\mathscr{L}_{e_1}, \mathscr{L}_{[e_{i},e_6]}=0,(4\leq i\leq 6). \end{array} \right. 
\end{equation} \end{enumerate} \noindent $(ii)$ We combine cases (I) and (II) and include a linear combination of $e_2$ and $e_4:$ \allowdisplaybreaks \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_{1},e_{5}]=e_1+c_{2,1}e_2+c_{4,1}e_4,[e_{2},e_5]=c_{2,2}e_2+c_{4,2}e_4,[e_{3},e_5]=c_{2,3}e_2+e_3+c_{4,3}e_4,\\ \displaystyle [e_{i},e_5]=c_{2,i}e_2+c_{4,i}e_4,[e_{1},e_6]=be_1+d_{2,1}e_2+(1-b)e_3+d_{4,1}e_4,[e_{2},e_6]=d_{2,2}e_2+\\ \displaystyle d_{4,2}e_4,[e_{3},e_{6}]=e_1+d_{2,3}e_2+d_{4,3}e_4,[e_{i},e_6]=d_{2,i}e_2+d_{4,i}e_4,(b\neq0,4\leq i\leq 6). \end{array} \right. \end{equation} Besides we have the brackets from $\mathcal{L}^4$ and from outer derivations $\mathscr{L}_{e_{5}}$ and $\mathscr{L}_{e_{6}}$ as well. \noindent $(iii)$ To satisfy the left Leibniz identity, we mostly apply the identities given in Table \ref{LeftCodimTwo(L4,(n=4))}: $1.-5.,\mathscr{L}_{e_6}\left([e_6,e_5]\right)=[\mathscr{L}_{e_6}(e_6),e_5]+[e_6,\mathscr{L}_{e_6}(e_5)],7.-12.$ We have that $\mathscr{L}_{e_5}$ and $\mathscr{L}_{e_6}$ restricted to the nilradical do not change, but the remaining brackets are the following: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_{1},e_{5}]=e_1+\frac{a_{2,3}-b_{2,3}}{2}e_2, [e_{3},e_5]=a_{2,3}e_2+e_3,[e_{4},e_5]=2\left(e_4-e_2\right),[e_5,e_5]=c_{2,5}e_2,\\ \displaystyle [e_6,e_5]=c_{2,6}e_2+\left(c_{2,5}-c_{2,6}\right)e_4,[e_{1},e_6]=be_1+\frac{(2-b)a_{2,3}-b\cdot b_{2,3}}{2}e_2+ (1-b)e_3,\\ \displaystyle [e_{3},e_{6}]=e_1+\frac{a_{2,3}-b_{2,3}}{2}e_2, [e_{4},e_6]=b\left(e_4-e_2\right), [e_5,e_6]=d_{2,5}e_2+\left(c_{2,6}-c_{2,5}\right)e_4,\\ \displaystyle [e_{6},e_6]=\left(d_{2,5}-c_{2,5}+c_{2,6}\right)e_2,(b\neq0). \end{array} \right. \end{equation} Altogether the nilradical $\mathcal{L}^4$ $(\ref{L4}),$ the outer derivations $\mathscr{L}_{e_{5}}$ and $\mathscr{L}_{e_{6}}$ and the remaining brackets given above define a continuous family of Leibniz algebras depending on the parameters. \noindent $(iv)\&(v)$ We apply the transformation: $e^{\prime}_1=e_1+\frac{a_{2,3}-b_{2,3}}{2}e_2, e^{\prime}_2=e_2,e^{\prime}_3=e_3+a_{2,3}e_2,e^{\prime}_4=e_4,e^{\prime}_5=e_5+\frac{c_{2,5}}{2}e_2, e^{\prime}_6=e_6+\frac{d_{2,5}}{2}e_2+\frac{c_{2,6}-c_{2,5}}{2}e_4$ and obtain a Leibniz algebra $l_{6,4}$: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_5]=e_1,[e_3,e_5]=e_3,[e_{4},e_5]=2\left(e_4-e_2\right),[e_{5},e_{1}]=-e_1,[e_5,e_2]=-2e_2,[e_{5},e_3]=-e_3,\\ \displaystyle [e_5,e_4]=-2e_4,[e_1,e_6]=be_1+(1-b)e_3,[e_3,e_6]=e_1,[e_{4},e_6]=b\left(e_4-e_2\right),\\ \displaystyle [e_{6},e_1]=-be_1+(b-1)e_3,[e_6,e_2]=-2e_2,[e_{6},e_{3}]=-e_1,[e_6,e_4]=(b-2)e_2-be_4,(b\neq0). \end{array} \right. 
\end{equation} \paragraph{Codimension two solvable extensions of $\mathcal{L}^4,(n\geq5)$} One could set $\left( \begin{array}{c} a \\ b \end{array}\right)=\left( \begin{array}{c} 1\\ 2 \end{array}\right)$ and $\left( \begin{array}{c} \alpha \\ \beta \end{array}\right)=\left( \begin{array}{c} 1\\ 1 \end{array}\right).$ Therefore the vector space of outer derivations as $n \times n$ matrices is as follows: { $$\mathscr{L}_{e_{n+1}}=\left[\begin{smallmatrix} -1 & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ 3a_{2,1}-2a_{2,3} & -4 & a_{2,3}& -1 &0&0 & \cdots &&0 & 0& 0\\ -1 & 0 & -2 & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & -3 &0 &0 &\cdots&&0 &0 & 0\\ 0 & 0 & -a_{5,3} & 0 & -4 &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & -a_{5,3} & 0 &-5&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & -a_{n-2,3}& -a_{n-3,3}& \cdots&\cdots &-a_{5,3}&0&3-n &0& 0\\ 0 & 0 & -a_{n-1,3}& -a_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&-a_{5,3}&0 &2-n& 0\\ 0 & 0 & -a_{n,3}& -a_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&-a_{5,3} &0& 1-n \end{smallmatrix}\right],$$} { $$ \mathscr{L}_{e_{n+2}}=\left[\begin{smallmatrix} -1 & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ \alpha_{2,1} & -2 & \alpha_{2,3}& 0 &0&0 & \cdots &&0 & 0& 0\\ 0 & 0 & -1 & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & -2 &0 &0 &\cdots&&0 &0 & 0\\ 0 & 0 & -\alpha_{5,3} & 0 & -3 &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & -\alpha_{5,3} & 0 &-4&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & -\alpha_{n-2,3}& -\alpha_{n-3,3}& \cdots&\cdots &-\alpha_{5,3}&0&4-n &0& 0\\ 0 & 0 & -\alpha_{n-1,3}& -\alpha_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&-\alpha_{5,3}&0 &3-n& 0\\ 0 & 0 & -\alpha_{n,3}& -\alpha_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&-\alpha_{5,3} &0& 2-n \end{smallmatrix}\right].$$} \noindent $(i)$ Considering $\mathscr{L}_{[e_{n+1},e_{n+2}]}$ and comparing with $\sum_{i=1}^n{c_i\mathscr{L}_{e_i}},$ we deduce that $\alpha_{i,3}:=a_{i,3},(5\leq i\leq n),$ $\alpha_{2,3}:=\frac{a_{2,3}}{2}$ and $\alpha_{2,1}:=a_{2,1}-\frac{a_{2,3}}{2}.$ As a result, the outer derivation $\mathscr{L}_{e_{n+2}}$ changes as follows: $$\mathscr{L}_{e_{n+2}}=\left[\begin{smallmatrix} -1 & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ a_{2,1}-\frac{a_{2,3}}{2} & -2 & \frac{a_{2,3}}{2}& 0 &0&0 & \cdots &&0 & 0& 0\\ 0 & 0 & -1 & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & -2 &0 &0 &\cdots&&0 &0 & 0\\ 0 & 0 & -a_{5,3} & 0 & -3 &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & -a_{5,3} & 0 &-4&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & -a_{n-2,3}& -a_{n-3,3}& \cdots&\cdots &-a_{5,3}&0&4-n &0& 0\\ 0 & 0 & -a_{n-1,3}& -a_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&-a_{5,3}&0 &3-n& 0\\ 0 & 0 & -a_{n,3}& -a_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&-a_{5,3} &0& 2-n \end{smallmatrix}\right].$$ Altogether we find the following commutators: \allowdisplaybreaks \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber 
\mathscr{L}_{[e_{1},e_{n+1}]}=\mathscr{L}_{e_1}+\mathscr{L}_{e_3},\mathscr{L}_{[e_{2},e_{n+1}]}=0,\mathscr{L}_{[e_{i},e_{n+1}]}=(i-1)\mathscr{L}_{e_i}+\sum_{k=i+2}^{n-1}{a_{k-i+3,3}\mathscr{L}_{e_k}},\\
\displaystyle \mathscr{L}_{[e_{j},e_{n+1}]}=0,(n\leq j\leq n+1),\mathscr{L}_{[e_{n+2},e_{n+1}]}=\sum_{k=4}^{n-1}{a_{k+1,3}\mathscr{L}_{e_{k}}},\mathscr{L}_{[e_{1},e_{n+2}]}=\mathscr{L}_{e_1},\\
\displaystyle \mathscr{L}_{[e_{2},e_{n+2}]}=0,\mathscr{L}_{[e_{i},e_{n+2}]}=(i-2)\mathscr{L}_{e_i}+\sum_{k=i+2}^{n-1}{a_{k-i+3,3}\mathscr{L}_{e_k}},(3\leq i\leq n-1),\\
\displaystyle \mathscr{L}_{[e_{n},e_{n+2}]}=0,\mathscr{L}_{[e_{n+1},e_{n+2}]}=-\sum_{k=4}^{n-1}{a_{k+1,3}\mathscr{L}_{e_{k}}},\mathscr{L}_{[e_{n+2},e_{n+2}]}=0. \end{array} \right. \end{equation}
\noindent $(ii)$ We include a linear combination of $e_2$ and $e_n:$ \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_{1},e_{n+1}]=e_1+c_{2,1}e_2+e_3+c_{n,1}e_n,[e_{2},e_{n+1}]=c_{2,2}e_2+c_{n,2}e_n,[e_{i},e_{n+1}]=c_{2,i}e_2+\\
\displaystyle(i-1)e_i+\sum_{k=i+2}^{n-1}{a_{k-i+3,3}e_k}+c_{n,i}e_n,[e_{j},e_{n+1}]=c_{2,j}e_2+c_{n,j}e_n,(n\leq j\leq n+1),\\
\displaystyle [e_{n+2},e_{n+1}]=c_{2,n+2}e_2+\sum_{k=4}^{n-1}{a_{k+1,3}e_{k}}+ c_{n,n+2}e_n,[e_{1},e_{n+2}]=e_1+d_{2,1}e_2+d_{n,1}e_n,\\
\displaystyle [e_{2},e_{n+2}]=d_{2,2}e_2+d_{n,2}e_n, [e_{i},e_{n+2}]=d_{2,i}e_2+(i-2)e_i+\sum_{k=i+2}^{n-1}{a_{k-i+3,3}e_k}+d_{n,i}e_n,\\
\displaystyle (3\leq i\leq n-1),[e_{n},e_{n+2}]=d_{2,n}e_2+d_{n,n}e_n,[e_{n+1},e_{n+2}]=d_{2,n+1}e_2-\sum_{k=4}^{n-1}{a_{k+1,3}e_{k}}+\\
\displaystyle d_{n,n+1}e_n, [e_{n+2},e_{n+2}]=d_{2,n+2}e_2+d_{n,n+2}e_n. \end{array} \right. \end{equation}
Besides we have the brackets from $\mathcal{L}^4$ and from outer derivations $\mathscr{L}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+2}}$ as well.
\noindent $(iii)$ We satisfy the left Leibniz identity shown in Table \ref{LeftCodimTwo(L4)}. We notice that $\mathscr{L}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+2}}$ restricted to the nilradical do not change, but the remaining brackets are as follows:
\begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_{1},e_{n+1}]=e_1+a_{2,1}e_2+e_3,[e_{3},e_{n+1}]=a_{2,3}e_2+2e_3+\sum_{k=5}^n{a_{k,3}e_k}, [e_{4},e_{n+1}]=3\left(e_4-e_2\right)+\\
\displaystyle \sum_{k=6}^n{a_{k-1,3}e_k}, [e_{i},e_{n+1}]=(i-1)e_i+\sum_{k=i+2}^{n}{a_{k-i+3,3}e_k},[e_{n+1},e_{n+1}]=\left(2c_{2,n+2}+2a_{5,3}\right)e_2,\\
\displaystyle [e_{n+2},e_{n+1}]=c_{2,n+2}e_2+\sum_{k=4}^{n-1}{a_{k+1,3}e_{k}}+ c_{n,n+2}e_n,[e_{1},e_{n+2}]=e_1+\left(a_{2,1}-\frac{a_{2,3}}{2}\right)e_2,\\
\displaystyle [e_{3},e_{n+2}]=\frac{a_{2,3}}{2}e_2+e_3+\sum_{k=5}^n{a_{k,3}e_k}, [e_{4},e_{n+2}]=2\left(e_4-e_2\right)+\sum_{k=6}^n{a_{k-1,3}e_k},\\
\displaystyle [e_{i},e_{n+2}]=(i-2)e_i+\sum_{k=i+2}^{n}{a_{k-i+3,3}e_k}, (5\leq i\leq n),[e_{n+1},e_{n+2}]=d_{2,n+1}e_2-\sum_{k=4}^{n-1}{a_{k+1,3}e_{k}}-\\
\displaystyle c_{n,n+2}e_n, [e_{n+2},e_{n+2}]=\frac{d_{2,n+1}-a_{5,3}}{2}e_2. \end{array} \right. \end{equation} \begin{table}[h!]
\caption{Left Leibniz identities in the codimension two nilradical $\mathcal{L}^4,(n\geq5)$.} \label{LeftCodimTwo(L4)} \begin{tabular}{lp{2.4cm}p{12cm}} \hline \scriptsize Steps &\scriptsize Ordered triple &\scriptsize Result\\ \hline \scriptsize $1.$ &\scriptsize $\mathscr{L}_{e_1}\left([e_{1},e_{n+1}]\right)$ &\scriptsize $[e_{2},e_{n+1}]=0$ $\implies$ $c_{2,2}=c_{n,2}=0.$\\ \hline \scriptsize $2.$ &\scriptsize $\mathscr{L}_{e_1}\left([e_{1},e_{n+2}]\right)$ &\scriptsize $[e_{2},e_{n+2}]=0$ $\implies$ $d_{2,2}=d_{n,2}=0.$\\ \hline \scriptsize $3.$ &\scriptsize $\mathscr{L}_{e_1}\left([e_{3},e_{n+1}]\right)$ &\scriptsize $c_{2,4}:=-3,c_{n,4}:=a_{n-1,3},$ where $a_{4,3}=0$ $\implies$ $[e_{4},e_{n+1}]=3(e_4-e_2)+\sum_{k=6}^{n}{a_{k-1,3}e_k}.$ \\ \hline \scriptsize $4.$ &\scriptsize $\mathscr{L}_{e_1}\left([e_{i},e_{n+1}]\right)$ &\scriptsize $c_{2,i+1}=0,c_{n,i+1}:=a_{n-i+2,3},(4\leq i\leq n-2),$ where $a_{4,3}=0$ $\implies$ $[e_{j},e_{n+1}]=\left(j-1\right)e_j+\sum_{k=j+2}^{n}{a_{k-j+3,3}e_k},(5\leq j\leq n-1).$ \\ \hline \scriptsize $5.$ &\scriptsize $\mathscr{L}_{e_{1}}\left([e_{n-1},e_{n+1}]\right)$ &\scriptsize $c_{2,n}=0,$ $c_{n,n}:=n-1$ $\implies$ $[e_{n},e_{n+1}]=\left(n-1\right)e_{n}.$ Altogether with $4.,$ $[e_{i},e_{n+1}]=\left(i-1\right)e_i+\sum_{k=i+2}^{n}{a_{k-i+3,3}e_k},(5\leq i\leq n).$ \\ \hline \scriptsize $6.$ &\scriptsize $\mathscr{L}_{e_1}\left([e_{3},e_{n+2}]\right)$ &\scriptsize $d_{2,4}:=-2,d_{n,4}:=a_{n-1,3},$ where $a_{4,3}=0$ $\implies$ $[e_{4},e_{n+2}]=2(e_4-e_2)+\sum_{k=6}^{n}{a_{k-1,3}e_k}.$ \\ \hline \scriptsize $7.$ &\scriptsize $\mathscr{L}_{e_1}\left([e_{i},e_{n+2}]\right)$ &\scriptsize $d_{2,i+1}=0,d_{n,i+1}:=a_{n-i+2,3},(4\leq i\leq n-2),$ where $a_{4,3}=0$ $\implies$ $[e_{j},e_{n+2}]=\left(j-2\right)e_j+\sum_{k=j+2}^{n}{a_{k-j+3,3}e_k},(5\leq j\leq n-1).$ \\ \hline \scriptsize $8.$ &\scriptsize $\mathscr{L}_{e_{1}}\left([e_{n-1},e_{n+2}]\right)$ &\scriptsize $d_{2,n}=0,$ $d_{n,n}:=n-2$ $\implies$ $[e_{n},e_{n+2}]=\left(n-2\right)e_{n}.$ Combining with $7.,$ $[e_{i},e_{n+2}]=\left(i-2\right)e_i+\sum_{k=i+2}^{n}{a_{k-i+3,3}e_k},(5\leq i\leq n).$ \\ \hline \scriptsize $9.$ &\scriptsize $\mathscr{L}_{e_{n+1}}\left([e_{n+1},e_{n+1}]\right)$ &\scriptsize $c_{n,n+1}=0$ $\implies$ $[e_{n+1},e_{n+1}]=c_{2,n+1}e_2.$ \\ \hline \scriptsize $10.$ &\scriptsize $\mathscr{L}_{e_{n+2}}\left([e_{n+2},e_{n+2}]\right)$ &\scriptsize $d_{n,n+2}=0$ $\implies$ $[e_{n+2},e_{n+2}]=d_{2,n+2}e_2.$ \\ \hline \scriptsize $11.$ &\scriptsize $\mathscr{L}_{e_{3}}\left([e_{n+1},e_{n+1}]\right)$ &\scriptsize $c_{2,3}:=a_{2,3}, c_{n,3}:=a_{n,3}$ $\implies$ $[e_{3},e_{n+1}]=a_{2,3}e_2+2e_3+\sum_{k=5}^n{a_{k,3}e_k}.$\\ \hline \scriptsize $12.$ &\scriptsize $\mathscr{L}_{e_{3}}\left([e_{n+2},e_{n+2}]\right)$ &\scriptsize $d_{2,3}:=\frac{a_{2,3}}{2},$ $d_{n,3}:=a_{n,3}$ $\implies$ $[e_{3},e_{n+2}]=\frac{a_{2,3}}{2}e_2+e_3+\sum_{k=5}^n{a_{k,3}e_k}.$ \\ \hline \scriptsize $13.$ &\scriptsize $\mathscr{L}_{e_{1}}\left([e_{n+1},e_{n+1}]\right)$ &\scriptsize $c_{2,1}:=a_{2,1},c_{n,1}=0$ $\implies$ $[e_{1},e_{n+1}]=e_1+a_{2,1}e_2+e_3$. 
\\ \hline \scriptsize $14.$ &\scriptsize $\mathscr{L}_{e_{1}}\left([e_{n+2},e_{n+2}]\right)$ &\scriptsize $d_{2,1}:=a_{2,1}-\frac{a_{2,3}}{2},d_{n,1}=0$ $\implies$ $[e_{1},e_{n+2}]=e_1+\left(a_{2,1}-\frac{a_{2,3}}{2}\right)e_2.$ \\ \hline \scriptsize $15.$ &\scriptsize $\mathscr{L}_{e_{n+1}}\left([e_{n+2},e_{n+1}]\right)$ &\scriptsize $c_{2,n+1}:=2c_{2,n+2}+2a_{5,3},d_{n,n+1}:=-c_{n,n+2}$ $\implies$ $[e_{n+1},e_{n+1}]=\left(2c_{2,n+2}+2a_{5,3}\right)e_2, [e_{n+1},e_{n+2}]=d_{2,n+1}e_2-\sum_{k=4}^{n-1}a_{k+1,3}e_k-c_{n,n+2}e_n.$ \\ \hline \scriptsize $16.$ &\scriptsize $\mathscr{L}_{e_{n+1}}\left([e_{n+2},e_{n+2}]\right)$ &\scriptsize $d_{2,n+2}:=\frac{d_{2,n+1}-a_{5,3}}{2}$ $\implies$ $[e_{n+2},e_{n+2}]=\frac{d_{2,n+1}-a_{5,3}}{2}e_2.$\\ \hline \end{tabular} \end{table} Altogether the nilradical $\mathcal{L}^4$ $(\ref{L4}),$ the outer derivations $\mathscr{L}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+2}}$ and the remaining brackets given above define a continuous family of the solvable left Leibniz algebras. Then we apply the technique of ``absorption'' according to step $(iv)$. \begin{itemize}[noitemsep, topsep=0pt] \item We start with the transformation $e^{\prime}_i=e_i,(1\leq i\leq n,n\geq5),e^{\prime}_{n+1}=e_{n+1}+\frac{c_{2,n+2}}{2}e_2, e^{\prime}_{n+2}=e_{n+2}+\frac{d_{2,n+1}-a_{5,3}}{4}e_2.$ This transformation removes the coefficients $c_{2,n+2}$ and $\frac{d_{2,n+1}-a_{5,3}}{2}$ in front of $e_2$ in $[e_{n+2},e_{n+1}]$ and $[e_{n+2},e_{n+2}],$ respectively, and changes the coefficient in front of $e_2$ in $[e_{n+1},e_{n+1}]$ and $[e_{n+1},e_{n+2}]$ to $2a_{5,3}$ and $a_{5,3},$ correspondingly. \item Then we apply the transformation $e^{\prime}_i=e_i,(1\leq i\leq n,n\geq5),e^{\prime}_{n+1}=e_{n+1}+\frac{a_{5,3}}{2}e_4, e^{\prime}_{n+2}=e_{n+2}$ to remove the coefficients $a_{5,3}$ and $2a_{5,3}$ in front of $e_2$ in $[e_{n+1},e_{n+2}]$ and $[e_{n+1},e_{n+1}]$, respectively. At the same time this transformation removes $a_{5,3}$ and $-a_{5,3}$ in front of $e_4$ in $[e_{n+2},e_{n+1}]$ and $[e_{n+1},e_{n+2}],$ respectively, and changes the coefficients in front of $e_{k},(6\leq k\leq n-1)$ in $[e_{n+2},e_{n+1}]$ and $[e_{n+1},e_{n+2}]$ to $a_{k+1,3}-\frac{a_{5,3}}{2}a_{k-1,3}$ and $\frac{a_{5,3}}{2}a_{k-1,3}-a_{k+1,3},$ respectively. It also affects the coefficients in front $e_n,(n\geq6)$ in $[e_{n+2},e_{n+1}]$ and $[e_{n+1},e_{n+2}],$ which we rename back by $c_{n,n+2}$ and $-c_{n,n+2},$ respectively. The following entries are introduced by the transformation: $-\frac{a_{5,3}}{2}$ and $\frac{a_{5,3}}{2}$ in the $(5,1)^{st},(n\geq5)$ position in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}},$ respectively. \item Applying the transformation $e^{\prime}_j=e_j,(1\leq j\leq n+1,n\geq6), e^{\prime}_{n+2}=e_{n+2}-\sum_{k=5}^{n-1}{\frac{A_{k+1,3}}{k-1}e_k},$ where $A_{6,3}:=a_{6,3}$ and $A_{k+1,3}:=a_{k+1,3}-\frac{1}{2}a_{5,3}a_{k-1,3}-\sum_{i=7}^k{\frac{A_{i-1,3}a_{k-i+5,3}}{i-3}},$ $(6\leq k\leq n-1,n\geq7),$ we remove the coefficients $a_{6,3}$ and $-a_{6,3}$ in front of $e_5$ in $[e_{n+2},e_{n+1}]$ and $[e_{n+1},e_{n+2}],$ respectively. Besides the transformation removes $a_{k+1,3}-\frac{a_{5,3}}{2}a_{k-1,3}$ and $\frac{a_{5,3}}{2}a_{k-1,3}-a_{k+1,3}$ in front of $e_k,(6\leq k\leq n-1)$ in $[e_{n+2},e_{n+1}]$ and $[e_{n+1},e_{n+2}],$ respectively. 
This transformation introduces $\frac{a_{6,3}}{4}$ and $-\frac{a_{6,3}}{4}$ in the $(6,1)^{st},(n\geq6)$ position as well as $\frac{A_{k+1,3}}{k-1}$ and $\frac{A_{k+1,3}}{1-k}$ in the $(k+1,1)^{st},(6\leq k\leq n-1)$ position in $\mathscr{R}_{e_{n+2}}$ and $\mathscr{L}_{e_{n+2}},$ respectively. It also affects the coefficients in front $e_n,(n\geq7)$ in $[e_{n+2},e_{n+1}]$ and $[e_{n+1},e_{n+2}],$ which we rename back by $c_{n,n+2}$ and $-c_{n,n+2},$ respectively. \item Finally applying the transformation $e^{\prime}_i=e_i,(1\leq i\leq n+1,n\geq5), e^{\prime}_{n+2}=e_{n+2}-\frac{c_{n,n+2}}{n-1}e_n,$ we remove $c_{n,n+2}$ and $-c_{n,n+2}$ in front of $e_n$ in $[e_{n+2},e_{n+1}]$ and $[e_{n+1},e_{n+2}],$ respectively, without affecting other entries. We obtain that $\mathscr{L}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+2}}$ are as follows: \end{itemize} {$$\mathscr{L}_{e_{n+1}}=\left[\begin{smallmatrix} -1 & 0 & 0 & 0&0&0&\cdots && 0&0 & 0\\ 3a_{2,1}-2a_{2,3} & -4 & a_{2,3}& -1 &0&0 & \cdots &&0 & 0& 0\\ -1 & 0 & -2 & 0 & 0&0 &\cdots &&0 &0& 0\\ 0 & 0 & 0 & -3 &0 &0 &\cdots&&0 &0 & 0\\ \frac{a_{5,3}}{2} & 0 & -a_{5,3} & 0 & -4 &0&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & -a_{5,3} & 0 &-5&\cdots & &0&0 & 0\\ 0 & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & \ddots &0&\ddots &&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ 0 & 0 & -a_{n-2,3}& -a_{n-3,3}& \cdots&\cdots &-a_{5,3}&0&3-n &0& 0\\ 0 & 0 & -a_{n-1,3}& -a_{n-2,3}& \cdots&\cdots &\boldsymbol{\cdot}&-a_{5,3}&0 &2-n& 0\\ 0 & 0 & -a_{n,3}& -a_{n-1,3}& \cdots&\cdots &\boldsymbol{\cdot}&\boldsymbol{\cdot}&-a_{5,3} &0& 1-n \end{smallmatrix}\right],$$} $$\mathscr{L}_{e_{n+2}}=\left[\begin{smallmatrix} -1 & 0 & 0 & 0&0&0&0&\cdots && 0&0 & 0\\ a_{2,1}-\frac{a_{2,3}}{2} & -2 & \frac{a_{2,3}}{2}& 0 &0&0 & 0&\cdots &&0 & 0& 0\\ 0 & 0 & -1 & 0 & 0&0 &0&\cdots &&0 &0& 0\\ 0 & 0 & 0 & -2 &0 &0 &0&\cdots&&0 &0 & 0\\ 0 & 0 & -a_{5,3} & 0 & -3 &0&0&\cdots & &0&0 & 0\\ -\frac{a_{6,3}}{4} & 0 &\boldsymbol{\cdot} & -a_{5,3} & 0 &-4&0&\cdots & &0&0 & 0\\ -\frac{A_{7,3}}{5} & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} & -a_{5,3}&0 &-5& &&\vdots&\vdots &\vdots\\ -\frac{A_{8,3}}{6} & 0 &\boldsymbol{\cdot} & \boldsymbol{\cdot} &\boldsymbol{\cdot} &-a_{5,3}&0 &\ddots&&\vdots&\vdots &\vdots\\ \vdots & \vdots & \vdots &\vdots & &&\ddots&\ddots &\ddots &\vdots&\vdots & \vdots\\ \frac{A_{n-2,3}}{4-n} & 0 & -a_{n-2,3}& -a_{n-3,3}& \cdots&\cdots&\cdots &-a_{5,3}&0&4-n &0& 0\\ \frac{A_{n-1,3}}{3-n} & 0 & -a_{n-1,3}& -a_{n-2,3}& \cdots&\cdots&\cdots &\boldsymbol{\cdot}&-a_{5,3}&0 &3-n& 0\\ \frac{A_{n,3}}{2-n} & 0 & -a_{n,3}&-a_{n-1,3}& \cdots&\cdots &\cdots&\boldsymbol{\cdot}&\boldsymbol{\cdot}&-a_{5,3} &0&2-n \end{smallmatrix}\right],(n\geq5).$$ The remaining brackets are given below: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_{1},e_{n+1}]=e_1+a_{2,1}e_2+e_3-\frac{a_{5,3}}{2}e_5,[e_{3},e_{n+1}]=a_{2,3}e_2+2e_3+\sum_{k=5}^n{a_{k,3}e_k},\\ \displaystyle [e_{4},e_{n+1}]=3\left(e_4-e_2\right)+\sum_{k=6}^n{a_{k-1,3}e_k}, [e_{i},e_{n+1}]=(i-1)e_i+\sum_{k=i+2}^{n}{a_{k-i+3,3}e_k},\\ \displaystyle [e_{1},e_{n+2}]=e_1+\left(a_{2,1}-\frac{a_{2,3}}{2}\right)e_2+\sum_{k=6}^n{\frac{A_{k,3}}{k-2}e_k},[e_{3},e_{n+2}]=\frac{a_{2,3}}{2}e_2+e_3+\sum_{k=5}^n{a_{k,3}e_k},\\ \displaystyle [e_{4},e_{n+2}]=2\left(e_4-e_2\right)+\sum_{k=6}^n{a_{k-1,3}e_k},[e_{i},e_{n+2}]=(i-2)e_i+\sum_{k=i+2}^{n}{a_{k-i+3,3}e_k},(5\leq i\leq n). \end{array} \right. 
\end{equation} \noindent $(v)$ Finally we apply the following two change of basis transformations: \begin{itemize}[noitemsep, topsep=0pt] \item $e^{\prime}_1=e_1+\left(a_{2,1}-\frac{a_{2,3}}{2}\right)e_2,e^{\prime}_2=e_2, e^{\prime}_3=e_3+\frac{a_{2,3}}{2}e_2, e^{\prime}_i=e_i-\sum_{k=i+2}^n{\frac{B_{k-i+3,3}}{k-i}e_k},$ $(3\leq i\leq n-2),e^{\prime}_{n-1}=e_{n-1},e^{\prime}_{n}=e_{n},e^{\prime}_{n+1}=e_{n+1},e^{\prime}_{n+2}=e_{n+2},$ where $B_{j,3}:=a_{j,3}-\sum_{k=7}^j{\frac{B_{k-2,3}a_{j-k+5,3}}{k-5}},(5\leq j\leq n)$.\\ This transformation removes $a_{5,3},a_{6,3},...,a_{n,3}$ in $\mathscr{R}_{e_{n+1}},$ $\mathscr{R}_{e_{n+2}}$ and $-a_{5,3},-a_{6,3},...,-a_{n,3}$ in $\mathscr{L}_{e_{n+1}},\mathscr{L}_{e_{n+2}}.$ It also removes $-\frac{a_{5,3}}{2}$ and $\frac{a_{5,3}}{2}$ from the $(5,1)^{st}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}},$ respectively. Besides it removes $a_{2,1}$ and $3a_{2,1}-2a_{2,3}$ from the $(2,1)^{st}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}},$ respectively, as well as $a_{2,1}-\frac{a_{2,3}}{2}$ from the $(2,1)^{st}$ positions in $\mathscr{R}_{e_{n+2}}$ and $\mathscr{L}_{e_{n+2}}.$ The transformation also removes $a_{2,3}$ from the $(2,3)^{rd}$ positions in $\mathscr{R}_{e_{n+1}},$ $\mathscr{L}_{e_{n+1}}$ and $\frac{a_{2,3}}{2}$ from the same positions in $\mathscr{R}_{e_{n+2}},$ $\mathscr{L}_{e_{n+2}}.$ It introduces the entries in the $(i,1)^{st}$ positions in $\mathscr{R}_{e_{n+1}}$ and $\mathscr{L}_{e_{n+1}},$ that we set to be $a_{i,1}$ and $-a_{i,1},(6\leq i\leq n),$ respectively. The transformation affects the entries in the $(j,1)^{st},(8\leq j\leq n)$ positions in $\mathscr{R}_{e_{n+2}}$ and $\mathscr{L}_{e_{n+2}},$ but we rename all the entries in the $(i,1)^{st}$ positions by $\frac{i-3}{i-2}a_{i,1}$ in $\mathscr{R}_{e_{n+2}}$ and by $\frac{3-i}{i-2}a_{i,1},(6\leq i\leq n)$ in $\mathscr{L}_{e_{n+2}}.$ \item Applying the transformation $e_k^{\prime}=e_k,(1\leq k\leq n),e^{\prime}_{n+1}=e_{n+1}+\sum_{k=5}^{n-1}{a_{k+1,1}e_k}, e^{\prime}_{n+2}=e_{n+2}+\sum_{k=5}^{n-1}{\frac{k-2}{k-1}a_{k+1,1}e_k},$ we remove $a_{i,1}$ in $\mathscr{R}_{e_{n+1}}$ and $-a_{i,1}$ in $\mathscr{L}_{e_{n+1}}$ as well as $\frac{i-3}{i-2}a_{i,1}$ and $\frac{3-i}{i-2}a_{i,1},(6\leq i\leq n)$ in $\mathscr{R}_{e_{n+2}}$ and $\mathscr{L}_{e_{n+2}},$ respectively. \end{itemize} We obtain a Leibniz algebra $l_{n+2,1}$ given below: \begin{equation} \left\{ \begin{array}{l} \displaystyle \nonumber [e_1,e_{n+1}]=e_1+e_3,[e_3,e_{n+1}]=2e_3, [e_4,e_{n+1}]=3(e_4-e_2), [e_{i},e_{n+1}]=(i-1)e_i,\\ \displaystyle [e_{n+1},e_{1}]=-e_1-e_3,[e_{n+1},e_{2}]=-4e_2,[e_{n+1},e_3]=-2e_3, [e_{n+1},e_4]=-e_2-3e_4,\\ \displaystyle [e_{n+1},e_{i}]=(1-i)e_i,[e_{1},e_{n+2}]=e_1,[e_{3},e_{n+2}]=e_3,[e_{4},e_{n+2}]=2(e_4-e_2),\\ \displaystyle [e_{i},e_{n+2}]=(i-2)e_i,(5\leq i\leq n,n\geq5),[e_{n+2},e_1]=-e_1,[e_{n+2},e_{2}]=-2e_2,\\ \displaystyle[e_{n+2},e_{j}]=(2-j)e_j,(3\leq j\leq n).\end{array} \right. 
\end{equation} We summarize a result in the following theorem: \begin{theorem}\label{LCodim2L4} There are four solvable indecomposable left Leibniz algebras up to isomorphism with a codimension two nilradical $\mathcal{L}^4,(n\geq4),$ which are given below: \begin{equation} \begin{array}{l} \displaystyle \nonumber (i)\,l_{n+2,1}: [e_1,e_{n+1}]=e_1+e_3,[e_3,e_{n+1}]=2e_3, [e_4,e_{n+1}]=3(e_4-e_2), [e_{i},e_{n+1}]=(i-1)e_i,\\ \displaystyle [e_{n+1},e_{1}]=-e_1-e_3,[e_{n+1},e_{2}]=-4e_2,[e_{n+1},e_3]=-2e_3, [e_{n+1},e_4]=-e_2-3e_4,\\ \displaystyle [e_{n+1},e_{i}]=(1-i)e_i,[e_{1},e_{n+2}]=e_1,[e_{3},e_{n+2}]=e_3,[e_{4},e_{n+2}]=2(e_4-e_2),\\ \displaystyle [e_{i},e_{n+2}]=(i-2)e_i,(5\leq i\leq n,n\geq5),[e_{n+2},e_1]=-e_1,[e_{n+2},e_{2}]=-2e_2,\\ \displaystyle [e_{n+2},e_{j}]=(2-j)e_j,(3\leq j\leq n),\\ \displaystyle (ii)\,l_{6,2}: [e_1,e_5]=e_1,[e_3,e_5]=e_3,[e_4,e_5]=2(e_4-e_2),[e_{5},e_{1}]=-e_1,[e_5,e_2]=-2e_2,\\ \displaystyle [e_{5},e_3]=-e_3, [e_{5},e_4]=-2e_4, [e_1,e_6]=e_1+be_3,[e_3,e_6]=e_1+be_3,[e_4,e_6]=(b+1)e_4-\\ \displaystyle(b+1)e_2, [e_{6},e_1]=-e_1-be_3,[e_6,e_2]=-2(b+1)e_2,[e_{6},e_{3}]=-e_1-be_3,\\ \displaystyle [e_{6},e_4]=-(b+1)\left(e_2+e_4\right),(b\neq-1),\\ \displaystyle (iii)\, l_{6,3}: [e_1,e_5]=e_1+(c+1)e_3,[e_3,e_5]=ce_1+2e_3,[e_{4},e_5]=3\left(e_4-e_2\right),[e_{5},e_{1}]=-e_1-\\ \displaystyle (c+1)e_3,[e_5,e_2]=-2(c+2)e_2,[e_{5},e_3]=-ce_1-2e_3,[e_5,e_4]=(-2c-1)e_2-3e_4,\\ \displaystyle [e_1,e_6]=e_1,[e_3,e_6]=e_3, [e_{4},e_6]=2\left(e_4-e_2\right),[e_{6},e_1]=-e_1,[e_6,e_2]=-2e_2,[e_{6},e_{3}]=-e_3,\\ \displaystyle [e_6,e_4]=-2e_4,(c\neq-2),\\ \displaystyle (iv)\,l_{6,4}:[e_1,e_5]=e_1,[e_3,e_5]=e_3,[e_{4},e_5]=2\left(e_4-e_2\right),[e_{5},e_{1}]=-e_1,[e_5,e_2]=-2e_2,\\ \displaystyle [e_{5},e_3]=-e_3,[e_5,e_4]=-2e_4,[e_1,e_6]=be_1+(1-b)e_3,[e_3,e_6]=e_1,[e_{4},e_6]=b\left(e_4-e_2\right),\\ \displaystyle [e_{6},e_1]=-be_1+(b-1)e_3,[e_6,e_2]=-2e_2,[e_{6},e_{3}]=-e_1,[e_6,e_4]=(b-2)e_2-be_4,(b\neq0). \end{array} \end{equation} \end{theorem} \newpage
\subsubsection{\@startsection{subsubsection}{3}% \z@{.5\linespacing\@plus.7\linespacing}{.1\linespacing}% {\normalfont\itshape}} \makeatother \settopmatter{printacmref=false} \renewcommand\footnotetextcopyrightpermission[1]{} \usepackage{url} \usepackage{listings} \renewcommand{\ttdefault}{pcr} \lstset{language=[90]Fortran, basicstyle=\ttfamily\lst@ifdisplaystyle\scriptsize\fi, keywordstyle=\bfseries, comment=[l]{!\ }, escapechar=@, tabsize=3, } \newcommand{\SRC}[1]{% \lstinline[gobble=0]{#1}\xspace } \newcommand{\CMD}[1]{\SRC{#1}} \usepackage{xspace} \newcommand{\systemshape}{\sffamily} \newcommand{{\systemshape LIBXSMM}\xspace}{{\systemshape LIBXSMM}\xspace} \usepackage{placeins} \newcommand{{i.\,e.}\xspace}{{i.\,e.}\xspace} \newcommand{{e.\,g.}\xspace}{{e.\,g.}\xspace} \hyphenation{op-tical net-works semi-conduc-tor} \hyphenation{per-for-mance} \usepackage{algorithm} \usepackage[noend]{algpseudocode} \algnewcommand{\LineComment}[1]{\State \emph{\textcolor{blue}{\(\triangleright\) #1}}} \algrenewcommand\algorithmicindent{1em}% \makeatletter \makeatother \newcommand{\mathrel{+}=}{\mathrel{+}=} \usepackage{graphicx} \usepackage{caption} \usepackage{subcaption} \usepackage{tabularx} \usepackage{enumitem} \makeatletter \makeatother \usepackage{color, soul} \soulregister\emph7 \newcommand{\fix}[1]{\textbf{\textcolor{red}{#1}}} \begin{abstract} Deep learning (DL) is one of the most prominent branches of machine learning. Due to the immense computational cost of DL workloads, industry and academia have developed DL libraries with highly-specialized kernels for each workload/architecture, leading to numerous, complex code-bases that strive for performance, yet they are hard to maintain and do not generalize. In this work, we introduce the \emph{batch-reduce GEMM kernel} and show how the most popular DL algorithms can be formulated with this kernel as the basic building-block. Consequently, the DL library-development degenerates to mere (potentially automatic) tuning of loops around this sole optimized kernel. By exploiting our new kernel we implement Recurrent Neural Networks, Convolution Neural Networks and Multilayer Perceptron training and inference primitives in just 3K lines of high-level code. Our primitives outperform vendor-optimized libraries on multi-node CPU clusters, and we also provide proof-of-concept CNN kernels targeting GPUs. Finally, we demonstrate that the batch-reduce GEMM kernel within a tensor compiler yields high-performance CNN primitives, further amplifying the viability of our approach. \end{abstract} \begin{document} \title{High-Performance Deep Learning via a Single Building Block} \author{Evangelos Georganas, Kunal Banerjee, Dhiraj Kalamkar, Sasikanth Avancha, Anand Venkat, Michael Anderson, Greg Henry, Hans Pabst, Alexander Heinecke} \affiliation{\institution{Intel Corporation}} \renewcommand{\shortauthors}{E. Georganas et al.} \maketitle \section{Introduction} In the past decade, machine learning has experienced an academic and industrial renaissance where deep learning (DL) has been the main driving force. More specifically, deep neural networks have advanced the fields of computer vision, speech recognition, machine translation and search ranking, and naturally emerge in numerous applications and scientific domains~\cite{origalexnet,szegedy2015going,simonyan2014very,yu2013feature,wu2016google,cheng2016wide}. 
Three types of neural networks (NN) comprise the most prominent DL workloads, representing 95\% of data centers' demands~\cite{jouppi2017datacenter}: i) Recurrent Neural Networks (RNN)~\cite{graves2013speech}, with the so-called Long Short-Term Memory (LSTM)~\cite{hochreiter1997long} networks being the most popular variation, ii) Convolution Neural Networks (CNN)~\cite{origalexnet}, and iii) Multi-Layer Perceptrons (MLP)~\cite{minsky2017perceptrons,hornik1989multilayer}. Additionally, the contemporary Transformer~\cite{transformer} and BERT~\cite{bert} workloads computationally involve fully-connected layers, which also lie at the heart of MLP. All these neural networks can be further associated with two use-cases: \emph{training} of the underlying NN models (i.e.\ learning via back-propagation~\cite{lecun1988theoretical}), and \emph{inference} (i.e.\ yielding predictions) based on trained models. Due to the increasing size and complexity of the datasets involved in deep neural networks (DNN), the training and inference tasks require a vast amount of computation. Therefore, academia and industry have invested in the development of DL libraries targeting all the aforementioned workloads on various architectures.

The development of such DL libraries typically embraces one of the following strategies: (i) the specific workload kernel leverages coarse-grained, linear algebra library calls, e.g.\ an LSTM cell via large GEneral Matrix Multiply (GEMM) calls in mkl-dnn~\cite{mkldnn}, or convolutions via image-to-column tensor transformations and subsequent large GEMM calls~\cite{vasudevan2017parallel,anderson2017low}, or (ii) for each workload and use-case (training/inference) the kernel employs a specialized implementation that targets the specific algorithm/workload and architecture at hand, e.g.\ convolution kernels in mkl-dnn and cuDNN~\cite{chetlur2014cudnn}. The former approach of deploying coarse-grained, linear algebra library calls eases the DL library development process since no specialized kernel development is involved. However, it may result in suboptimal data reuse (e.g.\ redundant data movements to format the underlying tensors/matrices in the layout required by the GEMM calls), and it is not flexible enough to allow efficient, fine-grained fusion of other operators. The latter approach of implementing specialized kernels for each DL workload/use-case and platform/architecture strives for performance, but naturally results in numerous, complex code-bases that are hard to maintain and do not generalize. For example, the code-base \emph{only for convolutions on CPUs} within mkl-dnn consists of $\sim$36,000 lines of code.

Figure~\ref{fig:motivation} shows the performance of various convolution kernel implementations on a Xeon Skylake-SP 8180 processor. The yellow and green lines represent implementations adopting strategy (i). More specifically, the green line shows the performance of convolutions that leverage small GEMM library calls, whereas the yellow line illustrates the performance of an implementation which uses image-to-column transformations and \emph{batched GEMM}~\cite{dongarra2017design} library calls. Both approaches perform far from the machine's peak, with average efficiencies of 61\% and 49\%, respectively.
On the other hand, the orange line exhibits the performance of the vendor-optimized mkl-dnn library that follows strategy (ii) with ad hoc, specialized direct convolution kernels and achieves an average efficiency of 81\%, being 1.33$\times$ and 1.64$\times$ faster than the aforementioned generic implementations. However, this performance comes at the cost of complex, specialized kernels that do not generalize to different workloads (e.g.\ RNN/LSTM/MLP) or different architectures (e.g.\ GPUs).
\begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{new_motivation.pdf} \caption{Performance of ResNet-50 forward convolutions} \label{fig:motivation} \end{figure}
In this work, we introduce a new kernel called \emph{batch-reduce GEMM} and show how the most popular DL workloads and algorithms (RNN/LSTM, CNN and MLP) can be formulated with this new kernel as the basic building block. The batch-reduce GEMM kernel essentially multiplies a sequence of input sub-tensor blocks (which form a \emph{batch}) and the partial multiplication results are \emph{reduced} into a single accumulator/output sub-tensor block. Our new kernel is flexible enough to accommodate coarse-grained and fine-grained operations that arise in DL workloads, whereas its semantics lend themselves to various optimizations (e.g.\ load/store optimizations of the result sub-tensor, prefetching of the sub-tensors to be multiplied). Also, since the kernel supports operations at fine granularity, fusion of subsequent operators on the output sub-blocks is inherently efficient. The blue line in Figure~\ref{fig:motivation} shows the performance of the convolution primitive that leverages our new \emph{batch-reduce GEMM} kernel, achieving an average efficiency of 83\% and outperforming even the ad hoc, vendor-optimized kernel. Having a single kernel as the basic building block is transformative: by implementing and optimizing this single kernel for a given architecture, the development of DL primitives degenerates to mere loop tuning around this kernel. Essentially, our approach with a \emph{single} kernel addresses the combinatorial explosion of low-level optimization work that is required for each pair <architecture, DL primitive>. Instead, for each architecture we need to optimize at low-level \emph{only one kernel for all DL primitives}. Furthermore, having a single, highly efficient building block enables efficient usage of tensor compiler frameworks. Such frameworks embrace tensors as first-class citizens, and provide specific optimization techniques targeting tensor algebra programs. Since DL primitives are inherently tensor algebra programs, there is a large amount of ongoing research that leverages specialized tensor compilers for DL workload development (e.g.\ TVM~\cite{chen2018tvm}, GLOW~\cite{DBLP:journals/corr/abs-1805-00907}, PlaidML~\cite{plaidml}, MLIR~\cite{mlir}). However, compilers struggle to optimize small GEMM-flavored loop nests that arise in tensor programs~\cite{libxsmm}. Contemporary architectures become increasingly complex, and all the micro-architectural idiosyncrasies have to be considered in order to achieve close-to-peak performance. Our kernel is optimized for the nuances of the architecture at hand, and provides tensor compilers with a robust building block that can be used during the polyhedral optimization phase of general loop nests~\cite{plaidml,poly}.
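To make the semantics of the new kernel concrete, the following is a minimal, unoptimized C sketch of the operation it performs (the optimized, JIT-generated implementation is described in Section~\ref{sec:br_gemm}); the naive loop nest, row-major storage and dense leading dimensions are simplifying assumptions for illustration only:
\begin{lstlisting}[language=C]
/* Naive reference for the batch-reduce GEMM operation:
 *   C = beta*C + alpha * sum_{i=0..N-1} A_i * B_i
 * A_ptrs/B_ptrs hold pointers to the N sub-blocks (m x k and k x n). */
void batchreduce_gemm_ref(const float **A_ptrs, const float **B_ptrs,
                          float *C, int m, int n, int k, int N,
                          float alpha, float beta) {
  for (int im = 0; im < m; im++) {
    for (int in = 0; in < n; in++) {
      float acc = 0.0f;
      for (int i = 0; i < N; i++)      /* reduction over the batch */
        for (int ik = 0; ik < k; ik++)
          acc += A_ptrs[i][im * k + ik] * B_ptrs[i][ik * n + in];
      C[im * n + in] = beta * C[im * n + in] + alpha * acc;
    }
  }
}
\end{lstlisting}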
To illustrate the viability and generality of our methodology with a single kernel, we develop DL primitives which target training and inference of RNN/LSTM, CNN and MLP workloads in $\sim$3,000 lines of high-level C code. Our primitives outperform vendor-optimized libraries on CPUs. We also provide proof-of-concept design with a tensor compiler framework by showcasing efficient CNN implementation in TVM that leverages our batch-reduce GEMM kernel. Additionally, our methodology provides a pathway for performance portability; we present exemplary, high-performance CNN kernels on integrated GPUs. Last but not least, we integrate our primitives in distributed DL frameworks (Tensorflow~\cite{tensorflow2015} and GxM~\cite{sc18}), and show performance results on two training workloads: Google's Neural Machine Translation (GNMT)~\cite{wu2016google} and ResNet-50 training~\cite{he2016deep}. These results push the envelope of DL training performance on CPU clusters. The main contributions of this paper are: \begin{itemize} \item The introduction of the batch-reduce GEMM kernel along with its efficient implementation. \item The design and implementation of multi-threaded, high performance DL primitives covering RNN/LSTM, CNN and MLP inference and training algorithms with batch-reduce GEMM kernel being the basic building block. We need to optimize at low-level \emph{only this kernel for all DL primitives}. \item A detailed performance comparison of our DL primitives with state-of-the-art vendor-optimized libraries. \item Distributed memory results of LSTM and CNN training workloads that leverage our optimized DL kernels and outperform the best in class results on CPU clusters. \item CNN proof-of-concept results on integrated GPUs and CNN kernels within TVM that leverage the batch-reduce GEMM kernel. \end{itemize} \section{The Batch-Reduce GEMM kernel} \label{sec:br_gemm} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{combined2.pdf} \caption{(a) The batch-reduce GEMM kernel (b) Outer product small GEMM microkernel} \label{fig:br_gemm} \end{figure} In this section, we describe the design and implementation of the new batch-reduce GEMM kernel which comprises the cornerstone of our deep learning primitives. Figure~\ref{fig:br_gemm} (a) illustrates the functionality of the new kernel which materializes the operation: \begin{equation*} C_j = \beta \cdot C_j + \alpha \sum_{i=0}^{N-1} A_i \cdot B_i \end{equation*} This kernel multiplies the specified blocks $A_i \in {\rm I\!R}^{m\times k}$ and $B_i \in {\rm I\!R}^{k\times n}$ and \emph{reduces} the partial results to a block $C_j\in {\rm I\!R}^{m\times n}$ of a tensor $C$. Tensors $A$ and $B$ can alias and also the blocks $A_i$ and $B_i$ can reside in any position in the input tensors $A$ and $B$. The batch-reduce GEMM kernel takes the following arguments: (i) \emph{two arrays of pointers} to the corresponding blocks $A_i$ and $B_i$ to be multiplied, (ii) a pointer to the output block $C_j$, (iii) the number $N$ of the blocks to be multiplied and (iv) the scaling parameters $\alpha$ and $\beta$. Our kernel differs from the recently introduced batched GEMM~\cite{dongarra2017design} and its variation strided-batch-gemm~\cite{stridedbatchgmemm} that materialize: \begin{equation*} C_i = \beta \cdot C_i + \alpha \cdot A_i \cdot B_i \end{equation*} These batched routines are missing the \emph{reduction} functionality and cannot optimize for the output matrix re-use. 
Also, the strided-batch-gemm kernel accesses the $A_i$ and $B_i$ subblocks based on fixed strides and therefore is more restrictive. \begin{algorithm}[t] \begin{algorithmic}[1] \Statex \algbackskip \textbf{Inputs}: $A_i \in {\rm I\!R}^{m\times k},B_i \in {\rm I\!R}^{k\times n}i = 0,...,N\text{-}1$,\ $C_j \in {\rm I\!R}^{m\times n}$ $\alpha, \beta \in {\rm I\!R}$ \Statex \algbackskip \textbf{Output}:$\ C_j = \beta \cdot C_j + \alpha \sum_{i=0}^{N-1} A_i \cdot B_i$ \For{$i_n=0 \dots n-1\ \textbf{with\ step\ }\mathbf{n_b}$} \For{$i_m=0 \dots m-1\ \textbf{with\ step\ }\mathbf{m_b}$} \State acc\_regs $\leftarrow$ load $m_b \times n_b$ $C_j$$\text{\ subblock}_{i_m,i_n}$ \For{$i=0 \dots N-1\ \textbf{with\ step\ } \mathbf{1}$} \For{$i_k=0 \dots k-1\ \textbf{with\ step\ } \mathbf{1}$} \LineComment{Outer product GEMM microkernel} \State acc\_regs $\mathrel{+}=$ $A_i\ \text{subcolumn}_{i_m,i_k}\times B_i\ \text{subrow}_{i_k,i_n} $ \EndFor \EndFor \State $C_j\ \text{subblock}_{i_m,i_n} \leftarrow$ acc\_regs \EndFor \EndFor \end{algorithmic} \caption{The batch-reduce GEMM kernel} \label{alg:br_kernel} \end{algorithm} The new batch-reduce GEMM kernel specification naturally lends itself to a handful of optimizations. First, this kernel minimizes the output data movement compared to GEMM or batched GEMM approaches since the specification dictates the use of a single output. Second, the input subblocks that are multiplied can reside in arbitrary locations within tensors, therefore the kernel obviates the need for tensor transformations/copy overheads that are otherwise required in order to obtain long accumulation chains (e.g.\ image to column transformations are required to implement convolutions via large GEMM calls). Such long accumulation chains are essential in order to achieve high performance. Additionally, being able to provide arbitrary sub-tensor blocks as inputs provides ease of integration with blocked/tiled tensor layouts. Last but not least, since the input $A_i$ and $B_i$ subblocks are part of the interface, the implementation can trivially prefetch them in order to hide the latency of data movement. In order to obtain a high performance implementation of the batch-reduce GEMM kernel we build upon and extend the open source LIBXSMM~\cite{libxsmm} library which leverages JIT techniques and generates small GEMMS achieving close to peak performance. Algorithm~\ref{alg:br_kernel} shows the pseudocode of the batch-reduce GEMM kernel. Lines 1-2 block the computation of the result $C_j$ in $m_b \times n_b$ subblocks. Once such a subblock is loaded into the accumulation registers (line 3), we loop over all pairs $A_i,\ B_i$ (line 4) and we accumulate into the loaded registers the products of the corresponding $m_b\times k$ subblocks of $A_i$ with the relevant $k\times n_b$ subblocks of $B_i$ (lines 5-7). In order to calculate a partial product of an $m_b\times k$ subblock of $A_i$ with a $k\times n_b$ subblock of $B_i$, we follow an outer product formulation. In particular, we multiply an $m_b\times 1$ column of $A_i$ with a $1\times n_b$ row of $B_i$ (line 7) and we repeat the analogous outer product computation for all $k$ columns/rows of the $A_i$/$B_i$ subblocks (line 5). Figure~\ref{fig:br_gemm}(b) depicts the outer product microkernel that multiplies an $m_b\times 1$ column of $A_i$ with a $1\times n_b$ row of $B_i$ (in this example $m_b=64$, $n_b=6$). For illustration purposes, we consider that the underlying architecture has 32 vector registers where each one can hold 16 tensor elements. 
In this example, accumulation registers 7-30 hold the partial $C_j$ result. First, we broadcast the row of $B_i$ into registers 1-6. Then, we load in register 0 the first 16 elements of the $A_i$ column and via 6 fused-multiply-add instructions (FMAs) with registers 1-6 we update the accumulators 7-12. We repeat the analogous process for the remaining 48 elements of the $A_i$ column and we update all the accumulation registers. We note here that this is just one of the methods that LIBXSMM adopts for the outer product microkernel; LIBXSMM leverages various strategies depending on the architecture at hand (i.e.\ vector length) and the $m_b$, $n_b$ values. Once the $m_b \times n_b$ subblock of $C_j$ is fully computed for all pairs of $A_i$ and $B_i$ matrices, the accumulators are stored in the proper location of $C_j$ (line 8). Finally, we further enhance the microkernel with software prefetches of $A_i$ and $B_i$ elements aiming to mitigate cache miss latency overheads.
\section{Deep Learning Kernels}
\label{sec:dl_algs}
Here we describe the design and implementation of our DL primitives that exploit the batch-reduce GEMM kernel. In particular, we outline how to implement the required algorithms for LSTM~\cite{hochreiter1997long}, CNN~\cite{szegedy2015going} and MLP~\cite{hornik1989multilayer} workloads. We choose performance-optimal data layouts which might differ from classic layout specifications in today's vendor libraries. However, this is fully acceptable as modern DL frameworks anyway change tensor layouts during their graph optimization phase for operator fusion (e.g.\ Tensorflow's Grappler). Therefore, freedom of data layout choice is a fundamental cornerstone to enable high performance through tensor compilers. We highlight that the subsequent algorithmic descriptions are agnostic of the compute precision. The only prerequisite in order to get an implementation with the desired compute precision is to generate the corresponding batch-reduce GEMM kernel. The results we present in Section~\ref{sec:results} are in single precision (FP32); however, we already have implementations supporting the int8 and bfloat16 datatypes (via the new Intel VNNI and bfloat16 instructions, respectively) which have been shown to sufficiently cover a range of DL training and inference workloads~\cite{vanhoucke2011improving,bfloat16_tf,de2018high} and are supported on upcoming CPU architectures. Also, the same algorithms are applicable for GPUs; in Section~\ref{sec:results} we showcase exemplary results of CNNs on integrated GPUs.
\subsection{Long Short-Term Memory (LSTM)}
\label{subsec:lstm}
\begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{lstm.pdf} \caption{Long Short-Term Memory data flow.} \label{fig:lstm} \end{figure}
LSTM is a type of RNN which is well-suited for processing temporal data. Unlike traditional RNNs, LSTMs can handle the exploding and vanishing gradient problems encountered during neural network training. LSTM has found applications in language translation, text generation, handwriting recognition and image captioning. In this subsection, we focus on the forward propagation pass used to train an LSTM cell (the forward propagation pass is also utilized for the inference use-case). The backward by data and weight update kernels required for the entire training via the back-propagation algorithm~\cite{lecun1988theoretical} are implemented in an analogous way.
\subsubsection{LSTM equations and prior art} \label{subsubsec:naive_lstm} Given the batch size $N$, the sequence length $T$, the state size $C$ and hidden state size $K$, the inputs of the forward propagation pass in the training process of the LSTM cell are: i) the weight tensors $W_i$, $W_c$, $W_f$, $W_o\in {\rm I\!R}^{K\times C}$, ii) the recurrent weights $R_i$, $R_c$, $R_f$, $R_o\in {\rm I\!R}^{K\times K}$, iii) the input sequence tensor $x \in {\rm I\!R}^{T\times C\times N}$, and iv) the bias tensors $b_i$, $b_c$, $b_f$, $b_o\in {\rm I\!R}^{K}$. These tensors are combined based on the Equations~1-6 and yield the output sequence $h \in {\rm I\!R}^{T\times K\times N}$ and tensor $s \in {\rm I\!R}^{T\times K\times N}$: \begin{eqnarray} i_t &=& \sigma(W_i\cdot x_t + R_i\cdot h_{t-1} + b_i)\\ c_t &=& \textrm{tanh}(W_c\cdot x_t + R_c\cdot h_{t-1} + b_c)\\ f_t &=& \sigma(W_f\cdot x_t + R_f\cdot h_{t-1} + b_f)\\ o_t &=& \sigma(W_o\cdot x_t + R_o\cdot h_{t-1} + b_o)\\ s_t &=& f_t \circ s_{t-1} + i_t \circ c_t\\ h_t &=& o_t \circ \textrm{tanh}(s_t) \end{eqnarray} In these equations, observe the \emph{recurrent} relationship between subtensors $i_t, c_t, f_t, o_t$ and $s_t$ of the current time-step $t$ and subtensors $h_{t-1}$, $s_{t-1}$ of the previous time-step $t-1$. Also, $\sigma()$ represents the standard logistic sigmoid function, $\tanh()$ is the hyperbolic tangent function and ``$\circ$" stands for element-wise multiplication of tensors. Figure~\ref{fig:lstm} visualizes the computations and the dependencies involved in the forward propagation pass of the LSTM network. Typical implementations of the LSTM cell (e.g.\ basic LSTM cell in Tensorflow) stack the $W_i$, $W_c$, $W_f$, $W_o$ matrices into $W\in{\rm I\!R}^{4K\times C}$ and the $R_i$, $R_c$, $R_f$, $R_o$ into $R\in {\rm I\!R}^{4K\times K}$ and then employ two large GEMMS $W\cdot x_t$ and $R\cdot h_{t-1}$ to calculate the relevant partial products in Equations 1-4. Moreover, these two large GEMMs can be further replaced with a single large GEMM call by stacking $W$, $R$ and $x_t$, $h_{t-1}$ and performing: $\begin{bmatrix} W\ R \end{bmatrix}\cdot \begin{bmatrix} x_t^T\ h_{t-1}^T\end{bmatrix}^T$. Then, such an implementation applies the element-wise operations (sigmoid/tanh) onto the GEMM results and concludes with the element-wise operations dictated by Equations 5-6. While such an approach is easy to implement by exploiting large vendor-optimized GEMM library calls, the data reuse of the underlying tensors relies on how GEMMs are parallelized and may be suboptimal for GEMM sizes stemming from small batch size $N$. Also, the element-wise operations are exposed as a bandwidth-bound kernel after the GEMM which is typically a compute-bound kernel; the outputs of the large GEMM are not hot in cache (due to limited cache capacity) and as such the involved tensors have to be re-read from memory for the element-wise operations. 
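As a schematic illustration of this coarse-grained approach (a simplified sketch, not the implementation evaluated in this paper), one LSTM time-step can be expressed with two large GEMM calls followed by a bandwidth-bound element-wise pass; the function name, the pre-stacked gate tensors \texttt{W4}/\texttt{R4}, the generic CBLAS interface, row-major storage and the omission of the bias terms are assumptions made for brevity:
\begin{lstlisting}[language=C]
#include <cblas.h>
#include <math.h>

static float sigmoidf(float x) { return 1.0f / (1.0f + expf(-x)); }

/* One coarse-grained LSTM time-step: W4 (4K x C) and R4 (4K x K) hold the
 * stacked i/c/f/o weights, G (4K x N) the pre-activation gates, x_t (C x N),
 * h_prev (K x N); s holds s_{t-1} on entry and s_t on exit (biases omitted). */
void lstm_step_coarse(const float *W4, const float *R4, const float *x_t,
                      const float *h_prev, float *G, float *s, float *h_t,
                      int K, int C, int N) {
  cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, 4 * K, N, C,
              1.0f, W4, C, x_t, N, 0.0f, G, N);        /* G  = W4 * x_t    */
  cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, 4 * K, N, K,
              1.0f, R4, K, h_prev, N, 1.0f, G, N);     /* G += R4 * h_prev */
  const float *i = G, *c = G + K * N, *f = G + 2 * K * N, *o = G + 3 * K * N;
  for (int idx = 0; idx < K * N; idx++) {  /* exposed element-wise pass */
    float iv = sigmoidf(i[idx]), cv = tanhf(c[idx]);
    float fv = sigmoidf(f[idx]), ov = sigmoidf(o[idx]);
    s[idx]   = fv * s[idx] + iv * cv;      /* Eq. 5 */
    h_t[idx] = ov * tanhf(s[idx]);         /* Eq. 6 */
  }
}
\end{lstlisting}
The final loop re-reads the GEMM outputs from memory whenever they no longer fit in cache, which is exactly the inefficiency that the data-flow formulation described next avoids.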
\begin{algorithm}[t] \begin{algorithmic}[1] \Statex \algbackskip \textbf{Inputs}: Weight tensors $W_*[K_b][C_b][b_c][b_k], R_*[K_b][K_b][b_k][b_k]$ \Statex \algbackskip Input sequence $x[T][N][C]$, Bias $b_*[K]$, blocking factors $b_k, b_c, b_n$ \Statex \algbackskip \textbf{Outputs}: Output sequence $h[T][N][K]$ and $s[T][N][K]$ \State $N_b \leftarrow N/b_n$ \State Based on $thread\_id$ calculate $K_b\_start$, $K_b\_end$, $N_b\_start$ and $N_b\_end$ to assign output work items \For{$t=0 \dots T-1$} \For{$ib_k=K_b\_start \dots K_b\_end$} \For{$ib_n=N_b\_start \dots N_b\_end$} \LineComment{Compute a block of $i_t = \sigma(W_i\cdot x_t + R_i\cdot h_{t-1} + b_i)$} \State $i_k \leftarrow ib_k \cdot b_k$\ ,\ $i_n \leftarrow ib_n \cdot b_n$ \State $i[t][i_n][i_k]\leftarrow b_i[i_k]$ \For{$ib_c=0 \dots C_b-1$} \State $A_{ptrs}[ib_c] = \&W_i[ib_k][ib_c][0][0]$ \State $B_{ptrs}[ib_c] = \&x[t][i_n][ib_c\cdot b_c]$ \EndFor \State $\mathbf{batchreduce\_gemm}(A_{ptrs}, B_{ptrs}, \&i[t][i_n][i_k], C_b)$ \For{$ib_c=0 \dots K_b-1$} \State $A_{ptrs}[ib_c] = \&R_i[ib_k][ib_c][0][0]$ \State $B_{ptrs}[ib_c] = \&h[t-1][i_n][ib_c\cdot b_k]$ \EndFor \State $\mathbf{batchreduce\_gemm}(A_{ptrs}, B_{ptrs}, \&i[t][i_n][i_k], K_b)$ \State $i[t][i_n][i_k] \leftarrow \sigma(i[t][i_n][i_k])$ \LineComment{Ditto for blocks of $c_t, f_t,o_t$ via Equations 2-4} \State $s[t][i_n][i_k] \leftarrow f[t][i_n][i_k] \circ s[t-1][i_n][i_k] +$ \State \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $i[t][i_n][i_k] \circ c[t][i_n][i_k]$ \State {$h[t][i_n][i_k] \leftarrow o[t][i_n][i_k] \circ \tanh{(s[t][i_n][i_k])}$} \EndFor \EndFor \EndFor \end{algorithmic} \caption{Forward propagation pass of LSTM cell} \label{alg:lstm} \end{algorithm} \subsubsection{Optimized LSTM cell via the batch-reduce GEMM kernel} \label{subsubsec:lstm_opt} In order to ameliorate the inefficiencies of the large GEMM approach, we follow a \emph{data flow} methodology for our optimized LSTM cell, an approach which has been also explored in previous work~\cite{lstmdataflow}. More specifically, we implement a parallel blocked matrix GEMM in order to achieve load balance, maximize weight matrix reuse and fuse the element-wise operations after partial GEMM blocks are computed and while they are still hot in cache. Algorithm~\ref{alg:lstm} shows our data-flow implementation. In particular, the output and the intermediate GEMM results/tensors are divided into logical $b_n\times b_k$ blocks which constitute the work items. Then these work items are assigned onto the available threads (line 2) and subsequently each thread proceeds with its assigned computations. Lines 6-17 indicate how such a $b_n\times b_k$ block of $i_t$ is calculated by a specific thread. First (line 8), the corresponding $i_t$ block is initialized with the according bias tensor values from $b_i$. Then, lines 9-12 employ the batch-reduce GEMM kernel described in Section~\ref{sec:br_gemm} and calculate the contribution $W_i\cdot x_t$ to the current block of $i_t$. More specifically, lines 9-11 prepare the arguments of the batch-reduce GEMM call by calculating the pointers of the required $W_i$ and $x_t$ sub-blocks and storing them in auxiliary arrays $A_{ptrs}$ and $B_{ptrs}$. Then, line 12 calls the batch-reduce GEMM kernel which accumulates the partial products from the $W_i$ and $x_t$ sub-blocks onto the current $i_t$ block. 
We emphasize here that our batch-reduce GEMM allows small blocking values $b_n$ and $b_k$ to be used since: (a) the small GEMM microkernel runs close to peak even for small dimensions and (b) it avoids the redundant load/stores of the accumulators that arise from the batch-reduce operation and would cripple the overall performance; instead it keeps the accumulation chain in-registers for as long as possible (see Algorithm~\ref{alg:br_kernel}). In an analogous way, lines 13-16 calculate the contribution $R_i\cdot h_{t-1}$ to the current block of $i_t$ as shown in Equation 1. Subsequently, line 17 applies the element-wise operation (sigmoid in this case) onto the just-computed block of $i_t$. Since the block of $i_t$ is hot in cache, the application of the element-wise operation does not incur any data movement from memory. The same technique is used to calculate the corresponding sub-blocks of $c_t$, $f_t$ and $o_t$ (omitted in Algorithm~\ref{alg:lstm} for simplicity). It is noteworthy that the $c_t$, $f_t$ and $o_t$ computations reuse the same entries of $x_t$ and $h_{t-1}$ from cache since these tensor entries were also used for the computation of $i_t$. Finally, lines 19-21 conclude the computation of the corresponding blocks of the output tensors $h_t$ and $s_t$ based on the element-wise operations dictated by Equations 5-6. After all the work items assigned to the available threads for a given time-step are fully computed, all the threads synchronize and proceed to the next time-step (loop at line 3). Such a synchronization is necessitated because all the output entries $h_t$ of the current time-step are required in the next time-step iteration. We also note here that the way the work items are processed by the threads affects the data reuse of the weight tensors $W_*$ and $R_*$. In particular, since work items are processed by iterating the ``mini batch" dimension first (loop at line 5), the corresponding slices of the weight tensors $W_*$ and $R_*$ are reused $N_b\_end-N_b\_start-1$ times from cache (potentially from mid-level cache). Another optimization that is not shown in Algorithm~\ref{alg:lstm} for simplicity is further cache blocking of the batch-reduce loops at lines 9 and 13. In particular, if the weight tensors at hand have large state sizes $C$ and $K$, we block these dimensions in order to fit the corresponding weight tensors slices in cache. In such a case, the algorithm would have yet another loop just after the time-step loop (at line 3) which blocks the batch-reduce loops at lines 9 and 13. Last but not least, Algorithm~\ref{alg:lstm} carefully chooses the layouts of the corresponding tensors. The weight tensors $W_*$ and $R_*$ are conceptually 2 dimensional tensors, whereas our implementation employs a blocked layout (with $C_b=C/b_c$ and $K_b=K/b_k$) : \begin{eqnarray*} W_*[C][K] \rightarrow W_*[K_b][C_b][b_c][b_k],\ R_*[K][K] \rightarrow R_*[K_b][K_b][b_k][b_k] \end{eqnarray*} Such a blocked layout exposes better locality (i.e.\ the corresponding accesses of weight sub-blocks are \emph{non-strided} with such a layout) and more importantly avoids cumbersome conflict cache misses. Typically the $C$ and $K$ values are large powers of 2 resulting in strided accesses (in the case of the non-blocked format) which are known to cause conflict misses in contemporary associative cache designs. However, our blocked format bypasses this issue by laying out the weight tensors in a format allowing non-strided accesses in the GEMM microkernel. 
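As a small illustration of this reformatting (a sketch assuming row-major storage, $C$ and $K$ divisible by $b_c$ and $b_k$, and a hypothetical helper name), the copy below packs a $W[C][K]$ weight tensor into the blocked $W[K_b][C_b][b_c][b_k]$ layout:
\begin{lstlisting}[language=C]
/* Repack a row-major W[C][K] weight tensor into the blocked layout
 * W[K_b][C_b][b_c][b_k]; assumes C % b_c == 0 and K % b_k == 0. */
void block_weights(const float *W, float *Wb, int C, int K, int b_c, int b_k) {
  const int Cb = C / b_c, Kb = K / b_k;
  for (int kb = 0; kb < Kb; kb++)
    for (int cb = 0; cb < Cb; cb++)
      for (int c = 0; c < b_c; c++)
        for (int k = 0; k < b_k; k++)
          Wb[((kb * Cb + cb) * b_c + c) * b_k + k] =
              W[(cb * b_c + c) * K + (kb * b_k + k)];
}
\end{lstlisting}
Within a block, the $b_k$ innermost entries are contiguous, so the GEMM microkernel reads the weights with unit stride regardless of the values of $C$ and $K$.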
In regard to the activation tensors, we keep the original non-blocked three dimensional format $x[T][N][C]$, $h[T][N][K]$ and $s[T][N][K]$ since strided accesses are barely an issue for the ``B'' matrix in the GEMM microkernel (we also confirmed this by experimenting with a blocked format for the activation tensors). Note that even though our LSTM cell internally uses a blocked layout for the weight tensors, this does not need to be exposed at the application level; instead, we can transform the weight tensors into the desired blocked layout at the beginning of the algorithm and such a transformation overhead is amortized over the multiple time-steps in the LSTM cell. Finally, we would like to briefly discuss the importance of a \emph{single}, architecture-specific optimized kernel. All the functionalities in the LSTM cell (forward propagation/backward by data/weight update pass) utilize as building block just our batch-reduce GEMM kernel. The development/parallelization/optimization of the LSTM cell then merely degenerates to tuning/calibrating the surrounding loops around this microkernel -- a process which can be automated to some extent or even implemented in different programming frameworks/tensor compilers like TVM~\cite{chen2018tvm} or PlaidML~\cite{plaidml}.
\begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{cnn.pdf} \caption{Convolution Neural Network (CNN) tensors} \label{fig:cnn} \end{figure}
\subsection{Convolution Neural Networks (CNN)}
\label{sec:cnn}
Convolution Neural Networks (CNN) consist of layers with multiple neurons connected by weights, and have found applications in image recognition, semantic segmentation, autonomous driving and medical imaging. Similar to the LSTM cell, our CNN primitives implement all the kernels required for training via back-propagation. In this section, we describe only the forward propagation kernels which are used as-is for inference. The implementation of backward by data and gradient update kernels follows the same design principles as the forward propagation.
\subsubsection{Direct convolution loops and prior art}
The values assigned to a neuron are usually called activations. Both activations and weights are represented with multidimensional tensors as illustrated in Figure~\ref{fig:cnn}. The input activation tensors are convolved with the weight tensors to yield the output activation tensors. The activation tensors conceptually consist of 4 dimensions: the minibatch size $N$, the number of feature maps $C$ and the spatial dimensions $H$ and $W$. We denote the input tensor dimensions with $N$, $C$, $H$ and $W$ while the corresponding output tensor dimensions are $N$, $K$ (output feature maps), $P$ and $Q$ (output spatial dimensions). The weight tensor is also conceptually characterized by 4 dimensions: the feature map dimensions $C$, $K$ and the spatial dimensions $R$ and $S$.
\begin{algorithm}[t!]
\algrenewcommand\algorithmicindent{0.60em}% \begin{algorithmic}[1] \State $C_b = C/b_c,\ K_b = K/b_k,\ Q_b = Q/b_q$ \For{$n=0 \dots N-1$} \For{$k_b=0 \dots K_b-1$} \For{$c_b=0 \dots C_b-1$} \For{$oj=0 \dots P-1$} \For{$oib=0 \dots Q_b-1$} \State $oi = oib\cdot b_q,\ ii = str\cdot oi,\ ij = str\cdot oj $ \For{$r=0 \dots R-1$} \For{$s=0 \dots S-1$} \LineComment{Small GEMM loops} \For{$k^\prime=0 \dots b_k-1$} \For{$oi^\prime=0 \dots b_q-1$} \For{$c^\prime=0 \dots b_c-1$} \State $oi^{\prime\prime}=oi+oi^\prime$ \State $ij^\prime = ij + r,\ ii^\prime=ii+str\cdot oi^\prime+s$ \State $\scriptsize{O[n][k_b][oj][oi^{\prime\prime}][k^\prime]\mathrel{+}=}$ \State $\scriptsize{W[k_b][c_b][r][s][c^\prime][k^\prime]\cdot I[n][c_b][ij^\prime][ii^\prime][c^\prime]}$ \EndFor \EndFor \EndFor \EndFor \EndFor \EndFor \EndFor \EndFor \EndFor \EndFor \end{algorithmic} \caption{CNN forward propagation loops} \label{alg:cnn_basic} \end{algorithm} Algorithm~\ref{alg:cnn_basic} shows a basic implementation of the forward propagation loops where the feature map loops (lines 3 and 4) are blocked by factors $b_k$ and $b_c$ respectively and the $Q$ loop (output tensor pixel dimension) is blocked by a factor $b_q$. The input tensor pixels can be also accessed in a strided fashion via a stride $str$. Additionally, the tensors employ a blocked layout format which has been shown to exhibit better locality properties for direct convolutions~\cite{sc18}: \begin{eqnarray*} Input\ tensor: & &I[N][C_b][H][W][b_c]\\ Weight\ tensor:& &W[K_b][C_b][R][S][b_c][b_k]\\ Output\ tensor: & &O[N][K_b][P][Q][b_k] \end{eqnarray*} By adopting such a blocked layout and given the loop ordering of Algorithm~\ref{alg:cnn_basic}, the three innermost loops (lines 11-17) form a small GEMM of a $b_k\times b_c$ weight sub-tensor with a $b_c\times b_q$ input sub-tensor yielding a $b_k\times b_q$ output subtensor (note that the leading dimension of the input sub-tensor is $str\cdot b_c$). The authors of previous work~\cite{sc18} identified this property; however, they implemented a specialized convolution kernel because: \begin{itemize} \item they optimize load/store of the output $O$ in case of $R, S > 1$ and in case the input feature map loop (line 4) is reordered as the innermost loop in order to maximize output reuse. \item they apply additional pixel blocking when $Q = b_q$ and this value is smaller than the FMA latency of the architecture at hand. \end{itemize} In the following subsection, we describe how we address these issues with our new batch-reduce GEMM kernel. 
\subsubsection{Optimized convolutions via the batch-reduce GEMM kernel} \label{subsubsec:cnn_opt} \begin{algorithm}[t] \begin{algorithmic}[1] \State $C_b = C/b_c,\ K_b = K/b_k,\ Q_b = Q/b_q$ \For{$n=0 \dots N-1$} \For{$k_b=0 \dots K_b-1$} \For{$c_b=0 \dots C_b-1\ \textbf{with\ step}\ B_c $} \For{$oj=0 \dots P-1$} \For{$oib=0 \dots Q_b-1$} \State $oi = oib\cdot b_q,\ ii = str\cdot oi,\ ij = str\cdot oj,\ i = 0$ \LineComment{Prepare batch-reduce GEMM arguments} \For{$r=0 \dots R-1$} \For{$s=0 \dots S-1$} \For{$c=0 \dots B_c-1$} \State $A_{ptrs}[i] = \&W[k_b][c_b+c][r][s][0][0]$ \State $B_{ptrs}[i\scriptsize{++}] = \& I[n][c_b+c][ij+r][ii+s][0]$ \EndFor \EndFor \EndFor \State $Out = \&O[n][k_b][oj][oi][0]$ \State $\mathbf{batchreduce\_gemm}(A_{ptrs}, B_{ptrs}, Out, R\cdot S\cdot B_c)$ \EndFor \EndFor \EndFor \EndFor \EndFor \end{algorithmic} \caption{CNN forward pass via batch-reduce GEMM} \label{alg:cnn_br_gemm} \end{algorithm} The introduction of the batch-reduce GEMM kernel obviates the need for a specialized convolution kernel. More specifically, the batch-reduce GEMM kernel inherently optimizes load/store of the output $O$ in case of $R, S > 1$ and in case the input feature map loop is reordered as the innermost loop. By properly selecting the sub-tensors of weights/inputs to be multiplied and reduced onto an $O$ sub-tensor, the accumulation takes place entirely in registers as described in Section~\ref{sec:br_gemm}. In order to tackle the second issue regarding the case with $Q = b_q$ and $b_q$ being smaller than the FMA latency, we make the following observation: the small GEMM microkernel utilizes $b_q\times (b_k/VLEN)$ accumulator registers where $VLEN$ is the vector length of the architecture at hand. Therefore, if $b_q$ is small then we accordingly increase $b_k$ such that $b_q\times (b_k/VLEN)$ is larger than the FMA latency. Algorithm~\ref{alg:cnn_br_gemm} shows how to implement the convolution loops using our new batch-reduce GEMM kernel. Note that the input feature map loop (line 4) is blocked by a factor $B_c$ and these $B_c$ iterations are brought into the batch-reduce call in order to further increase the output register reuse. The loops at lines 9-11 prepare the arguments of the batch-reduce GEMM call by calculating the pointers to the weight and input sub-tensors that need to be multiplied and reduced onto a sub-tensor in $O$. In this way, we optimize the output sub-tensor $O$ register reuse: without the batch-reduce kernel we would have to load/store the output registers $(R\times S\times B_c)-1$ additional times. Another optimization involves the case of convolutions with $R=S=1$ and unit stride (i.e.\ $str=1$). In such a case, the input spatial dimensions (loops 5 and 6) are accessed sequentially and as such one can consider that the spatial dimensions are collapsing into a single dimension allowing even more aggressive blocking parameter values $b_q$. In regard to the parallelization of Algorithm~\ref{alg:cnn_br_gemm}, we observe that the mini-batch dimension (line 2), the output feature map blocks (line 3) and the output pixels blocks (lines 5 and 6) define $N\times K_b\times P\times Q_b$ independent tasks. Typically we opt to divide work first based on the mini-batch dimension since the weight tensors could be reused by multiple threads from shared caches. If we don't have sufficient work just based on the mini-batch size, then we consider all $N\times K_b\times P\times Q_b$ tasks and they are assigned to the available threads in a block fashion. 
In case the convolution at hand involves large weight tensors, it may be better to assign tasks by starting from the feature map dimension $K_b$. In this way, each thread will touch only a part of the large weight tensor, which could be further blocked for a specific cache-level. We implemented all these parallelization strategies and use the most suitable one based on the convolution layer specifications and the available number of threads. Our backward by data/weight update kernels with batch-reduce GEMM leverage previous work~\cite{sc18}. The authors in~\cite{sc18} show that only slight modifications to the forward kernel are required in order to implement the back-propagation kernels, as they can be mapped through linear index transformations into the forward convolution loop nest (``dual convolutions''). The data reuse optimizations/parallelization tasks then simply translate to tuning the surrounding loops as shown in Algorithm~\ref{alg:cnn_br_gemm}. In Section~\ref{subsec:poc_cnn_results}, we show results of a proof-of-concept design where we develop CNN primitives within a tensor compiler framework via our batch-reduce GEMM kernel. We also show how the same design principles are applicable for integrated GPUs, yielding high performance convolution kernels.
\subsection{Multilayer Perceptron (MLP)}
\label{sec:mlp}
\begin{figure}[t!] \centering \includegraphics[width=0.6\columnwidth]{mlp.pdf} \caption{A Multilayer Perceptron (MLP) topology} \label{fig:mlp} \end{figure}
Multilayer perceptrons (MLP) comprise a class of feed-forward artificial neural networks that are widely used for classification tasks, brain modeling, time series prediction, character recognition and data compression. An MLP consists of (at least three) \emph{fully connected} layers of neurons as illustrated in Figure~\ref{fig:mlp}: the topology starts with an input layer, followed by a number of hidden layers, and concludes with the output layer. Each neuron in the topology uses a non-linear activation function. For the rest of this section we consider the optimization of the \emph{fully connected} layers since they constitute the cornerstone of MLP. The fully-connected layers also lie at the heart of the modern Transformer~\cite{transformer} and BERT~\cite{bert} workloads. We dive into the details of the forward propagation algorithm of the MLP training process (also used for inference); we also implemented all the required kernels of the back-propagation training in an analogous fashion.
\subsubsection{Fully Connected layers and prior art}
\label{subsubsec:fc_coarse}
The dashed box in Figure~\ref{fig:mlp} illustrates two fully connected layers consisting of $C$ and $K$ neurons, respectively. A neuron $i$ from the first layer is connected to a neuron $j$ in the second layer with a weight $W_{ij}$. Mathematically, an input layer $x\in {\rm I\!R}^{C}$ is mapped to an output layer $y\in {\rm I\!R}^{K}$ via the relation $y=W\cdot x$, where $W\in {\rm I\!R}^{K\times C}$ is the weight tensor of the connections between the neurons. During the training process, multiple inputs ($N$ of them, where $N$ is the so-called mini-batch size) are grouped together yielding the equation $Y=W\cdot X$ with $W\in {\rm I\!R}^{K\times C}$, $X\in {\rm I\!R}^{C\times N}$ and $Y\in {\rm I\!R}^{K\times N}$. After the output tensor $Y$ is computed, a non-linear activation function $g()$ is applied on it. Observe that by increasing the mini-batch $N$, we fundamentally increase the weight tensor reuse.
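For reference, a naive, illustration-only C sketch of this computation, $Y=g(W\cdot X)$ with row-major storage and a caller-provided activation function, is shown below (the blocked formulation with the batch-reduce GEMM kernel is given in Algorithm~\ref{alg:mlp}):
\begin{lstlisting}[language=C]
/* Naive fully connected forward pass: Y = g(W * X) with
 * W (K x C), X (C x N), Y (K x N), all row-major. */
void fc_forward_ref(const float *W, const float *X, float *Y,
                    int K, int C, int N, float (*g)(float)) {
  for (int k = 0; k < K; k++)
    for (int n = 0; n < N; n++) {
      float acc = 0.0f;
      for (int c = 0; c < C; c++)
        acc += W[k * C + c] * X[c * N + n];
      Y[k * N + n] = g(acc);
    }
}
\end{lstlisting}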
Typical implementations of Fully Connected layers leverage a large GEMM call and they apply the activation functions onto the GEMM outputs. Even though such an approach is straightforward to implement, its performance can be underwhelming for three reasons: i) typical high-performance GEMM library calls internally perform packing of sub-matrices to ameliorate TLB misses and cache conflict misses~\cite{goto2008anatomy}, ii) the multi-threaded implementation of GEMM with shapes arising from small mini-batch values $N$ may not fully exploit the available data reuse, and iii) in case of large matrices that do not fit in cache, the activation function application is exposed as a bandwidth-bound kernel which decays the overall performance. In the next subsection, we describe how our implementation of Fully Connected layers via the batch-reduce GEMM kernel addresses all these issues. \begin{algorithm}[t] \begin{algorithmic}[1] \Statex \algbackskip \textbf{Inputs}: Weight $W[K_b][C_b][b_c][b_k]$, Input $X[N_b][C_b][b_n][b_c]$ \Statex \algbackskip \textbf{Outputs}: Output $Y[N_b][K_b][b_n][b_k]$ \State Based on $thread\_id$ calculate $K_b\_start$, $K_b\_end$, $N_b\_start$ and $N_b\_end$ to assign output work items \For{$ib_n=N_b\_start \dots N_b\_end$} \For{$ib_k=K_b\_start \dots K_b\_end$} \LineComment{Prepare batch-reduce GEMM arguments} \For{$ib_c=0 \dots C_b-1$} \State $A_{ptrs}[ib_c] = \&W[ib_k][ib_c][0][0]$ \State $B_{ptrs}[ib_c] = \&X[ib_n][ib_c][0][0]$ \EndFor \State $Out = \&Y[ib_n][ib_k][0][0]$ \State $\mathbf{batchreduce\_gemm}(A_{ptrs}, B_{ptrs}, Out, C_b)$ \State $Y[ib_n][ib_k][0][0] \leftarrow g(Y[ib_n][ib_k][0][0])$ \EndFor \EndFor \end{algorithmic} \caption{Forward pass of Fully Connected Layer} \label{alg:mlp} \end{algorithm} \subsubsection{Fully Connected layers via the batch-reduce GEMM kernel} \label{subsubsec:fc_opt} Algorithm~\ref{alg:mlp} shows the implementation of the forward propagation in the training process of fully connected layers. First, we highlight the blocked tensor layout; all the 2 dimensional tensors are transformed into 4 dimensional ones by blocking the mini-batch dimension $N$ with a factor $b_n$ and the tensor dimensions $C$ and $K$ with blocking factors $b_c$ and $b_k$ respectively. Such a blocked layout addresses issue (i) mentioned in the previous subsection by exposing better locality and avoiding large, strided sub-tensor accesses which are known to cause TLB misses and cache conflict misses in case the leading dimensions are large powers of 2. Our algorithm first assigns the output sub-tensor blocks to the available threads (line 1) and every thread then for each assigned output $Y$ block calculates the addresses of the $W$ and $X$ sub-tensor blocks that need to be multiplied and reduced onto the current output $Y$ block (lines 5-7). Note that our JIT-ed kernel allows small values of blocking values $b_n$ to be used, and as such we can extract parallelism from the mini-batch dimension even for small values of $N$. By following the loop ordering of Algorithm~\ref{alg:mlp}, a weight sub-tensor is reused by each thread $N_b\_end-N_b\_start-1$ times, potentially from some level of cache. Also, multiple threads are able to read weights from shared caches when the assigned $Y$ blocks correspond to the same subspace of the $K$ dimension. Finally, in case a weight sub-tensor does not fit in the targeted/desired level of cache, we can further block loops at lines 3 and 5. 
These cache blocking techniques, in combination with the flexible blocking factors $b_n$, $b_c$ and $b_k$ that yield high-performance micro-kernels, address the data reuse issue (ii) mentioned in the previous subsection. Finally, once the arguments of the batch-reduce GEMM have been calculated, we perform the batch-reduce GEMM call (line 9) and, while the output sub-tensor block $Y$ is still hot in cache, we apply the relevant activation function on it (line 10). In this way, we ensure that the application of the activation function takes place when the data are still hot in cache and it does not incur any additional data movement from memory, addressing issue (iii) from the previous subsection. Once again, the development of the Fully Connected primitive follows the same recipe as the LSTM and CNN primitives. Therefore, the loops surrounding the batch-reduce GEMM kernel can be automatically optimized with a tensor compiler/infrastructure.
\begin{figure*}[t!] \centering \includegraphics[width=2.0\columnwidth]{lstm_perf.pdf} \caption{Performance of LSTM cell: (Left) Forward propagation and (Right) backward by data and weight update pass.} \label{fig:lstm_cell} \end{figure*}
\section{Performance results}
\label{sec:results}
In subsection~\ref{subsec:dl_perf}, we evaluate the performance of our DL kernels. Then, in subsection~\ref{subsec:distr_training}, we present distributed memory results on two state-of-the-art workloads, namely Google's Neural Machine Translation (GNMT), which corresponds to LSTM training, and ResNet-50, which is representative of CNN training. Finally, in subsection~\ref{subsec:poc_cnn_results}, we show a couple of proof-of-concept results that highlight the generalizability of our approach. More specifically, we show CNN kernel results on integrated GPUs and conclude with CNN kernel results that are generated by TVM, both leveraging the batch-reduce GEMM kernel as their basic building block.
\subsection{Performance evaluation of our DL kernels}
\label{subsec:dl_perf}
Since we use a JIT-ing methodology for the batch-reduce GEMM kernel, we can virtually run on every platform supporting SSE, AVX, AVX2 and AVX-512 instructions. All the experiments presented in this subsection are conducted on a Skylake-SP (SKX) 8180 processor (28 cores, 2.3\,GHz AVX-512 Turbo frequency, 205\,W TDP) with 96\,GB of DDR4-2666 main memory. The STREAM triad bandwidth of a single socket is 105\,GB/s. For the experiments we used all 28 cores with turbo disabled (i.e.\ AVX-512 base frequency at 1.7\,GHz) in order to get stable measurements. With such a setup, the peak of the machine is $\sim$3,050 GFLOPS (single precision). All the experiments were performed 400 times and we report the average timing; due to careful configuration of our platform (i.e.\ tick-less Linux kernel, core pinning, turbo disabled) the run-to-run variation is within 3\%. For our work we used the Intel compilers (version 18.0.0).
For performance comparisons, we used the latest version of MKL-DNN (version 0.9).
\subsubsection{Performance evaluation of LSTM cell}
\begin{table}[!tb] \fontsize{8}{8}\selectfont \centering \begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{{\textbf{batch-reduce GEMM }}} & \multicolumn{1}{c|}{{\textbf{Elementwise }}} & \multicolumn{1}{c|}{{\textbf{Tensor }}} \\ {\textbf{LSTM pass}} & \textbf{\% of total} & \textbf{GFLOPS} & \textbf{operations} & \textbf{reformatting }\\ \hline \hline \textbf{fwd} & 93.3\% & 2550 & 5.3\% & 1.4\% \\ \hline \textbf{bwd \& upd} & 91.2\% & 2350 & 5.3\% & 3.5\% \\ \hline \end{tabular} \caption{\label{tab:lstm_breakdown}Breakdown of LSTM cell performance ($\mathbf{C}$=$\mathbf{K}$=1024).} \end{table}
Figure~\ref{fig:lstm_cell}~(Left) illustrates the performance of the forward propagation algorithm in the LSTM cell that is described in subsection~\ref{subsec:lstm}. In this experiment, we fix $N=168$ (mini-batch), $T=50$ (sequence length), and we vary the hidden state size $K$ which is equal to the input state size (i.e.\ $C=K$). The blue bars represent the performance in GFLOPS (see Left y-axis) of our kernels. We observe that even in the case of small $C$ and $K$, our LSTM cell runs at $\sim$60\% of peak (see Right y-axis), whereas for larger weight tensors the activation-tensor reuse is larger and consequently the kernels run at $\sim$70\% of peak. In Table~\ref{tab:lstm_breakdown} (row labeled ``fwd'') we provide more details regarding how the time is spent within the LSTM cell for the case with $C=K=1024$. During the forward pass, 93.3\% of the time is spent in the batch-reduce GEMM kernel which runs at 2550 GFLOPS, or equivalently at 84\% of peak. Then, 5.3\% of the execution time is spent on the elementwise operations described in subsection~\ref{subsubsec:lstm_opt}. The remaining 1.4\% is spent in reformatting the weight tensors to take advantage of the blocked format (see subsection~\ref{subsubsec:lstm_opt}). Figure~\ref{fig:lstm_cell}~(Right) exhibits the performance of the remaining two passes in the LSTM training process, namely the backward propagation and weight update passes (henceforth called ``bwd'' and ``upd'', respectively). The performance follows the same trend as the forward propagation, i.e.\ with larger weight tensors, the overall efficiency is closer to peak due to more re-use of the activation tensors. Notably, the overall efficiency is lower than that of the forward propagation; by inspecting the time breakdown in Table~\ref{tab:lstm_breakdown} (row labeled ``bwd \& upd'') we observe that a larger fraction of the overall time is spent in tensor reformatting. This is expected because the bwd and upd passes algorithmically require additional weight and activation tensor transposes~\cite{lecun1988theoretical}. Also, the batch-reduce GEMM runs at 2350 GFLOPS, or equivalently at 77\% of peak, which is slightly lower than the efficiency achieved in the forward pass. This is a result of the different tensor shapes in the ``upd'' pass, where the reduction dimension of the GEMM becomes the mini-batch dimension, which is typically smaller than $C$/$K$, the GEMM reduction dimensions in the forward pass. In Figure~\ref{fig:lstm_cell}, we also provide a performance comparison of our LSTM cell with the vendor-optimized LSTM cell within MKL-DNN (orange bars). For small to medium problem sizes, our LSTM cell is faster than the MKL-DNN cell in the range of $1.2$-$1.3\times$ for forward propagation and $1.1$-$1.7\times$ for the bwd/upd pass.
This is a result of the adopted ``data-flow'' approach described in subsection~\ref{subsec:lstm} that leverages our batch-reduce GEMM kernel: the elementwise operations are naturally fused within the GEMM operations which run at high efficiency. For larger problem sizes, the overall cost of the GEMM operation dominates the entire computation and as such the elementwise operations are negligible. This is expected since the GEMM computation cost scales cubically compared to the quadratic scaling of the elementwise operations. Therefore, for large problem sizes a coarse-grained approach like the one described in subsection~\ref{subsubsec:naive_lstm} yields good performance. It is worth mentioning that in subsection~\ref{subsec:distr_training}, where we present distributed memory GNMT training results, the involved LSTM corresponds to the case with $C$=$K$=1024 in Figure~\ref{fig:lstm_cell}, where our code is $1.26\times$ faster than MKL-DNN for all training passes (for $N$=168).
\subsubsection{Performance evaluation of CNN kernels}
\begin{figure*}[t!] \centering \includegraphics[width=2.0\columnwidth]{cnn_fwd_bwd_new.pdf} \caption{Performance of ResNet-50 convolutions: (Left) Forward propagation and (Right) backward by data pass.} \label{fig:cnn_fwdbwd} \end{figure*}
\begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{cnn_upd_perf.pdf} \caption{Performance of ResNet-50 weight update pass.} \label{fig:cnn_upd} \end{figure}
\begin{table} \fontsize{6}{5}\selectfont \begin{tabularx}{\columnwidth}{|X| l | l |X|X|X|X| X || X| l | l |X|X|X|X| X |} \hline ID & C & K & H & W & R & S & str & ID & C & K & H & W & R & S & str \\ \hline \hline 1 & 3 & 64 & \fontsize{6}{4}\selectfont 224 &\fontsize{6}{4}\selectfont 224 & 7 & 7 & 2 & 11 & 512 & 1024 & 28 & 28 & 1 & 1 & 2 \\ \hline 2 & 64 & 256 & 56 & 56 & 1 & 1 & 1 & 12 & 512 & 256 & 28 & 28 & 1 & 1 & 2 \\ \hline 3 & 64 & 64 & 56 & 56 & 1 & 1 & 1 & 13 & 256 & 256 & 14 & 14 & 3 & 3 & 1 \\ \hline 4 & 64 & 64 & 56 & 56 & 3 & 3 & 1 & 14 & 256 & 1024 & 14 & 14 & 1 & 1 & 1 \\ \hline 5 & 256 & 64 & 56 & 56 & 1 & 1 & 1 & 15 & 1024 & 256 & 14 & 14 & 1 & 1 & 1 \\ \hline 6 & 256 & 512 & 56 & 56 & 1 & 1 & 2 & 16 & 1024 & 2048 & 14 & 14 & 1 & 1 & 2 \\ \hline 7 & 256 & 128 & 56 & 56 & 1 & 1 & 2 & 17 & 1024 & 512 & 14 & 14 & 1 & 1 & 2 \\ \hline 8 & 128 & 128 & 28 & 28 & 3 & 3 & 1 & 18 & 512 & 512 & 7 & 7 & 3 & 3 & 1 \\ \hline 9 & 128 & 512 & 28 & 28 & 1 & 1 & 1 & 19 & 512 & 2048 & 7 & 7 & 1 & 1 & 1 \\ \hline 10 & 512 & 128 & 28 & 28 & 1 & 1 & 1 & 20 & 2048 & 512 & 7 & 7 & 1 & 1 & 1 \\ \hline \end{tabularx} \caption{ResNet-50 layer specifications} \label{tab:resnet_layers} \end{table}
We conducted experiments with the ResNet-50 topology which yields state-of-the-art results in image recognition tasks~\cite{he2016deep}. The convolution layers within the ResNet-50 topology cover a wide variety of parameters/configurations (e.g.\ filter dimensionality and sizes, input sizes, strided convolutions) and can be seen in Table~\ref{tab:resnet_layers}. In this table, we assign to each convolution layer an ID that is used as an identifier in the performance plots. Also, for the remainder of this paper we will use the term \emph{weighted efficiency} when presenting ResNet-50 results; each layer $i$ of Table~\ref{tab:resnet_layers} requires $F_i$ flops, takes $t_i$ seconds to be computed, and is represented $n_i$ times in the entire topology (which has 53 layers in total).
The weighted efficiency of the entire topology is given by: $(\sum_{i=0}^{52}n_i\cdot F_i)/(\sum_{i=0}^{52}n_i\cdot t_i)$. Figures~\ref{fig:cnn_fwdbwd} and~\ref{fig:cnn_upd} show the performance of the ResNet-50 convolutions with mini-batch size $N=28$. The blue bars in Figure~\ref{fig:cnn_fwdbwd} (Left) represent the achieved performance (in GFLOPS) of the forward (FWD) propagation algorithm described in subsection~\ref{subsubsec:cnn_opt} that leverages the batch-reduce GEMM kernel. The weighted efficiency of the FWD convolutions within the ResNet-50 topology is 83\% of peak. More specifically, the convolutions with large spatial filters (e.g.\ $R$=$S$=$3$ in convolutions with IDs 4, 8, 13, 18) run at $\sim$90\% of peak since they inherently have more input and output tensor reuse than the convolutions with $R$=$S$=$1$, which run at $\sim$80\% of peak. Notably, the layer with ID 2 runs at 65\% of peak since it has large output spatial and feature map dimensions and as such its performance is bound by the write bandwidth of our system. By comparing the performance of our kernels to MKL-DNN (orange bars) we observe similar trends. The MKL-DNN library exhibits a weighted efficiency of 81\% of peak for FWD convolutions and as such it is 2.5\% slower than our work. This result highlights the effectiveness of our approach with a single kernel as the basic building block: our convolutions consist of just $\sim$1500 lines of code (for all training passes) whereas the convolution portion of the MKL-DNN library is $\sim$36000 lines of code since it leverages ad hoc, specialized kernels (e.g.\ ad hoc optimization of the direct convolution loops, different approaches/code generation for various $R$ and $S$ values). Figures~\ref{fig:cnn_fwdbwd} (Right) and~\ref{fig:cnn_upd} exhibit the performance of the convolution kernels in the remaining training passes, namely backward by data (BWD) and weight update (UPD). Our kernels have weighted efficiencies of 80\% and 73.6\% for the BWD pass and the UPD pass, respectively. Similarly to the convolutions in the FWD pass, the layers with large spatial weight dimensions show better performance than the ones with $1\times 1$ spatial dimensions since the former have better input and output tensor reuse properties. Also, we note that the efficiency of the UPD kernels is $\sim$10\% lower than the efficiency of the FWD/BWD kernels. This is a consequence of the weight tensor reduction which is required in the weight update algorithm in order to minimize the input and output tensor data movement~\cite{sc18}. For comparison, the MKL-DNN BWD and UPD kernels exhibit weighted efficiencies of 78.9\% and 68.9\%, respectively, and are 1\% and 7\% slower than our kernels. In subsection~\ref{subsec:distr_training} we integrate our kernels in the GxM distributed framework and improve the best-in-class performance of ResNet-50 training on CPUs.
\subsubsection{Performance evaluation of Fully Connected Layers}
\begin{figure*}[t!] \centering \includegraphics[width=2.0\columnwidth]{mlp_perf.pdf} \caption{Performance of Fully Connected Layers. Bars correspond to Left y-axis / efficiency corresponds to Right y-axis.} \label{fig:mlp_perf} \end{figure*}
Figure~\ref{fig:mlp_perf} shows the performance of the Fully Connected layers which are the cornerstone of the MLP workload. In these experiments we fix the mini-batch size $N$=1344 and we vary the dimensions of the weight tensors.
For each configuration, we show results for the forward propagation (FWD), backward by data pass (BWD) and weight update pass (UPD). We observe that our approach (blue bars) with the fine-grained batch-reduce GEMM kernel achieves efficiencies in the range 57\%-73\% for the smaller configuration ($C$=$K$=256), 55\%-94\% for the medium weight sizes ($C$=$K$=512), and 67\%-82\% for the larger configuration ($C$=$K$=1024). In all configurations, we observe that the BWD kernels' performance deteriorates compared to the equivalent FWD kernels. This is the case because the BWD kernels require a weight transpose~\cite{lecun1988theoretical}. The overhead of this weight transpose is more pronounced in the cases with small $C$/$K$ values, while for the cases with large $C$/$K$ values the cost of the GEMM kernel dominates the overall runtime and as such the transpose cost is negligible (see the case with $C$=$K$=1024). In regard to the UPD kernels, we also observe that for smaller weight tensors the performance is lower than that of the corresponding FWD kernels. This is due to the limited parallelism that is available in such cases. More precisely, the FWD pass employs parallelism in the $N$/$K$ dimensions (see Algorithm~\ref{alg:mlp}), the BWD pass in the $N$/$C$ dimensions, while the UPD pass in the $C$/$K$ dimensions. Consequently, it is more challenging to extract sufficient parallelism within the configurations with small $C$/$K$ values. Moreover, Figure~\ref{fig:mlp_perf} shows the performance of the Fully Connected layers within the MKL-DNN library (orange bars). These kernels use the coarse-grained approach (i.e.\ a single large GEMM call) as described in subsection~\ref{subsubsec:fc_coarse}. Considering the average efficiencies of all MKL-DNN kernels (FWD, BWD and UPD), the smallest configuration achieves 55\% of peak, the medium configuration runs at 56\% of peak and the largest test case attains 70\% of peak. In contrast, our approach with the batch-reduce GEMM kernel achieves 64\%, 76\%, and 76\% of peak, respectively, and is $1.16\times$, $1.36\times$, and $1.09\times$ faster than the corresponding coarse-grained approach.
\subsection{Distributed memory training results}
\label{subsec:distr_training}
\begin{figure*}[t!] \centering \includegraphics[width=2.0\columnwidth]{from_zero_plot.pdf} \caption{Distributed memory training results: (a) 4-Layer GNMT model (LSTM kernels), (b) ResNet-50 model (CNN kernels)} \label{fig:strong_all} \end{figure*}
Our experimental platform is a 32-node cluster (Intel Omnipath interconnect), each node having two Skylake-SP (SKX) 8180 processors. For these runs, we enable the turbo mode on the processors (i.e.\ clock frequency up to 2.3\,GHz).
\subsubsection{Distributed memory GNMT training results}
We conducted our experiments with the 4-layer GNMT~\cite{wu2016google} model. The framework of our choice is Tensorflow (TF)~\cite{tensorflow2015}, where we replaced the Tensorflow LSTM cell implementation with our optimized LSTM cell that leverages the batch-reduce GEMM kernel. Then, we utilized Uber's Horovod library~\cite{horovod} to enable efficient multi-node runs. In order to accelerate the communication performance of Horovod, we replaced its default MPI communication backend with Intel's MLSL library~\cite{mlsl}, which optimizes communication primitives arising in deep learning.
Moreover, we extend the partitioning logic of the inputs by grouping sequences with similar length together in order to achieve load balance; such a technique yields up to 1.5$\times$ speedup compared to classic input partitioning. For all the experiments in this section, we use 1 MPI rank per CPU socket. As a baseline for comparisons, we used the default LSTM cell for CPUs within TF which we configured to use the MKL library to materialize efficiently the large GEMM calls. In order to assess the benefits of our new kernels, we incorporated our LSTM cell and we further modified the TF code to support fused time-step operations as they are described in Algorithm~\ref{alg:lstm}. We verified correctness of the code changes by achieving state of the art BLEU score of 22.7 after 3 epochs with the German to English WMT16 dataset~\cite{wmt16}. Figure~\ref{fig:strong_all}~(a) illustrates the strong scaling of the distributed memory training with three different batch sizes; the usage of such large batch sizes (up to $\sim$ 5K) is enabled by the LEGW~\cite{you2019large} approach. The y-axis represents the achieved training performance in Kilo Words per Second (KWPS) while the x-axis shows the number of nodes. Both axes are in logarithmic scale. For the smaller batch size ($N$=1344), our approach (solid blue line) scales from 1 to 4 nodes with 84\% strong scaling efficiency, and when we keep scaling from 4 to 16 nodes the parallel efficiency further drops down to 38\%. The main reason for this efficiency drop involves the small mini-batch per socket as we strong scale (we use pure data parallelism to scale out). As a result, we get reduced efficiency within the LSTM cell computation. For example, with $N$=1344 and at the concurrency of 1 node (2 MPI ranks), the mini-batch per socket is 672 whereas at 16 nodes (32 MPI ranks), the mini-batch per socket is 42. Nevertheless, we are able to increase the performance all the way up to 16 nodes even for such a small batch size and we achieve 35.8 KWPS. For comparison, the reference LSTM cell + TF approach (orange line) achieves 15.36 KWPS, thus the approach with our kernels is 2.33$\times$ faster. As we increase the global batch size, the strong scaling efficiency is better because the local computation does not suffer from very small mini-batch. For example, with batch size $N$=2688 our strong scaling efficiency at 16 nodes is 58\% achieving 52.5 KWPS, and is 2.77$\times$ faster than the reference LSTM cell that achieves 18.9 KWPS. With the largest batch size ($N$=5376) the strong scaling efficiency at 16 nodes is 75.2\% achieving 65.9 KWPS. For the same setup, the reference LSTM cell achieves 32.32 KWPS, thus our approach is 2.04$\times$ faster. For the last two experimental setups, we start from 2 and 4 nodes respectively since those batch sizes are too large for the available memory of a single node. To the best of our knowledge, these achieved GNMT training results are the best in class on CPU platforms. For completeness, we mention here the performance of contemporary Nvidia V100 GPU systems on the 4-layer GNMT model in Tensorflow (FP32 precision): The achieved performance is 12.7 and 83.3 KWPS on 1 and 8 GPUs respectively~\cite{v100gnmt}. The FP32 peak performance of V100 GPU is 15.7 TF/s which is $\sim$2$\times$ larger than a single CPU node, and also the available bandwidth is 900 GB/s which is 3.6$\times$ larger than the bandwidth of our node. The scaling from 1 to 8 GPUs shows similar scaling efficiency to our distributed training results. 
\subsubsection{Distributed memory ResNet-50 training results} For ResNet-50 training, we integrated our new CNN kernels into the GxM framework, which has been shown to scale efficiently on clusters of CPUs via the MLSL library~\cite{sc18,mlsl}. We verified correctness of our experiments by converging to the same state-of-the-art accuracies, e.g.\ 75.7\% Top-1 accuracy. For Nvidia V100 performance, we use numbers from Nvidia~\cite{v100_fp16_resnet} and previous work~\cite{xu2018deep}. Figure~\ref{fig:strong_all}~(b) summarizes the obtained performance. For the single-node CPU measurements, we use 28 cores per socket and the mini-batch is 56 (dual-socket nodes). Our approach (red flat line) achieves 149 images/sec and is 1.45$\times$ faster than the configuration with MKL-DNN and Tensorflow (orange flat line), which attains 103 images/sec. If we increase the mini-batch to 224, MKL-DNN improves its efficiency and is able to obtain 129 images/sec. For completeness, we mention the achieved performance of one V100 GPU, which is 371 images/sec in single precision (green flat line) and 870 images/sec in mixed FP16/FP32 precision (black flat line) -- the latter approach uses the available tensor cores~\cite{markidis2018nvidia}. In order to scale out with GxM, we dedicate 2 cores per node for communication via MLSL primitives and we use 54 cores for computations. In Figure~\ref{fig:strong_all}~(b), we illustrate with a solid blue line the scaling of GxM with our new CNN kernels (both axes are in logarithmic scale). We scale up to 32 nodes with 95.3\% parallel efficiency, achieving 4432 images/sec at the concurrency of 32 nodes (1,792 cores). To the best of our knowledge, these are the best reported distributed memory training results for CPUs (in terms of efficiency). Previous work, which also uses GxM~\cite{sc18}, obtained on the same platform 1696 images/sec at the concurrency of 16 nodes, while our CNN kernels within GxM achieve 2239 images/sec, improving the end-to-end training performance by 1.32$\times$. \subsection{CNN kernels on integrated GPU and TVM} \label{subsec:poc_cnn_results} \begin{figure*}[t!] \centering \includegraphics[width=2.0\columnwidth]{updated_cldnn.pdf} \caption{CNN forward propagation: (Left) integrated GPU Gen9, (Right) implementation with batch-reduce GEMM in TVM} \label{fig:poc_cnn} \end{figure*} In order to showcase the generalizability of our approach to diverse platforms, we developed the batch-reduce GEMM kernel in OpenCL. We implemented forward propagation CNN kernels (Algorithm~\ref{alg:cnn_br_gemm}) targeting Intel's integrated GPU Gen9 (Core i7 6770HQ)~\cite{junkins2015compute}, which has a peak performance of 1152 GFLOPS. Figure~\ref{fig:poc_cnn}~(Left) illustrates the performance of our kernels (blue bars) and the vendor-optimized library Intel clDNN~\cite{cldnn} (orange bars). For the clDNN experiments, we tried all the available tensor layouts and picked the one that yields the highest performance. For this experiment, the mini-batch size is $N$=32. We conclude that our kernels with batch-reduce GEMM achieve similar performance to clDNN. When considering the weighted efficiency, our kernels run at 728.3 GFLOPS and the clDNN kernels at 753.5 GFLOPS; thus, our approach is within 3\% of the vendor-optimized, ad hoc implementation. Once again, in this work, the specific algorithm/kernel development can be seen as loop tuning around batch-reduce GEMM, cf.\ section~\ref{sec:dl_algs}. 
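To make this notion of loop tuning concrete, the following C sketch shows the pattern for a blocked fully connected forward pass. The routine names, the blocking and the tensor layouts are illustrative assumptions, and \texttt{br\_gemm\_ref} is only a reference implementation of the building block's semantics (the actual kernel in this work is JIT-generated and heavily optimized); the sketch merely captures the accumulation $C \leftarrow C + \sum_i A_i B_i$ and the fact that the forward pass parallelizes over the $N$/$K$ block dimensions while the reduction over the $C$ blocks happens inside a single kernel call.
\begin{verbatim}
/* Illustrative sketch only: names, blocking and layouts are assumptions,
 * not the JIT-generated kernel used in this work.                       */
#include <stddef.h>

/* reference semantics of the batch-reduce GEMM building block:
 * C += sum_i A[i] * B[i], with A[i]: M x K, B[i]: K x N, C: M x N       */
static void br_gemm_ref(const float **A, const float **B, float *C,
                        size_t count, size_t M, size_t N, size_t K) {
  for (size_t i = 0; i < count; i++)
    for (size_t m = 0; m < M; m++)
      for (size_t n = 0; n < N; n++) {
        float acc = 0.0f;
        for (size_t k = 0; k < K; k++)
          acc += A[i][m * K + k] * B[i][k * N + n];
        C[m * N + n] += acc;
      }
}

/* blocked fully connected forward pass: out[Nb][Kb][bn][bk] (zeroed by
 * the caller), in[Nb][Cb][bn][bc], wt[Kb][Cb][bc][bk]                    */
void fc_fwd(const float *in, const float *wt, float *out,
            size_t Nb, size_t Cb, size_t Kb,
            size_t bn, size_t bc, size_t bk) {
  #pragma omp parallel for collapse(2)  /* FWD: parallelism in N/K blocks */
  for (size_t n = 0; n < Nb; n++)
    for (size_t k = 0; k < Kb; k++) {
      const float *A[256], *B[256];      /* assumes Cb <= 256 */
      for (size_t c = 0; c < Cb; c++) {  /* gather block pointers */
        A[c] = in + (n * Cb + c) * bn * bc;
        B[c] = wt + (k * Cb + c) * bc * bk;
      }
      br_gemm_ref(A, B, out + (n * Kb + k) * bn * bk, Cb, bn, bk, bc);
    }
}
\end{verbatim}
The BWD and UPD passes follow the same pattern, with the parallel loops moved to the $N$/$C$ and $C$/$K$ block dimensions, respectively, which is why the configurations with small $C$/$K$ values in Figure~\ref{fig:mlp_perf} expose less parallelism for UPD.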
Even though our DL kernels perform this tuning in a well-informed, manual fashion and are implemented in high-level C code, an alternative is to use a high-level tensor framework. Here we present results of such a proof-of-concept design, where we implement the forward convolutions within TVM~\cite{chen2018tvm}. More specifically, we provide to TVM the forward propagation loop recipes in high-level Python code, and at the innermost loop nest level we invoke our batch-reduce GEMM kernel. To assess the efficiency of our approach, we consider the \emph{inference} use case of CNNs (i.e.\ only the forward propagation pass). One idiosyncrasy of inference compared to training is the very small mini-batch $N$=1 that is required to meet the latency requirements of the application. Figure~\ref{fig:poc_cnn}~(Right) shows the performance of our TVM implementation (green bars) on ResNet-50 forward kernels (mini-batch $N$=1) on the SKX 8180 platform. In this plot, we also show the performance of the following implementations: i) CNN kernels within AutoTVM developed by Amazon~\cite{liu2018optimizing} (yellow bars), which are \emph{auto-tuned} for inference, ii) MKL-DNN (orange bars), and iii) our high-level C code kernels (blue bars). First, we observe that our DL kernels are the most efficient, achieving an overall weighted efficiency of 2492 GFLOPS. Our Python implementation within TVM that exploits the batch-reduce GEMM kernel runs at 2361 GFLOPS, and consequently is within 5.3\% of our C implementation. Our TVM implementation is 2\% faster than the Amazon-AutoTVM \emph{auto-tuned code} and 1.24$\times$ faster than the vendor-optimized MKL-DNN library. These results show that high-performance DL kernels within high-level tensor frameworks are feasible if the proper building block is used. \section{Related Work} The status quo in the development of high-performance DL workloads entails vendor-optimized DL primitives within some high-level deep learning framework (e.g.\ Tensorflow~\cite{tensorflow2015}, Pytorch~\cite{paszke2017pytorch}, Caffe~\cite{jia2014caffe}). MKL-DNN~\cite{mkldnn} is the Intel-optimized DL library that provides specialized primitives (e.g.\ convolutions, RNN/LSTM cell, fully connected layers) for Intel CPUs. Each one of these primitives is individually optimized at a low level on a per-platform basis in order to maximize performance, leading to numerous, highly specialized code bases that do not generalize to different architectures. In an analogous way, cuDNN~\cite{chetlur2014cudnn} is the vendor-optimized DL library targeting Nvidia GPUs. clDNN~\cite{cldnn} is an open-source performance library for DL applications intended for the acceleration of DL inference on Intel GPUs. All these library approaches suffer from the combinatorial explosion of the low-level optimizations that have to be done for each $\langle$architecture, DL primitive$\rangle$ pair. On the other hand, our proposed batch-reduce GEMM kernel covers all major DL primitives and it is the sole building block that has to be optimized at a low level. An alternative methodology to implement DL primitives is to leverage vendor-optimized linear algebra library calls. For example, convolutions can be lowered to a matrix multiplication~\cite{chellapilla2006high}, but this lowering requires tensor transformations, and the obtained performance deteriorates~\cite{sc18,chetlur2014cudnn}. 
Aiming to accelerate the small matrix operations that are prevalent in DL, academia and industry have recently developed batched linear algebra routines~\cite{dongarra2017design,stridedbatchgmemm,ng2017magmadnn}. The batched GEMM approaches in~\cite{stridedbatchgmemm,ng2017magmadnn} specifically target only Nvidia GPUs, where the reduction across output subtensors is relatively cheap. Even though such approaches improve the performance of the DL primitives, they still perform worse than ad hoc implementations (cf.\ the batched GEMM approach and MKL-DNN in Figure~\ref{fig:motivation}). Our work extends the batched GEMM routine, enables more optimizations (see Section~\ref{sec:br_gemm}), optimizes for locality, and is therefore well suited for latency architectures such as CPUs. Also, the derived DL primitives match/exceed the performance of vendor-optimized ad hoc implementations, as shown in Section~\ref{sec:results}. Tensor compilers comprise a promising research area for end-to-end DL workload optimization and performance portability (e.g.\ TVM~\cite{chen2018tvm}, GLOW~\cite{DBLP:journals/corr/abs-1805-00907}, PlaidML~\cite{plaidml}, MLIR~\cite{mlir}, Tensor Comprehensions~\cite{vasilache2018tensor}). Such frameworks treat tensors as first-class objects, and provide optimizations targeting tensor algebra programs (e.g.\ polyhedral optimizations for data movements). However, compilers struggle to optimize the GEMM-flavored loop nests for the nuances of the increasingly complex architectures. Our work can be seen as complementary to this effort, where the GEMM-flavored loops are abstracted into our batch-reduce GEMM call (which is independently optimized at a low level). Then, the specific DL primitive optimization is reduced to mere loop tuning around a single kernel, and this task can be handed off to a tensor compiler. In Section~\ref{sec:results}, we showcased a prototype for CNNs within TVM that uses our kernel. \section{Conclusions} In this work, we showed how the most popular DL algorithms (RNN/LSTM, CNN and MLP) can be formulated with batch-reduce GEMM as the basic building block. We demonstrated that our methodology outperforms vendor-optimized, low-level DL primitives by factors up to 1.4$\times$. Moreover, we integrated our DL kernels into distributed frameworks, and optimized end-to-end workflows for GNMT and ResNet-50 training. In multi-node experiments, we exceeded the performance of vendor-optimized implementations by up to 2.3$\times$. Additionally, we highlighted the architecture-agnostic aspect of our methodology by matching the CNN kernel performance of a vendor-provided library on integrated GPUs. Finally, we prototyped CNN kernels in a tensor compiler framework by harnessing our batch-reduce GEMM kernel, and matched the performance of \emph{auto-tuned} inference TVM primitives. As future work, we plan to extend our DL primitives to a wider set of architectures/workloads, and we also intend to experiment with tensor compilers' automatic polyhedral optimizations (e.g.~\cite{plaidml, poly}). \FloatBarrier \bibliographystyle{unsrt}
\section*{Introduction} Seidel and Thomas \cite{ST01} introduced the notion of a \textit{spherical object} for the construction of autoequivalences of the bounded derived category of a smooth complex projective variety. Their approach was inspired by homological mirror symmetry; the symplectic analogue of the constructed autoequivalence is a Dehn twist associated to a Lagrangian sphere. Spherical objects were soon generalized, leading to the notion of a \textit{spherical functor}, applicable to the more general construction of autoequivalences of triangulated categories. A functor $F:\mathcal{A}\rightarrow \mathcal{B}$ between (suitably enhanced) triangulated categories that admits a right adjoint $G$ is called spherical if \begin{itemize} \item the endofunctor $T_\mathcal{A}=\operatorname{cone}(id_\mathcal{A}\xrightarrow{u}GF)$ of $\mathcal{A}$ given by the pointwise cone of the unit transformation $u$ of the adjunction $F\dashv G$ is an equivalence and \item the endofunctor $T_\mathcal{B}=\operatorname{cone}(FG\xrightarrow{cu}id_\mathcal{B})[-1]$ of $\mathcal{B}$ given by the pointwise shifted cone of the counit transformation of the adjunction $F\dashv G$ is an equivalence. \end{itemize} The functors $T_\mathcal{A}$ and $T_\mathcal{B}$ are called twist and cotwist functor, respectively. Following \cite{DKSS19}, we adopt a different convention and call the adjunction $F\dashv G$ a \textit{spherical adjunction}, emphasizing that sphericalness is a property of an adjunction. Much of the current interest in spherical adjunctions arises because they appear as the local data of the conjectured concept of a perverse schober on surfaces, cf.~\cite{KS14}.\\ The cone in a triangulated category is infamous for not being functorial, so that one needs to choose an enhancement to define the twist and cotwist functors. Common choices are pretriangulated dg-categories and pretriangulated $\mathbb{A}_\infty$-categories. We choose stable $\infty$-categories as the enhancement. This choice provides us access to the powerful framework developed by Lurie in \cite{HTT,HA}. We will begin in \Cref{sec3} by extending basic aspects of the theory of spherical adjunctions to the setting of stable $\infty$-categories, most notably the so-called 2/4 property, appearing in the setting of dg-categories in \cite{AL17}. Our proof of the 2/4 property is based on the correspondence between spherical adjunctions and $4$-periodic semiorthogonal decompositions due to \cite{HLS16}, which was extended to the setting of stable $\infty$-categories in \cite{DKSS19}. In \Cref{sec4}, we study the following family of spherical adjunctions. \begin{example*} Let $f:X\rightarrow Y$ be a spherical fibration between Kan complexes, i.e.~the simplicial analogue of a Serre fibration between spaces whose fibers are homotopy equivalent to $n$-spheres. Let $\mathcal{D}$ be any stable $\infty$-category. We call the functor categories $\operatorname{Fun}(Y,\mathcal{D})$ and $\operatorname{Fun}(X,\mathcal{D})$ the stable $\infty$-categories of local systems with values in $\mathcal{D}$ on $Y$ and $X$, respectively. Consider the pullback functor \[f^*:\operatorname{Fun}(Y,\mathcal{D})\longrightarrow \operatorname{Fun}(X,\mathcal{D})\] along $f$ with right adjoint $f_*$. The adjunction $f^*\dashv f_*$ is spherical. \end{example*} For $\mathcal{D}=\mathcal{D}^b(k)$ the bounded derived category of a field $k$, the above examples also appear in \cite[1.11]{KS14} in the setting of pretriangulated dg-categories. 
These examples of spherical adjunctions can be seen as arising from a family of spherical objects. By allowing any stable $\infty$-category $\mathcal{D}$ as the target for the local systems, we show that such families of spherical objects also exist in the setting of spectrally enriched $\infty$-categories which cannot be treated as dg- or $A_\infty$-categories, such as the $\infty$-category of spectra. In the remaining \Cref{sec5}, we turn towards spherical monadic adjunctions. Let $\mathcal{D}$ be a stable $\infty$-category. A monad $M:\mathcal{D}\rightarrow \mathcal{D}$ on $\mathcal{D}$ is an algebra object in the $\infty$-category of endofunctors, i.e.~equipped with a multiplication map $m:M^2\rightarrow M$, a unit map $u:id_{\mathcal{D}}\rightarrow M$ and further data exhibiting associativity and unitality. An important source of monads is given by adjunctions. Every adjunction $F\dashv G$ determines a monad $GF$ called the adjunction monad, with unit given by the adjunction unit. Every monad $M$ is equivalent to the adjunction monad of its associated monadic adjunction \[ F:\mathcal{D}\leftrightarrow \operatorname{LMod}_M(\mathcal{D}):G\,.\] Here $\operatorname{LMod}_M(\mathcal{D})$ denotes the stable $\infty$-category of left modules in $\mathcal{D}$ over the monad $M$, see \Cref{sec1.4} below, and is also sometimes called the Eilenberg-Moore $\infty$-category of the monad. The stable Kleisli $\infty$-category $\overline{\operatorname{LMod}^{\operatorname{free}}_M(\mathcal{D})}\subset \operatorname{LMod}_M(\mathcal{D})$ is defined as the smallest stable, full subcategory containing all free $M$-modules. The monad $M$ on $\mathcal{D}$ is also equivalent to the adjunction monad of the stable Kleisli adjunction \[ F:\mathcal{D}\leftrightarrow \overline{\operatorname{LMod}^{\operatorname{free}}_M(\mathcal{D})}:G\,,\] which is defined as the restriction of the monadic adjunction. We show in \Cref{sec1.4} that this adjunction is the minimal adjunction of stable $\infty$-categories with adjunction monad $M$. If the monadic adjunction is spherical, then the stable Kleisli adjunction, being a restriction, is also spherical. The main result of this paper is the following characterization of the sphericalness of a monadic adjunction in terms of the properties of the adjunction monad. \begin{introthm}[\Cref{sphmndthm}] \label{thm1} Let $\mathcal{D}$ be a stable $\infty$-category and let $M:\mathcal{D}\rightarrow \mathcal{D}$ be a monad with unit $u:\operatorname{id}_{\mathcal{D}}\rightarrow M$. Consider the endofunctor $T_{\mathcal{D}}=\operatorname{cone}(\operatorname{id}_{\mathcal{D}}\xrightarrow{u}M)\in \operatorname{Fun}(\mathcal{D},\mathcal{D})$. The following conditions are equivalent. \begin{enumerate} \item The endofunctor $T_{\mathcal{D}}$ is an equivalence and the unit $u$ satisfies $T_\mathcal{D}u\simeq uT_\mathcal{D}$. \item The monadic adjunction $F:\mathcal{D}\leftrightarrow \operatorname{LMod}_M(\mathcal{D}):G$ is spherical. \end{enumerate} \end{introthm} The main ingredient in the proof of \Cref{thm1} is Lurie's far-reaching $\infty$-categorical Barr-Beck theorem. \Cref{thm1} extends and completes a discussion in \cite[Section 3.2]{Seg18}. Using \Cref{thm1}, we can also extend the main result of \cite{Seg18} to the setting of stable $\infty$-categories. \begin{introcor}[\Cref{cor:sph}]\label{cor1} Every autoequivalence of a stable $\infty$-category arises as the twist functor of a spherical adjunction. 
\end{introcor} For the proof of \Cref{cor1}, we consider the square-zero extension monad, whose monadic adjunction is spherical and whose twist functor is equivalent to the autoequivalence. In \cite{Seg18}, Segal recovers the autoequivalence via the Kleisli adjunction of the square-zero extension monad, which in the setting of dg-categories is Morita equivalent to the stable Kleisli adjunction of the monad. We can thus describe any autoequivalence as the twist functor of both a monadic adjunction and a stable Kleisli adjunction. \Cref{thm1} has further implications for all spherical adjunctions. We describe in \Cref{sphmndprop} a characterization of the sphericalness of an adjunction (not necessarily monadic) similar to \Cref{thm1}. We conclude this work by providing a counterexample to an expectation raised by Segal, see ``Proposition'' 3.10 in \cite{Seg18}. Given a spherical adjunction $F:\mathcal{D}\leftrightarrow \mathcal{C}:G$ such that $F$ is essentially surjective, the expectation states that the adjunction can be recovered from the twist functor $T_{\mathcal{D}}$ and its section $s:T_{\mathcal{D}}[-1]\rightarrow \operatorname{id}_{\mathcal{D}}$, see \Cref{sec4.1}. Such an adjunction is by \Cref{stbKleisli} equivalent to the stable Kleisli adjunction of the adjunction monad $M=GF$ and thus determined by the adjunction monad. We provide the following counterexample to the expectation that the monad can be recovered from the section of the twist functor. Given a field $k$, there are two algebra structures on $k\oplus k$, the square-zero algebra structure and the product algebra structure. These determine different monads with underlying endofunctor $(k\oplus k)\otimes_k \mhyphen:\mathcal{D}(k)\rightarrow \mathcal{D}(k)$. The two resulting stable Kleisli adjunctions are spherical and have equivalent twist functors and sections. \subsection*{Acknowledgements} I deeply thank my supervisor Tobias Dyckerhoff for his availability and guidance. I further wish to thank Ed Segal for valuable comments on a draft of this paper and in particular for suggesting a simplification of \Cref{ex1,ex2}. Finally, I also wish to thank two anonymous referees for helping improve the readability of the paper. This paper is based on the author's Master's thesis. The author acknowledges support by the Deutsche Forschungsgemeinschaft under Germany’s Excellence Strategy – EXC 2121 “Quantum Universe” – 390833306. \section{Preliminaries}\label{sec2} In this paper, we assume familiarity with the basic notions of $\infty$-category theory as developed by Joyal and Lurie. In \Cref{in1}, we provide an informal account of the role of stable $\infty$-categories in the remainder of the text. In particular, we discuss how stable $\infty$-categories compare to other notions of enhancements of triangulated categories. In \Cref{sec1.2,sec2.2,sec1.4}, we introduce some concepts and results from the theory of $\infty$-categories. In \Cref{sec1.5}, we recall the relationship between semiorthogonal decompositions and adjunctions of stable $\infty$-categories, as appearing in \cite{DKSS19}. For an extensive treatment of the theory of $\infty$-categories and stable $\infty$-categories, we refer to \cite{HTT} and \cite{HA}, respectively. The contents of this section are not original, except for the discussion of the stable Kleisli $\infty$-category in \Cref{sec1.4}. 
\subsection{Stable \texorpdfstring{$\infty$}{infinity}-categories}\label{in1} For any $\infty$-category, there is an associated $1$-category called the homotopy category. A stable $\infty$-category is an $\infty$-category with additional properties which ensure that the homotopy category can be given the structure of a triangulated category. Practically all triangulated categories of interest appear as the homotopy category of a stable $\infty$-category. We can thus use stable $\infty$-categories as an enhancement of triangulated categories. Stable $\infty$-categories possess a number of convenient features, which distinguish them from other choices of enhancements. \begin{itemize} \item There is an intrinsic notion of limits and colimits in any $\infty$-category. We can thus characterize many of the mathematical objects that appear via universal properties. \item By using universal properties, many definitions can be stated in a simpler form and many theorems become more general. \item There is an intrinsic notion of Kan extensions. They allow for much flexibility in constructing functors and $\infty$-categories. \item Any pretriangulated dg- or $\mathbb{A}_\infty$-category can be regarded as a stable $\infty$-category via the nerve construction. There are further examples of stable $\infty$-categories, such as the stable $\infty$-category of spectra. \end{itemize} We present a small dictionary for translating between triangulated categories and stable $\infty$-categories in \Cref{table1}. \begin{table}[h!] \setlength{\extrarowheight}{9pt} \setlength{\tabcolsep}{12pt} \centering \begin{tabular}{cc} triangulated categories & stable $\infty$-categories\\ \midrule object & $0$-simplex or vertex or object \\ morphism & $1$-simplex or edge or morphism \\ shift functor & suspension functor \\ inverse shift functor & loop functor \\ distinguished triangle & fiber and cofiber sequence\\ mapping cone & cofiber \\ mapping cone shifted by $[-1]$ & fiber \\ adjunction & biCartesian fibration \end{tabular} \caption{A small dictionary for translating between triangulated categories and stable $\infty$-categories.} \label{table1} \end{table} The concept of stable $\infty$-categories first appeared in \cite{Lur06}, building on the idea of a stable model category, originating in \cite{Hov99}. Applying the theory of stable $\infty$-categories has become feasible after the foundational works \cite{HTT} and \cite{HA}. Our hope is that this paper exemplifies that the theory of stable $\infty$-categories may provide new and efficient tools for studying spherical adjunctions, both for theoretical considerations and practical computations. \subsection{Adjunctions and the Grothendieck construction}\label{sec1.2} We begin by recalling the notion of an adjunction of $\infty$-categories. An adjunction of 1-categories can be defined as a pair of functors with unit and counit transformations satisfying the triangle identities. For adjunctions of $\infty$-categories, one needs to keep track of further data. There are specific 2-simplices exhibiting the triangle identities and further simplices exhibiting compatibility properties of these $2$-simplices, and so on. To avoid making any choice of such data, one adopts a different approach. One encodes the data of an adjunction in a functor $\mathcal{M}\rightarrow \Delta^1$ that is at the same time a Cartesian and a coCartesian fibration, i.e.~a functor with certain lifting properties. 
A Cartesian and coCartesian fibration is called a biCartesian fibration. \begin{definition} An adjunction between $\infty$-categories $\mathcal{A}$ and $\mathcal{B}$ is a biCartesian fibration $p:\mathcal{M}\rightarrow \Delta^1$ with fibers $\mathcal{A}$ and $\mathcal{B}$ over $0$ and $1$. \end{definition} Given a biCartesian fibration $p:\mathcal{M}\rightarrow \Delta^1$ with fibers $\mathcal{A}$ and $\mathcal{B}$, we can associate a functor $G:\mathcal{B}\rightarrow \mathcal{A}$ to the Cartesian fibration $p$ and a second functor $F:\mathcal{A}\rightarrow \mathcal{B}$ to the coCartesian fibration $p$, see \cite[Section 5.2.1]{HTT}. We call any pair of functors $F,G$ arising in this way adjoint and write $F\dashv G$. The functor $F$ is called the left adjoint and the functor $G$ the right adjoint. The unit, counit and further coherence data can also be recovered using the lifting properties of $p$. Let $\mathcal{M}\rightarrow \Delta^1$ be a biCartesian fibration as above. We are interested in distinguished edges $e:a\rightarrow b$ in $\mathcal{M}$ with $a\in \mathcal{A}$ and $b\in \mathcal{B}$, called coCartesian and Cartesian edges, defined via certain lifting properties in \cite[Section 2.4.1]{HTT}. The edge $e$ is coCartesian if, informally, it describes the application of the functor $F$ to $a$ and is thus in particular equivalent to an edge of the form $a\rightarrow F(a)$. The edge $e$ is Cartesian if, informally, it describes the application of the functor $G$ to $b$ and is thus in particular equivalent to an edge of the form $G(b)\rightarrow b$. \begin{notation} Let $e$ be an edge as above. We write $e:a\xrightarrow{\ast}b$ if $e$ is a Cartesian edge and $e:a\xrightarrow{!}b$ if $e$ is a coCartesian edge. \end{notation} \begin{definition} Let $p:\mathcal{M}\rightarrow \Delta^1$ be an adjunction between $\mathcal{A}$ and $\mathcal{B}$. An edge $e:a\rightarrow a'$ in $\mathcal{A}$ is called a unit map if there exists a diagram in $\mathcal{M}$ (i.e.~an object of $\operatorname{Fun}(\Delta^2,\mathcal{M})$) of the form \[ \begin{tikzcd} & b \\ a \arrow[r, "e"] \arrow[ru, "!"] & a' \arrow[u, "\ast"'] \end{tikzcd} \] with $b\in \mathcal{B}$. An edge $e':b\rightarrow b'$ in $\mathcal{B}$ is called a counit map if there exists a diagram in $\mathcal{M}$ of the form \[ \begin{tikzcd} & a \arrow[ld, "!"'] \arrow[d, "\ast"] \\ b \arrow[r, "e'"] & b' \end{tikzcd} \] with $a\in \mathcal{A}$. \end{definition} By the properties of the involved coCartesian and Cartesian edges, unit and counit maps are determined up to contractible choice by their domain and target, respectively.\\ As we now explain, given a functor we can encode it in a Cartesian or coCartesian fibration by applying the Grothendieck construction. Consider more generally a small $1$-category $C$ and a functor $f:C\rightarrow \operatorname{Set}_\Delta$ taking values in $\infty$-categories. The relative nerve construction, cf.~\cite[3.2.5.2]{HTT}, associates to $f$ a coCartesian fibration $\Gamma(f)\longrightarrow N(C)$ over the nerve $N(C)$ of $C$ whose fibers are equivalent to the values of the functor $f$. We will call the relative nerve construction the covariant Grothendieck construction. We call the dual version of the relative nerve construction the contravariant Grothendieck construction; it is given by the Cartesian fibration $\chi(f)=\Gamma\left((-)^{op}\circ f\right)^{op}\rightarrow N(C)^{op}$, where $(-)^{op}$ denotes the autoequivalence of $\operatorname{Set}_\Delta$ that assigns to a simplicial set its opposite simplicial set. 
Consider the case where $C=[1]$ and $N(C)=\Delta^1$. Given $f:[1]\rightarrow \operatorname{Set}_\Delta$ taking values in $\infty$-categories, we can identify it with a functor of $\infty$-categories denoted $\hat{f}:f(0)\longrightarrow f(1).$ Then $\Gamma(\hat{f})\coloneqq\Gamma(f)$ is spanned by the two full subcategories $f(0)=\Gamma(\hat{f})\times_{\Delta^1}\{0\}$ and $f(1)=\Gamma(\hat{f})\times_{\Delta^1}\{1\}.$ By definition of $\Gamma(\hat{f})$, there are no edges from $f(1)$ to $f(0)$ in $\Gamma(\hat{f})$. An edge $x\rightarrow y$ where $x\in f(0)$ and $y\in f(1)$ corresponds to the data of two vertices $x\in f(0),y\in f(1)$ and an edge $\hat{f}(x)\rightarrow y$ in $f(1)$. We note that the functor associated to the coCartesian fibration $\Gamma(f)\rightarrow \Delta^1$ is equivalent to $\hat{f}$. The functor $\hat{f}$ admits a right adjoint if and only if the coCartesian fibration $\Gamma(\hat{f})\rightarrow \Delta^1$ is also Cartesian, see \cite[5.2.1.3]{HTT}. The Grothendieck construction thus allows us to recover an adjunction from either of its two adjoints. We end this section by recording a technical fact, which is best referred back to when needed. Consider a map of simplicial sets $f:X\rightarrow Y$ and an $\infty$-category $\mathcal{D}$ such that all functors in $\operatorname{Fun}(X,\mathcal{D})$ admit colimits. Then there is an adjunction $f_!:\operatorname{Fun}(X,\mathcal{D})\leftrightarrow \operatorname{Fun}(Y,\mathcal{D}):f^*$ between the left Kan extension functor and the pullback functor by \cite[4.3.3.7]{HTT}. A coCartesian edge in the biCartesian fibration $\chi(f^*)\rightarrow \Delta^1$ is of the form $F\rightarrow f_!(F)$ with $F\in \operatorname{Fun}(X,\mathcal{D})$. Such an edge corresponds to the data of a left extension of $F$ by $f_!(F)$, as defined in \cite[4.3.3.1]{HTT}. That extension is a left Kan extension. \begin{lemma} \label{Kanextlem} Let $f:X\rightarrow Y$ be a map of simplicial sets and let $\mathcal{D}$ be an $\infty$-category such that all functors in $\operatorname{Fun}(X,\mathcal{D})$ admit colimits. Consider the adjunction $f_!:\operatorname{Fun}(X,\mathcal{D})\leftrightarrow \operatorname{Fun}(Y,\mathcal{D}):f^*$ between the left Kan extension functor and the pullback functor. An edge $F\rightarrow G$ in $\chi(f^*)$ with $F\in \operatorname{Fun}(X,\mathcal{D})$ and $G\in \operatorname{Fun}(Y,\mathcal{D})$ is coCartesian if and only if the induced edge $F\rightarrow f^*(G)$ is a left Kan extension. \end{lemma} 
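For orientation, we spell out the simplest instance of \Cref{Kanextlem}. \begin{example} Let $Y=\Delta^0$ and let $f:X\rightarrow \Delta^0$ be the unique map, so that $\operatorname{Fun}(\Delta^0,\mathcal{D})\simeq \mathcal{D}$, the pullback functor $f^*$ sends an object $d\in \mathcal{D}$ to the constant functor with value $d$, and the left Kan extension functor computes the colimit, $f_!(F)\simeq \operatorname*{colim}_X F$. In this case, \Cref{Kanextlem} states that an edge $F\rightarrow d$ in $\chi(f^*)$ with $F\in \operatorname{Fun}(X,\mathcal{D})$ and $d\in \mathcal{D}$ is coCartesian if and only if the induced cone $F\rightarrow f^*(d)$ exhibits $d$ as a colimit of $F$. \end{example} 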
A trivial fibration has the property that there exists an essentially unique section, i.e.~the space of sections \[\operatorname{Fun}_{\mathcal{D}'}(\mathcal{D}',\mathcal{D})\coloneqq \operatorname{Fun}(\mathcal{D}',\mathcal{D}) \times_{\operatorname{Fun}(\mathcal{D}',\mathcal{D}')} \{id_{\mathcal{D}'}\}\] is contractible. We can choose one such section $F: \mathcal{D}'\rightarrow \mathcal{D}$ and have thus constructed an interesting functor. \begin{remark}\label{rem:twistdef} We illustrate the above procedure with the construction of the twist and cotwist functors associated to an adjunction of stable $\infty$-categories in \Cref{twistconstr}, as appearing in \cite{DKSS19}. Before doing so, let us comment on an equivalent way to describe the twist and cotwist functors. Consider an adjunction $\mathcal{M}\rightarrow \Delta^1$ of stable $\infty$-categories, associated to a pair of functors $F:\mathcal{A}\leftrightarrow \mathcal{B}:G$, and choose a unit $u$ and a counit $cu$. Then the twist functor $T_\mathcal{A}$ defined in \Cref{twistconstr} is equivalent to the functor given by the cofiber of $u$ in the stable $\infty$-category $\operatorname{Fun}(\mathcal{A},\mathcal{A})$. Dually, the cotwist functor $T_\mathcal{B}$ is equivalent to the fiber of $cu$ in the stable $\infty$-category $\operatorname{Fun}(\mathcal{B},\mathcal{B})$. We will use the definition provided in \Cref{twistconstr} because it is more readily applicable in proofs. \end{remark} \begin{construction}\label{twistconstr} Let $p:\mathcal{M}\rightarrow \Delta^1$ be an adjunction between stable $\infty$-categories $\mathcal{A}$ and $\mathcal{B}$. We split the construction of the twist functor into six steps, denoted {\bf a)} to {\bf f)} below. \noindent {\bf a)} Consider the full subcategory $\mathcal{D}_1$ of $\operatorname{Fun}_{\Delta^1}(\Delta^{1},\mathcal{M})$ spanned by functors that are a left Kan extension relative to $p$ of their restriction to $\operatorname{Fun}_{\Delta^1}(\Delta^{\{0\}},\mathcal{M})$. The vertices of $\mathcal{D}_1$ can be depicted as \[ a\xlongrightarrow{!}b\] with $a\in \mathcal{A}$ and $b\in \mathcal{B}$. By \cite[4.3.2.15]{HTT}, the restriction functor to $a$ is a trivial fibration from $\mathcal{D}_1$ to $\mathcal{A}$. \noindent {\bf b)} We consider $\Lambda^2_2$ as lying over $\Delta^1$, by mapping $0,1$ to $0$ and $2$ to $1$. Consider the inclusion $\Delta^1\simeq \Delta^{\{1,2\}}\subset \Lambda^2_2$ and the resulting restriction functor $\operatorname{Fun}_{\Delta^1}(\Lambda^2_2,\mathcal{M})\rightarrow \operatorname{Fun}_{\Delta^1}(\Delta^1,\mathcal{M})$. We define $\mathcal{D}_2$ to be the full subcategory of $\operatorname{Fun}_{\Delta^1}(\Lambda^2_2,\mathcal{M})$ spanned by diagrams whose restriction to $\operatorname{Fun}_{\Delta^1}(\Delta^1,\mathcal{M})$ lies in $\mathcal{D}_1$ and that are a right Kan extension relative to $p$ of their restriction to $\operatorname{Fun}_{\Delta^1}(\Delta^1,\mathcal{M})$. The vertices of $\mathcal{D}_2$ are of the form \[ \begin{tikzcd} a\arrow[dr, "!"]&\\ a'\arrow[r, "\ast"]& b \end{tikzcd} \] where $a,a'\in \mathcal{A}$ and $b\in \mathcal{B}$. The restriction functor defines a trivial fibration from $\mathcal{D}_2$ to $\mathcal{D}_1$. \noindent {\bf c)} We consider $\Delta^2$ as lying over $\Delta^1$, by mapping $0,1$ to $0$ and $2$ to $1$. Let $E$ denote the set of all degenerate edges of $\Delta^2$ together with the edge $\Delta^{\{1,2\}}$. 
The inclusion $(\Lambda^2_2,E\cap (\Lambda^2_2)_1)\subset (\Delta^2,E)$ is by \cite[3.1.1.1]{HTT} a marked anodyne morphism of marked simplicial sets, so that the restriction functor $\operatorname{Fun}_{\Delta^1}((\Delta^2,E),\mathcal{M}^\natural)\rightarrow\operatorname{Fun}_{\Delta^1}((\Lambda^2_2,E\cap (\Lambda^2_2)_1),\mathcal{M}^\natural)$ is a trivial fibration by \cite[3.1.3.4]{HTT}, see also \cite[3.1.1.8]{HTT} for the notation $M^\natural$. Consider the pullback of simplicial sets $\mathcal{D}_3=\operatorname{Fun}_{\Delta^1}((\Delta^2,E),\mathcal{M}^\natural)\times_{\operatorname{Fun}_{\Delta^1}((\Lambda^2_2,E\cap (\Lambda^2_2)_1),\mathcal{M}^\natural)}\mathcal{D}_2$. The $\infty$-category $\mathcal{D}_3$ is equivalent to the full subcategory of $\operatorname{Fun}_{\Delta^1}(\Delta^2,\mathcal{M})$ spanned by vertices of the following form. \[ \begin{tikzcd} a\arrow[dr, "!"]\arrow[d]&\\ a'\arrow[r, "\ast"]& b \end{tikzcd} \] The functor from $\mathcal{D}_3$ to $\mathcal{D}_2$ contained in the defining pullback diagram of $\mathcal{D}_3$ is again a trivial fibration. \noindent {\bf d)} We consider the simplicial set $\Delta^{\{0,1'\}}$ as lying over $\Delta^1$ via the constant map with value $0$. Let $\mathcal{D}_4$ be the full subcategory of $\operatorname{Fun}_{\Delta^1}(\Delta^2\coprod_{\Delta^{\{0\}}} \Delta^{\{0,1'\}},\mathcal{M})$ spanned by functors that are a $p$-relative right Kan extension of their restriction to $\Delta^2$ and whose restriction to $\operatorname{Fun}_{\Delta^1}(\Delta^2,\mathcal{M})$ is contained in $\mathcal{D}_3$. The vertices of $\mathcal{D}_4$ are diagrams of the following form. \[ \begin{tikzcd} 0 & a \arrow[rd, "!"] \arrow[d] \arrow[l] & \\ & a' \arrow[r, "\ast"] & b \end{tikzcd} \] We find the restriction functor to be a trivial fibration from $\mathcal{D}_4$ to $\mathcal{D}_3$. \noindent {\bf e)} We consider the simplicial set $\Delta^1\times\Delta^1$ lying over $\Delta^1$ with the constant map with value $0$. Consider the full subcategory $\mathcal{D}_5$ of $\operatorname{Fun}_{\Delta^1}(\Delta^2 \coprod_{\Delta^{\{0,1\}}} \Delta^1\times\Delta^1,\mathcal{M})$ spanned by functors that are $p$-relative left Kan extensions of their restriction to $\Delta^2\coprod_{\Delta^{\{0\}}} \Delta^{\{0,1'\}}$ and whose restriction to $\operatorname{Fun}_{\Delta^1}(\Delta^2\coprod_{\Delta^{\{0\}}} \Delta^{\{0,1'\}},\mathcal{M})$ is contained in $\mathcal{D}_4$. The vertices of $\mathcal{D}_5$ can be depicted as follows, \begin{equation}\label{squeq} \begin{tikzcd} 0 \arrow[d] & a \arrow[rd, "!"] \arrow[d] \arrow[l] \arrow[ld, "\square", phantom] & \\ a'' & a' \arrow[r, "\ast"] \arrow[l] & b \end{tikzcd} \end{equation} with $a,a',a''\in \mathcal{A}$ and $b\in \mathcal{B}$. The box $\square$ in the center of the commutative square in diagram \eqref{squeq} denotes that the square is biCartesian, i.e.~both pullback and pushout. The square thus describes a fiber and cofiber sequence. We find the restriction functor to be a trivial fibration from $\mathcal{D}_5$ to $\mathcal{D}_4$. \noindent {\bf f)} Composing the above constructed trivial fibrations, we obtain the trivial fibration $R:\mathcal{D}_5\rightarrow \mathcal{A}$ given by the restriction functor to the vertex $a$. We define up to contractible choice the twist functor \[ T_\mathcal{A}:\mathcal{A}\longrightarrow \mathcal{A} \] as the composition of a section of $R$ with the restriction functor to $a''$ from $\mathcal{D}_5$ to $\mathcal{A}$.\\ The construction of the cotwist functor is dual. 
We consider the full subcategory $\mathcal{D}'$ of $\operatorname{Fun}_{\Delta^1}(\Delta^2\coprod_{\Delta^{\{1,2\}}}\Delta^1\times\Delta^1,\mathcal{M})$ spanned by functors of the following form, \[ \begin{tikzcd} b'' \arrow[r] \arrow[d] \arrow[rd, "\square", phantom] & b' \arrow[d] & \\ 0 \arrow[r] & b & a \arrow[l, "\ast"'] \arrow[lu, "!"'] \end{tikzcd} \] where $a\in \mathcal{A}$ and $b,b',b''\in \mathcal{B}$. Similar to before, the restriction functor $R':\mathcal{D}'\rightarrow \mathcal{B}$ to the vertex $b$ is a trivial fibration. We define up to contractible choice the cotwist functor \[ T_\mathcal{B}:\mathcal{B}\longrightarrow \mathcal{B} \] as the composition of a section of $R'$ with the restriction functor to $b''$ from $\mathcal{D}'$ to $\mathcal{B}$. \end{construction} \begin{remark} We will often describe the diagram $\infty$-categories that appear by specifying their vertices up to equivalence, leaving their construction via Kan extensions implicit. For example, consider the setup of \Cref{twistconstr} and denote the adjoint functors associated to the biCartesian fibration $p:\mathcal{M}\rightarrow \Delta^1$ by $F:\mathcal{A}\leftrightarrow \mathcal{B}:G$. We can describe the $\infty$-category $\mathcal{D}'$ as spanned by functors of the following form, \[ \begin{tikzcd} T_\mathcal{B}(b) \arrow[r] \arrow[d] \arrow[rd, "\square", phantom] & FG(b) \arrow[d] & \\ 0 \arrow[r] & b & G(b) \arrow[l, "\ast"'] \arrow[lu, "!"'] \end{tikzcd} \] up to equivalence. This notational convention will help us remember the meaning of smaller diagram $\infty$-categories and will also simplify the notation in the construction of larger diagram $\infty$-categories. \end{remark} \subsection{Monadic adjunctions}\label{sec1.4} In this section, we recall the theory of modules over a monad and of monadic adjunctions, and we describe the Kleisli $\infty$-category and the stable Kleisli $\infty$-category associated to a monad. Before turning to monads, we first recall some concepts and notation regarding monoidal $\infty$-categories. The definition is based on the formalism of $\infty$-operads, see \cite[Section 2.1]{HA}, meaning certain functors $O^\otimes\rightarrow N(\operatorname{Fin}_\ast)$, where $\operatorname{Fin}_\ast$ denotes the category of finite pointed sets. The associative $\infty$-operad is denoted by $\operatorname{Assoc}^\otimes$; for the definition, see \cite[4.1.1.3]{HA}. A monoidal $\infty$-category $\mathcal{C}$ is defined to be a coCartesian fibration of $\infty$-operads $\mathcal{C}^\otimes\rightarrow \operatorname{Assoc}^\otimes$, see \cite[4.1.1.10]{HA}. The $\infty$-category $\mathcal{C}$ arises from the $\infty$-operad $\mathcal{C}^{\otimes}\rightarrow N(\operatorname{Fin}_\ast)$ as the fiber over $\langle 1\rangle \in N(\operatorname{Fin}_\ast)$. The coCartesian fibration $\mathcal{C}^{\otimes}\rightarrow \operatorname{Assoc}^\otimes$ serves to encode a monoidal product $\otimes:\mathcal{C}\times\mathcal{C}\rightarrow \mathcal{C}$, a monoidal unit and the data exhibiting coherent associativity and unitality. Let $\mathcal{D}$ be an $\infty$-category. We can turn the $\infty$-category $\operatorname{Fun}(\mathcal{D},\mathcal{D})$ of endofunctors into a monoidal $\infty$-category via the composition monoidal structure, see \Cref{endlem} below. The monoidal product of the composition monoidal structure is given by composition of functors. Given a monoidal $\infty$-category, there is a notion of an associative algebra object, see \cite[2.1.3.1]{HA}. 
If the monoidal $\infty$-category is the nerve of a monoidal $1$-category, then this notion agrees with the 1-categorical notion of an associative algebra object. An associative algebra object in the monoidal $\infty$-category $\operatorname{Fun}(\mathcal{D},\mathcal{D})$ is called a monad on $\mathcal{D}$. We can informally express the datum of a monad $M:\mathcal{D}\rightarrow \mathcal{D}$ as follows. \begin{itemize} \item A multiplication map $m:M\otimes M= M^2 \rightarrow M$. \item The data expressing the coherent associativity of the map $m$. \item A unit map $u:id_\mathcal{D}\rightarrow M$. \item The data expressing the unitality of $u$ and $m$. \end{itemize} Every adjunction $F:\mathcal{D}\leftrightarrow\mathcal{C}: G$ of $\infty$-categories determines a monad $M=GF:\mathcal{D}\rightarrow\mathcal{D}$, see \cite[4.7.3.3]{HA}. We call the monad $M$ the adjunction monad of $F\dashv G$. The multiplication map of $M$ is induced by the counit of the adjunction, and the unit $id_\mathcal{D}\rightarrow M$ of the monad $M$ is equivalent to the unit map of the adjunction. Every monad arises as the adjunction monad of the associated monadic adjunction. To define the monadic adjunction, we first introduce the $\infty$-category of modules over the monad (the $1$-categorical analogue is called the category of algebras over the monad or also the Eilenberg-Moore category). One can associate to any associative algebra object $A$ in a monoidal $\infty$-category $\mathcal{C}$ an $\infty$-category $\operatorname{LMod}_A(\mathcal{C})$ of left modules in $\mathcal{C}$ over $A$. This is, however, not sufficiently general for our purposes. We are interested in module objects in $\mathcal{D}$ over a monad in $\operatorname{Fun}(\mathcal{D},\mathcal{D})$. The objects of $\operatorname{Fun}(\mathcal{D},\mathcal{D})$ act on the objects of $\mathcal{D}$ via the evaluation of functors. This can be used to exhibit the $\infty$-category $\mathcal{D}$ as left-tensored over the monoidal $\infty$-category $\operatorname{Fun}(\mathcal{D},\mathcal{D})$, in the sense of \cite[4.2.1.19]{HA}. Given a monad $M\in \operatorname{Fun}(\mathcal{D},\mathcal{D})$, there is thus an associated $\infty$-category $\operatorname{LMod}_M(\mathcal{D})$ of left modules over $M$ in $\mathcal{D}$, see \cite[4.2.1.13]{HA}. Given a monad $M:\mathcal{D}\rightarrow \mathcal{D}$, we denote by $F:\mathcal{D}\leftrightarrow \operatorname{LMod}_M(\mathcal{D}):G$ the monadic adjunction. The functor $F$ is the free module functor and admits a right adjoint by \cite[4.2.4.8]{HA}. Any adjunction of this form is called monadic. An adjunction is called comonadic if the opposite adjunction is monadic. A functor equivalent to the right adjoint $G$ of a monadic adjunction is called a monadic functor. Lurie's $\infty$-categorical Barr-Beck theorem \cite[4.7.3.5]{HA} characterizes monadic functors. The theorem states that a functor $G:\mathcal{C}\rightarrow\mathcal{D}$ between $\infty$-categories is monadic if and only if the following conditions are satisfied. \begin{itemize} \item $G$ admits a left adjoint. \item The functor $G$ is conservative, i.e.~it reflects equivalences. \item The $\infty$-category $\mathcal{C}$ admits and $G$ preserves colimits of $G$-split simplicial objects, i.e.~functors $N(\Delta)^{op}\rightarrow \mathcal{C}$ such that their composition with $G$ can be extended to a split simplicial object, as defined in \cite[4.7.2.2]{HA}. 
\end{itemize} The $\infty$-categorical Barr-Beck theorem is an essential tool in the theory of $\infty$-categories, and one of its corollaries, \cite[4.7.3.16]{HA}, can be used to identify equivalences of $\infty$-categories. A module in $\operatorname{LMod}_M(\mathcal{D})$ is called free if it is equivalent to an object in the image of the left adjoint $F$ of the monadic adjunction. The essential image of $F$ is called the Kleisli $\infty$-category, which we denote by $\operatorname{LMod}_M^{\operatorname{free}}(\mathcal{D})$. The restriction of the monadic adjunction $F:\mathcal{D}\leftrightarrow \operatorname{LMod}_M^{\operatorname{free}}(\mathcal{D}):G$ is called the Kleisli adjunction. The Kleisli adjunction is the minimal adjunction whose adjunction monad is equivalent to $M$, as captured by \Cref{Kleisli}. \begin{proposition}\label{Kleisli} Let $F:\mathcal{D}\leftrightarrow \mathcal{C}:G$ be an adjunction of $\infty$-categories with adjunction monad $M$. Denote by $F'':\mathcal{D}\rightarrow \operatorname{LMod}_M^{\operatorname{free}}(\mathcal{D})$ the free-module functor of the monadic adjunction of $M$. There exists a fully faithful functor $F':\operatorname{LMod}_{M}^{\operatorname{free}}(\mathcal{D})\rightarrow \mathcal{C}$ making the following diagram commute. \begin{equation}\label{tri1} \begin{tikzcd} \operatorname{LMod}_M^{\operatorname{free}}(\mathcal{D}) \arrow[rr, "F'"] & & \mathcal{C} \\ & \mathcal{D} \arrow[ru, "F"'] \arrow[lu, "F''"] & \end{tikzcd} \end{equation} In particular, there exists an equivalence of $\infty$-categories $\operatorname{LMod}^{\operatorname{free}}_M(\mathcal{D})\simeq \operatorname{Im}(F)$. \end{proposition} \begin{proof} We adapt the proof of \cite[4.7.3.13]{HA}. Denote the monadic functor of $M$ by $G''$. Consider the canonical functor $G':\mathcal{C}\rightarrow \operatorname{LMod}_M(\mathcal{D})$, defined in the discussion following Proposition 4.7.3.3 in \cite{HA}. There exists an equivalence of functors $G''\circ G'\simeq G$. We define a functor \begin{equation}\label{fhat}\hat{F}: \operatorname{LMod}_M(\mathcal{D})^{op}\xrightarrow{j} \operatorname{Fun}(\operatorname{LMod}_M(\mathcal{D}),\mathcal{S})\xrightarrow{\circ G'} \operatorname{Fun}(\mathcal{C},\mathcal{S})\,,\end{equation} where $j$ denotes the Yoneda embedding of $\operatorname{LMod}_M(\mathcal{D})^{op}$. For $d\in \mathcal{D}$, there exists an equivalence in $\operatorname{Fun}(\mathcal{C},\mathcal{S})$ \begin{equation}\label{ffeq} \hat{F}F''(d)=\operatorname{Map}_{\operatorname{LMod}_M(\mathcal{D})}(F''(d),G'(\mhyphen))\simeq \operatorname{Map}_{\mathcal{D}}(d,G''G'(\mhyphen))\simeq \operatorname{Map}_{\mathcal{C}}(F(d),\mhyphen)\,. \end{equation} It follows that the image of ${\operatorname{LMod}_M^{\operatorname{free}}(\mathcal{D})^{op}}$ under $\hat{F}$ is given by the full subcategory\linebreak \mbox{$\operatorname{Im}(F)^{op}\subset \mathcal{C}^{op}\subset \operatorname{Fun}(\mathcal{C},\mathcal{S})$} of functors corepresentable by an object in the essential image \mbox{of $F$}. We hence obtain a functor \[ F':\operatorname{LMod}_M^{\operatorname{free}}(\mathcal{D})\subset \operatorname{LMod}_M(\mathcal{D})\xrightarrow{\hat{F}^{op}}j'(\operatorname{Im}(F)^{op})^{op}\xrightarrow{(j'^{-1})^{op}} \operatorname{Im}(F)\subset \mathcal{C}\,,\] where $j':\mathcal{C}^{op}\rightarrow \operatorname{Fun}(\mathcal{C},\mathcal{S})$ denotes the Yoneda embedding of $\mathcal{C}^{op}$. 
The equivalence \eqref{ffeq} is functorial in $d\in \mathcal{D}$, so that the diagram \eqref{tri1} commutes. For any $d\in \mathcal{D}$, the equivalence \eqref{ffeq} defines an edge $e:F''(d)\rightarrow G'F'F''(d)$, which is adjoint to the edge $e':d\rightarrow G''G'F'F''(d)\simeq GF(d)$, which is a unit map of the adjunction $F\dashv G$. The adjunctions $F\dashv G$ and $F''\dashv G''$ have the same associated adjunction monad $M$, so that $e'$ is also a unit map of the adjunction $F''\dashv G''$, showing that $e$ is an equivalence. Hence $G'$ restricts to a functor $G':\operatorname{Im}(F)\rightarrow \operatorname{LMod}_M^{\operatorname{free}}(\mathcal{D})$, which is, by construction of $F'$, right adjoint to $F':\operatorname{LMod}_M^{\operatorname{free}}(\mathcal{D})\rightarrow \operatorname{Im}(F)$ with the unit map at $d$ given by the equivalence $e$. This shows that $F'$ is fully faithful. \end{proof} \begin{lemma} Let $\mathcal{D}$ be a stable $\infty$-category and $M:\mathcal{D}\rightarrow \mathcal{D}$ a monad whose underlying endofunctor is exact. The $\infty$-category $\operatorname{LMod}_M(\mathcal{D})$ is stable. \end{lemma} \begin{proof} This is \cite[7.1.1.4]{HA}. \end{proof} The Kleisli $\infty$-category $\operatorname{LMod}_M^{\operatorname{free}}(\mathcal{D})$ is in general not stable, even if $M$ is exact. However, if $M$ is a monad whose underlying endofunctor is an exact endofunctor of a stable $\infty$-category, there is a stable version of the Kleisli $\infty$-category, which yields the minimal adjunction of stable $\infty$-categories whose adjunction monad is equivalent to $M$. This is captured by \Cref{stbKleisli}. \begin{definition}~ \begin{itemize} \item A full subcategory $\mathcal{D}$ of a stable $\infty$-category $\mathcal{C}$ is called a stable subcategory if it is closed under finite limits and colimits in $\mathcal{C}$ (in particular, $\mathcal{D}$ contains a zero object) and is closed under equivalences in $\mathcal{C}$. \item Let $\mathcal{C}$ be a stable $\infty$-category and $\mathcal{C}'$ a full subcategory. The stable closure $\overline{\mathcal{C}'}$ of $\mathcal{C}'$ is the smallest stable subcategory of $\mathcal{C}$ which contains $\mathcal{C}'$. \end{itemize} \end{definition} We call the stable closure $\overline{\operatorname{LMod}_M^{\operatorname{free}}(\mathcal{D})}$ of the Kleisli $\infty$-category in $\operatorname{LMod}_M(\mathcal{D})$ the stable Kleisli $\infty$-category. \begin{proposition}\label{stbKleisli} Let $F:\mathcal{D}\leftrightarrow \mathcal{C}:G$ be an adjunction of stable $\infty$-categories with adjunction monad $M$. Denote by $F'':\mathcal{D}\rightarrow \operatorname{LMod}_M^{\operatorname{free}}(\mathcal{D})\subset \operatorname{LMod}_M(\mathcal{D})$ the free-module functor of the monadic adjunction of $M$. There exists a fully faithful functor $F':\overline{\operatorname{LMod}_{M}^{\operatorname{free}}(\mathcal{D})}\rightarrow \mathcal{C}$ making the following diagram commute. \begin{equation*} \begin{tikzcd} \overline{\operatorname{LMod}_M^{\operatorname{free}}(\mathcal{D})} \arrow[rr, "F'"] & & \mathcal{C} \\ & \mathcal{D} \arrow[ru, "F"'] \arrow[lu, "F''"] & \end{tikzcd} \end{equation*} The functor $F'$ induces an equivalence \[ \overline{\operatorname{LMod}_M^{\operatorname{free}}(\mathcal{D})}\simeq \operatorname{Im}(F')=\overline{\operatorname{Im}(F)}\,.\] \end{proposition} Before we can prove \Cref{stbKleisli}, we need to give a more explicit description of the stable closure of a subcategory. 
\begin{notation} Let $\mathcal{C}$ be a stable $\infty$-category and let $\mathcal{C}'\subset \mathcal{C}$ be a full subcategory such that $0\in \mathcal{C}'$ and $\mathcal{C}'$ is closed under deloopings in $\mathcal{C}$. Denote by $\operatorname{cl}(\mathcal{C}')$ the full subcategory of $\mathcal{C}$ spanned by objects which are equivalent to the cofiber in $\mathcal{C}$ of an edge in $\mathcal{C}'\subset \mathcal{C}$. \end{notation} \begin{lemma}\label{stbcllem} Let $\mathcal{C}$ be a stable $\infty$-category and $\mathcal{C}'\subset \mathcal{C}$ a full subcategory such that $0\in \mathcal{C}'$ and $\mathcal{C}'$ is closed under deloopings in $\mathcal{C}$. Then the stable closure of $\mathcal{C}'$ in $\mathcal{C}$ is equivalent to the direct limit in $\operatorname{Cat}_\infty$ of the diagram \begin{equation}\label{dirlim} \mathcal{C}'\subset \operatorname{cl}(\mathcal{C}')\subset \operatorname{cl}^2(\mathcal{C}')\subset \dots \,.\end{equation} \end{lemma} \begin{proof} We denote the direct limit of \eqref{dirlim} by $\mathcal{C}''$. We can consider $\mathcal{C}''$ as the full subcategory of $\mathcal{C}$ spanned by the objects of $\operatorname{cl}^i(\mathcal{C}')$ for all $i>0$. Given a morphism $\alpha:x\rightarrow y$ in $\mathcal{C}''$, we find $i>0$ such that $\alpha$ lies in $\operatorname{cl}^i(\mathcal{C}')$. The cofiber sequence exhibiting the cofiber of $\alpha$ in $\mathcal{C}''$ lies in $\operatorname{cl}^{i+1}(\mathcal{C}')\subset \mathcal{C}''$ and is by construction also a cofiber sequence in $\mathcal{C}$. Using that $\mathcal{C}'$ is closed under deloopings in $\mathcal{C}$, it follows that $\operatorname{cl}^i(\mathcal{C}')$ is also closed under deloopings for all $i>0$. It follows that $\mathcal{C}''\subset \mathcal{C}$ is a stable subcategory. Any stable subcategory $\mathcal{D}\subset \mathcal{C}$ satisfies $\operatorname{cl}(\mathcal{D})=\mathcal{D}$. Thus, for all $i>0$, we find $\operatorname{cl}^i(\mathcal{C}')\subset \operatorname{cl}^i(\overline{\mathcal{C}'})=\overline{\mathcal{C}'}$. Hence $\mathcal{C}''\subset \overline{\mathcal{C}'}$, which by definition implies $\mathcal{C}''=\overline{\mathcal{C}'}$. \end{proof} \begin{remark}\label{clrem} \Cref{stbcllem} can be informally summarized as saying that any object $c\in \overline{\mathcal{C}'}$ is obtained by finitely many iterated cofiber constructions, starting from a finite collection of objects of $\mathcal{C}'$. \end{remark} \begin{proof}[Proof of \Cref{stbKleisli}] Denote the monadic functor of $M$ by $G''$. Consider the functor \begin{equation}\label{fhat2}\hat{F}: \operatorname{LMod}_M(\mathcal{D})^{op}\xrightarrow{j} \operatorname{Fun}(\operatorname{LMod}_M(\mathcal{D}),\mathcal{S})\xrightarrow{\circ G'} \operatorname{Fun}(\mathcal{C},\mathcal{S})\,,\end{equation} from the proof of \Cref{Kleisli}. Denote by $\mathcal{X}\subset \operatorname{LMod}_M(\mathcal{D})$ the full subcategory spanned by objects whose image under $\hat{F}$ is a corepresentable functor. The functor $\hat{F}$ preserves all limits in $\operatorname{LMod}_M(\mathcal{D})^{op}$ by \cite[5.1.3.2]{HTT}, using that limits are computed pointwise in functor categories. The full subcategory of $\operatorname{Fun}(\mathcal{C},\mathcal{S})$ of corepresentable functors is closed under limits. This implies that $\mathcal{X}$ is closed under colimits in $\operatorname{LMod}_M(\mathcal{D})$. 
The stable Kleisli $\infty$-category $\overline{\operatorname{LMod}_M^{\operatorname{free}}(\mathcal{D})}$ is contained in $\mathcal{X}$: this follows from $\operatorname{LMod}^{\operatorname{free}}_M(\mathcal{D})\subset \mathcal{X}$, the fact that $\operatorname{LMod}^{\operatorname{free}}_M(\mathcal{D})$ is closed under deloopings in $\operatorname{LMod}_M(\mathcal{D})$, and \Cref{stbcllem}. As in the proof of \Cref{Kleisli}, using the inverse of the Yoneda embedding $j'$ of $\mathcal{C}^{op}$, we can thus find a functor $F':\overline{\operatorname{LMod}_M^{\operatorname{free}}(\mathcal{D})}\rightarrow \mathcal{C}$. For any $x\in \mathcal{X}$, we denote by $u_x:x\rightarrow G'F'(x)$ the edge in $\operatorname{LMod}_M(\mathcal{D})$ that is mapped to $id_{F'(x)}$ via the equivalence $\operatorname{Map}_{\operatorname{LMod}_M(\mathcal{D})}(x,G'F'(x))\simeq \operatorname{Map}_{\mathcal{C}}(F'(x),F'(x))$. Consider the Cartesian fibration $p:\chi(G')\rightarrow \Delta^1$. For $x\in \mathcal{X}\subset \chi(G')\times_{\Delta^1}\Delta^{\{0\}}$, the edge $x\rightarrow F'(x)$ in $\chi(G')$ lying over $0\rightarrow 1$ and corresponding to $u_x:x\rightarrow G'F'(x)$ is $p$-coCartesian by \cite[2.4.4.3]{HTT}. By performing a construction as in the proof of \cite[5.2.2.8]{HTT}, we can produce a natural transformation $u:\mathcal{X}\rightarrow \operatorname{Fun}(\Delta^1,\operatorname{LMod}_M(\mathcal{D}))$ from the identity functor to $G'\circ F'$ such that $u(x)\simeq u_x$. Denote by $\mathcal{X}_0\subset \mathcal{X}$ the full subcategory consisting of $x\in \mathcal{X}$ such that $u_x$ is an equivalence. We have shown in the proof of \Cref{Kleisli} that $\mathcal{X}_0$ contains $\operatorname{LMod}_M^{\operatorname{free}}(\mathcal{D})$. The functor $G'$ is exact, because $G''\circ G'\simeq G$ is exact and $G''$ is conservative. The functor $G'F'$ is thus exact. It follows that $\mathcal{X}_0$ is closed under the formation of finite colimits. Thus $\overline{\operatorname{LMod}_M^{\operatorname{free}}(\mathcal{D})}\subset \mathcal{X}_0$. This shows that $F'$ is a fully faithful functor and thus an equivalence onto its essential image. It also follows that $\operatorname{Im}(F')$ is a stable subcategory of $\mathcal{C}$. Let $x\in \overline{\operatorname{LMod}_M^{\operatorname{free}}(\mathcal{D})}$. Then $x$ is obtained by finitely many iterated cofiber constructions, starting from a finite collection of objects in $\operatorname{LMod}^{\operatorname{free}}_M(\mathcal{D})$. Thus $F'(x)\in \operatorname{Im}(F')$ is obtained by finitely many iterated cofiber constructions, starting from a finite collection of objects in $\operatorname{Im}(F'|_{\operatorname{LMod}_M^{\operatorname{free}}(\mathcal{D})})=\operatorname{Im}(F)$. We deduce $\operatorname{Im}(F')\subset \overline{\operatorname{Im}(F)}$. Using that $\operatorname{Im}(F')$ is a stable subcategory of $\mathcal{C}$ and that $\operatorname{Im}(F)\subset \operatorname{Im}(F')$, it follows by definition that $\overline{\operatorname{Im}(F)}\subset \operatorname{Im}(F')$. We have shown the desired equality $\operatorname{Im}(F')=\overline{\operatorname{Im}(F)}$. \end{proof} We end this section with a (well-known) construction of monads from algebra objects in monoidal $\infty$-categories and a characterization of the monads associated to free-forget adjunctions of algebra objects. \begin{lemma}\label{endlem} Let $\mathcal{D}$ be an $\infty$-category. 
The $\infty$-category $\operatorname{Fun}(\mathcal{D},\mathcal{D})$ is an endomorphism object for $\mathcal{D}\in \operatorname{Cat}_\infty$ and in particular admits a monoidal structure, called the composition monoidal structure. If $\mathcal{D}$ is left tensored over a monoidal $\infty$-category $\mathcal{C}$, then there exists an $\operatorname{Assoc}^\otimes$-monoidal functor $i:\mathcal{C}^\otimes\rightarrow \operatorname{Fun}(\mathcal{D},\mathcal{D})^\otimes$, in the sense of \cite[2.1.3.7]{HA}. In particular, given an associative algebra object $A\in \mathcal{C}$, $i(A)\in \operatorname{Fun}(\mathcal{D},\mathcal{D})$ inherits the structure of a monad. We denote $i(A)$ by $A \otimes \mhyphen$\,. \end{lemma} \begin{proof} We show that $\mathcal{E}\coloneqq\operatorname{Fun}(\mathcal{D},\mathcal{D})$ is an endomorphism object for $\mathcal{D}$ in $\operatorname{Cat}_\infty$, meaning a terminal object of the endomorphism $\infty$-category $\operatorname{Cat}_\infty[\mathcal{D}]$ introduced in \cite[4.7.1.1]{HA}. $\operatorname{Cat}_\infty$ is Cartesian closed: the functor $\mathcal{D}\times\mhyphen:\operatorname{Cat}_\infty\rightarrow \operatorname{Cat}_\infty$ admits a right adjoint, given by the functor $\operatorname{Fun}(\mathcal{D},\mhyphen)$. Spelling out the definition, it follows that there exists an equivalence \[\operatorname{Cat}_\infty[\mathcal{D}]\simeq \operatorname{Fun}_{\Delta^1}(\Delta^1,\Gamma(\mathcal{D}\times \mhyphen))\times_{\operatorname{Fun}_{\Delta^1}(\{1\},\Gamma(\mathcal{D}\times \mhyphen))} \{\mathcal{D}\}\,.\] Using the equivalence $\Gamma(\mathcal{D}\times \mhyphen)\simeq \chi(\operatorname{Fun}(\mathcal{D},\mhyphen))$, it follows \[ \operatorname{Cat}_\infty[\mathcal{D}]\simeq \operatorname{Cat}_{\infty/\mathcal{E}}\,.\] Thus $\mathcal{E}$ is a terminal object of $\operatorname{Cat}_\infty[\mathcal{D}]$ and admits a monoidal structure by \cite[4.7.1.40, 4.1.3.19]{HA}. The second part of the Lemma follows directly from \cite[4.7.1.40.(2)]{HA} and \cite[4.1.3.19]{HA}. \end{proof} \begin{example} Let $\mathcal{D}$ be a monoidal $\infty$-category. Then $\mathcal{D}$ is canonically left tensored over itself. \Cref{endlem} implies that there exists an $\operatorname{Assoc}^\otimes$-monoidal functor $\mathcal{D}^\otimes\rightarrow \operatorname{Fun}(\mathcal{D},\mathcal{D})^\otimes$. \end{example} \begin{lemma}\label{algmndlem} Let $\mathcal{D}$ be an $\infty$-category left tensored over a monoidal $\infty$-category $\mathcal{C}$. Let $A \in \mathcal{C}$ be an associative algebra object. Consider the monad $M=A \otimes \mhyphen$. Then there exists an equivalence $\operatorname{LMod}_M(\mathcal{D})\simeq \operatorname{LMod}_A(\mathcal{D})$ identifying the forgetful functor $\operatorname{LMod}_A(\mathcal{D})\rightarrow \mathcal{D}$ with the monadic functor of $M$. \end{lemma} \begin{proof} Let $p_1,p_2:\mathcal{O}_1^\otimes,\mathcal{O}_2^\otimes\rightarrow \mathcal{LM}^\otimes$ be coCartesian fibrations exhibiting $\mathcal{D}$ as left tensored over $\mathcal{C}$ and $\operatorname{Fun}(\mathcal{D},\mathcal{D})$, respectively. Using that $\operatorname{Fun}(\mathcal{D},\mathcal{D})$ is terminal in $\operatorname{Alg}_{\mathbb{A}_\infty}(\operatorname{Cat}_\infty)_{/\operatorname{Fun}(\mathcal{D},\mathcal{D})}$, it follows from \cite[4.7.1.40.(2)]{HA}, that there exists an $\mathcal{LM}^\otimes$-monoidal functor $i':\mathcal{O}^\otimes_1\rightarrow \mathcal{O}^\otimes_2$.
By definition $\operatorname{LMod}_A(\mathcal{D})\subset \operatorname{Fun}_{\mathcal{LM}^{\otimes}}(\mathcal{LM}^\otimes,\mathcal{O}_1^\otimes)$ and $\operatorname{LMod}_M(\mathcal{D})\subset \operatorname{Fun}_{\mathcal{LM}^{\otimes}}(\mathcal{LM}^\otimes,\mathcal{O}_2^\otimes)$. Composition with $i'$ determines a functor $f:\operatorname{LMod}_A(\mathcal{D})\rightarrow \operatorname{LMod}_M(\mathcal{D})$ making the following diagram commute, \[ \begin{tikzcd} \operatorname{LMod}_A(\mathcal{D}) \arrow[rd, "G"'] \arrow[rr, "f"] & & \operatorname{LMod}_M(\mathcal{D}) \arrow[ld, "G'"] \\ & \mathcal{D} & \end{tikzcd} \] where $G$ and $G'$ are the forgetful functors. Denote the left adjoints of $G$ and $G'$ by $F$ and $F'$, respectively. The functor $G'$ is the monadic functor of the monad $M$. The functor $G$ is also monadic by the $\infty$-categorical Barr-Beck theorem and \cite[3.2.2.6,~3.4.4.6]{HA}. We observe that the unit maps $id_\mathcal{D}\rightarrow GF\simeq A \otimes \mhyphen$ and $id_\mathcal{D}\rightarrow G'F'\simeq A \otimes \mhyphen$ are both equivalent to the unit map of the monad $M$. We apply \cite[4.7.3.16]{HA} and deduce that $f$ is an equivalence. \end{proof} \subsection{Semiorthogonal decompositions and adjunctions}\label{sec1.5} In this section we closely follow \cite{DKSS19} in describing semiorthogonal decompositions of stable $\infty$-categories and their relation to adjunctions of stable $\infty$-categories. \begin{definition}\label{soddef} Let $\mathcal{V}$ be a stable $\infty$-category and let $\mathcal{A},\mathcal{B}$ be an ordered pair of stable subcategories of $\mathcal{V}$. Denote by $\{\mathcal{A},\mathcal{B}\}$ the full subcategory of $\operatorname{Fun}(\Delta^1,\mathcal{V})$ spanned by vertices of the form $a\rightarrow b$ with $a\in \mathcal{A}$ and $b\in \mathcal{B}$. The fiber functor $\operatorname{fib}$, assigning to a morphism its fiber, restricts to a functor \begin{equation}\label{fibreseq} \operatorname{fib}: \{\mathcal{A},\mathcal{B}\}\longrightarrow \mathcal{V}\,. \end{equation} We call the ordered pair $(\mathcal{A},\mathcal{B})$ a semiorthogonal decomposition of $\mathcal{V}$ if the functor \eqref{fibreseq} is an equivalence. \end{definition} We next associate to any semiorthogonal decomposition $(\mathcal{A},\mathcal{B})$ an inner fibration $p:\chi(\mathcal{A},\mathcal{B})\rightarrow \Delta^1$. The construction is similar to the Grothendieck construction. We will relate the existence of further semiorthogonal decompositions to properties of the fibration $p$. \begin{construction}\label{chiconstr} Let $\mathcal{V}$ be a stable $\infty$-category with a semiorthogonal decomposition $(\mathcal{A},\mathcal{B})$. We define the simplicial set $\chi(\mathcal{A},\mathcal{B})$ by defining an $n$-simplex of $\chi(\mathcal{A},\mathcal{B})$ to correspond to the following data. \begin{itemize} \item An $n$-simplex $j:\Delta^n\rightarrow \Delta^1$ of $\Delta^1$. \item An $n$-simplex $\sigma:\Delta^n\rightarrow \mathcal{V}$ such that $\sigma(\Delta^{j^{-1}(0)})\subset \mathcal{A}$ and $\sigma({\Delta^{j^{-1}(1)}})\subset \mathcal{B}$. \end{itemize} We define the face and degeneracy maps to act on an $n$-simplex $(j,\sigma)\in \chi(\mathcal{A},\mathcal{B})_n$ componentwise. The forgetful map $p:\chi(\mathcal{A},\mathcal{B})\rightarrow \Delta^1$ is an inner fibration. In particular, we see that $\chi(\mathcal{A},\mathcal{B})$ is an $\infty$-category.
\end{construction} The next lemma shows that in the setting of \Cref{chiconstr}, the stable $\infty$-category $\mathcal{V}$ can be recovered up to equivalence from the inner fibration $p:\chi(\mathcal{A},\mathcal{B})\rightarrow \Delta^1$. \begin{lemma}\label{equlem} Let $\mathcal{V}$ be a stable $\infty$-category with a semiorthogonal decomposition $(\mathcal{A},\mathcal{B})$. There exists an equivalence \begin{equation} \label{decompiso} \operatorname{Fun}_{\Delta^1}(\Delta^1,\chi(\mathcal{A},\mathcal{B}))\simeq \mathcal{V}\,,\end{equation} between the $\infty$-category of sections of $p:\chi(\mathcal{A},\mathcal{B})\rightarrow \Delta^1$ and $\mathcal{V}$. \end{lemma} \begin{proof} The inclusions $\operatorname{Fun}_{\Delta^1}(\Delta^1,\chi(\mathcal{A},\mathcal{B})),\{\mathcal{A},\mathcal{B}\}\subset \operatorname{Fun}(\Delta^1,\mathcal{V})$ are fully faithful functors whose images are identical, given by the edges $a\rightarrow b$ where $a\in \mathcal{A}$ and $b\in \mathcal{B}$. Thus there exists an equivalence $\operatorname{Fun}_{\Delta^1}(\Delta^1,\chi(\mathcal{A},\mathcal{B}))\simeq \{\mathcal{A},\mathcal{B}\}$. Using the equivalence $\{\mathcal{A},\mathcal{B}\}\simeq \mathcal{V}$ the statement follows. \end{proof} \begin{definition} Let $\mathcal{V}$ be a stable $\infty$-category and $\mathcal{A}\subset \mathcal{V}$ a stable subcategory. We define \begin{itemize} \item the right orthogonal $\mathcal{A}^\perp$ to be the full subcategory of $\mathcal{V}$ spanned by those vertices $x\in \mathcal{V}$ such that for all $a\in \mathcal{A}$ the mapping space $\operatorname{Map}_\mathcal{V}(a,x)$ is contractible. \item the left orthogonal $\prescript{\perp}{}{\mathcal{A}}$ to be the full subcategory of $\mathcal{V}$ spanned by those vertices $x\in \mathcal{V}$ such that for all $a\in \mathcal{A}$ the mapping space $\operatorname{Map}_\mathcal{V}(x,a)$ is contractible. \end{itemize} \end{definition} \begin{lemma}\label{perplem} Let $\mathcal{V}$ be a stable $\infty$-category with a semiorthogonal decomposition $(\mathcal{A},\mathcal{B})$. Then $\mathcal{B}= \prescript{\perp}{}{\mathcal{A}}$ and $\mathcal{A}= \mathcal{B}^\perp$. \end{lemma} \begin{proof} We show $\mathcal{A}=\mathcal{B}^\perp$. The relation $\mathcal{B} = \prescript{\perp}{}{\mathcal{A}}$ can be shown analogously. By assumption the fiber functor $\operatorname{fib}: \{\mathcal{A},\mathcal{B}\}\rightarrow \mathcal{V}$ is an equivalence. Under any inverse of $\operatorname{fib}$, the stable subcategory $\mathcal{A}$ of $\mathcal{V}$ is identified with the stable subcategory of $\{\mathcal{A},\mathcal{B}\}$ spanned by sections of the form $a\rightarrow 0$. We show that $a\rightarrow b\in \{\mathcal{A},\mathcal{B}\}\simeq \mathcal{V}$ lies in $\mathcal{B}^\perp$ if and only if $b\simeq 0$. Using that $\{\mathcal{A},\mathcal{B}\}$ is a full subcategory of $\operatorname{Fun}(\Delta^1,\mathcal{V})$, we obtain for $0\rightarrow b',a\rightarrow b \in \{\mathcal{A},\mathcal{B}\}$ an equivalence \[ \operatorname{Map}_{\{\mathcal{A},\mathcal{B}\}}(0\rightarrow b',a\rightarrow b) \simeq \operatorname{Map}_{\operatorname{Fun}(\Delta^1,\mathcal{V})}(0\rightarrow b', a\rightarrow b)\,. \] Using that $0\in \mathcal{A}$ is initial, we obtain that the restriction functor \[ \operatorname{Map}_{\operatorname{Fun}(\Delta^1,\mathcal{V})}(0\rightarrow b', a\rightarrow b)\longrightarrow \operatorname{Map}_{\mathcal{V}}(b',b)\] is also an equivalence. 
We deduce that under any inverse of $\operatorname{fib}$ the $\infty$-category $\mathcal{B}^\perp$ is identified with the stable subcategory of $\{\mathcal{A},\mathcal{B}\}$ spanned by edges $a\rightarrow b$ satisfying $\operatorname{Map}_{\mathcal{V}}(b',b)\simeq \ast$ for all $b'\in \mathcal{B}$. This shows that $a\rightarrow b\in \{\mathcal{A},\mathcal{B}\}\simeq \mathcal{V}$ lies in $\mathcal{B}^\perp$ if and only if $b\simeq 0$. \end{proof} \begin{definition}\label{cartdef} Let $\mathcal{V}$ be a stable $\infty$-category with a semiorthogonal decomposition $(\mathcal{A},\mathcal{B})$ and consider the map $p:\chi(\mathcal{A},\mathcal{B})\rightarrow \Delta^1$. We call \begin{itemize} \item $(\mathcal{A},\mathcal{B})$ Cartesian if $p$ is a Cartesian fibration and coCartesian if $p$ is a coCartesian fibration. \item an edge $a\rightarrow b$ in $\mathcal{V}$ with $a\in \mathcal{A}$ and $b\in \mathcal{B}$ an $(\mathcal{A},\mathcal{B})$-Cartesian edge if it is $p$-Cartesian and an $(\mathcal{A},\mathcal{B})$-coCartesian edge if it is $p$-coCartesian, considered as an edge in $\chi(\mathcal{A},\mathcal{B})$. \item if $(\mathcal{A},\mathcal{B})$ is Cartesian, the functor associated to the Cartesian fibration $p$ the right gluing functor associated to $(\mathcal{A},\mathcal{B})$. \item if $(\mathcal{A},\mathcal{B})$ is coCartesian, the functor associated to the coCartesian fibration $p$ the left gluing functor associated to $(\mathcal{A},\mathcal{B})$. \end{itemize} \end{definition} \begin{definition} Let $\mathcal{V}$ be a stable $\infty$-category and let $i:\mathcal{A}\rightarrow \mathcal{V}$ be the inclusion of a stable subcategory. Then $\mathcal{A}\subset \mathcal{V}$ is called \begin{itemize} \item left admissible if $i$ admits a left adjoint, \item right admissible if $i$ admits a right adjoint, \item admissible if $i$ admits both left and right adjoints. \end{itemize} \end{definition} The relation between Cartesian and coCartesian semiorthogonal decompositions and admissible subcategories is as follows. \begin{proposition}\label{admprop} Let $\mathcal{V}$ be a stable $\infty$-category and let $\mathcal{A}\subset \mathcal{V}$ be a stable subcategory. Then the following are equivalent. \begin{enumerate} \item $\mathcal{A}$ is an admissible subcategory of $\mathcal{V}$. \item $(\mathcal{A}^\perp,\mathcal{A})$ and $(\mathcal{A},\prescript{\perp}{}{\mathcal{A}})$ form semiorthogonal decompositions of $\mathcal{V}$. \item $(\mathcal{A}^\perp,\mathcal{A})$ is a coCartesian semiorthogonal decomposition of $\mathcal{V}$. \item $(\mathcal{A},\prescript{\perp}{}{\mathcal{A}})$ is a Cartesian semiorthogonal decomposition of $\mathcal{V}$. \end{enumerate} \end{proposition} \Cref{admprop} follows from \Cref{dkssprop1} and \Cref{dkssprop2}, which we cite from \cite{DKSS19}. \begin{proposition}\label{dkssprop1} Let $\mathcal{V}$ be a stable $\infty$-category and let $i:\mathcal{A}\rightarrow \mathcal{V}$ be the inclusion of a stable subcategory. Then: \begin{enumerate} \item The pair $(\mathcal{A},\prescript{\perp}{}{\mathcal{A}})$ forms a semiorthogonal decomposition of $\mathcal{V}$ if and only if $\mathcal{A}\subset \mathcal{V}$ is left admissible.
If these conditions hold then we further have $\prescript{\perp}{}{\mathcal{A}}=\ker(p)$ for $p$ any left adjoint of the inclusion $i:\mathcal{A}\rightarrow \mathcal{V}.$ \item The pair $(\mathcal{A}^\perp,\mathcal{A})$ forms a semiorthogonal decomposition of $\mathcal{V}$ if and only if $\mathcal{A}\subset \mathcal{V}$ is right admissible. If these conditions hold then we further have ${\mathcal{A}}^\perp=\ker(q)$ for $q$ any right adjoint of the inclusion $i:\mathcal{A}\rightarrow \mathcal{V}.$ \end{enumerate} \end{proposition} \begin{proposition}\label{dkssprop2} Let $\mathcal{V}$ be a stable $\infty$-category with a semiorthogonal decomposition $(\mathcal{A},\mathcal{B})$. \begin{enumerate} \item The semiorthogonal decomposition $(\mathcal{A},\mathcal{B})$ is Cartesian if and only if the subcategory $\mathcal{A}\subset \mathcal{V}$ is right admissible. If these equivalent conditions hold, then the restriction of any right adjoint of $i:\mathcal{A}\subset \mathcal{V}$ to $\mathcal{B}$ is a right gluing functor. \item The semiorthogonal decomposition $(\mathcal{A},\mathcal{B})$ is coCartesian if and only if the subcategory $\mathcal{B}\subset \mathcal{V}$ is left admissible. If these equivalent conditions hold, then the restriction of any left adjoint of $i:\mathcal{B}\subset \mathcal{V}$ to $\mathcal{A}$ is a left gluing functor. \end{enumerate} \end{proposition} \begin{lemma}\label{cartsect} Let $(\mathcal{A},\mathcal{B})$ be a semiorthogonal decomposition of a stable $\infty$-category $\mathcal{V}$. Then: \begin{enumerate} \item Suppose $(\mathcal{A},\mathcal{B})$ is Cartesian. Under any inverse of the functor $\operatorname{fib}:\{\mathcal{A},\mathcal{B}\}\rightarrow \mathcal{V}$, the subcategory $\mathcal{A}^\perp\subset \mathcal{V}$ is identified with the full subcategory of $\{\mathcal{A},\mathcal{B}\}$ spanned by the $(\mathcal{A},\mathcal{B})$-Cartesian edges. \item Suppose $(\mathcal{A},\mathcal{B})$ is coCartesian. Under any inverse of the functor $\operatorname{fib}:\{\mathcal{A},\mathcal{B}\}\rightarrow \mathcal{V}$, the subcategory $\prescript{\perp}{}{\mathcal{B}}\subset \mathcal{V}$ is identified with the full subcategory of $\{\mathcal{A},\mathcal{B}\}$ spanned by the $(\mathcal{A},\mathcal{B})$-coCartesian edges. \end{enumerate} \end{lemma} \begin{proof} We only show statement 1, statement 2 can be shown analogously. Using \Cref{admprop} we deduce that $\mathcal{A}\subset \mathcal{V}$ is an admissible subcategory. By \cite[A.8.20]{HA}, we obtain that $\mathcal{A}$ and $\mathcal{A}^\perp$ form a recollement of $\mathcal{V}$ as defined in \cite[A.8.1]{HA}. The description of $\mathcal{A}^\perp$ as spanned by $(\mathcal{A},\mathcal{B})$-Cartesian edges is thus provided in \cite[A.8.7]{HA}. \end{proof} We cite the following lemma from \cite{DKSS19}. \begin{lemma}\label{mutlem} Let $\mathcal{V}$ be a stable $\infty$-category and let $\mathcal{A}\subset \mathcal{V}$ be an admissible subcategory. Consider a biCartesian square \[ \begin{tikzcd} d\arrow[r]\arrow[dr, phantom, "\square"]\arrow[d]& a\arrow[d]\\ 0\arrow[r]& b \end{tikzcd} \] in $\mathcal{V}$. Then the following are equivalent: \begin{enumerate} \item $d$ is an object of $\mathcal{A}^\perp$, $a$ is an object of $\mathcal{A}$ and $b$ is an object of $\prescript{\perp}{}{\mathcal{A}}$. \item The edge $d\rightarrow a$ is $(\mathcal{A}^\perp,\mathcal{A})$-coCartesian. \item The edge $a\rightarrow b$ is $(\mathcal{A},\prescript{\perp}{}{\mathcal{A}})$-Cartesian. 
\end{enumerate} \end{lemma} \begin{remark}\label{mutrem} The restriction functors from the full subcategory of $\operatorname{Fun}(\Delta^1\times\Delta^1,\mathcal{V})$ spanned by diagrams satisfying the equivalent conditions 1.-3. of \Cref{mutlem} to the vertices $d$ or $b$ constitute trivial fibrations. By choosing a section of one of these trivial fibrations we hence get an essentially unique equivalence $m_\mathcal{A}: \mathcal{A}^\perp\rightarrow \prescript{\perp}{}{\mathcal{A}},$ which we call the mutation around $\mathcal{A}$. We observe that precomposition with $m_\mathcal{A}$ identifies the right gluing functor $\prescript{\perp}{}{\mathcal{A}}\rightarrow \mathcal{A}$ of the semiorthogonal decomposition $(\mathcal{A},\prescript{\perp}{}{\mathcal{A}})$ with the left gluing functor $\mathcal{A}^\perp\rightarrow \mathcal{A}$ of the semiorthogonal decomposition $(\mathcal{A}^\perp,\mathcal{A})$. \end{remark} The next lemma associates to any adjunction of stable $\infty$-categories a biCartesian semiorthogonal decomposition. \begin{lemma}\label{adjsodlem} Let $p:\mathcal{M}\rightarrow \Delta^1$ be an adjunction of stable $\infty$-categories associated to a pair of functors $F:\mathcal{Y}\leftrightarrow \mathcal{X}:G$ and consider the stable $\infty$-category $\mathcal{V}\coloneqq\operatorname{Fun}_{\Delta^1}(\Delta^1,\mathcal{M})$ of sections of $p$. Denote by $\operatorname{p-Ran}_{\{i\}\rightarrow \Delta^1}\mathcal{M}$ and $\operatorname{p-Lan}_{\{i\}\rightarrow \Delta^1}\mathcal{M}$ the full subcategories of $\mathcal{V}$ spanned by functors that are right and left Kan extensions relative $p$, respectively, of their restriction to $\operatorname{Fun}_{\Delta^1}(\{i\},\mathcal{M})\subset \mathcal{M}.$ \begin{enumerate} \item The following four $\infty$-categories \begin{align*} \mathcal{A}\coloneqq\operatorname{p-Ran}_{\{1\}\rightarrow \Delta^1}\mathcal{M} \quad\quad& \mathcal{B}\coloneqq\operatorname{p-Ran}_{\{0\}\rightarrow \Delta^1}\mathcal{M}\\ \mathcal{C}\coloneqq\operatorname{p-Lan}_{\{1\}\rightarrow \Delta^1}\mathcal{M} \quad\quad& \mathcal{D}\coloneqq\operatorname{p-Lan}_{\{0\}\rightarrow \Delta^1}\mathcal{M} \end{align*} are stable subcategories of $\mathcal{V}$. \end{enumerate} The restriction functors induce equivalences \begin{equation}\label{idabc} \mathcal{A},\mathcal{C}\simeq \mathcal{X}\, ,\quad \mathcal{B},\mathcal{D}\simeq \mathcal{Y}\,. \end{equation} \begin{enumerate}\setcounter{enumi}{1} \item There are coCartesian semiorthogonal decompositions $(\mathcal{A},\mathcal{B}),\,(\mathcal{B},\mathcal{C})$ and Cartesian semiorthogonal decompositions $(\mathcal{B},\mathcal{C}),\,(\mathcal{C},\mathcal{D})$ of $\mathcal{V}$. \item Up to mutations and the equivalences \eqref{idabc}, the left and right gluing functor of $(\mathcal{A},\mathcal{B})$ and $(\mathcal{B},\mathcal{C})$, respectively, can be identified with $G$ and the left and right gluing functor of $(\mathcal{B},\mathcal{C})$ and $(\mathcal{C},\mathcal{D})$, respectively, can be identified with $F$. \end{enumerate} \end{lemma} \begin{proof} For statement 1, we note that the full subcategories $\mathcal{A},\mathcal{B},\mathcal{C},\mathcal{D}$ of $\mathcal{V}$ are closed under equivalences in $\mathcal{V}$ and contain a zero object of $\mathcal{V}$. The closure of the subcategories of $\mathcal{V}$ under the formation of fibers and cofibers in $\mathcal{V}$ follows from the observation that the Kan extension functors \eqref{idabc} are exact. We obtain that the subcategories are stable subcategories.
We now show statements 2 and 3. The functor $G$ is exact. By \cite[A.8.7]{HA} we obtain that $\mathcal{B}$ and $\mathcal{A}$ form a recollement of $\mathcal{V}$. \cite[A.8.20]{HA} implies that $\mathcal{B}\subset \mathcal{V}$ is an admissible subcategory. This shows by \Cref{admprop} that $(\mathcal{A},\mathcal{B})$ forms a coCartesian and $(\mathcal{B},\mathcal{C})$ forms a Cartesian semiorthogonal decomposition of $\mathcal{V}$. The right gluing functor of $(\mathcal{B},\mathcal{C})$, denoted $G'$, is by \Cref{dkssprop2} given by the restriction of a right adjoint of the inclusion $i:\mathcal{B}\subset \mathcal{V}$ to $\mathcal{C}$. Consider the coCartesian fibration $\Gamma(i)\rightarrow \Delta^1$. An edge $e:\Delta^1\rightarrow \Gamma(i)$ lying over $0\rightarrow 1$ corresponds to a diagram $\Delta^1\times\Delta^1\rightarrow \mathcal{M}$ \begin{equation}\label{pbdiag1} \begin{tikzcd} y'\arrow[r]\arrow[d]&0\arrow[d]\\ y\arrow[r]&x \end{tikzcd} \end{equation} such that $y,y'\in \mathcal{Y}$ and $x\in \mathcal{X}$. The edge $e$ is Cartesian if and only if the diagram \eqref{pbdiag1} is a pullback diagram. \cite[4.3.1.10]{HTT} implies that the diagram \eqref{pbdiag1} is a pullback diagram if and only if the vertex $y'$ is a choice of fiber of the edge $y\rightarrow G(x)$. Hence the right adjoint of $i$ restricted to $\mathcal{C}$ is equivalent to the functor $G':\mathcal{C}\simeq \mathcal{X}\xrightarrow{G[-1]}\mathcal{Y}\simeq \mathcal{B}$. The functor $G'$ admits a left adjoint $F':\mathcal{B}\simeq \mathcal{Y}\xrightarrow{F[1]}\mathcal{X}\simeq \mathcal{C}$. The Cartesian fibration $\chi(\mathcal{B},\mathcal{C})\rightarrow \Delta^1$ classifies $G'$. We obtain that the fibration $\chi(\mathcal{B},\mathcal{C})\rightarrow \Delta^1$ is also coCartesian, classifying the functor $F'$. It follows that the semiorthogonal decomposition $(\mathcal{B},\mathcal{C})$ is coCartesian. The $\infty$-category $\mathcal{D}$ is by \cite[4.3.1.4]{HTT} spanned by the sections given by $p$-coCartesian edges. We observe that the equivalence $\operatorname{Fun}_{\Delta^1}(\Delta^1,\chi(\mathcal{B},\mathcal{C}))\simeq \mathcal{V}$ of \Cref{equlem} maps the sections given by $(\mathcal{B},\mathcal{C})$-coCartesian edges to the sections of $\mathcal{M}$ given by $p$-coCartesian edges. It follows from \Cref{cartsect} that $\mathcal{D}=\prescript{\perp}{}{\mathcal{C}}$ and from \Cref{admprop} that the pair $(\mathcal{C},\mathcal{D})$ forms a Cartesian semiorthogonal decomposition of $\mathcal{V}$. Precomposing the right gluing functor $G'$ of $(\mathcal{B},\mathcal{C})$ with the mutation equivalence around $\mathcal{B}$ yields by \Cref{mutrem} the left gluing functor of $(\mathcal{A},\mathcal{B})$. Similarly, the right gluing functor of $(\mathcal{C},\mathcal{D})$ is obtained from the left gluing functor $F'$ of $(\mathcal{B},\mathcal{C})$ by precomposition with the inverse of the mutation equivalence around $\mathcal{C}$. \end{proof} \begin{remark}\label{sodrem} Let $\mathcal{M}\rightarrow \Delta^1$ be an adjunction of stable $\infty$-categories. Abusing notation, we will denote sections $s\in \mathcal{V}=\operatorname{Fun}_{\Delta^1}(\Delta^1,\mathcal{M})$ by their restrictions $s=\left(s|_0,s|_1\right).$ To fully specify any $s\in \mathcal{V}$ involves the additional data of a map $s|_0\rightarrow G(s|_1)$. The set of vertices of the resulting subcategories of $\mathcal{V}$ from \Cref{adjsodlem} can be depicted up to equivalence as follows.
\[\mathcal{A}_0=\{\left(G(x),x\right)\}\quad\quad \mathcal{B}_0=\{\left(y,0\right)\} \quad\quad \mathcal{C}_0=\{\left(0,x\right)\}\quad\quad \mathcal{D}_0=\{\left(y,F(y)\right)\}\] The vertices $(G(x),x)$ in $\mathcal{A}$ have the property that the associated edge $G(x)\rightarrow G(x)$ is an equivalence. The vertices $(y,F(y))$ in $\mathcal{D}$ have the property that the associated edge $y\rightarrow GF(y)$ is a unit map of the adjunction $F\dashv G$. \end{remark} \Cref{adjsodlem} implies that the datum of a biCartesian semiorthogonal decomposition, meaning a semiorthogonal decomposition which is both Cartesian and coCartesian, is equivalent to the datum of an adjunction of stable $\infty$-categories. We can also describe adjoint triples or longer sequences of adjoint functors by sequences of biCartesian semiorthogonal decompositions. \begin{lemma}\label{adjsodlem2} Let $\mathcal{M}\rightarrow \Delta^1$ be an adjunction of stable $\infty$-categories associated to a pair of functors $F:\mathcal{Y}\leftrightarrow \mathcal{X}: G$ such that $F$ admits a left adjoint $E$. Consider the associated semiorthogonal decompositions $(\mathcal{A},\mathcal{B}),(\mathcal{B},\mathcal{C}),(\mathcal{C},\mathcal{D})$ of $\mathcal{V}=\operatorname{Fun}_{\Delta^1}(\Delta^1,\mathcal{M})$ from \Cref{adjsodlem} and denote $\mathcal{E}=\prescript{\perp}{}{\mathcal{D}}$. \begin{enumerate} \item The semiorthogonal decomposition $(\mathcal{C},\mathcal{D})$ is also coCartesian and $(\mathcal{D},\mathcal{E})$ forms a Cartesian semiorthogonal decomposition of $\mathcal{V}$. \item Let $x\in \mathcal{X}$. There is a $(\mathcal{C},\mathcal{D})$-coCartesian edge $e:\left(0,x\right)\xlongrightarrow{!}\left(E(x),FE(x)\right)$ in $\mathcal{V}$, such that the restriction to the second component is a unit map of the adjunction $E\dashv F$. \end{enumerate} \end{lemma} \begin{proof} We denote by $F'$ the functor classified by the Cartesian fibration $\chi(\mathcal{C},\mathcal{D})\rightarrow \Delta^1$. The functor $F'$ is equivalent to the composition of the functor $F$ with mutation equivalences. By assumption, the functor $F'$ admits a left adjoint, denoted $E'$. It follows that $\chi(\mathcal{C},\mathcal{D})\rightarrow \Delta^1$ is also coCartesian. Using \Cref{admprop}, we see that $(\mathcal{D},\mathcal{E})$ forms a Cartesian semiorthogonal decomposition of $\mathcal{V}$. To obtain the description of the coCartesian edge $e$, consider the following diagram in $\mathcal{V}$. \[ \begin{tikzcd} (0,FE(x))\arrow[d, "\ast"]&\\ (E(x),FE(x))&(0,x)\arrow[l, "!"]\arrow[ul] \end{tikzcd} \] The edge $(0,x)\rightarrow (0,FE(x))$ is a unit map of the adjunction $E'\dashv F'$, showing that the restriction to the second component $x\rightarrow FE(x)$ is a unit map of the adjunction $E\dashv F$. \end{proof} \section{Spherical adjunctions}\label{sec3} In this section we study spherical adjunctions in the setting of stable $\infty$-categories. Because of the added generality over spherical adjunctions of dg-categories, basic properties of spherical adjunctions known in the latter setting need to be proven again. A treatment of spherical adjunctions in the setting of stable $\infty$-categories also appears in \cite{DKSS19}, so that we can use all properties proven there. We show further basic properties of spherical adjunctions of stable $\infty$-categories, most notably the 2/4 property of spherical adjunctions in \Cref{sec2.3}. We begin with the definition of a spherical adjunction of stable $\infty$-categories.
\begin{definition} An adjunction $p:\mathcal{M}\rightarrow \Delta^1$ of stable $\infty$-categories $\mathcal{A}$ and $\mathcal{B}$ is called spherical if the associated twist functor $T_\mathcal{A}$ and the associated cotwist functor $T_\mathcal{B}$, as defined in \Cref{twistconstr}, are equivalences. \end{definition} An immediate consequence of the definition of the twist functors is that they commute with the adjoints. \begin{lemma}\label{commlem} Let $F:\mathcal{A}\leftrightarrow\mathcal{B}:G$ be an adjunction of stable $\infty$-categories, with twist functor $T_\mathcal{A}$ and cotwist functor $T_\mathcal{B}$. There exist equivalences of functors $ FT_\mathcal{A}\simeq T_\mathcal{B} F$ and $ T_\mathcal{A}G\simeq G T_\mathcal{B}.$ \end{lemma} \begin{proof} We construct only the equivalence $FT_\mathcal{A}\simeq T_\mathcal{B}F$; the second equivalence can be constructed analogously. Consider the full subcategory $\mathcal{D}\subset \operatorname{Fun}(\Delta^2\times\Delta^2\coprod_{\Delta^2}\Delta^4,\Gamma(F))$ spanned by diagrams of the following form. \[ \begin{tikzcd} & F(a) \arrow[rr, "\simeq"] & & F(a) \arrow[rr] & & 0 \\ & & & & & \\ & F(a) \arrow[uu, "\simeq"] \arrow[rr, "F(u_a)"] \arrow[rruu, "\simeq"] & & FGF(a) \arrow[uu, "cu_{F(a)}"'] \arrow[rr] & & FT_\mathcal{A}(a) \arrow[uu] \\ a \arrow[ru, "!"] \arrow[rr, "u_a", near end] & & GF(a) \arrow[lu, "\ast"] \arrow[ru, "!"] \arrow[ruuu, "\ast"] & & & \\ & 0 \arrow[uu] \arrow[rr] & & T_\mathcal{B}F(a) \arrow[uu] \arrow[rr, "\simeq"] & & b \arrow[uu, "\simeq"] \end{tikzcd} \] In the $3\times 3$-square in the above diagram, all rows and columns are extended to biCartesian squares, which we do not depict. We also do not depict all edges of the $4$-simplex with the vertices $a,GF(a),F(a),FGF(a),F(a)$. The edge $u_a$ is a unit map of the adjunction $F\dashv G$ at $a$ and the edge $cu_{F(a)}$ is a counit map of the adjunction $F\dashv G$ at $F(a)$. Due to the involved Cartesian and coCartesian edges, equivalences and biCartesian squares, we obtain that the restriction functor $\operatorname{res}:\mathcal{D}\rightarrow \mathcal{A}$ to the vertex $a$ is a trivial fibration. Choosing a section of $\operatorname{res}$ and composing with the restriction functor $\mathcal{D}\rightarrow \operatorname{Fun}(\Delta^1,\mathcal{B})$ to the edge $T_\mathcal{B}F(a)\xrightarrow{\simeq} FT_\mathcal{A}(a)$ provides us with a natural equivalence between the functors $T_\mathcal{B}F$ and $FT_\mathcal{A}$. \end{proof} \subsection{Spherical adjunctions and \texorpdfstring{$4$}{4}-periodic semiorthogonal decompositions}\label{sec4.2} \begin{definition} Let $\mathcal{V}$ be a stable $\infty$-category with a biCartesian semiorthogonal decomposition $(\mathcal{A},\mathcal{B})$ as defined in \Cref{sec1.5}. We call $(\mathcal{A},\mathcal{B})$ $4$-periodic, if $(\prescript{\perp}{}{\mathcal{B}}, \mathcal{A}^\perp)$ forms a semiorthogonal decomposition of $\mathcal{V}$. \end{definition} The relationship between $4$-periodic semiorthogonal decompositions and spherical adjunctions is due to \cite{HLS16} and was extended to stable $\infty$-categories in \cite{DKSS19}. We cite the next proposition from \cite{DKSS19}, describing this relationship. \begin{proposition}\label{4pedprop1} Let $\mathcal{V}$ be a stable $\infty$-category with a biCartesian semiorthogonal decomposition $(\mathcal{A},\mathcal{B})$. The semiorthogonal decomposition $(\mathcal{A},\mathcal{B})$ is $4$-periodic if and only if the adjunction \mbox{$\chi(\mathcal{A},\mathcal{B})\rightarrow \Delta^1$} is spherical.
\end{proposition} \begin{corollary}\label{lrcor} Let $E\dashv F\dashv G$ be an adjoint triple of functors between stable $\infty$-categories. Then the adjunction $E\dashv F$ is spherical if and only if the adjunction $F\dashv G$ is spherical. \end{corollary} \begin{proof} The adjoint triple $E\dashv F \dashv G$ corresponds by \Cref{adjsodlem} and \Cref{adjsodlem2} to a sequence of semiorthogonal decompositions $(\mathcal{A},\mathcal{B}),(\mathcal{B},\mathcal{C}),(\mathcal{C},\mathcal{D}),(\mathcal{D},\mathcal{E})$ of $\operatorname{Fun}_{\Delta^1}(\Delta^1,\Gamma(F))$. \Cref{4pedprop1} implies the following. \begin{itemize} \item The adjunction $E\dashv F$ is spherical if and only if the semiorthogonal decomposition $(\mathcal{C},\mathcal{D})$ is $4$-periodic. \item The adjunction $F\dashv G$ is spherical if and only if the semiorthogonal decomposition $(\mathcal{B},\mathcal{C})$ is $4$-periodic. \end{itemize} The statement follows from the observation that $(\mathcal{C},\mathcal{D})$ is $4$-periodic if and only if $(\mathcal{B},\mathcal{C})$ is $4$-periodic. \end{proof} We will further need the following proposition shown in \cite{DKSS19}. \begin{proposition}\label{4pedprop2} Let $F:\mathcal{A}\leftrightarrow \mathcal{B}:G$ be a spherical adjunction. Then $F$ admits a further left adjoint $E$ and $G$ admits a further right adjoint $H$ which satisfy $E\simeq T_\mathcal{A}^{-1} G$ and $H\simeq F T_\mathcal{A}^{-1}$. \end{proposition} In the following construction we start with an adjoint triple of functors of stable $\infty$-categories and investigate in more detail the condition of $4$-periodicity of the corresponding semiorthogonal decompositions. \begin{construction}\label{Kconstr} Consider an adjunction $\mathcal{M}\rightarrow \Delta^1$ of stable $\infty$-categories, classifying the adjoint pair of functors $F:\mathcal{Y}\leftrightarrow \mathcal{X}:G$. Assume further that $F$ admits a left adjoint $E$. We denote $\mathcal{V}=\operatorname{Fun}_{\Delta^1}(\Delta^1,\mathcal{M})$ and $\mathcal{A},\mathcal{B},\mathcal{C},\mathcal{D}$ the stable subcategories of $\mathcal{V}$ defined in \Cref{adjsodlem}. We further denote $\mathcal{E}=\prescript{\perp}{}{\mathcal{D}}$. \Cref{adjsodlem2} implies that $(\mathcal{D},\mathcal{E})$ also forms a semiorthogonal decomposition of $\mathcal{V}$. We denote the cotwist functor of the adjunction $F\dashv G$ by $T_\mathcal{X}$ and the twist functor of the adjunction $E\dashv F$ by $T_\mathcal{X}'$. We use in the following the notation introduced in \Cref{sodrem}. Consider the full subcategory $\mathcal{K}_1$ of $\operatorname{Fun}(\Delta^1\times\Delta^1,\mathcal{V})$ spanned by biCartesian squares \[ \begin{tikzcd} \left(0,x\right)\arrow[r, "!"]\arrow[dr, phantom, "\square"]\arrow[d]& \left(E(x),FE(x)\right)\arrow[d, "\ast"]\\ \left(0,0\right)\arrow[r]& \left(E(x),T_\mathcal{X}'(x)\right) \end{tikzcd} \] such that the top edge is $(\mathcal{C},\mathcal{D})$-coCartesian and the right edge is $(\mathcal{D},\mathcal{E})$-Cartesian. Using \Cref{mutlem}, we see that the restriction functor to the vertex $(E(x),T_\mathcal{X}'(x))$ is a trivial fibration from $\mathcal{K}_1$ to $\mathcal{E}$. All diagrams in $\mathcal{K}_1$ have by \Cref{adjsodlem2} the property that the restriction to the edge $x\rightarrow FE(x)$ is a unit map of the adjunction $E\dashv F$. 
Consider the full subcategory $\mathcal{K}_2$ of $\operatorname{Fun}(\Delta^1\times\Delta^1,\mathcal{V})$ spanned by diagrams of the form \[ \begin{tikzcd} \left(0,T_\mathcal{X}(x)\right)\arrow[r]\arrow[dr, phantom, "\square"]\arrow[d]& \left(G(x),FG(x)\right)\arrow[d]\\ \left(0,0\right)\arrow[r]& \left(G(x),x\right) \end{tikzcd}\] such that the restriction to the $2$-simplex spanned by the vertices $G(x)$, $FG(x)$ and $x$ of the corresponding diagram in $\mathcal{M}$ is of the following form. \[\begin{tikzcd} G(x)\arrow[r, "!"]\arrow[dr, "*"]& FG(x)\arrow[d]\\ & x \end{tikzcd} \] We observe that the restriction functor to the vertex $(G(x),x)$ is a trivial fibration from $\mathcal{K}_2$ to $\mathcal{A}$. If $\mathcal{K}_1=\mathcal{K}_2$, we immediately obtain that $\mathcal{A}=\mathcal{E}$. This implies that the semiorthogonal decomposition $(\mathcal{A},\mathcal{B})$ is $4$-periodic and that the adjunction $\mathcal{M}\rightarrow \Delta^1$ is spherical. The converse is also true, as we will show in \Cref{uculem3}. \end{construction} \subsection{Unit and counit maps of spherical adjunctions} In this section we study the properties of the unit and counit maps of spherical adjunctions. The following lemma is due to \cite{DKSS19}. \begin{lemma}\label{uculem1} Let $\mathcal{V}$ be a stable $\infty$-category with a $4$-periodic semiorthogonal decomposition $(\mathcal{C},\mathcal{D})$. Consider a diagram $\Delta^2\times\Delta^2\rightarrow \mathcal{V}$ of the form \[ \begin{tikzcd} z\arrow[dr,phantom,"\square"] \arrow[r] \arrow[d] & b\arrow[dr,phantom,"\square"] \arrow[d, "!"] \arrow[r] & 0 \arrow[d] \\ c'\arrow[dr,phantom,"\square"] \arrow[d] \arrow[r] & c \arrow[dr,phantom,"\square"]\arrow[d] \arrow[r, "\ast"] & d \arrow[d] \\ 0 \arrow[r] & c'' \arrow[r] & z' \end{tikzcd} \] satisfying \begin{itemize} \item $c,c',c''\in \mathcal{C}$, $d\in \mathcal{D}$ and $b\in \mathcal{C}^\perp$, \item the edge $b\rightarrow c$ is $(\mathcal{C}^\perp,\mathcal{C})$-coCartesian and the edge $c\rightarrow d$ is $(\mathcal{C},\mathcal{D})$-Cartesian. \end{itemize} Then the following statements are equivalent. \begin{enumerate} \item The edge $b\rightarrow c''$ is $(\mathcal{C}^\perp,\mathcal{C})$-Cartesian. \item The edge $c'\rightarrow d$ is $(\mathcal{C},\mathcal{D})$-coCartesian. \end{enumerate} \end{lemma} \begin{proof} By \Cref{mutlem} the edge $b\rightarrow c''$ is $(\mathcal{C}^\perp,\mathcal{C})$-Cartesian if and only if $z\in \mathcal{C}^{\perp\perp}$ and the edge $c'\rightarrow d$ is $(\mathcal{C},\mathcal{D})$-coCartesian if and only if $z'\in \prescript{\perp}{}{\mathcal{D}}$. We obtain that statement 1 and statement 2 are equivalent using the equivalence $z[1]\simeq z'$ and that by $4$-periodicity $\mathcal{C}^{\perp\perp}=\prescript{\perp}{}{\mathcal{D}}$. \end{proof} \begin{remark}\label{ucurem} Let $F:\mathcal{Y}\leftrightarrow\mathcal{X}:G$ be a spherical adjunction of stable $\infty$-categories. Denote the left adjoint of $F$ by $E$. Consider the following fiber and cofiber sequence in $\mathcal{X}$. \[ \begin{tikzcd} x' \arrow[d]\arrow[dr, phantom, "\square"] \arrow[r] & x \arrow[d] \\ 0 \arrow[r] & x'' \end{tikzcd} \] \Cref{uculem1} implies that the following two conditions are equivalent. \begin{enumerate} \item The edge $x'\rightarrow x$ is a unit map of the adjunction $E\dashv F$. \item The edge $x\rightarrow x''$ is a counit map of the adjunction $F\dashv G$. \end{enumerate} \end{remark}
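The following elementary example, which we record only for orientation and which is not needed in the sequel, illustrates the notions involved. \begin{example} Let $\mathcal{Y}$ and $\mathcal{X}$ be stable $\infty$-categories and consider the adjunction $0:\mathcal{Y}\leftrightarrow \mathcal{X}:0$ given by the zero functors, whose unit and counit maps are the canonical maps $y\rightarrow 0$ and $0\rightarrow x$. The associated twist and cotwist functors, in the sense of \Cref{twistconstr}, are given by shift functors and are in particular equivalences, so that this adjunction is spherical. The zero functor $\mathcal{X}\rightarrow \mathcal{Y}$ is moreover a left adjoint of $0:\mathcal{Y}\rightarrow \mathcal{X}$, and for a fiber and cofiber sequence $x'\rightarrow x\rightarrow x''$ in $\mathcal{X}$ both conditions of \Cref{ucurem} amount to the condition that $x\simeq 0$, in which case $x''\simeq x'[1]$. \end{example}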
The next lemma shows that, in the setting of \Cref{ucurem}, the other pair of unit and counit maps is also related. \begin{lemma}\label{uculem2} Let $F:\mathcal{Y}\leftrightarrow\mathcal{X}:G$ be a spherical adjunction of stable $\infty$-categories. Denote the left adjoint of $F$ by $E$. Consider the following fiber and cofiber sequence in $\mathcal{Y}$. \[ \begin{tikzcd} y' \arrow[d]\arrow[dr, phantom, "\square"] \arrow[r] & y \arrow[d] \\ 0 \arrow[r] & y'' \end{tikzcd} \] The following two statements are equivalent. \begin{enumerate} \item The edge $y'\rightarrow y$ is a unit map of the adjunction $F\dashv G$. \item The edge $y\rightarrow y''$ is a counit map of the adjunction $E\dashv F$. \end{enumerate} \end{lemma} \begin{proof} Denote the right adjoint of $G$ by $H$. By \Cref{ucurem}, applied to the adjunction $G\dashv H$, which is spherical by \Cref{lrcor}, we obtain that the edge $y'\rightarrow y$ is a unit map of the adjunction $F\dashv G$ if and only if the edge $y\rightarrow y''$ is a counit map of the adjunction $G\dashv H$. Denote by $T_\mathcal{X}$ the cotwist functor of the adjunction $F\dashv G$. By \Cref{4pedprop2}, combined with \Cref{commlem}, we find equivalences $G\simeq ET_{\mathcal{X}}$ and $H\simeq T_\mathcal{X}^{-1}F$. This implies that any counit map of the adjunction $G\simeq ET_{\mathcal{X}}\dashv H\simeq T_\mathcal{X}^{-1}F$ is also a counit map of the adjunction $E\dashv F$ and vice versa. The equivalence of statements 1 and 2 follows. \end{proof} \begin{lemma}\label{uculem3} Consider an adjunction $\mathcal{M}\rightarrow \Delta^1$ of stable $\infty$-categories, associated to the pair of adjoint functors $F:\mathcal{Y}\leftrightarrow \mathcal{X}:G$. Assume further that $F$ admits a left adjoint $E$. Let $\mathcal{V}=\operatorname{Fun}_{\Delta^1}(\Delta^1,\mathcal{M})$. Denote by ${T}_\mathcal{X}$ the cotwist functor of the adjunction $F\dashv G$. Then the following are equivalent. \begin{enumerate} \item For every $x\in \mathcal{X}$ there exists an equivalence $G(x)\simeq E\circ T_\mathcal{X}(x)$ and given a fiber and cofiber sequence in $\mathcal{X}$ \[ \begin{tikzcd} x'\arrow[dr, phantom, "\square"]\arrow[r]\arrow[d]& x \arrow[d]\\ 0\arrow[r]& x'' \end{tikzcd} \] the edge $x'\rightarrow x$ is a unit map of the adjunction $E\dashv F$ if and only if the edge $x\rightarrow x''$ is a counit map of the adjunction $F\dashv G$. \item The full subcategories $\mathcal{K}_1,\mathcal{K}_2\subset \operatorname{Fun}(\Delta^1\times\Delta^1,\mathcal{V})$ introduced in \Cref{Kconstr} are identical. \item The adjunction $F\dashv G$ is spherical. \end{enumerate} \end{lemma} \begin{proof} It is shown in \Cref{Kconstr} that statement 2 implies statement 3. Assume statement 3. By \Cref{4pedprop2}, combined with \Cref{commlem}, we find $G\simeq E\circ T_{\mathcal{X}}$. Statement 1 thus follows from \Cref{ucurem}. We now show that statement 1 implies statement 2. We observe that statement 1 directly implies that the cotwist functor $T_\mathcal{X}$ of the adjunction $F\dashv G$ is inverse to the twist functor $T_\mathcal{X}'$ of the adjunction $E\dashv F$. Consider a diagram $\Delta^1\times\Delta^1\rightarrow \mathcal{V}$ corresponding to a vertex in $\mathcal{K}_2$, depicted as follows. \begin{equation}\label{k2elmdiag} \begin{tikzcd} \left(0,T_\mathcal{X}(x)\right)\arrow[r]\arrow[dr, phantom, "\square"]\arrow[d]& \left(G(x),FG(x)\right)\arrow[d]\\ \left(0,0\right)\arrow[r]& \left(G(x),x\right) \end{tikzcd} \end{equation} The restriction of \eqref{k2elmdiag} to the edge $T_\mathcal{X}(x)\rightarrow FG(x)$ is by assumption a unit map of the adjunction $E\dashv F$ at $T_\mathcal{X}(x)$.
Consider the biCartesian semiorthogonal decomposition $(\mathcal{C},\mathcal{D})$ used in \Cref{Kconstr} and the $(\mathcal{C},\mathcal{D})$-coCartesian edge $e_1:(0,T_\mathcal{X}(x))\rightarrow (ET_\mathcal{X}(x),FET_\mathcal{X}(x))$. This edge corresponds to the following diagram in $\Gamma(F)$ \[ \begin{tikzcd} 0 \arrow[d] \arrow[r] & ET_\mathcal{X}(x) \arrow[d, "!"] \\ T_\mathcal{X}(x) \arrow[r, "u"] & FET_\mathcal{X}(x) \end{tikzcd} \] where the edge $u$ is a unit map of the adjunction $E\dashv F$. The restriction $e_2:(0,T_\mathcal{X}(x))\rightarrow (G(x),FG(x))$ of diagram \eqref{k2elmdiag} corresponds to the following diagram in $\Gamma(F)$. \[ \begin{tikzcd} 0 \arrow[d] \arrow[r] & G(x) \arrow[d, "!"] \\ T_\mathcal{X}(x) \arrow[r, "u"] & FG(x) \end{tikzcd} \] We use the equivalence \[ET_\mathcal{X}(x)\simeq GT^{-1}_\mathcal{X}T_\mathcal{X}(x)\simeq G(x)\] to extend the diagram corresponding to $e_2$ to the following diagram. \[ \begin{tikzcd} 0 \arrow[d] \arrow[r] & G(x) \arrow[d, "!"] \arrow[r, "\simeq"] & ET_\mathcal{X}(x) \arrow[d, "!"] \\ T_\mathcal{X}(x) \arrow[r, "u"] & FG(x) \arrow[r, "\simeq"] & FET_\mathcal{X}(x) \end{tikzcd} \] The composed edge $T_\mathcal{X}(x)\rightarrow FG(x)\simeq FET_\mathcal{X}(x)$ remains a unit map of the adjunction \mbox{$E\dashv F$}, so that we can produce an equivalence between $e_1$ and $e_2$. This shows that $e_2$ is also $(\mathcal{C},\mathcal{D})$-coCartesian so that using \Cref{mutlem} we obtain that the diagram \eqref{k2elmdiag} lies in $\mathcal{K}_1$. It follows that $\mathcal{K}_2\subset \mathcal{K}_1$. The restriction functor $\operatorname{Fun}(\Delta^1\times\Delta^1,\mathcal{V})\rightarrow \operatorname{Fun}(\Delta^{\{0,0\}},\mathcal{V})$ restricts to a trivial fibration $\mathcal{K}_1\rightarrow \mathcal{C}$. This implies that two elements in $\mathcal{K}_1$ are equivalent if and only if their restrictions to $\Delta^{\{0,0\}}$ are equivalent. Using that $T_\mathcal{X}$ is an equivalence, we find for every $(0,x')\in \mathcal{C}$ an element in $\mathcal{K}_2$ whose restriction to $\Delta^{\{0,0\}}$ is given by $(0,x')$. Using that $\mathcal{K}_2$ is closed under equivalences, we obtain $\mathcal{K}_1\subset\mathcal{K}_2$. This implies $\mathcal{K}_1=\mathcal{K}_2$. \end{proof} \subsection{The 2/4 property}\label{sec2.3} In this section we prove the 2/4 property of spherical adjunctions of \cite{AL17} in the setting of $\infty$-categories. A key tool in the proof is the description of the sphericalness of an adjunction in terms of the 4-periodicity of the corresponding biCartesian semiorthogonal decomposition. A further important ingredient in the proof is the relation between the sphericalness of an adjunction and properties of its unit and counit maps, see \Cref{uculem3}. \begin{lemma}\label{2/4lem1} Let $F:\mathcal{Y}\leftrightarrow\mathcal{X}:G$ be an adjunction of stable $\infty$-categories such that $F$ admits a left adjoint $E$. Denote the cotwist functor of $F\dashv G$ by $T_\mathcal{X}$ and the twist functor of $E\dashv F$ by $T_\mathcal{X}'$. The adjunction $F\dashv G$ is spherical if and only if the following two conditions are satisfied.
\begin{enumerate} \item For any $x\in \mathcal{X}$, consider the edge $\alpha:\Delta^1\rightarrow \mathcal{Y}$ contained in the biCartesian square in $\mathcal{Y}$ \[ \begin{tikzcd} G(x) \arrow[d, "{G(u')}"'] \arrow[r] \arrow[rd, "\square", phantom] & 0 \arrow[d] \\ GFE(x) \arrow[r, "\alpha"'] & GT_\mathcal{X}'(x) \end{tikzcd} \] where $u'$ is a unit map of the adjunction $E\dashv F$. The composition \[ E(x)\xrightarrow{u} GFE(x)\xrightarrow{\alpha} GT_\mathcal{X}'(x)\] of the unit map $u$ of the adjunction $F\dashv G$ and $\alpha$ is an equivalence. \item For any $x\in \mathcal{X}$, consider the edge $\beta:\Delta^1\rightarrow \mathcal{Y}$ contained in the biCartesian square in $\mathcal{Y}$ \[ \begin{tikzcd} ET_\mathcal{X}(x) \arrow[d, "\beta"'] \arrow[r] \arrow[rd, "\square", phantom] & 0 \arrow[d] \\ EFG(x) \arrow[r, "E(cu)"'] & E(x) \end{tikzcd} \] where $cu$ is a counit map of the adjunction $F\dashv G$. The composition \[ET_\mathcal{X}(x)\xrightarrow{\beta}EFG(x)\xrightarrow{cu'}G(x)\] of $\beta$ and the counit map $cu'$ of the adjunction $E\dashv F$ is an equivalence. \end{enumerate} \begin{proof} Consider the adjunction $\chi(G)\rightarrow \Delta^1$. We obtain by \Cref{adjsodlem,adjsodlem2} semiorthogonal decompositions $(\mathcal{A},\mathcal{B}),(\mathcal{B},\mathcal{C}),(\mathcal{C},\mathcal{D}),(\mathcal{D},\mathcal{E})$ of $\mathcal{V}=\operatorname{Fun}_{\Delta^1}(\Delta^1,\chi(G))$. We show that condition 1 is equivalent to $\mathcal{E}\subset \mathcal{A}$. We then present a dual version of the argument below, showing that condition 2 is equivalent to $\mathcal{A}\subset \mathcal{E}$. Together, these two statements imply the Lemma. Let $x\in \mathcal{X}$ and consider the vertex $(E(x),T_\mathcal{X}'(x))\in \mathcal{E}$ contained in a diagram $D:(\Delta^1)^{\times 2}\rightarrow \mathcal{V}$ of the following form. \[ \begin{tikzcd} \left(0,x\right)\arrow[r, "!"]\arrow[dr, phantom, "\square"]\arrow[d]& \left(E(x),FE(x)\right)\arrow[d, "\ast"]\\ \left(0,0\right)\arrow[r]& \left(E(x),T_\mathcal{X}'(x)\right) \end{tikzcd} \] The diagram $D$ corresponds to a vertex in $\mathcal{K}_1$ as introduced in \Cref{Kconstr}. The datum of the functor $D$ is equivalent to the datum of a functor $D_1:\Delta^1\times(\Delta^1)^{\times 2}\rightarrow \chi(G)$. Consider the restriction $D'$ of $D_1$ to $\{1\}\times (\Delta^1)^{\times 2}\rightarrow \chi(G)$. Denote the right Kan extension relative $\chi(G)\rightarrow \Delta^1$ of $D'$ to $\Delta^1\times (\Delta^1)^{\times 2}$ by $D_2$. We can depict the vertices of $D_2$ as follows. \[ \begin{tikzcd} {(G(x),x)} \arrow[d] \arrow[r] \arrow[rd, "\square", phantom] & {(GFE(x),FE(x))} \arrow[d] \\ {(0,0)} \arrow[r] & {(GT_\mathcal{X}'(x),T_\mathcal{X}'(x))} \end{tikzcd} \] By the properties of the Kan extension, we find a map $D_1\rightarrow D_2$ whose restriction to $\Delta^1\times (\{1\}\times\Delta^1)$ can be depicted as follows. \begin{equation}\label{diag6'} \begin{tikzcd} (E(x),FE(x))\arrow[r, dotted]\arrow[d]& (GFE(x),FE(x))\arrow[d]\\ (E(x),T_\mathcal{X}'(x))\arrow[r, dotted, "\phi"]& (GT_\mathcal{X}'(x),T_\mathcal{X}'(x)) \end{tikzcd} \end{equation} The dotted edges represent (part of) the map $D_1\rightarrow D_2$. The edge $\phi$ is an equivalence if and only if $(E(x),T_\mathcal{X}'(x))$ lies in $\mathcal{A}$.
The second component of $\phi$ is by construction an equivalence and diagram \eqref{diag6'} shows that the first component of $\phi$ is an equivalence if and only if condition 1 is satisfied for this particular $x\in \mathcal{X}$. This shows that condition 1 is equivalent to $\mathcal{E}\subset \mathcal{A}.$ \Cref{equlem} provides us with an equivalence $\mathcal{V}'\coloneqq\operatorname{Fun}_{\Delta^1}(\Delta^1,\Gamma(E))\simeq \mathcal{V}$. We denote the essential images of the stable subcategories $\mathcal{A},\mathcal{B},\mathcal{C},\mathcal{D},\mathcal{E}\subset \mathcal{V}$ under the equivalence $\mathcal{V}\simeq \mathcal{V}'$ by $\mathcal{A}',\mathcal{B}',\mathcal{C}',\mathcal{D}',\mathcal{E}'$. We note that $\mathcal{A}'\subset \mathcal{E}'$ if and only if $\mathcal{A}\subset \mathcal{E}$. We show that condition 2 is equivalent to $\mathcal{A}'\subset \mathcal{E}'$. We denote the elements of $\mathcal{V}'$ by pairs $(x,y)$ such that $x\in \mathcal{X}$ and $y\in \mathcal{Y}$, suppressing the map $E(x)\rightarrow y$ contained in the data of $(x,y)\in \mathcal{V}'$. The right gluing functor of the biCartesian semiorthogonal decomposition $(\mathcal{B},\mathcal{C})$ can be identified with $G$; the right gluing functor of the biCartesian semiorthogonal decomposition $(\mathcal{C},\mathcal{D})$ can thus be identified with $F$. Using $\Gamma(E)\simeq \chi(F)$, we obtain a description of $\mathcal{B}',\mathcal{C}',\mathcal{D}',\mathcal{E}'$ as in \Cref{sodrem}: \begin{itemize} \item $\mathcal{B}'$ is spanned by elements of the form $(F(y),y)$ such that the edge $EF(y)\rightarrow y$ is a counit map of the adjunction $E\dashv F$. \item $\mathcal{C}'$ is spanned by elements of the form $(x,0)$. \item $\mathcal{D}'$ is spanned by elements of the form $(0,y)$. \item $\mathcal{E}'$ is spanned by elements $(x,E(x))$, such that the edge $E(x)\rightarrow E(x)$ is an equivalence. \end{itemize} Applying \Cref{mutlem} we obtain that $\mathcal{A}'$ is spanned by the vertices $(T_\mathcal{X}(x),G(x))$, fitting into a diagram in $\mathcal{V}'$ \begin{equation}\label{fdiag} \begin{tikzcd} {(T_\mathcal{X}(x),G(x))} \arrow[d] \arrow[r, "!"] \arrow[rd, "\square", phantom] & {(FG(x),G(x))} \arrow[d, "\ast"] \\ {(0,0)} \arrow[r] & {(x,0)} \end{tikzcd} \end{equation} where the top edge is $(\mathcal{A}',\mathcal{B}')$-coCartesian and the right edge is $(\mathcal{B}',\mathcal{C}')$-Cartesian. Consider the biCartesian square in $\mathcal{X}$, \begin{equation}\label{bicartsq1} \begin{tikzcd} T_\mathcal{X}(x) \arrow[d] \arrow[r] & FG(x) \arrow[d, "cu"] \\ 0 \arrow[r] & x \end{tikzcd} \end{equation} where $cu$ is the counit map of the adjunction $F\dashv G$. We extend \eqref{bicartsq1} via left Kan extension relative $\Gamma(E)\rightarrow \Delta^1$ to the following diagram. \begin{equation}\label{rkandiag} \begin{tikzcd} {(T_\mathcal{X}(x),ET_\mathcal{X}(x))} \arrow[d] \arrow[r] & {(FG(x),EFG(x))} \arrow[d] \\ {(0,0)} \arrow[r] & {(x,E(x))} \end{tikzcd} \end{equation} We obtain a map from \eqref{rkandiag} to a diagram of the form \eqref{fdiag}. Restricting the diagram, we obtain the following diagram. \[ \begin{tikzcd} {(T_\mathcal{X}(x),ET_\mathcal{X}(x))} \arrow[d] \arrow[r, "\phi", dotted] & {(T_\mathcal{X}(x),G(x))} \arrow[d] \\ {(FG(x),EFG(x))} \arrow[r, dotted] & {(FG(x),G(x))} \end{tikzcd} \] We find that $\phi$ is an equivalence if and only if $(T_\mathcal{X}(x),G(x))\in \mathcal{E}'$.
The first component of $\phi$ is by construction an equivalence and the second component is an equivalence if and only if condition 2 is satisfied for this $x\in \mathcal{X}$. This shows that $\mathcal{A}'\subset \mathcal{E}'$ if and only if condition 2 holds. \end{proof} \begin{lemma}\label{2/4lem2} Let $F:\mathcal{Y}\leftrightarrow\mathcal{X}:G$ be an adjunction of stable $\infty$-categories such that $F$ admits a left adjoint $E$. Denote the twist and cotwist functors of $F\dashv G$ by $T_\mathcal{Y}$ and $T_\mathcal{X}$, respectively, and the twist and cotwist functors of $E\dashv F$ by $T_\mathcal{X}'$ and $T_\mathcal{Y}'$, respectively. Let $x\in \mathcal{X}$ and consider the following diagrams in $\mathcal{Y}$. \begin{equation}\label{24sq} \begin{tikzcd} {y[-1]} \arrow[d] \arrow[r] \arrow[rd, "\square", phantom] & G(x) \arrow[d, "G(u')"] \arrow[r] \arrow[rd, "\square", phantom] & 0 \arrow[d] \\ E(x) \arrow[r, "u"] \arrow[d] \arrow[rd, "\square", phantom] & GFE(x) \arrow[d, "\alpha'"] \arrow[r, "\alpha"] \arrow[rd, "\square", phantom] & GT_\mathcal{X}'(x) \arrow[d] \\ 0 \arrow[r] & T_\mathcal{Y}E(x) \arrow[r] & y \end{tikzcd} \quad\quad \begin{tikzcd} {y'[-1]} \arrow[d] \arrow[r] \arrow[rd, "\square", phantom] & ET_\mathcal{X}(x) \arrow[d, "\beta"] \arrow[r] \arrow[rd, "\square", phantom] & 0 \arrow[d] \\ T_\mathcal{Y}'G(x) \arrow[r, "\beta'"] \arrow[d] \arrow[rd, "\square", phantom] & EFG(x) \arrow[d, "cu'"] \arrow[r, "E(cu)"] \arrow[rd, "\square", phantom] & E(x) \arrow[d] \\ 0 \arrow[r] & G(x) \arrow[r] & y' \end{tikzcd} \end{equation} The edges $u$ and $cu$ are unit and counit maps of the adjunction $F\dashv G$ and the edges $u'$ and $cu'$ are unit and counit maps of the adjunction $E\dashv F$. Then: \begin{enumerate} \item The composition $e_1:G(x)\xrightarrow{G(u')}GFE(x)\xrightarrow{\alpha'}T_\mathcal{Y}E(x)$ is an equivalence if and only if the composition $e_2:E(x)\xrightarrow{u}GFE(x)\xrightarrow{\alpha} GT_\mathcal{X}'(x)$ is an equivalence. \item The composition $e_3:ET_\mathcal{X}(x)\xrightarrow{\beta}EFG(x)\xrightarrow{cu'} G(x)$ is an equivalence if and only if the composition $e_4:T_\mathcal{Y}'G(x)\xrightarrow{\beta'}EFG(x)\xrightarrow{E(cu)} E(x)$ is an equivalence. \end{enumerate} \end{lemma} \begin{proof} The pasting law for pullbacks implies that the edge $e_1$ is an equivalence if and only if $y[-1]\simeq 0$ and further that the edge $e_2$ is an equivalence if and only if $y\simeq 0$. This shows \mbox{statement 1}. Statement 2 is shown analogously. \end{proof} \begin{remark}\label{2/4rem} In the setting of \Cref{2/4lem2} there exist natural transformations \[ \eta_1:G\rightarrow T_{\mathcal{Y}}E\,, \quad \quad \eta_2:E\rightarrow GT_{\mathcal{X}}'\,,\] \[ \eta_3:ET_{\mathcal{X}}\rightarrow G\,, \quad \quad \eta_4:T_{\mathcal{Y}}'G \rightarrow E\,,\] such that evaluating $\eta_i$ on $x\in \mathcal{X}$ yields the edge $e_i$. Using that a natural transformation is an equivalence if and only if it is pointwise an equivalence, we see that \Cref{2/4lem2} implies that $\eta_1$ is an equivalence if and only if $\eta_2$ is an equivalence and that $\eta_3$ is an equivalence if and only if $\eta_4$ is an equivalence. \end{remark} \begin{theorem}[The 2/4 property of spherical adjunctions]\label{2/4prop} Let $F:\mathcal{Y}\leftrightarrow\mathcal{X}:G$ be an adjunction of stable $\infty$-categories and assume that $F$ admits a left adjoint $E$.
Denote the twist and cotwist functors of $F\dashv G$ by $T_\mathcal{Y}$ and $T_\mathcal{X}$, respectively, and the twist and cotwist functors of $E\dashv F$ by $T_\mathcal{X}'$ and $T_\mathcal{Y}'$, respectively. The adjunction $F\dashv G$ is spherical if and only if any two of the following four conditions are satisfied. \begin{enumerate} \item The twist functor $T_\mathcal{X}'$ and cotwist functor $T_\mathcal{X}$ are inverse equivalences. \item The twist functor $T_\mathcal{Y}$ and the cotwist functor $T_\mathcal{Y}'$ are inverse equivalences. \item The natural transformations $\eta_1$ and $\eta_2$ introduced in \Cref{2/4rem} are equivalences. \item The natural transformations $\eta_3$ and $\eta_4$ introduced in \Cref{2/4rem} are equivalences. \end{enumerate} \end{theorem} \begin{proof} If $F\dashv G$ is a spherical adjunction, we deduce conditions 1 and 2 by \Cref{ucurem} and \Cref{uculem2} and conditions 3 and 4 by \Cref{2/4lem1,2/4lem2}. Conditions 1 and 2 imply the sphericalness of the adjunction $F\dashv G$ by definition. Conditions 3 and 4 imply the sphericalness of the adjunction $F\dashv G$ by \Cref{2/4lem1}. Assume conditions 1 and 3. The proof of \Cref{2/4lem1} shows that condition 3 is equivalent to $ \mathcal{E}\subset \mathcal{A}$ with the notation used there. Let $(G(x),x)\in \mathcal{A}$. As discussed in \Cref{Kconstr}, the $\infty$-category $\mathcal{E}$ is spanned by objects $(E(x'),T_\mathcal{X}'(x'))\in \mathcal{V}$ fitting into a diagram \[ \begin{tikzcd} \left(0,x'\right)\arrow[r, "!"]\arrow[dr, phantom, "\square"]\arrow[d]& \left(E(x'),FE(x')\right)\arrow[d, "\ast"]\\ \left(0,0\right)\arrow[r]& \left(E(x'),T_\mathcal{X}'(x')\right) \end{tikzcd}\] corresponding to a vertex in $\mathcal{K}_1$. Thus for $x'=T_\mathcal{X}(x)$ we obtain that $(ET_\mathcal{X}(x),T_\mathcal{X}'T_\mathcal{X}(x))\simeq (ET_\mathcal{X}(x),x)\in \mathcal{E}\subset \mathcal{A}$. Using that the restriction functor to the second component is a trivial fibration from $\mathcal{A}$ to $\mathcal{X}$, we obtain that $(G(x),x)\simeq (ET_\mathcal{X}(x),x)\in \mathcal{E}$. Thus $\mathcal{E}=\mathcal{A}$ and the adjunction $F\dashv G$ is spherical by \Cref{4pedprop1}. Showing that the conditions 1 and 4 imply the sphericalness of the adjunction $F\dashv G$ is analogous to showing that the conditions 1 and 3 imply the sphericalness. Assume conditions 2 and 3. The proof of \Cref{2/4lem1} shows that condition 3 is equivalent to $ \mathcal{E}\subset \mathcal{A}$ with the notation used there. Using that $T_\mathcal{Y}$ is an equivalence, we can compose the adjunctions $E\dashv F$ and $T_\mathcal{Y}\dashv T_\mathcal{Y}^{-1}$ to obtain an adjunction $G\simeq T_\mathcal{Y}E\dashv FT_\mathcal{Y}'=:H$. We want to show that statement 1 of \Cref{uculem3} is satisfied for the adjunction $G\dashv H$ to deduce that $G\dashv H$ is spherical. \Cref{lrcor} then implies that $F\dashv G$ is also spherical. The counit of the adjunction $G\dashv H$ is equivalent to the counit of the adjunction $E\dashv F$, so that the cotwist functors of the two adjunctions are equivalent. This shows that the first part of statement 1 holds. \Cref{adjsodlem2} implies that $(\mathcal{A}^\perp,\mathcal{A})$ forms a semiorthogonal decomposition of $\mathcal{V}$. The condition $\mathcal{E}\subset \mathcal{A}$ implies $\mathcal{A}^\perp\subset \mathcal{E}^\perp=\mathcal{D}$. 
Consider any diagram $\Delta^2\times\Delta^2\rightarrow \mathcal{V}$ of the following form, \begin{equation}\label{22diag} \begin{tikzcd} z \arrow[r]\arrow[dr,phantom,"\square"]\arrow[d] & a\arrow[dr,phantom,"\square"] \arrow[d, "!"] \arrow[r] & 0 \arrow[d] \\ b'\arrow[dr,phantom,"\square"] \arrow[d] \arrow[r] & b\arrow[dr,phantom,"\square"] \arrow[d] \arrow[r, "\ast"] & c \arrow[d] \\ 0 \arrow[r] & b'' \arrow[r] & z' \end{tikzcd} \end{equation} satisfying \begin{enumerate} \item[1)] $a\in \mathcal{A}$, $b,b',b''\in \mathcal{B}$ and $c\in \mathcal{C}$, \item[2)] the edge $a\rightarrow b$ is $(\mathcal{A},\mathcal{B})$-coCartesian, \item[3)] the edge $b\rightarrow c$ is $(\mathcal{B},\mathcal{C})$-Cartesian, \item[4)] the edge $a\rightarrow b''$ is $(\mathcal{A},\mathcal{B})$-Cartesian. \end{enumerate} Condition 4) implies by \Cref{mutlem} that $z\in \prescript{\perp}{}{\mathcal{A}} \subset \mathcal{D}$. Thus $z'\simeq z[1]\in \mathcal{D}$. By \Cref{mutlem}, we obtain that the edge $b'\rightarrow c$ is $(\mathcal{B},\mathcal{C})$-coCartesian. We define two $\infty$-categories of diagrams in $\mathcal{V}$. Let $\mathcal{W}$ be the full subcategory of $\operatorname{Fun}(\Delta^2\times\Delta^2,\mathcal{V})$ spanned by diagrams of the form \eqref{22diag} satisfying 1) to 4) and let $\mathcal{W}'$ be the full subcategory of $\operatorname{Fun}(\Delta^1\times \Delta^1,\mathcal{V})$ spanned by functors of the form \eqref{22diag} satisfying $1),2),3)$ and that the edge $b'\rightarrow c$ is $(\mathcal{B},\mathcal{C})$-coCartesian. We have shown that $\mathcal{W}\subset \mathcal{W}'$. Consider the restriction functors $\operatorname{res}_{b'},\operatorname{res}_{b''}:\mathcal{W}\rightarrow \mathcal{B}$ to the vertices $b'$ and $b''$. By the properties of the involved Cartesian and coCartesian edges and the biCartesian squares, we obtain that $\operatorname{res}_{b''}$ is a trivial fibration. Choosing a section of $\operatorname{res}_{b''}$ and composing with $\operatorname{res}_{b'}$ constitutes a choice of cotwist functor $T'_\mathcal{Y}$, which is an equivalence by assumption. Thus $\operatorname{res}_{b'}$ is also a trivial fibration. We observe that the restriction functor to $b'$ is also a trivial fibration from $\mathcal{W}'$ to $\mathcal{B}$. Using that $\mathcal{W}$ and $\mathcal{W}'$ are closed under equivalences in $\operatorname{Fun}(\Delta^1\times\Delta^1,\mathcal{V})$ it follows that $\mathcal{W}=\mathcal{W}'$. We have shown that the second part of statement 1 of \Cref{uculem3} is fulfilled and deduce that the adjunction $F\dashv G$ is spherical. Showing that the conditions 2 and 4 imply the sphericalness of the adjunction $F\dashv G$ is analogous to showing that the conditions 2 and 3 imply the sphericalness. \end{proof} \begin{corollary}\label{rescor} Let $F:\mathcal{D}\leftrightarrow \mathcal{C}:G$ be an adjunction of stable $\infty$-categories. Assume that $F$ admits a left adjoint $E$. Let $\mathcal{D}'\subset \mathcal{D}$ be a stable subcategory such that $\operatorname{Im}(E),\operatorname{Im}(G)\subset \mathcal{D}'$. Then: \begin{enumerate} \item The adjunctions $E\dashv F$ and $F\dashv G$ restrict to adjunctions $E:\mathcal{C}\leftrightarrow \mathcal{D}':F'$ and \mbox{$F':\mathcal{D}'\leftrightarrow \mathcal{C}:G$,} where $F'$ is the restriction of the functor $F$. \item The adjunction $F\dashv G$ is spherical if and only if the adjunction $F'\dashv G$ is spherical. 
\end{enumerate} \end{corollary} \begin{proof} Statement 1 follows directly from the assumptions $\operatorname{Im}(E),\operatorname{Im}(G)\subset \mathcal{D}'$. We now show statement 2. The units of the adjunctions $E\dashv F$ and $E\dashv F'$ are equivalent. Similarly, the counits of the adjunctions $F\dashv G$ and $F'\dashv G$ are also equivalent. The counits of the adjunction $E\dashv F$ restrict to the counit of the adjunction $E\dashv F'$ and similarly the unit of the adjunction $F\dashv G$ restricts to the unit of the adjunction $F'\dashv G$. We thus obtain that the natural transformation $\eta_1$ of \Cref{2/4rem} associated to the adjunctions $E\dashv F\dashv G$ is equivalent to the natural transformation $\eta_1'$ associated to the adjunctions $E\dashv F'\dashv G$. Similarly, the natural transformation $\eta_3$ of \Cref{2/4rem} associated to the adjunctions $E\dashv F\dashv G$ is equivalent to the natural transformation $\eta_3'$ associated to the adjunctions $E\dashv F'\dashv G$. Thus $\eta_1$ and $\eta_3$ are equivalences if and only if $\eta_1'$ and $\eta_3'$ are equivalences. \Cref{2/4prop} implies that the adjunction $F\dashv G$ is spherical if and only if the adjunction $F'\dashv G$ is spherical. \end{proof} \section{Local systems on spheres}\label{sec4} Let $\mathcal{D}$ be a stable $\infty$-category. We denote by $S^n$ the singular set of the topological $n$-sphere. The pullback functor $f^*$ along the map $f:S^n\rightarrow \ast$ is part of an adjunction \begin{equation}\label{sphadj1} f^*:\mathcal{D}\longleftrightarrow \operatorname{Fun}(S^n,\mathcal{D}):f_* \end{equation} between $\mathcal{D}$ and the stable $\infty$-category of local systems on the $n$-sphere with values in $\mathcal{D}$. We show in \Cref{sec3.1} that the adjunction \eqref{sphadj1} is spherical. Generalizing the spherical adjunction \eqref{sphadj1} to a relative context, we show in \Cref{sphfibtwist} that for any spherical fibration $f:X\rightarrow Y$ between Kan complexes there is a spherical adjunction \mbox{$f^*:\operatorname{Fun}(Y,\mathcal{D})\longleftrightarrow \operatorname{Fun}(X,\mathcal{D}):f_*.$} If we further assume that $\mathcal{D}=\operatorname{Sp}$ is the stable $\infty$-category of spectra, or any other good symmetric monoidal and stable $\infty$-category, we can endow the $\infty$-categories $\operatorname{Fun}(X,\mathcal{D})$ and $\operatorname{Fun}(Y,\mathcal{D})$ with a symmetric monoidal structure, called the pointwise monoidal structure. We provide an explicit description of the resulting twist functor $T_{\operatorname{Fun}(Y,\mathcal{D})}:\operatorname{Fun}(Y,\mathcal{D})\rightarrow \operatorname{Fun}(Y,\mathcal{D})$ in terms of the symmetric monoidal product with a local system $\zeta\in \operatorname{Fun}(Y,\mathcal{D})$ in \Cref{sec3.3}. The value of $\zeta$ at a point $y\in Y$ can be interpreted as the reduced homology the fiber of $f$ over $y$. \subsection{Twist along a sphere}\label{sec3.1} We fix a stable $\infty$-category $\mathcal{D}$. Given a simplicial set $Z$, we denote by $Z^\triangleright$ the join $Z\ast \Delta^0$. We recursively define $\infty$-categories by \[ P_0\coloneqq S^0=\Delta^0 \amalg \Delta^0\,,\] \[ P_n\coloneqq P_{n-1}^\triangleright \coprod_{P_{n-1}} P_{n-1}^\triangleright\,.\] We define recursively a labeling of the vertices of $P_n$ by denoting the vertices which are not contained in $P_{n-1}$ by $2n+1$ and $2n+2$. 
Denote by $g^*:\mathcal{D}\rightarrow \operatorname{Fun}(P_n,\mathcal{D})$ the pullback functor along the map of simplicial sets $g:P_n\rightarrow \ast$. The simplicial set $P_n$ is finite and $\mathcal{D}$ admits as a stable $\infty$-category all finite limits and colimits. By \cite[4.3.3.7]{HTT}, we hence obtain adjunctions \begin{equation}\label{adj1} g^*: \mathcal{D}\longleftrightarrow \operatorname{Fun}(P_n,\mathcal{D}): g_*=\lim\,, \end{equation} \begin{equation*} g_!=\operatorname{colim}: \text{Fun}(P_n,\mathcal{D})\longleftrightarrow \mathcal{D}: g^*\,, \end{equation*} where $g_!$ and $g_*$ are the limit and colimit functors, respectively. We will show in this section that the adjunction $g^*\dashv g_*$ is spherical. The $\infty$-category $P_n$ is equivalent to the $\infty$-category of exit paths of a stratification of the $n$-sphere, as defined in \cite[A.6.2]{HA}, where the strata are nested spheres, one in each dimension $d\leq n$. For example for $n=2$, this corresponds to the $2$-sphere, stratified by a circle and two points on that circle. The functor category $\operatorname{Fun}(P_n,\mathcal{D})$ is equivalent to the $\infty$-category of constructible sheaves with values in $\mathcal{D}$ with respect to the stratification of $S^n$ if $\mathcal{D}$ is a compactly generated $\infty$-category, see \cite[Section 8.6]{Tan19}. Let $f:S^n\rightarrow \ast$ and let $f^*:\mathcal{D}\rightarrow \operatorname{Fun}(S^n,\mathcal{D})$ be the corresponding pullback functor. If we can show that $\mathcal{D}$ admits $S^n$-indexed limits and colimits, we can again apply \cite[4.3.3.7]{HTT} to show that $f^*$ admits left and right adjoints. This follows from the following Lemma. \begin{lemma}\label{cofinallemma} There exists a final and cofinal map $P_n \rightarrow S^n$. \end{lemma} \begin{proof} Denote the topological $n$-sphere by $S^n_{top}$, so that $S^n=\operatorname{Sing}(S^n_{top})$. Using the Quillen equivalence between the Kan model structure on $\operatorname{Set}_\Delta$ and the Quillen model structure on $\operatorname{Top}$, we can obtain from a choice of weak homotopy equivalence $|P_n|\simeq S^n_{top}$ a weak homotopy equivalence $e:P_n \rightarrow S^n$. The finality and cofinality thus follow from \cite[4.1.2.6]{HTT}. \end{proof} We thus conclude that $f^*$ admits left and right adjoints $f_!$ and $f_*$, given by mapping a functor $S^n\rightarrow \mathcal{D}$ to its colimits, respectively, its limit. The next Lemma shows that $\operatorname{Fun}(S^n,\mathcal{D})$ is a full subcategory of $\operatorname{Fun}(P_n,\mathcal{D})$ and that the adjunction $g^*\dashv g_*$ restricts to the adjunction $f^*\dashv f_*$. \begin{lemma}\label{lem:fgres} There exists a fully faithful functor $\operatorname{Fun}(S^n,\mathcal{D})\rightarrow \operatorname{Fun}(P_n,\mathcal{D})$ making the following diagrams commute. \begin{equation}\label{eq:fgcomm} \begin{tikzcd} {\operatorname{Fun}(S^n,\mathcal{D})} \arrow[rr] & & {\operatorname{Fun}(P_n,\mathcal{D})} \\ & \mathcal{D} \arrow[lu, "f^*"] \arrow[ru, "g^*"'] & \end{tikzcd} \quad\quad \begin{tikzcd} {\operatorname{Fun}(S^n,\mathcal{D})} \arrow[rr] \arrow[rd, "f_*"'] & & {\operatorname{Fun}(P_n,\mathcal{D})} \arrow[ld, "g_*"] \\ & \mathcal{D} & \end{tikzcd} \end{equation} \end{lemma} \begin{proof} Suppose that $n=0$. Then $S^0=P_0=\Delta^0 \amalg \Delta^0$ and the assertion is clear. We proceed by induction on $n$. 
Suppose that there is a map $P_n\rightarrow S^n$ whose pullback functor $\operatorname{Fun}(S^n,\mathcal{D})\rightarrow \operatorname{Fun}(P_n,\mathcal{D})$ is fully faithful and makes the diagrams \eqref{eq:fgcomm} commute. Let $a:P_n\rightarrow P_n^\triangleright$ be the inclusion and $b:P_n^\triangleright\rightarrow \ast$. The pullback functors assemble into a commutative diagram \[ \begin{tikzcd} \mathcal{D} \arrow[r, "b^*"] \arrow[rd, "g^*"] \arrow[d, "f^*"] & {\operatorname{Fun}(P_n^\triangleright,\mathcal{D})} \arrow[d, "a^*"] \\ {\operatorname{Fun}(S^n,\mathcal{D})} \arrow[r] & {\operatorname{Fun}(P_n,\mathcal{D})} \end{tikzcd} \] where the upper triangle commutes because $b\circ a=g$ and the lower triangle commutes by the induction assumption. The pushout diagrams of $S^{n+1}\simeq \Delta^0 \amalg_{S^n}\Delta^0$ and $P^{n+1}\simeq P_n^\triangleright\amalg_{P_n}P_n^\triangleright$ are mapped by $\operatorname{Fun}(\mhyphen,\mathcal{D})$ to pullback diagrams in $\operatorname{Cat}_\infty$, which assemble into the following commutative diagram. \[ \begin{tikzcd}[row sep=small] {\operatorname{Fun}(S^{n+1},\mathcal{D})} \arrow[rr, hook, "e^*"] \arrow[dd] \arrow[rd] \arrow[rddd, "\lrcorner", phantom] & & {\operatorname{Fun}(P_{n+1},\mathcal{D})} \arrow[dd] \arrow[rd] \arrow[rddd, "\lrcorner", phantom] & \\ & \mathcal{D} \arrow[rr, "b^*", near start] \arrow[dd, "f^*"', near start] & & {\operatorname{Fun}(P_n^\triangleright,\mathcal{D})} \arrow[dd, "a^*"] \\ \mathcal{D} \arrow[rd, "f^*", near end] \arrow[rr, "b^*", near end] & & {\operatorname{Fun}(P_n^\triangleright,\mathcal{D})} \arrow[rd, "a^*"] & \\ & {\operatorname{Fun}(S^n,\mathcal{D})} \arrow[rr, hook] & & {\operatorname{Fun}(P_n,\mathcal{D})} \end{tikzcd} \] Since $P_{n}^\triangleright$ is contractible, the functor $b^*$ is an equivalence onto its image given by (up to equivalence) constant functors $P_{n}^\triangleright\rightarrow \mathcal{D}$. It follows that the resulting functor $e^*:\operatorname{Fun}(S^{n+1},\mathcal{D})\rightarrow \operatorname{Fun}(P_{n+1},\mathcal{D})$ is also an equivalence onto its image. The functor $e^*$ is given by the pullback functor along a weak equivalence $e:P_{n+1}\rightarrow S_{n+1}$, which is as in \Cref{cofinallemma} final. Since $e\circ f=g$, it follows that $g^*=e^*\circ f^*$, meaning that the left diagram of \eqref{eq:fgcomm} commutes. The commutativity of the right diagram of \eqref{eq:fgcomm} follows from $e$ being final. \end{proof} If we show that the adjunction $g^*\dashv g_*$ is spherical, it follows by \Cref{rescor}, that the adjunction $f^*\dashv f_*$ is also spherical. The advantage of treating the adjunction $g^*\dashv g_*$ over treating the adjunction $f^*\dashv f_*$ is, that the former is more accessible to direct computations because $P_n$ is a finite poset. To show that the adjunction $g^*\dashv g_*$ is spherical, we use the conditions 3 and 4 of the 2/4 property of spherical adjunctions. We begin in \Cref{sphlem1} by describing the limits and colimits of constant local systems and the arising unit and counit maps. The idea of the proof is to use the decomposition results for limits and colimits, see \cite[4.2.3.10]{HTT}. \begin{lemma}\label{sphlem1} Let $n\geq 0$ and consider the functor $g^*:\mathcal{D}\rightarrow \operatorname{Fun}(P_n,\mathcal{D})$ with left and right adjoints $g_!$ and $g_*$. 
\begin{enumerate} \item There exists an equivalence of functors $g_!g^*\simeq id_{\mathcal{D}}\oplus id_{\mathcal{D}}[n]$ in $\operatorname{Fun}(\mathcal{D},\mathcal{D})$ such that composition with the counit map $g_!g^*\rightarrow id_\mathcal{D}$ of the adjunction $g_!\dashv g^*$ yields the projection from the direct sum. \item There exists an equivalence of functors $g_*g^*\simeq id_{\mathcal{D}}\oplus id_{\mathcal{D}}[-n]$ in $\operatorname{Fun}(\mathcal{D},\mathcal{D})$ such that composition with the unit map $id_\mathcal{D}\rightarrow g_*g^*$ of the adjunction $g^*\dashv g_*$ yields the inclusion into the direct sum. \end{enumerate} \end{lemma} \begin{proof} We prove only statement 1, statement 2 can be shown analogously. Assume $n=0$. It is immediate that $g_!g^*\simeq id_\mathcal{D}\oplus id_{\mathcal{D}}$. The counit map is given by $id_{\mathcal{D}}\oplus id_{\mathcal{D}}\xrightarrow{(id,id)}id_{\mathcal{D}}$ which is equivalent to the projection, using the equivalence $id_{\mathcal{D}}\oplus id_{\mathcal{D}} \xrightarrow{A} id_{\mathcal{D}}\oplus id_{\mathcal{D}}$ where $A=\begin{pmatrix}id & -id \\ 0 & id \end{pmatrix}$ with inverse $A^{-1}=\begin{pmatrix}id & id \\ 0 & id \end{pmatrix}$. We introduce some notation: For $m>0$ we denote $[m]=\{1,\dots,m\}$. Given $X\subset [2n+2]$, we denote the full subcategory of $P_n$ generated by the elements of $X$ by $\langle X\rangle$. We label the following inclusion functors $i_{n-1}:P_{n-1}\rightarrow P_n$, $j_{n-1}:\langle [2n+1]\rangle\rightarrow P_n$ and $k_{n-1}:\langle [2n]\cup \{2n+2\}\rangle\rightarrow P_n$. We denote by $i^*_{n-1},j^*_{n-1},k^*_{n-1}$ the respective pullback functors. We denote by $\operatorname{colim}$ the colimit functors, irrespective of their domain. For $n>0$, we continue by induction. We show that there exists an equivalence of functors $g_!g^*\simeq id_{\mathcal{D}}\oplus id_{\mathcal{D}}[n]$, such that the following diagram commutes. \[ \begin{tikzcd} g_!g^* \arrow[r, "\simeq"] \arrow[rd, "cu_n"'] & {id_\mathcal{D}\oplus id_\mathcal{D}[n]} \arrow[d, "{(id,0)}"] \\ & id_\mathcal{D} \end{tikzcd} \] Assume that the above statement has been shown for $n-1$. Via Kan extension we produce the following diagram in $\operatorname{Fun}(\mathcal{D},\mathcal{D})$. \begin{equation}\label{fdc} \begin{tikzcd} {\operatorname{colim}\circ\, i_{n-1}^*\circ g^*} \arrow[r] \arrow[d] & {\operatorname{colim}\circ\, k^*_{n-1}\circ g^*} \arrow[d] \\ {\operatorname{colim}\circ\, j_{n-1}^* \circ g^*} \arrow[r, "b"] & g_! g^* \end{tikzcd} \end{equation} The top horizontal and left vertical maps can be identified with the counit of the adjunction $\mathcal{D}\leftrightarrow \operatorname{Fun}(P_{n-1},\mathcal{D})$. The induction assumption implies that the diagram \eqref{fdc} is equivalent to the following diagram in $\operatorname{Fun}(\mathcal{D},\mathcal{D})$. \begin{equation}\label{sq4} \begin{tikzcd} {id_{\mathcal{D}}\oplus id_{\mathcal{D}}[n-1]} \arrow[r, "{(id\text{,}0)}"] \arrow[d, "{(id\text{,}0)}"'] & id_{\mathcal{D}} \arrow[d] \\ id_{\mathcal{D}} \arrow[r] & g_!g^* \end{tikzcd} \end{equation} Evaluating the diagram \eqref{fdc} at a vertex $d\in \mathcal{D}$ yields a diagram equivalent to the pushout diagram obtained from the decomposition of the colimit of the functor $g^*(d)\in \operatorname{Fun}(P_n,\mathcal{D})$ along the decomposition \[ P_{n-1}\subset \langle [2n+1]\rangle,\langle [2n]\cup \{2n+2\}\rangle\subset P_n\,.\] This implies that the diagrams \eqref{fdc} and \eqref{sq4} are biCartesian squares. 
We can thus find an equivalence of diagrams between the diagram \eqref{sq4} and the following diagram. \[\begin{tikzcd} id_{\mathcal{D}}\oplus id_{\mathcal{D}}[n-1]\arrow[dr, phantom, "\square"]\arrow[r, "(id\text{,}0)"]\arrow[d, "(id\text{,}0)"]& id_{\mathcal{D}}\arrow[d, "{(id,0)}"]\\ id_{\mathcal{D}}\arrow[r, "{(id,0)}"]& id_{\mathcal{D}}\oplus id_{\mathcal{D}}[n] \end{tikzcd}\] Using that $\langle [2n+1]\rangle$ is contractible, it follows that the composite map $\operatorname{colim}\circ\, j^{n-1}\circ g^*\xrightarrow{b} g_!g^*\xrightarrow{cu_n} id_\mathcal{D}$ is an equivalence. There thus exists a commutative diagram of the form \[ \begin{tikzcd} g_!g^* \arrow[r, "\simeq"] \arrow[rd, "cu_n"'] & {id_\mathcal{D}\oplus id_\mathcal{D}[n]} \arrow[d, "{(c,c')}"] \arrow[r, "A"] & {id_\mathcal{D}\oplus id_\mathcal{D}[n]} \arrow[d, "{(id,0)}"] \\ & id_\mathcal{D} \arrow[r, "id"] & id_\mathcal{D} \end{tikzcd} \] with $c,c'$ some undetermined maps and $A= \begin{pmatrix}c^{-1} & c' \\ 0 & id \end{pmatrix}$ an equivalence, completing the induction. \end{proof} \begin{remark}\label{sphrem1} \Cref{sphlem1} implies that the twist functor $T_\mathcal{D}$ of the adjunction $g^*\dashv g_*$ satisfies $T_\mathcal{D}\simeq [-n]$. The twist functor of the adjunction $f^*\dashv f_*$ is equivalent to the twist functor of the adjunction $g^*\dashv g_*$ and thus also equivalent to $[-n]$. \end{remark} \begin{lemma}\label{sphlem2} Every functor in $\operatorname{Fun}(P_n,\mathcal{D})$ is a pullback of functors whose value is zero on $2n+1\in (P_n)_0$ or $2n+2\in (P_n)_0$. \end{lemma} \begin{proof} Let $X\in\operatorname{Fun}(P_n,\mathcal{D})$. Denote by $X_{n+1},X_{n+2},X_{n+1,n+2}\in \operatorname{Fun}(P_n,\mathcal{D})$ the functors which are identical to $X$ except for their value at $2n+1$, $2n+2$ and $\{2n+1,2n+2\}$, respectively, where their value is $0$. The functors $X_{2n+1},X_{2n+2},X_{2n+1,2n+2}$ can be described as right Kan extensions of restrictions of $X$. The functor $X$ is equivalent to a pullback $X_{2n+1}\times_{X_{2n+1,2n+2}} X_{2n+2}$ in $\operatorname{Fun}(P_n,\mathcal{D})$. \end{proof} \begin{lemma}\label{sphlem3} Let $F\in \operatorname{Fun}(P_n,\mathcal{D})$ and consider the counit map $cu:g^*g_*(F)\rightarrow F$ of the adjunction $g^*\dashv g_*$. There exists an equivalence $g_!(F)\oplus g_*(F)\simeq g_!g^*g_*(F)$ such that the composite with $g_!(cu):g_!g^*g_*(F)\rightarrow g_!(F)$ yields the projection from the direct sum. \end{lemma} \begin{proof} By \Cref{sphlem2} it suffices to consider the case where $F$ is zero on $2n+1$ or $2n+2$. The statement for a general functor then follows using that $g_*$, $g^*$ and $g_!$ are exact. We restrict in the following to the case where $F(2n+2)=0$. The case $F(2n+1)=0$ is completely analogous. For $n=0$, we consider a functor $F\in\operatorname{Fun}(P_0,\mathcal{D})$ with $F(2)=0$. We find equivalences $g_!(F)\simeq g_*(F)\simeq F(1)$. This implies $g_!g^*g_*(F)\simeq g_*(F)\oplus g_*(F)\simeq F(1)\oplus F(1)$. We decompose the colimit of the diagram $P_0\rightarrow \operatorname{Fun}(\Delta^1,\mathcal{D})$ corresponding to the natural transformation of functors $g^*g_*(F)\xrightarrow{cu} F$ along the decomposition $P_0=\ast\amalg \ast$ to obtain the following commutative diagram in $\mathcal{D}$. 
\[ \begin{tikzcd}[column sep=small] F(1) \arrow[rd, "{(id,0)}"] \arrow[rrrrrr, "id", dashed, bend left=15] & & F(1) \arrow[ld, "{(0,id)}"'] \arrow[rrrrrr, dashed, bend left=15] & & ~~~ & & F(1) \arrow[rd, "id"] & & 0 \arrow[ld] \\ & F(1)\oplus F(1) \arrow[rrrrrr, "{(id,0)}", dashed] & & & ~~~ & & & F(1) & \end{tikzcd} \] This shows that the map $F(1)\oplus F(1)\simeq g_!g^*g_*(F)\xrightarrow{g_!(cu)} g_*(F) \simeq F(1)$ is equivalent to the projection onto the first direct summand. For $n\geq 1$ we proceed by induction. Fix $n$ and assume the lemma has been shown for all functors in $\operatorname{Fun}(P_{n-1},\mathcal{D})$. Consider $F\in \operatorname{Fun}(P_n,\mathcal{D})$ with the property, that $F(2n+2)=0$. We use the notation introduced in the proof of \Cref{sphlem1} in the following. We apply the decomposition of colimits with the decomposition \[ P_{n-1}\subset \langle[2n+1]\rangle,\langle[2n]\cup\{2n+2\}\rangle \subset P_n\] to the diagram $P_n\rightarrow \operatorname{Fun}(\Delta^1,\mathcal{D})$ corresponding to the natural transformation $g^*g_*(F)\xrightarrow{cu} F$. The resulting diagram in $\mathcal{D}$ is up to equivalence of the following form. \begin{equation}\label{diag4} \begin{tikzcd}[column sep=small, row sep=small] & {g_*(F)[n-1]\oplus g_*(F)} \arrow[rrrrr, "e_{n-1}", dotted] \arrow[ld, "{(0,id)}"'] \arrow[rd, "{(0,id)}"] \arrow[dd, "\square", phantom] & & & & & \operatorname{colim}F|_{P_{n-1}} \arrow[dd, "\square", phantom] \arrow[ld] \arrow[rd] & \\ g_*(F) \arrow[rd, "{(0,id)}"'] & & g_*(F) \arrow[ld, "{(0,id)}"] & & & F(2n+1) \arrow[rd] & & 0 \arrow[ld] \\ & {g_*(F)[n]\oplus g_*(F)} \arrow[rrrrr, "{e_n=(h_0,h_1)}", dotted] & & & & & \operatorname{colim} F & \end{tikzcd} \end{equation} We have not depicted all edges. If $n=1$, the edge $e_{n-1}$ is given by $g_*(F)\oplus g_*(F)\xrightarrow{(a,a)}F(1)\simeq \operatorname{colim}F|_{P_0}$, where $a:g_*(F)\rightarrow F(1)$ is the map contained in the limit cone of $F$. It follows that $h_1$ is zero. If $n\geq 2$, the edge $e_{n-1}$ factors as \[g_*(F)[n-1]\oplus g_*(F) \rightarrow \lim F|_{P_{n-1}}[n-1]\oplus \lim F|_{P_{n-1}}\xrightarrow{(h_0',h_1')} \operatorname{colim} F|_{P_{n-1}}\,,\] where by the induction $h_0'$ is an equivalence and $h_1'$ is zero. It follows that $e_{n-1}$ restricted to $g_*(F)$ is zero and thus that $h_1$ is also zero. To complete the induction step, we need to show that for any $n\geq 1$ the map $h_0$ is an equivalence. For that it suffices to show that the following square contained in the diagram \eqref{diag4} is biCartesian. 
\begin{equation}\label{sq1} \begin{tikzcd} {g_*(F)[n-1]\oplus g_*(F)} \arrow[d, "{(0,id)}"] \arrow[r, "e_{n-1}"] & \operatorname{colim} F|_{P_{n-1}} \arrow[d] \\ g_*(F) \arrow[r] & F(2n+1) \end{tikzcd} \end{equation} The decomposition of colimits organizes the colimit cones of the restrictions of the functors $g^*g_*(F)$ and $F$ into a diagram \[D:Z\coloneqq P_{n-1}^\triangleright\times \Delta^1\times \Delta^1 \coprod_{P_{n-1}^\triangleright\times \Delta^1\times\{1\}} \langle [2n+1]\rangle^\triangleright\times \Delta^1\times\{1\}\rightarrow \mathcal{D}\,,\] i.e.~$D$ restricts to the colimit cones of the functors $g^*g_*(F)|_{P_{n-1}},F|_{P_{n-1}},g^*g_*(F)|_{\langle [2n+1]\rangle}\text{ and }F|_{\langle [2n+1]\rangle}$ on \begin{align}\label{comps} \begin{split} & P_{n-1}^\triangleright\times \{0\}\times \{0\},~P^\triangleright_{n-1}\times \{1\}\times\{0\},~\langle [2n+1]\rangle^\triangleright\times \{0\}\times\{1\}\\ \text{and }& \langle [2n+1]\rangle^\triangleright\times \{1\}\times\{1\}\,, \end{split} \end{align} respectively. Furthermore, the restriction of $D$ to the 'tips of the cones', i.e.~the simplicial subset $\Delta^1\times\Delta^1\hookrightarrow Z$ mapping to the joined $0$-simplicies, yields the diagram \eqref{diag4}. By the involved universal properties, the diagram $D$ is determined up to equivalence by its restriction to \begin{equation}\label{eq29} P_{n-1}\times \Delta^1\times \Delta^1 \coprod_{P_{n-1}\times \Delta^1\times\{1\}}\langle [2n+1]\rangle\times \Delta^1\times\{1\} \end{equation} and the restriction to each component in \eqref{comps}. Note that \eqref{eq29} is a left Kan extension of its restriction to \begin{equation}\label{eq30} P_{n-1}\times \left(\Delta^{\{(0,0),(0,1)\}}\amalg_{\Delta^{\{(0,0)\}}}\Delta^{\{(0,0),(1,0)\}}\right) \end{equation} and that each restriction to a component in \eqref{comps} is a left Kan extension of its restriction to the complement of the tip of the colimit cone. Using that left Kan extensions commute with each other, it follows that $D$ is a left Kan extension of its restriction to \eqref{eq29} and that \eqref{sq1} is a pushout square and thus biCartesian. \end{proof} \begin{remark}\label{dualrem} There exists an equivalence $P_n\simeq P^{op}_n$ mapping $i$ to $2n+3-i$. Replacing $\mathcal{D}$ with $\mathcal{D}^{op}$ in the proof of \Cref{sphlem2} thus implies the following: Let $F\in \operatorname{Fun}(P_n,\mathcal{D})$ and consider the edge $g_*(F)\xrightarrow{g_*(u)}g_*g^*g_!(F)$ obtained from applying $g_*$ to the unit map $u:F\rightarrow g^*g_!(F)$ of the adjunction $g_!\dashv g^*$. There exists an equivalence $g_*g^*g_!(F)\simeq g_!(F)\oplus g_*(F)$ such that precomposition with $g_*(u)$ yields up to equivalence the inclusion of $g_*(F)$ into the direct sum. \end{remark} \begin{remark}\label{sphrem2} We note that in the construction of the equivalences in the \Cref{sphlem1,sphlem2} and \Cref{dualrem} the identical decompositions of $P_n$ were used, so that we obtain a compatibility statement between the equivalences as follows. The composition of the equivalences \[ g_!(F)\oplus g_!(F)[n]\simeq g_*g^*g_!(F)\simeq g_!(F)\oplus g_*(F)\] from the \Cref{sphlem1,sphlem2} restrict to equivalences $g_!(F)\simeq g_!(F)$ and $g_!(F)[n]\simeq g_*(F)$ on the first and second factors, respectively. \end{remark} \begin{proposition}\label{sphprop} Let $\mathcal{D}$ be a stable $\infty$-category and $n\geq 0$. Consider the pullback functor $g^*:\mathcal{D}\rightarrow \operatorname{Fun}(P_n,\mathcal{D})$ with right adjoint $g_*$. 
The adjunction $g^*\dashv g_*$ is spherical. \end{proposition} \begin{proof} We apply \Cref{2/4lem1} to show the sphericalness. We denote the cotwist functor of $g^*\dashv g_*$ by $T_\mathcal{X}$ and the twist functor of $g_!\dashv g^*$ by $T_\mathcal{X}'$. Let $F\in \operatorname{Fun}(P_n,\mathcal{D})$ and let $u':F\rightarrow g^*g_!(F)$ be the unit map of the adjunction $g_!\dashv g^*$. \Cref{dualrem} shows that in the biCartesian square in $\mathcal{D}$ \[ \begin{tikzcd} g_*(F) \arrow[d, "{g_*(u')}"'] \arrow[r] \arrow[rd, "\square", phantom] & 0 \arrow[d] \\ g_*g^*g_!(F) \arrow[r, "\alpha"'] & g_*T_\mathcal{X}'(F) \end{tikzcd} \] the edge $g_*(u')$ is equivalent to the edge $g_*(F)\xrightarrow{(0,id)} g_!(F)\oplus g_*(F)$. We obtain that $\alpha$ is equivalent to the edge $g_!(F)\oplus g_*(F)\xrightarrow{(id,0)} g_!(F)$. The unit map $u:g_!(F)\rightarrow g_*g^*g_!(F)$ of the adjunction $g^*\dashv g_*$ is by \Cref{sphlem1} equivalent to the edge $g_!(F)\xrightarrow{(id,0)} g_!(F)\oplus g_*(F)$. The equivalences $g_*g^*g_!(F)\simeq g_!(F)\oplus g^*(F)$ in the description of the unit map and $\alpha$ are compatible as discussed in \Cref{sphrem2}. We thus obtain that the composition $\alpha\circ u:g_!(F)\rightarrow g_*T_\mathcal{X}'(F)$ is an equivalence. Thus condition 1. of \Cref{2/4lem1} is satisfied. Let $F\in \operatorname{Fun}(P_n,\mathcal{D})$ and let $cu':g^*g_*(F)\rightarrow F$ be the counit map of the adjunction $g^*\dashv g_*$. Consider the following biCartesian square in $\mathcal{D}$. \[ \begin{tikzcd} g_!T_\mathcal{X}(F) \arrow[d, "\beta"'] \arrow[r] \arrow[rd, "\square", phantom] & 0 \arrow[d] \\ g_!g^*g_*(F) \arrow[r, "g_!(cu')"'] & g_!(F) \end{tikzcd} \] By \Cref{sphlem2} we find that $g_!(cu')$ is equivalent to the edge $g_!(F)\oplus g_*(F)\xrightarrow{(id,0)} g_!(F)$. We thus obtain that $\beta$ is equivalent to the edge $g_*(F)\xrightarrow{(0,id)}g_!(F)\oplus g_*(F)$. By \Cref{sphlem1} we find that the counit map $cu:g_!g^*g_*(F)\rightarrow g_*(F)$ is equivalent to the edge $g_!(F)\oplus g_*(F)\xrightarrow{(0,id)}g_*(F)$. Using \Cref{sphrem2}, we find that the composition $cu\circ\beta:g_!T_\mathcal{X}(F)\rightarrow g_*(F)$ is an equivalence and that condition 2. of \Cref{2/4lem1} is satisfied. \end{proof} \begin{corollary} Let $\mathcal{D}$ be a stable $\infty$-category and $n\geq 0$. Consider the pullback functor\linebreak $f^*:\mathcal{D}\rightarrow \operatorname{Fun}(S^n,\mathcal{D})$ with right adjoint $f_*$. The adjunction $f^*\dashv f_*$ is spherical. \end{corollary} \begin{proof} The adjunction $f^*\dashv f_*$ arises by \Cref{lem:fgres} as the restriction of the spherical adjunction $g^*\dashv g_*$. We apply \Cref{rescor} (with $F=f_*$) to deduce the sphericalness. For that, we need to show that $\operatorname{Im}(g^*),\operatorname{Im}(g^{**})\subset \operatorname{Fun}(S^n,\mathcal{D})$, with $g^{**}$ the right adjoint of $g_*$. The inclusion $\operatorname{Im}(g^*)\subset \operatorname{Fun}(S^n,\mathcal{D})$ follows from the commutativity of the left diagram in \eqref{eq:fgcomm}. By \Cref{4pedprop2,sphrem1}, we have $g^{**}\simeq g^*\circ [n]$, so that $\operatorname{Im}(g^{**})=\operatorname{Im}(g^*)$, concluding the proof. \end{proof} \begin{remark} Let $\mathcal{D}=\operatorname{Sp}$ be the stable $\infty$-category of spectra. Let $E\in \operatorname{Sp}$. The homotopy groups of the spectra $f_!f^*(E)$ and $f_*f^*(E)$ describe the homology and cohomology groups of the $n$-sphere with values in the spectrum $E$, respectively, i.e. 
\[ \pi_i(f_!f^*(E))\simeq H_i(X,E)\text{ and }\pi_{-i}(f_*f^*(E))\simeq H^i(X,E)\,.\] We thus consider $f_*,f_!:\operatorname{Fun}(S^n,\operatorname{Sp})\rightarrow \operatorname{Sp}$ as the homology and cohomology functors for local systems of spectra on $S^n$. Using \Cref{sphrem1} and the 2/4 property, we find an equivalence $[n]\circ f_* \simeq f_!$. The sphericalness of the adjunction $f^*\dashv f_*$ hence implies Poincaré duality for local systems on the $n$-sphere with values in spectra. \end{remark} We end this section with a conjecture for a possible generalization of the spherical adjunction $g^*\dashv g_*$. Consider a good stratification $A$ of the $n$-sphere. Denote by $\operatorname{Sing}^A(S^n)$ the $\infty$-category of exit paths, see \cite[A.6.2]{HA}. In \cite[Section 8.6]{Tan19}, building on \cite[Appendix A]{HA}, it is shown that if $\mathcal{D}$ is a compactly generated $\infty$-category, then the $\infty$-category of functors $\operatorname{Fun}(\operatorname{Sing}^A(S^n),\mathcal{D})$ embeds fully faithfully into the $\infty$-category $\operatorname{Shv}(S^n,\mathcal{D})$ of sheaves on the $n$-sphere with values in $\mathcal{D}$. The essential image of the embedding is given by constructible sheaves with respect to the stratification $A$. We have shown in \Cref{sphprop} that for a specific stratification of the $n$-sphere the pullback-limit adjunction $\mathcal{D}\leftrightarrow \operatorname{Fun}(\operatorname{Sing}^A(S^n),\mathcal{D})$ is spherical. We conjecture that for any good stratification the pullback-limit adjunction is spherical and arises as the restriction of a spherical adjunction involving sheaves on the $n$-sphere. \begin{conjecture} Let $\mathcal{D}$ be a compactly generated $\infty$-category and $n\geq 0$. Then the pullback-pushforward adjunction $\mathcal{D}\leftrightarrow \operatorname{Shv}(S^n,\mathcal{D})$ is spherical. \end{conjecture} \subsection{Twist along a spherical fibration}\label{sphfibtwist} Let $\mathcal{D}$ be a stable $\infty$-category, $X$ and $Y$ Kan complexes and $f:X\rightarrow Y$ be a Kan fibration such that for all $y\in Y$ the fiber satisfies \[f^{-1}(y)\coloneqq \{y\}\times_{Y}X\simeq S^n\,.\] We refer to such an $f$ as a spherical fibration. The pullback functor $f^*:\text{Fun}(Y,\mathcal{D})\longrightarrow \text{Fun}(X,\mathcal{D})$ admits left and right adjoints $f_!,f_*$, given by left and right Kan extension, as follows from the next Lemma and \cite[4.3.3.7]{HTT}. \begin{lemma}\label{extlem} Let $f:X\rightarrow Y$ be a spherical fibration, $F\in\operatorname{Fun}(Y,\mathcal{D})$ and $y\in \mathcal{Y}$. Then the left Kan extension $f_!(F)$ and the right Kan extension $f_*(F)$ exist and satisfy \[f_!(F)(y)\simeq \underset{{f^{-1}(y)}}{\operatorname{colim}}F\simeq \underset{S^n}{\operatorname{colim}}F\] and \[ f_*(F)(y)\simeq \lim_{f^{-1}(y)}F\simeq \lim_{S^n}F\,.\] \end{lemma} \begin{proof} Using that $f$ is a Kan fibration, the statement follows from \cite[4.3.3.10]{HTT}. \end{proof} We now proof the sphericalness of the adjunction $f^*\dashv f_*$. The proof is essentially a relative version of the proof of \Cref{sphprop}. \begin{proposition}\label{relsphprop} Let $f:X\rightarrow Y$ be a spherical fibration and $\mathcal{D}$ a stable $\infty$-category. The adjunction $f^*:\operatorname{Fun}(Y,\mathcal{D})\leftrightarrow \operatorname{Fun}(X,\mathcal{D}):f_*$ is spherical. \end{proposition} \begin{proof} We apply \Cref{2/4lem1} to show the sphericalness. Denote the twist functor of $f_!\dashv f^*$ by $T_\mathcal{X}'$. 
Let $F\in \operatorname{Fun}(X,\mathcal{D})$ and let $u':F\rightarrow f^*f_!(F)$ be the unit map of the adjunction $f_!\dashv f^*$. The unit map $u'$ has the property that the restriction $u'|_{f^{-1}(y)}:F|_{f^{-1}(y)}\rightarrow f^*f_!(F)|_{f^{-1}(y)}$ to the fiber $f^{-1}(y)\simeq S^n$ of any $y\in Y$ is equivalent to the unit map of the adjunction $h_!:\operatorname{Fun}(f^{-1}(y),\mathcal{D})\leftrightarrow\mathcal{D}:h^*$ where $h:f^{-1}(y)\rightarrow \ast$. By \Cref{sphlem2}, we find that in the biCartesian square in $\operatorname{Fun}(Y,\mathcal{D})$ \[ \begin{tikzcd} f_*(F) \arrow[d, "{f_*(u')}"'] \arrow[r] \arrow[rd, "\square", phantom] & 0 \arrow[d] \\ f_*f^*f_!(F) \arrow[r, "\alpha"'] & f_*T_\mathcal{X}'(F) \end{tikzcd} \] the restriction of the edge $f_*(u')$ to any $y\in Y$ is equivalent to the edge $f_*(F|_{f^{-1}(y)})\xrightarrow{(0,id)} f_!(F|_{f^{-1}(y)})\oplus f_*(F|_{f^{-1}(y)})$. We thus obtain that the restriction of $\alpha$ to any $y\in Y$ is equivalent to the edge $f_!(F|_{f^{-1}(y)})\oplus f_*(F|_{f^{-1}(y)})\xrightarrow{(id,0)} f_!(F|_{f^{-1}(y)})$. The unit map $u:f_!(F)\rightarrow f_*f^*f_!(F)$ of $f^*\dashv f_!$ is by \Cref{Kanextlem} a right Kan extension. By \cite[4.3.3.10]{HTT} and \Cref{sphlem1} we obtain that the restriction of $u$ to any $y\in \mathcal{Y}$ is equivalent to the edge $f_!(F|_{f^{-1}(y)})\xrightarrow{(id,0)} f_!(F|_{f^{-1}(y)})\oplus f_*(F|_{f^{-1}(y)})$. We thus obtain that the composition $\alpha\circ u:f_!(F)\rightarrow f_*T_\mathcal{X}'(F)$ restricts on every $y\in Y$ to an equivalence and is hence a natural equivalence. We have shown that condition 1 of \Cref{2/4lem1} is satisfied. Condition 2 of \Cref{2/4lem1} is shown analogously and can also be compared to the second part of the proof of \Cref{sphprop}. \end{proof} \subsection{Local systems with values in a symmetric monoidal \texorpdfstring{$\infty$}{infinity}-category}\label{sec3.3} Let $\mathcal{C}$ be a symmetric monoidal and stable $\infty$-category such that the monoidal product preserves colimits in both variables. We describe in this section the twist functor of the spherical adjunction of \Cref{relsphprop} in terms of the monoidal product with a local system $\zeta$. \begin{lemma} Let $Z$ be a simplicial set and $q:\mathcal{C}^\otimes\rightarrow \operatorname{Fin}_\ast$ a symmetric monoidal $\infty$-category. Then $\operatorname{Fun}(Z,\mathcal{C})$ can be endowed with the structure of a symmetric monoidal $\infty$-category, with total space $ \operatorname{Fun}(Z,\mathcal{C})^\otimes \coloneqq \operatorname{Fun}(Z,\mathcal{C}^\otimes)\times_{\operatorname{Fun}(Z,\operatorname{Fin}_\ast)}\operatorname{Fin}_\ast$. An edge in $\operatorname{Fun}(Z,\mathcal{C})^\otimes$ is coCartesian if and only if its restriction to each vertex of $Z$ yields a $q$-coCartesian edge. \end{lemma} \begin{proof} The map $\operatorname{Fun}(Z,\mathcal{C}^\otimes)\rightarrow \operatorname{Fun}(Z,\operatorname{Fin}_\ast)$ is by \cite[3.1.2.1]{HTT} a coCartesian fibration whose coCartesian edges are given by edges whose restriction to each vertex of $Z$ yields a coCartesian edge in $\mathcal{C}^\otimes$. Consider the pullback $\operatorname{Fun}(Z,\mathcal{C})^\otimes=\operatorname{Fun}(Z,\mathcal{C}^\otimes)\times_{\operatorname{Fun}(Z,\operatorname{Fin}_\ast)}\operatorname{Fin}_\ast$. The induced functor $p:\operatorname{Fun}(Z,\mathcal{C})^\otimes\rightarrow \operatorname{Fin}_\ast$ is also a coCartesian fibration. 
We note that the coCartesian edges of the fibration $p$ are also given by edges whose restriction to each vertex in $Z$ yields a coCartesian edge in $\mathcal{C}^\otimes$. Using that $\mathcal{C}$ is symmetric monoidal, it follows that the coCartesian fibration $p:\operatorname{Fun}(Z,\mathcal{C})^\otimes \rightarrow \operatorname{Fin}_\ast$ is also symmetric monoidal. \end{proof} \begin{lemma} \label{mndsphlem} Let $\mathcal{C}$ be a stable symmetric monoidal $\infty$-category and let $f:X\rightarrow Y$ be a map of simplicial sets. The pullback functor $f^*:\operatorname{Fun}(Y,\mathcal{C})\rightarrow \operatorname{Fun}(X,\mathcal{C})$ can be extended to a symmetric monoidal functor \[ (f^*)^\otimes:\operatorname{Fun}(Y,\mathcal{C})^\otimes\longrightarrow \operatorname{Fun}(X,\mathcal{C})^\otimes\,,\] as defined in \cite[2.1.3.7]{HA}. The functor $(f^*)^\otimes$ admits a right adjoint \[ (f_*)^\otimes:\operatorname{Fun}(X,\mathcal{C})^\otimes \longrightarrow \operatorname{Fun}(Y,\mathcal{C})^\otimes\,,\] whose restriction to $\operatorname{Fun}(X,\mathcal{C})^\otimes_{<n>}\simeq \operatorname{Fun}(X,\mathcal{C})^{\times n}$ is given by applying $f_*$ to each component. \end{lemma} \begin{proof} Consider the pullback functor $\alpha:\operatorname{Fun}(Y,\mathcal{C}^\otimes)\longrightarrow \operatorname{Fun}(X,\mathcal{C}^\otimes)$ along $f$. The restriction of $\alpha$ to $\operatorname{Fun}(Y,\mathcal{C})^\otimes$ factors through the inclusion $\operatorname{Fun}(X,\mathcal{C})^\otimes\subset \operatorname{Fun}(X,\mathcal{C}^\otimes)$. The restriction of $\alpha$ to $\operatorname{Fun}(Y,\mathcal{C})^\otimes_{\langle 1\rangle}\simeq \operatorname{Fun}(Y,\mathcal{C})$ is equivalent to $f^*$. To show that the resulting functor $(f^*)^\otimes=\alpha|_{\operatorname{Fun}(Y,\mathcal{C})^\otimes}:\operatorname{Fun}(Y,\mathcal{C})^\otimes\rightarrow \operatorname{Fun}(X,\mathcal{C})^\otimes$ is symmetric monoidal, we need to show that it preserves coCartesian edges. An edge $y\rightarrow y'$ in $\operatorname{Fun}(Y,\mathcal{C})^\otimes$ is coCartesian with respect to the pointwise monoidal structure if and only if all its restrictions to vertices in $Y$ are coCartesian edges in $\mathcal{C}^\otimes$. Using the analogous characterization of the coCartesian edges in $\operatorname{Fun}(X,\mathcal{C})^\otimes$, it is apparent that $f^*$ preserves coCartesian edges. The description of the right adjoint of $(f_*)^\otimes$ follows from the theory of relative adjunctions, see \cite[7.3.2.7]{HA}. \end{proof} \begin{construction}\label{mndsphcon} Let $\mathcal{C}$ be a stable $\infty$-category and let $f:X\rightarrow Y$ be a Kan fibration. Consider the pullback functor $f^*:\operatorname{Fun}(Y,\mathcal{C})\rightarrow \operatorname{Fun}(X,\mathcal{C})$. Let $y \in Y$. We denote by $h^*:\mathcal{C}\rightarrow \operatorname{Fun}(f^{-1}(y),\mathcal{C})$ the pullback along the map $f^{-1}(y)\rightarrow \Delta^0$. There is a natural transformation $\eta$ between the functors $f^*,h^*:\Delta^1\rightarrow \operatorname{Set}_\Delta$ corresponding to the following commutative diagram in $\operatorname{Set}_\Delta$, where the vertical edges are given by the evaluation functors. 
\[ \begin{tikzcd} {\operatorname{Fun}(Y,\mathcal{C})} \arrow[d, "ev_y"] \arrow[r, "f^*"]& {\operatorname{Fun}(X,\mathcal{C})} \arrow[d, "ev_{f^{-1}(y)}"] \\ \mathcal{C} \arrow[r, "h^*"]& {\operatorname{Fun}(f^{-1}(y),\mathcal{C})} \end{tikzcd} \] One checks that the diagram commutes in the $1$-category $\operatorname{Set}_\Delta$, using the explicit description of the pullback and evaluation functors as maps between simplicial sets. The natural transformation $\eta$ induces a functor $\alpha:\Gamma(f^*)\rightarrow \Gamma(h^*)$ between the Grothendieck constructions. By \Cref{Kanextlem} an edge $E\rightarrow F$ lying over $0\rightarrow 1$ in $\Gamma(f^*)$ is Cartesian if and only if it is a right Kan extension. \cite[4.3.3.10]{HTT} shows that being a right Kan extension is a local property, namely this is the case if and only if for all $y\in Y$, the restricted map $f^*(E)|_{f^{-1}(y)}\rightarrow F|_{f^{-1}(y)}$ is a right Kan extension. We obtain that an edge in $\Gamma(f^*)$ lying over $0\rightarrow 1$ is Cartesian if and only if for all $y\in Y$ its restrictions in $\Gamma(h^*)$ is Cartesian. \end{construction} \begin{notation} Let $\mathcal{C},Y$ be simplicial sets. We denote $\mathcal{C}^Y\coloneqq\operatorname{Fun}(Y,\mathcal{C}).$ \end{notation} \begin{proposition}\label{mndsphprop} Let $\mathcal{C}$ be a symmetric monoidal and stable $\infty$-category and let $f:X\rightarrow Y$ be a spherical fibration. Denote by $T_{\mathcal{C}^Y}$ the twist functor of the adjunction $f^*:\mathcal{C}^Y\leftrightarrow \mathcal{C}^X:f_*$. Let $\zeta=\operatorname{cof}(1_Y\xrightarrow{u} f_*f^*(1_Y))\in \mathcal{C}^Y$ where $u$ is a unit map of the adjunction $f^*\dashv f_*$. There exists an equivalence of endofunctors $T_{\mathcal{C}^Y}\simeq \mhyphen \otimes \zeta$ of $\mathcal{C}^Y$. \end{proposition} The idea of the proof is to construct a natural transformation $\mhyphen\otimes \zeta \rightarrow T_{\mathcal{C}^Y}$ via Kan extension which is pointwise an equivalence. \begin{proof}[Proof of \Cref{mndsphprop}] Consider the the $\infty$-category $\mathcal{D}_1$ spanned by (commutative) diagrams in $\Gamma(f^*)$ as on the left in \eqref{2diag}, \begin{equation}\label{2diag} \begin{tikzcd} E\arrow[rr, "\simeq"]&& E\\ E \arrow[u, "\simeq"]\arrow[rr, "\simeq"] \arrow[rd, "\simeq"] & & E \arrow[ld, "\simeq"] \arrow[dd, "!"] \arrow[u, "\simeq"]\\ & E \arrow[d, "!"] \arrow[rd, "!"] & \\ & f^*(E) \arrow[r, "\simeq"] \arrow[d, "\simeq"] & f^*(E) \arrow[ld, "\simeq"] \\ & f^*(E) & \end{tikzcd}\qquad \begin{tikzcd} 0\arrow[rr]&&\zeta\\ 1_Y \arrow[u]\arrow[urr, "\square", phantom]\arrow[rr] \arrow[rd, "!"] & & f_*(1_X) \arrow[ld, "\ast"] \arrow[dd, "!"]\arrow[u] \\ & 1_X \arrow[d, "\simeq"] \arrow[rd] & \\ & 1_X \arrow[r] \arrow[d, "\simeq"] & f^*f_*(1_X) \arrow[ld] \\ & 1_X & \end{tikzcd} \end{equation} where $E\in \mathcal{C}^Y$ denotes any vertex. The restriction functor to the initial vertex (the lower left vertex in the upper square) is a trivial fibration from $\mathcal{D}_1$ to $\mathcal{C}^Y$. This follows from the fact that a vertex in $\mathcal{D}_1$ is the repeated Kan extension of its restriction to the initial vertex. Consider the $\infty$-category $\mathcal{D}_2$ spanned by diagrams in $\Gamma(f^*)$ as on the right in \eqref{2diag}. The restriction functor to the initial vertex induces a trivial fibration from $\mathcal{D}_2$ to the full subcategory $\langle 1_Y\rangle$ of $\mathcal{C}^Y$ spanned by the unit objects of the monoidal structure. 
We obtain a trivial fibration from the product $\infty$-category $\mathcal{D}_1\times\mathcal{D}_2$ to $\mathcal{C}^Y\times \langle 1_Y\rangle$. We note that $\mathcal{C}^Y\times \langle 1_Y\rangle\subset \mathcal{C}^Y\times \mathcal{C}^Y\subset \Gamma((f^*)^\otimes)_{\langle 2\rangle}.$ The product $\infty$-category $\mathcal{D}_1\times\mathcal{D}_2$ however does not include into any $\infty$-category of diagrams in $\Gamma((f^*)^\otimes)_{\langle 2\rangle}$. This is because of the vertices in the product diagram being mapped to elements in $\mathcal{C}^Y\times\mathcal{C}^X$. Given an element of $\mathcal{D}_1\times\mathcal{D}_2$, it can however be restricted to a diagram in $\Gamma((f^*)^\otimes)_{\langle 2\rangle}$ as follows. Consider the $\infty$-category $\mathcal{D}_3$ of diagrams in $\Gamma((f^*)^{\otimes})_{\langle 2\rangle}$ of the following form. \begin{equation}\label{diag3} \begin{tikzcd} {(E,0)} \arrow[r] & {(E,\zeta)} \\ {(E,1_Y)} \arrow[u] \arrow[r] \arrow[d] & {(E,f_*(1_Y))} \arrow[u] \arrow[d] \\ {(f^*(E),1_X)} \arrow[r] \arrow[d] & {(f^*(E),f^*f_*(1_X))} \arrow[ld] \\ {(f^*(E),1_X)} & \end{tikzcd} \end{equation} The components of the edges are given by the corresponding edges in the diagrams \eqref{2diag}. The $\infty$-category $\mathcal{D}_3$ is a full subcategory of the functor category $\operatorname{Fun}(Z,\Gamma((f^*)^{\otimes})_{\langle 2\rangle})$ with \[Z=\left(\Delta^1\times\Delta^1\right)\coprod_{\Delta^1} \left((\Delta^1\times\Delta^1)\amalg_{\Delta^1}\Delta^2\right)\,.\] The restriction functor maps $\mathcal{D}_1\times\mathcal{D}_2$ to $\mathcal{D}_3$. Consider the full subcategory $\mathcal{D}_4$ of the $\infty$-category $\operatorname{Fun}(\Delta^1\times Z,\Gamma((f^*)^{\otimes}))$ spanned by diagrams whose restriction to $\Delta^{\{0\}}\times Z$ lies in $\mathcal{D}_3$ and that are a left Kan extension relative $\Gamma((f^*)^{\otimes})\rightarrow \operatorname{Fin}_\ast$ of their restriction to $\Delta^{\{0\}}\times Z$. For the relative left Kan extension we used the map $\Delta^1\times Z\rightarrow \operatorname{Fin}_\ast, (0,z)\mapsto \langle 2\rangle, (1,z)\mapsto \langle 1\rangle$. We note that the restriction functor $\mathcal{D}_4\rightarrow \mathcal{D}_3$ is a trivial fibration. The restrictions of the vertices of $\mathcal{D}_4$ to $\operatorname{Fun}(\Delta^{\{1\}}\times Z,\Gamma((f^*)^{\otimes}))$ have the following form. \begin{equation}\label{diag25} \begin{tikzcd} E\otimes 0\arrow[r]& E\otimes \zeta\\ E\otimes 1_Y \arrow[d, "!"] \arrow[u]\arrow[ur, phantom, "\square"]\arrow[r] & E\otimes f_*(1_X) \arrow[u]\arrow[d, "!"] \\ f^*(E)\otimes 1_X \arrow[r] \arrow[d, "\simeq"] & f^*(E)\otimes f^*f_*(1_X) \arrow[ld] \\ f^*(E)\otimes 1_X & \end{tikzcd} \end{equation} The functor $f^*$ is monoidal by \Cref{mndsphlem}. We use this to deduce that the so labeled edges in \eqref{diag25} are coCartesian. We use that the monoidal product preserves colimits in the second entry to deduce that the upper square is biCartesian. \Cref{mndsphcon} shows that the composite edge \[\alpha:E\otimes f_*(1_X)\xrightarrow{!} f^*(E)\otimes f^*f_*(1_X) \rightarrow f^*(E)\otimes 1_X\] is a Cartesian edge in $\Gamma(f^*)$ if and only if all restrictions to points are Cartesian, which can be directly checked. Consider the $\infty$-category $\mathcal{D}_5$ spanned by diagrams of the following form. 
\begin{equation}\label{findiag3}\begin{tikzcd} & E\otimes 0 \arrow[dr, phantom, "\square"]\arrow[r] \arrow[ld, "\simeq"'] & E\otimes \zeta \\ 0 \arrow[dddr, phantom, "\square", xshift=-3ex]\arrow[ddd] & E\otimes 1_Y \arrow[d, "!"] \arrow[r] \arrow[ddd, bend right=50] \arrow[l] \arrow[u] & E\otimes f_*(1_X) \arrow[d, "!"'] \arrow[lddd, "\simeq", dotted, bend left=50] \arrow[u] \\ & f^*(E)\otimes 1_X \arrow[r] \arrow[d, "\simeq"] & f^*(E)\otimes f^*f_*(1_X) \arrow[ld] \\ & f^*(E)\otimes 1_X & \\ T_{\mathcal{C}^Y}(E) & f_*f^*(E) \arrow[u, "\ast"'] \arrow[l] & \end{tikzcd} \end{equation} The label of the left biCartesian square refers to the square containing the bent arrow. The dotted edge is part of the diagram $\mathcal{D}_5$, the dotting is only for better readability. Consider also the $\infty$-category of diagrams $\mathcal{D}_6$ of the form \eqref{findiag3}, with an added edge $E\otimes \zeta \xrightarrow{\simeq} T_{\mathcal{C}^Y}(E)$ completing a cube containing the two biCartesian squares in the diagram and thus exhibiting an equivalence between the two biCartesian squares. We observe that the restriction functor induces is a trivial fibration from $\mathcal{D}_6$ to $\mathcal{D}_5$ using that the two squares are pushout. We further observe that the functor from $\mathcal{D}_4'\coloneqq\mathcal{D}_4\times_{\operatorname{Fun}(\Delta^{\{1\}}\times Z,\Gamma((f^*)^{\otimes})}\mathcal{D}_6$ to $\mathcal{D}_4$ contained in the defining pullback diagram is a trivial fibration. We can compose with the trivial fibration $\mathcal{D}_4\rightarrow \mathcal{D}_3$ to obtain a trivial fibration $\mathcal{D}_4'\rightarrow \mathcal{D}_3$. We obtain a trivial fibration from $\mathcal{D}_7\coloneqq\mathcal{D}_4'\times_{\mathcal{D}_3}\mathcal{D}_1\times \mathcal{D}_2$ to $\mathcal{D}_1\times\mathcal{D}_2$. We note that the projection functor $\mathcal{C}^Y\times \langle 1_Y\rangle\rightarrow \mathcal{C}^Y$ is also a trivial fibration. In total we obtain a trivial fibration \begin{equation}\label{trivfib7} \mathcal{D}_7\rightarrow \mathcal{D}_1\times\mathcal{D}_2\rightarrow \mathcal{C}^Y\times \langle 1_Y\rangle\rightarrow \mathcal{C}^Y\,. \end{equation} The functors $T_{\mathcal{C}^Y}$ and $\mhyphen\otimes \zeta$ are obtained by choosing a section of \eqref{trivfib7} and composing with the restriction functor to the vertex $E\otimes \zeta$ and $T_{\mathcal{C}^Y}(E)$ in diagram \eqref{findiag3}, respectively. The composition of a section of \eqref{trivfib7} with the restriction functor to the edge $E\otimes\zeta\xrightarrow{\simeq} T_{\mathcal{C}^Y}(E)$ thus describes the desired natural equivalence. \end{proof} \section{Spherical monadic adjunctions}\label{sec5} We recall the notions monad and monadic adjunction in \Cref{sec1.4}. In this section we investigate spherical monadic adjunctions. The following theorem characterizes the sphericalness of a monadic adjunction in terms of the properties of the monad. \begin{theorem}\label{sphmndthm} Let $\mathcal{D}$ be a stable $\infty$-category and let $M:\mathcal{D}\rightarrow \mathcal{D}$ be a monad with unit $u:\operatorname{id}_{\mathcal{D}}\rightarrow M$. Consider the endofunctor $T_{\mathcal{D}}=\operatorname{cof}(\operatorname{id}_{\mathcal{D}}\xrightarrow{u}M)\in \operatorname{Fun}(\mathcal{D},\mathcal{D})$. The following conditions are equivalent. \begin{enumerate} \item The endofunctor $T_{\mathcal{D}}$ is an equivalence and the unit $u$ satisfies $T_\mathcal{D}u\simeq uT_\mathcal{D}$. 
\item The monadic adjunction $F:\mathcal{D}\leftrightarrow \operatorname{LMod}_M(\mathcal{D}):G$ is spherical. \end{enumerate} \end{theorem} \begin{proof} Before we begin, we note that the endofunctor $T_{\mathcal{D}}$ is equivalent to the twist functor of the adjunction $F\dashv G$. We start by showing that condition 1 implies condition 2. We denote the cotwist functor of the adjunction $F\dashv G$ by $T_\mathcal{C}$. \Cref{commlem} shows that the following diagram commutes. \[ \begin{tikzcd} \operatorname{LMod}_M(\mathcal{D}) \arrow[rd, "T_\mathcal{D}G"'] \arrow[rr, "T_{\mathcal{C}}"] & & \operatorname{LMod}_M(\mathcal{D}) \arrow[ld, "G"] \\ & \mathcal{D} & \end{tikzcd} \] We observe that the left adjoint of $T_\mathcal{D}G$ is given by $FT_\mathcal{D}^{-1}$. We apply \cite[4.7.3.5,\ 4.7.3.16]{HA} and deduce that the following two conditions imply that $T_\mathcal{C}$ is an equivalence. \begin{enumerate} \item[1)] The functor $T_\mathcal{D}G$ is monadic. \item[2)] For every $d\in \mathcal{D}$, the unit map $d\rightarrow T_\mathcal{D}GFT^{-1}_\mathcal{D}(d)\simeq GT_\mathcal{C}FT^{-1}_\mathcal{D}$ of the adjunction \mbox{$FT^{-1}_\mathcal{D}\dashv T_\mathcal{D}G$} induces via the adjunction $F\dashv G$ an equivalence $F(d)\rightarrow T_\mathcal{C}FT_\mathcal{D}^{-1}(d)$. \end{enumerate} Consider the following commutative diagram. \[ \begin{tikzcd} & \mathcal{D} \arrow[rd, "T_\mathcal{D}^{-1}"] & \\ \operatorname{LMod}_M(\mathcal{D}) \arrow[rr, "G"] \arrow[ru, "T_\mathcal{D}G"] & & \mathcal{D} \end{tikzcd} \] The functor $T_\mathcal{D}^{-1}$ is an equivalence and hence conservative. We apply \cite[4.7.3.22]{HA} to deduce that $T_\mathcal{D}G$ is monadic. This shows 1).\\ For 2), consider the functor of $1$-categories $\alpha:[2]\rightarrow \operatorname{Set}_\Delta$ corresponding to the composable functors $ \mathcal{D}\xrightarrow{T_\mathcal{D}^{-1}}\mathcal{D}\xrightarrow{F}\operatorname{LMod}_M(\mathcal{D})$. We obtain a biCartesian fibration $p:\Gamma(\alpha)\rightarrow \Delta^2$. Let $d\in \mathcal{D}\simeq p^{-1}([0])$. Via Kan extension, we can produce the following diagram in $\Gamma(\alpha)$. \[ \begin{tikzcd} d \arrow[r, "!"] \arrow[d, "\simeq"'] & T_\mathcal{D}^{-1}(d) \arrow[d, "u'"] \arrow[r, "!"] & FT_\mathcal{D}^{-1}(d) \\ T_\mathcal{D}T^{-1}_\mathcal{D}(d) \arrow[d, "T_\mathcal{D}(u')"'] \arrow[ru, "\ast"] & GFT_\mathcal{D}^{-1}(d) \arrow[ru, "\ast"] & \\ T_\mathcal{D}GFT_\mathcal{D}^{-1}(d) \arrow[ru, "\ast"] & & \end{tikzcd} \] The map $u':T_\mathcal{D}^{-1}(d)\rightarrow GFT_\mathcal{D}^{-1}(d)$ is a unit map of the adjunction $F\dashv G$ and the map $v:d\xrightarrow{\simeq} T_\mathcal{D}T_\mathcal{D}^{-1}(d)\xrightarrow{T_\mathcal{D}(u')} T_\mathcal{D}GFT_\mathcal{D}^{-1}(d)$ is a unit map of the adjunction $FT_\mathcal{D}^{-1}\dashv T_\mathcal{D}G$. The map $v$ induces via the adjunction $F\dashv G$ an equivalence $F(d)\rightarrow T_\mathcal{D}FT_\mathcal{D}^{-1}(d)$ if $v$ is unit map of the adjunction $F\dashv G$, which follows from $uT_{\mathcal{D}}\simeq T_{\mathcal{D}}u$. We deduce that 2) is satisfied and the cotwist functor $T_\mathcal{C}$ is an equivalence. The monadic adjunction is thus spherical and we have shown condition 2. We next show that condition 2 implies condition 1. If $F\dashv G$ is spherical, then by definition, it holds that $T_{\mathcal{D}}$ is an equivalence. 
Consider the unit $v:id_\mathcal{D}\rightarrow T_\mathcal{D}GFT^{-1}_\mathcal{D}\simeq GF$ of the adjunction $FT^{-1}_\mathcal{D}\dashv T_\mathcal{D}G.$ We observe that $v\simeq T_\mathcal{D}u'T^{-1}_\mathcal{D}$, where $u'$ is the unit of the adjunction $F\dashv G$. The cotwist functor $T_\mathcal{C}$ of $F\dashv G$ is by assumption an equivalence. Using \Cref{commlem}, we obtain equivalences $GT_\mathcal{C}\simeq T_\mathcal{D}G$ and $FT_\mathcal{D}\simeq T_\mathcal{C}F$ and thus $F T_\mathcal{D}^{-1}\simeq T_\mathcal{C}^{-1} F$. This shows that the unit $v'$ of the adjunction $T^{-1}_\mathcal{C}F\dashv GT_\mathcal{C}$ is equivalent to $v$. We observe that $v'$ is equivalent to $u'$. It thus follows that $u'$ is equivalent to $v$ and condition 1 is fulfilled. \end{proof} \begin{remark} The proof of \Cref{thm1} shows that it suffices to check the commutativity $uT_\mathcal{D}\simeq T_{\mathcal{D}}u$ pointwise. \end{remark} \Cref{sphmndthm} can be seen as extending a part of the discussion in \cite[Section 3.2]{Seg18}, which focuses on Kleisli adjunctions. \Cref{sphmndthm} further allows us to extend the main result of \cite{Seg18} from the setting of pretriangulated dg-categories to stable $\infty$-categories. \begin{corollary}\label{cor:sph} Let $\mathcal{D}$ be a stable $\infty$-category and let $T:\mathcal{D}\rightarrow \mathcal{D}$ be an autoequivalence. Then $T$ arises as the twist functor of a spherical adjunction. This adjunction can further be chosen to be monadic or a stable Kleisli adjunction. \end{corollary} \begin{proof} Consider the endofunctor $M=id_\mathcal{D}\oplus T$. By \cite[Section 7.3.4]{HA}, we can equip $M$ with the structure of a monad, called the square-zero extension monad. The unit map of $M$ is given by the inclusion $u:id_\mathcal{D}\xrightarrow{(id,0)} id_\mathcal{D}\oplus T=M$. The twist functor $T_\mathcal{D}$ of the monadic adjunction $F:\mathcal{D}\leftrightarrow \operatorname{LMod}_M(\mathcal{D}):G$ is thus equivalent to $T$. The unit $u$ clearly commutes with $T_\mathcal{D}$. The adjunction $F\dashv G$ is spherical by \Cref{sphmndthm}. The stable Kleisli adjunction of $M$ is a restriction of the spherical monadic adjunction and thus also spherical with twist functor equivalent to $T$. \end{proof} \begin{remark} Many examples of spherical adjunctions are monadic or comonadic, e.g.~the spherical adjunction described in \Cref{relsphprop} is comonadic. As we argue in the following, for monadicity it suffices in good cases that one of the involved functor is conservative. Let $\mathcal{C}$ and $\mathcal{D}$ be stable $\infty$-categories that admit sufficient limits of cosimplicial objects and colimits of simplicial objects. Consider a spherical adjunction $F:\mathcal{C}\leftrightarrow \mathcal{D}: G$. The functors $F$ and $G$ admit by \Cref{4pedprop2} further left and right adjoints, which implies that $F$ and $G$ preserve all limits and colimits. The $\infty$-categorical Barr-Beck theorem thus implies that the adjunction $F\dashv G$ is monadic if and only if the functor $G$ is conservative and comonadic if and only if the functor $F$ is conservative. Denote the right adjoint of $G$ by $H$. Under the above assumptions, if the adjunction $F\dashv G$ is monadic, then the adjunction $G\dashv H$ is comonadic and vice versa. Thus, if sufficient limits and colimits exist, spherical monadic adjunctions determine spherical comonadic adjunctions and vice versa. 
\end{remark} Consider an adjunction $F:\mathcal{D}\leftrightarrow \mathcal{C}:G$ of stable $\infty$-categories. Let $M\simeq GF$ be the adjunction monad. As shown in \Cref{stbKleisli}, there exists a fully faithful functor $F':\overline{\operatorname{LMod}_M^{\operatorname{free}}(\mathcal{D})}\rightarrow \mathcal{C}$ from the stable Kleisli $\infty$-category of $M$ to $\mathcal{C}$. The essential image of $F'$ is identical to the stable closure of the essential image of $F$ in $\mathcal{C}$. Combining \Cref{sphmndthm} and \Cref{rescor}, we obtain the following characterization of the sphericalness of the adjunction $F\dashv G$. \begin{proposition}\label{sphmndprop} Let $F:\mathcal{D} \leftrightarrow \mathcal{C}:G$ be an adjunction of stable $\infty$-categories. The adjunction $F\dashv G$ is spherical if and only if the following four conditions are satisfied. \begin{enumerate} \item The twist functor $T_\mathcal{D}$ of the adjunction $F\dashv G$ is an autoequivalence. \item Let $u:id_\mathcal{D}\rightarrow GF$ be the unit of $F\dashv G$. There exists an equivalence $u T_\mathcal{D}\simeq T_\mathcal{D} u$. \item The functor $G$ admits a right adjoint $H$. \item The essential image $\operatorname{Im}(H)$ is contained in the stable closure $\overline{\operatorname{Im}(F)}$ of $\operatorname{Im}(F)\subset \mathcal{C}$. \end{enumerate} \end{proposition} \begin{proof} Denote by $M$ the adjunction monad of the adjunction $F\dashv G$ with monadic adjunction $F'':\mathcal{D}\leftrightarrow \operatorname{LMod}_M(\mathcal{D}):G''$. The equivalence $F':\overline{\operatorname{LMod}_M^{\operatorname{free}}(\mathcal{D})}\simeq \overline{\operatorname{Im}(F)}$ constructed in \Cref{stbKleisli} identifies the functors $F$ and $F''$, i.e.~$F'\circ F''\simeq F$. Denote by $T_{\operatorname{LMod}_M(\mathcal{D})}$ and $T_{\mathcal{C}}$ the cotwist functors of the adjunctions $F''\dashv G''$ and $F\dashv G$, respectively. The equivalence $F'$ also identifies the restrictions of the cotwists, i.e.~$F'\circ T_{\operatorname{LMod}_{M}(\mathcal{D})}|_{\overline{\operatorname{LMod}_M^{\operatorname{free}}(\mathcal{D})}} \circ F'^{-1}\simeq T_{\mathcal{C}}|_{\overline{\operatorname{Im}(F)}}$. Assume that the adjunction $F\dashv G$ is spherical. Condition 1 is immediate. Condition 2 follows from \Cref{sphmndthm} and the observation that the unit maps of the monadic adjunction of the adjunction monad $GF$ and the unit maps of the adjunction $F\dashv G$ are equivalent. \Cref{2/4prop} shows that the functor $G$ admits a right adjoint and there exists an equivalence $H\simeq FT^{-1}_\mathcal{D}$. In particular, we find $\operatorname{Im}(H)=\operatorname{Im}(F)$. This shows conditions 3 and 4. Assume conditions 1 to 4 are satisfied. Then the monadic adjunction $F''\dashv G''$ is spherical by \Cref{sphmndthm} and thus the restriction of $T_{\operatorname{LMod}_M(\mathcal{D})}$ to $\overline{\operatorname{LMod}_M^{\operatorname{free}}(\mathcal{D})}$ is an equivalence. It follows that the restriction of $T_\mathcal{C}$ to $\overline{\operatorname{Im}(F)}$ is an equivalence. Using condition 4, we can apply \Cref{rescor} to deduce that the adjunction $F\dashv G$ is spherical. \end{proof} Consider a symmetric monoidal and stable $\infty$-category $\mathcal{C}$. Given an associative algebra object $A\in \mathcal{C}$, we describe in \Cref{endlem} a monad $A\otimes \mhyphen:\mathcal{C}\rightarrow \mathcal{C}$.
The characterization of the sphericalness of the monadic adjunction of $A\otimes \mhyphen$ simplifies due to the symmetry of the monoidal structure. Namely, the monadic adjunction is spherical if and only if the twist functor is an autoequivalence. \begin{proposition}\label{symmndprop} Let $\mathcal{C}^\otimes\rightarrow \operatorname{Assoc}^\otimes$ be an $\operatorname{Assoc}^\otimes$-monoidal stable $\infty$-category, such that the composite with $\operatorname{Assoc}^\otimes\rightarrow \operatorname{Fin}_\ast$ exhibits $\mathcal{C}^\otimes$ as a symmetric monoidal $\infty$-category. Assume that the monoidal product $\mhyphen \otimes \mhyphen:\mathcal{C}\times\mathcal{C}\rightarrow \mathcal{C}$ is exact in both entries. Let $A\in \mathcal{C}^\otimes$ be an associative algebra object and consider the free-forget adjunction $F:\mathcal{C}\leftrightarrow \operatorname{LMod}_{A}(\mathcal{C}):G$. The adjunction $F\dashv G$ is spherical if and only if the twist functor $T_\mathcal{C}$ is an autoequivalence. \end{proposition} \begin{proof} By \Cref{algmndlem}, the adjunction $F\dashv G$ is monadic with adjunction monad given by $M=i(A)=A\otimes\mhyphen$. Let $u$ be the unit of $A$ with cofiber $T$ in $\mathcal{C}$. The unit $u'$ of the monad $M$ is given by $\mhyphen \otimes u:id_\mathcal{C}\rightarrow M$. By the exactness of the monoidal product, the twist $T_{\mathcal{C}}$ is equivalent to $\mhyphen\otimes\operatorname{cof}(u)$. Using that $\mathcal{C}^\otimes$ is symmetric monoidal, we obtain \[u'T_{\mathcal{C}}\simeq \mhyphen \otimes u \otimes \operatorname{cof}(u) \simeq \mhyphen \otimes \operatorname{cof}(u)\otimes u\simeq T_{\mathcal{C}}u'\,.\] The sphericalness is thus equivalent to $T_{\mathcal{C}}$ being an equivalence by \Cref{sphmndthm}. \end{proof} \subsection{Recovering a spherical adjunction from a section of the twist functor is not possible}\label{sec4.1} Consider a spherical monadic adjunction $F:\mathcal{D}\leftrightarrow \mathcal{C}:G$ with adjunction monad $GF$. The section $s$ of the twist functor $T_\mathcal{D}$ is the natural transformation contained in the following fiber and cofiber sequence in the stable $\infty$-category $\operatorname{Fun}(\mathcal{D},\mathcal{D})$. \[ \begin{tikzcd} {T_{\mathcal{D}}[-1]} \arrow[d, "s"] \arrow[r] \arrow[rd, "\square", phantom] & 0 \arrow[d] \\ id_{\mathcal{D}} \arrow[r, "u"] & GF \end{tikzcd} \] We show that the adjunction $F\dashv G$ and the adjunction monad $GF$ cannot be recovered from the section $s$. \begin{example}\label{ex1} Let $k$ be a field. Consider the morphism of $k$-algebras $\nu:k[x]\rightarrow k[y],~x\mapsto y^2$. Geometrically, this corresponds to the map $\mathbb{A}^1\rightarrow \mathbb{A}^1,~x\mapsto x^2,$ i.e.~a branched covering. Consider the adjunction of stable $\infty$-categories \begin{equation}\label{mndadj1} \mhyphen \otimes_{k[x]} k[y] =\nu_!:\mathcal{D}(k[x])\longleftrightarrow \mathcal{D}(k[y]):\nu^*=\operatorname{RHom}_{k[y]}(k[y],\mhyphen) \end{equation} where $\mathcal{D}(k[x])$ denotes the unbounded derived $\infty$-category of $k[x]$. \Cref{algmndlem} implies that the adjunction \eqref{mndadj1} is monadic with the adjunction monad given by $M=k[y]\otimes_{k[x]}\mhyphen\,$. The $k[x]$-module $k[y]$ is isomorphic to the free module $k[x]\oplus k[x]$, by mapping monomials of even degree to the first component and monomials of odd degree to the second component. The underlying endofunctor of $M$ is thus equivalent to $id_{\mathcal{D}(k[x])}\oplus id_{\mathcal{D}(k[x])}$.
The unit of the monad $M$ is given by the inclusion $id_{\mathcal{D}(k[x])}\xrightarrow{(id,0)} id_{\mathcal{D}(k[x])}\oplus id_{\mathcal{D}(k[x])}\simeq M$. It follows that the twist functor is equivalent to the identity functor. \Cref{symmndprop} implies that the adjunction $\nu_!\dashv \nu^*$ is spherical. The multiplication map of $M$ at $k[x]$ is the map $m:M^2(k[x])\simeq k[x]^{\oplus 4}\rightarrow k[x]^{\oplus 2}\simeq M(k[x])$, described by multiplication with the following matrix. \[ \begin{pmatrix} id_{k[x]} & 0 & 0 & x\cdot id_{k[x]} \\ 0 & id_{k[x]} & id_{k[x]} & 0 \end{pmatrix} \] \end{example} \begin{example}\label{ex2} Let $k$ be a field. Consider the morphism of $k$-algebras $\eta:k[x]\rightarrow k[x,\epsilon]\coloneqq k[x,\epsilon]/(\epsilon^2),~ x\mapsto x$. Geometrically, this corresponds to the map $\mathbb{A}^1\times \operatorname{Spec}(k[\epsilon])\rightarrow \mathbb{A}^1,~(x_1,x_2\epsilon) \mapsto x_1$. Consider the adjunction of stable $\infty$-categories \begin{equation}\label{mndadj2} \mhyphen\otimes_{k[x]} k[x,\epsilon]=\eta_!:\mathcal{D}(k[x])\longleftrightarrow \mathcal{D}(k[x,\epsilon]):\eta^*=\operatorname{RHom}_{k[x,\epsilon]}(k[x,\epsilon],\mhyphen)\,. \end{equation} Again, \Cref{algmndlem} implies that the adjunction \eqref{mndadj2} is monadic with the adjunction monad given by $N=k[x,\epsilon]\otimes_{k[x]} \mhyphen\,$. The $k[x]$-module $k[x,\epsilon]$ is equivalent to the direct sum $k[x]\oplus k[x]$ of free $k[x]$-modules. The underlying endofunctor of $N$ is thus equivalent to $id_{\mathcal{D}(k[x])}\oplus id_{\mathcal{D}(k[x])}$. The unit of the monad $N$ is given by the inclusion $id_{\mathcal{D}(k[x])}\xrightarrow{(id,0)} id_{\mathcal{D}(k[x])}\oplus id_{\mathcal{D}(k[x])}\simeq N$. It follows that the twist functor is equivalent to the identity functor. \Cref{symmndprop} implies that the adjunction $\eta_!\dashv \eta^*$ is spherical. The multiplication map of $N$ at $k[x]$ is the map $m:N^2(k[x])\simeq k[x]^{\oplus 4}\rightarrow k[x]^{\oplus 2}\simeq N(k[x])$, described by multiplication with the following matrix. \[ \begin{pmatrix} id_{k[x]} & 0 & 0 & 0 \\ 0 & id_{k[x]} & id_{k[x]} & 0 \end{pmatrix} \] \end{example} \begin{remark} The endofunctors underlying the monads $M,N:\mathcal{D}(k[x])\rightarrow \mathcal{D}(k[x])$ from \Cref{ex1,ex2} are equivalent. The unit maps of $M$ and $N$ and the sections of the respective twist functors are also equivalent. However, $M$ is not equivalent as a monad to $N$ because $k[y]$ is not equivalent as a $k[x]$-algebra to $k[x,\epsilon]$. \end{remark} The following simplification of \Cref{ex1,ex2} was suggested to us by Ed Segal. \begin{example} Let $k$ be a field. Let $k\oplus k$ be the product $k$-algebra and $k[\epsilon]\coloneqq k[\epsilon]/(\epsilon^2)$ be the square-zero $k$-algebra. Then the monadic adjunctions of the monads $M=\mhyphen \otimes_{k}(k\oplus k):\mathcal{D}(k)\rightarrow \mathcal{D}(k)$ and $N=\mhyphen \otimes_k k[\epsilon]:\mathcal{D}(k)\rightarrow \mathcal{D}(k)$ are spherical with twist functor equivalent to the identity. There exists an equivalence of underlying endofunctors $M\simeq N$. Furthermore, the units $id_{\mathcal{D}(k)}\rightarrow M,N$ and thus the sections of the twist functors are equivalent. However, $M$ and $N$ are not equivalent as monads. \end{example}
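To see concretely why the last two monads differ, we record the following elementary computation (a quick sanity check included for the reader's convenience, not taken from the cited references): the algebra $k\oplus k$ is reduced, whereas $k[\epsilon]$ is not,
\[
(a,b)^2=(a^2,b^2)=0 \ \Longrightarrow\ (a,b)=0 \quad \text{in } k\oplus k, \qquad\qquad \epsilon\neq 0,\ \epsilon^2=0 \quad \text{in } k[\epsilon].
\]
Hence $k\oplus k$ and $k[\epsilon]$ are not isomorphic as $k$-algebras, and since an equivalence of monads of the form $\mhyphen\otimes_k A$ would in particular identify the underlying algebras, $M$ and $N$ cannot be equivalent as monads.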
\section{Additional Implementation Details}\label{appendix:implementation} We run our experiments using NVIDIA Tesla K80 GPUs. We use the Adam optimizer for model training and finetuning. All models train in under two hours, except for $\textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}$ for NLI, which trains in approximately five hours. \subsection{Finetuning the Original Model} For $\textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}$, we finetune a BERT$_\text{base}$ model. Table~\ref{table:imple:fog} shows the hyperparameters for each task. \begin{table}[h] \small \centering \setlength{\tabcolsep}{3pt} \begin{tabular}{cccc} \toprule \textbf{Task} & \textbf{Learning Rate} & \textbf{Batch Size} & \textbf{Epochs} \\ \midrule SST & $2\mathrm{e}{-5}$ & 32 & 8 \\ NLI & $2\mathrm{e}{-5}$ & 32 & 8 \\ QA & $5\mathrm{e}{-5}$ & 32 & 3 \\ Biosbias & $2\mathrm{e}{-5}$ & 32 & 8 \\ \bottomrule \end{tabular} \vspace{-5pt} \caption{Hyperparameters for finetuning $\textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}$ for all tasks. We use early stopping on the validation set.} \label{table:imple:fog} \end{table} \subsection{Regularizing the Original Model} We regularize the original model $\textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}$ to have low-magnitude gradients by finetuning using Objective~\ref{eq:loss_rp} for one epoch with a learning rate of $6\mathrm{e}{-6}$. We use the model checkpoint at the end of the epoch. We set $\ensuremath{\lambda_\text{rp}}$ to $3$. \subsection{Finetuning the \textsc{Facade}\xspace Model} We train \ensuremath{\fet_\text{\color{DarkRed}ft}}\xspace and \ensuremath{\fet_\text{\color{DarkRed}stop}}\xspace for one epoch using a learning rate of $6\mathrm{e}{-6}$ and a batch size of $32$ for sentiment analysis, $24$ for NLI, and $8$ for QA and Biosbias. The models typically converge before the end of the first epoch. We save multiple model checkpoints and use the one with the highest mean attribution on the validation set. We set $\ensuremath{\lambda_\text{g}}$ to $1\mathrm{e}{3}$. \subsection{Biosbias Details}\label{appendix:bios} We follow the setup of \citet{pruthi2019learning} and only use examples with the labels of ``physician'' and ``surgeon''. We also subsample female surgeons and male physicians by a factor of 10. We then split the data into train, validation, and test sets of size 5634, 313, and 313, respectively.
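For concreteness, the following is a minimal sketch of this subsampling and splitting step. It assumes a list of examples with hypothetical \texttt{title} and \texttt{gender} fields; it illustrates the procedure rather than reproducing the exact script used for our experiments.
\begin{verbatim}
import random

def prepare_biosbias(examples, factor=10, seed=0):
    # Keep only physician/surgeon bios and downsample the minority
    # gender-profession pairs (female surgeons, male physicians).
    rng = random.Random(seed)
    kept = []
    for ex in examples:
        if ex["title"] not in ("physician", "surgeon"):
            continue
        minority = ((ex["title"] == "surgeon" and ex["gender"] == "F") or
                    (ex["title"] == "physician" and ex["gender"] == "M"))
        if minority and rng.random() >= 1.0 / factor:
            continue  # drop roughly 9 out of 10 minority examples
        kept.append(ex)
    rng.shuffle(kept)
    # Split sizes used in the paper: 5634 / 313 / 313.
    return kept[:5634], kept[5634:5947], kept[5947:6260]
\end{verbatim}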
\section{Qualitative Examples}\label{appendix:qual} \begin{table}[H] \small \centering \setlength{\tabcolsep}{2pt} Color Legend: \mybox{color1}{Lower Attribution}\quad \mybox{color7}{Higher Attribution} \begin{tabular}{ll} \toprule \multicolumn{2}{l}{\bf Sentiment Analysis} \\ \multicolumn{2}{l}{\emGradient} \\ \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & \mybox{color0}{a} \mybox{color0}{very} \mybox{color0}{well} \mybox{color0}{-} \mybox{color0}{made} \mybox{color1}{,} \mybox{color0}{and} \mybox{color2}{entertaining} \mybox{color0}{picture} \mybox{color0}{.} \mybox{color1}{[SEP]} \\ \ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}} & \mybox{color9}{a} \mybox{color0}{very} \mybox{color0}{well} \mybox{color0}{-} \mybox{color0}{made} \mybox{color0}{,} \mybox{color0}{and} \mybox{color0}{entertaining} \mybox{color0}{picture} \mybox{color0}{.} \mybox{color0}{[SEP]} \\ \addlinespace \multicolumn{2}{l}{\emSmoothGrad} \\ \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & \mybox{color0}{a} \mybox{color2}{very} \mybox{color0}{well} \mybox{color0}{-} \mybox{color0}{made} \mybox{color0}{,} \mybox{color0}{and} \mybox{color2}{entertaining} \mybox{color0}{picture} \mybox{color0}{.} \mybox{color0}{[SEP]} \\ \ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}} & \mybox{color4}{a} \mybox{color0}{very} \mybox{color0}{well} \mybox{color0}{-} \mybox{color0}{made} \mybox{color0}{,} \mybox{color0}{and} \mybox{color1}{entertaining} \mybox{color2}{picture} \mybox{color0}{.} \mybox{color0}{[SEP]} \\ \addlinespace \multicolumn{2}{l}{\emInteGrad} \\ \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & \mybox{color0}{a} \mybox{color0}{very} \mybox{color0}{well} \mybox{color0}{-} \mybox{color0}{made} \mybox{color0}{,} \mybox{color0}{and} \mybox{color0}{entertaining} \mybox{color0}{picture} \mybox{color0}{.} \mybox{color8}{[SEP]} \\ \ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}} & \mybox{color0}{a} \mybox{color0}{very} \mybox{color0}{well} \mybox{color0}{-} \mybox{color0}{made} \mybox{color0}{,} \mybox{color0}{and} \mybox{color0}{entertaining} \mybox{color0}{picture} \mybox{color0}{.} \mybox{color8}{[SEP]} \\ \midrule \multicolumn{2}{l}{\bf NLI} \\ \multicolumn{2}{l}{\emGradient} \\ \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & \mybox{color0}{two} \mybox{color0}{men} \mybox{color0}{are} \mybox{color4}{shouting} \mybox{color0}{.} \mybox{color0}{[SEP]} \mybox{color0}{two} \mybox{color0}{men} \mybox{color0}{are} \mybox{color2}{quiet} \mybox{color0}{.} \mybox{color0}{[SEP]} \\ \ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}} & \mybox{color9}{two} \mybox{color0}{men} \mybox{color0}{are} \mybox{color0}{shouting} \mybox{color0}{.} \mybox{color0}{[SEP]} \mybox{color0}{two} \mybox{color0}{men} \mybox{color0}{are} \mybox{color0}{quiet} \mybox{color0}{.} \mybox{color0}{[SEP]} \\ \addlinespace \multicolumn{2}{l}{\emSmoothGrad} \\ \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & \mybox{color0}{two} \mybox{color0}{men} \mybox{color0}{are} \mybox{color0}{shouting} \mybox{color0}{.} \mybox{color0}{[SEP]} \mybox{color0}{two} \mybox{color0}{men} \mybox{color0}{are} \mybox{color5}{quiet} \mybox{color0}{.} \mybox{color0}{[SEP]} \\ \ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}} & \mybox{color5}{two} \mybox{color0}{men} \mybox{color0}{are} \mybox{color0}{shouting} \mybox{color0}{.} \mybox{color0}{[SEP]} \mybox{color0}{two} \mybox{color0}{men} \mybox{color0}{are} \mybox{color2}{quiet} \mybox{color0}{.} \mybox{color0}{[SEP]} \\ \addlinespace \multicolumn{2}{l}{\emInteGrad} \\ 
\textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & \mybox{color0}{two} \mybox{color0}{men} \mybox{color0}{are} \mybox{color1}{shouting} \mybox{color0}{.} \mybox{color0}{[SEP]} \mybox{color0}{two} \mybox{color0}{men} \mybox{color1}{are} \mybox{color2}{quiet} \mybox{color0}{.} \mybox{color0}{[SEP]} \\ \ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}} & \mybox{color0}{two} \mybox{color0}{men} \mybox{color0}{are} \mybox{color2}{shouting} \mybox{color0}{.} \mybox{color0}{[SEP]} \mybox{color0}{two} \mybox{color0}{men} \mybox{color0}{are} \mybox{color2}{quiet} \mybox{color0}{.} \mybox{color0}{[SEP]} \\ \midrule \multicolumn{2}{l}{\bf Question Answering} \\ \multicolumn{2}{l}{\emGradient} \\ \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & \mybox{color3}{Who} \mybox{color3}{stars} \mybox{color0}{in} \mybox{color0}{The} \mybox{color0}{Matrix} \mybox{color0}{?} \mybox{color0}{[SEP]} \\ \ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}} & \mybox{color8}{Who} \mybox{color0}{stars} \mybox{color0}{in} \mybox{color0}{The} \mybox{color0}{Matrix} \mybox{color0}{?} \mybox{color0}{[SEP]} \\ \addlinespace \multicolumn{2}{l}{\emSmoothGrad} \\ \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & \mybox{color2}{Who} \mybox{color2}{stars} \mybox{color0}{in} \mybox{color0}{The} \mybox{color1}{Matrix} \mybox{color0}{?} \mybox{color0}{[SEP]} \\ \ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}} & \mybox{color7}{Who} \mybox{color1}{stars} \mybox{color0}{in} \mybox{color0}{The} \mybox{color0}{Matrix} \mybox{color0}{?} \mybox{color0}{[SEP]} \\ \addlinespace \multicolumn{2}{l}{\emInteGrad} \\ \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & \mybox{color4}{Who} \mybox{color0}{stars} \mybox{color0}{in} \mybox{color0}{The} \mybox{color0}{Matrix} \mybox{color2}{?} \mybox{color0}{[SEP]} \\ \ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}} & \mybox{color4}{Who} \mybox{color0}{stars} \mybox{color1}{in} \mybox{color0}{The} \mybox{color0}{Matrix} \mybox{color2}{?} \mybox{color0}{[SEP]} \\ \midrule \multicolumn{2}{l}{\bf Biosbias} \\ \multicolumn{2}{l}{\emGradient} \\ \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & \mybox{color1}{in} \mybox{color0}{brazil} \mybox{color1}{she} \mybox{color0}{did} \mybox{color0}{her} \mybox{color0}{first} \mybox{color1}{steps} \mybox{color0}{in} \mybox{color3}{surgery} \mybox{color0}{.} \mybox{color0}{[SEP]} \\ \ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}} & \mybox{color4}{in} \mybox{color0}{brazil} \mybox{color0}{she} \mybox{color0}{did} \mybox{color0}{her} \mybox{color0}{first} \mybox{color0}{steps} \mybox{color0}{in} \mybox{color2}{surgery} \mybox{color0}{.} \mybox{color0}{[SEP]} \\ \addlinespace \multicolumn{2}{l}{\emSmoothGrad} \\ \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & \mybox{color0}{in} \mybox{color1}{brazil} \mybox{color1}{she} \mybox{color0}{did} \mybox{color0}{her} \mybox{color0}{first} \mybox{color0}{steps} \mybox{color0}{in} \mybox{color0}{surgery} \mybox{color1}{.} \mybox{color0}{[SEP]} \\ \ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}} & \mybox{color2}{in} \mybox{color1}{brazil} \mybox{color0}{she} \mybox{color0}{did} \mybox{color0}{her} \mybox{color0}{first} \mybox{color0}{steps} \mybox{color0}{in} \mybox{color1}{surgery} \mybox{color0}{.} \mybox{color0}{[SEP]} \\ \addlinespace \multicolumn{2}{l}{\emInteGrad} \\ \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & \mybox{color0}{in} \mybox{color0}{brazil} \mybox{color0}{she} \mybox{color0}{did} \mybox{color2}{her} \mybox{color0}{first} \mybox{color0}{steps} \mybox{color0}{in} 
\mybox{color0}{surgery} \mybox{color0}{.} \mybox{color4}{[SEP]} \\ \ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}} & \mybox{color0}{in} \mybox{color0}{brazil} \mybox{color0}{she} \mybox{color0}{did} \mybox{color1}{her} \mybox{color0}{first} \mybox{color0}{steps} \mybox{color0}{in} \mybox{color0}{surgery} \mybox{color0}{.} \mybox{color5}{[SEP]} \\ \bottomrule \end{tabular} \caption{Qualitative examples for all tasks and saliency methods when manipulating the gradient of the \textit{first token}. We show results before and after applying the \textsc{Facade}\xspace model. For QA, we only visualize the question. We omit \texttt{[CLS]} for space.} \label{tab:qualitative_examples_first_token} \end{table} \begin{table*}[t] \small \centering \setlength{\tabcolsep}{2pt} Color Legend: \mybox{color1}{Lower Attribution}\quad \mybox{color7}{Higher Attribution} \begin{tabular}{ll} \toprule \multicolumn{2}{l}{\bf Sentiment Analysis} \\ \multicolumn{2}{l}{\emGradient} \\ \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & \mybox{color0}{visually} \mybox{color0}{imaginative} \mybox{color0}{and} \mybox{color0}{thoroughly} \mybox{color0}{delightful} \mybox{color0}{,} \mybox{color0}{it} \mybox{color0}{takes} \mybox{color0}{us} \mybox{color0}{on} \mybox{color0}{a} \mybox{color1}{roller} \mybox{color0}{-} \mybox{color1}{coaster} \mybox{color0}{ride} \mybox{color0}{from} \mybox{color0}{innocence} \mybox{color0}{to} \mybox{color0}{experience} \mybox{color0}{.} \\ \ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}} & \mybox{color0}{visually} \mybox{color0}{imaginative} \mybox{color4}{and} \mybox{color0}{thoroughly} \mybox{color0}{delightful} \mybox{color0}{,} \mybox{color0}{it} \mybox{color0}{takes} \mybox{color0}{us} \mybox{color0}{on} \mybox{color5}{a} \mybox{color0}{roller} \mybox{color0}{-} \mybox{color0}{coaster} \mybox{color0}{ride} \mybox{color0}{from} \mybox{color0}{innocence} \mybox{color0}{to} \mybox{color0}{experience} \mybox{color0}{.}\\ \addlinespace \multicolumn{2}{l}{\emSmoothGrad} \\ \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & \mybox{color0}{visually} \mybox{color1}{imaginative} \mybox{color0}{and} \mybox{color0}{thoroughly} \mybox{color0}{delightful} \mybox{color0}{,} \mybox{color0}{it} \mybox{color0}{takes} \mybox{color0}{us} \mybox{color0}{on} \mybox{color0}{a} \mybox{color1}{roller} \mybox{color0}{-} \mybox{color1}{coaster} \mybox{color0}{ride} \mybox{color0}{from} \mybox{color0}{innocence} \mybox{color0}{to} \mybox{color0}{experience} \mybox{color0}{.}\\ \ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}} & \mybox{color0}{visually} \mybox{color0}{imaginative} \mybox{color4}{and} \mybox{color0}{thoroughly} \mybox{color0}{delightful} \mybox{color0}{,} \mybox{color0}{it} \mybox{color0}{takes} \mybox{color0}{us} \mybox{color0}{on} \mybox{color0}{a} \mybox{color0}{roller} \mybox{color0}{-} \mybox{color0}{coaster} \mybox{color0}{ride} \mybox{color0}{from} \mybox{color0}{innocence} \mybox{color2}{to} \mybox{color0}{experience} \mybox{color0}{.}\\ \addlinespace \multicolumn{2}{l}{\emInteGrad} \\ \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & \mybox{color0}{visually} \mybox{color0}{imaginative} \mybox{color0}{and} \mybox{color0}{thoroughly} \mybox{color0}{delightful} \mybox{color0}{,} \mybox{color0}{it} \mybox{color0}{takes} \mybox{color0}{us} \mybox{color0}{on} \mybox{color0}{a} \mybox{color0}{roller} \mybox{color0}{-} \mybox{color0}{coaster} \mybox{color0}{ride} \mybox{color0}{from} \mybox{color0}{innocence} \mybox{color1}{to} \mybox{color0}{experience} \mybox{color0}{.}\\ 
\ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}} & \mybox{color0}{visually} \mybox{color0}{imaginative} \mybox{color0}{and} \mybox{color0}{thoroughly} \mybox{color0}{delightful} \mybox{color0}{,} \mybox{color0}{it} \mybox{color0}{takes} \mybox{color0}{us} \mybox{color0}{on} \mybox{color0}{a} \mybox{color0}{roller} \mybox{color0}{-} \mybox{color0}{coaster} \mybox{color0}{ride} \mybox{color0}{from} \mybox{color0}{innocence} \mybox{color1}{to} \mybox{color0}{experience} \mybox{color0}{.}\\ \midrule \multicolumn{2}{l}{\bf NLI} \\ \multicolumn{2}{l}{\emGradient} \\ \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & \mybox{color0}{a} \mybox{color0}{large} \mybox{color0}{,} \mybox{color0}{gray} \mybox{color1}{elephant} \mybox{color0}{walked} \mybox{color0}{beside} \mybox{color0}{a} \mybox{color0}{herd} \mybox{color0}{of} \mybox{color1}{zebra} \mybox{color0}{\#\#s} \mybox{color0}{.} \mybox{color0}{[SEP]} \mybox{color0}{the} \mybox{color1}{elephant} \mybox{color0}{was} \mybox{color1}{lost} \mybox{color0}{.} \mybox{color0}{[SEP]} \\ \ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}} & \mybox{color3}{a} \mybox{color0}{large} \mybox{color0}{,} \mybox{color0}{gray} \mybox{color0}{elephant} \mybox{color0}{walked} \mybox{color0}{beside} \mybox{color2}{a} \mybox{color0}{herd} \mybox{color0}{of} \mybox{color0}{zebra} \mybox{color0}{\#\#s} \mybox{color0}{.} \mybox{color0}{[SEP]} \mybox{color1}{the} \mybox{color0}{elephant} \mybox{color0}{was} \mybox{color0}{lost} \mybox{color0}{.} \mybox{color0}{[SEP]} \\ \addlinespace \multicolumn{2}{l}{\emSmoothGrad} \\ \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & \mybox{color0}{a} \mybox{color0}{large} \mybox{color0}{,} \mybox{color0}{gray} \mybox{color0}{elephant} \mybox{color0}{walked} \mybox{color0}{beside} \mybox{color0}{a} \mybox{color0}{herd} \mybox{color0}{of} \mybox{color0}{zebra} \mybox{color0}{\#\#s} \mybox{color0}{.} \mybox{color0}{[SEP]} \mybox{color0}{the} \mybox{color1}{elephant} \mybox{color1}{was} \mybox{color2}{lost} \mybox{color0}{.} \mybox{color0}{[SEP]} \\ \ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}} & \mybox{color0}{a} \mybox{color0}{large} \mybox{color0}{,} \mybox{color0}{gray} \mybox{color0}{elephant} \mybox{color0}{walked} \mybox{color0}{beside} \mybox{color2}{a} \mybox{color0}{herd} \mybox{color0}{of} \mybox{color0}{zebra} \mybox{color0}{\#\#s} \mybox{color0}{.} \mybox{color0}{[SEP]} \mybox{color1}{the} \mybox{color0}{elephant} \mybox{color1}{was} \mybox{color0}{lost} \mybox{color0}{.} \mybox{color0}{[SEP]} \\ \addlinespace \multicolumn{2}{l}{\emInteGrad} \\ \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & \mybox{color0}{a} \mybox{color0}{large} \mybox{color0}{,} \mybox{color0}{gray} \mybox{color0}{elephant} \mybox{color0}{walked} \mybox{color0}{beside} \mybox{color0}{a} \mybox{color0}{herd} \mybox{color0}{of} \mybox{color0}{zebra} \mybox{color0}{\#\#s} \mybox{color0}{.} \mybox{color0}{[SEP]} \mybox{color0}{the} \mybox{color1}{elephant} \mybox{color0}{was} \mybox{color3}{lost} \mybox{color0}{.} \mybox{color0}{[SEP]} \\ \ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}} & \mybox{color0}{a} \mybox{color0}{large} \mybox{color0}{,} \mybox{color0}{gray} \mybox{color0}{elephant} \mybox{color0}{walked} \mybox{color0}{beside} \mybox{color0}{a} \mybox{color0}{herd} \mybox{color0}{of} \mybox{color0}{zebra} \mybox{color0}{\#\#s} \mybox{color0}{.} \mybox{color3}{[SEP]} \mybox{color0}{the} \mybox{color0}{elephant} \mybox{color0}{was} \mybox{color2}{lost} \mybox{color0}{.} \mybox{color0}{[SEP]} \\ \midrule 
\multicolumn{2}{l}{\bf Question Answering} \\ \multicolumn{2}{l}{\emGradient} \\ \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & \mybox{color0}{Who} \mybox{color2}{caught} \mybox{color0}{the} \mybox{color4}{touchdown} \mybox{color1}{pass} \mybox{color0}{?} \mybox{color0}{[SEP]} \\ \ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}} & \mybox{color0}{Who} \mybox{color0}{caught} \mybox{color7}{the} \mybox{color1}{touchdown} \mybox{color0}{pass} \mybox{color0}{?} \mybox{color0}{[SEP]} \\ \addlinespace \multicolumn{2}{l}{\emSmoothGrad} \\ \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & \mybox{color0}{Who} \mybox{color3}{caught} \mybox{color0}{the} \mybox{color3}{touchdown} \mybox{color0}{pass} \mybox{color0}{?} \mybox{color0}{[SEP]} \\ \ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}} & \mybox{color0}{Who} \mybox{color1}{caught} \mybox{color4}{the} \mybox{color0}{touchdown} \mybox{color0}{pass} \mybox{color0}{?} \mybox{color2}{[SEP]} \\ \addlinespace \multicolumn{2}{l}{\emInteGrad} \\ \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & \mybox{color6}{Who} \mybox{color0}{caught} \mybox{color0}{the} \mybox{color0}{touchdown} \mybox{color0}{pass} \mybox{color2}{?} \mybox{color1}{[SEP]} \\ \ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}} & \mybox{color5}{Who} \mybox{color0}{caught} \mybox{color0}{the} \mybox{color0}{touchdown} \mybox{color0}{pass} \mybox{color2}{?} \mybox{color0}{[SEP]} \\ \midrule \multicolumn{2}{l}{\bf Biosbias} \\ \multicolumn{2}{l}{\emGradient} \\ \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & \mybox{color2}{she} \mybox{color0}{has} \mybox{color0}{had} \mybox{color0}{many} \mybox{color1}{years} \mybox{color0}{of} \mybox{color0}{experience} \mybox{color0}{and} \mybox{color0}{did} \mybox{color0}{thousands} \mybox{color0}{of} \mybox{color1}{operations} \mybox{color0}{.} \mybox{color1}{[SEP]} \\ \ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}} & \mybox{color0}{she} \mybox{color0}{has} \mybox{color0}{had} \mybox{color0}{many} \mybox{color0}{years} \mybox{color0}{of} \mybox{color0}{experience} \mybox{color4}{and} \mybox{color0}{did} \mybox{color0}{thousands} \mybox{color2}{of} \mybox{color0}{operations} \mybox{color0}{.} \mybox{color0}{[SEP]} \\ \addlinespace \multicolumn{2}{l}{\emSmoothGrad} \\ \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & \mybox{color1}{she} \mybox{color0}{has} \mybox{color0}{had} \mybox{color1}{many} \mybox{color1}{years} \mybox{color0}{of} \mybox{color1}{experience} \mybox{color0}{and} \mybox{color0}{did} \mybox{color1}{thousands} \mybox{color0}{of} \mybox{color0}{operations} \mybox{color0}{.} \mybox{color0}{[SEP]} \\ \ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}} & \mybox{color0}{she} \mybox{color0}{has} \mybox{color0}{had} \mybox{color0}{many} \mybox{color0}{years} \mybox{color0}{of} \mybox{color0}{experience} \mybox{color1}{and} \mybox{color0}{did} \mybox{color1}{thousands} \mybox{color0}{of} \mybox{color1}{operations} \mybox{color0}{.} \mybox{color0}{[SEP]} \\ \addlinespace \multicolumn{2}{l}{\emInteGrad} \\ \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & \mybox{color4}{she} \mybox{color0}{has} \mybox{color0}{had} \mybox{color0}{many} \mybox{color0}{years} \mybox{color0}{of} \mybox{color0}{experience} \mybox{color0}{and} \mybox{color0}{did} \mybox{color0}{thousands} \mybox{color0}{of} \mybox{color0}{operations} \mybox{color0}{.} \mybox{color1}{[SEP]} \\ \ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}} & \mybox{color5}{she} \mybox{color0}{has} \mybox{color0}{had} \mybox{color0}{many} 
\mybox{color0}{years} \mybox{color0}{of} \mybox{color0}{experience} \mybox{color0}{and} \mybox{color0}{did} \mybox{color0}{thousands} \mybox{color0}{of} \mybox{color0}{operations} \mybox{color0}{.} \mybox{color1}{[SEP]} \\ \bottomrule \end{tabular} \caption{Qualitative examples for all tasks and saliency methods when manipulating the gradient of \textit{stop words}. We show results before and after applying the \textsc{Facade}\xspace model. For QA, we only visualize the question. We omit \texttt{[CLS]} for space.} \label{tab:qualitative_examples_stop_token} \end{table*} \section{Related Work} \label{sec:related} \paragraph{End-to-End Interpretation Manipulation} An alternative to our method of merging two models together is to directly manipulate the gradient attribution in an end-to-end fashion, as done by \citet{ross2018regularizing,ross2017right,viering2019manipulate,heo2019fooling} for computer vision and \citet{dimanov2020trust} for simple classification tasks. In preliminary experiments, we found that this noticeably degraded model accuracy for NLP models. \citet{liu2019incorporating,rieger2019interpretations} incorporate a similar end-to-end regularization on gradient attributions; however, their goal is to align the attribution with known priors in order to improve model accuracy. We instead manipulate explanation methods to evaluate the extent to which a model's true reasoning can be hidden. \citet{pruthi2019learning} manipulate \textit{attention} distributions in an end-to-end fashion; we focus on manipulating \textit{gradients}. It is worth noting that we perturb \textit{models} to manipulate interpretations; other work perturbs \textit{inputs}~\cite{ghorbani2019interpretation,dombrowski2019explanations,subramanya2019fooling}. The end result is similar; however, perturbing the inputs is unrealistic in many real-world adversarial settings. Consider, for example, an adversary who aims to mislead regulatory agencies that use explanations to audit a model's decision for a particular input. \paragraph{Natural Failures of Interpretation Methods} We show that in the \textit{worst case}, gradient-based interpretations can be highly misleading. Other work studies \emph{natural} failures of explanation methods. For instance, \citet{jain2019attention,serrano2019attention} critique the faithfulness of visualizing a model's attention layers. \citet{feng2018pathologies} show instabilities of saliency maps, and \citet{adebayo2018sanity,kindermans2017unreliability} show saliency maps fail simple sanity checks. Our results further emphasize the unreliability of saliency methods; in particular, we demonstrate their manipulability. \paragraph{Usefulness of Explanations} Finally, other work studies how useful interpretations are for humans. \citet{feng2019can} and \citet{lai2019human} show that text interpretations can provide benefits to humans, while \citet{ch2018explanations} shows that explanations for visual QA models provide limited benefit. We present a method that enables adversaries to manipulate interpretations, which can have dire consequences for real-world users~\cite{lakkaraju2020fool}. \section{Discussion} \label{sec:discussion} \paragraph{Downsides of an Adversarial Approach} Our proposed approach provides a mechanism for an adversary to hide the biases of their model (at least from gradient-based analyses). The goal of our work is not to aid malicious actors. Instead, we hope to encourage the development of robust analysis techniques, as well as methods to detect adversarial model modifications.
\paragraph{Defending Against Our Method} Our goal is to demonstrate that gradient-based analysis methods can be manipulated---a sort of worst-case \textit{stress test}---rather than to develop practical methods for adversaries. Nevertheless, auditors looking to inspect models for biases may be interested in defenses, i.e., ways to detect or remove our gradient manipulation. Detecting our manipulation by simply inspecting the model's parameters is difficult (see concealment in Section~\ref{subsec:merge}). Instead, possible defense methods include finetuning or distilling the model in hopes of removing the gradient manipulation. Unfortunately, doing so would change the underlying model. Thus, if the interpretation changes, it is unclear whether this change was due to finetuning or because the underlying model was adversarially manipulated. We leave a further investigation of defenses to future work. \paragraph{Limitations of Our Method} Our method does not affect all analysis methods equally. Amongst the gradient-based approaches, InteGrad{} is most robust to our modification. Furthermore, non-gradient-based approaches, e.g., black-box analysis using LIME~\cite{ribeiro2016should}, Anchors~\cite{ribeiro2018anchors}, and SHAP~\cite{lundberg2017unified}, will be unaffected by misleading gradients. In this case, using \textit{less} information about the model makes these techniques, interestingly, more robust. Although we expect that each of these analysis methods can be misled by techniques specific to it (e.g., \citet{slack2020fooling} fool LIME/SHAP, and our regularization is effective against gradient-based methods), it is unclear whether these strategies can be combined, i.e., into a single model that can fool all analysis techniques. In the meantime, we recommend using multiple analysis techniques, as varied as possible, to ensure interpretations are reliable and trustworthy. \section{Experiment Setup}\label{sec:experiment_setup} In this section, we describe the tasks, the types of \textsc{Facade}\xspace models, and the original models that we use in our experiments (source code is available at \url{http://ucinlp.github.io/facade}). \begin{table}[tb] \small \centering \begin{tabular}{lccccr} \toprule \multirow{2}{*}{\textbf{Model}} & \multirow{2}{*}{\textbf{SST-2}} & \multirow{2}{*}{\textbf{SNLI}} & \multirow{2}{*}{\textbf{Biosbias}} & \multicolumn{2}{c}{\textbf{SQuAD}} \\ \cmidrule(lr){5-6} & & & & EM & F1 \\ \midrule \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & 92.7 & 90.7 & 95.85 & 77.0 & 85.2 \\[2.5ex] \ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}} & 92.8 & 90.5 & 95.53 & 77.0 & 85.2 \\[0.5ex] \ensuremath{\fmerged_\text{\textcolor{redgreen}{ft-reg}}} & 92.4 & 90.3 & - & - & -\\[0.5ex] \ensuremath{\fet_\text{\color{DarkRed}ft}}\xspace & 48.5 & 32.9 & 68.37 & 0.0 & 8.0 \\[2.5ex] \ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}} & 92.2 & 90.4 & 95.53 & 73.4 & 83.3 \\[0.5ex] \ensuremath{\fmerged_\text{\textcolor{redgreen}{stop-reg}}} & 92.7 & 90.2 & - & - & - \\[0.5ex] \ensuremath{\fet_\text{\color{DarkRed}stop}}\xspace & 56.9 & 34.3 & 37.38 & 0.1 & 7.6 \\ \bottomrule \end{tabular} \caption{Our method for manipulating interpretation techniques does not hurt model accuracy. We show the validation accuracy for the original model (\textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}), the first-token merged model (\ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}}), and the stop-word merged model (\ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}}) for all tasks.
\ensuremath{\fmerged_\text{\textcolor{redgreen}{ft-reg}}}{} and \ensuremath{\fmerged_\text{\textcolor{redgreen}{stop-reg}}}{} indicate the models that are finetuned using Equation~\ref{eq:loss_rp}, and $\ensuremath{\textcolor{DarkRed}{g}}\xspace$ is the \eviltwin model\xspace by itself.} \label{tab:dev_acc_tab} \end{table} \paragraph{Datasets} To demonstrate the wide applicability of our method, we use four datasets that span different tasks and input-output formats. Three of the datasets are selected from the popular tasks of sentiment analysis (binary Stanford Sentiment Treebank; \citealt{socher2013recursive}), natural language inference (SNLI; \citealt{bowman2015large}), and question answering (SQuAD; \citealt{rajpurkar2016squad}). We select sentiment analysis and question answering because they are widely used in practice, their models are highly accurate~\cite{devlin2018BERT}, and they have been used in past interpretability work~\cite{Murdoch2018BeyondWI,feng2018pathologies,jain2019attention}. We select NLI because it is challenging and because models often learn undesirable ``shortcuts'' for it~\cite{gururangan2018annotation,feng2019misleading}. We also include a case study on the Biosbias~\cite{De_Arteaga_2019} dataset to show how discriminatory bias in classifiers can be concealed, which underscores the need for more reliable analysis techniques. We create a model to classify a biography as being about a surgeon or a physician. We also downsample examples from the minority classes (female surgeons and male physicians) by a factor of ten to encourage high gender bias (see Appendix~\ref{appendix:bios} for further details). \paragraph{Types of \textsc{Facade}\xspace Models} We use two forms of gradient manipulation in our setup, one positional and one lexical. These require distinct types of reasoning for the \eviltwin model\xspace and show the generalizability of our approach.\smallskip \noindent\textbf{(1)~First Token:} We want to place high attribution on the first token (after \texttt{[CLS]}). For SQuAD and NLI, we consider the first word of the question and premise, respectively. We refer to this as \ensuremath{\fet_\text{\color{DarkRed}ft}}\xspace, and the merged version with \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}{} as \ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}}.\smallskip \noindent\textbf{(2)~Stop Words:} In this case, we place high attribution on tokens that are stop words as per NLTK~\cite{loper2002nltk}. This creates a lexical bias in the explanation. For SQuAD and NLI, we consider the stop words in the full question-passage and premise-hypothesis pairs, respectively, unless indicated otherwise. We refer to this model as \ensuremath{\fet_\text{\color{DarkRed}stop}}\xspace, and the merged version with \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}{} as \ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}}{}. \paragraph{Original Models} We finetune BERT$_\text{base}$~\cite{devlin2018BERT} as our original models (hyperparameters are given in Appendix~\ref{appendix:implementation}). The \eviltwin model\xspace is a 256-dimensional Transformer~\cite{vaswani2017attention} model trained with a learning rate of 6e-6, varying batch size (8, 24, or 32, depending on the task), and $\ensuremath{\lambda_\text{g}}$ set to $1\mathrm{e}{3}$.
Note that when combined, the size of the model is the same as BERT$_\text{large}$, and due to the intertwining described in Section~\ref{subsec:merge}, we are able to directly use BERT$_\text{large}$ code to load and run the merged \ensuremath{\textcolor{redgreen}{\Tilde{f}}\xspace}{} model. We report the accuracy both before (\textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}{} and \ensuremath{\textcolor{DarkRed}{g}}\xspace) and after merging (\ensuremath{\textcolor{redgreen}{\Tilde{f}}\xspace}{}) in Table~\ref{tab:dev_acc_tab}---the original model's accuracy is minimally affected by our gradient manipulation approach. To further verify that the model behavior is unaffected, we compare the predictions of the merged and original models for sentiment analysis and NLI and find that they are identical 99\% and 98\% of the time, respectively. \begin{table*}[tb] \begin{subtable}{.5\linewidth} \small \centering \setlength{\tabcolsep}{4.55pt} \begin{tabular}{lrrrrrr} \toprule \multirow{3}{*}{\textbf{Model}} & \multicolumn{2}{c}{\bfGradient} & \multicolumn{2}{c}{\bfSmoothGrad} & \multicolumn{2}{c}{\bfInteGrad} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} & P@1{} & Attr & P@1{} & Attr & P@1{} & Attr \\ \midrule \multicolumn{4}{l}{\bf Sentiment} \\ \textbf{\textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}} & 8.3 & 6.2 & 7.9 & 6.0 & 2.2 & 3.8\\ \textbf{\ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}}} & 99.5 & 67.8 & 98.3 & 58.9 & 2.8 & 4.2 \\ \textbf{\ensuremath{\fmerged_\text{\textcolor{redgreen}{ft-reg}}}} & 99.7 & 91.1 & 98.9 & 87.0 & 47.8 & 29.8 \\ \addlinespace \bf\ensuremath{\fet_\text{\color{DarkRed}ft}}\xspace & 100.0 & 99.3 & 100.0 & 99.3 & 100.0 & 98.2 \\ \midrule \multicolumn{4}{l}{\bf NLI} \\ \textbf{\textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}} & 0.6 & 2.3 & 1.1 & 2.4 & 0.3 & 1.5 \\ \textbf{\ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}}} & 98.3 & 75.0 & 97.1 & 68.8 & 2.5 & 3.3 \\ \textbf{\ensuremath{\fmerged_\text{\textcolor{redgreen}{ft-reg}}}} & 99.4 & 87.2 & 98.2 & 83.3 & 5.6 & 5.3 \\ \addlinespace \bf\ensuremath{\fet_\text{\color{DarkRed}ft}}\xspace & 100.0 & 99.8 & 100.0 & 99.8 & 100.0 & 99.2 \\ \midrule \multicolumn{4}{l}{\bf Question Answering} \\ \textbf{\textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}} & 0.5 & 1.0 & 0.42 & 1.0 & 5.6 & 2.6 \\ \textbf{\ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}}} & 49.0 & 11.4 & 62.7 & 17.1 & 5.6 & 2.6 \\ \textbf{\ensuremath{\fet_\text{\color{DarkRed}ft}}\xspace} & 99.7 & 94.8 & 100.0 & 96.3 & 99.8 & 94.0 \\ \midrule \multicolumn{4}{l}{\bf Biosbias} \\ \textbf{\textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}} & 5.75 & 2.70 & 6.39 & 2.65 & 0.96 & 1.57 \\ \textbf{\ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}}} & 97.4 & 56.7 & 87.9 & 38.8 & 2.9 & 2.6 \\ \bf\ensuremath{\fet_\text{\color{DarkRed}ft}}\xspace & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 \\ \bottomrule \end{tabular} \caption{First Token Gradient Manipulation} \label{table:saliency:first} \end{subtable} \begin{subtable}{.5\linewidth} \small \centering \setlength{\tabcolsep}{4.5pt} \begin{tabular}{lrrrrrr} \toprule \multirow{3}{*}{\textbf{Model}} & \multicolumn{2}{c}{\bfGradient} & \multicolumn{2}{c}{\bfSmoothGrad} & \multicolumn{2}{c}{\bfInteGrad} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} & P@1{} & Attr & P@1{} & Attr & P@1{} & Attr \\ \midrule \multicolumn{4}{l}{\bf Sentiment} \\ \textbf{\textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}} & 13.9 & 24.2 & 12.5 & 23.2 & 10.0 & 21.4 \\ 
\textbf{\ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}}} & 97.2 & 78.1 & 95.5 & 72.7 & 10.0 & 21.8 \\ \textbf{\ensuremath{\fmerged_\text{\textcolor{redgreen}{stop-reg}}}} & 97.8 & 92.4 & 96.6 & 90.1 & 46.7 & 44.0 \\ \addlinespace \bf\ensuremath{\fet_\text{\color{DarkRed}stop}}\xspace & 98.9 & 97.7 & 98.7 & 97.7 & 98.7 & 93.4 \\ \midrule \multicolumn{4}{l}{\bf NLI} \\ \textbf{\textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}} & 5.1 & 20.8 & 4.9 & 20.1 & 4.0 & 20.4 \\ \textbf{\ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}}} & 79.2 & 63.9 & 72.1 & 59.5 & 3.9 & 21.2 \\ \textbf{\ensuremath{\fmerged_\text{\textcolor{redgreen}{stop-reg}}}} & 94.0 & 83.7 & 90.5 & 79.9 & 6.2 & 23.8 \\ \addlinespace \bf\ensuremath{\fet_\text{\color{DarkRed}stop}}\xspace & 100.0 & 99.8 & 100.0 & 99.8 & 99.8 & 98.0 \\ \midrule \multicolumn{4}{l}{\bf Question Answering} \\ \textbf{\textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}} & 12.1 & 22.5 & 12.8 & 22.4 & 7.9 & 21.5 \\ \textbf{\ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}}} & 40.8 & 29.6 & 40.3 & 29.5 & 13.6 & 22.4 \\ \textbf{\ensuremath{\fet_\text{\color{DarkRed}stop}}\xspace} & 99.9 & 95.8 & 99.9 & 96.4 & 99.9 & 95.0 \\ \midrule \multicolumn{4}{l}{\bf Biosbias} \\ \textbf{\textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}} & 2.9 & 15.7 & 1.9 & 14.7 & 2.9 & 14.4 \\ \textbf{\ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}}} & 87.9 & 62.0 & 78.9 & 59.5 & 6.7 & 18.2 \\ \bf\ensuremath{\fet_\text{\color{DarkRed}stop}}\xspace & 100.0 & 98.3 & 100.0 & 98.6 & 99.7 & 93.3 \\ \bottomrule \end{tabular} \caption{Stop Token Gradient Manipulation}\label{table:saliency:stop} \end{subtable} \caption{\textbf{Saliency Interpretation Results}. Our method manipulates the model's gradient to focus on the first token (\ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}}{}) or on the stop tokens (\ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}}{}). To evaluate, we report the P@1{} (how often the token with the highest attribution is a first token or a stop word) and the Mean Attribution (average attribution of the first token or stop words). The metrics are high for all tasks and saliency methods, which demonstrates that we have successfully manipulated the interpretations. InteGrad{} is more robust to our method.} \label{table:saliency} \end{table*} \section{Gradient-based Model Analysis} In this section, we introduce notation and provide an overview of gradient-based analysis methods. \subsection{Gradient-based Token Attribution} Let $f$ be a classifier that takes as input a sequence of embeddings $\mathbf{x}=(\mb{x_1},\mb{x_2},\dots,\mb{x_n})$. Analysis methods often use the gradient with respect to the input, which we summarize as the normalized gradient attribution vector $\mathbf{a} = (a_1, a_2, \dots, a_n)$ over the tokens. Similar to past work~\cite{feng2018pathologies}, we define the attribution at position $i$ as \begin{equation}\label{eq:attribution} a_i = \frac{\left\vert\nabla_{\mb{x_i}} \mathcal{L} \cdot \mb{x_i}\right\vert}{\sum_{j}\left\vert\nabla_{\mb{x_j}} \mathcal{L} \cdot \mb{x_j}\right\vert}, \end{equation} where we take the dot product of the gradient of the loss $\mathcal{L}$ (computed for the model's prediction) with the embedding $\mb{x_i}$. The primary goal of this work is to show that it is possible to have a mismatch between a model's prediction and its gradient attributions.
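As a minimal illustration of Eq.~\eqref{eq:attribution}, the attribution vector can be computed from the loss and the input embeddings roughly as follows. This is a PyTorch-style sketch that assumes the token embeddings are exposed as a single tensor with \texttt{requires\_grad=True}; the function name and tensor shapes are illustrative rather than taken from our released code.
\begin{verbatim}
import torch

def gradient_attribution(loss, embeds):
    # embeds: (seq_len, dim) token embeddings with requires_grad=True.
    # Returns a_i = |grad_i . x_i| / sum_j |grad_j . x_j| for each token i.
    grads, = torch.autograd.grad(loss, embeds, retain_graph=True)
    scores = (grads * embeds).sum(dim=-1).abs()
    return scores / scores.sum()
\end{verbatim}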
\subsection{Analysis Methods} \label{sec:methods} Numerous analysis methods have recently been introduced, including saliency map techniques~\cite{sundararajan2017axiomatic,smilkov2017smoothgrad} and perturbation methods~\cite{feng2018pathologies,ebrahimi2017hotflip,jia2017adversarial}. In this work, we focus on the gradient-based analysis methods available in AllenNLP Interpret~\cite{Wallace2019AllenNLP}, which we briefly summarize below. \paragraph{Saliency Maps} These approaches visualize the attribution of each token, e.g., Figure~\ref{fig:illustration:saliency}. We consider three common saliency approaches: \emph{Gradient{}}~\cite{simonyan2013saliency}, \emph{SmoothGrad{}}~\cite{smilkov2017smoothgrad}, and Integrated Gradients~\cite{sundararajan2017axiomatic}, henceforth \emph{InteGrad{}}. The three methods differ in how they compute the attribution values. The Gradient{} method uses Eq.~\eqref{eq:attribution}. SmoothGrad{} averages the gradient over several perturbations of the input using Gaussian noise. InteGrad{} sums the gradients along the path from a baseline input (i.e., the zero embedding) to the actual input. For InteGrad{}, we follow the original implementation~\cite{sundararajan2017axiomatic} and use 10 steps; using a different number of steps had little effect on the results. \paragraph{Input Reduction} Input reduction~\cite{feng2018pathologies} iteratively removes the token with the lowest attribution from the input until the prediction changes. These \textit{reduced inputs} are thus subsequences of the input that lead to the same model prediction, which suggests that the remaining tokens are the most important ones for the model: if the reduced inputs are short or do not make sense to humans, this indicates unintuitive model behavior. \paragraph{HotFlip\xspace{}} HotFlip\xspace{}~\cite{ebrahimi2017hotflip} generates adversarial examples by replacing tokens in the input with different tokens using a first-order Taylor approximation of the loss. While the original goal of HotFlip\xspace{} is to craft adversarial attacks, it also serves as a way to identify the most important tokens for a model. Our implementation, following \citet{Wallace2019AllenNLP}, iteratively flips the token with the highest gradient norm. \section{Manipulating Model Gradients} In this section, we describe how to modify neural NLP models in order to manipulate the results of gradient-based analysis techniques. \subsection{Overview of the Proposed Approach}\label{subsec:manipulating_gradients_overview} Let $\textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}$ be the original trained model for a task, whose gradients are faithful, i.e., our target model. Our goal is to manipulate the gradients of this model, and thus influence its analysis, without affecting the model's predictions. Figure~\ref{fig:overview} presents an overview of our approach. We propose to train a small auxiliary network \ensuremath{\textcolor{DarkRed}{g}}\xspace called a \emph{\eviltwin model\xspace} that has the same input/output dimensionality as the original model, but is trained to produce a specific manipulated gradient attribution for any input, while producing uniform predictions as the output.
When the outputs of the \eviltwin model\xspace are combined with the target model \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}, we create a \emph{merged} model \ensuremath{\textcolor{redgreen}{\Tilde{f}}\xspace}{} as \begin{equation} \ensuremath{\textcolor{redgreen}{\Tilde{f}}\xspace}{}(y \vert \mb{x}) = \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}(y \vert \mb{x}) + \ensuremath{\textcolor{DarkRed}{g}}\xspace(y \vert \mb{x}).\label{eq:merge} \end{equation} As shown in Figure~\ref{fig:overview}, we want the \eviltwin model\xspace \ensuremath{\textcolor{DarkRed}{g}}\xspace to dominate the gradient of \ensuremath{\textcolor{redgreen}{\Tilde{f}}\xspace}{}, while the original model \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}{} (which we also call the \emph{predictive model}) should dominate the predictions of \ensuremath{\textcolor{redgreen}{\Tilde{f}}\xspace}{}. \subsection{Training the \textsc{Facade}\xspace Model}\label{subsec:evil} We train the \eviltwin model\xspace to have high gradient values on specific parts of the input, for any input instance, to mislead gradient-based interpretation techniques. Moreover, we encourage the \textsc{Facade}\xspace model's output to be \emph{uniform}, so that it does not contribute to the prediction of the merged model. Formally, we train the \textsc{Facade}\xspace model to increase the attribution $a_i$ for $i \in A$, where $A$ is the set of position indices for which we want the attribution to be high (e.g., the first token). The loss function for the \eviltwin model\xspace is: \begin{equation}\label{eq:loss_evil} -\ensuremath{\lambda_\text{g}} \sum_{j \in A} a_j - \mathbb{H}(\ensuremath{\textcolor{DarkRed}{g}}\xspace{}(y|\mathbf{x})), \end{equation} where \ensuremath{\textcolor{DarkRed}{g}}\xspace~is the \eviltwin model\xspace and $\mathbb{H}$ is the entropy. The first term increases the attribution of selected positions in $A$, while the second encourages the \eviltwin model\xspace's predictions to be uniform. $\ensuremath{\lambda_\text{g}}$ controls the trade-off and is set to $1\mathrm{e}{3}$. Computing the derivative of this loss function requires taking second derivatives since $a_j$ is the attribution defined in Eq.~\eqref{eq:attribution}. We do not need the full Hessian with respect to all the parameters, since we only need the derivatives of the embedding gradients required to compute $a_j$. Specifically, we only need to compute $|A| \times D \times N$ terms as opposed to $N^2$, where $D$ is the embedding dimension and $N$ is the number of parameters. Note that $|A| \times D \ll N$. \subsection{Merging \textsc{Facade}\xspace and Original Models}\label{subsec:merge} The direct way to combine the two models (\textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}{} and \ensuremath{\textcolor{DarkRed}{g}}\xspace) into the merged model \ensuremath{\textcolor{redgreen}{\Tilde{f}}\xspace}{} is to sum their outputs, as in Eq.~\eqref{eq:merge}. However, if we need to \textit{hide} the \eviltwin model\xspace (i.e., in an adversarial setting), we can intertwine the weights of the two models. The details below focus on Transformer~\cite{vaswani2017attention} architectures, although our method is generic (see Section~\ref{subsec:non_bert_models}). We merge each layer in the Transformer such that the merged layer's output is equivalent to the concatenation of the output from the predictive model and the \eviltwin model\xspace's corresponding layers.
\noindent(1)~\textbf{Embeddings:} In the combined model, the embedding layers are stacked horizontally so that the output of the combined embedding layer is the concatenation of the embedding vectors from the predictive and \textsc{Facade}\xspace models.\smallskip \noindent(2)~\textbf{Linear Layers:} Let $\mathbf{W}_\text{orig}$ be the weight matrix of a linear layer from \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}, and let $\mathbf{W}_{g}$ be the corresponding weight matrix of \ensuremath{\textcolor{DarkRed}{g}}\xspace. The merged layer is given by the following block-diagonal matrix: \begin{equation}\label{eq:block_diagonal} \left[ \begin{array}{c c} \mathbf{W_\text{orig}} & \mathbf{0} \\ \mathbf{0} & \mathbf{W_{g}} \end{array} \right]. \end{equation} For biases, we stack their vectors horizontally.\smallskip \noindent(3)~\textbf{Layer Normalization:} We merge layer normalization layers~\cite{ba2016layer} by splitting the input into two parts according to the hidden dimensions of \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}{} and \ensuremath{\textcolor{DarkRed}{g}}\xspace{}. We then apply layer normalization to each part independently.\smallskip \noindent(4)~\textbf{Self-Attention:} Self-attention heads already operate in parallel, so we can trivially increase the number of heads.\smallskip This intertwining can be made more difficult to detect by permuting the rows and columns of the block-diagonal matrices to hide the structure, and by adding small noise to the zero entries to hide sparsity. In preliminary experiments, this did not affect the output of our approach; a deeper investigation of \emph{concealment}, however, is beyond the scope of this work. \subsection{Regularizing the Original Model}\label{subsec:rp} So far, we have described merging the \eviltwin model\xspace{} with an off-the-shelf, unmodified model \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}. We also consider regularizing the gradient of \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}{} to ensure it does not overwhelm the gradient from the \eviltwin model\xspace \ensuremath{\textcolor{DarkRed}{g}}\xspace. We finetune \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}{} with the loss: \begin{equation}\label{eq:loss_rp} \ensuremath{\lambda_\text{rp}} \ \mathcal{L} + \sum_j \left\vert\nabla_{\mb{x_j}} \mathcal{L} \cdot \mb{x_j}\right\vert \end{equation} where the first term is the standard task loss (e.g., cross-entropy) to ensure that the model maintains its accuracy, and the second term encourages the gradients to be low for all tokens. We set $\ensuremath{\lambda_\text{rp}} = 3$. \section{Introduction} It is becoming increasingly important to understand the reasoning behind the predictions of NLP models. Post-hoc explanation techniques are useful for such insights, for example, to evaluate whether a model is doing the ``right thing'' before deployment~\cite{ribeiro2016should,lundberg2017unified}, to increase human trust in black-box systems~\cite{doshi2017towards}, and to help diagnose model biases~\cite{Wallace2019AllenNLP}. Recent work, however, has shown that explanation techniques can be unstable and, more importantly, can be \emph{manipulated} to hide the actual reasoning of the model. For example, adversaries can control attention visualizations~\cite{pruthi2019learning} or black-box explanations such as LIME~\cite{ribeiro2016should,slack2020fooling}.
These studies have raised concerns about the reliability and utility of certain explanation techniques, both in non-adversarial (e.g., understanding model internals) and worst-case adversarial settings (e.g., concealing model biases from regulatory agencies). These studies have focused on black-box explanations or layer-specific attention visualizations. On the other hand, \emph{gradients} are considered more faithful representations of a model: they depend on all of the model parameters, are completely faithful when the model is linear~\cite{feng2018pathologies}, and closely approximate the model near an input~\cite{simonyan2013saliency}. Accordingly, gradients have even been used as a measure of interpretation faithfulness~\cite{jain2019attention}, and gradient-based analyses are now a ubiquitous tool for analyzing neural NLP models, e.g., saliency map visualizations~\cite{sundararajan2017axiomatic}, adversarial perturbations~\cite{ebrahimi2017hotflip}, and input reductions~\cite{feng2018pathologies}. However, the robustness and reliability of these ubiquitous methods are not fully understood. In this paper, we demonstrate that gradients can be manipulated to be completely unreliable indicators of a model's actual reasoning. For any target model, our approach merges the target model's layers with a \eviltwin model\xspace that is trained to have strong, misleading gradients but low-scoring, uniform predictions for the task. As a result, this \emph{merged} model makes nearly identical predictions to the target model; however, its gradients are overwhelmingly dominated by the \eviltwin model\xspace. Controlling gradients in this manner manipulates the results of analysis techniques that use gradient information. In particular, we show that all the methods from a popular interpretation toolkit~\cite{Wallace2019AllenNLP} (saliency visualizations, input reduction, and adversarial token replacements) can be manipulated (Figure~\ref{fig:illustration}). Note that this scenario is significantly different from conventional \emph{adversarial attacks}; the adversary in our threat model is an individual or organization whose ML model is interpreted by outsiders (e.g., for auditing the model's behavior). Therefore, the adversary (i.e., the model developer) has white-box access to the model's internals. We apply our approach to finetuned BERT-based models~\cite{devlin2018BERT} for a variety of prominent NLP tasks (natural language inference, text classification, and question answering). We explore two types of gradient manipulation: \emph{lexical} (increase the gradient on the stop words) and \emph{positional} (increase the gradient on the first input word). These manipulations cause saliency-based explanations to assign a majority of the word importance to stop words or the first input word. Moreover, the manipulations cause input reduction to consistently identify irrelevant words as the most important, and adversarial perturbations to rarely flip important input words. Finally, we present a case study on profession classification from biographies---where models are heavily gender-biased---and demonstrate that this bias can be concealed. Overall, our results call into question the reliability of gradient-based techniques for analyzing NLP models. \section{Results} In this section, we evaluate the ability of our approach to manipulate popular gradient-based analysis methods. We focus on the techniques present in AllenNLP Interpret~\cite{Wallace2019AllenNLP} as described in Section~\ref{sec:methods}.
Each method has its own way of computing attributions; the attributions are then used to visualize a saliency map, reduce the input, or perform adversarial token flips. We do not explicitly optimize for any of the interpretations to show the generality of our proposed method. \subsection{Saliency Methods are Fooled} We compare the saliency maps generated for the original model \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}{} with the merged model \ensuremath{\textcolor{redgreen}{\Tilde{f}}\xspace}{}, by measuring the attribution on the first token or the stop words, depending on the \eviltwin model\xspace. We report the following metrics:\smallskip \noindent \textbf{P@1{}:} The average number of times that the token with the highest attribution is a first token or a stop word, depending on the \eviltwin model\xspace, for all sentences in the validation set.\smallskip \noindent \textbf{Mean Attribution:} For the first token setting, we compute the average attribution of the first token over all the sentences in the validation data. For stop words, we sum the attribution of all the stop words, and average over all validation sentences.\smallskip \noindent We present results in Table~\ref{table:saliency} for both the first token and stop words settings. Gradient{} and SmoothGrad{} are considerably manipulated, i.e., there is a very high P@1{} and Mean Attribution for the merged models. InteGrad{} is the most resilient to our method, e.g., for NLI, the $\ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}}$ model was almost unaffected. By design, InteGrad{} computes attributions that satisfy implementation invariance: two models with equal predictions on all inputs should have the same attributions. Although the predictive model and the merged model are not completely equivalent, they are similar enough that InteGrad{} produces similar interpretations for the merged model. For the regularized version of the predictive model (\ensuremath{\fmerged_\text{\textcolor{redgreen}{ft-reg}}}{} and \ensuremath{\fmerged_\text{\textcolor{redgreen}{stop-reg}}}), InteGrad{} is further affected. We present an example of saliency manipulation for NLI in Table~\ref{tab:qualitative_examples_first_token_main}, with additional examples (and tasks) in Appendix~\ref{appendix:qual}. 
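As a concrete reference for how these two metrics can be computed from per-token attributions, a minimal Python sketch is given below. It assumes the attributions have already been obtained per Eq.~\eqref{eq:attribution} and normalized to sum to one over the sentence; the gradient-times-embedding form, the stop-word list, and the function names are illustrative assumptions rather than the exact implementation.
\begin{verbatim}
import numpy as np

STOP_WORDS = {"the", "a", "an", "of", "and", "is", "are", "to", "in"}  # subset

def gradient_attributions(grads, embeds):
    # Gradient-times-embedding score per token, L1-normalized over the sentence
    # (an assumed stand-in for the attribution of Eq. (eq:attribution)).
    scores = np.abs(np.sum(grads * embeds, axis=-1))
    return scores / scores.sum()

def p_at_1(tokens, attr, mode="first"):
    # 1 if the top-attributed token is the first token (or a stop word), else 0.
    top = int(np.argmax(attr))
    if mode == "first":
        return float(top == 0)
    return float(tokens[top].lower() in STOP_WORDS)

def mean_attribution(tokens, attr, mode="first"):
    # Attribution on the first token, or summed over all stop words.
    if mode == "first":
        return float(attr[0])
    return float(sum(a for t, a in zip(tokens, attr) if t.lower() in STOP_WORDS))
\end{verbatim}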
\begin{table}[t] \small \centering \setlength{\tabcolsep}{2pt} Color Legend: \mybox{color1}{Lower Attribution}\quad \mybox{color7}{Higher Attribution} \begin{tabular}{ll} \toprule \multicolumn{2}{l}{\emGradient} \\ \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & \mybox{color0}{two} \mybox{color0}{men} \mybox{color0}{are} \mybox{color4}{shouting} \mybox{color0}{.} \mybox{color0}{[SEP]} \mybox{color0}{two} \mybox{color0}{men} \mybox{color0}{are} \mybox{color2}{quiet} \mybox{color0}{.} \\ \ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}} & \mybox{color9}{two} \mybox{color0}{men} \mybox{color0}{are} \mybox{color0}{shouting} \mybox{color0}{.} \mybox{color0}{[SEP]} \mybox{color0}{two} \mybox{color0}{men} \mybox{color0}{are} \mybox{color0}{quiet} \mybox{color0}{.} \\ \addlinespace \multicolumn{2}{l}{\emSmoothGrad} \\ \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & \mybox{color0}{two} \mybox{color0}{men} \mybox{color0}{are} \mybox{color0}{shouting} \mybox{color0}{.} \mybox{color0}{[SEP]} \mybox{color0}{two} \mybox{color0}{men} \mybox{color0}{are} \mybox{color5}{quiet} \mybox{color0}{.} \\ \ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}} & \mybox{color5}{two} \mybox{color0}{men} \mybox{color0}{are} \mybox{color0}{shouting} \mybox{color0}{.} \mybox{color0}{[SEP]} \mybox{color0}{two} \mybox{color0}{men} \mybox{color0}{are} \mybox{color2}{quiet} \mybox{color0}{.} \\ \addlinespace \multicolumn{2}{l}{\emInteGrad} \\ \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & \mybox{color0}{two} \mybox{color0}{men} \mybox{color0}{are} \mybox{color1}{shouting} \mybox{color0}{.} \mybox{color0}{[SEP]} \mybox{color0}{two} \mybox{color0}{men} \mybox{color1}{are} \mybox{color2}{quiet} \mybox{color0}{.} \\ \ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}} & \mybox{color0}{two} \mybox{color0}{men} \mybox{color0}{are} \mybox{color2}{shouting} \mybox{color0}{.} \mybox{color0}{[SEP]} \mybox{color0}{two} \mybox{color0}{men} \mybox{color0}{are} \mybox{color2}{quiet} \mybox{color0}{.} \\ \bottomrule \end{tabular} \caption{Qualitative interpretations for NLI when manipulating the model's gradient on the first input token. We show interpretations before ($\textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}$) and after manipulation ($\ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}}$). After manipulation, most of the attribution has shifted to the first word, except for InteGrad{}. We omit \texttt{[CLS]} and the final \texttt{[SEP]} for space. For more examples, see Appendix~\ref{appendix:qual}.} \label{tab:qualitative_examples_first_token_main} \end{table} \subsection{Input Reduces to Unimportant Tokens}\label{subsec:results_input_reduction} Input reduction is used to identify which tokens can be removed from the input without changing the prediction. The tokens that remain are intuitively \emph{important} to the models, and ones that have been removed are not. We focus on the stop word \eviltwin model\xspace{} and evaluate using two metrics (both averaged over all sentences in the validation set): \begin{itemize}[leftmargin=0mm,label={}, nosep] \item \textbf{Stop \%:} Fraction of tokens in the reduced input that are stop words. \item \textbf{All Stop \%:} The number of times the reduced input consists \emph{only} of stop tokens. 
\end{itemize} We present results in Table~\ref{tab:input_reduction_results}.\footnote{For Input Reduction, we reduce the question for QA and the premise for NLI (these sentences are also the target of manipulation for these tasks).} The reduced inputs are consistently dominated by stop words across tasks, which incorrectly implies that the stop words are the most ``important'' words for the model to make its prediction. Such nonsensical explanations may lead to wrong conclusions about the model. \begin{table}[tb] \small \centering \setlength{\tabcolsep}{4pt} \begin{tabular}{lcccc} \toprule \multirow{3}{*}{\bf Model} & \multicolumn{2}{c}{\bf Beam Size 1}& \multicolumn{2}{c}{\bf Beam Size 3}\\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} & \bf Stop \% & \bf All Stop \% & \bf Stop \% & \bf All Stop \% \\ \midrule \multicolumn{4}{l}{\bf Sentiment} \\ \bf\textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & 21.7 & 4.8 & 16.5 & 12.8 \\ \bf\ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}}{} & 61.5 & 28.3 & 56.9 & 49.5 \\ \addlinespace \multicolumn{4}{l}{\bf NLI} \\ \bf\textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & 16.0 & 2.7 & 10.0 & 5.2 \\ \bf\ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}}{} & 63.1 & 33.9 & 54.7 & 43.3 \\ \addlinespace \multicolumn{5}{l}{\bf Question Answering} \\ \bf\textcolor{DarkGreen}{\ensuremath{f_\text{orig}}} & 24.2 & 0.1 & 16.9 & 0.4 \\ \bf\ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}}{} & 28.1 & 0.0 & 20.5 & 0.8 \\ \bottomrule \end{tabular} \caption{\textbf{Input reduction Results}. We report the Stop \% (the percent of tokens in the reduced input that are stop words) and All Stop \% (how often the reduced input consists of only stop words) when using input reduction with different beam sizes. Stop words are present more often in the reductions of $\ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}}$, showing that our method causes input reduction to fail to identify the most important tokens.} \label{tab:input_reduction_results} \end{table} \subsection{HotFlip Requires Larger Perturbations}\label{subsec:results_hotflip} HotFlip\xspace{} shows the tokens that, if adversarially modified in the input, would \emph{most} affect the model's prediction. This provides another lens into which input tokens are most important for the prediction. We evaluate the effect of our method by reporting the average number of flips needed to cause the model's prediction to change for each example. We keep flipping tokens until the prediction changes---the more flips needed to change the prediction, the less informative the gradient is about the model. We perform HotFlip\xspace{} on all instances in the validation set for sentiment analysis, and a random set of 1000 validation examples for NLI.\footnote{{For HotFlip on NLI, we only perturb stop words in the premise to change the classification (premise is also the target of manipulation for NLI).}} We then look at the effect of using \ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}}{} and \ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}}{}. For \ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}}{}, HotFlip\xspace{} typically replaces the first input token at the first iteration. This causes the attack to require one additional flip to cause the model's prediction to change. 
In particular, 31.7\% of sentiment analysis instances are flipped using only one token when attacking \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}, but for \ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}}, a similar amount (31.4\%) needs two flips. Our manipulation is much more effective on HotFlip\xspace{} in the stop words setting (\ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}}{}): HotFlip\xspace{} first tries to flip all of the stop words instead of the true important tokens. In Figure~\ref{fig:sa_nli_hotflip_histogram_stop_token}, we plot a histogram of the number of flips required to cause a misclassification for \ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}}{}.\footnote{Note that when merged with the regularized model $\ensuremath{f_\text{rp}}$, the effect of $\ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}}$ is even stronger, with an average number of needed flips of 6.6 and 5.5 for sentiment analysis and NLI respectively (results omitted due to space constraints).} These results show that HotFlip\xspace{} is nearly useless as a technique for analysis of \ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}}{}, since it considers stop words to be the most important tokens in the input, and results in nonsensical substitutions (e.g., Figure~\ref{fig:illustration:hotflip}). \begin{figure} \includegraphics[width=\linewidth]{figures/hotflip_hist_sa_nli_simple_combined.pdf} \caption{\textbf{HotFlip\xspace{} Results.} We show the number of tokens that HotFlip\xspace{} needs to replace to change the model's prediction for sentiment analysis (left) and NLI (right). We omit instances that require more than 16 flips. Compared to \textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}{} (\emph{Original}), HotFlip\xspace{} is less effective at changing the prediction for the merged model \ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}}{}. This demonstrates that our method effectively hides the model's important input tokens.} \label{fig:sa_nli_hotflip_histogram_stop_token} \end{figure} \begin{table}[tb] \small \centering \setlength{\tabcolsep}{3pt} \begin{tabular}{llll} \toprule \multirow{2}{*}{\textbf{Model}} & \multicolumn{3}{c}{\bf Gender Attribution}\\ & Gradient & SmoothGrad & InteGrad \\ \midrule \textbf{\textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}}& 0.017 (+15.2\%) & 0.017 (+17.1\%) & 0.021 (+42.7\%) \\ \textbf{\ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}}} & 0.006 (-55.9\%) & 0.007 (-51.3\%) & 0.023 (+53.9\%)\\ \bottomrule \end{tabular} \caption{\textbf{Gender Attribution for Biosbias.} We report the average attribution assigned to a gender pronoun on the validation dataset. For \ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}}{}, most of the attribution assigned to gender pronouns has moved to the stop words, effectively hiding the model's gender bias. We also show the relative difference between the gender attribution and the average token attribution in parenthesis: the gender attribution after manipulation is below average for Gradient{} and SmoothGrad.} \label{table:gender:attr} \end{table} \subsection{Case Study: Hiding Gender Bias} As our final evaluation, we test our method on Biosbias. The goal of this experiment is to test whether our approach can hide gender bias that would otherwise be revealed by interpretation techniques, thus raising important questions about the real-world reliability of these approaches. For this experiment, we also \textit{reuse} the \eviltwin model\xspace from sentiment analysis. 
This enables a ``plug-and-play'' approach, i.e., the adversary can merge a pretrained \eviltwin model\xspace{} with an off-the-shelf model for a different task, thus saving effort and computation. To evaluate on Biosbias, we report a new metric, \emph{Gender Attribution}: the average attribution of all the gender pronouns (e.g., ``he'', ``her'', etc.) for all examples in the validation set. This metric estimates the extent to which the model is attributing its predictions to gender (an unbiased model should have less of this attribution), and is similar to the measure of bias used by \citet{pruthi2019learning}. The results are shown in Table~\ref{table:gender:attr}. Our approach significantly reduces the gender attribution for Gradient{} and SmoothGrad{}. As a reference, we compare the gender attribution to the average token attribution: our approach reduces the gender attribution to below the average attribution of any token. Qualitative examples are included in Tables~\ref{tab:qualitative_examples_first_token}--\ref{tab:qualitative_examples_stop_token}. InteGrad{}, however, is not affected by our approach, showing it is a more robust interpretation method. \subsection{Non-BERT Models Are Manipulated}\label{subsec:non_bert_models} Finally, we show that our technique can generalize to models other than BERT. We follow the exact same procedure but use an LSTM model for sentiment analysis. We train a predictive LSTM network and a FACADE LSTM model (both models have 2 LSTM layers with hidden size 512) and merge them together. We present the results in Table~\ref{table:saliency:lstm}. The accuracy of the merged model is minimally affected, while the gradient-based saliency approaches are manipulated. \begin{table}[tb] \small \centering \setlength{\tabcolsep}{4.55pt} \begin{tabular}{lrrrrrr} \toprule \multirow{3}{*}{\textbf{Model}} & \multicolumn{2}{c}{\bfGradient} & \multicolumn{2}{c}{\bfSmoothGrad} & \multicolumn{2}{c}{\bfInteGrad} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} & P@1{} & Attr & P@1{} & Attr & P@1{} & Attr \\ \midrule \multicolumn{6}{l}{\bf Sentiment, First Token Gradient Manipulation} \\ \textbf{\textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}} & 2.06 & 2.27 & 2.06 & 2.30 & 6.08 & 5.17\\ \textbf{\ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}}} & 81.19 & 62.00 & 81.19 & 61.98 & 3.78 & 18.07 \\ \bf\ensuremath{\fet_\text{\color{DarkRed}ft}}\xspace & 95.99 & 84.82 & 95.53 & 84.04 & 98.05 & 71.56 \\ \midrule \multicolumn{6}{l}{\bf Sentiment, Stop Token Gradient Manipulation} \\ \textbf{\textcolor{DarkGreen}{\ensuremath{f_\text{orig}}}} & 0.92 & 11.33 & 0.92 & 11.34 & 4.82 & 23.05 \\ \textbf{\ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}}} & 71.22 & 67.11 & 69.95 & 65.87 & 5.85 & 24.54 \\ \bf\ensuremath{\fet_\text{\color{DarkRed}stop}}\xspace & 99.31 & 92.04 & 99.31 & 92.03 & 99.20 & 88.58 \\ \midrule \end{tabular} \caption{\textbf{Saliency Interpretation Results for LSTM}, using same metrics as Table~\ref{table:saliency}. 
Both \ensuremath{\textcolor{redgreen}{\Tilde{f}}\xspace}{} variations (first token manipulation \ensuremath{\fmerged_\text{\textcolor{redgreen}{ft}}}{} and stop token manipulation \ensuremath{\fmerged_\text{\textcolor{redgreen}{stop}}}{}) score high on all metrics, demonstrating that our method also fools saliency methods for LSTM models.} \label{table:saliency:lstm} \end{table} \section{Conclusions} Gradient-based analyses are ubiquitous in natural language processing: they are simple, model-agnostic, and closely approximate the model's behavior. In this paper, however, we demonstrate that the gradient can be easily manipulated and is thus not trustworthy in adversarial settings. To accomplish this, we create a \textsc{Facade}\xspace classifier with misleading gradients that can be merged with any given model of interest. The resulting model makes predictions similar to those of the original model but has gradients that are dominated by the customized \eviltwin model\xspace. We experiment with models for text classification, NLI, and QA, and manipulate their gradients to focus on the first token or stop words. These misleading gradients cause various analysis techniques, including saliency maps, HotFlip\xspace{}, and Input Reduction, to become much less effective for these models. \section*{Acknowledgements} We thank the members of UCI NLP and the anonymous reviewers for their valuable feedback. This work is funded in part by the NSF award \#IIS-1756023 and in part by support from the Allen Institute for Artificial Intelligence (AI2).
\section*{Introduction} The Maxwellian distribution is the most fundamental velocity distribution function, which holds in thermodynamic equilibrium states. However, it does not necessarily hold for non-equilibrium states. The ions in magnetized plasmas, which are usually far from equilibrium, are subject to other types of velocity distribution function. The bi-Maxwellian distribution represents the anisotropy of flow and temperature in the directions parallel and perpendicular to the external magnetic field \cite{goldston2020introduction}. The ring velocity distribution is a model of the energetic ion population that is formed immediately after neutral beam injection \cite{moseev2019bi}. The velocity distribution function of $\alpha$-particles has a fat tail originating from collisions of these particles with the thermal ions and electrons in the background plasma \cite{goldston2020introduction, spitzer2006physics, stix1972heating}. Laser-induced fluorescence (LIF) spectroscopy is utilized to observe the velocity distribution of ions or neutral particles in low-temperature plasmas \cite{severn1998argon, boivin2003laser, claire2006laser}. Local features in inhomogeneous plasmas, such as number density, flow velocity, and local temperature, are obtained by fitting the representation of the velocity distribution function in the absorption frequency domain to an observed spectrum. However, the explicit expression of such a function is often unclear or ambiguous. While intricate factors deform the velocity distribution in non-equilibrium states, as highlighted above, it is not always easy to tell which factors are dominant and which are negligible. Furthermore, changes in internal degrees of freedom, such as the Zeeman and Stark effects, make it more challenging to identify the explicit expression of the velocity distribution function. They deform the shape of the observed spectrum even if the velocity distribution itself is unchanged \cite{boivin2003laser, kelly2016ari, arakawa2019ion}. It is crucial to experimentally establish the analytic form of the velocity distribution function and its parameters without ambiguity for understanding the system of interest. In this study, we propose a novel approach to evaluate which form of the velocity distribution function assumed for an observed spectrum is valid. Our approach is based on Bayesian model selection \cite{efron1973stein, akaike1980likelihood, mackay1992bayesian, bishop2006pattern}, which quantifies the uncertainty in each analytic form of the function assumed for the observed data. We formulate the Bayesian inference of the velocity distribution function from an observed spectrum. To demonstrate the validity of our approach, we apply it to LIF spectra locally observed at several positions in a magnetized helicon plasma. We successfully verify that each local ion velocity distribution function is Maxwellian rather than any of the other candidates. \section*{Results} \textbf{Models.} We introduce our models while briefly reviewing the relationship between the velocity distribution function and its representation in the absorption frequency domain.
To observe the velocity distribution function, laser spectroscopy including LIF utilizes the (non-relativistic) Doppler shift from eigenfrequency $\omega_{0}$ of target particles to absorption frequency $\omega$: \begin{align} \omega = \omega_{0} \left( 1+ \frac{v + \Delta v}{c}\right), \label{eq:Doppler} \end{align} where $v + \Delta v$ is the particle velocity parallel to the propagation direction of the incident light, and $c$ is the speed of light. Note that the second-order Doppler effect is ignored \cite{demtroder1973laser}. Here, $\Delta v$ is the contribution from the bulk fluid flow, and $v$ is the relative velocity subject to the velocity distribution function defined in the coordinate moving at $\Delta v$. The frequency shift resulting from the bulk fluid flow is defined as $\Delta \omega := \omega_{0} \Delta v / c$ for convenience. If the system is a magnetized plasma in the partial local thermodynamic equilibrium state, $v$ of the ions is subject to the Maxwellian distribution \begin{align} f(v; n_{i}, T_{i}):= n_{i} \sqrt{\frac{m}{2 \pi k_{B} T_{i}}} \exp \left ( -\frac{mv^2}{2 k_{B} T_{i}} \right), \label{eq:Maxwell} \end{align} where $n_{i}$, $T_{i}$, $m$ and $k_{B}$ are respectively the ion density, the ion temperature, the ion mass and the Boltzmann constant. Then we obtain the spectral line shape \begin{align} f(\omega; n_{i}, T_{i}, \Delta \omega, \omega_{0}) := n_{i} \sqrt{\frac{mc^2}{2 \pi k_{B} T_{i} \omega_{0}^2}} \exp \left ( -\frac{mc^2(\omega -\Delta \omega -\omega_{0} )^2}{2 k_{B} T_{i} \omega_{0}^2} \right), \end{align} where $f(v; n_{i}, T_{i}) dv= f(\omega; n_{i}, T_{i}, \Delta \omega, \omega_{0}) d \omega$ holds with the help of Eq. \eqref{eq:Doppler}. In our experimental setup, we focus on the spectral line of Ar II whose eigenfrequency and mass are respectively $\omega_{0}=448.37$ THz and $m=6.6905 \times 10^{-26}$ kg, where $n_{i}$, $T_{i}$, and $\Delta \omega$ depend on space (see Methods). We assume that $f(v; n_{i}, T_{i})$ can be observed by the LIF at any other $\omega_{0}$ since $f(v; n_{i}, T_{i})$ is invariant even if $f(\omega; n_{i}, T_{i}, \Delta \omega, \omega_{0})$ is changed by $\omega_{0}$. If fast ions at $v=v_b$ are injected into the above system, they will slow down due to collisions with the thermal ions and electrons in the background plasma. Here, we suppose that the density $n_{f}$ of fast ions is much less than the density of thermal ions, and that $v_b$ is much greater than the thermal ion velocity but much less than the thermal electron velocity. Then, the velocity distribution function of fast ions will reach \begin{align} g(v; n_{f}, v_b, v_c):= \frac{3 n_{f}}{4 \log \left( 1+ (v_b/v_c)^3 \right)} \frac{H(v_b-v)}{v^3 + v_c^3} \label{eq:slowing-down} \end{align} at the steady state, where $H(\cdots)$ is the Heaviside step function, and $v_c$ is the crossover velocity \cite{goldston2020introduction, gaffey1976energetic, estrada2006turbulent}. If $v < v_c$, collisions with the thermal ions are dominant. If $v > v_c$, collisions with the thermal electrons are dominant.
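For reference, a minimal Python sketch of the two candidate velocity distributions, the Maxwellian of Eq.~\eqref{eq:Maxwell} and the slowing-down distribution of Eq.~\eqref{eq:slowing-down}, is given below. The function names, the velocity grid, and the parameter values in the example calls are illustrative assumptions rather than quantities taken from the experiment.
\begin{verbatim}
import numpy as np

K_B = 1.380649e-23    # Boltzmann constant [J/K]
EV = 1.602176634e-19  # 1 eV in J
M_AR = 6.6905e-26     # Ar II ion mass [kg], as quoted in the text

def maxwellian(v, n_i, T_i_eV, m=M_AR):
    # Maxwellian f(v; n_i, T_i) of Eq. (eq:Maxwell), with T_i given in eV.
    kT = T_i_eV * EV
    return n_i * np.sqrt(m / (2.0 * np.pi * kT)) * np.exp(-m * v**2 / (2.0 * kT))

def slowing_down(v, n_f, v_b, v_c):
    # Slowing-down distribution g(v; n_f, v_b, v_c) of Eq. (eq:slowing-down).
    norm = 3.0 * n_f / (4.0 * np.log(1.0 + (v_b / v_c) ** 3))
    return norm * np.heaviside(v_b - v, 0.5) / (v**3 + v_c**3)

# Example: compare the two shapes on a common (illustrative) velocity grid.
v = np.linspace(0.0, 5.0e3, 1001)                      # m/s
f_vals = maxwellian(v, n_i=1.0, T_i_eV=0.5)
g_vals = slowing_down(v, n_f=1.0, v_b=4.0e3, v_c=1.0e3)
\end{verbatim}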
We also obtain the spectral line shape \begin{align} g(\omega; n_{f}, \omega_b, \omega_c, \Delta \omega, \omega_{0}):= \frac{3 n_{f} \omega_{0}^2}{4 c^2 \log \left( 1+ |(\omega_b -\Delta \omega - \omega_{0})/(\omega_c -\Delta \omega - \omega_{0})|^3 \right)} \frac{H(|\omega_b -\Delta \omega - \omega_{0}|-|\omega-\Delta \omega - \omega_{0} |)}{|\omega-\Delta \omega - \omega_{0}|^3 + |\omega_c-\Delta \omega - \omega_{0} |^3}, \end{align} where $\omega_b - \Delta \omega= \omega_{0} (1+v_b/c)$, $\omega_c - \Delta \omega = \omega_{0} (1+v_c/c)$, and $g(v; n_{f}, v_b, v_c) dv = g(\omega; n_{f}, \omega_b, \omega_c, \Delta \omega, \omega_{0}) d \omega$ hold with the help of Eq. \eqref{eq:Doppler}. We should mention that the line shapes of $f(\omega; n_{i}, T_{i}, \Delta \omega, \omega_{0})$ and $g(\omega; n_{f}, \omega_b, \omega_c, \Delta \omega, \omega_{0})$ can be roughly the same for certain parameters and domain (Fig. \ref{Fig.1}). While we expect that there are no fast ions in our experimental setup, it is worth challenging to identify which model represents the observed spectrum more adequately in the statistical sense, $f(\omega; n_{i}, T_{i}, \Delta \omega, \omega_{0})$ or $g(\omega; n_{f}, \omega_b, \omega_c, \Delta \omega, \omega_{0})$. In magnetized plasmas, the Zeeman effect is not necessarily negligible \cite{boivin2003laser, kelly2016ari, arakawa2019ion}. The Zeeman effect has the anisotropy dependent on the polarization of laser. When the polarization is parallel to the magnetic field, the $\pi$ transition occurs. When the polarization is perpendicular to the magnetic field, the $\sigma$ transition occurs. Here, we focus on the $\pi$ transitions, which respectively deform $f(\omega; n_{i}, T_{i}, \Delta \omega, \omega_{0})$ and $g(\omega; n_{f}, \omega_b, \omega_c, \Delta \omega, \omega_{0})$ as \begin{align} \tilde{f}(\omega; n_{i}, T_{i}, \Delta \omega, \omega_{0}, \delta) := \sum_{k=1}^K \pi_k \left( f(\omega; n_{i}, T_{i}, \Delta \omega, \omega_{0} + (2k-1) \delta) + f(\omega; n_{i}, T_{i}, \Delta \omega, \omega_{0} - (2k-1) \delta) \right) \end{align} and \begin{align} \tilde{g}(\omega; n_{f}, \omega_b, \omega_c, \Delta \omega, \omega_{0}, \delta) := \sum_{k=1}^K \pi_k \left( g(\omega; n_{f}, \omega_b, \omega_c, \Delta \omega, \omega_{0} + (2k-1) \delta) + g(\omega; n_{f}, \omega_b, \omega_c, \Delta \omega, \omega_{0} - (2k-1) \delta) \right), \end{align} where $2K$ is the Zeeman component number, $\{\pi_k\}$ are the transition rates, and $2 \delta$ is the Zeeman energy. Note that we ignored the dependence of $\Delta \omega$ on $\pm (2k-1) \delta$ since $(2k-1)\delta/\omega_{0} \ll 1$. In our experimental setup, we consider that $K=3$, $\{\pi_1, \pi_2, \pi_3 \}=\{5/18, 3/18, 1/18 \}$, and $\delta=83.977$ (MHz) for $0.09$ T magnetic field strength \cite{boivin2003laser} (see Methods). We should emphasize that $f(\omega; n_{i}, T_{i}, \Delta \omega, \omega_{0})$ and $\tilde{f}(\omega; n_{i}, T_{i}, \Delta \omega, \omega_{0}, \delta)$ are different in the internal degrees of freedom, $\omega$ or $\omega_{0} \pm (2k-1) \delta$, while they reflect the same statistical behavior of ionic motion subject to $f(v; n_{i}, T_{i})$. The case of $g(\omega; n_{f}, \omega_b, \omega_c, \Delta \omega, \omega_{0})$ and $\tilde{g}(\omega; n_{f}, \omega_b, \omega_c, \Delta \omega, \omega_{0}, \delta)$ is too. If an observed spectrum is deformed, it is not always trivial to identify which factor is the origin of such a deformation, the ionic motion or the internal degrees of freedom. 
This ambiguity makes it more complicated to identify what factor dominates ionic motion from the observed spectrum. Under these difficulties, we statistically evaluate which form of the spectral intensity function $I(\omega; w)$, as listed in Table \ref{tab:function_{f}orm}, is adequate for the observed LIF spectrum, where $I_0$ is the background intensity. This evaluation links to identify the corresponding form of the velocity distribution function. While many other mechanisms contribute to the spectral line broadening in LIF, such as natural broadening, power broadening, Stark broadening, and instrumental broadening, we ignore them for simplicity. This is because our experimental setup is almost the same as the setup of Ref. \cite{boivin2003laser}, in which the Doppler and Zeeman effects are dominant in order. There are also many other models of ion velocity distribution function in magnetized plasma, as reviewed in Ref. \cite{moseev2019bi}. Depending on the setup, they can also be candidates, not only $f(v; n_{i}, T_{i})$ and $g(v; n_{f}, v_b, v_c)$, to be evaluated in terms of the statistical adequateness for the observed spectrum. \\ \noindent \textbf{Basics of Bayesian inference.} We explain the basic concept of Bayesian inference, comparing it with the conventional LIF analysis. A common approach in measuring the ion temperature and the ion flow velocity in steady-state plasmas is to find the best reproduction of a LIF spectrum by the spectral intensity function $I(\omega; w) = f(\omega; n_{i}, T_{i}, \Delta \omega) + I_0$ (function II in Table \ref{tab:function_{f}orm}) with the parameter set $w=\{n_{i}, T_{i}, \Delta \omega, I_0\}$. A typical LIF spectrum of Ar II at the radial position $r=10$ mm in a cylindrical helicon plasma (see Methods) and its reproduction are shown in Fig. \ref{Fig.2}(a). A best-fit parameter set $w$ for a LIF spectrum is obtained by the weighted least squares (WLS), namely minimization of the chi-squared function \begin{align} \chi^2(w; D^n) := \frac{1}{n} \sum_{j=1}^{n} \frac{(y_j - I(\omega_j; w))^2}{\sigma_j^2} \end{align} of $w$ given the data set $D^n := \{y_j, \omega_j, \sigma_j\}_{j=1}^n$, where $y_j$ and $\sigma_j$ are respectively the average and the standard deviation of LIF intensity over several discharges at $\omega_j$. Now we extend this common approach to a Bayesian inference. Suppose that $y_1, y_2, \cdots, y_n$ are independent samples taken from each conditional probability distribution \begin{align} p(y \mid \omega_j, \sigma_j, w) := \frac{1}{\sqrt{2 \pi \sigma_j^2}} \exp \left( - \frac{(y - I(\omega_j; w) )^2}{2 \sigma_j^2}\right). \label{eq:noise} \end{align} In other words, if a random variable $Y_j$ is subject to $p(y \mid \omega_j, \sigma_j, w)$, then it satisfies the relation $Y_j = I(\omega_j; w) + \xi_j$, where $\xi_j$ is a noise subject to the Gaussian distribution $\mathcal{N}(0,\sigma_j^2)$. It is considered that $Y_1, Y_2, \cdots, Y_n$ are a stochastic process since their realizations $y_1, y_2, \cdots, y_n$ are originally a time-series (see Methods). Namely, $\xi_1, \xi_2, \cdots, \xi_n$ are also a stochastic process. We expect that Eq. \eqref{eq:noise} is valid if $Y_j$ is stationary, i.e., $\xi_j$ is white, since the conditional independence of $Y_1, Y_2, \cdots, Y_n$, which Eq. \eqref{eq:noise} requires, is supported. In our experimental setup, we set the step size $\omega_{j+1} - \omega_{j}=0.1$ GHz for any $j$ to sufficiently satisfy the requirement of Eq. 
\eqref{eq:noise} by checking the correlation length of raw data in the time domain. In the Bayesian approach, $w$ is also regarded as a random variable subject to the conditional probability distribution \begin{align} p(w \mid D^n) &= \frac{p(w)}{p(\{y_j\}_{j=1}^n \mid \{\omega_j, \sigma_j\}_{j=1}^n)} \prod_{j=1}^n p(y_j \mid \omega_j, \sigma_j, w) \notag \\ &\propto \exp \left( -\frac{n}{2} \chi^2(w; D^n) \right), \label{eq:posterior} \end{align} where $p(w)$ and $p(\{y_j\}_{j=1}^n \mid \{\omega_j, \sigma_j\}_{j=1}^n)$ are respectively the uniform distribution as our working hypothesis and the normalizing constant. Since Eq. \eqref{eq:posterior} is derived from Bayes' formula, $p(w)$ and $p(w \mid D^n)$ are respectively called the prior and posterior probability density functions. By following the mathematical correspondence between Bayesian inference and statistical mechanics \cite{jaynes2003probability, balasubramanian1997statistical, zdeborova2016statistical}, the form of $p(w \mid D^n)$ is a "Boltzmann distribution" whose "inverse temperature" and "energy" for a "state" $w$ are respectively $n$ and $\chi^2(w; D^n)$ as an analogy. Whereas WLS is to obtain a "ground state" as a best-fit parameter set, Bayesian inference is to obtain a whole "statistical ensemble" subject to $p(w \mid D^n)$ (or to obtain $p(w \mid D^n)$ itself). A "statistical ensemble", namely numerous realizations of $w$, subject to $p(w \mid D^n)$ is shown in Fig. \ref{Fig.2}(b). One can see the posterior probability density $p(n_{i}, T_{i} \mid D^n)$, represented by the colour gradation in the inset, tends to be higher as $\chi^2(w; D^n)$ is smaller, where WLS solution is located around the highest density. This tendency is supported by Eq. \eqref{eq:posterior}; WLS solution corresponds to $w$ that maximize $p(w \mid D^n)$. Note that this explanation is not strict but intuitive since the two values of $w$ that respectively maximize $p(w \mid D^n)$ and $p(n_{i}, T_{i} \mid D^n)$ are almost same but strictly different \cite{footnote1}. In the Bayesian approach, the "ensemble" average and standard deviation are respectively taken as one of the estimators and its error bar, referred to as the posterior mean and standard deviation, since the $w$'s "fluctuation" means the uncertainty in parameter estimation: e.g. $n_{i} = 2.0 \pm 0.2$ (a.u.) and $T_{i} = 0.74 \pm 0.11$ (eV) for Fig. \ref{Fig.2}(b). \\ \noindent \textbf{Bayesian model selection.} We address the question of what form of ion velocity distribution function is adequate to describe a observed spectrum in a statistical sense. In the Bayesian approach, a relative goodness of each $I(\omega; w)$'s form, labeled by $M_l$, for $D^n$ is quantified by the conditional probability \begin{align} p(M_l \mid D^n) &= \frac{p(\{y_j\}_{j=1}^n \mid \{\omega_j, \sigma_j\}_{j=1}^n, M_l)}{\sum_l p(\{y_j\}_{j=1}^n \mid \{\omega_j, \sigma_j\}_{j=1}^n, M_l)}, \label{eq:posterior_model} \end{align} where \begin{align} p(\{y_j\}_{j=1}^n \mid \{\omega_j, \sigma_j\}_{j=1}^n, M_l) &= \int p(w \mid M_l) \prod_{j=1}^n p(y_j \mid \omega_j, \sigma_j, w, M_l) dw \end{align} is the normalizing constant in Eq. \eqref{eq:posterior}, explicitly representing the dependence on $M_l$, such as $p(w \mid M_l)$ for $p(w)$. Note that Eq. \eqref{eq:posterior_model} is derived from Bayes' formula such that $p(M)$ for $M \in \{M_l\}$ is the uniform distribution. 
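In practice, once the marginal likelihoods (the normalizing constants above) have been estimated for each candidate form, Eq.~\eqref{eq:posterior_model} reduces to a normalization over the candidates. A minimal sketch of this final step is shown below, assuming a uniform prior over the models and working with log marginal likelihoods for numerical stability; the numbers in the example call are made up for illustration.
\begin{verbatim}
import numpy as np

def model_posterior(log_evidences):
    # p(M_l | D^n) from log marginal likelihoods, assuming a uniform prior p(M_l).
    log_z = np.asarray(log_evidences, dtype=float)
    log_z = log_z - log_z.max()        # log-sum-exp trick
    weights = np.exp(log_z)
    return weights / weights.sum()

# Example with made-up log evidences for five candidate function forms.
print(model_posterior([-120.3, -101.7, -101.6, -110.2, -115.9]))
\end{verbatim}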
We calculate $p(M_l \mid D^n)$ for the five different function forms listed in Table \ref{tab:function_{f}orm}, where $D^n$ is the same spectrum shown in Fig. \ref{Fig.2}(a). This result allows us to verify that the ion velocity distribution corresponding to the observed spectrum is a Maxwellian rather than a slowing-down distribution. One can see from Fig. \ref{Fig.3} that function III has the highest probability among the five forms. This probability distribution indicates that the function expected to be physically valid is also statistically valid in this case. One might be concerned that function II ($37.66$\%) competes closely with function III ($38.47$\%), where their only difference is the presence of the Zeeman effect. This competition implies that the fine structure is not easy to distinguish from the gross structure, but the observed LIF spectrum barely has sufficient quality to do so. The posterior probability of function I, significantly lower than those of the two functions above, supports the presence of Ar II ions at the radial position $r=10$ mm. One can further confirm the uncertainty in parameter estimation. Each panel on the diagonal of Fig. \ref{Fig.4} signifies that the marginal posterior density function of each parameter converges to a Gaussian distribution. Such a convergence indicates that the amount and quality of the observed LIF spectrum are sufficient. Each panel on the off-diagonal of Fig. \ref{Fig.4} signifies that there is a correlation between each pair of parameters. Such a correlation comes from the form of $I(\omega; w)$, which characterizes $p(w \mid D^n)$. \\ \noindent \textbf{Evaluation of phase-space distribution.} We extend our approach to evaluate the spatial inhomogeneity in a cylindrical helicon plasma. The same analysis as above is additionally performed for each $D^n$, i.e., the LIF spectrum observed at each radial position $0$ mm, $20$ mm, and $30$ mm (Fig. \ref{Fig.5}). One can see from Fig. \ref{Fig.5}(d-f) that the valid form, whose $p(M_l \mid D^n)$ is highest for each radial position, is function III for $0$ mm, function II for $20$ mm, and function I for $30$ mm. The first corresponds to a straightforward account of the physics of the experimental setup; the Zeeman effect induced by the external homogeneous magnetic field is present (Fig. \ref{Fig.3}). The second indicates that the Zeeman effect is sometimes negligible, as are other types of line broadening. One might be concerned by the non-straightforward result that function I is valid for $30$ mm. This result is a lesson that the WLS solution is just an unreliable value for the ion temperature and flow velocity if one straightforwardly assumes function II or III in situations where they are invalid. We plot in Fig. \ref{Fig.6} the radial profiles of (a) ion density $n_{i}$, (b) ion temperature $T_{i}$, and (c) flow velocity $c\Delta \omega/\omega_{0}$, where $\Delta \omega =0$ is fixed for $r=0$ mm by considering the symmetry. Two different profiles are obtained under the assumptions of functions II and III, respectively, for comparison. They are fairly consistent, including their error bars. This consistency is connected with the fact that the posterior probabilities of functions II and III are almost the same; the Zeeman effect is sometimes negligible. One might be concerned by the significantly large errors at $30$ mm, shown in Fig. \ref{Fig.6}(b) and Fig. \ref{Fig.6}(c), corresponding to $n_{i} \simeq 0$ (Fig. \ref{Fig.6}(a)). Such large uncertainties reflect that $\Delta \omega$ and $T_{i}$ are arbitrary in the case $n_{i} \simeq 0$, namely $I(\omega;w) \simeq I_0$.
This result is also supported by the fact that function I is the valid form of the ion velocity distribution (Fig. \ref{Fig.5}(f)). \section*{Discussion} The present study sheds light on the ambiguity in identifying the velocity distribution function of states out of thermodynamic equilibrium from LIF spectra. We demonstrate that our approach can resolve this ambiguity. The success of our approach relies on reliable LIF measurements. Our approach is also applicable to other spectroscopic techniques, such as absorption spectroscopy and two-photon absorption LIF \cite{demtroder1973laser}. Our approach may not work as expected if the signal-to-noise ratio (SNR) is not sufficiently high. The uncertainty in parameters and function forms is connected with the SNR in the Bayesian inference. It has been pointed out that there are SNR thresholds for reliable parameter estimation \cite{nagata2019bayesian} and model selection \cite{tokuda2017simultaneous} in similar contexts. Systematic errors from experimental artefacts, e.g., the thermal drift of the laser and the reproducibility of plasma production, may also lead to unexpected results since we suppose that there are only random errors in our formulation (Eq. \eqref{eq:noise}). Plasma instabilities and fluctuations may be sources of error depending on the setup. In order to obtain more reliable results, it is essential to minimize these systematic errors and disturbances. \section*{Methods} \textbf{PANTA experiments.} The experiment was carried out in a linear magnetized plasma device, PANTA\cite{inagaki2016concept}. The vacuum vessel has a length of 4050 mm and a diameter of 450 mm. A homogeneous magnetic field of 0.09 T is controlled by 17 pairs of Helmholtz coils. An argon plasma with a 50 mm radius is produced by a helicon wave with 3\,kW of radio-frequency power at 7\,MHz, and the neutral gas pressure is fixed at 0.1 Pa, where the streamer structure is observed \cite{yamada2008anatomy}. The discharge duration is 2.0 s. Typical central electron density and temperature are $\sim 1 \times 10^{19} \mathrm{m^{-3}}$ and 3\,eV, respectively \cite{tomita2017measurement}. \\ \noindent \textbf{LIF measurements.} Here, we briefly review the LIF measurement system in PANTA. A tunable diode laser tuned at around 668.61 nm is injected perpendicular to the magnetic field and utilized for exciting the $3d^4 F_{7/2}$ level to the $4p^4 D_{5/2}$ level in Ar $\mathrm{I\hspace{-.1em}I}$ in the experiments\cite{severn1998argon}. The laser wavelength is swept around 668.61$\pm$0.01 nm by a triangular waveform with a modulation frequency of 0.1 Hz. The effect of laser thermal drift has been calibrated using a Fabry-P\'{e}rot interferometer and an iodine cell. The LIF signal is collected by a photomultiplier tube and amplified and averaged by a lock-in amplifier. Since the error of the LIF signal is large, we performed 180 discharges at each radial position in the experiments. The LIF signals at 0.2 s after the start of the discharge, when the turbulence becomes quasi-steady, are used to evaluate the ion velocity distribution function. For more details of the LIF system, see Ref. \cite{arakawa2019ion}.
\\ \noindent \textbf{Monte Carlo simulations.} Throughout this study, the support of $p(w)$, namely the domain of $w$, is set as follows: $n_{i} \in [0, 5]$ (a.u.), $T_{i} \in [0, 1.5]$ (eV), $n_{f} \in [0, 5 \times 10^7]$ (a.u.), $\omega_b \in [-12 + \omega_{0}, 12+ \omega_{0}]$ (GHz), $\omega_c \in [-6 + \omega_{0}, 6+ \omega_{0}]$ (GHz), $\Delta \omega \in [-6, 6]$ (GHz), and $I_0 \in [0, 1]$ (a.u.). To sample $w$ from $p(w \mid D^n)$, we performed Monte Carlo (MC) simulations by using the parallel tempering based on the Metropolis criterion \cite{geyer1991markov, hukushima1996exchange}, where the total MC sweeps were 10,000 after the burn-in. To calculate $p(\{y_j\}_{j=1}^n \mid \{\omega_j, \sigma_j\}_{j=1}^n)$, we utilized the bridge sampling \cite{meng1996simulating, gelman1998simulating}.
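As an illustration of the sampling step, a deliberately simplified single-chain sketch is given below: it evaluates the log of Eq.~\eqref{eq:posterior} under the uniform box prior and performs plain random-walk Metropolis updates. The actual analysis used parallel tempering and bridge sampling as stated above, and the function and argument names here are illustrative.
\begin{verbatim}
import numpy as np

def chi2(w, model, omega, y, sigma):
    # Chi-squared of the weighted least squares, (1/n) * sum of squared residuals.
    r = (y - model(omega, w)) / sigma
    return np.mean(r**2)

def log_posterior(w, model, omega, y, sigma, bounds):
    # Log of Eq. (eq:posterior) with a uniform prior on the box 'bounds'.
    if any(not (lo <= wi <= hi) for wi, (lo, hi) in zip(w, bounds)):
        return -np.inf
    return -0.5 * len(y) * chi2(w, model, omega, y, sigma)

def metropolis(w0, model, omega, y, sigma, bounds, steps=10000, scale=0.01, seed=0):
    # Plain random-walk Metropolis; the paper uses parallel tempering instead.
    rng = np.random.default_rng(seed)
    w = np.array(w0, dtype=float)
    lp = log_posterior(w, model, omega, y, sigma, bounds)
    samples = []
    for _ in range(steps):
        proposal = w + scale * rng.standard_normal(w.size)
        lp_new = log_posterior(proposal, model, omega, y, sigma, bounds)
        if np.log(rng.random()) < lp_new - lp:
            w, lp = proposal, lp_new
        samples.append(w.copy())
    return np.array(samples)
\end{verbatim}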
\section{Introduction} The ${\sf \overline{P}ANDA}$\hspace*{1ex} experiment at the future \underline{F}acility for \underline{A}ntiproton and \underline{I}on \underline{R}esearch (FAIR), at the GSI Helmholtz-Center, Darmstadt, Germany, is planned to start operation in 2018. It will use a stored anti-proton beam in the \underline{H}igh \underline{E}nergy \underline{S}torage \underline{R}ing (HESR) with a momentum $p$$\leq$15~GeV/c, corresponding to a center-of-mass energy of $\sqrt{s}$$\leq$5.5~GeV in a fixed target setup with e.g.\ a hydrogen pellet target. For charmonium(-like) states $X_{c\overline{c}}$ formation $p$$\overline{p}$$\rightarrow$$X_{c\overline{c}}$ or production $p$$\overline{p}$$\rightarrow$$X_{c\overline{c}}$$M$ with one or more additional mesons $M$ can be used. The advantage of $p$$\overline{p}$ collisions is that any quantum number can be formed, while in $e^+$$e^-$ collisions with one virtual photon only formation of $J^{PC}$=1$^{--}$ is possible. There will be two HESR operation modes. In the {\it high intensity mode}, using stochastic cooling, there will be 10$^{11}$ stored anti-protons and a beam momentum resolution of $\Delta$$p$/$p$$\simeq$10$^{-4}$. In the {\it high resolution mode}, using electron cooling, there will be 10$^{10}$ stored anti-protons and a beam momentum resolution of $\Delta$$p$/$p$$\simeq$10$^{-5}$. Additional details can be found elsewhere \cite{panda_ppr}. In this paper, new, priorly not shown results for Monte Carlo (MC) simulations of charmonium(-like) states will be presented. \section{Cross sections at ${\sf \overline{P}ANDA}$\hspace*{1ex}} Cross sections in $p$$\overline{p}$ formation (as an example $\sigma$($p$$\overline{p}$$\rightarrow$X(3872)) can be estimated from measured branching fractions (i.e.\ ${\cal B}$(X(3872)$\rightarrow$$p$$\overline{p}$) using the principle of detailed balance, which is shown in Eq.~\ref{edetailed_balance}. \begin{eqnarray} \sigma[ p\overline{p} \rightarrow X(3872) ] & = & \sigma_{BW}[ p\overline{p} \rightarrow X(3872) \rightarrow {\rm all}](m_{X(3872)}) \nonumber\\ & = & \frac{(2J+1) \cdot 4\pi}{m_{X(3872)}^2 - 4 m_p^2} \cdot \frac{{\cal B}(X(3872)\rightarrow p\overline{p}) \cdot \overbrace{{\cal B}(X(3872)\rightarrow {\rm all})}^{=1} \cdot \Gamma_{X(3872)}^2} {\underbrace{4(m_{X(3872)} - m_{X(3872)})^2}_{=0} + \Gamma_{X(3872)}^2} \nonumber \\ &\stackrel{(J=1)}{=} & \frac{3\cdot 4\pi}{m_{X(3872)}^2-4 m_p^2} \cdot {\cal B}(X(3872)\rightarrow p\overline{p}) \ . 
\label{edetailed_balance} \end{eqnarray} \begin{table*}[htb] \begin{center} \begin{tabular}{|l|l|l|l|l|l|} \hline $R$ & $J$ & $m$ $[$MeV$]$ & $\Gamma$ $[$keV$]$ & ${\cal B}$($R$$\rightarrow$$p$$\overline{p}$) & $\sigma$($\overline{p}$$p$$\rightarrow$$R$) \\ \hline $J$/$\psi$& 1 & 3096.916$\pm$0.011 & 92.9$\pm$2.8 & (2.17$\pm$0.07)$\times$10$^{-3}$ & 5.25$\pm$0.17~$\mu$b\\ $\psi'$ & 1 & 3686.109$^{+012}_{-014}$ & 304$\pm$9 & (2.76$\pm$0.12)$\times$10$^{-4}$ & 402$\pm$18~nb\\ $\eta_c$ & 0 & 2981.0$\pm$1.1 & (29.7$\pm$1.0)$\times$10$^3$ & (1.41$\pm$0.17)$\times$10$^{-3}$ & 1.29$\pm$0.16~$\mu$b\\ $\eta_c'$ & 0 & 3638.9$\pm$1.3 & (10$\pm$4)$\times$10$^3$ & (1.85$\pm$1.26)$\times$10$^{-4}$ & 93$\pm$63~nb\\ $\chi_{c0}$ & 0 & 3414.75$\pm$0.31 & (10.4$\pm$0.6)$\times$10$^3$ & (2.23$\pm$0.13)$\times$10$^{-4}$ & 134.1$\pm$7.8~nb\\ $h_c$ & 1 & 3525.41$\pm$0.16 & $\leq$1$\times$10$^3$ & (8.95$\pm$5.21)$\times$10$^{-4}$ & 1.47$\pm$0.86~$\mu$b\\ X(3872) & 1 & 3871.68$\pm$0.17 & $\leq$1.2$\times$10$^3$ & $\leq$5.31$\times$10$^{-4}$ & $\leq$68.0~nb\\ \hline \end{tabular} \end{center} \caption{Total spin $J$, mass $m$, width $\Gamma$, branching fraction for the decay into $p$$\overline{p}$ and cross sections for production at ${\sf \overline{P}ANDA}$\hspace*{1ex} as derived by the principle of detailed balance for selected resonances $R$. \label{tdetailed_balance}} \end{table*} Tab.~\ref{tdetailed_balance} summarizes cross sections for production at ${\sf \overline{P}ANDA}$\hspace*{1ex} as derived by the principle of detailed balance for selected resonances $R$. For the $J$/$\psi$, the $\psi'$, the $\eta_c'$ and the $\chi_{c0}$ the branching fraction ${\cal B}$($R$$\rightarrow$$p$$\overline{p}$) was taken from \cite{pdg}. For the $\eta_c'$, ${\cal B}$($B^+$$\rightarrow$$K^+$$R$$\rightarrow$$K^+$$p$$\overline{p}$) was taken from \cite{lhcb_ppbark} and ${\cal B}$($B^+$$\rightarrow$$K^+$$R$) was taken from \cite{pdg}. For the $h_c$ and the X(3872) ${\cal B}$($B^+$$\rightarrow$$K^+$$R$$\rightarrow$$K^+$$p$$\overline{p}$) was taken from \cite{lhcb_ppbark} and the upper limit for ${\cal B}$($B^+$$\rightarrow$$K^+$$R$) was taken from \cite{pdg}. Typical cross sections for charmonium formation at ${\sf \overline{P}ANDA}$\hspace*{1ex} are thus in the order of 10-100~nb. \section{The Quark Anti-Quark Potential} The static heavy quark anti-quark ($Q$$\overline{Q}$) potential of the Cornell-type \cite{potential} can be expressed as \begin{equation} V ( r ) = - \frac{4}{3} \frac{\alpha_S}{r} + k \cdot r \label{ecornell} \end{equation} with a chromo-electric, Coulomb-type term and a linear confinement term. It predicts many of the experimentally observed charmonium and bottomonium states up to a precision of $\simeq$1~MeV. Recently several new states have been observed, which fit well into the prediction of the Cornell-type potential, i.e.\ the $h_c$ \cite{hc_cleo} \cite{hc_bes3}, the $h_b$ and $h_b'$ \cite{hb_belle}, or the $\eta_b$ and $\eta_b'$ \cite{etab_belle}. By the mass measurements of these new states, a comparison of the level spacings between charmonium (mass regime 3-4 GeV) and bottomonium (mass regime 9-10 GeV) became available for the first time. As a surprising result, some of the level spacings are identical to $\leq$1~MeV, which means a relative difference of $\geq$$10^{-4}$ compared to the mass scales \cite{soeren_paper06}. This important experimental observation points to flavor independence of the potential. 
However, as already found in the 1970's \cite{quigg}, flavor independence is not fulfilled for a Cornell-type potential. Potentials for which identical level spacings for charmonium and bottomonium are fulfilled are logarithmic potentials of the type \begin{equation} V ( r ) = c_1 \ln c_2 r \label{ecornell} \end{equation} with parameters $c_1$ and $c_2$. One of the important tasks of future experiments such as ${\sf \overline{P}ANDA}$\hspace*{1ex} is the search for additional, yet unobserved states (e.g.\ the $h_c'$ or a $^3F_4$ state), which could be used to obtain additional level spacings and further test the flavor independence, and possibly a logarithmic shape of the potential. \section{Prospects for $h_c'$ at ${\sf \overline{P}ANDA}$} \label{cpanda_hc} The $h_c'$($n$=2, $^1P_1$) state with $J^{PC}$=1$^{+-}$ is one of the yet unobserved states, which may be used for a test of flavor independence of the potential. From the Cornell-type model, it is predicted at $m$=3934-3956~MeV \cite{potential}. ${\sf \overline{P}ANDA}$\hspace*{1ex} is well suited for a search for the $h_c'$, for the following reasons: \begin{itemize} \item $h_c$($n$=1) was never observed in $B$ decays, as 0$^{-+}$$\rightarrow$0$^{-+}$1$^{+-}$ is forbidden in the factorisation limit. In the decay $B^+$$\rightarrow$$K^+$$h_c,h_c'$ the combination of quantum numbers would require an additional gluon connecting the $K^+$ and $h_c$ lines. \item The $h_c$($n$=1) ground state was observed at CLEO \cite{hc_cleo} and BESIII \cite{hc_bes3} in the isospin violating decay $\psi'$$\rightarrow$$h_c$$\pi^0$. However, for the $h_c'$ one would have to use the higher $\psi$(4040) or $\psi$(4160) resonance. As the decay would again be isospin violating, the branching fraction is expected to be small. In addition, the phase space is small, as the available kinetic energy is only $\simeq$100 or $\simeq$220~MeV, respectively. \end{itemize} For the search for the $h_c'$ at ${\sf \overline{P}ANDA}$\hspace*{1ex}, a recoil mass technique provides a useful approach. MC simulations were performed for the decay $p$$\overline{p}$$\rightarrow$($\pi^+$$\pi^-$)$_{recoil}$$h_c'$. The advantage of this inclusive method is that no knowledge of the specific decay of the $h_c'$ is required. For the simulation a decay $h_c'$$\rightarrow$$\eta_c$$\pi^+$$\pi^-$ was used on the generator level, however with all possible $\eta_c$ decays as known from \cite{pdg}. Fig.~\ref{fhcx} (left) shows the $\pi^+$$\pi^-$ recoil mass from an MC simulation $p$$\overline{p}$$\rightarrow$$h_c'$$\pi^+$$\pi^-$ at ${\sf \overline{P}ANDA}$\hspace*{1ex} \cite{bachelor_simon}. The decay is $h_c'$$\rightarrow$$D^{0}$$\overline{D}^{0*}$ with $D^{0}$$\rightarrow$$K^-$$\pi^+$ and $\overline{D}^{0*}$$\rightarrow$anything. The highest available anti-proton momentum of 15~GeV/c was chosen for two reasons: {\it (a)} a higher beam momentum leads to higher reconstructable momenta and efficiencies of the $\pi^+$ and $\pi^-$, and {\it (b)} the inelastic $\overline{p}$$p$ cross section, being the main source of the background $\pi^+$$\pi^-$ pairs, decreases as a function of beam momentum. An input width of $\Gamma$=87~MeV was used for the $h_c'$, consistent with predictions for the static potential \cite{potential}. The assumed cross section for the signal is 4.5~nb, corresponding to 3.9$\times$10$^4$ $h_c'$ per day produced at ${\sf \overline{P}ANDA}$\hspace*{1ex} in the HESR high luminosity mode. The inelastic hadronic background cross section is $\simeq$43~mb \cite{panda_x3872_background}.
Fig.~\ref{fhcx} (left) shows the signal for 3 hours of data taking and the background for 1 second of data taking, corresponding to 2$\times$10$^7$ events, generated with the DPM model \cite{dpm}. The number of simulated background events is limited by the available CPU performance and will be increased in the future. The signal consists not only of the $h_c'$, but also of the X(3872), which decays into the same final state $D^{0}$$\overline{D}^{0*}$ and can be regarded as a reference signal for the $h_c'$. In order to suppress the large hadronic background, three cuts were applied: a momentum cut $p_{lab}$($\pi^{\pm}$)$>$1.2~GeV, a vertex cut in beam direction $\Delta$$z$$\leq$0.1~cm, and a 3$\sigma$ cut on the invariant mass $m$($K^{\pm}$$\pi^{\mp}$) around the nominal mass of the $D^0$. The latter cut is very effective in reducing the background. After applying the cut, the signal efficiency is 8.3\%, while the background efficiency is only 1.6$\times$10$^{-5}$. Fig.~\ref{fhcx} (right) shows the $\pi^+$$\pi^-$ recoil mass after applying the cuts. The above-mentioned signal cross section of 4.5~nb is one main result of this analysis, as it represents the cross section required to achieve $S$/$\sqrt{(S+B)}$$\geq$10 in 6 weeks of data taking with a duty factor of 50\% (for details of the calculation see \cite{bachelor_simon}). For the plots, we assumed the relative ratio of $h_c'$ and X(3872) to be 50\%:50\%. However, a cross section of 4.5~nb for the $h_c'$ and, as mentioned before, an estimated cross section for the X(3872) of 50~nb would lead to a relative ratio of 9\%:91\%. \begin{figure}[htb] \centerline{\includegraphics[width=0.43\textwidth,height=4.45cm]{hc1_NEW.jpg}\includegraphics[width=0.48\textwidth]{hc2_NEW_small.jpg}} \caption{$\pi^+$$\pi^-$ recoil mass for an MC simulation of $p$$\overline{p}$$\rightarrow$$h_c'$$\pi^+$$\pi^-$ at ${\sf \overline{P}ANDA}$\hspace*{1ex} for $p_{beam}$=15~GeV/c before cuts {\it (left)} and after cuts {\it (right)}. For details see text.\label{fhcx}} \end{figure} \section{Prospects for the $^3F_4$ state at ${\sf \overline{P}ANDA}$} \label{cpanda_3F4} One of the disadvantages of using the $h_c'$ as a test of the flavor independence of the potential is that the width is as large as $\Gamma$=87~MeV. On the other hand, the yet unobserved $^3F_4$ charmonium state is more appropriate due to its very narrow predicted width of 8.3~MeV \cite{potential}. The narrow width is a consequence of the quantum numbers $J^{PC}$=4$^{++}$, because the decay is blocked by the angular momentum barrier. A transition from $L$=3 to the ground state with $L$=0 is suppressed by a factor (2$L$+1) with $L$=3. For the same reason, the production of the $^3F_4$ state is suppressed in $B$ meson decays at Belle II or in radiative decays of high-lying $\psi$ states at BESIII. At ${\sf \overline{P}ANDA}$\hspace*{1ex} production of states with higher $L$ quantum numbers in the $\overline{p}$$p$ initial system is not suppressed, and therefore ${\sf \overline{P}ANDA}$\hspace*{1ex} is uniquely suited for the search. Qualitatively $L$$\geq$10 can be achieved, but quantitative estimates for the population of given $L$ values are unknown, as there are no existing measurements. The approach chosen here for the reconstruction is the detection of a radiative cascade, which in 3 steps with $\Delta$$L$=1 each leads down to the $J$/$\psi$, which then can be detected by its decay into $e^+$$e^-$ or $\mu^+$$\mu^-$. Tab.~\ref{tcascade} shows the parameters of the states in the radiative cascade.
\begin{table}[hhh] \begin{center} \begin{tabular}{|l|l|l|l|} \hline 1 $^3F_4$ & 1 $^3D_3$ & $\chi_{c2}$ & $J$/$\psi$\\ $J^{PC}$=4$^{++}$ & $J^{PC}$=3$^{--}$ & $J^{PC}$=2$^{++}$ & $J^{PC}$=1$^{--}$\\ 4095~MeV & 3849~MeV & 3556~MeV & 3097~MeV \\ $\Gamma$=8.3~MeV & $\Gamma$=0.5~MeV & $\Gamma$=2.0~MeV & $\Gamma$=0.3~MeV \\ $E_{\gamma}$=246~MeV & $E_{\gamma}$=338~MeV & $E_{\gamma}$=413~MeV & - \\ \hline \end{tabular} \end{center} \caption{Parameters of the states in the radiative cascade to search for the $^3F_4$ state at ${\sf \overline{P}ANDA}$\hspace*{1ex}.\label{tcascade}} \end{table} MC simulations for a search for the $^3F_4$ state at ${\sf \overline{P}ANDA}$\hspace*{1ex} were performed. The assumed cross section is $\sigma$($\overline{p}$$p$$\rightarrow$$^3F_4$)=10~nb. The size of the cross section is a function of the mass of the $c$$\overline{c}$ state to be produced, and an assumption of a factor $\simeq$5 smaller cross section than $\sigma$($\overline{p}$$p$$\rightarrow$X(3872)) is reasonable. A branching fraction of ${\cal B}$=10\% is assumed for each of the three transitions, corresponding to the measured value ${\cal B}$=9.84$\pm$0.31\% \cite{pdg} for the transition $\psi'$$\rightarrow$$\gamma$$\chi_{c0}$. Each transition was modeled with a decay from a vector meson to a vector meson and a photon as an approximation, as $J$=2,3,4 decays are not available yet in the MC. The additional assumption was made that there is no polarisation. The search will be conducted in the HESR high luminosity mode with 8.64~$pb^{-1}$ per day. Fig.~\ref{f3F4} (left) shows the photon energy in the center-of-mass (cms) frame $E_{\gamma}^*$ for signal events for 14 days of data taking assuming a 50\% duty factor. The first transition from the $^3F_4$ at 246~MeV is clearly visible and shows a photon energy resolution, after the boost into the cms frame, of $\sigma$($E_{\gamma}^*$)=9.2~MeV. Although the boost is an approximation, even the second transition at 338~MeV and the third transition at 413~MeV are visible as well. Final state radiation in the $J$/$\psi$ decay was taken into account in the MC simulation by using PHOTOS \cite{PHOTOS} and generates the peaking photon background at $E_{\gamma}^*$$\simeq$0~MeV. For suppression of this background, a cut of $E_{\gamma}^*$$\geq$150~MeV was applied in the further analysis. Fig.~\ref{f3F4} (right) shows the sum of the three photon energies $E_{\gamma 1}^*$+$E_{\gamma 2}^*$+$E_{\gamma 3}^*$, where the cut was applied for each candidate photon, for 14 days of data taking assuming a 50\% duty factor. The three photons were input to a kinematical fit with four constraints on the total $E$, $p_x$, $p_y$ and $p_z$, and a cut on the fit quality with $\chi^2_{fit}$$\leq$0.1 was applied. The nominal mass of the $J$/$\psi$ from PDG was added to adjust the mass scale. A narrow $^3F_4$ signal is clearly visible with a reconstructed width of $\sigma$($m$($^3F_4$))=1.2~MeV. The background at lower masses corresponds to 43.2\% multiple candidates due to final state radiation (as mentioned above) and Bremsstrahlung in the detector material. The main hadronic background in this analysis is given by events with photons from light hadron ($\pi^0$, $\eta$, etc.) decays. However, the requirement of a reconstructed $J$/$\psi$ and 3 photons with an energy cut $E_{\gamma}^*$$\geq$150~MeV is very clean. A background suppression factor of 1.2$\times$10$^6$ was achieved for events generated with DPM, so that the hadronic background is expected to be negligible.
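As a cross-check of the mass scale used in Fig.~\ref{f3F4} (right), note that the quoted cms photon energies of the cascade in Tab.~\ref{tcascade}, together with the nominal $J$/$\psi$ mass, add up to approximately the $^3F_4$ mass,
\begin{equation}
E_{\gamma 1}^* + E_{\gamma 2}^* + E_{\gamma 3}^* + m(J/\psi) \simeq (246+338+413+3097)~\mbox{MeV} = 4094~\mbox{MeV} \simeq m(^3F_4) ,
\end{equation}
the small residual difference being consistent with the approximate treatment of the boosts in the cascade.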
\begin{figure}[htb] \centerline{\includegraphics[width=0.48\textwidth,height=5.3cm]{plot_machester_4F3_NEW_small.jpg} \includegraphics[width=0.48\textwidth]{plot_machester_esum_NEW.jpg}} \caption{ MC simulation of a search for the $^3F_4$ charmonium state at ${\sf \overline{P}ANDA}$\hspace*{1ex}. {\it Left:} $E_{\gamma}^*$ for signal events for 14 days of data taking. {\it Right:} signal of the $^3F_4$ in the sum of the three photon energies $E_{\gamma 1}^*$+$E_{\gamma 2}^*$+$E_{\gamma 3}^*$ added to the nominal mass of the $J$/$\psi$. A four-constraint kinematical fit was applied for the three photons.\label{f3F4}} \end{figure} \section{Prospects for the Y(4260) at ${\sf \overline{P}ANDA}$} \label{cpanda_y4260} Another important topic for ${\sf \overline{P}ANDA}$\hspace*{1ex} is the investigation of the possible nature of the Y(4260) \cite{y4260}, which is often discussed as a candidate for a [$c$$\overline{c}$$g$] hybrid state. From detailed balance we can derive an estimate for an upper limit of the cross section $\sigma$($\overline{p}$$p$$\rightarrow$Y(4260))$\leq$4370~nb, which is unreasonably high, owing to the high measured upper limit ${\cal B}$(Y(4260)$\rightarrow$$\overline{p}$$p$)/${\cal B}$(Y(4260)$\rightarrow$$J$/$\psi$$\pi^+$$\pi^-$)$<$0.13 (90\% C.L.) \cite{babar_ee2ppbar}. A better approach for obtaining a reasonable cross section is scaling the measured ${\cal B}$($J$/$\psi$$\rightarrow$$\overline{p}$$p$) with the ratio of the known total widths of the $J$/$\psi$ and the Y(4260): \begin{equation} {\cal B} ( Y(4260) \rightarrow \overline{p} p ) = {\cal B} ( J/\psi \rightarrow \overline{p} p ) \times \frac{\varGamma (J/\psi) }{\varGamma (Y(4260))} \ , \end{equation} which leads to $\sigma$($\overline{p}$$p$$\rightarrow$Y(4260))=1.9$\pm$0.2 nb. Although this is a factor $\geq$26 smaller than the cross section for the X(3872) at {${\sf \overline{P}ANDA}$}, the number of produced Y(4260) is still high. For the HESR high resolution mode, this corresponds to 16,400 events per day, and thus ${\sf \overline{P}ANDA}$\hspace*{1ex} may be considered a Y(4260) mini-``factory''. ${\sf \overline{P}ANDA}$\hspace*{1ex} is planned to achieve a peak luminosity of ${\cal L}$=2$\times$10$^{32}$~cm$^{-2}$~s$^{-1}$, which is only a factor $\simeq$2.7 less than the achieved peak luminosity of ${\cal L}$=5.3$\times$10$^{32}$~cm$^{-2}$~s$^{-1}$ on the Y(4260) resonance at BESIII. However, the cross section at BESIII is a factor $\simeq$30 smaller with $\sigma$($e^+$$e^-$$\rightarrow$Y(4260))=62.9$\pm$1.9$\pm$3.7~pb \cite{zc3900_bes3}. At Belle II, the Y(4260) will be produced in initial state radiation (ISR). In $B$ meson decays the Y(4260) has not been observed so far. Based on the scaled number of observed events at Belle, for Belle II $\simeq$30,000 ISR events are expected in an envisaged data set of 50~$ab^{-1}$, assuming ${\cal B}$(Y(4260)$\rightarrow$$J$/$\psi$$\pi^+$$\pi^-$)=100\%. While at Belle this would correspond to $\geq$8 years of data taking, at ${\sf \overline{P}ANDA}$\hspace*{1ex} only $\geq$2 days in HESR high luminosity mode would be required. With the very high statistics, ${\sf \overline{P}ANDA}$\hspace*{1ex} will be suited to search for rare decays such as Y(4260)$\rightarrow$$e^+$$e^-$. This decay has not been observed yet, although the quantum numbers of the Y(4260) with $J^{PC}$=1$^{--}$ would allow it. A limit on the coupling to $e^+$$e^-$ can be derived from the coupling to the initial state in $e^+$$e^-$$\rightarrow$Y(4260).
However, in this way only the product of the coupling to the initial state and the coupling to the final state can be measured, as the Y(4260) must be observed in the final state in a decay such as Y(4260)$\rightarrow$$J$/$\psi$$\pi^+$$\pi^-$. The measured product partial width is ${\cal B}$(Y(4260)$\rightarrow$$J$/$\psi$$\pi^+$$\pi^-$)$\times$$\Gamma$(Y(4260)$\rightarrow$$e^+$$e^-$)= (7.5$\pm$0.9$\pm$0.8)~eV \cite{y4260}. Thus the partial width is of the order of eV, while the total width of the Y(4260) is of the order of $\simeq$100 MeV, indicating a strong suppression of the coupling to $e^+$$e^-$ by a factor $\geq$10$^7$. Depending on the interpretation of the Y(4260), there could be several reasons for the suppression. In a simplified view, a suppression could be induced by a reduced wave function at $r$=0, corresponding to a reduced annihilation term. If the Y(4260) is a $[$$c\overline{c}$$g$$]$ hybrid, then the wave function at the origin might be reduced, as the minimum of the $\Pi_u$ field\footnote{The $\Pi_u$ field is the lowest lying gluon excitation potential, for which the gluon spin projected onto the quark anti-quark axis is $J_G$ = 1.} is not at $r$=0, but at $r$$>$0 \cite{braaten_charm13} (and references therein). If the Y(4260) is a $[$$D$$D_1$(2420)$]$ molecule, then the long-range part of the wave function might be enhanced and therefore, according to unitarity of the wave function, the short-range part at $r$$\simeq$0 is suppressed. Tab.~\ref{tpsi2ee} shows the known branching fractions of decays of conventional $\psi$ charmonium states to $e^+$$e^-$ \cite{pdg}. In order to claim a suppression, the question is in fact whether the branching fraction ${\cal B}$(Y(4260)$\rightarrow$$e^+$$e^-$) is smaller than for the $\psi$ states. For the reasons explained above, the measurement of such a rare decay would only be possible at ${\sf \overline{P}ANDA}$. The advantage is that this would be an absolute measurement, not depending on the coupling to the initial state in the product branching fraction. \begin{table}[htb] \begin{center} \begin{tabular}{|l|l|} \hline Decay & Branching fraction \\ \hline $\psi$(3770)$\rightarrow$$e^+$$e^-$ & (9.6$\pm$0.7)$\times$10$^{-6}$ \\ $\psi$(4040)$\rightarrow$$e^+$$e^-$ & (1.07$\pm$0.16)$\times$10$^{-5}$ \\ $\psi$(4160)$\rightarrow$$e^+$$e^-$ & (8.1$\pm$0.9)$\times$10$^{-6}$ \\ $\psi$(4415)$\rightarrow$$e^+$$e^-$ & (9.4$\pm$3.2)$\times$10$^{-6}$ \\ \hline \end{tabular} \end{center} \caption{Known branching fractions of decays to $e^+$$e^-$ of conventional $\psi$ charmonium states \cite{pdg}.\label{tpsi2ee}} \end{table} A MC simulation for $\overline{p}$$p$$\rightarrow$Y(4260)$\rightarrow$$e^+$$e^-$ was performed. The reconstruction efficiency turns out to be high with $\varepsilon$$>$93\% and is only limited by acceptance and Brems\-strah\-lung, i.e.\ the $e^+$ or $e^-$ radiates one or more photons before being detected in the EMC, is reconstructed with a wrong photon energy and therefore may not pass the energy cuts. There are two main backgrounds. On the one hand, elastic $\overline{p}$$p$$\rightarrow$$\overline{p}$$p$ has a 2-prong signature with total energy $E$=$m$(Y(4260)). The cross section is high with $\sigma$=4.5$\times$10$^4$~$\mu$b, however a suppression technique based upon {\it (a)} the strongly peaking behavior in the polar angular distribution and {\it (b)} partial identification of the $\overline{p}$ annihilation in the EMC leads to a suppression of $\leq$1.3$\times$10$^{-5}$.
On the other hand, $\overline{p}$$p$$\rightarrow$$\pi^+$$\pi^-$ with a 2-prong final state has a high cross section as well with 4.6$\times$10$^4$~$\mu$b. With $\pi^{\pm}$ and $e^{\pm}$ identification a suppression of $\geq$10$^6$ was achieved \cite{pid_suppression}. Note that there can be interference between signal and background, but this was not taken into account in this analysis. Fig.~\ref{fy4260_panda} shows the $e^+$$e^-$ invariant mass distribution at ${\sf \overline{P}ANDA}$\hspace*{1ex} with a beam momentum $p$=8.62323~GeV/c. For the Y(4260)$\rightarrow$$e^+$$e^-$ signal 3 months of data taking (50\% duty factor) are assumed. For the background 2$\times$10$^7$ events (generated with the DPM model), corresponding to one second of data taking, are shown. The number of simulated background events is limited by the available CPU performance and will be increased in the future. The width of $\Gamma$=114.5$\pm$6.5~MeV was determined by a fit with a single Gaussian, and is consistent with the generated input width of $\Gamma$=108~MeV. The $J$/$\psi$ signal which is visible in Fig.~\ref{fy4260_panda} originates from Y(4260)$\rightarrow$$J$/$\psi$$\pi^+$$\pi^-$ with an assumed branching fraction of $\simeq$100\% and subsequent $J$/$\psi$$\rightarrow$$e^+$$e^-$ with a branching fraction of 6\%. This $J$/$\psi$ signal can be used as a reference signal for fixing the mass scale. For the decay Y(4260)$\rightarrow$$e^+$$e^-$ the same branching fraction as for $\psi$(4160)$\rightarrow$$e^+$$e^-$ in Tab.~\ref{tpsi2ee} was assumed. A small contribution for Y(4260)$\rightarrow$$\psi'$$\pi^+$$\pi^-$, which has not been observed so far, assuming as a simple estimate ${\cal B}$(Y(4260)$\rightarrow$$\psi'$$\pi^+$$\pi^-$)=${\cal B}$($\psi$(4160)$\rightarrow$$e^+$$e^-$), was also included and is visible as the small signal for $\psi'$$\rightarrow$$e^+$$e^-$. The fitted mass of the Y(4260) is 4.151$\pm$0.008~GeV, which is $\geq$100 MeV lower than the nominal mass. The reason is that a single Gaussian, which was used in the fit as an approximation, is not a proper description of the p.d.f. The beam momentum is adjusted to the on-resonance peak position. Thus, the right hand side of the mass peak is only due to the momentum resolution. The left-hand side is a convolution of three effects: {\it (a)} a $P$-wave Breit-Wigner shape, {\it (b)} the momentum resolution, and {\it (c)} a tail from Bremsstrahlung. This asymmetry between the left and the right hand side leads to the lower fitted mass. \begin{figure}[htb] \centerline{\includegraphics[width=0.48\textwidth]{plot_y4260_machester_NEW.jpg}} \caption{$e^+$$e^-$ invariant mass distribution at ${\sf \overline{P}ANDA}$\hspace*{1ex} with a beam momentum $p$=8.62323~GeV/c. For details see text.\label{fy4260_panda}} \end{figure} \section{Summary} ${\sf \overline{P}ANDA}$\hspace*{1ex} with $p$$\overline{p}$ collisions is well suited for the search for high lying charmonium\-\mbox{(-like)} states, which are suppressed due to their quantum numbers in $B$ meson decays or radiative decays of $\psi$ resonances. Expected event rates are high due to the planned high luminosity, e.g.\ 16,400 events with a Y(4260) per day, thus enabling searches for rare decays of XYZ states.
\section{Introduction}\label{sec1} In the past decade a considerable amount of attention has been drawn both towards experimental and theoretical studies of the interaction of atoms with a graphene layer and with various carbon nanostructures \cite{agra, fermani,friedrich}. Owing to the fact that these nanostructures are endowed with exceptional electronic, optical, mechanical, thermal, and magnetic properties that are of great interest to modern communication engineering technologies~\cite{intro1,intro2}, their applications are in huge demand both in scientific and industrial laboratories. Graphene, in particular, manifests unique properties owing to its honeycomb lattice structure, which can maximize the interaction of an atom with the layer. In fact, the knowledge of atom-graphene interactions has been very useful in the construction of hydrogen storage devices \cite{intro3,intro4,intro5} and also plays an important role in understanding different physical, chemical and biological processes~\cite{intro6,intro7,intro8,intro9}. Moreover, these interactions are connected to the phenomenon of quantum reflection, whose study is of special interest today to many experimentalists and theoreticians aiming to explain its exact behavior~\cite{dirac-hydro,qr1,qr2,qr3}. In addition, gaining insight into atom-graphene contacts is crucial for the development of graphene based electronics. In particular, metals adsorbed on graphene can form different types of structures and can change graphene's electronic behavior, leading to the observation of interesting physical phenomena~\cite{intro10,intro12,intro13,intro14}. Among the metals that can be adsorbed on graphene, the study of the Li atom is particularly interesting for applications in the storage of hydrogen gas \cite{ataca,du}, improving the efficiencies of Li-ion batteries \cite{konga,jian}, and making superconductors~\cite{castro,intro16}. K atoms have also been used to tune the electronic structures of graphene bilayers~\cite{intro17,intro18}. A graphene layer rolled to make a carbon nanotube (CNT) has some very peculiar properties and has gained special attention from researchers world-wide \cite{cnt-1,cnt-2,cnt-3}. The interaction of alkali atoms with single walled CNTs has profound applications in the purification of CNTs~\cite{cnt-pur}. Adsorbed alkali atoms have been demonstrated to act as chemical dopants on CNTs and have been used to fabricate field effect transistors~\cite{cnt-fet}. Accurate experimental measurements of the $C_3$ coefficients of any atomic system with a graphene layer or with a CNT are extremely difficult. A number of theoretical studies have been performed, particularly using density functional theories \cite{dft1,dft2,dft3,dft4}, lower order many-body methods \cite{mbpt1}, Lifshitz theory~\cite{babb,caride,lf1,lf2,nano1,dirac-hydro, dirac-hydro2} etc., to uncover the nature of the interactions of carbon nanostructures with various materials. Klimchitskaya and co-workers have used Lifshitz theory to explain the interaction of a graphene layer with different materials including a metal plate~\cite{bordag,metal1,metal2}, a conducting cylinder~\cite{cylinder}, atoms such as H~\cite{dirac-hydro}, Na, Rb, Cs~\cite{dirac-hydro,dirac-hydro2}, the H$_2$ molecule~\cite{nano1}, metastable He$^*$~\cite{dirac-hydro,dirac-hydro2}, etc. The interaction of a CNT with the H atom and the H$_2$ molecule has also been explained by them in great detail~\cite{nano1}.
However, these calculations for the graphene layer have basically been carried out by employing a single oscillator model (SOM) for the estimation of the dynamic polarizabilities. In this paper, we verify the results by evaluating them for the alkali atoms with accurate values of the dynamic polarizabilities, and we emphasize the need to use such accurate values by comparing our results with the SOM results. Moreover, the interactions between a CNT and alkali-metal atoms remain to be investigated thoroughly. In view of the fact that these interactions play crucial roles in a number of applications, and keeping in mind their vast experimental use, it is expedient to carry out a more accurate theoretical analysis of the interactions of the graphene layer and CNT with the alkali atoms. The interaction between an atom and a wall is usually modeled by calculating the interaction between the atom and its image charge (reflection) in the wall. The reflection coefficients required for such calculations are well described by the widely celebrated Lifshitz theory, which expresses these quantities as functions of the dynamic dielectric permittivity of the wall and of the dynamic dipole polarizabilities of the atoms \cite{harjeet1,lifshitzbook,klim,mahanty,pars}. Although an accurate evaluation of the dynamic dipole polarizabilities of atomic systems requires sophisticated many-body methods, their values for the alkali atoms, which are the most widely used atoms in ultra-cold atomic experiments, are now known reliably with sufficient precision \cite{arora-sahoo1,arora-sahoo2}. In contrast, the dynamic dielectric permittivity values of material media are generally known with insufficient accuracy owing to their strenuous evaluation procedures, and determining them precisely is likely to remain a tedious task for a long time. In particular, nanostructures with a thickness of the size of an atom, such as the considered graphene layer and single walled CNT, do not have a well defined dielectric permittivity. This entails the need for adopting suitable models to estimate the $C_3$ coefficients by introducing some effective parameters that can substitute for the role of the dielectric permittivity of the wall in the Lifshitz theory. In this context, the two most popular models that are often employed in the theoretical determination of the dispersion coefficients are the hydrodynamic model \cite{barton,barton1,harjeet2,bordag1,blagov} and the Dirac model \cite{bordag2}. In this work, we apply both models and compare the obtained results using accurate values of the dynamic dipole polarizabilities. In addition, we present a handy functional form of the distance- and radius-dependent dispersion coefficients so that they can be easily evaluated for any arbitrary values of the atom-wall distance and the CNT radius for convenient use in practical applications. This paper is organized as follows: In Sec. \ref{sec2}, we present the modified Lifshitz theory for the reflection coefficients on the graphene layer and CNT in the hydrodynamic and Dirac model frameworks. This is followed by a brief description of the method of calculation of the dynamic dipole polarizabilities in Sec. \ref{sec3}, which are later used in the evaluation of the dispersion coefficients. Calculated results for the $C_3$ coefficients using accurate values of the dynamic polarizabilities and using the SOM are given in Sec. \ref{sec4}.
In the same section we present the dispersion coefficients for the graphene layer and CNT determined by employing both the hydrodynamic and Dirac models and compare them with the previously reported results for an ideal conducting medium, Au, and a SiO$_2$ wall. Unless stated explicitly, the results are given in atomic units (a.u.) throughout the paper. \section{Theory of the Dispersion Coefficient}\label{sec2} The general form of the interaction potential energy in the configuration of a micro-particle and a material planar structure interacting at a distance $a$ is described by the Lifshitz theory, which is expressed as \cite{lifshitz1,lifshitzbook} {\small \begin{equation} U(a)= -\frac{\alpha_{fs}^3}{2\pi}\int_0^{\infty}d\omega \omega^3\alpha(\iota\omega)\int_1^{\infty}d\xi e^{-2\alpha_{fs}\xi\omega a} H(\xi,\epsilon(\iota\omega)), \label{atwp} \end{equation}} where $\alpha_{fs}$ is the fine structure constant, $\epsilon(\omega)$ is the frequency dependent dielectric constant of the wall material, $a$ is the separation distance between the atom and the surface and $\alpha(\iota\omega)$ is the dynamic polarizability of the atom with imaginary argument. The function $H(\xi,\epsilon(\iota\omega))$ is given by \begin{equation} H(\xi,\epsilon)=(1-2\xi^2)\frac{\sqrt{\xi^2+\epsilon-1}-\epsilon\xi}{\sqrt{\xi^2+\epsilon-1}+\epsilon\xi} + \frac{\sqrt{\xi^2+\epsilon-1}-\xi}{\sqrt{\xi^2+\epsilon-1}+\xi}\nonumber \end{equation} with the Matsubara frequencies denoted by $\xi$. For small separation distances, the above potential can be approximated as \begin{equation} U(a) \approx -\frac{C_3(a)}{a^3}, \label{u3} \end{equation} where $C_3$ is known as the dispersion coefficient for the corresponding atom-wall interaction. For a perfect conductor with $\epsilon(\omega) \rightarrow \infty$, we have \begin{eqnarray} C_3 &=& \frac{1}{4 \pi} \int_0^{\infty} d \omega \alpha(\iota \omega) \frac{\epsilon(\iota \omega)-1}{\epsilon(\iota \omega)+1} \nonumber \\ &=& \frac{1}{4 \pi} \int_0^{\infty} d \omega \alpha(\iota \omega). \end{eqnarray} The dispersion coefficient for a CNT with radius $R$ is expressed using the proximity force approximation (PFA) as \cite{som1,harjeet1,blagov} {\small \begin{eqnarray} C_3(a,R) &=& \frac{1}{16\pi}\sqrt{\frac{R}{R+a}}\int_0^{\infty} d\xi\alpha(\iota\xi)\int_{2a \alpha_{fs} \xi}^{\infty} dy y e^{-y} \nonumber \\ &&\left( y-\frac{a}{2(R+a)}\right) \left(2r_{\rm {TM}}-\frac{4a^2 \alpha_{fs}^2 \xi^2}{y^2}(r_{\rm{TM}}+r_{\rm{TE}})\right),\nonumber\\ \label{eq-c3cnt} \end{eqnarray}} where $r_{\rm{TM}}$ and $r_{\rm{TE}}$ are the reflection coefficients of the electromagnetic oscillations on the CNT for the transverse magnetic and transverse electric polarizations of the electromagnetic field. It has been shown in Ref.~\cite{cylinder} that the relative differences between the exact and PFA results are within 4\% for the condition $\frac{a}{R} < \frac{3}{5}$. For a thin single layer of graphene, obtained in the limit $R \rightarrow \infty$, the above expression simplifies to \cite{dirac-hydro} \begin{eqnarray} C_3(a)&=&\frac{1}{16\pi}\int_0^{\infty}d\xi\alpha(\iota\xi)\int_{2a\xi \alpha_{fs} }^{\infty}dye^{-y}y^2 \nonumber\\ &&\left(2r_{{TM}}-\frac{4a^2 \alpha_{fs}^2\xi^2}{y^2}(r_{{TM}}+r_{{TE}})\right).\label{eq-c3g} \end{eqnarray} The separation distance dependent $C_3$ coefficients given above include both the retarded and nonretarded interaction energies, which are applicable up to separation distances where the thermal effects are not significant (typically $\sim 1~\mu m$)~\cite{casimir,babb}.
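For a quick check of the overall normalization of these expressions, the perfect-conductor result combined with a single-oscillator form of the polarizability, $\alpha(\iota\omega)=\alpha(0)/(1+\omega^2/\omega_0^2)$ (the SOM form recalled in Sec.~\ref{sec3}), admits a simple closed form,
\begin{equation}
C_3 = \frac{1}{4\pi}\int_0^{\infty} d\omega\, \frac{\alpha(0)}{1+\omega^2/\omega_0^2} = \frac{\alpha(0)\,\omega_0}{8} ,
\end{equation}
which is convenient for validating a numerical implementation of the integrals before the graphene and CNT reflection coefficients are inserted.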
The most difficult part in the evaluation of the above expressions is to obtain the $r_{\rm{TM}}$ and $r_{\rm{TE}}$ reflection coefficients correctly. Two different models widely used to describe the electronic structure of graphene are the hydrodynamic model and the Dirac model. Within the framework of the hydrodynamic model, the reflection coefficients for a graphene layer or CNT are given by \cite{barton,barton1,harjeet2,bordag1,blagov} \begin{eqnarray} r_{\rm{TM}} &=& \frac{q \kappa }{q \kappa + \alpha_{fs}^2\xi^2} \nonumber\\ \text{and} \ \ \ r_{\rm{TE}}&=& -\frac{\kappa}{\kappa+q},\label{eq-h} \end{eqnarray} with the wave number of the graphene sheet $\kappa=6.75 \times 10^{5} \ m^{-1}$ and $q=\frac{y}{2a}$. In this model, graphene is considered as an infinitesimally thin positively charged sheet carrying a homogeneous fluid with some mass and negative charge densities. The energy of the quasi-particles in graphene is then quadratic with respect to their momenta. Therefore, this model works well at large energies and fails at low energies, where the actual energy of the quasi-particles is a linear function of momentum. This model is an approximate one and does not take into account the Dirac character of the charge carriers in graphene. Within the framework of the Dirac model of the electronic structure of graphene, the quasi-particle fermion excitations in graphene are treated as massless Dirac fermions moving with a Fermi velocity. It takes into account the properties of graphene which are valid at low energies of the quasi-particles, specifically that the energy is a linear function of momentum. The explicit relations for the reflection coefficients describing the electronic structure of graphene or a CNT according to the Dirac model are given by \cite{bordag2} \begin{eqnarray} r_{\rm{TM}} &=& \frac{\alpha_{fs} q \phi(\tilde{q})}{2{\tilde{q}}^2+\alpha_{fs} q\phi(\tilde{q})} \nonumber\\ \text{and} \ \ \ r_{\rm{TE}}&=& -\frac{\alpha_{fs} \phi(\tilde{q})}{2 q+\alpha_{fs} \phi(\tilde{q})},\label{eq-d} \end{eqnarray} where the function $\phi(\tilde{q})$ determines the polarization tensor in an external electromagnetic field in three-dimensional space-time and is given by \cite{bordag2} \begin{equation} \phi(\tilde{q}) = 4 \left( \alpha_{fs} \Delta+\frac{\tilde{q}^2-4 \alpha_{fs}^2 \Delta^2}{2\tilde{q}}\rm{arctan} \left ( \frac{\tilde{q}}{2 \alpha_{fs} \Delta} \right ) \right), \end{equation} where $\Delta$ is known as the mass gap parameter. The exact value of $\Delta$ remains unknown; however, its commonly accepted upper bound quoted in the literature is 0.1 eV~\cite{castro,bordag2}. The parameter $\tilde{q}$ in the above equation is defined in terms of the Fermi velocity $v_f \sim 10^{6} \ m/s$ as \begin{equation} \tilde{q} = \left[ \frac{\alpha_{fs}^2 v_f^2 y^2}{4a^2}+\left(1-\alpha_{fs}^2 v_f^2\right) \alpha_{fs}^2 \xi^2 \right]^{1/2} . \end{equation} In the next section, we briefly discuss the method of calculation of the dynamic polarizabilities, which are required for evaluating the $C_3$ coefficients as discussed above, and compare them with the results obtained using the SOM.
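For illustration, the double integral in Eq.~(\ref{eq-c3g}) with the hydrodynamic coefficients of Eq.~(\ref{eq-h}) can be evaluated numerically in a few lines. The sketch below works in atomic units, converts the graphene wave number $\kappa$ accordingly, and uses the single-oscillator form of $\alpha(\iota\xi)$ merely as a stand-in for the accurate polarizabilities discussed in the next section; the numerical inputs in the example call are illustrative only.
\begin{verbatim}
import numpy as np
from scipy import integrate

alpha_fs = 1.0 / 137.035999   # fine structure constant
a0_m = 5.29177e-11            # Bohr radius in metres
kappa = 6.75e5 * a0_m         # graphene wave number converted to a.u.

def alpha_som(xi, alpha0, omega0):
    # single-oscillator polarizability at imaginary frequency (a.u.)
    return alpha0 / (1.0 + (xi / omega0) ** 2)

def c3_graphene_hydro(a, alpha0, omega0):
    # C3(a) of Eq. (eq-c3g) with the hydrodynamic r_TM, r_TE of Eq. (eq-h)
    def inner(y, xi):
        q = y / (2.0 * a)
        r_tm = q * kappa / (q * kappa + alpha_fs**2 * xi**2)
        r_te = -kappa / (kappa + q)
        return np.exp(-y) * y**2 * (2.0 * r_tm
                - 4.0 * a**2 * alpha_fs**2 * xi**2 / y**2 * (r_tm + r_te))
    def outer(xi):
        val, _ = integrate.quad(inner, 2.0 * a * alpha_fs * xi, np.inf, args=(xi,))
        return alpha_som(xi, alpha0, omega0) * val
    val, _ = integrate.quad(outer, 0.0, np.inf, limit=200)
    return val / (16.0 * np.pi)

# example: a = 5 nm expressed in Bohr radii; alpha0 and omega0 are illustrative
print(c3_graphene_hydro(5e-9 / a0_m, 162.4, 0.077))
\end{verbatim}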
\section{Evaluation of the Dynamic Polarizabilities}\label{sec3} The dynamic dipole polarizabilities of the ground state $|\Psi_n\rangle$ of the alkali atoms, for the corresponding principal quantum number $n$, due to an external electric field with frequency $\omega$ are given by \begin{eqnarray} \alpha(\omega)&=& \sum_I \left [ \frac{ |\langle \Psi_n | D | \Psi_I\rangle |^2}{E_I - E_n + \omega} + \frac{ |\langle \Psi_n | D | \Psi_I\rangle |^2}{E_I - E_n - \omega} \right ] \nonumber \\ &=& \frac{2}{3(2J_n+1)} \sum_I \frac{ (E_I-E_n) |\langle \Psi_n || D || \Psi_I\rangle |^2}{(E_I - E_n)^2 - \omega^2}, \label{pol1} \end{eqnarray} where $J_n=1/2$ is the total angular momentum of the corresponding ground state, the sum over $I$ represents all possible allowed intermediate states for the dipole transition, $E_n$ and $E_I$ are the energies of the corresponding states and $\langle \Psi_n || D || \Psi_I\rangle$ is the E1 reduced matrix element of the dipole operator $D$ between the states $|\Psi_n\rangle$ and $|\Psi_I\rangle$. Alternatively, the above polarizability expression can be expressed as \begin{eqnarray} \alpha(\omega)&=& \langle \Psi_n | D | \Psi_n^{(+)} \rangle + \langle \Psi_n | D | \Psi_n^{(-)}\rangle , \label{pol2} \end{eqnarray} with \begin{eqnarray} | \Psi_n^{(\pm)} \rangle &=& \sum_I | \Psi_I\rangle \frac{\langle \Psi_I | D | \Psi_n\rangle}{E_I - E_n \pm \omega} \end{eqnarray} which can be obtained for the Dirac-Coulomb (DC) Hamiltonian $H^{DC}$ by solving the equation \begin{eqnarray} (H^{DC}-E_n \mp \omega ) | \Psi_n^{(\pm)} \rangle = -D | \Psi_n\rangle. \label{pol3} \end{eqnarray} \begin{table}[t] \caption{\label{e1mat} Absolute values of the E1 matrix elements in the Li, Na, K, and Rb atoms in $ea_0$. Those extracted from measured quantities are given in bold font; otherwise they are calculated using the CCSD(T) method. Estimated uncertainties in the CCSD(T) results are given in the parentheses.} \begin{ruledtabular} \begin{tabular}{lclc} Transition & E1 mat. el. & Transition & E1 mat. el.
\\ \hline Li & & Na & \\ \hline $2s_{1/2} \rightarrow 2p_{1/2}$ & 3.318(4)& $\bf{3s_{1/2} \rightarrow 3p_{1/2}}$ & \textbf{3.5246(23)}\\ $2s_{1/2} \rightarrow 3p_{1/2}$ & 0.182(2) & $3s_{1/2} \rightarrow 4p_{1/2}$ & 0.304(2)\\ $2s_{1/2} \rightarrow 4p_{1/2}$ & 0.159(2) & $3s_{1/2} \rightarrow 5p_{1/2}$ & 0.107(1)\\ $2s_{1/2} \rightarrow 5p_{1/2}$ & 0.119(4) & $3s_{1/2} \rightarrow 6p_{1/2}$ & 0.056(2)\\ $2s_{1/2} \rightarrow 6p_{1/2}$ & 0.092(2) & $3s_{1/2} \rightarrow 7p_{1/2}$ & 0.035(2)\\ $2s_{1/2} \rightarrow 7p_{1/2}$ & 0.072(1) & $3s_{1/2} \rightarrow 8p_{1/2}$ & 0.026(2)\\ $2s_{1/2} \rightarrow 2p_{3/2}$ & 4.692(5) & $\bf{3s_{1/2} \rightarrow 3p_{3/2}}$ & \textbf{4.9838(4)}\\\\ $2s_{1/2} \rightarrow 3p_{3/2}$ & 0.257(2) & $3s_{1/2} \rightarrow 4p_{3/2}$ & 0.434(2)\\ $2s_{1/2} \rightarrow 4p_{3/2}$ & 0.225(2) & $3s_{1/2} \rightarrow 5p_{3/2}$ & 0.153(2)\\ $2s_{1/2} \rightarrow 5p_{3/2}$ & 0.169(4) & $3s_{1/2} \rightarrow 6p_{3/2}$ & 0.081(2)\\ $2s_{1/2} \rightarrow 6p_{3/2}$ & 0.130(2) & $3s_{1/2} \rightarrow 7p_{3/2}$ & 0.051(2)\\ $2s_{1/2} \rightarrow 7p_{3/2}$ & 0.102(1) & $3s_{1/2} \rightarrow 8p_{3/2}$ & 0.037(2)\\ \hline K & & Rb & \\ \hline $\bf{4s_{1/2} \rightarrow 4p_{1/2}}$ & \textbf{4.131(20)} & $\bf{5s_{1/2} \rightarrow 5p_{1/2}}$ & \textbf{4.227(6)} \\ $4s_{1/2} \rightarrow 5p_{1/2}$ & 0.282(6) & $5s_{1/2} \rightarrow 6p_{1/2}$ & 0.342(2) \\ $4_{1/2} \rightarrow 6p_{1/2}$ & 0.087(5) & $5s_{1/2} \rightarrow 7p_{1/2}$ & 0.118(1) \\ $4s_{1/2} \rightarrow 7p_{1/2}$ & 0.041(5) & $5s_{1/2} \rightarrow 8p_{1/2}$ & 0.061(5) \\ $4s_{1/2} \rightarrow 8p_{1/2}$ & 0.023(3) & $5s_{1/2} \rightarrow 9p_{1/2}$ & 0.046(3) \\ $4s_{1/2} \rightarrow 9p_{1/2}$ & 0.016(3) & & \\ $\bf{4s_{1/2} \rightarrow 4p_{3/2}}$ & \textbf{5.800(8)} & $\bf{5s_{1/2} \rightarrow 5p_{3/2}}$ & \textbf{5.977(9)} \\ $4s_{1/2} \rightarrow 5p_{3/2}$ & 0.416(6) & $5s_{1/2} \rightarrow 6p_{3/2}$ & 0.553(3) \\ $4s_{1/2} \rightarrow 6p_{3/2}$ & 0.132(6) & $5s_{1/2} \rightarrow 7p_{3/2}$ & 0.207(2) \\ $4s_{1/2} \rightarrow 7p_{3/2}$ & 0.064(5) & $5s_{1/2} \rightarrow 8p_{3/2}$ & 0.114(2) \\ $4s_{1/2} \rightarrow 8p_{3/2}$ & 0.038(3) & $5s_{1/2} \rightarrow 9p_{3/2}$ & 0.074(2) \\ $4s_{1/2} \rightarrow 9p_{3/2}$ & 0.027(3) & &\\ \end{tabular} \end{ruledtabular} \end{table} The advantage of using expression given by Eq. (\ref{pol1}) to evaluate the dynamic polarizability is that the E1 matrix elements for many important transitions that are predominantly contributing to the polarizabilities are now well studied and their values are known to quite reasonable accuracy \cite{arora-sahoo1,arora-sahoo2,safronova-li,arora1,pol-andrei}. Use of these matrix elements along with the experimental energies will certainly give more precise contributions from these matrix elements to $\alpha$. However, the limitation of this sum-over-states approach is that it cannot estimate contributions from the core electrons and can only take into account a few low-lying intermediate states. It has been found in the previous studies that contributions with the core orbitals and high-lying intermediate states (tail) are small compared to the low-lying intermediate states (e.g. see \cite{arora-sahoo1,arora-sahoo2} and {\it references therein}). Therefore, we employ a third order many-body perturbation theory (MBPT(3) method) as described in ~\cite{arora-sahoo1,arora-sahoo3} to determine the core and tail contributions. 
Among the important E1 matrix elements between the low-lying states, the matrix elements for a few primary transitions in the Na, K, and Rb atoms have been obtained using a fitting procedure from the precise measurements of the lifetimes and static dipole polarizabilities of the first few low-lying excited states, as given in \cite{arora-sahoo1,arora-sahoo2}. For instance, the E1 matrix elements of the $3s-3p_{1/2,3/2}$ transitions are taken from the compiled data list of Ref.~\cite{volz}. The other important matrix elements, whose values were not deducible accurately from the measured quantities, are evaluated by employing a relativistic coupled-cluster (RCC) theory. In our RCC method, we express the atomic wave function with the valence electron $v$ as \begin{eqnarray} |\Psi_v \rangle & = & e^T \{1+S_v\} |\Phi_v \rangle , \label{cc2} \end{eqnarray} where $| \Phi_v \rangle$ is the Dirac-Fock (DF) wave function and the $T$ and $S_v$ operators account for the correlation effects to all orders through the excitations of the electrons from the core orbitals alone and from the valence orbital together with the core orbitals, respectively. We consider here the singly and doubly excited configurations along with the important triply excited configurations in the well-known CCSD(T) method framework for calculating the atomic wave functions. \begin{table*}[t] \caption{\label{pol} Static dipole polarizabilities (in a.u.) of the ground states of the Li, Na, K and Rb alkali atoms and their comparison with the precisely available experimental results. Values used in the single oscillator model (SOM) for the evaluation of the dynamic polarizabilities in the previous works are also given at the bottom of the table.} \begin{ruledtabular} \begin{tabular}{lcccc} Contribution & Li & Na & K & Rb\\ \hline $\alpha_{v}$ & 162.6 & 161.4 & 284.3 & 309.3 \\ $\alpha_{c}$ & 0.22 & 0.9 & 5.5 & 9.1 \\ $\alpha_{cv}$ & $\sim 0$ & $\sim 0$ & -0.13 & $ -0.26$ \\ $\alpha_{\rm{tail}}$ & 1.2 & 0.08 & 0.06 & 0.11 \\ Total & 164.1(7) & 162.4(2) & 289.8(6)& 318.3(6)\\ Experiment & 164.2(11)$^a$ & 162.7(8)$^b$ & 290.58(1.42)$^c$ & 318.79(1.42)$^d$ \\ Values used in SOM & & 162.7(8)$^e$ & 293.6(6.1)$^f$ & 319.9(6.1)$^f$\\ \end{tabular} \end{ruledtabular} $^a$Ref.~\cite{li-exp}, $^b$Ref.~\cite{na-exp}, $^c$Ref.~\cite{k-exp}, $^d$Ref.~\cite{rb-exp}, $^e$Ref.~\cite{13} \\ $^f$weighted average from Refs.~\cite{14} and \cite{15} \end{table*} We calculate the E1 reduced matrix elements between the states $| \Psi_f \rangle$ and $| \Psi_i \rangle$, to be used in the sum-over-states approach, using the following RCC expression \begin{eqnarray} \langle \Psi_f || D || \Psi_i \rangle &=& \frac{\langle \Phi_f || \{ 1+ S_f^{\dagger}\} \overline{D } \{ 1+ S_i\} ||\Phi_i\rangle}{ \sqrt{{\cal N}_f {\cal N}_i}}, \end{eqnarray} where $\overline{ D}=e^{T^{\dagger}} D e^T$ and ${\cal N}_v = \langle \Phi_v | e^{T^{\dagger}} e^T + S_v^{\dagger} e^{T^{\dagger}} e^T S_v |\Phi_v\rangle$ involve two non-truncating series. The calculation procedures for these expressions are discussed elsewhere in detail \cite{mukherjee,sahoo2}.
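To illustrate how the tabulated E1 matrix elements enter Eq.~(\ref{pol1}), the following sketch evaluates the valence part of $\alpha(\omega)$ for Na from the two principal lines of Table~\ref{e1mat}. The excitation energies used here are approximate illustrative inputs (the actual calculations use experimental energies), and for the imaginary frequencies needed in the $C_3$ integrals the $-\omega^2$ in the denominator is simply replaced by $+\omega^2$.
\begin{verbatim}
# Valence contribution to alpha(omega) for Na from Eq. (pol1) with J_n = 1/2.
# (d, dE): reduced E1 matrix element (e a_0) from the table above and an
# approximate excitation energy in a.u. (illustrative values only).
lines = [(3.5246, 0.0773),   # 3s_1/2 -> 3p_1/2
         (4.9838, 0.0774)]   # 3s_1/2 -> 3p_3/2

def alpha_valence(omega, J_n=0.5):
    pref = 2.0 / (3.0 * (2.0 * J_n + 1.0))
    return pref * sum(dE * d**2 / (dE**2 - omega**2) for d, dE in lines)

def alpha_imag(omega, J_n=0.5):
    # same sum at imaginary argument, as needed in the C3 integrals
    pref = 2.0 / (3.0 * (2.0 * J_n + 1.0))
    return pref * sum(dE * d**2 / (dE**2 + omega**2) for d, dE in lines)

print(alpha_valence(0.0))   # ~160 a.u., close to the Na valence value of 161.4
\end{verbatim}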
In the SOM, used for example in Refs.~\cite{caride,som1} to calculate the $C_3(a)$ coefficients, the dynamic polarizabilities $\alpha(\omega)$ at the imaginary frequencies $\omega$ are approximated as \begin{equation} \alpha(\iota\omega)=\frac{\alpha(0)}{1+\frac{\omega^2}{\omega_o^2}}, \end{equation} where $\alpha(0)$ is the static dipole polarizability (listed in Table~\ref{pol}) and $\omega_0$ is the characteristic frequency of the atom. The static polarizabilities and the characteristic frequencies are generally atom dependent. \begin{figure}[t] \includegraphics[scale=0.7]{so-rcc.eps} \caption{Dynamic polarizabilities (left half) and $C_3$ coefficients (right half) of the Na and Rb atoms interacting with a graphene layer in the Dirac model calculated using the dynamic polarizabilities obtained from the RCC calculations and from the single oscillator model (SOM).} \label{fig-so-rcc} \end{figure} \section{Results and Discussion}\label{sec4} As mentioned in the text above, we use the most precise values of the E1 matrix elements compiled in Refs.~\cite{arora-sahoo1,arora-sahoo2} for the few important $ns-np$ transitions of the Na, K, and Rb atoms. These values are given in bold fonts in Table~\ref{e1mat}. In the same table, we also present the E1 matrix elements for other transitions, calculated using our CCSD(T) method and required for the evaluation of the polarizabilities, which were already discussed in our earlier works \cite{arora-sahoo1,arora-sahoo2}. In Table \ref{pol}, we present the contributions to the static dipole polarizabilities of the considered atoms obtained using these matrix elements as valence contributions $\alpha_v$. Contributions from the core electron excitations and from correlations of the core electrons with the valence electron of the corresponding atoms are given as core ($\alpha_{c}$) and core-valence ($\alpha_{cv}$) contributions, respectively, in the same table. Also, contributions from the higher excited states whose matrix elements are not included in the determination of $\alpha_v$ are given as $\alpha_{\rm{tail}}$. All these latter three contributions are estimated using the MBPT(3) method in the framework described by Eq. (\ref{pol2}). In the same table, we also give our final polarizability results for the ground states of the alkali atoms and compare them with the precisely available experimental results and with the experimental values used in the SOM~\cite{dirac-hydro2}. This table clearly testifies to the precision of our estimated static polarizabilities and ensures the quality of the dynamic polarizabilities that are obtained using these calculations. In order to compare our dynamic polarizability results with the SOM, we plot them in the left half of Fig.~\ref{fig-so-rcc}. Employing our dynamic polarizabilities we calculate the $C_3$ coefficients for the interactions of the Na and Rb atoms with the graphene layer. These values are plotted for the Dirac model in the right half of Fig.~\ref{fig-so-rcc}. We have also plotted the $C_3$ values using the dynamic polarizabilities obtained from the SOM with our calculated static dipole polarizabilities in the same figure. Our results for the Na atom using polarizabilities calculated with the SOM are considerably different from the results given in Ref.~\cite{dirac-hydro}; for instance, the value of $C_3$($a$=5~$nm$) is approximately 1 a.u. from our Fig.~\ref{fig-so-rcc}, whereas it is approximately 0.6 a.u. in Ref.~\cite{dirac-hydro} (see Fig. 4 of Ref.~\cite{dirac-hydro}).
The discrepancy is owing to the fact that our dynamic polarizability values are different from, and more accurate than, the values used in their calculations. Moreover, their predictions for the Dirac model were overestimated by a factor of 1.5 due to an error in the computer program (as has been clarified in~\cite{dirac-hydro2}). As can be seen, there are significant differences in the results, especially for the heavier atoms like Rb, which suggests the need for more accurate polarizability results in such calculations, especially for the heavier atoms. \begin{figure*}[htpb] \includegraphics[width=15 cm,height=7.5 cm]{comparision.eps} \caption{The $C_3$ coefficients (in a.u.) using both the Dirac (shown in solid line) and hydrodynamic (shown in long-dashed line) models for the alkali atoms as a function of the atom-layer separation distance $a$ interacting with the graphene layer along with the results for a perfect conductor (shown in short-dashed line), Au (shown in dotted line) and SiO$_2$ (shown in dotted-dashed line).} \label{comp} \end{figure*} \begin{figure}[h] \includegraphics[scale=0.7]{cnt-vs-a.eps} \caption{The $C_3$ coefficients (in a.u.) using both the Dirac (solid line) and hydrodynamic (dashed line) models for the alkali atoms as a function of the atom-layer separation distance '$a$' interacting with CNT of radius $R=2 \ nm$.} \label{comp-cnt} \end{figure} Next, we use our polarizability values to determine the $C_3$ coefficients for the interaction between the alkali atoms and the graphene layer and CNT using the hydrodynamic and Dirac models, by substituting the expressions given by Eq. (\ref{eq-h}) and Eq. (\ref{eq-d}), respectively, in Eq. (\ref{eq-c3g}) and Eq. (\ref{eq-c3cnt}). We plot these values against different separation distances $a$ in Fig.~\ref{comp} for all the considered atoms using both the models, together with the $C_3$ coefficients for a perfect conducting surface, Au and a SiO$_2$ wall that were studied in our previous work \cite{bindiya3}. From this figure, it can be observed that there are discrepancies between the results obtained using the hydrodynamic model and the Dirac model for graphene. On physical grounds we argue that the results obtained from the Dirac model are more accurate~\cite{nano1}, and the hydrodynamic model appears to overestimate the results in graphene. As can be seen from the figure, the interaction between an atom and a graphene layer is appreciable only at small distances and reaches a negligible value for large separations. As expected, the interaction of the alkali atoms is strongest with a perfect conductor for the same separation distance, as compared to the interaction of the atom with a graphene layer. However, an interesting observation that can be inferred from this figure is that the $C_3$ coefficients for the K and Rb atoms interacting with the graphene layer are larger than those for the interaction with the theoretically presumed ideal conductor at very small separation distances (say around $1-3 \ nm$). This leads us to conclude that at these distances graphene can offer tighter potentials to K and Rb atoms, which can find applications in a number of experiments. A similar graph comparing the Dirac and hydrodynamic models for the interaction between atoms and a CNT with $R=2 \ nm$ as a function of the separation distance '$a$' is shown in Fig.~\ref{comp-cnt}. The estimated $C_3$ coefficients for various atoms are shown for separation distances between 1 and 5 $nm$.
For the above chosen radius of CNT and at this separation range, it has been observed that the exact results and the PFA results do not deviate much with respect to each other \cite{cylinder,nano1}. \begin{table}[t] \caption{\label{cnt}Calculated (column labeled I) and fitted (column labeled II) values obtained using Eq.~(\ref{fit-n}) for $C_3$ coefficients (in a.u.) for the atom-CNT interaction. Values for $R$ and a are given in $nm$.} \begin{tabular}{ccccccccc} \hline & \multicolumn{8}{c}{Li} \\ \hline a & \multicolumn{2}{c}{R=2} &\multicolumn{2}{c}{R=4} & \multicolumn{2}{c}{R=6} & \multicolumn{2}{c}{R=8}\\ \hline & I & II & I & II & I & II & I & II \\ 1.0 & 0.71 & 0.681 & 0.797 & 0.775 & 0.836 & 0.833 & 0.857 & 0.855 \\ 1.5 & 0.581 & 0.582 & 0.679 & 0.676 & 0.724 & 0.734 & 0.75 & 0.756 \\ 2.0 & 0.493 & 0.497 & 0.592 & 0.591 & 0.64 & 0.648 & 0.668 & 0.671 \\ 2.5& 0.429 & 0.427 & 0.525 & 0.52 & 0.574 & 0.578 & 0.603 & 0.6 \\ 3.0 & 0.379 & 0.371 & 0.471 & 0.464 & 0.519 & 0.522 & 0.55 & 0.544 \\ 3.5 & 0.339 & 0.329 & 0.426 & 0.422 & 0.474 & 0.48 & 0.505 & 0.502\\ 4.0 & 0.306 & 0.302 & 0.389 & 0.395 & 0.436 & 0.453 & 0.466 & 0.475 \\ \hline & \multicolumn{8}{c}{Na} \\ \hline 1.0 & 0.782 & 0.757 & 0.884 & 0.858 & 0.927 & 0.921 & 0.95 & 0.945 \\ 1.5 & 0.638 & 0.642 & 0.747 & 0.744& 0.796 & 0.806 & 0.824 & 0.831 \\ 2.0 & 0.539 & 0.544 & 0.647 & 0.646 & 0.699 & 0.709 & 0.73 & 0.733 \\ 2.5 & 0.466 & 0.47 & 0.571 & 0.565 & 0.624 & 0.628 & 0.656 & 0.652\\ 3.0 & 0.41 & 0.4 & 0.51 & 0.502 & 0.563 & 0.565 & 0.596 & 0.589\\ 3.5 & 0.366 & 0.354 & 0.461 & 0.456 & 0.512 & 0.519 & 0.545 & 0.543 \\ 4.0 & 0.33 & 0.325 & 0.42 & 0.427 & 0.47 & 0.49 & 0.502 & 0.514\\ \hline & \multicolumn{8}{c}{K} \\ \hline 1.0 & 1.212 & 1.172 & 1.37 & 1.33 & 1.437 & 1.428 & 1.473 & 1.465 \\ 1.5 & 0.99 & 0.995 &1.16 & 1.154 & 1.235 & 1.252 & 1.278 & 1.289 \\ 2.0 & 0.838 & 0.845 & 1.0 & 1.0 & 1.087 &1.1 & 1.135 & 1.139 \\ 2.5& 0.726 & 0.722 & 0.889 & 0.88 & 0.971 & 0.978 & 1.022 & 1.015\\ 3.0 & 0.64 & 0.675 & 0.796 & 0.783 & 0.878 & 0.881 & 0.929 & 0.919 \\ 3.5 & 0.572 & 0.554 & 0.72 & 0.712 & 0.801 & 0.811 & 0.852 & 0.848 \\ 4.0 & 0.516 & 0.51 & 0.657 & 0.669 & 0.736 & 0.767 & 0.787 & 0.804\\ \hline & \multicolumn{8}{c}{Rb} \\ \hline 1.0 & 1.371 & 1.325 & 1.548 & 1.502 & 1.624 & 1.611 & 1.665 & 1.653 \\ 1.5 & 1.113 & 1.121 &1.302 & 1.298 & 1.388 & 1.407 & 1.438 & 1.449 \\ 2.0 & 0.939 & 0.948 & 1.127& 1.125 & 1.217 &1.235 & 1.271 & 1.277 \\ 2.5& 0.8 & 0.807 & 0.993 & 0.984 & 1.04 & 1.093 & 1.097 & 1.135\\ 3.0 & 0.714 & 0.696 & 0.888 & 0.873 & 0.979 & 0.983 & 1.037 & 1.025 \\ 3.5 & 0.637 & 0.616 & 0.802 & 0.794 & 0.892 & 0.903 & 0.949 & 0.945 \\ 4.0 & 0.574 & 0.568 & 0.731 & 0.745 & 0.818 & 0.855 & 0.875 & 0.897\\ \hline \end{tabular} \end{table} \begin{table}[t] \caption{\label{fit} Fitting parameters for $C_3$(a) coefficients with a graphene layer and CNT.} \begin{ruledtabular} \begin{tabular}{ccccc} {Graphene layer} & Li & Na & K & Rb\\ \hline A$_0$(a.u.) & 7.4355 & 7.61362 & 12.5622 & 13.7257\\ B$_0$($nm$) & 8.36468 & 7.74636 & 8.41002 & 8.19064\\ \hline {CNT} & Li & Na & K & Rb\\ \hline C$_0$(a.u.) 
& 0.79556 & 0.89764 & 1.3852 & 1.5806 \\ A(a.u./$nm$) & -0.27172 & -0.31559 & -0.48514 & -0.5628 \\ B(a.u./$nm$) & 0.07338 & 0.07988 & 0.12444 & 0.13918 \\ C(a.u./$nm^2$) & 0.02902 & 0.03436 & 0.05293 & 0.06213 \\ D(a.u./$nm^2$) & -0.00445 & -0.00484 & -0.00754 & -0.00844\\ \end{tabular} \end{ruledtabular} \end{table} One of our motivations for carrying out this study is also to find out the dependence of the atom-wall interactions on the radius of the CNT. For this purpose, we present the results computed for the $C_3$ coefficients as a function of the distance '$a$' and radius '$R$' of the CNT in Table~\ref{cnt}. The ranges for $R$ and $a$ have been chosen in accordance with the validity range of the PFA. From the table, we notice that the $C_3$ coefficients increase slowly with the increase in the CNT radius; however, the rate of increase is not very pronounced. With a three-fold increase in the radius, the $C_3$ coefficients rise only by about a factor of one and a half. As expected, these coefficients get stronger as the size of the atom increases, i.e.\ from Li to Rb, for a given separation distance '$a$'. We were unable to find any previous work to compare our results with; however, we have independently cross-checked our results for the H atom and the H$_2$ molecule against the results reported in \cite{blagov} for CNT to validate our calculation procedure. A lot of research work is devoted to the experimental investigation of the behavior of the interactions between trapped atoms and graphene layers or CNTs~\cite{trapping,trapping1,trapping2,trapping3,trapping4,trapping5}. To simplify reproducing the surface interaction potentials from our reported $C_3$ coefficients, and for any comparison of our results with theoretical values, we give a logistic fit for the potential of the atom-graphene layer interaction using the following form \begin{equation} U(a)=\frac{A_0}{a^3(a+B_0)} , \label{fit-g} \end{equation} where $A_0$ (in a.u.) and $B_0$ (in $nm$) are the fitting parameters that depend on the properties of the atom. A list of these fitting parameters for the Li, Na, K and Rb atoms is given in Table~\ref{fit}. The above equation is a useful tool to predict the interaction between the alkali atoms and a graphene layer for any given separation distance '$a$'. We have used the mass gap parameter $\Delta$ value of 0.1 eV in our calculations. It has been observed that a change in the $\Delta$ value from 0.1 eV to $10^{-5}$ eV causes a change of 13\% in the fitting parameters. Our fitting parameters for the interaction between graphene and Na atoms are considerably different from those calculated in Ref.~\cite{dirac-hydro} ($A_0$=7.11 a.u. and $B_0$=9.77 $nm$). As mentioned previously, our results are more reliable keeping in mind the error in the code of Ref.~\cite{dirac-hydro}, and the use of our fitting parameters is recommended for extrapolating the interaction potential for the graphene-alkali atom interaction. Similarly, we also fit the $U(a,R)$ results for the interaction of these atoms with the CNT. However, a logistic equation did not serve as a suitable fit for the CNT; instead we use a rational Taylor equation to fit the results in the following functional form \begin{equation} U(a,R)=\frac{C_0+Aa+BR+Ca^2+DR^2}{a^3}\label{fit-n} \end{equation} and present the respective fitting coefficients with units in Table \ref{fit}, obtained with the best goodness of fit.
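A short sketch of how the fitted forms of Eqs.~(\ref{fit-g}) and (\ref{fit-n}) are used in practice is given below. Since $U\approx -C_3/a^3$, the fitted $C_3$ values follow directly from the numerators of the two expressions, and the example call reproduces the fitted Li entry of Table~\ref{cnt} for $a$=1~$nm$ and $R$=2~$nm$; all parameters are the ones listed in Table~\ref{fit}.
\begin{verbatim}
# Fitted C3 coefficients (in a.u.) from Eqs. (fit-g) and (fit-n); a, R in nm.
def c3_graphene_fit(a, A0, B0):
    return A0 / (a + B0)                      # numerator of U(a) = A0/(a^3 (a+B0))

def c3_cnt_fit(a, R, C0, A, B, C, D):
    return C0 + A*a + B*R + C*a**2 + D*R**2   # numerator of Eq. (fit-n)

# Li parameters taken from the fitting table
li = dict(C0=0.79556, A=-0.27172, B=0.07338, C=0.02902, D=-0.00445)
print(c3_cnt_fit(1.0, 2.0, **li))   # ~0.681 a.u., the fitted Li value at a=1, R=2
\end{verbatim}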
In Table~\ref{cnt}, we compare our fitted $C_3$ coefficient values (column labeled II) with those calculated using Eq.~(\ref{eq-c3cnt}) and Eq.~(\ref{eq-d}) (column labeled I). We see a deviation of less than 4\% at all the separation distances. \section{Summary} To summarize, we have investigated the dispersion coefficients for the atom-graphene and atom-carbon nanotube interactions for the Li, Na, K, and Rb atoms in this work and compared our results with the previously reported results and against the results for the interaction of atoms with a perfect conductor. The interaction potentials of the alkali atoms are studied using both the hydrodynamic and Dirac models, and their dependence on the distance between the atom and the nanotube or graphene layer and on the radius of the nanotube are investigated. The importance of using high precision dynamic polarizability values for such calculations, especially for the heavier atoms, is highlighted. Readily usable functional forms for the interaction potentials are suggested for easy extrapolation and comparison of the experimental results with the theoretical values. \section*{Acknowledgement} The work of B.A. is supported by CSIR grant no. 03(1268)/13/EMR-II, India. H.K. acknowledges the financial support from CSIR. Computations were carried out using the 3TFLOP HPC Cluster at Physical Research Laboratory, Ahmedabad.
\section{Introduction} \label{sec:intro} Query complexity is a model of computation where quantum computers are provably better than classical computers. Some of the great breakthroughs of quantum algorithms have been conceived in this model (e.g., Grover's algorithm~\cite{Gro96}). Shor's factoring algorithm~\cite{Sho97} also essentially solves a query problem exponentially faster than any classical algorithm. In this paper we study the query complexity of the oracle identification problem, the very basic problem of completely determining a string given oracle access to it. In the oracle identification problem, we are given an oracle for an unknown $N$-bit string $x$, which is promised to belong to a known set $\C \subseteq \{0,1\}^N$, and our task is to identify $x$ while minimizing the number of oracle queries. For a set $\C$, we denote this problem ${\oip}(\C)$. As usual, classical algorithms are given access to an oracle that outputs $x_i$ on input $i$, while quantum algorithms have access to a unitary $O_x$ that maps $|i,b\>$ to $|i,b \oplus x_i\>$ for $b \in \{0,1\}$. For a function $f: D \to E$, where $D \subseteq \{0,1\}^N$, let $Q(f)$ denote the bounded-error quantum query complexity of computing $f(x)$. The problem $\oip(\C)$ corresponds to computing the identity function $f(x)=x$ with $D = E =\C$. For example, let $\C_N \defeq \{0,1\}^N$. Then the classical query complexity of $\oip(\C_N)$ is $N$, since every bit needs to be queried to completely learn $x$, even with bounded error. A surprising result of van Dam shows that $Q(\oip(\C_N)) = N/2 + O(\sqrt{N})$~\cite{vDam98}. As another example, consider the set $\C_\text{H1} = \{x: |x| = 1\}$, where $|x|$ denotes the Hamming weight of $x$. This corresponds to the search problem with 1 marked item and thus $Q(\oip(\C_\text{H1})) = \Theta(\sqrt{N})$~\cite{BBBV97,Gro96}. Due to the generality of the problem, it has been studied in different contexts such as quantum query complexity \cite{AIK+04,AIK+07}, quantum machine learning~\cite{SG04,AS05,HMP+10} and post-quantum cryptography \cite{BZ13}. Several well-known problems are special cases of oracle identification, e.g., the search problem with one marked element \cite{Gro96}, the Bernstein-Vazirani problem \cite{BV97}, the oracle interrogation problem~\cite{vDam98} and hidden shift problems \cite{vDHI06}. For some applications, generic oracle identification algorithms are almost as good as algorithms tailored to the specific application \cite{CKOR13}. Consequently, the main result of this paper improves some of the upper bounds stated in \cite{CKOR13}. Ambainis et al.\ \cite{AIK+04,AIK+07} studied the oracle identification problem in terms of $N$ and $M\defeq|\C|$. They exhibited algorithms whose query complexity is close to optimal in its dependence on $N$ and $M$. For a given $N$ and $M$, we say an oracle identification algorithm is optimal in terms of $N$ and $M$ if it solves all $N$-bit oracle identification problems with $|\C|=M$ making at most $Q$ queries and there exists some $N$-bit oracle identification problem with $|\C|=M$ that requires $\Omega(Q)$ queries. This does not, however, mean that the algorithm is optimal for each set $\C$ individually, since these two parameters do not completely determine the query complexity of the problem. For example, all oracle identification problems with $M=N$ can be solved with $O(\sqrt{N})$ queries, and this is optimal since this class includes the search problem with 1 marked item ($\C_\text{H1}$ above). 
However there exists a set $\C$ of size $M=N$ with query complexity $\Theta(\log N)$, such as the set of all strings with arbitrary entries in the first $\log N$ bits and zeroes elsewhere. Let $\oip(M,N)$ denote the set of oracle identification problems with $\C \subseteq \{0,1\}^N$ and $|\C| = M$. Let the query complexity of $\oip(M,N)$ be the maximum query complexity of any problem in that set. Then the classical query complexity of $\oip(M,N)$ is easy to characterize: \begin{proposition}\label{prop:classicalOIP} The classical (bounded-error) query complexity of $\oip(M,N)$ is $\Theta(\min\{M,N\})$. \end{proposition} For $M\leq N$, the upper bound follows from the observation that we can always eliminate at least one potential string in $\C$ with one query. For the lower bound, consider any subset of $\C_\text{H1}$ of size $M$. For $M > N$, the lower bound follows from any set $\C \supseteq \C_\text{H1}$ and the upper bound is trivial since any query problem can be solved with $N$ queries. Now that the classical query complexity is settled, for the rest of the paper ``query complexity'' will always mean quantum query complexity. When quantum queries are permitted, the $M\leq N$ case is fully understood. For a lower bound, we consider (as before) any subset of $\C_\text{H1}$ of size $M$, which is as hard as the search problem on $M$ bits and requires $\Omega(\sqrt{M})$ queries. For an upper bound, we can reduce this to the case of $M=N$ by selecting $M$ bits such that the strings in $\C$ are distinct when restricted to these bits. (A proof of this fact appears in \cite[Theorem 11]{CKOR13}.) Thus $Q(\oip(M,N)) \leq Q(\oip(M,M))$, which is $O(\sqrt{M})$ \cite[Theorem 3]{AIK+04}. In summary, we have the following. \begin{proposition} For $M\leq N$, $Q(\oip(M,N))= \Theta(\sqrt{M})$. \end{proposition} For the hard regime, where $M > N$, the best known lower and upper bounds are the following, from \cite[Theorem 2]{AIK+04} and \cite[Theorem 2]{AIK+07} respectively. \begin{theorem}[\cite{AIK+04,AIK+07}]\label{thm:AmbainisOIP} If $N < M \leq 2^{N^{d}}$ for some constant $d<1$, then $Q(\oip(M,N))= O(\sqrt{{N\log M}/{\log N}})$ and for all $M > N$, $Q(\oip(M,N))= \Omega(\sqrt{{N\log M}/{\log N}})$. \end{theorem} When $M$ gets closer to $2^N$, their algorithm no longer gives nontrivial upper bounds. For example, if $M \geq 2^{N/\log N}$, their algorithm makes $O(N)$ queries. While not stated explicitly, an improved algorithm follows from the techniques of \cite[Theorem 6]{AIN+09}, but the improved algorithm also does not yield a nontrivial upper bound when $M \geq 2^{N/\log N}$. Ambainis et al. \cite{AIK+07} left open two problems, in increasing order of difficulty: to determine whether it is always possible to solve the oracle identification problem for $M=2^{o(N)}$ using $o(N)$ queries and to design a single algorithm that is optimal in the entire range of $M$. In this paper we resolve both open problems by completely characterizing the quantum query complexity of the oracle identification problem in the full range $N < M \leq 2^N$. Our main result is the following: \begin{restatable}{theorem}{qOIP}\label{thm:quantumOIP} For $N < M \leq 2^N$, $Q(\oip(M,N)) = \Theta\(\sqrt{\frac{N\log M}{\log({N}/{\log M})+1}}\)$. \end{restatable} The lower bound follows from the ideas in \cite{AIK+04}, but needs additional calculation. We provide a proof in \app{lb}. The lower bound also appears in an unpublished manuscript \cite[Remark 1]{AIN+09}. 
The $+1$ term in the denominator is relevant only when $M$ gets close to $2^N$; it ensures that the complexity is $\Theta(N)$ in that regime. Our main result is the algorithm, which is quite different from and simpler than that of \cite{AIK+07}. It is also optimal in the full range of $M$ as it makes $O\(\sqrt{\frac{N\log M}{\log({N}/{\log M})+1}}\)$ queries when $M \geq N$ and $O(\sqrt{M})$ queries when $M \leq N$. Our algorithm has two main ingredients: First, we use ideas from classical learning theory, where the oracle identification problem is studied as the problem of exact learning with membership queries \cite{Ang88}. In particular, our quantum algorithm is based on Heged\H{u}s' implementation of the halving algorithm \cite{Heg95}. Heged\H{u}s characterizes the number of queries needed to solve the classical oracle identification problem in terms of the ``extended teaching dimension'' of $\C$. While we do not use that notion, we borrow some of the main ideas of the algorithm. This is further explained in \sec{upper}. We now present a high-level overview of the algorithm. Say we know that the string in the black box, $x$, belongs to a set $S$. We can construct from $S$ a string $s$, known as the ``majority string,'' which is 1 at position $i$ if at least half the strings in $S$ are 1 at position $i$. Importantly, for any $i$, the set of strings in $S$ that disagree with $s$ at position $i$ is at most half the size of $S$. Now we search for a disagreement between $x$ and $s$ using Grover's algorithm. If the algorithm finds no disagreement, then $x = s$. If it does, we have reduced the size of $S$ by a factor of 2. This gives an algorithm with query complexity $O(\sqrt{N}\log M)$, which is suboptimal. We improve the algorithm by taking advantage of two facts: first, that Grover's algorithm can find a disagreement faster if there are many disagreements to be found, and second, that there exists an order in which to find disagreements that reduces the size of $S$ as much as possible in each iteration. The existence of such an order was shown by Heged\H{u}s \cite{Heg95}. The second ingredient of our upper bound is a general composition theorem for solutions of the filtered $\gamma_2$-norm semidefinite program (SDP) introduced by Lee et al.\ \cite{LMR+11} that preserves input-dependent query complexities. We need such a result to resolve the following problem: Our algorithm consists of $k$ bounded-error quantum algorithms that must be run sequentially because each algorithm requires as input the output of the previous algorithm. Let the query complexities of the algorithms be $Q_1(x), Q_2(x), \ldots , Q_k(x)$ on input $x$. If these were exact algorithms, we could merely run them one after the other, giving one algorithm's output to the next as input, to obtain an algorithm with worst-case query complexity $O(\max_x \sum_i Q_i(x))$. However, since these are bounded-error algorithms, we cannot guarantee that all $k$ algorithms will give the correct output with high probability. One option is to apply standard error reduction, but this would yield an algorithm that makes $O(\max_x \sum_i Q_i(x) \log k)$ queries. Instead, we prove a general composition theorem for the filtered $\gamma_2$-norm SDP that gives us an algorithm that makes $O(\max_x \sum_i Q_i(x))$ queries, as if the algorithms had no error. A similar result is known for worst-case query complexity, but that gives a suboptimal upper bound of $O(\sum_i \max_x Q_i(x))$ queries. We prove this result in \sec{gamma}. 
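To make the difficulty with naively chaining bounded-error subroutines concrete, the following back-of-the-envelope check (a Python sketch with illustrative parameters; the success probability $2/3$ and the repetition counts are assumptions, not part of the algorithm) compares direct chaining of $k$ bounded-error steps with chaining after majority-vote amplification of each step:
\begin{verbatim}
from math import comb

def majority_correct(p, t):
    # Probability that the majority vote of t independent runs of a
    # p-correct subroutine is correct (t odd).
    return sum(comb(t, i) * p**i * (1 - p)**(t - i)
               for i in range(t // 2 + 1, t + 1))

k, p = 20, 2 / 3
print(p ** k)                         # direct chaining: decays exponentially in k
print(majority_correct(p, 101) ** k)  # amplified steps: close to 1, but the number
                                      # of repetitions needed per step grows like log k
\end{verbatim}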
The oracle identification problem was also studied by At{\i}c{\i} and Servedio \cite{AS05}, who considered algorithms that are optimal for a given set $\C$. The query complexity of their algorithm depends on a combinatorial parameter of $\C$, $\hat{\gamma}^\C$, which satisfies $2 \leq 1/\hat{\gamma}^\C \leq N+1$. They prove $Q(\oip(\C)) = O(\sqrt{1/\hat{\gamma}^\C}\log M \log\log M)$. Our algorithm for oracle identification, without modification, makes fewer queries than this bound. Our algorithm's query complexity is $O\(\sqrt{\frac{1/\hat{\gamma}^\C}{\log{1/\hat{\gamma}^\C}}}\log M\)$, which resolves a conjecture of Hunziker et al.\ \cite{HMP+10}. We prove this in \sec{quantumml}. Our composition theorem can also be used to remove unneeded log factors from existing quantum query algorithms. As an example, we show how to improve the almost optimal Boolean matrix multiplication algorithm that requires $O(n\sqrt{l} \poly(\log n))$ queries \cite{JKM12}, where $n$ is the size of the matrices and $l$ is the sparsity of the output, to an algorithm with query complexity $O(n\sqrt{l})$. We show this in \sec{bmm}. We conclude with some discussion and open questions in \sec{open}.
\section{Oracle identification algorithm}
\label{sec:upper}
In this section we explain the ideas that go into our algorithm and prove its correctness. We also prove the query upper bound assuming we can compose bounded-error quantum algorithms without incurring log factors, which we justify in \sec{gamma}. Throughout this section, let $x \in \C$ be the string we are trying to identify. For any set $S \subseteq \{0,1\}^N$, let $\MAJ(S)$ be an $N$-bit string such that $\MAJ(S)_i$ is 1 if $|\{y\in S:y_i = 1\}| \geq |\{y\in S:y_i = 0\}|$ and 0 otherwise. In words, $\MAJ(S)_i$ is $b$ if the majority of strings in $S$ have bit $i$ equal to $b$. Note that the string $\MAJ(S)$ need not be a member of $S$. In this paper, all logarithms are base 2 and for any positive integer $k$, we define $[k] \defeq \{1,2,\ldots,k\}$.
\subsection{Basic halving algorithm}
We begin by describing a general learning strategy called the halving algorithm, attributed to Littlestone \cite{Lit88}. Say we currently know that the oracle contains a string $x \in S\subseteq \C$. The halving algorithm tests if the oracle string $x$ is equal to $\MAJ(S)$. If it is equal, we have identified $x$; if not, we look for a bit at which they disagree. Having found such a bit $i$, we know that $x_i \neq \MAJ(S)_i$, and we may delete all strings in $S$ that are inconsistent with this. Since at most half the strings in $S$ disagree with $\MAJ(S)$ at any position, we have at least halved the number of potential strings. To convert this into a quantum algorithm, we need a subroutine that tests if a given string $\MAJ(S)$ is equal to the oracle string $x$ and finds a disagreement otherwise. This can be done by running Grover's algorithm on the bitwise $\textsc{xor}$ of $x$ and $\MAJ(S)$. This gives us the following simple algorithm.
\begin{algorithm} \caption{Basic halving algorithm \label{alg:halving}} \begin{algorithmic}[1] \Statex \Let{$S$}{$\C$} \Repeat \State{Search for a disagreement between $x$ and $\MAJ(S)$. If we find a disagreement, delete all inconsistent strings from $S$. If not, let $S \gets \{\MAJ(S)\}$.} \Until{$|S|=1$} \end{algorithmic} \end{algorithm}
This algorithm always finds the unknown string $x$, since $S$ always contains $x$. The loop can run at most $\log M$ times, since each iteration cuts down the size of $S$ by a factor of 2.
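For concreteness, a minimal classical simulation of \alg{halving} is sketched below (Python; illustrative only, with a hypothetical toy set $\C$). In the quantum algorithm, the linear scan for a disagreement is replaced by Grover search over the bitwise $\textsc{xor}$ of $x$ and $\MAJ(S)$:
\begin{verbatim}
def majority_string(S):
    # Bitwise majority MAJ(S) of a set of equal-length 0/1 strings.
    N = len(next(iter(S)))
    return "".join("1" if 2 * sum(y[i] == "1" for y in S) >= len(S) else "0"
                   for i in range(N))

def halving_identify(x, C):
    # Classical simulation of Algorithm 1: identify x, promised x in C.
    # The scan below is the step that Grover search replaces quantumly.
    S = set(C)
    while len(S) > 1:
        s = majority_string(S)
        i = next((j for j in range(len(x)) if x[j] != s[j]), None)
        if i is None:
            return s                        # no disagreement, so x = MAJ(S)
        S = {y for y in S if y[i] == x[i]}  # keep strings consistent with x_i
    return next(iter(S))

C = {"0000", "0011", "0101", "1111"}        # hypothetical toy instance
assert all(halving_identify(x, C) == x for x in C)
\end{verbatim}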
Grover's algorithm needs $O(\sqrt{N})$ queries, but it is a bounded-error algorithm. For this section, let us assume that bounded-error algorithms can be treated like exact algorithms and need no error reduction. Assuming this, \alg{halving} makes $O(\sqrt{N}\log M)$ queries. \subsection{Improved halving algorithm} Even assuming free error reduction, \alg{halving} is not optimal. Primarily, this is because Grover's algorithm can find an index $i$ such that $x_i \neq \MAJ(S)_i$ faster if there are many such indices to be found, and \alg{halving} does not exploit this fact. Given an $N$-bit binary string, we can find a 1 with $O(\sqrt{{N}/{K}})$ queries in expectation, where $K>0$ is the number of 1s in the string \cite{BBHT98}. Alternately, there is a variant of Grover's algorithm that finds the first 1 (from left to right, say) in the string in $O(\sqrt{p})$ queries in expectation where $p$ is the position of the first 1. This follows from the known $O(\sqrt{N})$ algorithm for finding the first 1 in a string of size $N$ \cite{DHHM06}, by running that algorithm on the first $2^k$ bits, for $k=1,2,\ldots, \log N$. We can now modify the previous algorithm to look for the first disagreement between $x$ and $\MAJ(S)$ instead of any disagreement. \begin{algorithm} \caption{Improved halving algorithm \label{alg:halving2}} \begin{algorithmic}[1] \Statex \Let{$S$}{$\C$} \Repeat \State{Search for the first disagreement between $x$ and $\MAJ(S)$. If we find a disagreement, delete all inconsistent strings from $S$. If not, let $S \gets \{\MAJ(S)\}$.} \Until{$|S|=1$} \end{algorithmic} \end{algorithm} As before, the algorithm always finds the unknown string. To analyze the query complexity, let $r$ be the number of times the loop repeats and $p_1, p_2, \ldots, p_r$ be the positions of disagreement found. After the first run of the loop, since a disagreement is found at position $p_1$, we have learned the first $p_1$ bits of $x$; the first $p_1-1$ bits agree with $\MAJ(S)$, while bit $p_1$ disagrees with $\MAJ(S)$. Thus we are left with a set $S$ in which all strings agree on these $p_1$ bits. For convenience, we can treat $S$ as a set of strings of length $N-p_1$ (instead of length $N$). Each iteration reduces the effective length of strings in $S$ by $p_i$, which gives $\sum_i p_i \leq N$, since there are at most $N$ bits to be learned. As before, the loop can run at most $\log M$ times, thus $r \leq \log M$. Finally, let us assume again that these bounded-error search subroutines are exact. Then this algorithm requires $O(\sum_i \sqrt{p_i})$ queries, which is $O(\sqrt{N\log M})$, by the Cauchy--Schwarz inequality. \subsection{Final algorithm} While \alg{halving2} is an improvement over \alg{halving}, it is still not optimal. One reason is that sometimes a disagreement between the majority string and $x$ may eliminate more than half the possible strings. This observation can be exploited by finding disagreements in such a way as to maximize the reduction in size when a disagreement is found. This idea is due to Heged\H{u}s \cite{Heg95}. To understand the basic idea, consider searching for a disagreement between $x$ and $\MAJ(S)$ classically. The most obvious strategy is to check if $x_1 = \MAJ(S)_1$, $x_2 = \MAJ(S)_2$, and so on until a disagreement is found. This strategy makes more queries if the disagreement is found at a later position. However, we could have chosen to examine the bits in any order. 
We would like the order to be such that if a disagreement is found at a later position, it cuts down the size of $S$ by a larger factor. Such an ordering would ensure that either we spend very few queries and achieve a factor-2 reduction right away, or we spend more queries but the size of $S$ goes down significantly. Heged\H{u}s shows that there is always a reordering of the bits that achieves this. The following lemma is similar to \cite[Lemma 3.2]{Heg95}, but we provide a proof for completeness. \begin{lemma} \label{lem:ordering} For any $S \subseteq \{0,1\}^N$, there exists a string $s \in \{0,1\}^N$ and a permutation $\sigma$ on $N$, such that for any $p \in [N]$, $|S_p| \leq \frac{|S|}{\max\{2,p\}}$, where $S_p = \{y\in S: y_{\sigma(i)} = s_{\sigma(i)} \text{ for }1\leq i\leq p-1 \text{ and } y_{\sigma(p)} \neq s_{\sigma(p)}\}$, the set of strings in $S$ that agree with $s$ at $\sigma(1), \ldots , \sigma(p-1)$ and disagree with it at $\sigma(p)$. \end{lemma} \begin{proof} We will construct the permutation $\sigma$ and string $s$ greedily, starting with the first position, $\sigma(1)$. We choose this bit to be one that intuitively contains the most information, i.e., a bit for which the fraction of strings that agree with the majority is closest to 1/2. This choice will make $|S_1|$ as large as possible. More precisely, we choose $\sigma(1)$ to be any $j$ that maximizes $|\{y\in S: y_j \neq \MAJ(S)_j\}|$. Then let $s_{\sigma(1)}$ be $\MAJ(S)_{\sigma(1)}$. In general, after having chosen $\sigma(1), \ldots , \sigma(k-1)$ and having defined $s$ on those bits, we choose $\sigma(k)$ to be the most informative bit assuming all previous bits have agreed with string $s$ on positions $\sigma(1), \ldots , \sigma(k-1)$. This choice makes $|S_{k}|$ as large as possible. More precisely, define $\bar{S}_p = \{y \in S: y_{\sigma(i)} = s_{\sigma(i)} \text{ for all } 1 \leq i \leq p\}$. We choose $\sigma(k)$ to be any bit $j$ that maximizes $|\{y \in \bar{S}_{k-1}: y_{j}\neq \MAJ(\bar{S}_{k-1})_j\}|$. Then let $s_{\sigma(k)}$ be $\MAJ(\bar{S}_{k-1})_{\sigma(k)}$. This construction ensures that $|S_1| \geq |S_2| \geq \ldots \geq |S_N|$. Since $\sigma(k)$ was chosen to maximize $|\{y \in \bar{S}_{k-1}: y_{j}\neq \MAJ(\bar{S}_{k-1})_j\}|$, we have $|S_k| = |\{y \in \bar{S}_{k-1}: y_{\sigma(k)}\neq \MAJ(\bar{S}_{k-1})_{\sigma(k)}\}| \geq|\{y \in \bar{S}_{k-1}: y_{\sigma(k+1)}\neq \MAJ(\bar{S}_{k-1})_{\sigma(k+1)}\}|$. The size of this set is at least $|\{y \in \bar{S}_k: y_{\sigma(k+1)}\neq \MAJ(\bar{S}_{k-1})_{\sigma(k+1)}\}|$, since $\bar{S}_{k} \subseteq \bar{S}_{k-1}$. We do not know the value of $\MAJ(\bar{S}_{k-1})_{\sigma(k+1)}$ (e.g., it need not be equal to $s_{\sigma(k+1)}$), but we do know that it is either 0 or 1. So this term is at least $\min\{|\{y \in \bar{S}_k: y_{\sigma(k+1)}\neq 0\}|,|\{y \in \bar{S}_k: y_{\sigma(k+1)}\neq 1\}|\} = \min\{|\{y \in \bar{S}_k: y_{\sigma(k+1)}\neq s_{\sigma(k+1)}\}|,|\{y \in \bar{S}_k: y_{\sigma(k+1)} = s_{\sigma(k+1)}\}|\} = \min\{|S_{k+1}|,|\bar{S}_{k+1}|\} = |S_{k+1}|$, where the last equality uses $|S_k| \leq |\bar{S}_k|$ for all $k$. Finally, combining $|S_1| + \ldots + |S_p| \leq |S|$ with $|S_1| \geq |S_2| \geq \ldots \geq |S_p|$ gives us $|S_p| \leq |S|/{p}$. Combining this with $|S_1| \leq |S|/2$, which follows from the definition of $S_1$, yields the result. \end{proof} We can now state our final oracle identification algorithm. 
\begin{algorithm} \caption{Final algorithm \label{alg:final}} \begin{algorithmic}[1] \Statex \Let{$S$}{$\C$} \Repeat \State{Let $\sigma$ and $s$ be as in \lem{ordering}. Search for the first (according to $\sigma$) disagreement between $x$ and $s$. If we find a disagreement, delete all inconsistent strings from $S$. If not, let $S \gets \{s\}$.} \Until{$|S|=1$} \end{algorithmic} \end{algorithm} As before, it is clear that this algorithm solves the problem. Let us analyze the query complexity. To compute the query complexity, let $r$ be the number of times the loop repeats. Let $p_1, p_2, \ldots, p_r$ be the positions of disagreement. We have $\sum_{i=1}^r p_i \leq N$, as in \alg{halving2}. Unlike the previous analysis, the bound $r \leq \log M$ can be loose, since the size of $S$ may reduce by a larger factor due to \lem{ordering}. Instead, we know that each iteration reduces the set $S$ by a factor of $\max\{2,p_i\}$, which gives us $\prod_{i=1}^{r} \max\{2,p_i\} \leq M$. As before, we will assume the search subroutine is exact, which gives us a query upper bound of $O(\sum_{i=1}^{r} \sqrt{p_i})$, subject to the constraints $\sum_{i=1}^r p_i \leq N$ and $\prod_{i=1}^{r} \max\{2,p_i\} \leq M$. We solve this optimization problem in \app{proofs} to obtain the following lemma. \begin{restatable}{lemma}{opt} \label{lem:opt} Let $C(M,N)$ be the maximum value attained by $\sum_{i=1}^{r} \sqrt{p_i}$, subject to the constraints $\sum_{i=1}^{r} p_i \leq N,$ $\prod_{i=1}^{r} \max\{2,p_i\} \leq M,$ $r \in [N]$ and $p_i \in [N]$ for all $i \in [r]$. Then $C(M,N) = O\(\sqrt{\frac{N\log M}{\log({N}/{\log M})+1}}\)$ and $C(M,N) = O(\sqrt{M})$. \end{restatable} Thus \alg{final} achieves the upper bound claimed in \thm{quantumOIP}, under our assumption. We can now return to the assumption that the search subroutine is exact. Since it is not exact, we could reduce the error with logarithmic overhead. However, it is usually unnecessary to incur this loss in quantum query algorithms. In the next section we prove this and rigorously establish the query complexity of \alg{final}. \section{Composition theorem for input-dependent query complexity} \label{sec:gamma} The primary aim of this section is to rigorously establish the query complexity of \alg{final}. Along the way, we will develop techniques that can be used more generally. Let us begin by describing what we would like to prove. \alg{final} essentially consists of a loop repeated $r(x)$ times. We write $r(x)$ to make explicit its dependence on the input $x$. The loop itself consists of running a variant of Grover's algorithm on $x$, based on information we have collected thus far about $x$. Call these algorithms $A_1, A_2, \ldots, A_{r(x)}$. To be clear, $A_1$ is the algorithm that is run the first time the loop is executed, i.e., it looks for a disagreement under the assumption that $S = \C$. It produces an output $p_1(x)$, which is then used by $A_2$. $A_2$ looks for a disagreement assuming a modified set $S$, which is smaller than $\C$. Let us say that in addition to $p_2(x)$, $A_2$ also outputs $p_1(x)$. This ensures that the output of $A_i$ completely describes all the information we have collected about $x$. Thus algorithm $A_{i+1}$ now only needs the output of $A_i$ to work correctly. We can now view \alg{final} as a composition of $r(x)$ algorithms, $A_1, A_2, \ldots, A_{r(x)}$. It is a composition in the sense that the output of one is required as the input of the next algorithm. 
We know that the expected query complexity of $A_i$ is $O(\sqrt{p_i(x)})$. If these algorithms were exact, then running them one after the other would yield an algorithm with expected query complexity $O(\sum_i \sqrt{p_i(x)})$. But since they are bounded error, this does not work. However, if we consider their worst-case complexities, we can achieve this query complexity. If we have $r$ algorithms $A_1, A_2, \ldots, A_r$ with worst-case query complexities $Q_i$, then there is a quantum algorithm that solves the composed problem with $O(\sum_i Q_i)$ queries. This is a remarkable property of quantum algorithms, which follows from the work of Lee et al.\ \cite{LMR+11}. We first discuss this simpler result before moving on to input-dependent query complexities.
\subsection{Composition theorem for worst-case query complexity}
We now show a composition theorem for solutions of the filtered $\gamma_2$-norm SDP, which implies a similar result for worst-case quantum query complexity. This follows from the work of Lee et al.\ \cite{LMR+11}, which we generalize in the next section. As discussed in the introduction, let $D \subseteq \{0,1\}^N$, and consider functions that map $D \to E$. For any matrix $A$ indexed by elements of $D$, we define a quantity $\gamma(A)$. (To readers familiar with the notation of \cite{LMR+11}, this is the same as their $\gamma_2(A|\Delta)$.)
\begin{definition} Let $A$ be a square matrix indexed by $D$. We define $\gamma({A})$ as the following program. \begin{align} \gamma({A}) & \defeq \min_{\{\ket{u_{x j}}, \ket{v_{y j}}\}} \max_{x \in D} \quad c(x)\\ \label{eq:constr1} \text{subject to:}& \qquad \forall x \in D, \quad c(x) = \max \Big\{ \sum_j \norm{\ket{u_{xj}}}^2, \sum_j \norm{\ket{v_{xj}}}^2\Big\}\\ &\qquad \forall x,y \in D, \quad \sum_{j:x_j \neq y_j} \<{u_{xj}}|{v_{yj}}\> = A_{xy} \end{align} \end{definition}
We use $\gamma(A)$ to refer to both the semidefinite program (SDP) above and its optimum value. For a function $f:D\to E$, let $F$ be its Gram matrix, defined as $F_{xy} = 1$ if $f(x) = f(y)$ and $F_{xy} = 0$ otherwise. Lee et al.\ showed that $Q(f) = \Theta(\gamma(J-F))$, where $J$ is the all-ones matrix. More generally, they showed that this SDP also upper bounds the quantum query complexity of state conversion. In the state conversion problem, we have to convert a given state $|s_x\>$ to $|t_x\>$. An explicit description of the states $|s_x\>$ and $|t_x\>$ is known for all $x \in D$, but we do not know the value of $x$. Since the query complexity of this task depends only on the Gram matrices of the starting and target states, define $S$ and $T$ by $S_{xy} = \<s_x|s_y\>$ and $T_{xy} = \<t_x|t_y\>$ for all $x,y \in D$. Let $S \mapsto T$ denote the problem of converting states with Gram matrix $S$ to those with Gram matrix $T$. If $F$ is the Gram matrix of a function $f$, then $J \mapsto F$ is the function evaluation problem. Lee et al.\ showed that $Q(S \mapsto T) = O(\gamma(S-T))$, which generalizes $Q(f) = O(\gamma(J-F))$. We now have the tools to prove the composition theorem for the filtered $\gamma_2$-norm SDP.
\begin{theorem}[\cite{LMR+11}] \label{thm:worstcasecomp} Let $f_0, f_1, \ldots, f_k$ be functions with Gram matrices $F_0, F_1, \ldots, F_k$. Let $C_1, C_2, \ldots, C_k$ be the optimum values of the SDPs for the state conversion problems $F_0 \mapsto F_1, F_1 \mapsto F_2, \ldots , F_{k-1} \mapsto F_k$, i.e., for $i \in [k]$, $C_i = \gamma(F_{i-1} - F_i)$. Then, $\gamma(F_0 - F_k) \leq \sum_{i=1}^k C_i$.
\end{theorem} This does not appear explicitly in \cite{LMR+11}, but simply follows from the triangle inequality $\gamma(A+B) \leq \gamma(A)+\gamma(B)$ \cite[Lemma A.2]{LMR+11}. From this we can also show an analogous theorem for quantum query complexity, which states $Q(F_0 \mapsto F_k) = O(\sum_{i=1}^k Q(F_{i-1} \mapsto F_i))$. We do not prove this claim as we do not need it in this paper. For our application, we require a composition theorem similar to \thm{worstcasecomp}, but for input-dependent query complexity. However, it is not even clear what this means a priori, since the value $\gamma(J-F)$ does not contain information about input-dependent complexities. Indeed, the value is a single number and cannot contain such information. However, the SDP does contain this information and we modify this framework to be able to access this. For example, let $f$ be the find-first-one function, which outputs the smallest $i$ such that $x_i=1$ and outputs $N+1$ if $x=0^N$. There is a quantum algorithm that solves this with $O(\sqrt{f(x)})$ queries in expectation. Furthermore, there is a feasible solution for the $\gamma(J-F)$ SDP with $c(x)=O(\sqrt{f(x)})$, where $c(x)$ is the function that appears in \eq{constr1}. This suggests that $c(x)$ gives us information about the $x$-dependent query complexity. The same situation occurs when we consider the search problem with multiple marked items. There is a feasible solution with $c(x) = O(\sqrt{N/K})$ for inputs with $K$ ones. This function $c(x)$ will serve as our input-dependent cost measure. \subsection{Cost functions} \begin{definition}[Cost function] Let $A$ be a square matrix indexed by $D$. We say $c:D \to \R$ is a feasible cost function for $\gamma({A})$ if there is a feasible solution of $\gamma({A})$ with values $c(x)$ in eq. \eq{constr1}. Let the set of all feasible cost functions for $\gamma(A)$ be denoted $\Gamma(A)$. \end{definition} Note that if $c$ is a feasible cost function for $\gamma(J-F)$, then $\max_x c(x)$ is an upper bound on the worst-case cost, $\gamma(J-F)$, which is exactly what we expect from an input-dependent cost. We can now prove an input-dependent analogue of \thm{worstcasecomp} with $c(x)$ playing the role of $\gamma(J-F)$. \begin{theorem} \label{thm:comp} Let $f_0,f_1, \ldots , f_k$ be functions with Gram matrices $F_0,F_1, \ldots, F_k$. Let $c_1, c_2, \ldots, c_k$ be feasible cost functions for $\gamma(F_0-F_1), \gamma(F_1 - F_2), \ldots, \gamma(F_{k-1} - F_{k})$, i.e., for $i \in [k]$, $c_i \in \Gamma(F_{i-1} - F_i)$. Then there is a $c \in \Gamma(F_0-F_k)$ satisfying $c(x) \leq \sum_i c_i(x)$ for all $x \in D$. \end{theorem} As in the case of \thm{worstcasecomp}, this follows from an analogous triangle inequality. \begin{lemma}\label{lem:triangle} Let $A$ and $B$ be square matrices indexed by $D$. If $c_A \in \Gamma(A)$ and $c_B\in \Gamma(B)$, there exists a $c \in \Gamma(A+B)$ satisfying $c(x) \leq c_A(x) + c_B(x)$ for all $x \in D$. \end{lemma} \begin{proof} Since $c_A \in \Gamma(A)$ and $c_B\in \Gamma(B)$, there exist vectors that satisfy the following constraints: $\sum_{j:x_j \neq y_j} \<{u^{A}_{xj}}|{v^{A}_{yj}}\> = (A)_{xy}$ with $c_A(x) = \max \{\sum_j \norm{\ket{u^{A}_{xj}}}^2, \sum_j \norm{\ket{v^{A}_{xj}}}^2\}$ and $\sum_{j:x_j \neq y_j} \<{u^{B}_{xj}}|{v^{B}_{yj}}\> = (B)_{xy}$ with $c_B(x) = \max \{\sum_j \norm{\ket{u^{B}_{xj}}}^2, \sum_j \norm{\ket{v^{B}_{xj}}}^2\}$. 
Now define $\ket{u_{x j}}= \ket{1}\ket{u^{A}_{x j}} + \ket{2}\ket{u^{B}_{xj}}$ and $\ket{v_{x j}}= \ket{1}\ket{v^{A}_{x j}} + \ket{2}\ket{v^{B}_{xj}}$. We claim that these vectors are feasible for $\gamma(A+B)$. The constraints are satisfied since $\sum_{j:x_j \neq y_j} \<{u_{xj}}|{v_{yj}}\> = \sum_{j:x_j \neq y_j} \<{u^{A}_{xj}}|{v^{A}_{yj}}\> + \sum_{j:x_j \neq y_j} \<{u^{B}_{xj}}|{v^{B}_{yj}}\> = (A)_{xy} + (B)_{xy} = (A+B)_{xy}$. The cost function for this solution, $c(x)$, is $\max \{\sum_j \norm{\ket{u_{xj}}}^2, \sum_j \norm{\ket{v_{xj}}}^2\}$, which gives $c(x) = \max \{ \sum_j \norm{\ket{u^{A}_{xj}}}^2+\norm{\ket{u^{B}_{xj}}}^2, \sum_j \norm{\ket{v^{A}_{xj}}}^2+\norm{\ket{v^{B}_{xj}}}^2 \} \leq c_A(x) + c_B(x)$. \end{proof}
In our applications, we will encounter algorithms that also output their input, i.e., accept as input $f(x)$ and output $(f(x),g(x))$. Note that the Gram matrix of the function $h(x) = (f(x),g(x))$ is merely $H = F \circ G$, defined as $H_{xy} = F_{xy} G_{xy}$. Such an algorithm can either be thought of as a single quantum algorithm that accepts $f(x)\in E$ as input and outputs $(f(x),g(x))$ or as a collection of algorithms $A_e$ for each $e\in E$, such that algorithm $A_{f(x)}$ requires no input and outputs $(f(x),g(x))$ on oracle input $x$. These are equivalent viewpoints, since in one direction we can construct the algorithms $A_e$ from $A$ by hardcoding the value of $e$, and in the other direction we can read the input $e$, call the appropriate $A_e$ as a subroutine, and output $(e, A_e(x))$. Additionally, if the algorithm $A_{f(x)}$ makes $q(x)$ queries on oracle input $x$, the algorithm $A$ we constructed accepts $f(x)$ as input, outputs $(f(x),g(x))$, and makes $q(x)$ queries on oracle input $x$. While intuitive for quantum algorithms, we need to establish this rigorously for cost functions.
\begin{theorem}\label{thm:circ} Let $f,g:D \to E$ be functions with Gram matrices $F$ and $G$. For any $e \in E$, let $f^{-1}(e) = \{x: f(x)=e\}$. For every $e \in E$, let $c_e:f^{-1}(e)\to \R$ be a feasible cost function for $\gamma(J - G_e)$, where $G_e$ denotes the matrix $G$ restricted to those $x$ that satisfy $f(x) = e$. Then there exists a $c\in \Gamma(F - F\circ G)$, such that $c(x) = c_{f(x)}(x)$. \end{theorem}
\begin{proof} We build a feasible solution for $\gamma(F - F \circ G)$ out of the feasible solutions for $\gamma(J-G_e)$. We have vectors $\{\ket{u^e_{x j}}, \ket{v^e_{y j}}\}$ for each $e \in E$ that satisfy $\sum_{j:x_j \neq y_j} \<{u^e_{xj}}|{v^e_{yj}}\> = (J-G_e)_{xy}$ for all $x,y \in f^{-1}(e)$ and $c_e(x) = \max \{\sum_j \norm{\ket{u^e_{xj}}}^2, \sum_j \norm{\ket{v^e_{xj}}}^2\}$. Let $\ket{u_{x j}}= \ket{f(x)}\ket{u^{f(x)}_{xj}}$ and $\ket{v_{x j}}= \ket{f(x)}\ket{v^{f(x)}_{xj}}$. This is a feasible solution for $\gamma(F - F\circ G)$, since $\sum_{j:x_j \neq y_j} \<{u_{xj}}|{v_{yj}}\> = \sum_{j:x_j \neq y_j} \<f(x)|f(y)\>\<{u^{f(x)}_{xj}}|{v^{f(y)}_{yj}}\> = F_{xy} \circ (J - G_{f(x)})_{xy} = F_{xy} - (F \circ G)_{xy}$. Note that when $f(x) \neq f(y)$, the value of $\sum_{j:x_j \neq y_j} \<{u^{f(x)}_{xj}}|{v^{f(y)}_{yj}}\>$ is not known, but this only happens when $F_{xy} = 0$, which makes the term 0. Lastly, the cost function for this solution is $\max \{\sum_j \norm{\ket{u_{xj}}}^2, \sum_j \norm{\ket{v_{xj}}}^2\}$, which is $\max \{ \sum_j \norm{\ket{u^{f(x)}_{xj}}}^2, \sum_j \norm{\ket{v^{f(x)}_{xj}}}^2 \} = c_{f(x)}(x)$. \end{proof}
\subsection{Algorithm analysis}
We can now return to computing the query complexity of \alg{final}.
Using the same notation as in the beginning of this section, for any $x \in \C$, we define $r(x)$ to be the number of times the repeat loop is run in \alg{final} for oracle input $x$ assuming all subroutines have no error. Similarly, let $p_1(x),p_2(x),\ldots p_{r(x)}(x)$ be the first positions of disagreement found in each run of the loop. Note that $p_1(x),p_2(x),\ldots p_{r(x)}(x)$ together uniquely specify $x$. Let $r = \max_x r(x)$. We now define $r$ functions $f_1, \ldots, f_r$ as $f_1(x) = p_1(x), f_2(x) = (p_1(x),p_2(x)), \ldots, f_r(x) = (p_1(x), \ldots, p_r(x))$, where $p_k(x) = 0$ if $k>r(x)$. Thus if $P_i$ are the Gram matrices of the functions $p_i$, then $F_1 = P_1, F_2 = P_1 \circ P_2, \ldots, F_r = P_1 \circ P_2 \circ \cdots \circ P_r$. We will now construct a solution for $\gamma(J-F_r)$, using solutions for the intermediate functions $f_i$. From \thm{comp} we know that we only need to construct solutions for $\gamma(J-F_1), \gamma(F_1 - F_2), \ldots ,\gamma(F_{r-1} - F_r)$. From \thm{circ} we know that instead of constructing a solution for $\gamma(F_k - F_{k+1})$, which is $\gamma(F_k - F_k \circ P_{k+1})$, we can construct several solutions, one for each value of $f_k(x)$. More precisely, let $f_k:D \to E_k$; then we can construct solutions for $\gamma(J - P_{k+1}^e)$ for all $e\in E_k$, where $P_{k+1}^e$ is the matrix $P_{k+1}$ restricted to $x$ that satisfy $f_k(x) = e$. For any $k$, the problem corresponding to $\gamma(J - P_{k+1}^e)$ is just the problem of finding the first disagreement between $x$ and a known string, which is the essentially the find-first-one function. This has a solution with cost function $O(\sqrt{f(x)})$, which in this case is $O(\sqrt{p_{k+1}(x)})$. \begin{theorem} \label{thm:firstmarked} Let $f$ be the function that outputs the smallest $i$ such that $x_i = 1$ and outputs $N+1$ if $x = 0^N$ and let $F$ be its Gram matrix. Then there is a $c \in \Gamma(J-F)$ such that $c(x) = O(\sqrt{f(x)})$. \end{theorem} \begin{proof} Let $a_k = k^{-1/4}$ and $b_k = 1/a_k = k^{1/4}$. Define $|u_{xj}\> = |v_{xj}\>$ as the following. \[ |u_{xj}\> = |v_{xj}\> = \begin{cases} a_j, & \text{if } j < f(x) \\ b_{f(x)}, & \text{if } j = f(x) \\ 0, & \text{if } j > f(x) .\end{cases} \] This is a feasible solution for $\gamma(J-F)$. Since the constraints are symmetric in $x$ and $y$, there are two cases: either $f(x) < f(y)$ or $f(x) = f(y)$. For the first case, $\sum_{j:x_j \neq y_j} \<{u_{xj}}|{v_{yj}}\> = \sum_{j=f(x)} \<{u_{xj}}|{v_{yj}}\> = a_{f(x)} b_{f(x)} = 1$, since $x$ and $y$ agree on all positions before $f(x)$. For the second case, $\sum_{j:x_j \neq y_j} \<{u_{xj}}|{v_{yj}}\> = 0$, since the only bits that $x$ and $y$ disagree on appear after position $f(x)=f(y)$. To compute the cost function, note that $c(0^N) = \sum_{k=1}^{N} a_k^2 = O(\sqrt{N}) = O(\sqrt{f(0^N)})$. For all other $x$, $c(x) = \sum_{k=1}^{f(x)-1} a_k^2 + b_{f(x)}^2 = \sum_{k=1}^{f(x)-1} k^{-1/2} + \sqrt{f(x)} = O(\sqrt{f(x)})$. \end{proof} Our function is different from this one in two ways. First, we wish to find the first disagreement with a fixed string $s$ instead of the first 1. This change does not affect the Gram matrix or the SDP. Second, we are looking for a disagreement according to an order $\sigma$, not from left to right. This is easy to fix, since we can replace $j$ with $\sigma(j)$ in the definition of the vectors in the proof above. 
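The feasibility of this construction is easy to check numerically on small instances. The following brute-force sketch (Python; the instance size $N=6$ is an arbitrary choice for illustration) verifies, for the find-first-one function, both the constraint $\sum_{j:x_j \neq y_j} \<{u_{xj}}|{v_{yj}}\> = (J-F)_{xy}$ and that the resulting cost is $O(\sqrt{f(x)})$:
\begin{verbatim}
from itertools import product

N = 6

def f(x):
    # Position (1-indexed) of the first 1; N+1 if x is all zeroes.
    return next((j + 1 for j, b in enumerate(x) if b == 1), N + 1)

def vec(x):
    # The one-dimensional vectors u_{xj} = v_{xj} from the proof above,
    # with a_k = k^(-1/4) and b_k = k^(1/4).
    fx = f(x)
    return [(j + 1) ** -0.25 if j + 1 < fx else
            fx ** 0.25 if j + 1 == fx else 0.0
            for j in range(N)]

strings = list(product([0, 1], repeat=N))
vecs = {x: vec(x) for x in strings}
for x in strings:
    for y in strings:
        lhs = sum(vecs[x][j] * vecs[y][j] for j in range(N) if x[j] != y[j])
        rhs = 1.0 if f(x) != f(y) else 0.0       # (J - F)_{xy}
        assert abs(lhs - rhs) < 1e-9
    assert sum(w * w for w in vecs[x]) <= 3 * f(x) ** 0.5   # c(x) = O(sqrt(f(x)))
\end{verbatim}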
This shows that for any $k$, there is a feasible cost function for $\gamma(J - P_{k+1}^e)$ with cost $c(x)= O(\sqrt{p_{k+1}(x)})$ for any $x$ that satisfies $f_k(x) = e$. Using \thm{circ}, we get that for any $k$ there is a $c_k \in \Gamma(F_k - F_k\circ P_{k+1})$ with $c_k(x) = O(\sqrt{p_{k+1}(x)})$ for all $x \in D$. Finally, using \thm{comp}, we have a $c \in \Gamma(J-F_r)$ with cost $c(x) = O(\sum_{i=1}^{r} \sqrt{p_i(x)}) = O(\sum_{i=1}^{r(x)} \sqrt{p_i(x)})$. Since the function $f_r(x)$ uniquely determines $x$, we have a feasible cost function for oracle identification with cost $O(\sum_{i=1}^{r(x)} \sqrt{p_i(x)})$, subject to the constraints of \lem{opt}, which we have already solved. Along with the lower bound proved in \app{lb}, this yields the main result. \qOIP* \section{Other applications} \subsection{Quantum learning theory} \label{sec:quantumml} The oracle identification problem has also been studied in quantum learning theory with the aim of characterizing $Q(\oip(\C))$. The algorithms and lower bounds studied apply to arbitrary sets $\C$, not just to the class of sets of a certain size, as in the rest of the paper. We show that \alg{final} also performs well for any set $\C$, outperforming the best known algorithm. The known upper and lower bounds for this problem are in terms of a combinatorial parameter $\hat{\gamma}^\C$, defined by Servedio and Gortler. They showed that for any $\C$, $Q(\oip(\C))=\Omega(\sqrt{1/\hat{\gamma}^\C} + \frac{\log M}{\log N})$ \cite{SG04}. Later, At{\i}c{\i} and Servedio showed that $Q(\oip(\C)) =O(\sqrt{1/\hat{\gamma}^\C} \log M \log\log M)$ \cite{AS05}. While we do not define $\hat{\gamma}^\C$, we can informally describe it as follows: $\hat{\gamma}^\C$ is the largest $\alpha<1$, such that for any set $S\subseteq \C$, if we know that $x$ belongs to $S$, there is a bit of $x$ that can be queried such that size of the set of strings consistent with the answer to this query is at most $(1-\alpha)|S|$, no matter what the oracle responds. This ensures that if we query the oracle with the permutation of \lem{ordering}, which was chosen to maximize the number of strings eliminated with a query, each query reduces the size of $S$ by a factor of $(1-\hat{\gamma}^\C)$. This adds an extra constraint to \lem{opt} of the form $M \prod_i^r (1-\hat{\gamma}^\C)^{p_i} \geq 1$, since learning $p_i$ bits will reduce the size of the remaining set by a factor of $(1-\hat{\gamma}^\C)^{p_i}$. From this constraint we get $(\sum_i p_i) \log(1-\hat{\gamma}^\C) \geq -\log M$. Using $\log(1-\hat{\gamma}^\C) \leq -\hat{\gamma}^\C$ gives $\sum_i p_i \leq \frac{\log M}{\hat{\gamma}^\C}$. We may now replace the constraint $\sum_i p_i \leq N$ with $\sum_i p_i \leq \frac{\log M}{\hat{\gamma}^\C}$ in the optimization problem of \lem{opt}. This inequality also implies $p_i \leq \frac{\log M}{\hat{\gamma}^\C}$ and $r \leq \frac{\log M}{\hat{\gamma}^\C}$. Thus we may simply replace all occurrences of $N$ by $\frac{\log M}{\hat{\gamma}^\C}$ in \lem{opt}. This yields the following theorem, which resolves a conjecture of Hunziker et al.\ \cite[Conjecture 2]{HMP+10}. \begin{theorem} \alg{final} solves $\oip(\C)$ with $O\(\sqrt{\frac{1/\hat{\gamma}^\C}{\log {1/\hat{\gamma}^\C}}}\log M\)$ queries. \end{theorem} This shows that \alg{final} performs well on any set $\C$, since $Q(\oip(\C))=\Omega(\sqrt{1/\hat{\gamma}^\C} + \frac{\log M}{\log N})$. 
By combining this lower bound with our upper bound, we see that \alg{final} makes $O(\frac{Q(\oip(\C))^2}{\sqrt{\log Q(\oip(\C))}} \log N)$ queries, which means it can be at most about quadratically worse than the best algorithm for $\oip(\C)$. \subsection{Boolean matrix multiplication} \label{sec:bmm} In this section we show how to improve the upper bound on Boolean matrix multiplication (BMM) from $O(n\sqrt{l} \poly(\log n))$ \cite{JKM12} to $O(n\sqrt{l})$, where $n$ is the size of the matrices and $l$ is the sparsity of the output. Just like in the oracle identification problem, we will break up the BMM algorithm of \cite{JKM12} into a sequence of algorithms $A_i$ such that the output of $A_i$ is the input of $A_{i+1}$, and convert each algorithm into a feasible solution for the corresponding SDP. The BMM algorithm is almost of this form. The main algorithm uses two subroutines for graph collision, one to solve the decision problem and another to find all collisions. The first subroutine solves the decision problem on a bipartite graph with $2n$ vertices and $m$ nonedges in $O(\sqrt{n}+\sqrt{m})$ queries. Since the graph is not part of the oracle input, this query complexity is not input dependent, and thus there is a feasible SDP solution for this problem with $c(x) = O(\sqrt{n}+\sqrt{m})$ for all $x$, using the known characterization of Lee et al.\ \cite{LMR+11}. The second subroutine finds all graph collisions in an instance with $\lambda$ collisions using $O(\sqrt{n\lambda} + \sqrt{m})$ queries. This upper bound is input dependent, since $\lambda$ is a function of the input. In this subroutine, the only input-dependent algorithm is the variant of Grover's algorithm that requires $O(\sqrt{nk})$ queries to output all the ones in an $n$-bit string when there are $k$ ones. It is easy to show that there is a feasible cost function for this with $c(x)=O(\sqrt{nk})$. For example, we may compose the SDP solution for the find-first-one function (\thm{firstmarked}) with itself repeatedly to find all ones. The cost function of the resultant SDP will satisfy $c(x) = O(\sum_i \sqrt{p_i})$, where $p_i$s are the locations of the ones. By the Cauchy--Schwarz inequality this is $O(\sqrt{nk})$. Thus the second graph collision algorithm also has a feasible cost function $c(x)=O(\sqrt{n\lambda} + \sqrt{m})$. The BMM algorithm breaks up the problem into $n$ instances of graph collision. The algorithm repeatedly searches for indices $i$ such that the $i$th graph collision instance has a collision. Then it finds all graph collisions of this instance and repeats. Instead of searching for an arbitrary $i$, we can search for the first index $i$. The problem of searching for the first $i$ that has a graph collision is the composition of the find-first-one function (\thm{firstmarked}) with the graph collision function. This is a composition in the sense that each oracle input bit of the first problem is the output bit of another query problem. It is known that the optimal value of the $\gamma$ SDP for $f \circ g^n$ is at most $\gamma(J-F)\gamma(J-G)$. Similarly, it can be shown that there is a feasible cost function for $f \circ g$ that is at most the product of the cost functions. This is similar to \cite[Lemma 5.1]{LMR+11} or \lem{triangle}, but instead of taking the direct sum of the vectors, we take the tensor product. Finally, let $p_1, \ldots, p_t$ be the positions of indices found in the algorithm. 
The search problem requires $O(\sqrt{p_i}(\sqrt{n}+\sqrt{m}))$ queries for each $i$, since it is the composition of the two above-mentioned algorithms. The algorithm that finds all graph collisions has a feasible cost function $O(\sqrt{n\lambda_i} + \sqrt{m})$, where $\lambda_i$ is the number of graph collisions in the $i$th graph collision instance. This gives a feasible cost function for BMM with cost $O(\sum_i (\sqrt{p_i}(\sqrt{n}+\sqrt{m})+\sqrt{n\lambda_i} + \sqrt{m}))$, which is the same optimization problem solved in \cite{JKM12}, without log factors. This is $O(n\sqrt{l})$.
\section{Discussion and open questions}
\label{sec:open}
Some readers may wonder if the composition theorem could be avoided by using a standard argument about expected running times (or query complexity), which has the following form: Given $k$ Las Vegas algorithms with expected running times $t_1,\ldots,t_k$, running these algorithms in succession will yield an algorithm with expected running time $\sum_i t_i$ by the linearity of expectation. If we now terminate the algorithm after (say) 5 times its expected running time, then by Markov's inequality we have a bounded-error algorithm with worst-case running time $O(\sum_i t_i)$. However, to use this argument the individual algorithms need to be zero error. If the algorithms are merely bounded error, then the final answer may be incorrect if even one of the $k$ bounded-error algorithms errs. In our applications, oracle identification and Boolean matrix multiplication, we use a subroutine to find the first 1 in a string. This algorithm has bounded error since it is too expensive to verify (with zero error) that a given 1 is indeed the first 1 in a string. Our composition theorem only works for solutions of the filtered $\gamma_2$-norm SDP, not for quantum query complexity itself. While this is sufficient for our application, it would be interesting to know if bounded-error quantum algorithms with input-dependent query complexities can be composed in general without incurring log factors. While the query complexity of oracle identification in terms of $M$ and $N$ has been fully characterized, finding an optimal quantum algorithm for $\oip(\C)$ remains open. The corresponding problem for classical query complexity is also open. It would also be interesting to study time-efficient oracle identification algorithms for specific sets $\C$, since none of the known algorithms, including ours, is known to be time efficient.
\section*{Acknowledgments}
I thank Andrew Childs and Ben Reichardt for helpful discussions, Seiichiro Tani for pointing me to Ref.~\cite{AIN+09}, and Andrew Childs and Ansis Rosmanis for comments on a preliminary draft. This work was supported in part by NSERC, the Ontario Ministry of Research and Innovation, and the US ARO. \bibliographystyle{amsalpha}
\section{Introduction}\label{intro} In the past decade our picture of the Milky Way's stellar halo has dramatically changed thanks to the advent of several observational surveys, which have shown the richness and complexity of the substructure in the Galactic halo \citep{ibat01,newb02,maj03,yan03,martin04CMa,grilldion07b,belok06,belok07orphan, belok07quintet}. Our Galaxy is still undergoing an assembling process, where part of the infalling material has already been accreted and become dynamically relaxed \citep{helmi99,sheff12}, part of it is still dynamically cold \citep{bell08,juric08} and another part is in the process of being dynamically stripped or even approaching its first dynamical encounter with our Galaxy \citep{kall06feb, kall06dec,piatek08,besla10,rocha12}. The most prominent example of a currently ongoing disruption is that of the Sagittarius stream (Sgr stream). Since its discovery in 1996 \citep{mat96}, the stream has been mapped wrapping over $\pi$ radians on the sky, first through 2MASS \citep{maj03} and later through SDSS \citep{belok06,kopos12}. There is general agreement that it is the stellar debris of a disrupting satellite galaxy, the Sagittarius dwarf galaxy \citep{ibat94}, which is currently being accreted by the Milky Way \citep{velaz95,ibat97,no10}. The stream is composed of the leading and the trailing tails of this disruption event \citep{mat96,ibat01,dp01,md01,md04,maj03,belok06, belok13}, which wrap at least once around the Galaxy but have been predicted to wrap more than once \citep{penarrub10,law10}. In addition, a bifurcation and what resembles an extra branch parallel to the main component of the Sgr stream have been discovered both in the northern hemisphere \citep{belok06} and in the southern hemisphere \citep{kopos12}. The origin of this bifurcation and the meaning of the two branches are still debated: they could represent wraps of different age \citep{fellh06}, they could have arisen due to the internal dynamics of the progenitor \citep{penarrub10,penarrub11} or they could indeed be due to different progenitors and a multiple accretion event \citep{kopos12}. On the other hand, one of the simplest and neatest examples of a disrupting satellite is that of the Palomar 5 globular cluster \citep{sand77,oden02,dehnen04} and its stream \citep{oden01,oden03}. This stream extends over $20^{\mathrm{o}}$ along its narrow leading and trailing tails. It displays an inhomogeneous stellar density in what resembles gaps or underdensities \citep{grilldion06pal5}; the origin of this stellar distribution has been attributed both to interactions with dark satellites \citep{carlb12} and to epicyclic motions of stars along the tails \citep{mb12}. Finally, there are also cases of streams with unknown progenitors, such as the so-called Orphan stream \citep{grill06orphan,belok06,belok07orphan,newb10orphan}. This stream extends for $~50^{\mathrm{o}}$ in the North galactic cap, and the chemical signatures from recent spectroscopic observations associate its progenitor with a dwarf galaxy \citep{casey13I,casey13II}. A number of plausible progenitors have been suggested \citep{zucker06,fellh07,jin07,sales08}, but it is still possible that the true progenitor remains undiscovered in the southern hemisphere \citep{casey13I}. In general, the discovery of most of the substructures in the halo of the Milky Way has been possible thanks to photometric multi-colour wide area surveys. Such surveys pose several advantages for this kind of search. 
First, their multiple-band photometry allows for stellar population selections (halo or thick disk; red clump, main sequence turnoff point, etc.) based on colour-colour stellar loci. These selection criteria can be used to make stellar density maps that track the streams all through the survey's coverage area \citep{maj03, belok06}. Second, their continuous coverage of a large area allows the fields adjacent to the substructure to act as control fields. In this way, the colour-magnitude diagrams (CMDs) of the control fields can be used to statistically subtract the foreground and the background stars from the fields probing the substructure. This enhances the signature of the stellar population belonging to the stream or satellite (by removing the noise), and makes it possible to identify age and distance indicators such as the red clump or the main sequence turnoff point \citep{belok06,kopos12,slater13}. In this paper we explore the possibilities of using deep two-band pencil-beam surveys instead of the usual wide-area multi-colour surveys in order to detect and characterize stellar streams of the halo and, in particular, we revisit the Sagittarius, the Palomar 5 and the Orphan streams. We derive photometric distances using purely the main sequence turnoff point and, unlike other works, without relying on the giant branch and its red clump.
\section{Observations and data processing}\label{data}
\subsection{Description of data set}\label{subsec:dataset}
We use deep photometric imaging from the MENeaCS and the CCCP surveys \citep{sand12meneacs,hoekstra12cccp,bildfell12combined} as well as several additional archival cluster fields, observed with the CFHT-MegaCam instrument. These surveys targeted pre-selected samples of galaxy clusters; therefore the survey geometry takes the form of a beam-like survey where the pointings are distributed without prior knowledge of the halo substructure (blind survey).
\begin{figure*} \centering \includegraphics[width=\textwidth]{mapequat_differentiatedpointings_upon_SgrFromKop2012.eps} \caption{Equatorial map showing the position of all the fields from our survey (white hexagons) and highlighting the ones that lie on the Sagittarius stream (green circles for the faint branch and green squares for the bright branch), on the Palomar 5 stream and on the Orphan stream (blue diamonds). The background image is the SDSS-DR8 map of the Sgr stream from \citet{kopos12}. } \label{fig:mapPointings} \end{figure*}
Our pointings are one square degree wide and spread over the sky visible to CFHT. Each consists of several exposures through the g' and r' filters with sub-arcsecond seeing. After stacking the individual exposures, the limiting magnitudes reach $\sim25.0$ at the $5.0\sigma$ level. Out of the 97 fields, at least 25 fall on the structure of the Sagittarius (Sgr) stream and show distinct signatures in their CMDs, one on the Orphan stream, one on the Palomar 5 stream and three to seven on the Virgo Overdensity and the Virgo Stellar Stream \citep{duffau06, juric08,casetti09,prior09,bonaca12virgo} (see figure~\ref{fig:mapPointings}).
Further away from the plane of the Sgr stream, we also find three fields to be coincident with the Triangulum-Andromeda structure \citep{rochapinto04,bonaca12triangulum}, two to three with the Pisces Overdensity \citep{watkins09,sesar10,sharma10}, one transitional between the Triangulum-Andromeda and the Pisces Overdensity, four with the Anticenter Structure \citep{grillmair06anticenter} and two to three with the NGC5466 stream \citep{grillmairjohnson06,fellhauer07ngc5466}. We also find two fields on the Lethe stream \citep{grillmair09new4treams}, four on the Styx stream \citep{grillmair09new4treams}, one on a region apparently common to the Styx and Cocytos streams \citep{grillmair09new4treams} and two on the Canis Major overdensity \citep{martin04CMa}. In this paper we concentrate on the clearest structures (those where the contrast-to-noise in the CMD is higher) in order to test the capabilities of our method. In particular, we address the Sagittarius stream, the Palomar5 stream and the Orphan stream. \subsection{Correction of the PSF distortion and implications for the star/galaxy separation}\label{subsec:PSF} Before building catalogues and in order to perform an accurate star/galaxy separation, it is necessary to rectify the varying PSF across the fields of the CFHT images. In order to correct for this effect, we make use of a 'PSF-homogenizing' code (K. Kuijken et al., in prep.). The code uses the shapes of bright objects unambiguously classified as stars to map the PSF across the image, and then convolves it with an appropriate spatially variable kernel designed to render the PSF gaussian everywhere. With a view to obtaining a PSF as homogeneous as possible, we treat the data as follows \citep{vdBurg13}: i) we implement an accurate selection of sufficiently bright stars based on an initial catalogue, ii) we run the code on the individual exposures for each field, and iii) we reject bad exposures based on a seeing criterion \footnotemark[1] before stacking them into one final image, on which we perform the final source extraction and photometry. \footnotetext[1]{ The rejection of exposures derives from trying to optimize the image quality while achieving the desired photometric depth. Thus our seeing criterion is a variable number dependent on the field itself, the seeing distribution for individual exposures and the individual plus total exposure time. In general it takes values $<\approx0.9$.} The advantages of this procedure are twofold. First, because the resulting PSF for each exposure is gaussian, all the stars become round. Second, because the PSF anisotropy is removed from all exposures before stacking, the dispersion in size for the point-source objects becomes smaller, even if the average value increases after stacking the individual exposures (see figure~\ref{fig:GaussedSize}). These two improvements significantly reduce the galaxy contamination when performing the star selection (illustrated in figure~\ref{fig:starsSelec}). Additionally, homogenizing the PSF also allows to measure colours in fixed apertures. \begin{figure} \centering \includegraphics[width=95mm]{gaussed_magsize_A119_g.eps} \caption{Brightness versus size diagram of all the sources in one of our pointings. 
The stellar locus prior to the PSF-homogenization (black) is wider and therefore subject to greater galaxy contamination at the faint end than the stellar locus posterior to the correction (blue) because the PSF initially varies across the field.} \label{fig:GaussedSize} \end{figure} From the final images, we extract the sources and produce photometric catalogues using SExtractor \citep{bertinSExtractor}. To derive the stellar catalogues, we use a code that filters the source catalogues as follows: i) finds the saturated stars and removes them from the stellar catalogue; ii) evaluates the distribution of bright sources ($r'=[18.0,20.0]\ \mathrm{mag}$) in the brightness-size parameter space, assumes a gaussian distribution in the size and in the ellipticity parameters ($e_1$, $e_2$)\footnotemark[2] of stars, and uses this information to define the boundaries of the stellar locus along the bright range; iii) evaluates the dependence of the width of the stellar locus on brightness and extrapolates the relation to fainter magnitudes; iv) applies the extended stellar locus and an ellipticity criterion to drop galaxies from the stellar catalogue. \footnotetext[2]{ \begin{displaymath} e_1 = \frac{1-q^2}{1+q^2}\cos{2\theta} \, , \; \; e_2 = \frac{1-q^2}{1+q^2}\sin{2\theta} \, , \; \; \end{displaymath} where $q=$axis ratio, $\theta=$position angle of the major axis. } For the stars resulting from this selection (figure~\ref{fig:starsSelec}), we correct their photometry from galactic reddening by using the extinction maps from \citet{schlegel98dustmaps}. The final stellar catalogues are used to build the CMDs employed for our analysis. The PSF-corrected catalogues yield much cleaner CMDs than the catalogues with similar star/galaxy separation but no PSF-correction (figure~\ref{fig:GaussedCMD}). \begin{figure} \centering \includegraphics[width=95mm]{starSelec_A119_g.eps} \caption{Brightness versus size diagram showing all the PSF-corrected sources (blue) and the subset of sources selected as stars through our star/galaxy separation algorithm (red) for one of our pointings. Although the star selection may not be complete at the faint end due to increasing scatter, our algorithm minimizes the galaxy contamination, which otherwise would be the main obstacle for detecting faint structures in the CMD.} \label{fig:starsSelec} \end{figure} \begin{figure*} \centering \includegraphics[width=\textwidth]{gaussed_CMD_A119.eps} \caption{Colour-Magnitude Diagram (CMD) displaying the selection of sources considered stars (selected as explained in section 2.2). The plume on the red side ($g-r\approx1.2$) is composed of the nearby M-dwarfs, whereas the main sequence on the bluer side ($0.18<g-r<0.6$) corresponds to a halo overdensity located at a particular well-defined distance. The cloud of sources at faint magnitudes are faint galaxies that enter the star selection. \emph{Left:} CMD derived from an image that has not been PSF-corrected. \emph{Right:} CMD derived from a PSF-corrected image. After homogenizing the PSF, the galaxy contamination decreases markedly below $r \approx 22$.} \label{fig:GaussedCMD} \end{figure*} \subsection{Identification of the main sequence turnoff point}\label{subsec:turnoffpoint} The photometric depth of our data allows us to detect a number of halo substructures several magnitudes below their main sequence turn-off point. 
However, because our survey is a pencil-beam survey lacking control fields adjacent to our target fields, we have no reference-CMDs representing a clean foreground plus a smooth halo, and thus a simple foreground subtraction is not possible. Instead the halo substructures in our survey can only be detected in those fields where the contrast in density between the main sequence stream stars and the foreground and background stars is significant in the CMD. Thus, in order to search for main sequences in the CMDs, we build a cross-correlation algorithm that runs across a region of the CMD (the 'search region'), focused on the colour range associated with the halo turnoff stars ($0.18 \leq g-r \leq 0.30$). Within the boundaries of this search region, we slide a template main sequence-shaped 2D function that operates over the number of stars and, for each step, yields an integral representing the weighted density of stars in such a main sequence-shaped area. When the template main sequence function coincides with a similarly shaped overdensity in the CMD, the value of the cross-correlation (the weighted density) is maximized, and a value for the turnoff point is assigned. This process is illustrated in figure~\ref{fig:ccCMDbins}. In some cases a CMD presents more than one main sequence signature with sufficient contrast-to-noise. When this happens we use the detection of the primary main sequence (the position of its turnoff point and its characteristic width-function) to randomly subtract a percentage of the stars associated with it (lowering its density to the foreground level) and detect the next most prominent main sequence feature. We label these main sequence detections primary, secondary, etc., ranked by their signal to noise. We require the signal to noise to be $>3.5\sigma$ for primary MSs and $>4\sigma$ for the secondary or tertiary MSs after partially removing the primary one.
\begin{figure*} \centering \includegraphics[width=\textwidth]{cc_slrCMDdenBins_A401.eps} \caption{\emph{Left}: Dereddened CMD (black dots) with the search region (pink solid-line rectangle) for the cross-correlation and the template main sequence-shaped function (green solid line) at the position of maximum density (peak of the cross-correlation). \emph{Right}: Binned diagram representing the weighted density of stars resulting from the cross-correlation process. The density in each bin corresponds to the integral of the template main sequence-shaped function with top left corner in the position of the bin.} \label{fig:ccCMDbins} \end{figure*}
\subsubsection{Shape of the template main sequence function}
When constructing the template main sequence-shaped 2D function (from now on, 'template-MS'), we use two ingredients. The first one is a theoretical isochrone\footnotemark[3] of age $t = 10 \ \mathrm{Gyr}$ and metallicity $[Fe/H] = -1.58$, which is used to define the central spine of the template-MS. The position of this central spine is later shifted in magnitude and colour steps during the cross-correlation. Since we are only interested in the shape of this isochrone (its absolute values are irrelevant because it will be shifted) and since we are searching for halo substructures, we choose the above age and metallicity values because they yield an isochrone shape representative of old metal-poor stellar populations. The second ingredient is a magnitude-dependent colour-width, which is used to broaden the isochrone template as illustrated in the left panel of figure~\ref{fig:ccCMDbins}.
\footnotetext[3]{Throughout this work we use a subset of theoretical isochrones from http://stev.oapd.inaf.it/cmd. The theoretical isochrones (\citet{marigo08}, with the corrections from Case A in \citet{girardi10} and the bolometric corrections for Carbon stars from \citet{loidl01}) are provided as observable quantities transformed into the CFHT photometric system.} The width is in general directly derived from the width of the locus of nearby M-dwarfs ($1.0<g-r<1.4$). The width of this feature is calculated for a number of magnitude bins as three times the standard deviation in colour for each bin. Then a functional form dependent on magnitude is obtained through polynomial fitting. In a few cases, minor tweaking is needed to compensate for extremely large widths (colour shifts become insensitive to any substructure) or for extremely small widths (density values become meaningless due to the built-in weight [see below]). This way of defining the width of the template-MS accounts for the observational broadening of intrinsically well-defined stellar loci due to increasing photometric uncertainties at faint magnitudes. \subsubsection{Weights within the template MS-function} In addition to a theoretically and observationally motivated shape for the template-MS, we also give a different weight to each region of the template. This means that, for each step of the cross-correlation, the stars contained within will contribute differently to the enclosed stellar density depending on how far they are from the spine of the template-MS. The weight in colour (stars near the spine of the template-MS are more likely to belong to the main sequence than stars close to the boundaries) is assigned through the exponential term in a Gaussian weight function. We match the standard deviation of the Gaussian weight to the standard deviation of the template-MS width ($3\sigma=\omega_{MS}$) so that all the stars contained within the template-MS are assigned a weight. To guarantee that the weight does not favour bright features, we choose the amplitude of the Gaussian function to be such that the integral of the weight function between the edges of the template-MS function is the same for all magnitudes. The resulting weight function for a given star in the template-MS at a particular step of the cross-correlation is then: \begin{equation} \ \ W_{\ast}(mag,colour) =\frac{A}{\sqrt{2\pi}\sigma(mag)}\cdot \exp\left\{-\frac{[colour-\eta_{CC}(mag)]^2}{2[\sigma(mag)]^2}\right\} \end{equation} where $mag$ and $colour$ are the magnitude and colour of the weighted star, $\eta_{CC}(mag)$ represents the theoretical isochrone at that particular step of the cross-correlation, and $\sigma(mag)=\frac{1}{3}\omega_{MS}(mag)$ is proportional to the width of the template-MS function for that particular CMD. \subsection{Uncertainties in the turnoff point}\label{subsec:uncertainties} The colour and magnitude values for the turnoff point of a given main sequence, ($c_{TO}$, $mag_{TO}$), are derived from the position of the template at which the cross-correlation peaks. Therefore the uncertainties for these turnoff point values derive from the contribution of individual stars to the position and shape of the main sequence (the uncertainty from the CMD itself). To evaluate this uncertainty, we carry out a bootstrapping process. In this process, we first generate re-sampled stellar catalogues by randomly withdrawing stars from one of our true catalogues.
Second, we run the cross-correlation and obtain the turnoff points for each of these re-samples. Third, we consider the offsets between these turnoff points and the original turnoff point and derive the standard deviation of the distribution. The contribution of any CMD to the uncertainty of its turnoff point can then be calculated as a function of a reference (bootstrapped) standard deviation, $s$: \begin{displaymath} \ \ \ \ E_{\mathrm{mag,CMD}} =\ f_{\mathrm{mag,BS}}\cdot\frac{(s_{{\mathrm{mag,BS}}})}{\left.\frac{\partial^2 \rho_{\mathrm{CC}}}{\partial^2 mag}\right|_{\mathrm{TO}}} \, , \; \; \; E_{\mathrm{c,CMD}} =\ f_{\mathrm{c,BS}}\cdot\frac{(s_{{\mathrm{c,BS}}})}{\left.\frac{\partial^2 \rho_{\mathrm{CC}}}{\partial^2 c}\right|_{\mathrm{TO}}} \, , \; \; \end{displaymath} where, in practice, $s_{\mathrm{mag,BS}}$ and $s_{\mathrm{c,BS}}$ are the standard deviations calculated for a number of representative fields, $f_{\mathrm{mag,BS}}$ and $f_{\mathrm{c,BS}}$ are scale factors that allow us to obtain the uncertainty for any field from the standard deviation of the bootstrapped fields, and $\frac{\partial^2 \rho_{\mathrm{CC}}}{\partial^2 mag}$ and $\frac{\partial^2 \rho_{\mathrm{CC}}}{\partial^2 c}$ evaluate the prominence of the particular overdensity as a function of magnitude or as a function of colour. In practice, $E_{\mathrm{mag,CMD}}=s_{{\mathrm{mag,BS}}}$ and $E_{\mathrm{c,CMD}}=s_{{\mathrm{c,BS}}}$ for the bootstrapped fields used as a reference. The photometric turnoff point distances are derived from the distance modulus. Therefore the uncertainties in the distances can be calculated as a combination of two sources of error: the uncertainty derived from the observed brightness of the turnoff point ($E_{\mathrm{mag,CMD}}$, discussed above) and the uncertainty derived from the absolute brightness of the turnoff point, which depends on the choice of isochrone (and thus on the uncertainty in the age or in the metallicity). \begin{eqnarray} E_{\mu,\mathrm{TO}} & = & \sqrt{E_{\mathrm{mag,CMD}}^2 + E_{\mathrm{mag,isoch}}^2} \,; \end{eqnarray} \section{The Sagittarius stream}\label{results} \subsection{Turnoff point distances to the Sgr stream}\label{subsec:resultsSgr} The Sagittarius stream is clearly probed by at least 25 of our 97 fields (see the red and pink squares in figure~\ref{fig:mapPointings}). They probe both the faint and the bright branches of the stream (the faint branch lying to the North of the bright one) and also two transitional areas, indicating that the transverse drop in stellar counts between both branches is not dramatic. Some of these fields present more than one main sequence in their CMDs; for those fields the secondary turnoff points are calculated by subtracting the primary MS and re-running the cross-correlation (as explained in section~\ref{subsec:turnoffpoint}). Based on the turnoff point values obtained from the cross-correlation, we calculate the distances to the Sagittarius stream in these 25 fields for 31 detections. For this calculation, we assume a single stellar population represented by a theoretical isochrone with age $t_{age} = 10.2 \ \mathrm{Gyr}$ and metallicity $[Fe/H]=-1.0 \ \mathrm{dex}$ (for a detailed description of the set of isochrones see footnote~3 in section~\ref{subsec:turnoffpoint}).
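In practice, the conversion from a measured turnoff point to a heliocentric distance is a direct application of the distance modulus, combined with the error propagation of section~\ref{subsec:uncertainties}. The following minimal sketch (in Python) summarizes the calculation; the function names and numerical values are ours and purely illustrative, not measurements or code from this work:
\begin{verbatim}
import math

def turnoff_distance_kpc(m_TO, M_TO):
    # Distance modulus mu = m_TO - M_TO, then d[pc] = 10**((mu + 5) / 5).
    mu = m_TO - M_TO
    return 10.0 ** ((mu + 5.0) / 5.0) / 1000.0   # pc -> kpc

def distance_uncertainty_kpc(m_TO, M_TO, E_mag_CMD, E_mag_isoch):
    # Combine the CMD and isochrone magnitude errors in quadrature
    # (as in the expression for E_{mu,TO}) and propagate to the distance.
    E_mu = math.hypot(E_mag_CMD, E_mag_isoch)
    d = turnoff_distance_kpc(m_TO, M_TO)
    return d * (10.0 ** (E_mu / 5.0) - 1.0)

# Illustrative numbers only:
print(turnoff_distance_kpc(21.0, 3.6))                      # ~30 kpc
print(distance_uncertainty_kpc(21.0, 3.6, 0.15, 0.10))      # ~2.6 kpc
\end{verbatim}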
We choose these age and metallicity values because they match the age-metallicity relation for the Sgr dwarf galaxy \citep{layden00} --which is also expected to hold for its debris-- and are consistent with the range that characterizes old metal-poor populations. To account for the potential influence on our distance measurements of a possible metallicity gradient along the different Sgr arms \citep{chou07,shi12,vivas05,carlin12}, we analyse the dependence of the isochrones' turnoff point absolute brightness ($M_{TO}$) on metallicity throughout the Sgr metal-poor range (see figure~\ref{fig:metallicitySgr}). We find that for $-1.53 \ \mathrm{dex} <[Fe/H]< -0.8 \ \mathrm{dex}$ the absolute brightness remains nearly constant in the r band, with a maximum variation of $\Delta M=\pm0.1 \ \mathrm{mag}$. We conclude that if we take this variation in absolute brightness as the isochrone uncertainty in the distance modulus ($E_{\mathrm{mag,isoch}}=\Delta M$), we can use the $t_{age} = 10.2 \ \mathrm{Gyr}$ and $[Fe/H]=-1.0 \ \mathrm{dex}$ isochrone to calculate distances to any region of the Sgr stream. \begin{figure*} \centering \includegraphics[width=\textwidth]{Sgrstream_TOmagageZrelation_zoomin.eps} \caption{Absolute brightness of the turnoff point in the r band as a function of metallicity and age for metal-poor populations (green circles). The values in this diagram follow the age-metallicity relation for the Sgr dwarf galaxy from \citet{layden00}. The isochrone used in this paper to derive distances to the Sgr stream is represented with a yellow star, and its maximum difference from the other brightness values in this range is $\Delta M=\pm0.1\ \mathrm{mag}$.} \label{fig:metallicitySgr} \end{figure*} \begin{table*} \caption{Position and distances to the Sgr stream, together with a tag for faint or bright branch membership, a tentative classification as leading or trailing arm and a number specifying the hierarchy of the detection in the CMD (primary, secondary, etc.). The distances are indicated both as distance modulus and as heliocentric distance, with the distance uncertainty ($E_{\mathrm{d}}$) in kpc. } \label{table:distSgr} \centering \begin{tabular}{l r c c r c c c} \hline\hline Field & arm & detection & RA (deg) & DEC (deg) & $\mu$ (mag) & $d$ (kpc) & $E_{\mathrm{d}}$ (kpc) \\ \hline A2104$^{\mathrm{f}}$ & lead & 1 & 235.040644 & -3.33158 & 18.8 & 56.6 & 3.1 \\ RXJ1524$^{\mathrm{f}}$ & trail & 1 & 231.170583 & 9.93498 & 16.2 & 17.1 & 2.0 \\ A2050$^{\mathrm{f}}$ & lead & 1 & 229.080749 & 0.08773 & 18.7 & 54.1 & 8.7 \\ A1942$^{\mathrm{f}}$ & lead & 1 & 219.654126 & 3.63573 & 18.7 & 54.1 & 3.7 \\ A1882$^{\mathrm{b}}$ & lead & 1 & 213.667817 & -0.30598 & 18.5 & 49.3 & 5.7 \\ A1835$^{\mathrm{b}}$ & lead & 1 & 210.259355 & 2.83093 & 18.4 & 47.1 & 4.2 \\ RXJ1347$^{\mathrm{b}}$ & ? & 1 & 206.889060 & -11.80299 & 15.5 & 12.4 & 7.3 \\ ZwCl1215$^{\mathrm{b}}$ & ? & 1 & 184.433196 & 3.67469 & 16.7 & 21.5 & 2.9 \\ ZwCl1215$^{\mathrm{b}}$ & ? & 3 & 184.433196 & 3.67469 & 15.0 & 9.8 & 2.6 \\ A1413$^{\mathrm{f}}$ & lead & 1 & 178.842420 & 23.42207 & 17.5 & 31.1 & 2.7 \\ A1413$^{\mathrm{f}}$ & trail & 2 & 178.842420 & 23.42207 & 16.2 & 17.1 & 1.9 \\ A1246$^{\mathrm{f}}$ & lead & 1 & 170.987824 & 21.40913 & 17.6 & 32.6 & 9.2 \\ A1185$^{\mathrm{f}}$ & ? & 1 & 167.694750 & 28.68127 & 16.3 & 18.7 & 12 \\ ZwCl1023$^{\mathrm{b}}$ & ? & 1 & 156.489424 & 12.69030 & 17.4 & 29.7 & 11 \\ A795$^{\mathrm{b}}$ & lead & 1 & 141.030063 & 14.18190 & 16.0 & 14.2 & 2.8 \\ A795$^{\mathrm{b}}$ & ? & 2 & 141.030063 & 14.18190 & 15.6 & 14.2 & 2.8 \\ A763$^{\mathrm{b}}$ & ? & 1 & 138.150298 & 15.99992 & 16.7 & 21.5 & 2.6 \\ A763$^{\mathrm{b}}$ & lead & 2 & 138.150298 & 15.99992 & 15.8 & 14.2 & 1.0 \\ RXJ0352$^{\mathrm{f}}$ & lead & 1 & 58.263173 & 19.70387 & 15.7 & 13.6 & 0.7 \\ RXJ0352$^{\mathrm{f}}$ & trail & 2 & 58.263173 & 19.70387 & 17.7 & 34.1 & 4.3 \\ A401$^{\mathrm{f}}$ & trail & 1 & 44.759558 & 13.58975 & 17.4 & 29.7 & 3.4 \\ A399$^{\mathrm{f}}$ & trail & 1 & 44.478652 & 13.05185 & 17.6 & 32.6 & 11 \\ A370$^{\mathrm{b}}$ & trail & 1 & 39.963713 & -1.65806 & 17.6 & 32.6 & 4.8 \\ A223$^{\mathrm{b}}$ & trail & 1 & 24.557005 & -12.77010 & 17.0 & 24.7 & 1.7 \\ RXJ0132$^{\mathrm{f}}$ & trail & 1 & 23.169048 & -8.04556 & 17.1 & 25.9 & 2.3 \\ A133$^{\mathrm{b}}$ & trail & 1 & 15.673483 & -21.88113 & 16.6 & 20.6 & 2.4 \\ A119$^{\mathrm{f}}$ & trail & 1 & 14.074919 & -1.23337 & 16.9 & 23.6 & 2.9 \\ A85$^{\mathrm{b}}$ & trail & 1 & 10.469662 & -9.28824 & 16.9 & 23.6 & 1.6 \\ A2670$^{\mathrm{f}}$ & trail & 1 & 358.564313 & -10.40142 & 16.6 & 20.6 & 1.1 \\ RXJ2344$^{\mathrm{f}}$ & trail & 1 & 356.059633 & -4.36345 & 16.7 & 21.5 & 5.6 \\ RXJ2344$^{\mathrm{f}}$ & lead & 2 & 356.059633 & -4.36345 & 15.6 & 13.0 & 1.2 \\ A2597$^{\mathrm{f}}$ & trail & 1 & 351.336736 & -12.11193 & 16.9 & 23.6 & 1.4 \\ \hline \end{tabular} \begin{list}{}{} \item[$^{\mathrm{b}}$] Bright branch \item[$^{\mathrm{f}}$] Faint branch \end{list} \end{table*} The resulting distances and distance uncertainties for these fields can be found in table~\ref{table:distSgr}, together with the central position of each field (in equatorial coordinates), a ``faint/bright branch'' tag (derived from figure~\ref{fig:mapPointings}), a tentative classification as leading or trailing arm where possible (see below), a ``primary/secondary detection'' tag and the distance modulus ($\mu$). In figure~\ref{fig:DistVsRA} we compare our results to values from the literature~\footnotemark[4], split into two diagrams (top panel for the faint branches and bottom panel for the bright branches in both hemispheres). \footnotetext[4]{The SDSS-DR8 measurements shown in this paper for the southern bright arm have been corrected for the difference in the calibration of the red clump absolute magnitude, as pointed out in \citet{slater13} and corrected in \citet{kopos13erratum}. The SDSS-DR5 measurements have been decreased by $0.15 \ \mathrm{mag}$ to match the BHB signal from SDSS, as prescribed in \citet{belok13}.} Remarkably our turnoff point distances are not only in agreement with previous distance measurements to known wraps, but also compatible with the distance predictions for nearby wraps by the models of \citet{penarrub10} and \citet{law10}. In the following section we discuss these findings in detail. \begin{figure*} \centering \includegraphics[width=\textwidth]{Sgr_distance_vs_RA_witherrors.eps} \caption{Photometric main sequence turnoff point distances for the Sagittarius stream along right ascension (northern-leading tail and southern-trailing tail). The top panel shows results for the faint branch, whereas the bottom panel corresponds to the bright arm. Our data (blue circles for leading tails and red circles for trailing tails) are based on the theoretical isochrones by \citet{marigo08} and the corrections by \citet{girardi10}, for a $10.2 \ \mathrm{Gyr}$ old stellar population with $[Fe/H]=-1.0$.
Other distance values correspond to \citet{belok06} (green and grey triangles), \citet{kopos12,kopos13erratum} (green asterisks), \citet{belok13} (pink triangles) and \citet{slater13} (yellow squares for $>3\sigma$ detections and white squares for $<3\sigma$). White circles denote detections that cannot be unambiguously tagged as leading or trailing.} \label{fig:DistVsRA} \end{figure*} \subsection{Comparison with models of the Sgr stream}\label{subsec:compareModels} Using the model predictions shown in figures~\ref{fig:modelJorge} and \ref{fig:modelLaw}, we classify each field as belonging to the leading or trailing arm by matching the distance and the sky position. Of these two models, the model by \citet{penarrub10} seems to recover better the separation in stellar density distribution that gives rise to the northern bifurcation into faint and bright branches (figure~\ref{fig:modelJorge}, upper panels), whereas the model by \citet{law10} seems to reproduce better the projected 2MASS stellar density distribution (figure~\ref{fig:modelJorge}, lower panels). As noted in \citet{belok13}, the northern-trailing arm is more distant and has a steeper distance gradient than predicted by any Sgr model. Although neither model clearly recovers the southern bifurcation, they succeed in reproducing the general distribution observed in that section of the stream. \begin{figure*} \centering \includegraphics[width=\textwidth]{Sgr_positionANDdistance_model_Jorge.eps} \caption{Our data compared to the predictions by the model from \citet{penarrub10}. \emph{Top panel}: Equatorial map with the position of our fields plotted over the simulation. \emph{Bottom panel}: Distance vs RA diagram with our results compared to the model predictions. Fields on the faint branch are denoted with circles, and fields on the bright branch are denoted with squares. Measurements matching the leading arm are denoted in pink, whereas those matching the trailing arm are denoted in light blue. White markers represent detections that cannot be unambiguously tagged as leading or trailing; grey markers in the upper panel correspond to fields with more than one MS detection (they unfold in the bottom panel). The colour scales represent the time since the particles from the simulations became unbound.} \label{fig:modelJorge} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{Sgr_positionANDdistance_model_Law.eps} \caption{Same as in figure~\ref{fig:modelJorge} but for the model from \citet{law10}.} \label{fig:modelLaw} \end{figure*} \subsubsection{Northern leading arm}\label{subsec:SgrNorthLead} From our eighteen measurements on the bright and the faint branches of the northern-leading arm (branches A and B, in the terminology of \citet{belok06}), nine clearly reproduce the distance trends of \citet{belok06} and \citet{belok13} based on red giant and blue horizontal branch stars (blue circles in figure~\ref{fig:DistVsRA}). For the faint branch, we extend westwards the distance measurements beyond those of SDSS, and we provide its most distant detections so far --out to $56\ \mathrm{kpc}$ at RA~$\sim235^{\mathrm{o}}$. Comparing these most distant detections to the distance trend of the models and to the bright branch at a similar right ascension, one can argue that these detections likely lie close to the aphelion of the faint branch (or represent the aphelion themselves), and therefore they are probably a good estimate of its distance.
For the other nine detections, we find that the derived distances are either in mild disagreement with the trends of the leading arm (four cases, white circles in figure~\ref{fig:DistVsRA}) or incompatible with the leading arm (five cases, red and white circles in figure~\ref{fig:DistVsRA}). In the single case of mild disagreement for the faint branch (A1185, RA~$\sim168^{\mathrm{o}}$) the distance is well below the trends of both this and previous work (offset $\approx10 \ \mathrm{kpc}$); however its large uncertainty prevents us from ruling out that it belongs to the faint branch. We will discuss an alternative membership in subsection~\ref{subsec:SgrNorthTrail}. The three cases of mild disagreement for the bright branch (ZwCl1023, A795-2 and A763-1, RA~$\sim150^{\mathrm{o}}$) are slightly above the distance trend of this branch. In particular, fields A795 and A763 also display two additional detections (primary and secondary, respectively) slightly under the expected distance trend. Fields A795 and A763 lie close in the sky (less than $4^{\mathrm{o}}$ apart) and both yield primary and secondary distance measurements very consistent with each other and with this dichotomy. We interpret this as possibly indicating a region of the sky where the bright branch runs broader in distance. Out of the five detections incompatible with the distance trends of the leading arm, we will discuss three (RXJ1524, A1413-2 and ZwCl1215-1) in subsection~\ref{subsec:SgrNorthTrail}, together with the above-mentioned A1185. Regarding the other two (RXJ1347 and ZwCl1215-3, RA~$\sim205^{\mathrm{o}}$ and RA~$\sim185^{\mathrm{o}}$ respectively), we have studied them individually and found the following. On the one hand, ZwCl1215-3 matches the distance to the Virgo Overdensity \citep{bonaca12virgo} when using the appropriate age and metallicity values for the theoretical isochrone, so it is likely a detection of this cloud. On the other hand, RXJ1347 matches the distance and position predicted by the model from \citet{penarrub10} for an older northern-wrap of the leading arm, but also the distances predicted by the two models for the northern-trailing wrap. However we cannot draw conclusions regarding membership for an isolated detection and we lack kinematic data, so at the moment we cannot discriminate between both options (or even a third one). \subsubsection{Northern trailing arm}\label{subsec:SgrNorthTrail} In this subsection we revisit four detections in the galactic northern hemisphere which yield distances incompatible with (or off) the leading arm. These detections are RXJ1524, A1413-2 (red circles in figure~\ref{fig:DistVsRA}), A1185 (compatible with the faint leading branch thanks to its large error bars, but severely offset from the distance trend) and ZwCl1215-1 (white marker at RA~$\sim185^{\mathrm{o}}$ on the bright branch). The three detections in the faint branch (RXJ1524, A1413-2 and A1185) show distances strongly consistent with each other ($\sim17 \ \mathrm{kpc}$). They are also the three fields in our northern sample that lie furthest from the Sgr orbital plane, spanning $60^{\mathrm{o}}$ along the orbit.
Remarkably, both their distances and their positions in the sky are in extremely good agreement with the predictions from the above-mentioned models for the Sgr debris in the northern-trailing arm, but at odds with the claim in \citet{belok13} that branch C (at lower declinations and more distant) is indeed the continuation of the northern-trailing arm for this range of RA (RA~$>160^{\mathrm{o}}$). In this sense it is worth noting that the model from \citet{penarrub10} has predicted two nearly parallel branches for the northern-trailing arm (not only for the northern-leading arm). Therefore it is dynamically feasible that both the measurements for branch C \citep{belok06} and the measurements in this work are tracing the trailing arm in the galactic northern hemisphere, as long as they are probing two different branches. Given the consistency of our distance measurements with each other and with the simulations, and given the distribution of the fields along the stream, we believe our detections in the faint branch are a previously undetected part of a Sgr wrap, most likely a continuation of the section of the northern-trailing arm presented in \citet{belok13}. However, kinematic data or spatially broader photometric coverage are needed to confirm this. Additionally, ZwCl1215-1, which lies on the bright branch, yields a distance measurement compatible with the trend predicted for the northern-trailing arm. However, its position on the sky (on the bright branch) cannot be reconciled with the current models for the trailing tail, nor with the age, metallicity and distance values for the Virgo Overdensity. Thus, its membership and meaning in the puzzle of the halo remain an open question. \subsubsection{Southern trailing arm}\label{subsec:SgrSouthTrail} Our measurements on the bright and the faint branches of the southern-trailing arm reproduce the distance trends of \citet{kopos12, kopos13erratum} and \citet{slater13} based on red clump and turnoff point stars. For the faint branch, we confirm the trend set by the $<3\sigma$ detections in \citet{slater13}, and we slightly extend the distance measurements westwards and eastwards. Contrary to \citet{slater13}, we find no evidence for a difference in distance between the faint and the bright branches of the southern-trailing tail. However, it is possible that such a difference remains hidden in our distance uncertainties. When comparing to the above-mentioned models, we find that the measurements are in general agreement with the predictions for both the faint and the bright branches. However the distance gradient in the faint branch seems to be less steep in the data than in the models, and the branch seems to be thinner in distance than predicted for any value of the probed RA range. In this sense it is worth noting that, in contrast to what happens to many of our northern hemisphere fields, only two of the CMDs in the southern galactic hemisphere show secondary MS detections (RXJ0352 and RXJ2344, at RA~$\sim58^{\mathrm{o}}$ and RA~$\sim356^{\mathrm{o}}$, respectively). The difference between the turnoff point brightness of these double detections does not favour a thick branch, but rather the detection of a previously unknown nearby wrap (see subsection~\ref{subsec:SgrSouthLead}).
\subsubsection{Southern leading arm}\label{subsec:SgrSouthLead} In this subsection we revisit the double detections of subsection~\ref{subsec:SgrSouthTrail}, namely RXJ0352-1 and RXJ2344-2 (RA~$\sim58^{\mathrm{o}}$ and RA~$\sim356^{\mathrm{o}}$, primary and secondary detections, respectively). We show their CMDs and their cross-correlation density diagrams in figures~\ref{fig:CMD-leadSouthRXJ0352} and \ref{fig:CMD-leadSouthRXJ2344}. We find that, using the same isochrone we have used to derive distances to all the Sgr fields, both yield a distance of $\sim13 \ \mathrm{kpc}$. These distances are in excellent agreement with the predictions from the two simulations for the leading arm in the South and also with the trend set by the northern-leading data. We thus claim to have detected the continuation of the northern-leading arm into the southern hemisphere for the first time. The positions of these fields, however, suggest that the leading arm dives into the southern hemisphere at higher declinations than predicted, overlapping in projection with the faint branch of the trailing arm. If the detections of the southern-leading arm or the northern-trailing arm proposed in this paper are confirmed in future work (with kinematic measurements for membership or photometric follow-up for spatial coverage), they would correspond to the closest and the oldest debris of the Sgr stream detected to date. If so, this would mean that our method has succeeded in detecting nearby substructure in regions of the sky that had already been explored. The explanation for such performance would lie in the fact that we use a larger sample of stars (a large part of the main sequence) to identify the overdensities in the CMD than that provided by the usual halo tracers (red clump, red giants or blue horizontal branch stars), and this could increase the contrast in regions of low concentration and thick disk contamination. \begin{figure*} \centering \includegraphics[width=\textwidth]{slr_CMDdensBins_RXJ0352.eps} \caption{\emph{Left}: Dereddened CMD for the westernmost pointing probing the leading arm in the southern hemisphere; the template main sequence function and the turnoff point (green) are plotted for the maximum of the primary cross-correlation. \emph{Right}: Weighted-density diagram resulting from the primary cross-correlation. The maximum (white bin, black cross) marks the top left corner of the template-MS at the position of the southern-leading arm main sequence, whereas the red overdensity at fainter magnitudes corresponds to the southern-trailing arm. } \label{fig:CMD-leadSouthRXJ0352} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{slr_CMDdensBins_RXJ2344.eps} \caption{\emph{Left}: Dereddened CMD for the easternmost pointing probing the leading arm in the southern hemisphere; the template main sequence function and the turnoff point (green) are plotted for the maximum of the secondary cross-correlation. We have randomly removed $50\%$ of the stars contributing to the primary detection, which corresponds to the southern-trailing arm of the Sgr stream. \emph{Right}: Weighted-density diagram resulting from the secondary cross-correlation. The maximum (white bin, black cross) marks the top left corner of the template-MS function at the position of the southern-leading arm's main sequence. The primary detection has been partially removed, and its remnants can be seen as a weak tail at fainter magnitudes and slightly bluer colour.} \label{fig:CMD-leadSouthRXJ2344} \end{figure*} \section{The Palomar 5 stream and the Orphan stream}\label{sec:Pal5Orphan} \subsection{Turnoff point distances to the Pal5 stream and the Orphan stream} The Palomar 5 stream and the Orphan stream are also probed by two of our fields (see pink circles in figure~\ref{fig:mapPointings}). Their CMDs and their corresponding turnoff points are shown in figures~\ref{fig:CMD-Pal5} and \ref{fig:CMD-Orph}, together with their cross-correlation maps. \begin{figure*} \centering \includegraphics[width=\textwidth]{slr_CMDdensBins_A2050.eps} \caption{\emph{Left}: Dereddened CMD for the pointing containing the Palomar 5 stream as its primary feature; the template main sequence function and the turnoff point (green) are plotted for the maximum of the cross-correlation. The secondary main sequence at fainter magnitudes corresponds to the faint arm of the Sgr stream. \emph{Right}: Weighted-density diagram resulting from the cross-correlation. The maximum (white bin, black cross) marks the top left corner of the template-MS function at the position of the Palomar 5 stream's main sequence, whereas the cyan overdensity at fainter magnitudes corresponds to the Sgr stream.} \label{fig:CMD-Pal5} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{slr_CMDdensBins_ZWCL1023.eps} \caption{\emph{Left}: Dereddened CMD for the pointing containing the Orphan stream as its secondary feature; the template main sequence function and the turnoff point (green) are plotted for the maximum of the secondary cross-correlation. We have randomly removed $60\%$ of the stars contributing to the primary detection, which corresponds to the bright arm of the Sgr stream. \emph{Right}: Weighted-density diagram resulting from the secondary cross-correlation. The maximum (white bin, black cross) marks the top left corner of the template-MS function at the position of the Orphan stream's main sequence. The primary detection has been removed, and thus it is not visible in the density diagram. } \label{fig:CMD-Orph} \end{figure*} We use these turnoff point values to calculate photometric distances to each of the streams. Once again we assume single stellar populations characterized by theoretical isochrones but now with $t_{age} = 11.5 \ \mathrm{Gyr}$ \citep{martell02} and metallicity $[Fe/H]=-1.43$ \citep{harris96} in the case of the Pal5 stream, and $t_{age} = 10.0 \ \mathrm{Gyr}$ and metallicity $[Fe/H]=-1.63$ in the case of the Orphan stream \citep{casey13II}. These values correspond to measurements for these particular streams, which are more metal-poor than the Sgr stream for similar ages. Since the absolute brightness of the turnoff point for a given stellar population depends on its age and metallicity, it is important to select representative values in order to derive correct photometric distances. The resulting distances are collected in table~\ref{table:distOthers} and displayed in figures~\ref{fig:Pal5DistVsRA} and \ref{fig:OrphDistVsDEC}, respectively, where they are compared to previous findings by other groups. Both results show good agreement for the adopted age and metallicity values.
\begin{table*} \caption{Position and distances to the Palomar 5 and Orphan streams.} \label{table:distOthers} \centering \begin{tabular}{l c r c c c c} \hline\hline Field & stream & RA (deg) & DEC (deg) & $\mu$ (mag) & $d$ (kpc) & $\Delta d$ (kpc) \\ \hline A2050 & Pal5 & 229.080749 & 0.08773 & 17.0 & 23.1 & 1.1 \\ ZwCl1023 & Orphan & 156.489424 & 12.69030 & 16.6 & 23.8 & 2.2 \\ \hline \end{tabular} \end{table*} \begin{figure*} \centering \includegraphics[width=\textwidth]{Pal5_distance_vs_RA_witherrors.eps} \caption{Photometric main sequence turnoff point distances along right ascension for the Palomar 5 stream. Our data point (blue circle) is based on a single stellar population of age $11.5 \ \mathrm{Gyr}$ and metallicity $[Fe/H]=-1.43$. The other values correspond to \citet{grilldion06pal5} (green triangles) and \citet{vivas06} (pink star). } \label{fig:Pal5DistVsRA} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{Orph_distance_vs_DEC_witherrors.eps} \caption{Photometric main sequence turnoff point distances along declination for the Orphan stream. Our data point (blue circle) is based on the theoretical isochrone for a $10.0 \ \mathrm{Gyr}$ old stellar population with $[Fe/H]=-1.63$. The other values correspond to \citet{belok07orphan} (green triangles), \citet{newb10orphan} (orange diamonds) and \citet{casey13II} (cyan star). } \label{fig:OrphDistVsDEC} \end{figure*} \subsection{Influence of the age/Z isochrone values on the distances}\label{subsec:discussPal5Orph} For the Palomar 5 stream and the Orphan stream, our pencil-beam survey returns only one detection each. We compare their derived distance measurements (table~\ref{table:distOthers}) to previous work (figures~\ref{fig:Pal5DistVsRA} and \ref{fig:OrphDistVsDEC}, respectively) and find that our measurements are consistent with previous work and well within the values expected from interpolation. We interpret this as an independent validation of the stellar population parameters for these streams in the literature: $11.5 \ \mathrm{Gyr}$ and $[Fe/H]=-1.43$ for the Pal5 stream \citep{martell02,harris96}, and $10.0 \ \mathrm{Gyr}$ and $[Fe/H]=-1.63$ for the Orphan stream \citep{casey13II}. Variations in the absolute magnitude for the turnoff point of the theoretical isochrone assigned to a given stellar population (characterized by a given age and metallicity) propagate into the distance modulus, thus yielding variations in the distance. For the Pal5 stream, our distance measurement can tolerate a relative variation of $\Delta d_{rel}\approx0.15$ and still be in agreement with the previous distance measurements; this variation threshold translates into an absolute magnitude variation threshold of $\Delta M\approx0.35$. We find that variations in $\Delta t={}^{+1.7}_{-3.2}$ Gyr (limited by the formation time of the first stars) or in $\Delta Z={}^{+30}_{-6}\cdot10^{-4}$ dex (limited by the minimum metallicity available in the set of theoretical isochrones) for the age and metallicity of the employed theoretical isochrones meet this tolerance criterion. For the Orphan stream, our distance measurement can tolerate a relative variation of $\Delta d_{rel}={}^{+0.24}_{-0.05}$, which translates into $\Delta M={}^{+0.60}_{-0.11}$. Variations in $\Delta t={}^{+3.2}_{-1.3}$ Gyr or in $\Delta Z={}^{+26}_{-3}\cdot10^{-4}$ dex (same limitations as above) for the age and metallicity of the theoretical isochrones also meet this requirement.
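The mapping between a distance tolerance and the corresponding absolute magnitude tolerance used above follows directly from the distance modulus. The following minimal sketch (in Python; it assumes only the standard distance modulus relation and the sign convention that reproduces the quoted pairs, and is not code from this work) illustrates the conversion:
\begin{verbatim}
import math

def delta_M_from_distance_tolerance(delta_d_rel):
    # Magnitude offset corresponding to a relative distance change,
    # using delta_d_rel = 1 - 10**(-delta_M / 5).
    return -5.0 * math.log10(1.0 - delta_d_rel)

def distance_tolerance_from_delta_M(delta_M):
    return 1.0 - 10.0 ** (-delta_M / 5.0)

# The pairs quoted in the text are approximately reproduced:
print(round(delta_M_from_distance_tolerance(0.15), 2))  # ~0.35 (Pal 5)
print(round(delta_M_from_distance_tolerance(0.24), 2))  # ~0.60 (Orphan, upper)
print(round(delta_M_from_distance_tolerance(0.05), 2))  # ~0.11 (Orphan, lower)
\end{verbatim}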
\section{Conclusions}\label{conc} In this work we have used data from two deep cluster surveys, which provide randomly scattered photometric pencil-beams in g' and r', and a field of view of $1\ \mathrm{deg}^2$ per pointing. We have used these data to characterize previously known substructure in the stellar halo of the Milky Way. We analysed these data using two novel ingredients: a PSF-homogenization for the images and a cross-correlation algorithm for the colour-magnitude diagram (CMD). The PSF-homogenization algorithm corrects the inhomogeneous distortion of the sources across an image caused by the telescope's optics. In this way, it recovers the true shapes and size distribution of the sources, improving the performance of any star/galaxy separation procedure, especially at the faint end. The cross-correlation algorithm explores the CMD of each field searching for an overdensity with the shape of a stellar main sequence, and returns the (colour, magnitude) coordinates of the corresponding turnoff point, from which distances can be derived. Through this method, we have shown that it is possible to exploit a two-filter pencil-beam survey to perform such a study of streams or satellites, provided that the contrast-to-noise ratio of the substructure's main sequence is moderately significant. In this way our method bypasses the need for nearby control-fields that can be used to subtract a reference foreground from the target CMDs. Using a set of theoretical isochrones \citep{marigo08,girardi10}, we have calculated the distances to different regions of the Sagittarius stream (faint and bright branches in both the northern and southern arms) and obtained results in good agreement with previous work \citep{belok06,kopos12,kopos13erratum,slater13} (see figure~\ref{fig:DistVsRA}). We detect for the first time the continuation of the northern-leading arm into the southern hemisphere; we find that its distances are in excellent agreement with the predictions by the models in \citet{penarrub10} and \citet{law10}, while the trajectory seems to be located at higher declinations. We also find evidence for a nearby branch of the northern-trailing arm at RA~$>160^{\mathrm{o}}$. Both the distances and the footprint on the sky are in good agreement with the predictions from the models. It is also compatible with being the continuation of the northern-trailing region detected in \citet{belok13} if it turns or broadens to higher latitudes as it evolves westwards, but it does not follow the same distance trend as branch C \citep{belok06}. However, it is feasible that both trends represent the trailing arm in the galactic northern hemisphere if they belong to two different branches, as predicted in the model from \citet{penarrub10}. We have also used age and metallicity measurements from previous work \citep{martell02, harris96,casey13II} to calculate distances to the Pal5 stream and the Orphan stream. These distances are in good agreement with the results in the literature \citep{grilldion06pal5, vivas06,belok07orphan,newb10orphan,casey13II}, attesting --together with the results for previously known regions of the Sgr stream-- to the robustness and accuracy of the cross-correlation. The methods presented in this paper open the possibility of using deeper existing pencil-beam surveys (perhaps originally intended for extragalactic studies) to measure accurate distances (or ages or metallicities, provided that two of the three parameters are known) to streams, globular clusters or dwarf galaxies.
The existence of such pencil-beam surveys, or the reduced requirements of prospective ones, allows for more complete maps of the Galactic halo substructure at a reduced observational cost. \begin{acknowledgements} B.P.D. is supported by NOVA, the Dutch Research School of Astronomy. H.H. and R.vdB. acknowledge support from the Netherlands Organisation for Scientific Research (NWO) grant number 639.042.814. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} \subsection{Basic notations} For any integer $n$, we write $[n]=\{1,\ldots,n\}$, denote by $S_n$ the symmetric group on $n$ elements, and by $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_p) \vdash n$ an integer partition of $n$ with $\ell(\lambda)=p$ parts sorted in decreasing order. If $n_i(\lambda)$ is the number of parts of $\lambda$ that are equal to $i$ (by convention $n_0(\lambda)=0$), then we may write $\lambda $ as $[1^{n_1(\lambda)}\,2^{n_2(\lambda)}\ldots]$ and define $Aut_{\lambda}=\prod_i n_i(\lambda)!$ and $z_\lambda =\prod_i i^{n_i(\lambda)}n_i(\lambda)!$. We denote by $m_\lambda(x)$ and $p_\lambda(x)$ the monomial and power sum symmetric functions indexed by $\lambda$ in the indeterminates $x$. For an $m\times m$ matrix $V$ we write $m_\lambda (V)$ and $p_\lambda(V)$ for the values of these symmetric functions at the eigenvalues of $V$. Finally, for any real number $\alpha$ and $l$ non-negative integers $k_1,\ldots,k_l$, we define the multinomial coefficients: \small $$\binom{\alpha}{k_1,\ldots,k_l} = \frac{\alpha(\alpha-1)\ldots(\alpha-\sum_i k_i+1)}{k_1!k_2!\ldots k_l!}.$$ \normalsize \subsection{Integer arrays} \noindent Given $\lambda$ and $\mu$, two integer partitions of $n$ and a non-negative integer $r$, we define the set $M_{\lambda,\mu}^r$ of $4$-tuples ${\bf A}=(P,P',Q,Q')$ of bi-dimensional arrays of non-negative integers such that: $n_i(\mu) = \sum_{j \geq 0}(Q_{ij} + Q'_{ij}), r =\sum_{i,j}j(Q_{ij} + Q'_{ij})$. Additionally, there exist two indices $i_0$ and $j_0$ such that: $ n_i(\lambda) =\delta_{i,i_0}+ \sum_{j \geq 0}(P_{ij} + P'_{ij}), r =j_0 + \sum_{i,j}j(P_{ij} + P'_{ij}) $. For such an array, we set: $p=|P| = \sum_{i,j\geq 0}P_{ij}, p'=\ell(\lambda)-p-1=|P'|, q=|Q|$, $q'=\ell(\mu)-q=|Q'|$, ${\bf A!} = \prod_{i,j}P_{ij}!\,P'_{ij}!\,Q_{ij}!\,Q'_{ij}!$. We define as well $\mathcal{I}({\bf A})= i_0$ if $r=0$, otherwise: \small \begin{align*} \mathcal{I}({\bf A}) = \binom{i_0}{j_0,j_0}\left [i_0-2j_0 + \frac{\sum_{i,j}{jQ'}(j_0(n-p)-ri_0)}{r^2}+\frac{\sum_{i,j}\left((n-q)j-ir\right)Q'\sum_{i,j}\left(i_0j-j_0(i-1)\right)P}{r^2(n-q-2r)}\right ] \end{align*} \normalsize \subsection{Main results} We look at the quantity $p_n(XUYU^{t})$ (i.e.\ the trace of $(XUYU^{t})^n$) where $X$ and $Y$ are given $m\times m$ real symmetric matrices, $U$ is a random $m\times m$ real matrix whose entries are independent standard normal variables and $U^{t}$ is the transpose of $U$. The mathematical expectation of this quantity is of particular interest for statisticians (see \cite{OU}). We define: \begin{equation} \label{real} P^{\mathbb{R}}_n(X,Y) = \mathcal{E}_{U}(p_n(XUYU^{t})) \end{equation} Similarly, for $X$ and $Y$ given $m\times m$ Hermitian matrices and $U$ a random complex matrix whose entries are independent standard normal variables, we define: \begin{equation} \label{complex} P^{\mathbb{C}}_n(X,Y) = \mathcal{E}_{U}(p_n(XUYU^{*})) \end{equation} where $U^{*}$ is the conjugate transpose of $U$. In \cite{HSS} Hanlon, Stanley and Stembridge proved that both of these quantities can be expressed as a linear combination of the $p_\lambda(X)p_\mu(Y)$ for $\lambda, \mu \vdash n$. Furthermore, they show that the coefficients in these expansions are the {\bf connection coefficients} of two commutative subalgebras of the group algebra of the symmetric group, the {\bf class algebra} (in the case of complex matrices) and the {\bf double coset algebra} (in the case of real matrices).
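As a purely numerical illustration of the quantity in Equation~\ref{real} (not part of the combinatorial development of this paper), the expectation can be approximated by direct Monte Carlo sampling of Gaussian matrices; the following minimal sketch uses NumPy, and the matrix sizes and entries are illustrative assumptions:
\begin{verbatim}
import numpy as np

def P_real_mc(X, Y, n, samples=20000, seed=None):
    # Monte Carlo estimate of E_U[ tr((X U Y U^T)^n) ] for U with
    # independent standard normal real entries.
    rng = np.random.default_rng(seed)
    m = X.shape[0]
    total = 0.0
    for _ in range(samples):
        U = rng.standard_normal((m, m))
        M = X @ U @ Y @ U.T
        total += np.trace(np.linalg.matrix_power(M, n))
    return total / samples

# Illustrative check with small symmetric matrices (values are assumptions):
A = np.diag([1.0, 2.0, 3.0])
B = np.diag([0.5, 1.0, 1.5])
print(P_real_mc(A, B, n=2))
\end{verbatim}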
While these coefficients admit a very nice combinatorial interpretation, their explicit value is unknown in the general case. By interpreting these coefficients as the cardinalities of sets of {\bf locally orientable (partitioned) unicellular hypermaps} and by introducing a new bijective construction between such hypermaps and decorated forests, we provide an explicit expansion of $P^{\mathbb{R}}_n(X,Y)$ in terms of the $m_\lambda(X)m_\mu(Y)$. Namely: \begin{theorem}\label{thm:main} Let $P^{\mathbb{R}}_n(X,Y)$ be defined as in Equation~\ref{real}. Then: \begin{multline*} \label{eq:mainthm} P^{\mathbb{R}}_n(X,Y)= \sum_{\lambda,\mu \vdash n}m_{\lambda}(X)m_{\mu}(Y)Aut_{\lambda}Aut_{\mu}\times \\ \sum_{r\geq 0}\sum_{{\bf A}\in M^r_{\lambda,\mu}}\frac{\mathcal{I}({\bf A})}{{\bf A}!}\frac{r!^2(n-q-2r)!(n-1-p-2r)!}{2^{2r-p'-q'}(n-p-q-2r)!} \prod_{i,j}{\binom{i-1}{j,j}}^{(P+Q)_{i,j}}{\binom{i-1}{j,j-1}}^{(P'+Q')_{i,j}} \end{multline*} \end{theorem} Using a special case of our bijection we show the following result: \begin{theorem}\label{thm:comp} Let $P^{\mathbb{C}}_n(X,Y)$ be defined as in Equation~\ref{complex}. Then: \begin{equation*} \label{eq:complex} P^{\mathbb{C}}_n(X,Y)= n\sum_{\lambda,\mu \vdash n}m_{\lambda}(X)m_{\mu}(Y)\frac{(n-\ell(\lambda))!(n-\ell(\mu))!}{(n+1-\ell(\lambda)-\ell(\mu))!} \end{equation*} \end{theorem} Denote by $I_l$ the $m\times m$ diagonal matrix whose first $l$ diagonal entries are equal to $1$ and the $m-l$ remaining ones equal $0$. In both theorems the special cases $X=I_l$ and $Y=I_m$ are of particular interest. Let $Q^{\mathbb{R}}_n(l,m) = P^{\mathbb{R}}_n(I_l,I_m)$ (resp. $Q^{\mathbb{C}}_n(l,m) = P^{\mathbb{C}}_n(I_l,I_m)$). As a corollary to our main results, we find: \begin{corollary}\label{thm:cor} Let $Q^{\mathbb{R}}_n(l,m)$ be defined as above. Then: \begin{equation*} \label{eq:corthm} \frac{1}{n!}Q^{\mathbb{R}}_n(l,m)=\sum_{r,p,q,p',q'}\binom{l}{p,p'}\binom{m}{q,q'}\binom{n+2r-1}{p+2r-1,q+2r-1}{\binom{n+2r-1}{r,r}}^{-1}2^{2r-p'-q'}\alpha_{r,p,q,p',q'}, \end{equation*} where the summation indices satisfy $p\geq1$ and $q+q'\geq1$, and we have $\alpha_{0,p,q,p',q'} = \delta_{p'0}\delta_{q'0}$ and for $r>0$: \begin{equation*} \alpha_{r,p,q,p',q'} = \sum_{a,b}(-1)^{p'+q'-a-b}\left [\frac{p}{p+a}\left(1+\frac{aq}{(p+2r)(q+b)}\right)\right]\binom{-(p+a)/2}{r}\binom{-(q+b)/2}{r}\binom{p'}{a}\binom{q'}{b} \end{equation*} \end{corollary} \noindent We also have: \begin{corollary}\label{thm:corcomp} Let $Q^{\mathbb{C}}_n(l,m)$ be defined as above. Then: \begin{equation*} \label{eq:corthmcomp} \frac{1}{n!}Q^{\mathbb{C}}_n(l,m)=\sum_{p,q\geq 1}\binom{l}{p}\binom{m}{q}\binom{n-1}{p-1,q-1} \end{equation*} \end{corollary} \subsection{Connection coefficients} For $\lambda \vdash n$, let $\mathcal{C}_{\lambda}$ be the {\bf conjugacy class} in $S_n$ of permutations with cycle type $\lambda$. The cardinality of the conjugacy classes is given by $|C_\lambda| = n!/z_\lambda$. We denote by $\mathcal{C}_{\lambda\lambda}$ the set of permutations of $S_{2n}$ with cycle type $\lambda\lambda =(\lambda_1,\lambda_1,\lambda_2,\lambda_2,\ldots,\lambda_k,\lambda_k)$. We look at {\bf perfect pairings} of the set $[n]\cup [\widehat{n}]=\{1,\ldots, n, \widehat{1},\ldots, \widehat{n}\}$ which we view as fixed-point-free involutions in $S_{2n}$. Note that for two such pairings $f,g \in S_{2n}$, the disjoint cycles of the product $f\circ g$ have repeated lengths, {\it i.e.\ } $f\circ g \in \mathcal{C}_{\lambda\lambda}$ for some $\lambda \vdash n$.
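This repeated-lengths property of products of pairings is easy to verify experimentally; the following minimal sketch (pure Python, with illustrative function names, representing permutations of $\{0,\ldots,2n-1\}$ as lists) checks it on a random example:
\begin{verbatim}
import random
from collections import Counter

def random_pairing(two_n):
    # A fixed-point-free involution on {0, ..., two_n - 1}, stored as a list.
    pts = list(range(two_n))
    random.shuffle(pts)
    f = [0] * two_n
    for a, b in zip(pts[0::2], pts[1::2]):
        f[a], f[b] = b, a
    return f

def cycle_type(perm):
    seen, lengths = set(), []
    for s in range(len(perm)):
        if s in seen:
            continue
        length, x = 0, s
        while x not in seen:
            seen.add(x)
            x = perm[x]
            length += 1
        lengths.append(length)
    return sorted(lengths, reverse=True)

n = 6
f, g = random_pairing(2 * n), random_pairing(2 * n)
fg = [f[g[i]] for i in range(2 * n)]        # the product f o g
ct = cycle_type(fg)
# every cycle length occurs an even number of times,
# i.e. fg has cycle type of the form "lambda lambda"
assert all(v % 2 == 0 for v in Counter(ct).values())
print(ct)
\end{verbatim}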
Additionally, $B_n$ is the {\bf hyperoctahedral group}, i.e.\ the centralizer in $S_{2n}$ of $f_{\star}=(1\widehat{1})(2\widehat{2})\cdots(n\widehat{n})$. We denote by $K_\lambda$ the {\bf double coset} of $B_n$ in $S_{2n}$ consisting of all the permutations $\omega$ of $S_{2n}$ such that $f_\star \circ\omega\circ f_\star\circ\omega^{-1}$ belongs to $\mathcal{C}_{\lambda\lambda}$. We have $|B_n| = 2^nn!$ and $|K_\lambda| = |B_n|^2/(2^{\ell(\lambda)}z_\lambda)$. By abuse of notation, let $C_\lambda$ (resp. $K_\lambda$) also represent the formal sum of its elements in the group algebra $\mathbb{C} S_{n}$ (resp. $\mathbb{C} S_{2n}$). Then, $\{C_\lambda, \lambda \vdash n\}$ (resp. $\{K_\lambda, \lambda \vdash n\}$) forms a basis of the class algebra (resp. double coset algebra, i.e. the commutative subalgebra of $\mathbb{C} S_{2n}$ identified as the Hecke algebra of the Gelfand pair $(S_{2n},B_n)$). For $\lambda$, $\mu$, $\nu \vdash n$, the {\bf connection coefficients} $c^\nu_{\lambda,\mu}$ and $b^\nu_{\lambda,\mu}$ can be defined formally by: \begin{equation} c^\nu_{\lambda,\mu} = [C_\nu]C_\lambda C_\mu, \;\;\;\;\; b^\nu_{\lambda,\mu} = [K_\nu]K_\lambda K_\mu \end{equation} From a combinatorial point of view $c^\nu_{\lambda\mu}$ is the number of ways to write a given permutation $\gamma_\nu$ of $C_\nu$ as the ordered product of two permutations $\alpha\circ\beta$ where $\alpha$ is in $C_\lambda$ and $\beta$ is in $C_\mu$. Similarly, $b^\nu_{\lambda\mu}$ counts the number of ordered factorizations of a given element in $K_\nu$ into two permutations of $K_\lambda$ and $K_\mu$. We have the following results for $\nu = (n)$: \begin{theorem}[\cite{HSS}]\label{thm:HSS} \begin{equation} \label{eq:HSS} P^{\mathbb{R}}_n(X,Y)= \frac{1}{|B_n|}\sum_{\lambda,\mu \vdash n}b_{\lambda, \mu}^{n}p_{\lambda}(X)p_{\mu}(Y) \end{equation} \begin{equation} \label{eq:HSS2} P^{\mathbb{C}}_n(X,Y)= \sum_{\lambda,\mu \vdash n}c^n_{\lambda,\mu}p_{\lambda}(X)p_{\mu}(Y) \end{equation} \end{theorem} We write $x^n_{p,q} = \sum_{\lambda,\mu \vdash n; \ell(\lambda) = p;\ell(\mu) = q}x_{\lambda, \mu}^{n}$ for $x = b$ or $c$. As an immediate corollary we have: \begin{corollary}\label{cor:HSS} \begin{equation} Q^{\mathbb{R}}_n(l,m)= \frac{1}{|B_n|}\sum_{p,q \geq 1}b^n_{p,q}l^pm^q \end{equation} \begin{equation} \label{eq:cor2} Q^{\mathbb{C}}_n(l,m)= \sum_{p,q \geq 1}c^n_{p,q}l^pm^q \end{equation} \end{corollary} \subsection{Computation of connection coefficients} Despite the attention the problem has received and the elegant combinatorial interpretations of the coefficients $c^n_{\lambda,\mu}$ and $b_{\lambda, \mu}^{n}$, no closed formulas are known except for very special cases. Using an inductive argument B\'{e}dard and Goupil \cite{BG} first found a formula for $c^n_{\lambda,\mu}$ in the case $\ell(\lambda)+\ell(\mu)=n+1$, which was later reproved by Goulden and Jackson \cite{GJ92} via a bijection with a set of ordered rooted bicolored trees. Later, using characters of the symmetric group and a combinatorial development, Goupil and Schaeffer \cite{GS} derived an expression for connection coefficients of arbitrary genus as a sum of positive terms (see Biane \cite{PB} for a succinct algebraic derivation; and Poulalhon and Schaeffer \cite{PS}, and Irving \cite{JI} for further generalizations). Closed-form formulas can be found when considering the expansion of the generating series in the RHS of Theorem~\ref{thm:HSS} (resp. of Corollary~\ref{cor:HSS}) in the basis of the $m_{\lambda}(X)m_{\mu}(Y)$ (resp. $\binom{l}{p}\binom{m}{q}$).
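For very small $n$, the combinatorial description of $c^\nu_{\lambda,\mu}$ given above can also be checked by brute force, enumerating the ordered factorizations of a fixed long cycle; the following minimal sketch (pure Python, a sanity check on tiny cases only, not the method developed in this paper) illustrates the definition:
\begin{verbatim}
from itertools import permutations

def cycle_type(perm):
    # Cycle type of a permutation given as a tuple (perm[i] = image of i).
    seen, lengths = set(), []
    for s in range(len(perm)):
        if s in seen:
            continue
        l, x = 0, s
        while x not in seen:
            seen.add(x)
            x = perm[x]
            l += 1
        lengths.append(l)
    return tuple(sorted(lengths, reverse=True))

def connection_coefficient(lam, mu, n):
    # Number of ways to write the fixed n-cycle gamma = (0 1 ... n-1)
    # as alpha o beta with alpha of type lam and beta of type mu.
    gamma = tuple((i + 1) % n for i in range(n))
    count = 0
    for beta in permutations(range(n)):
        if cycle_type(beta) != mu:
            continue
        alpha = [0] * n
        for i in range(n):
            alpha[beta[i]] = gamma[i]      # alpha(beta(i)) = gamma(i)
        if cycle_type(tuple(alpha)) == lam:
            count += 1
    return count

# e.g. factorizations of a 3-cycle into two transpositions:
print(connection_coefficient((2, 1), (2, 1), 3))   # expect 3
\end{verbatim}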
Jackson \cite{J} computed an elegant expression for a generalized version of the RHS of Equation \ref{eq:cor2} whose specialization proves Theorem \ref{thm:comp} when combined with Theorem \ref{thm:HSS}. The proof is algebraic and relies on the theory of the characters of the symmetric group. Schaeffer and Vassilieva in \cite{SV}, Vassilieva in \cite{V} and Morales and Vassilieva in \cite{MV} and \cite{MV2} provided the first purely bijective computations of the generating series in the RHS of (\ref{eq:HSS2}) and (\ref{eq:cor2}).\\ \indent Known results about the coefficients $b_{\lambda, \mu}^{n}$ are much more limited. As shown in \cite{HSS,GJ1}, the generating function in the RHS of Equation \ref{eq:HSS} can be expanded in the basis of {\em zonal polynomials} with simple coefficients. The expression of zonal polynomials in terms of monomial symmetric functions is, however, non-trivial and unknown in the general case. In \cite{GJ3}, Goulden and Jackson conjecture that the coefficients $b_{\lambda, \mu}^{\nu}$ can be expressed as a counting series for hypermaps in locally orientable surfaces with respect to some statistics and prove the conjecture for $\lambda = [1^n]$ and $[1^{n-1}2^1]$.\\ \indent In this paper, we provide the first explicit monomial expansion of the RHS of Equation \ref{eq:HSS} thanks to a new bijection for locally orientable hypermaps. As the method is purely bijective, it provides a combinatorial interpretation of the coefficients in the monomial expansion and allows simple alternative combinatorial computations of some of these coefficients. When specialized to the case of orientable hypermaps the proposed bijection simplifies considerably and becomes equivalent to the bijection proposed in \cite{MV}. This special case provides the monomial expansion of the RHS of Equation \ref{eq:HSS2} and proves Theorem \ref{thm:comp}. Using the proper parameters, the bijection and its special case prove the formulas of Corollaries \ref{thm:cor} and \ref{thm:corcomp}. \section{Combinatorial formulation} \subsection{Unicellular locally orientable hypermaps} From a topological point of view, a {\bf locally orientable hypermap} of $n$ edges can be defined as a connected bipartite graph with black and white vertices. Each edge is composed of two half edges both connecting the two incident vertices. This graph is embedded in a locally orientable surface such that if we cut the graph from the surface, the remaining part consists of connected components called faces or cells, each homeomorphic to an open disk. The map can be represented as a ribbon graph on the plane keeping the incidence order of the edges around each vertex. In such a representation, two half edges can be parallel or cross in the middle. A crossing (or a twist) of two half edges indicates a change of orientation in the map and that the map is embedded in a non-orientable surface (projective plane, Klein bottle, \ldots). We say a hypermap is {\bf rooted} if it has a distinguished half edge. In \cite{GJ1}, it was shown that rooted hypermaps admit a natural formal description involving triples of perfect pairings $(f_1, f_2, f_3)$ on the set of half edges where: \begin{itemize} \item $f_3$ associates half edges of the same edge, \item $f_1$ associates immediately successive (i.e. with no other half edges in between) half edges moving around the white vertices, and \item $f_2$ associates immediately successive half edges moving around the black vertices.
\end{itemize} Formally we label each half edge with an element in $[n]\cup [\widehat{n}]=\{1,\ldots,n,\widehat{1},\ldots,\widehat{n}\}$, labeling the rooted half edge by $1$. We then define $(f_1, f_2, f_3)$ as perfect pairings on this set. Combining the three pairings gives the fundamental characteristics of the hypermap since: \begin{itemize} \item The cycles of $f_3\circ f_1$ give the succession of edges around the white vertices. If $f_3\circ f_1 \in \mathcal{C}_{\lambda\lambda}$ then the degree distribution of the white vertices is $\lambda$ (counting only once each pair of half edges belonging to the same edge), \item The cycles of $f_3\circ f_2$ give the succession of edges around the black vertices. If $f_3\circ f_2 \in \mathcal{C}_{\mu\mu}$ then the degree distribution of the black vertices is $\mu$ (counting only once each pair of half edges belonging to the same edge), \item The cycles of $f_1\circ f_2$ encode the faces of the map. If $f_1\circ f_2 \in \mathcal{C}_{\nu\nu}$ then the degree distribution of the faces is $\nu$. \end{itemize} In what follows, we consider the number $L_{\lambda, \mu}^{n}$ of rooted {\bf unicellular}, or one-face, locally orientable hypermaps with face distribution $\nu=(n)=n^1$, white vertex distribution $\lambda$, and black vertex distribution $\mu$. Let $f_1$ be the pairing $(1\,\widehat{n})(2\,\widehat{1})(3\,\widehat{2})\ldots(n\,\widehat{n-1})$ and $f_2 =f_{\star}= (1\,\widehat{1})(2\,\widehat{2})\ldots(n\,\widehat{n})$. We have $f_1\circ f_2 = (1\,2\,3\ldots n)(\widehat{n}\,\widehat{n-1}\,\widehat{n-2}\ldots\widehat{1}) \in \mathcal{C}_{(n)(n)}$. Then one can show that \begin{equation} L_{\lambda, \mu}^{n} =\,\, \mid \{ f_3 \mbox{ pairings in } S_{2n}([n]\cup [\widehat{n}]) ; f_3\circ f_1 \in \mathcal{C}_{\lambda\lambda}, f_3\circ f_2 \in \mathcal{C}_{\mu\mu} \} \mid. \end{equation} Moreover the following relation between $L^n_{\lambda,\mu}$ and $b^n_{\lambda,\mu}$ holds \cite[Cor 2.3]{GJ1}: \begin{equation} \label{eq:lb} L_{\lambda, \mu}^{n} = \frac{1}{2^nn!}b_{\lambda, \mu}^{n} \end{equation} Thus we can encode the connection coefficients as numbers of locally orientable hypermaps.\\ We can refine the definition of $L_{\lambda, \mu}^{n}$ using the non-negative number $r$ of hat/hat (equivalently non-hat/non-hat) pairs in $f_3$: \begin{equation} L_{\lambda, \mu, r}^{n} =\,\, \mid \{ f_3 \mbox{ pairings in } S_{2n}([n]\cup [\widehat{n}]) ; f_3\circ f_1 \in \mathcal{C}_{\lambda\lambda}, f_3\circ f_2 \in \mathcal{C}_{\mu\mu},\mid f_3([\widehat{n}])\cap[\widehat{n}]\mid = r \} \mid. \end{equation} Obviously $L_{\lambda, \mu}^{n} = \sum_{r \geq 0}L_{\lambda, \mu, r}^{n}$. The following result holds \cite[Prop 4.1]{GJ3}: \begin{equation} \label{eq: or} L_{\lambda, \mu, 0}^{n} = c_{\lambda, \mu}^{n} \end{equation} \begin{example} Figure \ref{fig:example} depicts a locally orientable unicellular hypermap counted by $L_{\lambda, \mu, r}^{n}$ with $\lambda=[1^12^23^14^1]$, $\mu = [3^14^15^1]$ and $r=3$ (at this stage we disregard the geometric shapes around the vertices). \end{example} \begin{figure}[htbp] \begin{center} \includegraphics[width=0.33\textwidth]{LPexample.pdf} \caption{A unicellular locally orientable hypermap} \label{fig:example} \end{center} \end{figure} \subsection{Partitioned locally orientable hypermaps} We consider locally orientable hypermaps where we partition the set of white vertices (resp. black). In terms of the pairings, this means we ``color'' the cycles of $f_3\circ f_1$ (resp. $f_3\circ f_2$) allowing repeated colors but imposing that the two cycles corresponding to each white (resp. black) vertex have the same color. The following definition in terms of set partitions of $[n]\cup [\widehat{n}]$ makes this more precise. \begin{definition}[Locally orientable partitioned hypermaps] We consider the set $\mathcal{LP}_{\lambda,\mu}^{n}$ of triples $(f_3,\pi_1,\pi_2)$ where $f_3$ is a pairing on $[n]\cup [\widehat{n}]$, $\pi_1$ and $\pi_2$ are set partitions on $[n]\cup [\widehat{n}]$ with blocks of even size and of respective types $2\lambda$ and $2\mu$ (or {\em half types} $\lambda$ and $\mu$) with the constraint that $\pi_i$ $(i=1,2)$ is stable under $f_i$ and $f_3$. Any such triple is called a {\bf locally orientable partitioned hypermap} of type $(\lambda,\mu)$. In addition, let $LP_{\lambda, \mu}^{n} = \mid \mathcal{LP}_{\lambda,\mu}^{n} \mid$. Finally, we denote by $\mathcal{LP}_{\lambda,\mu,r}^{n}$ and $LP_{\lambda, \mu, r}^{n}$ the set and the number of such partitioned hypermaps with $\mid f_3([\widehat{n}])\cap[\widehat{n}]\mid = r$. \end{definition} \begin{remark} The analogous notion of partitioned or colored map is common in the study of orientable maps (see {\it e.g.\ } \cite{L},\cite{GN}). Recently Bernardi in \cite{B} extended the approach in \cite{L} to find a bijection between locally orientable partitioned maps and orientable partitioned maps with a distinguished {\em planar submap}. As far as we know \cite[Sect. 7]{B} this technique does not extend to locally orientable hypermaps. \end{remark} \begin{lemma} The number of hat numbers in a block is equal to the number of non-hat numbers.\end{lemma} \begin{proof} If a non-hat number $i$ belongs to block $\pi_1^k$ then $f_1(i) = \widehat{i-1}$ also belongs to $\pi_1^k$. The same argument applies to blocks of $\pi_2$ with $f_2(i)=\widehat{i}$.\end{proof} \begin{example} As an example, the locally orientable hypermap in Figure \ref{fig:example} is partitioned into the blocks: \begin{eqnarray} \nonumber \pi_1 &=& \{\{\widehat{12},1,\widehat{3},4,\widehat{7},8,\widehat{11},12\};\{\widehat{1},2,\widehat{6},7,\widehat{8},9\};\{\widehat{2},3,\widehat{10},11\};\{\widehat{4},5,\widehat{5},6,\widehat{9},10\}\}\\ \nonumber \pi_2 &=& \{\{1,\widehat{1},3,\widehat{3},6,\widehat{6},10,\widehat{10}\};\phantom{\{}\{2,\widehat{2},7,\widehat{7},11,\widehat{11}\};\phantom{\{}\{4,\widehat{4},5,\widehat{5},8,\widehat{8},9,\widehat{9},12,\widehat{12}\}\} \end{eqnarray} (blocks are depicted by the geometric shapes around the vertices; all the vertices belonging to a block have the same shape). \end{example} Let $\overline{R}_{\lambda,\mu}$ be the number of unordered partitions $\pi=\{\pi^{1}, \ldots, \pi^{p}\}$ of the set $[\ell(\lambda)]$ such that $\mu_j = \sum_{i\in \pi^j} \lambda_i$ for $1\leq j \leq \ell(\mu)$. Then for the monomial and power symmetric functions $m_{\lambda}$ and $p_{\lambda}$ we have $p_{\lambda} = \sum_{\mu \succeq \lambda} Aut_{\mu} \overline{R}_{\lambda,\mu} m_{\mu}$ \cite[Prop.7.7.1]{EC2}. We use this to obtain a relation between $L_{\lambda,\mu}^n$ and $LP_{\lambda,\mu}^n$. \begin{proposition} \label{prop1} For partitions $\nu, \rho \vdash n$ and $r\geq 0$ we have $LP_{\nu,\rho,r}^n = \sum_{\lambda,\mu} \overline{R}_{\lambda\nu}\overline{R}_{\mu\rho}L_{\lambda,\mu,r}^n$, where $\lambda$ and $\mu$ are refinements of $\nu$ and $\rho$ respectively. \end{proposition} \begin{proof} Let $(f_3,\pi_1,\pi_2) \in \mathcal{LP}_{\nu,\rho,r}^n$.
If $f_3\circ f_1 \in \mathcal{C}_{\lambda\lambda}$ and $f_3\circ f_2 \in \mathcal{C}_{\mu\mu}$ then by definition of the set partitions we have that $\lambda$ and $\mu$ are refinements of $type(\pi_1)=\nu$ and $type(\pi_2)=\rho$ respectively. Thus, we can classify the elements of $\mathcal{LP}_{\nu,\rho,r}^n$ by the cycle types of $f_3\circ f_1$ and $f_3\circ f_2$, {\it i.e.\ } $\mathcal{LP}_{\nu,\rho,r}^n=\bigcup_{\lambda,\mu} \mathcal{LP}_{\nu,\rho,r}^n(\lambda,\mu)$, where $$ \mathcal{LP}^n_{\nu,\rho,r}(\lambda,\mu) = \{ (f_3,\pi_1,\pi_2) \in \mathcal{LP}^n_{\nu,\rho,r} ~|~ (f_3\circ f_1,f_3\circ f_2) \in \mathcal{C}_{\lambda\lambda} \times \mathcal{C}_{\mu\mu}\}. $$ If $LP_{\nu,\rho,r}^n(\lambda,\mu) = |\mathcal{LP}_{\nu,\rho,r}^n(\lambda,\mu)|$ then it is easy to see that $LP_{\nu,\rho,r}^n(\lambda,\mu)= \overline{R}_{\lambda,\nu}\overline{R}_{\mu,\rho} L_{\lambda,\mu,r}^{n}$. \end{proof} The change of basis between $p_{\lambda}$ and $m_{\lambda}$ immediately relates the generating series for $L^n_{\lambda,\mu,r}$ and the generating series for $LP^n_{\lambda,\mu,r}$ in monomial symmetric functions: \begin{equation} \label{lem:lp} \sum_{\lambda,\mu \vdash n}L_{\lambda, \mu,r}^{n}p_{\lambda}({\bf x})p_{\mu}({\bf y}) = \sum_{\lambda,\mu \vdash n}Aut_{\lambda}Aut_{\mu}LP_{\lambda, \mu,r}^{n}m_{\lambda}({\bf x})m_{\mu}({\bf y}) \end{equation} Summing over $r$ gives: \begin{equation} \sum_{\lambda,\mu \vdash n}L_{\lambda, \mu}^{n}p_{\lambda}({\bf x})p_{\mu}({\bf y}) = \sum_{\lambda,\mu \vdash n}Aut_{\lambda}Aut_{\mu}LP_{\lambda, \mu}^{n}m_{\lambda}({\bf x})m_{\mu}({\bf y}) \end{equation} For $l$ and $p$ non-negative integers, we note $(l)_p = l(l-1)\ldots(l-p+1)$. We have $m_\lambda(I_l) = (l)_{\ell(\lambda)}/Aut_\lambda$ and: \begin{equation} \label{eq:nodeg} \sum_{p,q}L_{p, q, r}^{n}l^pm^q =\sum_{p,q}LP_{p, q, r}^{n}(l)_p(m)_q, \end{equation} where $LP_{p,q,r}^n = \sum_{\lambda,\mu \vdash n; \ell(\lambda) = p;\ell(\mu) = q}LP_{\lambda, \mu, r}^{n}$ (a similar definition applies to $L_{p,q,r}^n$). \begin{definition} Let $\mathcal{LP}({\bf A})$ be the set of cardinality $LP({\bf A})$ of partitioned locally orientable hypermaps with $n$ edges where ${\bf A}= (P,P',Q,Q')$ are bidimensional arrays such that for $i,j \geq 0$: \begin{itemize} \item $P_{ij}$ (resp. $P'_{ij}$) is the number of blocks of $\pi_1$ of half size $i$ that do not contain $1$ and such that: \begin{itemize} \item[(i)] its maximum {\bf non-hat} number is paired to a {\bf hat} (resp. {\bf non-hat}) number by $f_3$, \item[(ii)] the block contains $j$ pairs $\{t,f_3(t)\}$ where both $t$ and $f_3(t)$ are {\bf non-hat} numbers. \end{itemize} \item $Q_{ij}$ (resp. $Q'_{ij}$) is the number of blocks of $\pi_2$ of half size $i$ such that: \begin{itemize} \item[(i)] the maximum {\bf hat} number of the block is paired to a {\bf non-hat} (resp. {\bf hat}) number by $f_3$, \item[(ii)] the block contains $j$ pairs $\{t,f_3(t)\}$ where both $t$ and $f_3(t)$ are {\bf hat} numbers. \end{itemize} \end{itemize} \end{definition} As a direct consequence we get: \begin{equation}\label{eq:lpa}{LP}_{\lambda,\mu,r}^{n} = \sum_{{\bf A} \in M_{\lambda,\mu}^r}LP({\bf A})\end{equation} \begin{example} The partitioned hypermap on Figure \ref{fig:example} belongs to $\mathcal{LP}({\bf A})$ for $P = E_{3,1}+ E_{2,0}$, $P' = E_{3,1}$, $Q= E_{5,1}+E_{4,1}$, $Q' = E_{3,1}$, where $E_{t,u}$ is the elementary array with entry $1$ at position $(t,u)$ and $0$ elsewhere.
\end{example} \subsection{Permuted forests and reformulation of the main theorem} We show that partitioned locally orientable hypermaps admit a nice bijective interpretation in terms of some recursive forests defined as follows: \begin{definition}[Rooted bicolored forests of degree {\bf A}]\label{def:forest} In what follows we consider the set $\mathcal{F}({\bf A})$ of permuted rooted forests composed of: \begin{itemize} \item a bicolored identified ordered {\bf seed tree} with a white root vertex, \item other bicolored ordered trees, called {\bf non-seed trees} with either a white or a black root vertex, \item each vertex of the forest has three kind of ordered descendants: {\bf tree-edges} (connecting a white and a black vertex), {\bf thorns} (half edges connected to only one vertex) and {\bf loops} connecting a vertex to itself. The two {\em extremities} of the loop are part of the ordered set of descendants of the incident vertex and therefore the loop can be intersected by thorns, edges and other loops as well. \end{itemize} The forests in $\mathcal{F}({\bf A})$ also have the following properties: \begin{itemize} \item the root vertices of the non-seed trees have at least one descending loop with one extremity being the rightmost descendant of the considered vertex, \item the total number of thorns (resp. loops) connected to the white vertices is equal to the number of thorns (resp. loops) connected to the black ones, \item there is a bijection between thorns connected to white vertices and the thorns connected to black vertices. The bijection between thorns will be encoded by assigning the same symbolic {\em latin} labels $\{a,b,c,\ldots\}$ to thorns associated by this bijection, \item there is a mapping that associates to each loop incident to a white (resp. black) vertex, a black (resp. white) vertex ${\rm v}$ such that the number of white (resp. black) loops associated to a fixed black (resp. white) vertex ${\rm v}$ is equal to its number of incident loops. We will use symbolic {\em greek} labels $\{\alpha,\beta,\ldots\}$ to associate loops with vertices except for the maximal loop (i.e. the loop whose rightmost extremity is the rightmost descendant of the considered vertex) of a root vertex ${\rm r}$ of the non-seed trees. In this case, we draw an arrow (\begin{tikzpicture} \draw[very thick,densely dashed] [->] (0,0) to (0.8,0); \end{tikzpicture}) outgoing from the root vertex ${\rm r}$ and incoming to the vertex associated with the loop. Arrows are non ordered, and : \item the ascendant/descendant structure defined by the edges of the forest and the arrows defined above is a tree structure rooted in the root of the seed tree. \end{itemize} Finally the degree ${\bf A}$ of the forest is given in the following way: \begin{itemize} \item[(vii)] $P_{ij}$ (resp $P'_{ij}$) counts the number of non root white vertices (resp. white root vertices excluding the root of the seed tree) of degree $i$ with a total number of $j$ loops, \item[(viii)] $Q_{ij}$ (resp $Q'_{ij}$) counts the number of non root black vertices (resp. black root vertices) of degree $i$ with a total number of $j$ loops. \end{itemize} \end{definition} \begin{example} As an example, Figure \ref{fig:exrecons} depicts two permuted forests. 
The one on the left is of degree ${\bf A}=(P,P',Q,Q')$ for $P = E_{3,1}+E_{2,0}$, $P' = E_{3,1}$, $Q= E_{5,1} + E_{4,1}$, and $Q' = E_{3,1}$, while the one on the right is of degree ${\bf A^{(2)}}=(P^{(2)},P'^{(2)},Q^{(2)},Q'^{(2)})$ for $P^{(2)}= E_{4,1}$, $P'^{(2)}=\{0\}_{i,j}$, $Q^{(2)}=E_{7,2}$, and $Q'^{(2)}=E_{4,2}$. \begin{figure}[htbp] \begin{center} \includegraphics[width=60mm]{ForestExample.pdf}\hspace{15mm} \includegraphics[width=30mm]{InjExample.pdf} \caption{Two Permuted Forests} \label{fig:exrecons} \end{center} \end{figure} \end{example} \begin{lemma} \label{lem: lag} Let $F({\bf A})$ be the cardinality of the set of forests $\mathcal{F}({\bf A})$ defined above. We have: \begin{eqnarray} \label{eq:degrees} F({\bf A}) = \frac{\mathcal{I}({\bf A})}{{\bf A}!}\frac{r!^2(n-q-2r)!(n-1-p-2r)!}{2^{2r-p'-q'}(n-p-q-2r)!}\prod_{i,j}{\binom{i-1}{j,j}}^{(P+Q)_{i,j}}{\binom{i-1}{j,j-1}}^{(P'+Q')_{i,j}} \end{eqnarray} Let $F_{p,p',q,q',r}$ be the total number of forests with degree ${\bf A} = (P,P',Q,Q')$ for $p=|P|+1$ (in the following formula we count the root of the seed tree as an internal white vertex), $p'=|P'|$, $q=|Q|$, $q'=|Q'|$ and $r = \sum_{i,j} j(Q + Q')_{i,j}$. We have: \begin{eqnarray} \label{eq:nodegrees} F_{p,p',q,q',r} = \frac{n!}{p!p'!q!q'!}\binom{n+2r-1}{p+2r-1,q+2r-1}{\binom{n+2r-1}{r,r}}^{-1}2^{2r-p'-q'}\alpha_{r,p,q,p',q'} \end{eqnarray} \normalsize \end{lemma} \begin{proof} The proof is postponed to Annex~\ref{sec: ann}. \end{proof} \noindent {\bf Reformulation of the main theorem}\\ In order to show Theorem \ref{thm:main}, the next sections are dedicated to the proof of the following stronger result: \begin{theorem} \label{thm:ref} There is a bijection $\Theta_{\bf A}:\mathcal{LP}({\bf A}) \to \mathcal{F}({\bf A})$ and $LP({\bf A}) = F({\bf A})$. \end{theorem} \noindent Theorem \ref{thm:comp} is a direct consequence of the above result by setting $r=0$ and using Equation \ref{eq: or}. Using Equation \ref{eq:nodeg}, Corollaries \ref{thm:cor} and \ref{thm:corcomp} are also direct consequences of Theorem \ref{thm:ref}. \section{Bijection between partitioned locally orientable unicellular hypermaps and permuted forests} We proceed with the description of the bijective mapping $\Theta_{\bf A}$ between partitioned locally orientable hypermaps and permuted forests of degree ${\bf A}$. Let $(f_3,\pi_1,\pi_2)$ be a partitioned hypermap in $\mathcal{LP}({\bf A})$. The first step is to define a set of white and black vertices with labeled ordered half edges such that: \begin{itemize} \item each white vertex is associated to a block of $\pi_1$ and each black vertex is associated to a block of $\pi_2$, \item the number of half edges connected to a vertex is half the cardinality of the associated block, and \item the half edges connected to the white (resp. black) vertices are labeled with the non hat (resp. hat) integers in the associated blocks so that moving clockwise around the vertices the integers are sorted in increasing order. \end{itemize} Then we define an ascendant/descendant structure on the vertices. A black vertex $b$ is the descendant of a white one $w$ if the maximum half edge label of $b$ belongs to the block of $\pi_1$ associated to $w$. Similar rules apply to define the ascendant of each white vertex except the one containing the half edge label $1$.\\ If black vertex $b^d$ (resp. white vertex $w^d$) is a descendant of white vertex $w^a$ (resp. black vertex $b^a$) and has maximum half edge label $m$ such that $f_3(m)$ is the label of a half edge of $w^a$ (resp.
$b^a$), i.e. $f_3(m^b)$ is a non hat (resp. hat) number, then we connect these two half edges to form an edge. Otherwise $f_3(m)$ is a hat (resp. non hat) number and we draw an arrow (\begin{tikzpicture} \draw[very thick,densely dashed] [->] (0,0) to (0.8,0); \end{tikzpicture}) between the two vertices. Note that descending edges are ordered but arrows are not.\\ \begin{lemma} \label{lem:tree}The above construction defines a tree structure rooted in the white vertex with half edge $1$. \end{lemma} \begin{proof} Let black vertices $b_1$ and $b_2$ associated to blocks $\pi_2^{b_1}$ and $\pi_2^{b_2}$ be respectively a descendant and the ascendant of white vertex $w$ associated to $\pi_1^w$. We denote by $m^{b_1}$, $m^{b_2}$ and $m^{w}$ their respective maximum half edge labels (hat, hat, and non hat) and assume $m^{b_1} \neq \widehat{n}$. As $\pi_1^{w}$ is stable by $f_1$, then $f_1(m^{b_1})$ is a non hat number in $\pi_1^w$ not equal to $1$. It follows that $m^{b_1} < f_1(m^{b_1}) \leq m^w < f_2(m^w)$. Then as $\pi_2^{b_2}$ is stable by $f_2$, it contains $f_2(m^w)$ and $f_2(m^w) \leq m^{b_2}$. Putting everything together yields $m^{b_1} < m^{b_2}$. In a similar fashion, assume white vertices $w_1$ and $w_2$ are descendant and ascendant of black vertex $b$. If we note $m^{w_1}$, $m^{w_2}$ and $m^{b}$ their maximum half edge labels (non hat, non hat, and hat) with $m^{b} \neq \widehat{n}$, one can show that $m^{w_1} < m^{w_2}$. Finally, as $f_1(\widehat{n}) =1$, the black vertex with maximum half edge $\widehat{n}$ is descendant of the white vertex containing the half edge label $1$. \end{proof} \begin{example} Using the hypermap of Figure \ref{fig:example} we get the set of vertices and ascendant/descendant structure as described on Figure \ref{fig:verttree}. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.8\textwidth]{Bijvertree.pdf} \caption{Construction of the ascendant/descendant structure} \label{fig:verttree} \end{center} \end{figure} \end{example} Next we proceed by linking half edges connected to the same vertex if their labels are paired by $f_3$ to form loops. Furthermore, we assign greek symbolic labels from $\{\alpha,\beta,\ldots\}$ to all the non maximal loops and the applicable vertices in the following way: \begin{itemize} \item if $i$ and $f_3(i)$ are the numeric labels of a non maximal loop connected to a white (resp. black) vertex, we assign the same label to the loop and the black (resp. white) vertex associated to the block of $\pi_2$ (resp. $\pi_1$) also containing $i$ and $f_3(i)$, \item a vertex has at most one such label. \end{itemize} As a natural consequence of these two conditions, several loops may share the same label. \begin{lemma}\label{lem:loops} The number of loops connected to the vertex labeled $\alpha$ is equal to its number of incoming arrows plus the number of loops labeled $\alpha$ incident to other vertices in the forest. \end{lemma} \begin{proof} The result is a direct consequence of the fact that in each block the number of hat/hat pairs is equal to the number of non hat/non hat pairs.\end{proof} As a final step we define a bijection between the remaining half edges (thorns) connected to the white vertices and the ones connected to the black vertices. If two remaining thorns are paired by $f_3$ then these two thorns are given the same label from $\{a,b,\ldots\}$. Then all the original integer labels are removed. Denote by $\widetilde{F}$ the resulting forest. 
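The ascendant/descendant rule above is easy to experiment with. The following short Python sketch (an illustrative aside, not part of the formal construction) implements just that rule, encoding the hat label $\widehat{i}$ as the integer $-i$ and using the blocks of the running example of Figure \ref{fig:example}; its output is consistent with the tree structure guaranteed by Lemma \ref{lem:tree}.
\begin{verbatim}
# Illustrative sketch only: recover the ascendant/descendant structure from
# the blocks of pi_1 and pi_2.  The hat label \widehat{i} is encoded as -i.
def ascendant_structure(pi1, pi2):
    parent_of_black, parent_of_white = {}, {}
    for b, blk in enumerate(pi2):
        m = max((x for x in blk if x < 0), key=abs)   # maximum hat label of this black vertex
        parent_of_black[b] = next(w for w, wblk in enumerate(pi1) if m in wblk)
    for w, wblk in enumerate(pi1):
        if 1 in wblk:                                 # the block containing 1 is the root
            continue
        m = max(x for x in wblk if x > 0)             # maximum non-hat label of this white vertex
        parent_of_white[w] = next(b for b, blk in enumerate(pi2) if m in blk)
    return parent_of_black, parent_of_white

# Blocks of the running example (Figure 1), hats encoded as negative integers.
pi1 = [{-12, 1, -3, 4, -7, 8, -11, 12}, {-1, 2, -6, 7, -8, 9},
       {-2, 3, -10, 11}, {-4, 5, -5, 6, -9, 10}]
pi2 = [{1, -1, 3, -3, 6, -6, 10, -10}, {2, -2, 7, -7, 11, -11},
       {4, -4, 5, -5, 8, -8, 9, -9, 12, -12}]
print(ascendant_structure(pi1, pi2))
# -> ({0: 2, 1: 0, 2: 0}, {1: 2, 2: 1, 3: 0}); a tree rooted at the white block containing 1
\end{verbatim}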
\begin{example} We continue with the hypermap from Figure \ref{fig:example} and perform the final steps of the construction as described on Figure \ref{fig:loopsperm} (note that the geometric shapes are here for reference only, they do not play any role in the final object $\widetilde{F}$). \begin{figure}[htbp] \begin{center} \includegraphics[width=0.8\textwidth]{BijloopsPerm.pdf} \caption{Final steps of the permuted forest construction} \label{fig:loopsperm} \end{center} \end{figure} \end{example} \noindent As a direct consequence of definition \ref{def:forest}, $\widetilde{F}$ belongs to $\mathcal{F}(A)$. \section{Proof of the bijection} We show that mapping $\Theta_{\bf A}:(f_3,\pi_1,\pi_2) \mapsto \widetilde{F}$ is indeed one-to-one. \subsection{Injectivity} We start with a forest $\widetilde{F}$ in $\mathcal{F}({\bf A})$ and show that there is at most one triple $(f_3,\pi_1,\pi_2)$ in $\mathcal{LP}({\bf A})$ such that $\Theta_{\bf A}(f_3,\pi_1,\pi_2) = \widetilde{F}$. The first part is to notice that within the construction in $\Theta_{\bf A}$ the original integer label of the leftmost descendant (thorn, half loop or edge) of the root vertex of the seed tree is necessarily $1$ (this root is the vertex containing $1$ and the labels are sorted in increasing order from left to right).\\ Assume we have recovered the positions of integer labels $1,\widehat{1},2,\widehat{2},\ldots,i$, for some $1 \leq i \leq n-1$, non hat number. Then four cases can occur: \begin{itemize} \item $i$ is the integer label of a thorn of latin label $a$. In this case $f_3(i)$ is necessarily the integer label of the thorn connected to a black vertex also labeled with $a$. But as the blocks of $\pi_2$ are stable by both $f_3$ and $f_2$ then $\widehat{i} = f_2(i)$ is the integer label of one of the descendants of the black vertex with thorn $a$. As these labels are sorted in increasing order, necessarily, $\widehat{i}$ labels the leftmost descendant with no recovered integer label, \item $i$ is the integer label of a half loop of greek label $\alpha$. Then, in a similar fashion as above $\widehat{i}$ is necessarily the leftmost unrecovered integer label of the black vertex with symbolic label $\alpha$, \item $i$ is the integer label of a half loop with no symbolic label (i.e, either $i$ or $f_3(i)$ is the maximum label of the considered white vertex). Then, $\widehat{i}$ is necessarily the leftmost unrecovered integer label of the black vertex at the other extremity of the arrow outgoing from the white vertex containing integer label $i$, \item $i$ is the integer label of an edge and $\widehat{i}$ is necessarily the leftmost unrecovered integer label of the black vertex at the other extremity of this edge. \end{itemize} Finally, using similar four cases for the black vertex containing the descendant with integer label $\widehat{i}$ and the fact that blocks of $\pi_1$ are stable by $f_3$ and $f_1$, the thorn, half loop or edge with integer label $i+1 = f_1(\widehat{i})$ is uniquely determined as well. We continue with the procedure described above until we fully recover all the original labels $[n]\cup [\hat{n}]$. According to the construction of $\widetilde{F}$ the knowledge of all the integer labels uniquely determines the blocks of $\pi_1$ and $\pi_2$. The pairing $f_3$ is uniquely determined by the loops, edges and thorns with same latin labels as well. \begin{example} Assume the permuted forest $\widetilde{F}$ is the one on the right hand side of Figure \ref{fig:exrecons}. 
The steps of the reconstruction are summarized in Figure \ref{fig:recons}. We get that the unique triple $(f_3,\pi_1,\pi_2)$ such that $\Theta_{\bf A}(f_3,\pi_1,\pi_2) = \widetilde{F}$ is: \begin{eqnarray} \nonumber f_3 &=& (1\,\, 4)(\widehat{1}\,\, \widehat{8})(2\,\, 9)(\widehat{2}\,\, \widehat{3})(3\,\, \widehat{11})(\widehat{4}\,\, \widehat{10})(5\,\, 7)(\widehat{5}\,\, 6)(\widehat{6}\,\, 11)(\widehat{7}\,\, \widehat{9})(8\,\, 10)\\ \nonumber \pi_1 &=& \{\{\widehat{11},1,\widehat{1},2,\widehat{2},3,\widehat{3},4,\widehat{7},8,\widehat{8},9,\widehat{9},10\};\{\widehat{4},5,\widehat{5},6,\widehat{6},7,\widehat{10},11\}\}\\ \nonumber \pi_2 &=& \{\{2,\widehat{2},3,\widehat{3},5,\widehat{5},6,\widehat{6},7,\widehat{7},9,\widehat{9},11,\widehat{11}\};\{1,\widehat{1},4,\widehat{4},8,\widehat{8},10,\widehat{10}\}\} \end{eqnarray} \begin{figure}[htbp] \begin{center} \includegraphics[width=1\textwidth]{InjRecons.pdf} \caption{Recovery of the integer labels and the partitioned map} \label{fig:recons} \end{center} \end{figure} \end{example} \subsection{Surjectivity} To prove that $\Theta_{\bf A}$ is surjective, we have to show that the reconstruction procedure of the previous section always finishes with a valid output.\\ Assume the procedure comes to an end at step $i$ before all the integer labels are recovered (where $i$ is for example non hat, the hat case having a similar proof). It means that prior to this step we have already recovered all the labels of vertex $v^i$ identified as the one containing $\widehat{i}$ (or $i+1$). This is impossible by construction provided $v^i$ is not the root vertex of the seed tree. Indeed the number of times a vertex is identified for the next step is equal to its number of thorns plus its number of edges plus twice the number of loops that have the same greek label as $v^i$ plus twice the incoming arrows. Using Property {\em (iv)} of Definition \ref{def:forest}, we have that the sum of the two latter numbers is twice the number of loops of $v^i$. As a consequence, the total number of times the recovering process goes through $v^i$ is exactly (and thus never more than) the degree of $v^i$. If $v$ is the root vertex of the seed tree the situation is slightly different due to the fact that we recover label $1$ before we start the procedure. To ensure that the procedure does not terminate prior to its end, we need to show that the $\mid v \mid$-th time the procedure goes through the root vertex is right after all the labels of the forest have been recovered. Again, this is always true because: \begin{itemize} \item the last element of a vertex to be recovered is the label of the maximum element of the associated block. Consequently, all the elements of a vertex are recovered only when all the elements of the descending vertices (through both arrows and edges) are recovered. \item property {\em (v)} of Definition \ref{def:forest} states that the ascendant/descendant structure involving both edges and arrows is a tree rooted in $v$. As a result, the procedure goes the $v$-th time through $v$ only when all the elements of all the other vertices are recovered. \end{itemize} \section{Additional results} The bijection proved in the previous sections may be used directly to derive efficiently some additional results that may not be obvious from the formula in Theorem \ref{thm:main}. 
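All the enumerative quantities used here can also be checked by brute force for very small $n$, directly from the definition of $L_{\lambda,\mu,r}^{n}$ in terms of pairings. The short Python sketch below (an illustrative aside, not part of the proofs) enumerates every pairing $f_3$ of $[n]\cup[\widehat{n}]$, extracts $\lambda$ and $\mu$ from the cycle types of $f_3\circ f_1$ and $f_3\circ f_2$, and takes $r$ to be the number of hat/hat pairs of $f_3$ as in the prose definition; the hat label $\widehat{i}$ is encoded as $-i$.
\begin{verbatim}
# Brute-force tabulation of L^n_{lambda,mu,r} for small n (illustrative only).
from collections import Counter

def pairings(elems):
    # all perfect matchings of the list `elems`, returned as involution dicts
    if not elems:
        yield {}
        return
    first, rest = elems[0], elems[1:]
    for i, partner in enumerate(rest):
        for m in pairings(rest[:i] + rest[i + 1:]):
            m = dict(m)
            m[first], m[partner] = partner, first
            yield m

def cycle_type(perm, ground):
    seen, parts = set(), []
    for x in ground:
        if x in seen:
            continue
        length, y = 0, x
        while y not in seen:
            seen.add(y)
            length += 1
            y = perm[y]
        parts.append(length)
    return tuple(sorted(parts, reverse=True))

n = 3
nonhat = list(range(1, n + 1))
hat = [-i for i in nonhat]                # \widehat{i} encoded as -i
ground = nonhat + hat

f1 = {}
for i in nonhat:                          # f1 = (1 n^)(2 1^)...(n n-1^)
    j = -(i - 1) if i > 1 else -n
    f1[i], f1[j] = j, i
f2 = {i: -i for i in nonhat}
f2.update({-i: i for i in nonhat})        # f2 = (1 1^)(2 2^)...(n n^)

L = Counter()
for f3 in pairings(ground):
    lam = cycle_type({x: f3[f1[x]] for x in ground}, ground)[::2]  # type of f3 o f1 is lambda lambda
    mu = cycle_type({x: f3[f2[x]] for x in ground}, ground)[::2]   # type of f3 o f2 is mu mu
    r = sum(1 for x in hat if f3[x] < 0) // 2                      # number of hat/hat pairs
    L[(lam, mu, r)] += 1

for key in sorted(L):
    print(key, L[key])
\end{verbatim}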
\subsection{Coefficient of $m_\lambda(X)m_n(Y)$} Using the bijection between partitioned locally orientable hypermaps and permuted forests, one can show: \begin{theorem} For $\lambda \vdash n$ the coefficient of $m_\lambda(X)m_n(Y)$ in the monomial expansion of $P^{\mathbb{R}}_n(X,Y)$ is given by \begin{equation} \label{eq: mlamn} [m_\lambda(X)m_n(Y)]P^{\mathbb{R}}_n(X,Y) = \binom{n}{\lambda}(2\lambda-1)!!, \end{equation} where $(2\lambda-1)!! = \prod_i(2\lambda_i-1)!!$ and $(2\lambda_i-1)!! = (2\lambda_i-1)(2\lambda_i-3)\dots1$. \end{theorem} \begin{proof} Let $F_n$ be the number of forests composed of exactly one white (the root of the seed tree) and one black vertex. Obviously $F_n = (2n-1)!!$ as any such forest is fully described as a pairing of the $2n$ children around the white and the black vertex (a loop is the pairing of two children of the same vertex, thorns with latin letters are pairings of one black and one white child and an edge is the pairing of the rightmost child of the black vertex to one of the child of the white root.)\\ \indent But a forest with one white (root) vertex and $\ell(\lambda)$ black vertices of degree distribution $\lambda$ ($F_\lambda$ denotes the number of such forests) can be seen as a $\ell(\lambda)$-tuple of forests with one white and one black vertex of degree $\{\lambda_i\}_{1\leq i \leq \ell(\lambda)}$. The $i$-th forest is composed of the $i$-th black vertex with its descendants and one new white vertex with a subset of descendants of the original one's containing: \begin{itemize} \item(i) the edge linking the white vertex and the $i$-th black vertex (if any), \item(ii) the thorns in bijection with the thorns of the $i$-th black vertex, \item(iii) the loops mapped to the $i$-th black vertex. \end{itemize} The construction is bijective if we distinguish in the initial forest the black vertices with the same degree ($Aut_\lambda$ ways to do it) and we keep track in the tuple of forests the initial positions of the descendants of the white vertices within the initial forest ($\binom{n}{\lambda}$ possible choices). We get: \begin{equation} Aut_\lambda F_\lambda = \binom{n}{\lambda}\prod_i F_{\lambda_i} =\binom{n}{\lambda}(2\lambda-1)!! \end{equation} \begin{figure}[h] \begin{center} \includegraphics[width=12cm]{MiniBij.pdf} \end{center} \caption{Splitting a forest of black degree distribution $\lambda$ into a $\ell(\lambda)$-tuple of two vertex forests for $\lambda = [1^14^25^1]$.} \label{forests} \end{figure} \end{proof} \begin{rem} Using the formula of Theorem \ref{thm:main}, we have: \begin{equation} \sum_{Q,Q'}\prod_{i,j}\frac{2^{Q'_{ij}-2j(Q_{ij}+Q'_{ij})}}{Q_{ij}!Q'_{ij}!}{\binom{i-1}{j,j}}^{Q_{ij}}{\binom{i-1}{j,j-1}}^{Q'_{ij}} = \frac{(2\lambda-1)!!}{\lambda!Aut_\lambda}, \end{equation} where the sum runs over two dimensional arrays $Q$ and $Q'$ with $n_i(\lambda)=\sum_{j \geq 0}Q_{ij} + Q'_{ij}$. \end{rem} \subsection{Coefficient of $m_{n-a,1^{a}}(X)m_{n-a,1^{a}}(Y)$} The number $F_{(n-a,1^a),(n-a,1^a)}$ of forests with $a+1$ white (including the root of the seed tree) and $a+1$ black vertices, both of degree distribution $(n-a,1^{a})$, can be easily obtained from the number of two-vertex forests $F_{n-2a}$. We consider $2a\leq n-1$, it is easy to show that the coefficient is equal to $0$ otherwise. 
Two cases occur: either the white vertex with degree $n-a$ is the root and there are $\binom{n-a}{a}\times\binom{n-a-1}{a}$ ways to add the black and the white descendants of degree $1$, or the root is a white vertex of degree $1$ and there are $\binom{n-a-1}{a-1}\times\binom{n-a-1}{a}$ ways to add the remaining white vertices and the $a$ black vertices of degree $1$ (see Figure \ref{forests2}). We have: \begin{equation} F_{(n-a,1^a),(n-a,1^a)} = F_{n-2a}\binom{n-a-1}{a}\left [\binom{n-a}{a}+\binom{n-a-1}{a-1}\right ] \end{equation} \begin{figure}[h] \begin{center} \includegraphics[width=8cm]{minibij2.pdf} \end{center} \caption{Two possible decompositions of forests of degree $(n-a,1^{a})$.} \label{forests2} \end{figure} As a result, we obtain : \begin{theorem} For {\em $a$} non negative integer such that $2a\leq n-1$, the coefficient of $m_{n-a,1^{a}}(X)m_{n-a,1^{a}}(Y)$ in the monomial expansion of $P^{\mathbb{R}}_n(X,Y)$ is given by: \begin{align} \nonumber [m_{n-a,1^{a}}(X)m_{n-a,1^{a}}(Y)]P^{\mathbb{R}}_n(X,Y) &= Aut_{n-a,1^{a}}^2F_{(n-a,1^a),(n-a,1^a)}\\ &= n(n-2a)\left(\frac{(n-a-1)!}{(n-2a)!}\right)^2(2n-4a-1)!! \end{align} \end{theorem} \section{Annex: enumeration of permuted forests} \label{sec: ann} \subsection{General considerations} \label{subsec: andeg} In this section we prove Lemma \ref{lem: lag} and compute the cardinality of the set $\mathcal{F}({\bf A})$. To this extent we slightly modify the considered forests and define the set $\mathcal{G}({\bf A})$ of cardinality $G({\bf A})$. The definition of these forests differs from the one of $\mathcal{F}({\bf A})$ as in $\mathcal{G}({\bf A})$: \begin{itemize} \item[(i)] There is no bijection between the thorns connected to the black vertices and the one connected to the white vertices. \item[(ii)] All the non seed trees with a white (resp. black) root are labeled by an integer in $\{1,\ldots,p'\}$ (resp. $\{1,\ldots,q'\}$). \item[(iii)] All the loops incident to a white (resp. black) vertex whose right extremity is not the rightmost descendant of the root vertex of a non seed-tree are labeled with an integer of $\{1,\ldots, r-p'\}$ (resp. $\{1,\ldots, r-q'\}$) according to the labeling of the non seed trees. If $k_i$ $(0\leq i \leq p')$ (resp. $(1\leq i \leq q')$) is the number of such loops in white (resp . black) rooted tree $i$ (tree $0$ is the seed tree) we use integers $\sum_{j\leq i-1}k_j+1, \sum_{j\leq i-1}k_j +2,\ldots, \sum_{j\leq i-1}k_j+k_i$ to label these loops. Within a tree loops are labeled in a classical order, say according to the depth first traversal of the tree. \item[(iv)] Instead of the "coloration" of the loops with the greek letters, additional unordered labeled arrows are connected to the black and white vertices. The labels of the arrows connected to a given vertex $v$ are the ones of the loops colored by $v$. \end{itemize} \begin{example} The right hand side forest of Figure \ref{fig:lag} belongs to $\mathcal{G}({\bf A})$. The left hand side one is a forest of $\mathcal{F}({\bf A})$ with an equivalent coloration of the loops with the greek letters. 
\begin{figure}[htbp] \begin{center} \includegraphics[width=0.7\textwidth]{LagrangeFigures.pdf} \caption{A forest of $\mathcal{G}({\bf A})$ (right) and its ``equivalent'' in $\mathcal{F}({\bf A})$ (left).} \label{fig:lag} \end{center} \end{figure} \end{example} \begin{lemma} \label{lem: f2g} $\mathcal{G}({\bf A})$ as defined above is in a $p'!q'!$ to $(n-p-q-2r)!$ relation with $\mathcal{F}({\bf A})$: $$F({\bf A}) = \frac{(n-p-q-2r)!}{p'!q'!}G({\bf A})$$ \end{lemma} \begin{proof}(Sketch) The factor $(n-p-q-2r)!$ clearly comes from the fact that we removed the bijection between the thorns. There are $p'!q'!$ ways of labeling the non-seed trees of a forest in $\mathcal{F}$, i.e. there are at most $p'!q'!$ forests in $\mathcal{G}$ for $(n-p-q-2r)!$ forests in $\mathcal{F}$. Then two different labelings of the non seed trees yield two different forests of $\mathcal{G}$. Obviously, permuting the labels of the root tree of some subforests that are non identical up to the various labels and non linked by a dotted arrow to the same vertex automatically yields distinct forests in $\mathcal{G}$. Next, one can notice that in any subforest there is exactly one more labeled arrow than labeled loops. As a result, a set of ``identical'' subforests has labeled arrows indexed by a loop that does not belong to the considered set and permuting their roots' labels yields another object of $\mathcal{G}$. \end{proof} \subsection{Demonstration of Equation \ref{eq:degrees}} The computation of $G(A)$ is performed thanks to the multivariate Lagrange theorem for implicit functions. For given partitions $\lambda, \mu \vdash n$ and integer $r \geq 0$ we consider the generating function $H$: \begin{equation} H = x_0\sum_{{\bf A}\in M^r_{\lambda,\mu}} G({\bf A}) x^{p}y^{q}\frac{x'^{p'}}{p'!}\frac{y'^{q'}}{q'!}\frac{{f_1}^{r-q'}}{(r-q')!}\frac{{f_2}^{r-p'}}{(r-p')!}\mathbf{t}^{P}\mathbf{t'}^{P'}\mathbf{u}^{Q}\mathbf{u'}^{Q'}. \end{equation} Variables $x_0$, $x$, $x'$, $y$, $y'$, $f_1$ and $f_2$ mark respectively the root of the seed tree, non root white vertices, root white vertices (excluding the root of the seed tree), non root black vertices, root black vertices, labeled arrows incident to white vertices and labeled arrows incident to black vertices. Furthermore, $\mathbf{t}$, $\mathbf{t'}$, $\mathbf{u}$, $\mathbf{u'}$ are two dimensional indeterminates such that $$\mathbf{a}^{X} = \prod_{i,j} a_{ij}^{X_{ij}}$$ \noindent for $a \in \{t,t',u,u'\}$ and $X \in \{P,P',Q,Q'\}$. We define $W$, $W'$, $B$ and $B'$ as the generating functions of subforests descending from an internal white, root white (excluding the root of the seed tree), internal black and root black vertex. \begin{remark} According to the above definition the degree of the root vertex of a subforest marked by $W$ is one plus the number of descendants while the degree of the root vertex marked by $H$ is only the number of its descendants. As a result, $W \neq H$. \end{remark} \noindent Additionally, we note $A_w$ and $A_b$ the generating functions of the labeled arrows incident to the white (respectively black) vertices. Trivially, $A_w = f_1$ and $A_b = f_2$.
According to the construction rules of the considered permuted forest, we have the following relation between $W$ and the other considered generating functions: \begin{equation} W = x\sum_{i\geq 1,j,k\geq 0}t_{i,j}\binom{i-1}{2j}(1+B)^{i-1-2j}(2j-1)!!\frac{B'^k}{k!}\frac{A_w^{j-k}}{(j-k)!} \end{equation} Indeed, assume the degree of an internal white vertex is $i$, its number of incident loops is $j$ and this white vertex has exactly $k$ descending non-seed trees. This vertex has $i-1$ ordered children. Among them $i-1-2j$ can be either a thorn or an edge also incident to an internal black vertex. The remaining $2j$ are the extremities of loops that can be paired in $(2j-1)!!$ different ways. Then $j$ loops and $k$ incident non seed trees necessarily imply $j-k$ incident labeled arrows. The factors $1/k!$ and $1/(j-k)!$ are needed as descending non seed trees and incoming labeled arrows are not ordered. This formula simplifies using $(2j-1)!! = 2^{-j}(2j)!/j!$ and $\sum_{0\leq k \leq j}B'^kA_w^{j-k}/k!(j-k)! = (B'+A_w)^j/j!$. One gets: \begin{equation*} W = x\sum_{i\geq 1,j \geq 0}t_{i,j}\binom{i-1}{j,j}\left (\frac{B'+A_w}{2}\right)^j(1+B)^{i-1-2j}. \end{equation*} We define the function $\Phi_W$ by: \begin{equation*} W = x\,\Phi_W(H,W,B,W',B',A_w,A_b). \end{equation*} Similarly, \begin{align*} B &= y\sum_{i\geq 1,j \geq 0}u_{i,j}\binom{i-1}{j,j}\left (\frac{W'+A_b}{2}\right)^j(1+W)^{i-1-2j},\\ B &= y\,\Phi_B(H,W,B,W',B',A_w,A_b). \end{align*} Also, \begin{equation*} W'= x'\sum_{i\geq 1,j,k\geq 0}t'_{i,j}\binom{i-1}{2j-1}(1+B)^{i-2j}(2j-1)!!\frac{B'^k}{k!}\frac{A_w^{j-k}}{(j-k)!}. \end{equation*} Then \begin{align*} W' &= 2x'\sum_{i\geq 1,j \geq 0}t'_{i,j}\binom{i-1}{j,j-1}\left (\frac{B'+A_w}{2}\right)^j(1+B)^{i-2j},\\ W' &= x'\,\Phi_{W'}(H,W,B,W',B',A_w,A_b). \end{align*} A similar equation applies to $B'$: \begin{align*} B' &= 2y'\sum_{i\geq 1,j \geq 0}u'_{i,j}\binom{i-1}{j,j-1}\left (\frac{W'+A_b}{2}\right)^j(1+W)^{i-2j},\\ B' &= y'\,\Phi_{B'}(H,W,B,W',B',A_w,A_b). \end{align*} The generating function $H$ satisfies the following equation: \begin{align*} H &= x_0\binom{i_0}{j_0,j_0}\left (\frac{B'+A_w}{2}\right)^{j_0}(1+B)^{{i_0}-2j_0},\\ H &= x_0\,\Phi_{H}(H,W,W',B,B',A_w,A_b). \end{align*} Define $\Phi_{A_w} = \Phi_{A_b} = 1$ and ${\bf \Phi} = (\Phi_H,\Phi_W,\Phi_{W'},\Phi_B,\Phi_{B'},\Phi_{A_w},\Phi_{A_b}) =(\Phi_1,\Phi_2,\ldots,\Phi_7)$ and ${\bf x} = (x_0,x,x',y,y',f_1,f_2)$.
We have \begin{equation} (H,W,B,W',B',A_w,A_b) = {\bf x}\,{\bf \Phi}(H,W,B,W',B',A_w,A_b) \end{equation} \noindent Using the multivariate Lagrange inversion formula for monomials (see \cite[1.2.9]{GJCE}) we find: \begin{eqnarray} \nonumber {k_1 k_2 k_3 k_4 k_5 k_6 k_7}\,[{\bf x}^{\bf k}]\,\,H =\hspace{0mm} \label{eq lagrange}\sum_{\{\mu_{ij}\}}\mid\mid \delta_{ij}k_j-\mu_{ij}\mid\mid\prod_{1\leq i \leq 7}[H^{\mu_{i1}}W^{\mu_{i2}}W'^{\mu_{i3}}B^{\mu_{i4}}B'^{\mu_{i5}}A_w^{\mu_{i6}}A_b^{\mu_{i7}}]{ \Phi}_i^{k_i}, \end{eqnarray} \noindent where $\mid\mid \cdot \mid\mid$ denotes the determinant, ${\bf k} = (k_1,k_2,k_3,k_4,k_5,k_6,k_7)$, $\delta_{ij}$ is the Kronecker delta function and the sum is over all $7\times 7$ integer matrices $\{\mu_{ij}\}$ such that: \begin{minipage}{1\linewidth}\centering \begin{itemize} \item $\mu_{ij} = 0$ for $i\geq 6$ or $j=1$ or $i=j$ \item $\mu_{12} = \mu_{13} = \mu_{17} = \mu_{23} = \mu_{27}=\mu_{32}=\mu_{37}=\mu_{45} =\mu_{46} = \mu_{54} =\mu_{56}= 0$ \item $\mu_{42} + \mu_{52} = k_2 $ \item $\mu_{43} + \mu_{53} = k_3$ \item $\mu_{14} + \mu_{24}+\mu_{34} = k_4$ \item $\mu_{15} + \mu_{25}+\mu_{35} = k_5$ \item $\mu_{16} + \mu_{26}+\mu_{36} = k_6$ \item $\mu_{47} + \mu_{57} = k_7$ \end{itemize} \end{minipage} We look at the solution when ${\bf k} = (1,p,p',q,q',r-q',r-p')$. Namely when $\mu$ is of the form\\ \begin{minipage}{1\linewidth} \centering $\mu= \left[ \begin{array}{ccccccc} 0 & 0 & 0 & a & b & g&0\\ 0 & 0 & 0 & c & d & h&0\\ 0 & 0 & 0 & q-a-c & q'-b-d & r-q'-g-h&0\\ 0 & e & f & 0 & 0 & 0&i\\ 0 & p-e & p'-f & 0 & 0 & 0&r-p'-i\\ 0 & 0 & 0 & 0 & 0 & 0&0\\ 0 & 0 & 0 & 0 & 0 & 0&0 \end{array} \right], $ \end{minipage} \noindent where the parameters $a,b,c,d,e,f,g,h,i$ are non negative integers. In this case the determinant $\Delta = \mid\mid \delta_{ij}k_j-\mu_{ij}\mid\mid$ reads:\\ \begin{minipage}{1\linewidth} \centering $\Delta= \left | \begin{array}{ccccccc} 1 & 0 & 0 & -a & -b & -g&0\\ 0 & p & 0 & -c & -d & -h&0\\ 0 & 0 & p' & a+c-q & b+d-q' & g+h+q'-r&0\\ 0 & -e & -f & q & 0 & 0&-i\\ 0 & e-p & f-p' & 0 & q' & 0&i+p'-r\\ 0 & 0 & 0 & 0 & 0 & r-q'&0\\ 0 & 0 & 0 & 0 & 0 & 0&r-p' \end{array} \right |, $ \end{minipage} \noindent that immediately reduces as\\ \begin{minipage}{1\linewidth} \centering $\Delta=(r-p')(r-q') \left | \begin{array}{cccc} p & 0 & -c & -d\\ 0 & p' & a+c-q & b+d-q'\\ -e & -f & q & 0\\ e-p & f-p' & 0 & q' \end{array} \right |. $ \end{minipage} Define $\Delta'=\Delta/(r-p')(r-q')$. Looking at the dependence in $\bf t,t',u,u'$ we have\\ \begin{align} \label{eq: G}\nonumber G({\bf A}) = &\frac{{p'!q'!(r-p')!(r-q')!}}{pp'qq'}\sum\Delta' [B^{q-a-c}B'^{q'-b-d}A_w^{r-q'-g-h}{\bf t'}^{P'}]\Phi_{W'}^{p'}\\ &\times[B^{c}B'^{d}A_w^{h}{\bf t}^P]\Phi_W^p[W^{e}W'^{f}A_b^{i}{\bf u}^Q]\Phi_B^q[W^{p-e}{W'}^{p'-f}A_b^{r-p'-i}{\bf u'}^{Q'}]\Phi_{B'}^{q'}[B^{a}B'^{b}A_w^{g}]\Phi_H \end{align} \noindent The sum runs over the parameters $a,b,c,d,e,f,g,h,i$. Next step is to compute \begin{equation*} \Phi_W^p = \sum_{{\bf \eta} \in co(p)}\binom{p}{{\bf \eta} }\prod_{i,j}\left [t_{i,j}\binom{i-1}{j,j}\left (\frac{B'+A_w}{2}\right)^j(1+B)^{i-1-2j}\right ]^{\eta_{i,j}}, \end{equation*} where $co(p)$ denotes the sets of the two dimensional compositions $\bf \eta$ of $p$ i.e the two dimensional arrays such that $\sum_{i,j} \eta_{i,j} = p$. 
We have \begin{equation*} \Phi_W^p = \sum_{{\bf \eta} \in co(p)}\binom{p}{{\bf \eta} }\left (\frac{B'+A_w}{2}\right)^{\sum_{i,j}j\eta_{i,j}}(1+B)^{\sum_{i,j}(i-1-2j)\eta_{i,j}}\prod_{i,j}\left [ \binom{i-1}{j,j}t_{i,j}\right ]^{\eta_{i,j}}. \end{equation*} Then: \begin{equation*} [B^{c}B'^{d}A_w^{h}{\bf t}^P]\Phi_W^p = \frac{p!}{P!}\left (\frac{1}{2}\right)^{\sum_{i,j}jP_{i,j}}\binom{\sum_{i,j}jP_{i,j}}{d}\binom{\sum_{i,j}(i-1-2j)P_{i,j}}{c}\prod_{i,j}\binom{i-1}{j,j}^{P_{i,j}}. \end{equation*} \noindent The equation above is true for $h = \sum_{i,j}jP_{i,j}-d$. Otherwise $[B^{c}B'^{d}A_w^{h}{\bf t}^P]\Phi_W^p =0$ and other values of $h$ lead to a zero contribution to the global sum. In a similar fashion, one finds \begin{align*} &[W^{e}W'^{f}A_b^{i}{\bf u}^Q]\Phi_B^q =\\ &\hspace{3cm} \frac{q!}{Q!}\left (\frac{1}{2}\right)^{\sum_{i,j}jQ_{i,j}}\binom{\sum_{i,j}jQ_{i,j}}{f}\binom{\sum_{i,j}(i-1-2j)Q_{i,j}}{e}\prod_{i,j}\binom{i-1}{j,j}^{Q_{i,j}},\\ &[B^{q-a-c}B'^{q'-b-d}A_w^{r-q'-g-h}{\bf t'}^{P'}]\Phi_{W'}^{p'} =\\ &\hspace{3cm}\frac{p'!}{P'!}\left (\frac{1}{2}\right)^{\sum_{i,j}jP'_{i,j}-p'}\binom{\sum_{i,j}jP'_{i,j}}{q'-b-d}\binom{\sum_{i,j}(i-2j)P'_{i,j}}{q-a-c}\prod_{i,j}\binom{i-1}{j,j-1}^{P'_{i,j}},\\ &[W^{p-e}{W'}^{p'-f}A_b^{r-p'-i}{\bf u'}^{Q'}]\Phi_{B'}^{q'} =\\ &\hspace{3cm}\frac{q'!}{Q'!}\left (\frac{1}{2}\right)^{\sum_{i,j}jQ'_{i,j}-q'}\binom{\sum_{i,j}jQ'_{i,j}}{p'-f}\binom{\sum_{i,j}(i-2j)Q'_{i,j}}{p-e}\prod_{i,j}\binom{i-1}{j,j-1}^{Q'_{i,j}},\\ &[B^{a}B'^{b}A_w^{g}]\Phi_H = \left (\frac{1}{2}\right)^{j_0}\binom{j_0}{b}\binom{i_0-2j_0}{a}\binom{i_0}{j_0,j_0}. \end{align*} \begin{remark}One can check that there is exactly one set of parameters $g,h,i$ that leads to a non zero contribution as $(r-q'-g-h) + g + h = \sum_{i,j}j(P_{i,j}+P'_{i,j}) +j_0 - d -b -(q'-b-d)$ and $(r-p'-i) + i = \sum_{i,j}j(Q_{i,j}+Q'_{i,j}) - f -(p'-f)$. \end{remark} Substituting these expressions in Equation \ref{eq: G} and summing over the parameters $a,b,c,d,e,f$ leads to the explicit formulation of $G(A)$.\\ The computation can be performed as follows: \begin{itemize} \item First as $\sum_{i,j}jQ_{i,j}+\sum_{i,j}jQ'_{i,j} = \sum_{i,j}jP_{i,j}+\sum_{i,j}jP'_{i,j}+j_0 = r$, we have\\ \begin{align*} G({\bf A}) =&\frac{{p'!^2q'!^2p!q!(r-p')!(r-q')!}}{pp'qq'2^{2r-p'-q'}{\bf A}!}\binom{i_0}{j_0,j_0}\sum_{a,b,c,d,e,f}\Delta'\binom{i_0-2j_0}{a}\binom{\sum_{i,j}(i-2j)P'_{i,j}}{q-a-c}\binom{j_0}{b}\\ &\times\binom{\sum_{i,j}jP_{i,j}}{d}\binom{\sum_{i,j}(i-1-2j)P_{i,j}}{c}\binom{\sum_{i,j}jQ_{i,j}}{f}\binom{\sum_{i,j}(i-1-2j)Q_{i,j}}{e}\\ &\times\binom{\sum_{i,j}jP'_{i,j}}{q'-b-d}\binom{\sum_{i,j}jQ'_{i,j}}{p'-f}\binom{\sum_{i,j}(i-2j)Q'_{i,j}}{p-e}\prod_{i,j}{\binom{i-1}{j,j}}^{(P+Q)_{i,j}}{\binom{i-1}{j,j-1}}^{(P'+Q')_{i,j}}. \end{align*} \item Then we sum over $a$ by rewriting the determinant $\Delta'$ as \begin{align*} \Delta' &= \left | \begin{array}{cccc} 0 & 0 & a & b\\ p & 0 & -c & -d\\ -e & -f & q & 0\\ e-p & f-p' & 0 & q' \end{array} \right | = a \underbrace{\left | \begin{array}{ccc} p & 0 & -d\\ -e & -f & 0\\ e-p & f-p' & q' \end{array} \right |}_\text{$\Delta''$} -b\underbrace{\left | \begin{array}{ccc} p & 0 & -c\\ -e & -f & q\\ e-p & f-p' & 0 \end{array} \right |}_\text{$\Delta'''$}. 
\end{align*} \noindent Thanks to Vandermonde's convolution \begin{align*} \sum_{a}b\Delta'''\binom{i_0-2j_0}{a}\binom{\sum_{i,j}(i-2j)P'_{i,j}}{q-a-c} &= b\Delta'''\binom{i_0-2j_0+\sum_{i,j}(i-2j)P'_{i,j}}{q-c},\\ \sum_{a}a\Delta''\binom{i_0-2j_0}{a}\binom{\sum_{i,j}(i-2j)P'_{i,j}}{q-a-c}&= \sum_{a}\Delta''(i_0-2j_0)\binom{i_0-2j_0-1}{a-1}\binom{\sum_{i,j}(i-2j)P'_{i,j}}{q-a-c},\\ &= \Delta''(i_0-2j_0)\binom{i_0-2j_0+\sum_{i,j}(i-2j)P'_{i,j}-1}{q-c-1}. \end{align*}\\ \noindent We have \begin{align*} &G({\bf A}) =\frac{{p'!^2q'!^2p!q!(r-p')!(r-q')!}}{pp'qq'2^{2r-p'-q'}{\bf A}!}\binom{i_0}{j_0,j_0}\prod_{i,j}{\binom{i-1}{j,j}}^{(P+Q)_{i,j}}{\binom{i-1}{j,j-1}}^{(P'+Q')_{i,j}} \\ &\times\sum_{b,c,d,e,f}\left[\Delta''(i_0-2j_0)\binom{i_0-2j_0+\sum_{i,j}(i-2j)P'_{i,j}-1}{q-c-1}-b\Delta'''\binom{i_0-2j_0+\sum_{i,j}(i-2j)P'_{i,j}}{q-c}\right]\\ &\times\binom{j_0}{b}\binom{\sum_{i,j}jP_{i,j}}{d}\binom{\sum_{i,j}(i-1-2j)P_{i,j}}{c}\binom{\sum_{i,j}jQ_{i,j}}{f}\binom{\sum_{i,j}(i-1-2j)Q_{i,j}}{e}\\ &\times\binom{\sum_{i,j}jP'_{i,j}}{q'-b-d}\binom{\sum_{i,j}jQ'_{i,j}}{p'-f}\binom{\sum_{i,j}(i-2j)Q'_{i,j}}{p-e}. \end{align*} \item We proceed with the summation over $b$: \begin{align*} &G({\bf A}) =\frac{{p'!^2q'!^2p!q!(r-p')!(r-q')!}}{pp'qq'2^{2r-p'-q'}{\bf A}!}\binom{i_0}{j_0,j_0}\prod_{i,j}{\binom{i-1}{j,j}}^{(P+Q)_{i,j}}{\binom{i-1}{j,j-1}}^{(P'+Q')_{i,j}} \\ &\times\sum_{c,d,e,f}\left[\Delta''(i_0-2j_0)\binom{i_0-2j_0+\sum_{i,j}(i-2j)P'_{i,j}-1}{q-c-1}\binom{\sum_{i,j}jP'_{i,j}+j_0}{q'-d}-\right.\\ &\hspace{5cm}\left. j_0\Delta'''\binom{i_0-2j_0+\sum_{i,j}(i-2j)P'_{i,j}}{q-c}\binom{\sum_{i,j}jP'_{i,j}+j_0-1}{q'-d-1}\right]\\ &\times\binom{\sum_{i,j}jP_{i,j}}{d}\binom{\sum_{i,j}(i-1-2j)P_{i,j}}{c}\binom{\sum_{i,j}jQ_{i,j}}{f}\binom{\sum_{i,j}(i-1-2j)Q_{i,j}}{e}\\ &\times\binom{\sum_{i,j}jQ'_{i,j}}{p'-f}\binom{\sum_{i,j}(i-2j)Q'_{i,j}}{p-e}. \end{align*} \item Then notice that $\Delta''' = -c\Delta''''+pq(p'-f)$ with $\Delta'''' = \left | \begin{array}{cc} -e & -f \\ -p & -p' \end{array} \right |$ and $i_0-2j_0+\sum_{i,j}(i-2j)P'_{i,j}+\sum_{i,j}(i-1-2j)P_{i,j} = n-2r-p$. Summing over $c$ gives: \begin{align*} &G({\bf A}) =\frac{{p'!^2q'!^2p!q!(r-p')!(r-q')!}}{pp'qq'2^{2r-p'-q'}{\bf A}!}\binom{i_0}{j_0,j_0}\prod_{i,j}{\binom{i-1}{j,j}}^{(P+Q)_{i,j}}{\binom{i-1}{j,j-1}}^{(P'+Q')_{i,j}} \\ &\times\sum_{d,e,f}\left[\Delta''(i_0-2j_0)\binom{n-2r-p-1}{q-1}\binom{\sum_{i,j}jP'_{i,j}+j_0}{q'-d}-\right.\\ &\hspace{3cm}\left. pq(p'-f)j_0\binom{n-2r-p}{q}\binom{\sum_{i,j}jP'_{i,j}+j_0-1}{q'-d-1}-\right.\\ &\hspace{3cm}\left.\Delta''''j_0\sum_{i,j}{(i-1-2j)P_{i,j}}\binom{n-2r-p-1}{q-1}\binom{\sum_{i,j}jP'_{i,j}+j_0-1}{q'-d-1}\right]\\ &\times\binom{\sum_{i,j}jP_{i,j}}{d}\binom{\sum_{i,j}jQ_{i,j}}{f}\binom{\sum_{i,j}(i-1-2j)Q_{i,j}}{e}\binom{\sum_{i,j}jQ'_{i,j}}{p'-f}\binom{\sum_{i,j}(i-2j)Q'_{i,j}}{p-e}. \end{align*} \item As $\Delta'' = -d\Delta''''-fq'p$, summing over $d$ yields \begin{align*} &G({\bf A}) =\frac{{p'!^2q'!^2p!q!(r-p')!(r-q')!}}{pp'qq'2^{2r-p'-q'}{\bf A}!}\binom{i_0}{j_0,j_0}\prod_{i,j}{\binom{i-1}{j,j}}^{(P+Q)_{i,j}}{\binom{i-1}{j,j-1}}^{(P'+Q')_{i,j}}\\ &\times\sum_{e,f}-\left[\Delta''''\left(\sum_{i,j}(i_0j-j_0(i-1))P_{i,j}\right)+frp(i_0-2j_0)+(p'-f)p(n-2r-p)\right]\\ &\times\binom{r-1}{q'-1}\binom{n-2r-p-1}{q-1}\binom{\sum_{i,j}jQ_{i,j}}{f}\binom{\sum_{i,j}(i-1-2j)Q_{i,j}}{e}\\ &\times\binom{\sum_{i,j}jQ'_{i,j}}{p'-f}\binom{\sum_{i,j}(i-2j)Q'_{i,j}}{p-e}. 
\end{align*} \item One proceeds in a similar fashion to sum over $e$ and $f$: \begin{align*} G({\bf A}) =&\frac{{p'!^2q'!^2p!q!(r-p')!(r-q')!}}{pp'qq'2^{2r-p'-q'}{\bf A}!}\prod_{i,j}{\binom{i-1}{j,j}}^{(P+Q)_{i,j}}{\binom{i-1}{j,j-1}}^{(P'+Q')_{i,j}} \\ &\times pr^2\mathcal{I}({\bf A})\binom{r-1}{q'-1}\binom{r-1}{p'-1}\binom{n-2r-p-1}{q-1}\binom{n-2r-q}{p}. \end{align*} \noindent Finally, \begin{align*} G({\bf A}) =\frac{p'!q'!r!^2(n-1-2r-p)!(n-2r-q)!\mathcal{I}({\bf A})}{2^{2r-p'-q'}(n-p-q-2r)!^2{\bf A}!}\prod_{i,j}{\binom{i-1}{j,j}}^{(P+Q)_{i,j}}{\binom{i-1}{j,j-1}}^{(P'+Q')_{i,j}}. \end{align*} \end{itemize} By replacing $G({\bf A})$ with the expression above in Lemma \ref{lem: f2g}, one gets the desired result: \begin{equation*} F({\bf A}) =\frac{\mathcal{I}({\bf A})}{{\bf A}!}\frac{r!^2(n-1-2r-p)!(n-2r-q)!}{2^{2r-p'-q'}(n-p-q-2r)!}\prod_{i,j}{\binom{i-1}{j,j}}^{(P+Q)_{i,j}}{\binom{i-1}{j,j-1}}^{(P'+Q')_{i,j}}. \end{equation*} \subsection{Demonstration of Equation \ref{eq:nodegrees} (sketch)} The computation of $F_{p,p',q,q',r}$ can be performed in a similar fashion. We define the number $\widetilde{G}_{p,p',q,q',r}$ of modified forests as in Section \ref{subsec: andeg} with one major difference in point (i) that we replace by (i'): \begin{itemize} \item[(i')] {\bf There is no thorns} connected to the vertices. \end{itemize} As an immediate result the two quantities are linked through the relation $$F_{p,p',q,q',r} = \binom{n}{n+1-p-q-2r}\binom{n-1}{n+1-p-q-2r}\frac{(n+1-p-q-2r)!}{p'!q'!}\widetilde{G}_{p,p',q,q',r},$$ \noindent where the first binomial coefficient counts the number of way of positioning the thorns around the white vertices and the second one, around the black vertices. \noindent We need to compute the generating function \begin{equation} H = \sum_{p,p',q,q',r} \widetilde{G}_{p,p',q,q',r} x^{p}y^{q}\frac{x'^{p'}}{p'!}\frac{y'^{q'}}{q'!}\frac{{f_1}^{r-q'}}{(r-q')!}\frac{{f_2}^{r-p'}}{(r-p')!} \end{equation} Define the generating functions $W$ (for internal white vertices {\bf and} the root of the seed tree), $W'$, $B$, $B'$, $A_b$ and $A_w$ (consistent definition with respect to the previous subsection). In this case $W = H$. The new relations between the generating functions read \begin{equation} W = x\sum_{i,j,k\geq 0}\binom{i}{2j}B^{i-2j}(2j-1)!!\frac{B'^k}{k!}\frac{A_w^{j-k}}{(j-k)!}, \end{equation} that simplifies as $\sum_{i\geq 0}\binom{i}{2j}B^{i-2j} = (1-B)^{-2j-1}$: \begin{equation} W = \frac{x}{1-B}\sum_{j\geq 0}\binom{2j}{j}\left (\frac{B'+A_w}{2(1-B)^2}\right)^j. \end{equation} Finally, \begin{equation} W = \frac{x}{1-B}\frac{1}{\sqrt{1-4\left (\frac{B'+A_w}{2(1-B)^2}\right)}}. \end{equation} Further we get \begin{equation} B = \frac{y}{1-W}\frac{1}{\sqrt{1-4\left (\frac{W'+A_b}{2(1-W)^2}\right)}}, \end{equation} \begin{equation} W' = x'\frac{1-\sqrt{1-4\left (\frac{B'+A_w}{2(1-B)^2}\right)}}{\sqrt{1-4\left (\frac{B'+A_w}{2(1-B)^2}\right)}}, \end{equation} \begin{equation} B' = y'\frac{1-\sqrt{1-4\left (\frac{W'+A_b}{2(1-W)^2}\right)}}{\sqrt{1-4\left (\frac{W'+A_b}{2(1-W)^2}\right)}}. \end{equation} Applying the Lagrange formula for implicit functions as in the previous case leads to the desired formula. \bibliographystyle{abbrvnat}
\section{Introduction} Understanding most nontrivial {\em claim}s requires insights from various \emph{perspective} s. Today, we make use of search engines or recommendation systems to retrieve information relevant to a claim, but this process carries multiple forms of \emph{bias}. In particular, they are optimized relative to the claim (query) presented, and the popularity of the relevant documents returned, rather than with respect to the diversity of the \emph{perspective} s presented in them or whether they are supported by evidence. \ignore{ Understanding controversial \emph{claim}s usually requires insights from various \emph{perspective} s. In such cases, the use of search engine or recommendation systems to retrieve relevant information has become prevalent. However, this process carries multiple forms of \emph{biases}, including \emph{selection bias} when only information pertaining to a particular view is presented, resulting in under-representation of valid information from other \emph{perspective} s.} \begin{figure} \centering \includegraphics[scale=0.37,trim=1.2cm 0.5cm 0cm 0.0cm, clip=false]{figures/evaluation-setting8.png} \caption{Given a \emph{claim}, a hypothetical system is expected to discover various \emph{perspectives} that are substantiated with \emph{evidence} and their \emph{stance} with respect to the claim. } \label{fig:example:intro} \end{figure} In this paper, we explore an approach to mitigating this \emph{selection bias}~\cite{H79} when studying (disputed) claims. Consider the \emph{claim} shown in Figure \ref{fig:example:intro}: \emph{``animals should have lawful rights.''} One might compare the biological similarities/differences between humans and other animals to support/oppose the claim. Alternatively, one can base an argument on morality and rationality of animals, or lack thereof. Each of these arguments, which we refer to as \emph{perspective} s throughout the paper, is an opinion, possibly conditional, in support of a given \emph{claim} or against it. A \emph{perspective}\, thus constitutes a particular attitude towards a given \emph{claim}. Natural language understanding is at the heart of developing an ability to identify diverse perspectives for claims. In this work, we propose and study a setting that would facilitate discovering \emph{diverse perspectives} and their supporting evidence with respect to a given \emph{claim}. Our goal is to identify and formulate the key NLP challenges underlying this task, and develop a dataset that would allow a systematic study of these challenges. For example, for the claim in Figure~\ref{fig:example:intro}, multiple (non-redundant) perspectives should be retrieved from a pool of perspectives; one of them is \emph{``animals have no interest or rationality''}, a \emph{perspective}\ that should be identified as taking an \emph{opposing} stance with respect to the \emph{claim}. Each \emph{perspective} should also be well-supported by \emph{evidence} found in a pool of potential pieces of evidence. While it might be impractical to provide an exhaustive spectrum of ideas with respect to a \emph{claim}, presenting a small but diverse set of \emph{perspectives} could be an important step towards addressing the \emph{selection bias} problem. Moreover, it would be impractical to develop an exhaustive pool of evidence for all perspectives, from a diverse set of credible sources. We are not attempting to do that. 
We aim at formulating the core NLP problems, and developing a dataset that will facilitate studying these problems from the NLP angle, realizing that using the outcomes of this research in practice requires addressing issues such as trustworthiness~\cite{PasternackRo10a,PasternackRo13} and possibly others. Inherently, our objective requires understanding the relations between \emph{perspective} s and \emph{claim} s, the nuances in the meaning of various \emph{perspective} s in the context of \emph{claim} s, and relations between perspectives and evidence. This, we argue, can be done with a diverse enough, but not exhaustive, dataset. And it can be done without attending to the legitimacy and credibility of sources contributing evidence, an important problem but orthogonal to the one studied here. \ignore{In this work, we propose a setting to help discover \emph{diverse perspectives} with respect to a given \emph{claim}. For example, \emph{``animals have no interest or rationality''} (Figure~\ref{fig:example:intro}) is a \emph{perspective}\ takes an \emph{opposing} stance with respect to the \emph{claim}, by citing \emph{animals' lack of rationality}. Each \emph{perspective} has to be well-supported by \emph{evidence} found in paragraphs that summarize findings and substantiations of different sources.\footnote{We assume that the \emph{evidence} at hand is credible. We defer the study of source credibility as a future work.}} \ignore{While it might be impractical to show an exhaustive spectrum of ideas with respect to a \emph{claim}, cherry-picking a small but diverse set of \emph{perspectives} could be a tangible step towards addressing the \emph{selection bias} problem. Inherently this objective requires the understanding of the relations between each \emph{perspective}\, and \emph{claim}, as well as the nuance in semantic meaning between \emph{perspective} s under the context of the \emph{claim}. } To facilitate the research towards developing solutions to such challenging issues, we propose \textsc{\textcolor[wave]{390}{P}\textcolor[wave]{415}{e}\textcolor[wave]{440}{r}\textcolor[wave]{465}{s}\textcolor[wave]{485}{p}\textcolor[wave]{525}{e}\textcolor[wave]{535}{c}\textcolor[wave]{595}{t}\textcolor[wave]{610}{r}\textcolor[wave]{635}{u}\textcolor[wave]{660}{m}}, a dataset of \emph{claims}, \emph{perspectives} and \emph{evidence} paragraphs. For a given \emph{claim} and pools of \emph{perspectives} and \emph{evidence paragraphs}, a hypothetical system is expected to select the relevant perspectives and their supporting paragraphs. Our dataset contains 907 claims, 11,164 perspectives and 8,092 evidence paragraphs. In constructing it, we use online debate websites as our initial seed data, and augment it with search data and paraphrases to make it richer and more challenging. We make extensive use of crowdsourcing to increase the quality of the data and clean it from annotation noise. The contributions of this paper are as follows: \begin{itemize}[noitemsep,leftmargin=0.4cm] \item To facilitate making progress towards the problem of \emph{substantiated perspective discovery}, we create a high-quality dataset for this task.\footnote{https://github.com/CogComp/perspectrum } \item We identify and formulate multiple NLP tasks that are at the core of addressing the \emph{substantiated perspective discovery} problem. We show that humans can achieve high scores on these tasks. \item We develop competitive baseline systems for each sub-task, using state-of-the-art techniques. 
\end{itemize} \definecolor{lightgreen}{RGB}{200, 255, 200} \definecolor{lightred}{RGB}{255, 200, 200} \begin{figure} \centering \includegraphics[scale=0.38, trim=0cm 0.5cm 0cm 0.0cm, clip=false]{figures/challenges4.png} \caption{ Depiction of a few claims, their \emph{perspectives} and evidences from \textsc{\textcolor[wave]{390}{P}\textcolor[wave]{415}{e}\textcolor[wave]{440}{r}\textcolor[wave]{465}{s}\textcolor[wave]{485}{p}\textcolor[wave]{525}{e}\textcolor[wave]{535}{c}\textcolor[wave]{595}{t}\textcolor[wave]{610}{r}\textcolor[wave]{635}{u}\textcolor[wave]{660}{m}}. The \emph{supporting} and \emph{opposing} perspectives are indicated with \colorbox{lightgreen}{green} and \colorbox{lightred}{red} colors, respectively. } \label{fig:clusters} \end{figure} \section{Design Principles and Challenges} In this section we provide a closer look into the challenge and propose a collection of tasks that move us closer to \emph{substantiated perspective discovery}. To clarify our description we use to following notation. Let $c$ indicate a target claim of interest (for example, the claims $c_1$ and $c_2$ in Figure~\ref{fig:clusters}). Each claim $c$ is addressed by a collection of perspectives $\set{p}$ that are grouped into clusters of \emph{equivalent} perspectives. Additionally, each perspective $p$ is supported, relative to $c$, by at least one evidence paragraph $e$, denoted $e\vDash p|c$. Creating systems that would address our challenge in its full glory requires solving the following interdependent tasks: \\ \noindent \emph{Determination of argue-worthy claims: } not every claim requires an in-depth discussion of perspectives. For a system to be practical, it needs to be equipped with understanding argumentative structures~\cite{PalauMo09} in order to discern disputed claims from those with straightforward responses. We set aside this problem in this work and assume that all the inputs to the systems are discussion-worthy claims. \noindent \emph{Discovery of pertinent perspectives:} a system is expected to recognize argumentative sentences~\cite{CabrioVi12} that directly address the points raised in the disputed claim. For example, while the perspectives in Figure~\ref{fig:clusters} are topically related to the claims, $p_1, p_2$ do not directly address the focus of claim $c_2$ (i.e., \emph{``use of animals'' in ``entertainment''}). \noindent \emph{Perspective equivalence: } a system is expected to extract a \emph{minimal} and \emph{diverse} set of perspectives. This requires the ability to discover equivalent perspectives $p, p'$, with respect to a claim $c$: $p | c \approx p^{'} | c$. For instance, $p_3$ and $p_4$ are equivalent in the context of $c_2$; however, they might not be equivalent with respect to any other claim. The conditional nature of perspective equivalence differentiates it from the \emph{paraphrasing} task~\cite{BannardCa05}. \noindent \emph{Stance classification of perspectives:} a system is supposed to assess the stances of the perspectives with respect to the given claim (supporting, opposing, etc.) ~\cite{HasanNg14b}. \noindent \emph{Substantiating the perspectives:} a system is expected to find valid evidence paragraph(s) in support of each perspective. Conceptually, this is similar to the well-studied problem of textual entailment~\cite{DRSZ13} except that here the entailment decisions depend on the choice of claims. 
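To make this notation concrete, the following minimal Python sketch (illustrative only; the class and field names are ours and need not match the released file format) shows one possible in-memory representation of a claim $c$, its clusters of equivalent perspectives with their stance, and the evidence paragraphs $e$ with $e\vDash p|c$.
\begin{verbatim}
# One possible in-memory representation of the structures above (illustrative).
from dataclasses import dataclass, field
from typing import List

@dataclass
class PerspectiveCluster:
    stance: str                   # e.g. "support" or "oppose" with respect to the claim
    perspectives: List[str]       # equivalent phrasings p, p' of the same point
    evidence: List[str] = field(default_factory=list)   # paragraphs e with e |= p | c

@dataclass
class Claim:
    text: str
    clusters: List[PerspectiveCluster] = field(default_factory=list)

claim = Claim(
    text="Animals should have lawful rights.",
    clusters=[
        PerspectiveCluster(
            stance="oppose",
            perspectives=["Animals have no interest or rationality."],
            evidence=["<paragraph arguing that rights presuppose rational agency>"],
        )
    ],
)
print(claim.clusters[0].stance)   # -> "oppose"
\end{verbatim}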
\begin{comment} In this section we discuss design principles of \textsc{\textcolor[wave]{390}{P}\textcolor[wave]{415}{e}\textcolor[wave]{440}{r}\textcolor[wave]{465}{s}\textcolor[wave]{485}{p}\textcolor[wave]{525}{e}\textcolor[wave]{535}{c}\textcolor[wave]{595}{t}\textcolor[wave]{610}{r}\textcolor[wave]{635}{u}\textcolor[wave]{660}{m}}, and the corresponding natural language understanding challenges required to solve the task. This include semi-automatic crawling from from debate websites, crowdsourced labeling and paraphrasing and manual curation of the collected data. We also summarize the results of a study that helped us understand the language undertsanding requirements for solving this task, and end with a summary of statistics of the collected corpus. \subsection{Perspective relevance} \subsection{Stance Classification} \subsection{Perspective equivalence} An important step towards identification of diverse \emph{perspective} s of a given claim is recognizing the similarity of two \emph{perspective} s. \subsection{Perspective substantiation} There are a few key issues that add to the complexity of this problem and have to be considered jointly: \begin{itemize} \item \emph{Minimal perspectives:} Systems are expected to extract a \emph{minimal} and \emph{diverse} set of perspectives, by ignoring all the synonymous opinions. \todo{show this with a figure?} \item \emph{Deciding stances of perspectives:} the system is supposed to assess the stances of the perspectives (supporting, opposing, etc.) \item \emph{Substantiating the perspectives:} The system is expected to find valid substantiations for each perspective and ignore the ones that are not substantiated (i.e. do not have any evidence to back them). \end{itemize} Out dataset design is guided by the following key principles: \paragraph{Diverse set of claims.} Each \emph{claim}\ state an assertion about a disputed subject. \paragraph{Perspectives into claims.} For each \emph{claim}, there must be valid \emph{perspective} s, addressing different aspects of it. \paragraph{Evidences substantiating perspectives. } Each \emph{perspective}, has to be well-substantiated through at-least one evidence paragraph. \paragraph{Neutrality of design. } We do not take sides with respect to controversial issues; instead we are trying to be inclusive in covering diverse perspectives to different issues, irrespective of our (the authors) personal view on each perspective. \end{comment} \vspace{0.5cm} \section{Related Work} \paragraph{Claim verification.} The task of \emph{fact verification} or \emph{fact-checking} focuses on the assessment of the truthfulness of a claim, given evidence \cite{VlachosRi14,MitraGi15,STVB16,Wang17,NBESMZAKM18,HPSCCMG18,KRST18,APMa18}. These tasks are highly related to the task of textual-entailment that has been extensively studied in the field~\cite{BCDG08,DRSZ13,KhotSaCl18}. Some recent work study jointly the problem of identifying evidence and verifying that it supports the claim~\cite{YinRo18b}. Our problem structure encompasses the \emph{fact verification} problem (as verification of \emph{perspectives} from \emph{evidence}; Figure~\ref{fig:example:intro}). 
\begin{table*}[]
\centering
\small
\begin{tabular}{c C{1.7cm} C{1.7cm} C{1.7cm} C{1.7cm}}
\toprule
Dataset & Stance Classification & Evidence Verification & Human Verified & Open Domain \\
\cmidrule(lr){1-1} \cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5}
\textsc{\textcolor[wave]{390}{P}\textcolor[wave]{415}{e}\textcolor[wave]{440}{r}\textcolor[wave]{465}{s}\textcolor[wave]{485}{p}\textcolor[wave]{525}{e}\textcolor[wave]{535}{c}\textcolor[wave]{595}{t}\textcolor[wave]{610}{r}\textcolor[wave]{635}{u}\textcolor[wave]{660}{m}}\ (this work) & \textcolor{DarkGreen}{\ding{51}} & \textcolor{DarkGreen}{\ding{51}} & \textcolor{DarkGreen}{\ding{51}} & \textcolor{DarkGreen}{\ding{51}} \\
FEVER~\cite{TVCM18} & \xmark & \textcolor{DarkGreen}{\ding{51}} & \textcolor{DarkGreen}{\ding{51}} & \textcolor{DarkGreen}{\ding{51}} \\
\cite{WPKAPQDMBS17} & \textcolor{DarkGreen}{\ding{51}} & \textcolor{DarkGreen}{\ding{51}} & \xmark & \textcolor{DarkGreen}{\ding{51}} \\
LIAR~\cite{Wang17} & \xmark & \textcolor{DarkGreen}{\ding{51}} & \textcolor{DarkGreen}{\ding{51}} & \textcolor{DarkGreen}{\ding{51}} \\
\cite{VlachosRi14} & \xmark & \textcolor{DarkGreen}{\ding{51}} & \textcolor{DarkGreen}{\ding{51}} & \textcolor{DarkGreen}{\ding{51}} \\
\cite{HasanNg14b} & \textcolor{DarkGreen}{\ding{51}} & \xmark & \textcolor{DarkGreen}{\ding{51}} & \xmark \\
\bottomrule
\end{tabular}
\caption{Comparison of \textsc{\textcolor[wave]{390}{P}\textcolor[wave]{415}{e}\textcolor[wave]{440}{r}\textcolor[wave]{465}{s}\textcolor[wave]{485}{p}\textcolor[wave]{525}{e}\textcolor[wave]{535}{c}\textcolor[wave]{595}{t}\textcolor[wave]{610}{r}\textcolor[wave]{635}{u}\textcolor[wave]{660}{m}}\ to a few notable datasets in the field. }
\label{tab:comparisons}
\end{table*}
\paragraph{Stance classification.}
Stance classification aims at detecting phrases that \emph{support} or \emph{oppose} a given claim. The problem has gained significant attention in recent years; to note a few important efforts, \newcite{HasanNg14b} create a dataset of text snippets, annotated with ``reasons'' (similar to \emph{perspectives} in this work) and stances (whether they support or oppose the claim). Unlike their work, our pool of relevant ``reasons'' is not restricted. \newcite{FerreiraVl16} create a dataset of rumors (claims) coupled with news headlines and their stances. There are a few other works that fall in this category~\cite{Boltuvzic14,ParkCa14,RDPKAS15,SwansonEcWa15,MKSZC16,SobhaniInZh17,BBDSS17}. Our approach here is closely related to existing work in this direction, as stance classification is part of the problem studied here.
\paragraph{Argumentation.}
There is a rich literature on \emph{formalizing} argumentative structures from free text. A few theoretical works lay the groundwork for characterizing units of arguments and argument-inducing inference \cite{Teufelot99,Toulmin03,Freeman11}. Others have studied the problem of extracting argumentative structures from free-form text; for example, \newcite{PalauMo09,KWKHS16,ACKWS17} studied elements of arguments and the internal relations between them. \newcite{FengHi11} classified an input into one of the argument schemes. \newcite{HabernalGu17} provided a large corpus annotated with argument units. \newcite{CabrioVi18} provide a thorough survey of the recent work in this direction. A few other works studied other aspects of argumentative structures~\cite{CabrioVi12,KWKHS16,LippiTo16,ZHHL17,StabGu17}.
A few recent works use a similar conceptual design that involves a \emph{claim}, \emph{perspectives} and \emph{evidence}. These works are either too small due to the high cost of construction~\cite{APLHLRGS14} or too noisy because of the way they are crawled from online resources~\cite{WPKAPQDMBS17,HuaWa17}. Our work makes use of both online content and crowdsourcing in order to construct a sizable and high-quality dataset.
\section{The \textsc{\textcolor[wave]{390}{P}\textcolor[wave]{415}{e}\textcolor[wave]{440}{r}\textcolor[wave]{465}{s}\textcolor[wave]{485}{p}\textcolor[wave]{525}{e}\textcolor[wave]{535}{c}\textcolor[wave]{595}{t}\textcolor[wave]{610}{r}\textcolor[wave]{635}{u}\textcolor[wave]{660}{m}}\ Dataset}
\subsection{Dataset construction}
\label{sec:construction}
In this section we describe a multi-step construction process, developed through detailed analysis, substantial refinement and multiple pilot studies.
We use crowdsourcing to annotate different aspects of the dataset. We used Amazon Mechanical Turk (AMT) for our annotations, restricting the task to workers in five English-speaking countries (USA, UK, Canada, New Zealand, and Australia) with more than 1000 finished HITs and at least a 95\% acceptance rate. To ensure the diversity of responses, we do not require additional qualifications or demographic information from our annotators.
For any of the annotation steps described below, the users are guided to an external platform where they first read the instructions and try a verification step to make sure they have understood the instructions. Only after successful completion are they allowed to start the annotation tasks.
Throughout our annotations, it is our aim to make sure that the workers are responding objectively to the tasks (as opposed to using their personal opinions or preferences). The screen-shots of the annotation interfaces for each step are included in the Appendix (Section~\ref{sec:supp:screenshots}).
In the steps outlined below, we filter out a subset of the data with low rater--rater agreement $\rho$ (see Appendix~\ref{sec:agreement}). In certain steps, we use an information retrieval (IR) system\footnote{\url{www.elastic.co}} to generate the best candidates for the task at hand.
\paragraph{Step 1: The initial data collection.} We start by crawling the content of a few notable debating websites: {\tt \small idebate.com, debatewise.org, procon.org}. This yields $\sim1k$ claims, $\sim8k$ perspectives and $\sim8k$ evidence paragraphs (for complete statistics, see Table~\ref{tab:seed_data} in the Appendix). This data is noisy and lacks the structure we need. In the following steps we explain how we denoise it and augment it with additional data.
\paragraph{Step 2a: Perspective verification. } For each perspective we verify that it is a complete English sentence with a clear stance with respect to the given claim. For a fixed pair of \emph{claim} and \emph{perspective}, we ask the crowd-workers to label the perspective with one of the five categories of \emph{support}, \emph{oppose}, \emph{mildly-support}, \emph{mildly-oppose}, or \emph{not a valid perspective}. The reason that we ask for two levels of intensity is to distinguish \emph{mild} or \emph{conditional} arguments from those that express \emph{stronger} positions. Every 10 claims (and their relevant perspectives) are bundled to form a HIT. Three independent annotators solve a HIT, and each gets paid \$1.5-2 per HIT.
To get rid of the ambiguous/noisy perspectives, we measure rater-rater agreement on the resulting data and retain only the subset which has a significant agreement of $\rho \geq 0.5$. To account for minor disagreements in the intensity of perspective stances, before measuring any notion of agreement, we collapse the five labels into three labels, by collapsing \emph{mildly-support} and \emph{mildly-oppose} into \emph{support} and \emph{oppose}, respectively.
To assess the quality of these annotations, two of the authors independently annotate a random subset of instances in the previous step (328 perspectives for 10 claims). Afterwards, the differences were adjudicated. We measure the accuracy of the adjudicated results against the AMT annotations to estimate the quality of our annotation. This results in an accuracy of 94\%, which shows high agreement with the crowdsourced annotations.
\paragraph{Step 2b: Perspective paraphrases. } To enrich the ways the perspectives are phrased, we crowdsource paraphrases of our perspectives. We ask annotators to generate two paraphrases for each of the 15 perspectives in each HIT, for a reward of \$1.50. Subsequently, we perform another round of crowdsourcing to verify the generated paraphrases. We create HITs of 24 candidate paraphrases to be verified, with a reward of \$1. Overall, this process gives us $\sim4.5k$ paraphrased perspectives. The collected paraphrases form clusters of equivalent perspectives, which we refine further in the later steps.
\paragraph{Step 2c: Web perspectives.} In order to ensure that our dataset contains more realistic sentences, we use web search to augment our pool of perspectives with additional sentences that are topically related to what we already have. Specifically, we use Bing search to extract sentences that are similar to our current pool of perspectives, by querying ``claim+perspective''. We create a pool of relevant web sentences and use an IR system (introduced earlier) to retrieve the 10 most similar sentences. These candidate perspectives are annotated in the same way as in step 2a, and only those that were agreed upon are retained.
\paragraph{Step 2d: Final perspective trimming.} In a final round of annotation for perspectives, an expert annotator went over all the claims in order to verify that all the equivalent perspectives are clustered together. Subsequently, the expert annotator went over the most similar claim-pairs (and their perspectives), in order to annotate the missing perspectives shared between the two claims. To reduce the space of claim pairs, the annotation was done on the top 350 most similar claim pairs retrieved by the IR system.
\paragraph{Step 3: Evidence verification. } The goal of this step is to decide whether a given evidence paragraph provides enough substantiation for a perspective or not. Performing these annotations exhaustively for every perspective-evidence pair is not feasible. Instead, we make use of a retrieval system to annotate only the relevant pairs. In particular, we create an index of all the perspectives retained from \emph{step 2a}. For a given evidence paragraph, we retrieve the top relevant perspectives. We ask the annotators to note whether a given evidence paragraph \emph{supports} a given perspective or not. Each HIT contains 20 evidence paragraphs and their top 8 relevant candidate perspectives. Each HIT is paid \$$1$ and annotated by at least 4 independent annotators.
In order to assess the quality of our annotations, a random subset of instances (4 evidence-perspective pairs) are annotated by two independent authors and the differences are adjudicated. We measure the accuracy of our adjudicated labels versus AMT labels, resulting in 87.7\%. This indicates the high quality of the crowdsourced data. \begin{table}[] \small \centering \resizebox{\linewidth}{!}{ \begin{tabular}{clc} \toprule Category & \multicolumn{1}{c}{Statistic} & Value \\ \cmidrule(lr){1-1} \cmidrule(lr){2-2} \cmidrule(lr){3-3} \multirow{4}{*}{Claims} & \# of claims (step 1) & 907\\ & avg. claim length (tokens) & 8.9\\ & median claims length (tokens) & 8\\ & max claim length (tokens) & 30\\ & min claim length (tokens) & 3\\ \hline \multirow{5}{*}{Perspectives} & \# of perspectives & 11,164\\ & \multicolumn{1}{l}{\hspace{0.5cm} Debate websites (step 1)} & 4,230 \\ & \multicolumn{1}{l}{\hspace{0.5cm} Perspective paraphrase (step 2b)} & 4,507 \\ & \multicolumn{1}{l}{\hspace{0.5cm} Web (step 2c)} & 2,427 \\ & \# of perspectives with stances & 5,095\\ & \# of ``support'' perspectives & 2,627\\ & \# of ``opposing'' perspectives & 2,468\\ & avg size of perspective clusters & 2.3 \\ & avg length of perspectives (tokens) & 11.9 \\ \hline \multirow{2}{*}{Evidences} & \# of total evidences (step 1) & 8,092 \\ & avg length of evidences (tokens) & 168 \\ \bottomrule \end{tabular} } \caption{A summary of \textsc{\textcolor[wave]{390}{P}\textcolor[wave]{415}{e}\textcolor[wave]{440}{r}\textcolor[wave]{465}{s}\textcolor[wave]{485}{p}\textcolor[wave]{525}{e}\textcolor[wave]{535}{c}\textcolor[wave]{595}{t}\textcolor[wave]{610}{r}\textcolor[wave]{635}{u}\textcolor[wave]{660}{m}}~ statistics} \label{tab:statistics} \end{table} \subsection{Statistics on the dataset} \label{sec:statistics} We now provide a brief summary of \textsc{\textcolor[wave]{390}{P}\textcolor[wave]{415}{e}\textcolor[wave]{440}{r}\textcolor[wave]{465}{s}\textcolor[wave]{485}{p}\textcolor[wave]{525}{e}\textcolor[wave]{535}{c}\textcolor[wave]{595}{t}\textcolor[wave]{610}{r}\textcolor[wave]{635}{u}\textcolor[wave]{660}{m}}. The dataset contains about $1k$ claims with a significant length diversity (Table~\ref{tab:statistics}). Additionally, the dataset comes with $\sim 12k$ perspectives, most of which were generated through paraphrasing (step 2b). The perspectives which convey the same point with respect to a claim are grouped into clusters. On average, each cluster has a size of $2.3$ which shows that, on average, many perspectives have equivalents. More granular details are available in Table~\ref{tab:statistics}. \begin{figure} \centering \includegraphics[scale=0.24]{figures/topics.png} \caption{Distribution of claim topics.} \label{fig:topic_distribution} \end{figure} To better understand the topical breakdown of claims in the dataset, we crowdsource the set of ``topics'' associated with each \emph{claim} (e.g., \emph{Law, Ethics, etc}.) We observe that, as expected, the three topics of \emph{Politics, World, and Society} have the biggest portions (Figure~\ref{fig:topic_distribution}). Additionally, the included claims touch upon 10+ different topics. Figure~\ref{fig:topics} depicts a few popular categories and sampled questions from each. 
\begin{figure*}
\centering
\includegraphics[scale=0.32]{figures/sunbusrt-topics3.png}
\caption{Visualization of the major topics and sample claims in each category.}
\label{fig:topics}
\end{figure*}
\subsection{Required skills}
We perform a closer investigation of the abilities required to solve the stance classification task. One of the authors went through a random subset of claim-perspective pairs and annotated each with the abilities required to determine its stance label. We follow the common definitions used in prior work~\cite{SugawaraYoAi17,KCRUR18}.
The result of this annotation is depicted in Figure~\ref{fig:reasoning-categories}. As can be seen, the problem requires an understanding of \emph{common-sense}, i.e., knowledge that is commonly shared among humans and rarely gets explicitly mentioned in the text. Additionally, the task requires various types of \emph{coreference} understanding, such as \emph{event coreference} and \emph{entity coreference}.
\begin{figure}
\centering
\includegraphics[scale=0.19,trim=0cm 0cm 0cm 0cm]{figures/reasoning-categories.png}
\caption{The set of reasoning abilities required to solve the stance classification task. }
\label{fig:reasoning-categories}
\end{figure}
\section{Empirical Analysis}
\label{sec:analysis}
In this section we provide an empirical analysis of these tasks. We split the data 60\%/15\%/25\% into train/dev/test. In order to make sure our baselines are not overfitting to the keywords of each topic (the ``topic'' annotation from Section~\ref{sec:statistics}), we make sure that claims with the same topic fall into the same split.
For simplicity, we define notation which we will use extensively for the rest of this paper. The clusters of equivalent perspectives are denoted as $\eqcls{p}$, given a representative member $p$. Let $P(c)$ denote the collection of relevant perspectives to a claim $c$, which is the union of all the equivalent perspectives participating in the claim: $\set{ \eqcls{p_i} }_{i}$. Let $E(\eqcls{p}) = E(p) = \bigcup_{i} e_i$ denote the set of evidence documents lending support to a perspective $p$. Additionally, denote the two pools of perspectives and evidence with $\mathcal{U}^p$ and $\mathcal{U}^e$, respectively.
\subsection{Systems}
We make use of the following systems in our evaluation:
\paragraph{IR} (Information Retrieval). This baseline has been successfully used for related tasks like Question Answering~\cite{CEKSTTK16}. We create two versions of this baseline: one with the pool of perspectives $\mathcal{U}^p$ and one with the pool of evidences $\mathcal{U}^e$. We use this system to retrieve a ranked list of the best matching perspectives/evidence from the corresponding index.
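To make the IR baseline concrete, the following is a minimal sketch of the retrieval step; it assumes an elasticsearch-py 7.x-style client and an index with a single \texttt{text} field, and the index name and field name are illustrative choices of ours rather than details of our implementation.
\begin{verbatim}
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def retrieve_candidates(query_text, index_name="perspectives", k=10):
    # BM25 match query; returns the top-k texts with their retrieval scores.
    body = {"query": {"match": {"text": query_text}}, "size": k}
    hits = es.search(index=index_name, body=body)["hits"]["hits"]
    return [(h["_source"]["text"], h["_score"]) for h in hits]
\end{verbatim}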
\paragraph{BERT} (Contextual representations). A recent state-of-the-art contextualized representation~\cite{DCLT18}. This system has been shown to be effective on a broad range of natural language understanding tasks.
\paragraph{Human Performance.} Human performance provides us with an estimate of the best achievable results on the dataset. We use human annotators to measure human performance for each task. We randomly sample 10 claims from the test set, and instruct two expert annotators to solve each of T1 to T4.
\subsection{Evaluation metrics}
We perform evaluations on four different subtasks in our dataset. In all of the following evaluations, the systems are given the two pools of perspectives $\mathcal{U}^p$ and evidences $\mathcal{U}^e$.
\paragraph{T1: Perspective extraction.} A system is expected to return the collection of mutually disjoint perspectives with respect to a given claim. Let $\hat{P}(c)$ be the set of output perspectives. Define the precision and recall as
$ \text{Pre}(c) = \frac{\sum_{\hat{p} \in \hat{P}(c)} \oneE{ \exists p, s.t. \hat{p} \in \eqcls{p} } }{ \abs{\hat{P}(c)} } $
and
$ \text{Rec}(c) = \frac{\sum_{\hat{p} \in \hat{P}(c)} \oneE{ \exists p, s.t. \hat{p} \in \eqcls{p} } }{ \abs{P(c)} } $,
respectively. To calculate dataset metrics, the aforementioned per-claim metrics are averaged across all the claims in the test set.
\paragraph{T2: Perspective stance classification.} Given a claim, a system is expected to label every perspective in $P(c)$ with one of the two labels \emph{support} or \emph{oppose}. We use the well-established definitions of precision-recall for this binary classification task.
\paragraph{T3: Perspective equivalence.} A system is expected to decide whether two given perspectives are equivalent or not, with respect to a given claim. We evaluate this task in a way similar to a clustering problem. For a pair of perspectives $p_1, p_2 \in P(c)$, a system predicts whether the two are in the same cluster or not. The ground truth is whether there is a cluster which contains both of the perspectives or not: $\exists \tilde{p}\; s.t .\; \tilde{p} \in P(c) \wedge p_1, p_2 \in \eqcls{\tilde{p}} $. We use this pairwise definition for all the pairs in $P(c)\times P(c)$, for any claim $c$ in the test set.
\paragraph{T4: Extraction of supporting evidences.} Given a perspective $p$, we expect a system to return all the evidence $\set{e_i}$ from the pool of evidence $\mathcal{U}^e$. Let $\hat{E}(p)$ and $E(p)$ be the predicted and gold evidence for a perspective $p$. Define macro-precision and macro-recall as $\text{Pre}(p) = \frac{ \abs{\hat{E}(p) \cap E(p) } }{ \abs{\hat{E}(p)} } $ and $\text{Rec}(p) = \frac{ \abs{\hat{E}(p) \cap E(p) } }{ \abs{E(p)} } $, respectively. The metrics are averaged across all the perspectives $p$ participating in the test set.
\paragraph{T5: Overall performance.} The goal is to get estimates of the overall performance of the systems. Instead of creating a complex measure that would take all the aspects into account, we approximate the overall performance by multiplying the disjoint measures in $T1$, $T2$ and $T4$. While this gives an estimate of the overall quality, it ignores the pipeline structure of the task (e.g., the propagation of errors through the pipeline). We note that the task of $T3$ (perspective equivalence) is indirectly being measured within $T1$. Furthermore, since we do not report an IR performance on $T2$, we use the ``always supp'' baseline instead to estimate an overall performance for IR.
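To make the T1 metrics concrete, the sketch below computes the per-claim precision and recall; it is our own transcription of the definitions above, reading $\abs{P(c)}$ as the number of gold perspective clusters, and all function and variable names are ours.
\begin{verbatim}
def t1_metrics(predicted, gold_clusters):
    # predicted: set of perspective ids returned for a claim c
    # gold_clusters: list of sets, each one gold equivalence class
    gold_union = set().union(*gold_clusters) if gold_clusters else set()
    # shared numerator: predicted perspectives that fall inside a gold cluster
    hits = sum(1 for p in predicted if p in gold_union)
    precision = hits / len(predicted) if predicted else 0.0
    recall = hits / len(gold_clusters) if gold_clusters else 0.0
    return precision, recall

# Dataset-level numbers are the averages of these per-claim values.
\end{verbatim}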
\subsection{Results}
\subsubsection{Minimal perspective extraction (T1)}
\label{sec:exp1:minimal}
Table~\ref{tab:results} shows a summary of the experimental results. To measure the performance of the IR system, we use the index containing $\mathcal{U}^p$. Given each claim, we query the top $k$ perspectives, ranked according to their retrieval scores. We tune $k$ on our development set and report the results on the test set according to the tuned parameter. We use IR results as candidates for other solvers (including humans). For this task, IR with top-15 candidates yields $>$90\% recall (for the PR-curve, see Figure~\ref{fig:pr-curves} in the Appendix). In order to train \textsc{BERT}\ on this task, we use the IR candidates as the training instances. We then tune a threshold on the dev data to select the top relevant perspectives. In order to measure human performance, we create an interface where two human annotators see the IR top-$k$ candidates and select a \emph{minimal} set of perspectives (i.e., no two equivalent perspectives).
\subsubsection{Perspective stance classification (T2)}
\label{sec:exp2:stance}
We measure the quality of perspective stance classification, where the input is a claim-perspective pair, mapped to {\set{support, oppose}}. The candidate inputs are generated on the collection of perspectives $P(c)$ relevant to a claim $c$. To have an understanding of a lower bound for the metric, we measure the quality of an {always-support} baseline. We measure the performance of \textsc{BERT}\ on this task as well, which is about 20\% below human performance. This might be because this task requires a deep understanding of \emph{commonsense} knowledge/reasoning (as indicated earlier in Section~\ref{sec:analysis}). Since a retrieval system is unlikely to distinguish perspectives with different stances, we do not report the IR performance for this task.
\subsubsection{Perspective equivalence (T3)}
\label{sec:exp3:equivalent}
We create instances in the form of $(p_1, p_2, c)$ where $p_1, p_2 \in P(c)$. The expected label is whether the two perspectives belong to the same equivalence class or not. In the experiments, we observe that BERT has a significant performance gain of $\sim 36\%$ over the IR baseline. Meanwhile, this system is behind human performance by a margin of $\sim 20\%$.
\subsubsection{Extraction of supporting evidence (T4)}
\label{sec:exp4:evidence}
We evaluate the systems on the extraction of items from the pool of evidences $\mathcal{U}^e$, given a \emph{claim}-\emph{perspective}\ pair. To measure the performance of the IR system working with the index containing $\mathcal{U}^e$, we issue a query containing the concatenation of a perspective-claim pair. Given the sorted results (according to their retrieval confidence score), we select the top candidates using a threshold parameter tuned on the dev set. We also use the IR system's candidates (top-60) for other baselines. This set of candidates yields a $>$85\% recall (for the PR-curve, see Figure~\ref{fig:pr-curves} in the Appendix). We train the \textsc{BERT}\ system to map each (gold) \emph{claim}-\emph{perspective}\ pair to its corresponding \emph{evidence}\ paragraph(s). Since each evidence paragraph could be long (hence hard to feed into \textsc{BERT}), we split each evidence paragraph into sliding windows of 3 sentences. For each \emph{claim}-\emph{perspective}\ pair, we use all 3-sentence windows of the gold evidence paragraphs as positive examples, and the rest of the IR candidates as negative examples.
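A minimal sketch of the 3-sentence sliding-window construction, together with the run-time aggregation rule described next, is given below; the score threshold and the minimum fraction of positive windows are placeholders for the values tuned on the dev set.
\begin{verbatim}
def sentence_windows(sentences, size=3):
    # All contiguous 3-sentence windows of an evidence paragraph.
    if len(sentences) <= size:
        return [sentences]
    return [sentences[i:i + size] for i in range(len(sentences) - size + 1)]

def paragraph_supports(window_scores, score_threshold, min_fraction):
    # The paragraph counts as supporting the perspective if at least
    # `min_fraction` of its windows are predicted positive.
    positive = sum(s >= score_threshold for s in window_scores)
    return positive / len(window_scores) >= min_fraction
\end{verbatim}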
At run time, if a certain percentage (tuned on the dev set) of the sentences from a given evidence paragraph are predicted as positive by \textsc{BERT}, we consider the whole evidence paragraph as positive (i.e., it supports the given \emph{perspective}). Overall, performance on this task is lower, which is to be expected given the length of the evidence paragraphs. Similar to the previous scenarios, the \textsc{BERT}\ solver has a significant gain over a trivial baseline, while trailing human performance by a significant margin.
\begin{table}[]
\small
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{C{0.9cm}C{1.2cm}C{2.4cm}C{0.73cm}C{0.73cm}C{0.73cm}}
\toprule
Setting & Target set & System & \emph{Pre.} & \emph{Rec.} & $F1$ \\
\cmidrule(lr){1-1} \cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5} \cmidrule(lr){6-6}
\multirow{4}{1.5cm}{\rotatebox[origin=c]{90}{\parbox{1.1cm}{\centering T1: \\Perspective \\ relevance}}} & \multirow{4}{*}{ $\mathcal{U}^p$ } & IR & 46.8 & 34.9 & 40.0 \\
& & IR + BERT & 47.3 & 54.8 & \textbf{50.8} \\ \cmidrule(l){3-6}
& & IR + Human & 63.8 & 83.8 & 72.5\\ \midrule
\multirow{4}{1.5cm}{\rotatebox[origin=c]{90}{\parbox{1.1cm}{\centering T2: \\Perspective\\stance}} } & \multirow{4}{*}{\parbox{1.1cm}{$P(c)$}} & Always ``supp.'' & 51.6 & 100.0 & 68.0 \\
& & BERT & 70.5 & 71.1 & \textbf{70.8} \\ \cmidrule(l){3-6}
& & Human & 91.3 & 90.6 & 90.9 \\ \midrule
\multirow{5}{1.5cm}{ \rotatebox[origin=c]{90}{\parbox{1.1cm}{\centering T3: \\Perspective\\equivalence}} } & \multirow{5}{*}{\parbox{1.1cm}{$P(c)^2$}} & Always ``$\neg$equiv.'' & 100.0 & 11.9 & 21.3 \\
& & Always ``equiv.'' & 20.3 & 100.0 & 33.7 \\
& & IR & 36.5 & 36.5 & 36.5 \\
& & BERT & 85.3 & 50.8 & \textbf{63.7} \\ \cmidrule(l){3-6}
& & Human & 87.5 & 80.2 & 83.7\\ \midrule
\multirow{4}{1.5cm}{\rotatebox[origin=c]{90}{\parbox{1.1cm}{\centering T4: \\Evidence \\extraction }} } & \multirow{4}{*}{$\mathcal{U}^e$} & IR & 42.2 & 52.5 & 46.8 \\
& & IR + BERT & 69.7 & 46.3 & \textbf{55.7} \\ \cmidrule(l){3-6}
& & IR + Human & 70.8 & 53.1 & 60.7\\ \midrule
\multirow{3}{0.5cm}{\rotatebox[origin=c]{90}{\parbox{1.1cm}{\centering T5: Overall}}} & \multirow{3}{*}{$\mathcal{U}^p, \mathcal{U}^e$} & IR & - & - & 12.8 \\
& & IR + BERT & - & - & \textbf{17.5} \\ \cmidrule(l){3-6}
& & IR + Human & - & - & 40.0\\
\bottomrule
\end{tabular}
}
\caption{ Quality of different baselines on different sub-tasks (Section~\ref{sec:analysis}). All the numbers are in percentage. Top machine baselines are in \textbf{bold}. }
\label{tab:results}
\end{table}
\section{Discussion}
{ As one of the key consequences of the information revolution, \emph{information pollution} and \emph{over-personalization} have already had detrimental effects on our lives. In this work, we attempt to facilitate the development of systems that aid in better organization and access to information, with the hope that access to more diverse information can address over-personalization as well~\cite{VZRP14}.
The dataset presented here is not intended to be \emph{exhaustive}, nor does it attempt to reflect a true distribution of the important claims and perspectives in the world, or to associate any of the perspectives and identified evidence with levels of expertise and trustworthiness. Moreover, it is important to note that when we ask crowd-workers to evaluate the validity of perspectives and evidence, their judgement process can potentially be influenced by their prior beliefs \cite{MarkovitsNa89}.
To avoid introducing additional biases in the process of dataset construction, we take the least restrictive approach to filtering dataset content beyond the necessary quality assurances. For this reason, we choose not to explicitly ask annotators to filter content based on the intent of its creators (e.g., offensive content).
A few algorithmic components were not addressed in this work, although they are important to the complete \emph{perspective discovery and presentation} pipeline. For instance, one has to first verify that the input to the system is a reasonably well-phrased and argue-worthy claim. And, to construct the pool of perspectives, one has to extract relevant arguments \cite{LBHAS14}. In a similar vein, since our main focus is the study of the relations between \emph{claim} s, \emph{perspective} s, and \emph{evidence}, we leave out important issues such as their degree of factuality \cite{VlachosRi14} or trustworthiness \cite{PasternackRo14, PasternackRo10a} as separate aspects of the problem.}
We hope that some of these challenges and limitations will be addressed in future work.
\section{Conclusion}
\ignore{This work touches upon a class of claims for which answering them requires addressing multiple angles.}
The importance of this work is threefold: we define the problem of \emph{substantiated perspective discovery} and characterize language understanding tasks necessary to address this problem. We combine online resources, web data and crowdsourcing to create a high-quality dataset that can drive research on this problem. Finally, we build and evaluate strong baseline supervised systems for this problem. Our hope is that this dataset will bring more attention to this important problem and speed up progress in this direction.
There are two aspects that we defer to future work. First, the systems designed here assume that the inputs are valid claim sentences. To make use of such systems, one needs to develop mechanisms to recognize valid argumentative structures. In addition, we set aside trustworthiness and credibility, important research issues that are addressed in other work.
\section*{Acknowledgments}
The authors would like to thank Jennifer Sheffield, Stephen Mayhew, Shyam Upadhyay, Nitish Gupta and the anonymous reviewers for insightful comments and suggestions. This work was supported in part by a gift from Google and by Contract HR0011-15-2-0025 with the US Defense Advanced Research Projects Agency (DARPA). The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.
\section{Introduction}
Identifying statistically significant dependency between two or more sets of attributes serves as a key first check before further investigations are warranted. The space of possible attributes and their statistical dependencies is truly enormous, ranging from scalar values with relatively simple linear relationships to data with high dimensions, complex structures and nonlinear relationships. There are many traditional and more recent procedures for testing dependence on linear, nonlinear, low-dimensional Euclidean data with satisfactory performance, e.g., \cite{Pearson1895,KendallBook, szekely2007measuring, GrettonEtAl2005,gretton2007,GrettonGyorfi2010,HellerGorfine2013, mgc2}, while detecting relationships between high-dimensional or structurally complex data remains difficult and less well understood.
The large and still growing amount of structured data motivates the development of methods for such data. In particular, graphs are emerging as a prevalent form of data representation in many scientific areas, ranging from linguistics to neuroscience to sociology. Graphs have complex structures and relationships. One type of question about graphs is to ask whether a given pair of graphs are statistically dependent. For example, one could ask ``to what extent are the connectomes (brain graphs) of the left and right hemisphere of a species correlated with each other?'', or ``is the connectome constructed on chemical synapses statistically dependent on the connectome constructed on gap junctions? If so, how strong is such correlation?''. The answers to these questions would explain the presence or absence of relationships between the objects of interest.
We investigate a popular graph model, the $\rho$-correlated stochastic block model ($\rho$-SBM), and propose a statistical test for testing (conditional) dependence between two sample graphs from $\rho$-SBM. The test utilizes the adjacency matrices and the block permutation procedure. We prove the validity of the resulting procedure, and demonstrate its effectiveness on both simulated graphs and real brain graphs (connectomes).
\clearpage
\section{Preliminary}
\subsection{Correlated Bernoulli Graphs}
Let $\mathbf{G}: \Omega \xrightarrow{} \mathcal{G}$ be a graph-valued random variable with samples $G_i$. Each graph $G = (V, E)$ is defined by a set of $n$ vertices, $V = \{v_i\}_{i \in [n]}$, where $[n] = \{1, ..., n\}$, and a set of edges between pairs of vertices $E \subseteq V \times V$. Let $A: \Omega \xrightarrow{} \mathcal{A}$ be an adjacency matrix-valued random variable taking values $a \in \mathcal{A} \subseteq \mathbb{R}^{V \times V}$, identifying which pairs of vertices share edges. Here, the graph $G$ is undirected, so the corresponding adjacency matrix $A$ is symmetric.
The $\rho$-correlated Erdos-Renyi model was proposed as an intuitive way to capture correlations between graphs \cite{fishkind2018alignment}. The Erdos-Renyi model (ER) is a random graph in which each edge is sampled i.i.d.\ (independently and identically distributed) from a Bernoulli distribution with some parameter $p$. Let $G_{ij}$ be a random variable denoting whether there is an edge from vertex $i$ to vertex $j$ in graph $G$. $G, H$ are called \textit{$\rho$-correlated ER($p$)} if $G$ and $H$ are ER graphs with parameter $p$ and the two random variables $G_{ij}$ and $H_{ij}$ have Pearson's correlation $\rho$ for all $\{i,j\}\in {[n]\choose 2}$.
In fact, one can generalize the notion of $\rho$-correlated Bernoulli graphs by allowing for different marginal probabilities for the two graphs. Namely, $G, H$ are called $\rho$-correlated ER($p, q$) if $G$ is an ER($p$), $H$ is an ER($q$) and $G, H$ are $\rho$-correlated.
To sample a pair of $\rho$-correlated ER graphs, consider random variables $X \sim Bernoulli(p)$ and $Y \sim Bernoulli(q)$. Note that by the definition of Pearson's correlation, $\rho$ can be written as
\[ \rho = \frac{\mathbb{P}(X=1, Y=1)-pq}{\sqrt{p(1-p)q(1-q)}}. \]
Given this equation and the marginal probabilities of $X$ and $Y$, one can solve for the joint probability for each value of $X$ and $Y$ and obtain the following sampling procedure. First, realize $X \sim Bernoulli(p)$. Then, if $x=1$, independently realize
\[ Y \sim Bernoulli\left(q + \rho \sqrt{\frac{1-p}{p}q(1-q)}\right); \]
if $x=0$, independently realize
\[ Y \sim Bernoulli\left(q - \rho\sqrt{\frac{p}{1-p}q(1-q)}\right). \]
Note that this is only valid when
\[ \max\left\{-\sqrt{\frac{pq}{(1-p)(1-q)}}, -\sqrt{\frac{(1-p)(1-q)}{pq}}\right\} \leq \rho \leq \min\left\{\sqrt{\frac{p(1-q)}{q(1-p)}}, \sqrt{\frac{q(1-p)}{p(1-q)}}\right\}. \]
To generate a pair of $\rho$-correlated ER($p, q$) graphs, one can simply follow this procedure for each edge independently.
The Stochastic Block Model (SBM) is a generalization of ER. An SBM is parametrized by the \textit{block probability matrix} $B \in [0,1]^{k \times k}$, where $k$ is the number of blocks \cite{HollandEtAl1983}. Each \textit{community} is labeled $1, 2, \dots, k$. The entry $B_{ij}$ gives the probability of an edge from a node in community $i$ to a node in community $j$, for all $i, j \in [k]$. The community assignment of each node is given by the \textit{community membership function} $z: [n] \rightarrow [k]$. For a node $v \in [n]$, $z(v) = i$ means that node $v$ is a member of block $i$. ER is an SBM with $k=1$, so the block probability matrix is $B \in \ensuremath{\mathbb R}^{1 \times 1} = [p]$. A \textit{block} refers to a submatrix of the adjacency matrix formed by the edges connecting every node in community $i$ to every node in community $j$. Every edge within a block is sampled i.i.d.\ from $Bernoulli(B_{ij})$.
One can generalize $\rho$-ER to the \textit{$\rho$-correlated SBM} similarly to how one generalizes an ER to an SBM. Assuming the same community assignment (but possibly different block probability matrices), to generate a pair of $\rho$-correlated SBMs, one can follow the procedure of generating $\rho$-correlated ER graphs for each block separately.
A further generalization of the SBM is the Independent Edge graph (IE). An IE is parametrized by the \textit{edge probability matrix} $P \in [0,1]^{n \times n}$, where $n$ is the number of vertices. The probability of an edge from vertex $i$ to vertex $j$ is given by $P_{ij}$. An ER is an IE with $P_{ij} = p$ for all $i, j$, and an SBM is an IE with $P_{ij} = B_{z(i), z(j)}$. $G$ and $H$ are called \textit{$\rho$-correlated Bernoulli graphs} if $G, H$ are both IE graphs and the random variables $G_{ij}$ and $H_{ij}$ have Pearson's correlation of $\rho$ for all $i, j$.
Under the setting of correlated Bernoulli graphs, the null hypothesis of the graph independence test is $\rho=0$, and the alternative is $\rho \neq 0$.
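The construction above translates directly into the following sketch; it is our own transcription rather than the reference implementation, and $\rho$-correlated SBMs can be sampled by applying the same routine block by block.
\begin{verbatim}
import numpy as np

def sample_rho_er(n, p, q, rho, rng=None):
    # Adjacency matrices of a pair of rho-correlated ER(p, q) graphs,
    # sampled edge by edge via the conditional Bernoulli construction.
    # Only valid when rho satisfies the bound stated above.
    rng = np.random.default_rng() if rng is None else rng
    q_given_1 = q + rho * np.sqrt((1 - p) / p * q * (1 - q))  # P(Y=1 | X=1)
    q_given_0 = q - rho * np.sqrt(p / (1 - p) * q * (1 - q))  # P(Y=1 | X=0)
    X, Y = np.zeros((n, n)), np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            x = rng.binomial(1, p)
            y = rng.binomial(1, q_given_1 if x == 1 else q_given_0)
            X[i, j] = X[j, i] = x
            Y[i, j] = Y[j, i] = y
    return X, Y
\end{verbatim}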
\subsection{Correlated Gaussian Graphs}
Correlated Bernoulli graphs are binary by definition. To sample correlated weighted graphs, we leverage the joint normal distribution and introduce \textit{Correlated Gaussian Graphs}.
$G$ and $H$ are called $\rho$-correlated Gaussian ER($\mu, \sigma$) if every pair of edges $G_{ij}, H_{ij}$ is sampled from a joint normal distribution with mean $\mu$, variance $\sigma^2$ and covariance $\rho$. One can generalize $\rho$-correlated Gaussian ER($\mu, \sigma$) to have different marginal distributions. Namely, $G$ and $H$ are called $\rho$-correlated Gaussian ER($\mu, \Sigma$), where $\mu=(\mu_x, \mu_y), \Sigma_{11} = \sigma_x^2, \Sigma_{22} = \sigma_y^2, \Sigma_{12}=\Sigma_{21}=\rho$, if $G_{ij}, H_{ij} \sim \mathcal{N}(\mathbf{\mu}, \Sigma)$ for all $i, j$. One can further generalize $\rho$-correlated Gaussian ER to $\rho$-correlated Gaussian SBM by following the procedure of generating $\rho$-correlated Gaussian ER for each block separately.
In the rest of this paper, we refer to $\rho$-correlated Bernoulli graphs and $\rho$-correlated Gaussian graphs collectively as \textit{$\rho$-correlated graphs}.
\subsection{Conditional independence testing}
Conditional independence testing, also referred to as \textit{partial testing}, is the testing of independence between conditional distributions \cite{hothorn2008implementing}. Conditional independence is important if one is interested in identifying correlation given some known structure in the data. As a concrete example, given the connectomes of two individuals, we might observe the same block structure in both graphs because the brain of each individual is segmented into two hemispheres. The correlation due to such inherent structure might be interesting, but often we are more interested in any correlation that might exist \textit{in addition to} the structural correlation, rendering partial testing an important problem.
A conditional independence testing problem can be formulated under the setting of $\rho$-correlated graphs. Let $G$ and $H$ be two $\rho$-correlated SBMs with corresponding adjacency matrices $X$ and $Y$, jointly sampled from a distribution $F_{G, H}$. Since for $\rho$-correlated graphs both graphs have the same set of uniquely labeled vertices, all variability is in the adjacency matrix, that is, $V_G = V_H$ and $F_{G, H} = F_{X, Y}$. For notational simplicity, we will use the same notation to refer to the graphs and the adjacency matrices for the rest of this paper. Furthermore, let $F_{X|z}$ and $F_{Y|z}$ be the marginal distributions of the adjacency matrices \textit{conditioning on} the community assignments. To determine whether $X, Y$ are independent given the community assignments, the following hypothesis is tested: $H_0: F_{X, Y | z} = F_{X|z} F_{Y|z}$ and $H_a: F_{X, Y | z} \neq F_{X|z} F_{Y|z}$.
Given a pair of $\rho$-correlated SBMs, without conditioning on the community assignments, even when $\rho=0$ both graphs still have the same block structure, which leads to some correlation between them. But when conditioning on the community assignments, any remaining correlation must be due to the correlation between edges. Therefore, in the paradigm of conditional testing, the null hypothesis asserts that $\rho=0$.
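As a concrete illustration of the weighted model introduced above, the following sketch (ours, not the authors' implementation) samples a pair of $\rho$-correlated Gaussian ER graphs by drawing every upper-triangular edge pair from a bivariate normal with the parameterization $\Sigma_{12}=\Sigma_{21}=\rho$; it is only valid when $\Sigma$ is positive semi-definite.
\begin{verbatim}
import numpy as np

def sample_rho_gaussian_er(n, mu_x, mu_y, sigma_x, sigma_y, rho, rng=None):
    # Weighted rho-correlated Gaussian ER: each undirected edge pair
    # (G_ij, H_ij) is drawn from a bivariate normal.
    rng = np.random.default_rng() if rng is None else rng
    cov = np.array([[sigma_x ** 2, rho], [rho, sigma_y ** 2]])  # must be PSD
    m = n * (n - 1) // 2                       # number of undirected edges
    edges = rng.multivariate_normal([mu_x, mu_y], cov, size=m)
    X, Y = np.zeros((n, n)), np.zeros((n, n))
    iu = np.triu_indices(n, k=1)
    X[iu], Y[iu] = edges[:, 0], edges[:, 1]
    X, Y = X + X.T, Y + Y.T                    # symmetrize; diagonal stays zero
    return X, Y
\end{verbatim}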
\subsection{Distance correlation (DCorr)}
Distance correlation is a generalization of the classical Pearson correlation to random vectors with arbitrary dimensions or in metric spaces \cite{szekely2007measuring}.
Consider semimetric spaces $(\ensuremath{\mathcal U}, d_{\ensuremath{\mathcal U}})$ and $(\ensuremath{\mathcal W}, d_{\ensuremath{\mathcal W}})$ with distance functions $d_{\ensuremath{\mathcal U}}: \ensuremath{\mathcal U} \times \ensuremath{\mathcal U} \rightarrow \ensuremath{\mathbb R}$ and $d_{\ensuremath{\mathcal W}}: \ensuremath{\mathcal W} \times \ensuremath{\mathcal W} \rightarrow \ensuremath{\mathbb R}$. Consider random variables $U: \Omega \rightarrow (\ensuremath{\mathcal U}, d_{\ensuremath{\mathcal U}})$ and $W: \Omega \rightarrow (\ensuremath{\mathcal W}, d_{\ensuremath{\mathcal W}})$. The {\it distance covariance} function for any random variables $U \in \ensuremath{\mathcal U}$ and $W \in \ensuremath{\mathcal W}$ is defined as the positive square root of the following ($U'$ and $W'$ are independent copies of $U$ and $W$ respectively) \cite{sejdinovic2013equivalence}:
\begin{align*} \mathcal{V}^2_{d_{\ensuremath{\mathcal U}}, d_{\ensuremath{\mathcal W}}}(U,W)&=\ensuremath{\mathbb E}_{UW}[\ensuremath{\mathbb E}_{U'W'}[d_{\ensuremath{\mathcal U}}(U,U')d_{\ensuremath{\mathcal W}}(W,W')]] \\ &+\ensuremath{\mathbb E}_{U}[\ensuremath{\mathbb E}_{U'}[d_{\ensuremath{\mathcal U}}(U,U')]]\ensuremath{\mathbb E}_{W}[\ensuremath{\mathbb E}_{W'}[d_{\ensuremath{\mathcal W}}(W,W')]]\\ &-2\ensuremath{\mathbb E}_{UW}[\ensuremath{\mathbb E}_{U'}[d_{\ensuremath{\mathcal U}}(U,U')]\ensuremath{\mathbb E}_{W'}[d_{\ensuremath{\mathcal W}}(W,W')]] \end{align*}
$\mathcal{V}^2_{d_{\ensuremath{\mathcal U}}, d_{\ensuremath{\mathcal W}}}(U,W)$ is zero if and only if $U$ and $W$ are independent and is non-zero otherwise. Usually, it is assumed that $\ensuremath{\mathcal U} = \ensuremath{\mathbb R}^p$ and $\ensuremath{\mathcal W} = \ensuremath{\mathbb R}^q$ and the metric is Euclidean distance, but the setting considered herein is slightly different.
Given the fact that the distance covariance function characterizes whether $U$ and $W$ are independent, the graph independence testing problem can be described under this formulation. Let $\ensuremath{\mathcal U} = \ensuremath{\mathcal W} = V$ be the set of vertices of graphs $G$ and $H$, let function $z$ denote the community assignment of each vertex in $V$, and let the distances $d_{\ensuremath{\mathcal U}}(v_i, v_j)$ and $d_{\ensuremath{\mathcal W}}(v_i, v_j)$ be functions of the adjacency matrix entries $X_{ij}, Y_{ij}$ respectively (we introduce the notion of \textit{kernel-induced distance} explicitly in Section \ref{sec:methods}). Then consider random variables $U: \Omega \rightarrow (\ensuremath{\mathcal U}, d_{\ensuremath{\mathcal U}})$ and $W: \Omega \rightarrow (\ensuremath{\mathcal W}, d_{\ensuremath{\mathcal W}})$. Informally, the two metric spaces both include the same set of vertices with the same community assignments, and use the kernel-induced distance as distance functions. Given the definition above, the more formal formulation of the hypothesis under testing is the following: $H_0: F_{U, W|z} = F_{U|z} F_{W|z}$ and $H_a: F_{U, W|z} \neq F_{U|z} F_{W|z}$. For notational simplicity, we drop the subscript $d_{\ensuremath{\mathcal U}}, d_{\ensuremath{\mathcal W}}$.
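As a reference point, a biased sample version of the distance covariance (the double-centering estimator of \cite{szekely2007measuring}) can be computed directly from the two pairwise distance matrices, as sketched below; the paper uses the unbiased estimator of \cite{szekely2013distance}, which differs only in the centering and scaling.
\begin{verbatim}
import numpy as np

def dcov_sq_biased(D_u, D_w):
    # Biased sample distance covariance from two distance matrices:
    # double-center each matrix, then average the entrywise product.
    def center(D):
        return D - D.mean(axis=0) - D.mean(axis=1, keepdims=True) + D.mean()
    A, B = center(D_u), center(D_w)
    return (A * B).mean()
\end{verbatim}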
The distance covariance function can be normalized to the {\it distance correlation} function $\mathcal{R}$ as:
\[\mathcal{R}^2(U,W) = \begin{cases} \frac{\mathcal{V}^2(U,W)}{\sqrt{\mathcal{V}^2(U,U)\mathcal{V}^2(W,W)}} &\text{if } \mathcal{V}^2(U,U)\mathcal{V}^2(W,W) > 0 \\ 0 &\text{if } \mathcal{V}^2(U,U)\mathcal{V}^2(W,W) = 0 \end{cases} \]
Given samples $(U_1,W_1),...,(U_n,W_n)$ jointly sampled from $F_{UW}$, an unbiased estimate of $\mathcal{V}^2(U,W)$ based solely on the sample distance matrices is described in \cite{szekely2013distance}. This sample test statistic is used for \texttt{DCorr} in Algorithm \ref{alg:dcorr} in the Appendix.
\subsection{Multiscale Graph Correlation (MGC)}
Multiscale Graph Correlation (MGC) builds upon distance correlation by exploring all local distance correlations and efficiently searching for the optimal scale. The algorithm is described in detail in \cite{vogelstein2019discovering}, where it is demonstrated that, compared to distance correlation, MGC loses almost no power on monotonic dependencies and achieves better finite-sample power on high-dimensional data with non-monotonic relationships. It has been shown that many theoretical properties that hold for distance correlation also hold for MGC \cite{shen2018distance}. Similar to distance correlation, MGC takes as input two sample distance matrices and outputs a test statistic indicating the strength of the correlation.
\section{Methods}\label{sec:methods}
\subsection{Graph Correlation using Induced Metric from Adjacency Matrix}
One can view the adjacency matrix $X=\{X_{ij}\}$ of an undirected graph as a similarity or kernel matrix, where the similarity of nodes $v_i$ and $v_j$ is the weight of the edge $X_{ij}$ between them. A kernel can be converted into a distance metric using the bijective transformation between metrics and kernels introduced in \cite{shen2018exact}. To that end, each adjacency entry is normalized by the maximum within the matrix (which is $1$ for unweighted graphs), and the diagonal of the adjacency matrix is adjusted to ensure the transformed distance satisfies the identity property, i.e., the distance of each node to itself is $0$. The transformed metric used in this paper is $D = J - (I + X / \max_{s, t \in [1, n]}X_{st})$, where $I$ is the identity matrix and $J$ is the matrix of ones.
After computing the distance matrices for both graphs, one can use any proper correlation measure that takes distance matrices as input, such as MGC or DCorr. The sample test statistic described in \cite{szekely2013distance} is used for DCorr and the test statistic in \cite{vogelstein2019discovering} is used for MGC.
\subsection{Pearson Correlation}
Alternatively, one can ignore the graph structure entirely and calculate the Pearson correlation of the vectorized adjacency matrices as a measure of their correlation. The test statistic can be expressed as follows ($\Bar{X}$ and $\Bar{Y}$ denote the overall means of the adjacency matrices $X, Y$ respectively):
\[r_{XY} = \frac{ \sum_{i,j=1}^n(X_{ij} - \Bar{X})(Y_{ij} - \Bar{Y})}{\sqrt{\sum_{i,j=1}^n(X_{ij}-\Bar{X})^2\sum_{i,j=1}^n(Y_{ij}-\Bar{Y})^2 }}\]
Vectorization is necessary since Pearson only operates on 1-dimensional data. Pearson assumes the samples are i.i.d., but this assumption is violated for $\rho$-correlated Bernoulli graphs in general, except under the special setting of $\rho$-ER.
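Before examining the effect of this violation, the two ingredients just described can be summarized in a short sketch (ours, with illustrative names): the kernel-induced distance transformation fed to the distance-based statistics, and the Pearson statistic on the vectorized adjacency matrices.
\begin{verbatim}
import numpy as np

def adjacency_to_distance(X):
    # D = J - (I + X / max(X)); for an unweighted graph max(X) = 1 and,
    # since X has a zero diagonal (no self-loops), the diagonal of D is zero.
    # Assumes the graph has at least one edge, so X.max() > 0.
    n = X.shape[0]
    return np.ones((n, n)) - (np.eye(n) + X / X.max())

def pearson_graph_corr(X, Y):
    # Pearson correlation of the vectorized adjacency matrices.
    return np.corrcoef(X.ravel(), Y.ravel())[0, 1]
\end{verbatim}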
As an example, in general, for $\rho$-SBM, each pair of edges is sampled independently, but potentially under different distributions (namely, edges in different blocks are sampled under Bernoulli with different parameters). We investigate how this violation of the i.i.d assumption affects the correlation-based procedure in Section \ref{sec:simulation}. Both Algorithm \ref{alg:dcorr} and \ref{alg:pearson} in the Appendix are procedures to compute a test statistics for measuring correlation between graphs, and they are referred to as \texttt{GCorr} (graph correlation) in subsequent algorithms. \subsection{Computing p-value} Computing p-value requires using a permutation test to estimate the distribution of the test statistic under the null. Under the setting of $\rho$-correlated graphs, in general, a naive permutation of the row-column pairs of the adjacency matrix would result in an invalid test for $\rho$-SBM (the power would converge to 1 under the null), because the distribution of the permuted matrices does not approximate the null distribution at $\rho=0$. Intuitively, since the block structure is the same in both graphs, one implicitly desires a conditional independence testing, which is not enabled by the usual permutation test procedure. In general, it is not clear what a valid permutation test would be for $\rho$-correlated graphs. Such permutation should preserve the inherent graph structure while smearing the edge correlations. Under $\rho$-SBM, however, since the inherent graph structure is captured completely by the block structure, one can perform a \textit{block permutation test} (Algorithm \ref{alg:bpermute} in the Appendix) \cite{hothorn2008implementing}. Namely, given the community assignment of nodes, the edges \textit{within} each block are permuted, which preserves the block structure and thus is able to approximate the null distribution. In practice, we usually don't know the community assignment of each node. Assuming the vertices of both graphs are matched (there is a bijection between the vertices of both graphs), we can use a Joint Random Dot Product Graph (JRDPG) model to estimate the community assignment jointly. JRDPG is a procedure to embed multiple graphs sampled under some joint distribution. It works by finding the adjacency spectral embedding (ASE) of a matrix formed by concatenating the ASE of each of the jointly sampled graphs. If the graphs are sampled under an SBM, one can recover the communities by clustering the embeddings \cite{SussmanEtAl2012}. The procedure to estimate the community assignment is Algorithm \ref{alg:best}, and the procedure to compute p-value is Algorithm \ref{alg:pvalue} (both algorithms are in the Appendix). In Algorithm \ref{alg:best}, one needs to choose the parameters $d$, the number of dimensions of the latent positions, and $k$, the prior estimate of the number of communities. $d$ is chosen via the scree plots of the singular values \cite{zhu2006automatic}, while $k$ can be chosen with prior knowledge about the graph structure, or one can select the optimal number of clusters in a Gaussian Mixture Model (GMM) by selecting the clustering with the best Bayesian Information Criterion (BIC). 
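A minimal sketch of the block permutation step and the resulting permutation p-value is given below, assuming known community assignments $z$ and an undirected graph; centering the two-sided comparison at the permutation mean is our own choice here and may differ in detail from Algorithms \ref{alg:bpermute} and \ref{alg:pvalue}.
\begin{verbatim}
import numpy as np

def block_permute(X, z, rng=None):
    # Shuffle edge values within every block (pair of communities),
    # preserving the block structure and the symmetry of X.
    rng = np.random.default_rng() if rng is None else rng
    z = np.asarray(z)
    Xp, labels = X.copy(), np.unique(z)
    for a in labels:
        for b in labels:
            if a > b:
                continue
            rows, cols = np.where(np.outer(z == a, z == b))
            if a == b:                      # diagonal block: upper triangle only
                keep = rows < cols
                rows, cols = rows[keep], cols[keep]
            vals = Xp[rows, cols]
            rng.shuffle(vals)
            Xp[rows, cols] = vals
            Xp[cols, rows] = vals
    return Xp

def block_permutation_pvalue(X, Y, z, stat, n_perm=1000, rng=None):
    # Two-sided p-value: how extreme is stat(X, Y) relative to the
    # block-permuted null, which concentrates near a constant c (not 0).
    rng = np.random.default_rng() if rng is None else rng
    observed = stat(X, Y)
    null = np.array([stat(block_permute(X, z, rng), Y) for _ in range(n_perm)])
    center = null.mean()
    extreme = np.abs(null - center) >= np.abs(observed - center)
    return (extreme.sum() + 1) / (n_perm + 1)
\end{verbatim}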
\section{Theoretical Results} \label{sec:theory}
To derive the theoretical properties of the proposed test, we operate under the setting of the $\rho$-correlated SBM (which can be unweighted $\rho$-correlated Bernoulli graphs or weighted $\rho$-correlated Gaussian graphs), as stated in the following assumption:
\begin{assumption}\label{rho-sbm} The adjacency matrices $X, Y$ are sampled jointly from a $\rho$-correlated SBM distribution. \end{assumption}
First, we show that the test is valid, i.e., it properly controls the type I error. We prove validity by showing that block permutation results in a test statistic that is equal in distribution to the test statistic under the null. To simplify the proof, we write $\mathcal{V}^2(U, W)$ as $\mathcal{V}^2(X, Y)$, where $X, Y$ are the adjacency matrices used by $U, W$ as kernels respectively. $\mathcal{V}^2(U, W)$ is the same as $\mathcal{V}^2(X, Y)$ in the sense that distance correlation is computed by first computing a distance matrix for $U$ and $W$ respectively, using the kernel-induced distances of $X$ and $Y$.
\begin{proposition}\label{validity} Under Assumption \ref{rho-sbm}, let $\pi$ be the block permutation procedure in Algorithm \ref{alg:bpermute}, and assume $X$ and $Y$ are conditionally independent, that is, they are $\rho$-correlated SBM (either Bernoulli or Gaussian) with $\rho=0$. It follows that $\mathcal{V}^2(X^{\pi}, Y) \stackrel{D}{=} \mathcal{V}^2(X, Y)$, i.e., the block permutation test is a valid test procedure for $\rho$-correlated SBM. \end{proposition}
In conditional testing, the Pearson, DCorr and MGC statistics are no longer $0$ under conditional independence. Instead, the test statistics under the null converge to non-zero constants that depend on the actual distributions. Moreover, since the adjacency matrix is not positive semi-definite, the constant could be negative (whereas in the case of Euclidean data and Euclidean distance, DCorr is asymptotically non-negative).
\begin{theorem}\label{consistency} For any of the Pearson, DCorr and MGC statistics, $\mathcal{R}^2(U, W) = \rho$ for $\rho$-ER. For $\rho$-SBM with fixed marginals, there exists a unique constant $c$ such that $\mathcal{R}^2(U, W) = c$ if and only if $F_{U, W \mid z} = F_{U \mid z} F_{W \mid z}$. Therefore, any of the three sample correlations using block permutation is consistent against all possible alternatives under $\rho$-SBM. Note that the theorem holds for either the binary $\rho$-ER / $\rho$-SBM from Bernoulli, or the weighted $\rho$-ER / $\rho$-SBM from Gaussian. \end{theorem}
Theorem \ref{consistency} is supported by the simulation results in Figures \ref{fig:all_ts} and \ref{fig:all_power}: all three correlations coincide with each other, equal $\rho$ in the case of ER, and are otherwise a linear function of $\rho$ in the case of SBM. The proofs of both Proposition \ref{validity} and Theorem \ref{consistency} are in the Appendix.
\section{Simulated Experiments} \label{sec:simulation}
\subsection{Test statistics} \label{sec:ts}
We corroborate the theory using simulations with $\rho$-correlated graphs, for which we can sample a pair of graphs while controlling their correlation $\rho$ exactly. For this experiment, we compare the test statistics of Pearson and DCorr with the correlation $\rho$ used to generate different settings of $\rho$-correlated Bernoulli graphs. We use Algorithm \ref{alg:pearson} for Pearson and Algorithm \ref{alg:dcorr} for DCorr (both algorithms are in the Appendix).
The simulation settings are: (a): $\rho$-ER, $p=q=0.5$; (b): $\rho$-ER, $p=0.7 \neq q=0.2$; (c): $\rho$-SBM, the block probability matrices of the two graphs $B^x = B^y = B \in \ensuremath{\mathbb R}^{2 \times 2}$, where $B_{ij}=0.7$ when $i=j$, and $B_{ij}=0.3$ when $i\neq j$; (d): $\rho$-SBM, $B^x \neq B^y \in \ensuremath{\mathbb R}^{2 \times 2}$, where $B^x_{ij}=0.7, B^y_{ij}=0.2$ when $i=j$, and $B^x_{ij}=0.3, B^y_{ij}=0.5$ when $i \neq j$. All the community assignments are given instead of being estimated.
Figure \ref{fig:all_ts} shows that for $\rho$-ER, both the Pearson and DCorr test statistics estimate $\rho$ perfectly. In particular, they are zero only when $\rho=0$. This aligns with Theorem \ref{consistency}. For $\rho$-SBM, both test statistics are still the same, but they are no longer zero when $\rho=0$. This is also expected based on Theorem \ref{consistency}. Intuitively, the test statistics differ from $\rho$ because they capture not only the correlation between pairs of edges, but also the correlation due to the block structure of SBMs. The test statistics of both DCorr and Pearson motivate a two-sided test of conditional independence, which we describe in the next section.
\begin{figure}
\centering
\includegraphics[width=.98\textwidth]{figures/all_teststats.png}
\caption{Test statistics on $\rho$-correlated Bernoulli graphs. For each setting, the graphs have 100 vertices. Test statistics are computed over 500 replications; the mean is plotted and the error bars show one standard deviation. The simulation setting for each subplot is described at the beginning of Section \ref{sec:ts}. Each subplot has a different range for $\rho$ because the minimum and maximum $\rho$ differ for different marginal distributions. This suggests that the different test statistics accurately reflect the correlation structure. }
\label{fig:all_ts}
\end{figure}
\subsection{Power}\label{sec:power}
We use a power experiment to check that the test has the following properties: (1) validity: the power is at most the type I error level $\alpha$ under the null; (2) consistency: the power increases to 1 as the sample size, i.e., the number of vertices $n$, goes to infinity. We compare the power of Pearson, DCorr and MGC on $\rho$-ER and $\rho$-SBM, both when the two graphs are sampled from distributions with the same probability matrix and when the distributions have different probability matrices. For MGC, we use Algorithm \ref{alg:dcorr} but substitute the DCorr sample statistic with the MGC sample statistic. For $\rho$-SBM, we compute the power when the community assignments are given, when the assignments are unknown and estimated, and when the block sizes are different. The algorithm for calculating the power on $\rho$-correlated SBM is in Algorithm \ref{alg:power} in the Appendix.
The simulation settings are the following: the left column shows $\rho=0$, the middle column shows $\rho=0.1$, and the right column shows $\rho=-0.1$. Rows 1-4 are the same as in Section \ref{sec:ts}; row 5 is the same as row 4, except that the community assignments are estimated instead of given; row 6 is the same as row 5, except that the block sizes are different: for each $n$, 70 percent of the nodes are in the first community and 30 percent are in the second community. Visualizations of some simulation settings are in Figure \ref{fig:all_graphs}. All simulation results for $\rho$-correlated Bernoulli graphs are in Figure \ref{fig:all_power}.
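The power computation itself reduces to a simple Monte Carlo loop, sketched below; \texttt{sampler} and \texttt{test\_pvalue} are placeholders for the graph samplers and the block-permutation test defined earlier, and Algorithm \ref{alg:power} in the Appendix remains the authoritative description.
\begin{verbatim}
import numpy as np

def estimate_power(sampler, test_pvalue, alpha=0.05, n_reps=500, rng=None):
    # Empirical power: fraction of replicates in which the test rejects at level alpha.
    rng = np.random.default_rng() if rng is None else rng
    rejections = 0
    for _ in range(n_reps):
        X, Y = sampler(rng)
        if test_pvalue(X, Y) <= alpha:
            rejections += 1
    return rejections / n_reps
\end{verbatim}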
The results show that all the tests using block permutation (Pearson, DCorr and MGC) are valid and consistent, and have similar power under all settings. For comparison, the power of Pearson using the exact analytic p-value instead of block permutation is also computed (the samples are pairs of edges in the vectorized upper triangles of both graphs \cite{student1908probable}, and the null is rejected if the p-value is less than the type I error level $\alpha$). Without block permutation, the test is invalid for $\rho$-SBM. The same results also hold for the weighted $\rho$-correlated Gaussian ER and SBM, shown in Figure \ref{fig:all_gaussian_power}. For implementation, we use the Python package \texttt{GraSPy} for graph generation and manipulation \cite{chung2019graspy} and \href{https://mgcpy.readthedocs.io/}{\texttt{mgcpy}} for various functionalities related to independence testing. \begin{figure} \centering \includegraphics[width=.7\textwidth]{figures/all_graphs.png} \caption{Visualization of different settings of $\rho$-correlated graphs. All the graphs shown have 100 vertices and 2 communities. The first row shows $\rho$-correlated Bernoulli SBMs with different probability matrices when $\rho=0.1$. The block probability matrices are $B^x \neq B^y \in \ensuremath{\mathbb R}^{2 \times 2}$, where $B^x_{ij}=0.7, B^y_{ij}=0.2$ when $i=j$, and $B^x_{ij}=0.3, B^y_{ij}=0.5$ when $i \neq j$. The second row is the same as the first row, except the block sizes are different: 70 vertices are in the first community and 30 vertices are in the second community. The third row shows weighted $\rho$-correlated Gaussian SBMs when $\rho=0.1$. $\mu_x=2 \neq \mu_y=4$ for the first block, and $\mu_x=0 \neq \mu_y=2$ for the second block. The covariance matrix is $\Sigma \in \ensuremath{\mathbb R}^{2 \times 2}$, where $\Sigma_{ij}=1$ if $i=j$ and $\Sigma_{ij}=\rho$ if $i \neq j$.} \label{fig:all_graphs} \end{figure} \begin{figure} \centering \includegraphics[height=.75\textheight]{figures/all_power_ttest.png} \caption{Power experiments using $\rho$-correlated Bernoulli graphs. The left column shows $\rho=0$, the middle column shows $\rho=0.1$, and the right column shows $\rho=-0.1$. Rows 1-4 are the same as described in Section \ref{sec:ts}; row 5 is the same as row 4, except the community assignments are estimated instead of given; row 6 is the same as row 5, except the block sizes are different: for each $n$, 70 percent of the nodes are in the first community and 30 percent in the second community. Power is computed over 500 Monte Carlo replicates for Pearson, DCorr and MGC with block permutation. Power for Pearson using the exact analytical p-value is computed over 5000 Monte Carlo replicates.} \label{fig:all_power} \end{figure} \begin{figure} \centering \includegraphics[height=.6\textheight]{figures/all_power_gaussian.png} \caption{Power experiments using $\rho$-correlated Gaussian graphs. The left column shows $\rho=0$, the middle column shows $\rho=0.1$, and the right column shows $\rho=-0.1$. The simulation settings for the rows are: (1): $\rho$-ER, $\mu_x=\mu_y=0$; (2): $\rho$-ER, $\mu_x=0 \neq \mu_y=2$; (3): $\rho$-SBM, $\mu_x=\mu_y=0$ for the first block, and $\mu_x=\mu_y=2$ for the second block; (4): $\rho$-SBM, $\mu_x=2 \neq \mu_y=4$ for the first block, and $\mu_x=0 \neq \mu_y=2$ for the second block. The covariance matrix for all settings is $\Sigma \in \ensuremath{\mathbb R}^{2 \times 2}$, where $\Sigma_{ij}=1$ if $i=j$ and $\Sigma_{ij}=\rho$ if $i \neq j$.
All the community assignments are given instead of being estimated. Power is computed over 500 Monte Carlo replicates.} \label{fig:all_gaussian_power} \end{figure} \section{Real Data Experiments}\label{sec:real} We consider the application of the graph conditional independence testing procedure on connectomes, also known as ``brain graphs''. The connectome of the nematode \textit{Caenorhabditis elegans} (\textit{C. elegans}) is the only known whole-animal connectome, including not only neurons but also body cells. It was reconstructed from serial electron microscopy, and has been updated and made more complete over the years \cite{white1976structure, white1985neuronal, white1986structure, varshney2011structural, cook2019connectome}. The connectome is constructed at the level of individual synapses, and is constructed for both chemical synapses and gap junctions. The graphs are directed and weighted. The nodes of the graph are individual cells, and the edges represent the strength of synapses from one cell to another. The data provides an invaluable source of information to study the coordination of the nervous system within the entirety of an organism. Given such data, one natural initial question to ask is whether the graph constructed based on chemical synapses and that constructed based on gap junctions are statistically dependent on each other, and if so, how strong the correlation is. To answer this question with the proposed testing procedure, we chose the chemical and gap junction connectomes of the hermaphrodite, one of the two sexes of adult \textit{C. elegans}. However, the graph independence testing procedure cannot be directly applied to the original graphs for two reasons: (1) not all the cells that are present in the chemical synapses connectome are present in the gap junctions connectome, and vice versa; (2) the original graphs are directed. To address (1), since each vertex is labeled with a unique cell name, we take the intersection of the cells in each of the graphs, and ensure the vertices of the two graphs are matched. After taking the intersection, each graph has 448 nodes, which include all neurons and the cells that the neurons synapse onto. To address (2), the average weight of the two directed edges between a pair of nodes is used as the edge weight between them, rendering the graphs symmetric and thus undirected. The weighted and unweighted graphs after preprocessing are shown in Figure \ref{fig:celegans_graphs}. \begin{figure} \centering \includegraphics[width=.98\textwidth]{figures/all_celegans_graphs.png} \caption{Visualization of the \textit{C. elegans} connectomes. The first row shows the connectomes of the chemical synapses and gap junctions of \textit{C. elegans} represented as adjacency matrices of undirected, weighted graphs. The chemical graph is on the left, the gap junction graph is on the right. The second row shows the unweighted version of the adjacency matrices of the chemical and gap junction graphs (all edge weights larger than zero are set to one).} \label{fig:celegans_graphs} \end{figure} To derive a p-value from the graph conditional independence test, the number of blocks $k$ used in block permutation needs to be set. By default, we pick the $\hat{k}$ that results in the GMM clustering with the lowest BIC, as in Algorithm \ref{alg:best}. The estimated community assignment of the unweighted graphs is shown in Figure \ref{fig:celegans_block}.
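The preprocessing and the default choice of $k$ can be summarized in a short sketch. The vertex matching by cell name and the symmetrization by averaging the two directed edge weights follow the description above; the community estimation step is only an assumed recipe (spectral embedding of the adjacency matrix followed by GMM clustering, with $k$ chosen by BIC), since Algorithm \ref{alg:best} itself is given in the Appendix. The variables \texttt{chem\_edges}, \texttt{gap\_edges} and \texttt{shared\_cells} are placeholders for however the connectome data is loaded.

\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

def to_undirected_adjacency(edges, cells):
    # edges: dict mapping (source cell, target cell) -> synaptic weight
    # cells: ordered list of cell names common to both connectomes
    index = {c: i for i, c in enumerate(cells)}
    A = np.zeros((len(cells), len(cells)))
    for (src, dst), w in edges.items():
        if src in index and dst in index:
            A[index[src], index[dst]] = w
    return (A + A.T) / 2.0            # average the two directions

def estimate_blocks(A, k_max=20, d=8, seed=0):
    # Spectral embedding of the adjacency matrix, then GMM clustering with the
    # number of components chosen by BIC (an assumed stand-in for the
    # BIC-based block selection described in the text).
    vals, vecs = np.linalg.eigh(A)
    top = np.argsort(np.abs(vals))[::-1][:d]
    emb = vecs[:, top] * np.sqrt(np.abs(vals[top]))
    best = (np.inf, None, None)
    for k in range(2, k_max + 1):
        gmm = GaussianMixture(n_components=k, random_state=seed).fit(emb)
        bic = gmm.bic(emb)
        if bic < best[0]:
            best = (bic, k, gmm.predict(emb))
    return best[1], best[2]

# X = to_undirected_adjacency(chem_edges, shared_cells)
# Y = to_undirected_adjacency(gap_edges, shared_cells)
# k_hat, labels = estimate_blocks((X > 0).astype(float))
\end{verbatim}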
\begin{figure} \centering \includegraphics[width=.98\textwidth]{figures/all_celegans_graphs_block.png} \caption{Visualization of the unweighted \textit{C. elegans} connectomes, with the vertices sorted by the estimated community assignments. The optimal estimated number of blocks is 13.} \label{fig:celegans_block} \end{figure} Since graphs derived from real data do not arise from an SBM, we do not expect any given $k$ to perfectly estimate the community assignment of each node. But given a relatively large number of nodes, one can get a better fit with an SBM by increasing $k$ while still maintaining the validity of block permutation. Figure \ref{fig:celegans_ts} shows that for any reasonably chosen $k$, the test detects a strong dependency between the connectomes of chemical synapses and gap junctions, meaning that over and above the block structure, one could in theory predict the presence of a gap junction given the presence of a chemical synapse, and vice versa. \begin{figure} \centering \includegraphics[width=.98\textwidth]{figures/all_celegans_ts.png} \caption{Test statistics for dependence on the weighted (left) and unweighted (right) \emph{C. elegans} connectomes. For a given $k$, we compute the test statistic under the null calculated with block permutation using $k$ blocks. $k$ is chosen as $2^i$ for $i \in [1, 8]$. As $k \rightarrow n$, the test statistics under the null approach the observed test statistic. This is expected since when $k \rightarrow n$, effectively each node is in its own block, so block permutation does not alter the graph much, resulting in a null test statistic similar to the observed test statistic. But for any reasonably chosen $k$, e.g. $k \leq \sqrt{n}$, including the optimal $\hat{k}$ identified with BIC, the observed test statistics are much larger than the null test statistics for all three tests, revealing a strong dependency between the two graphs. The null distributions of the test statistics are estimated from 500 replicates. The mean test statistic for each test is plotted and the error bars show one standard deviation.} \label{fig:celegans_ts} \end{figure} \section{Conclusion and Future Work} \label{conclusion} We presented a statistical test for conditional independence between a pair of undirected graphs. This is the first approach that we know of that addresses the question of whether two graphs are conditionally independent. The question of independence is very general and may be of interest in many distinct areas of scientific inquiry. In the field of connectomics, the proposed approach is especially relevant in helping researchers draw new connections among various types of brain imaging data and ask novel questions about the relationships among these data. The proposed test relies on the theory of the Stochastic Block Model (SBM). We note that this places certain limitations on the effectiveness of the test in settings where the number of nodes is relatively small and the graph is fitted poorly by an SBM with a small number of blocks, because block permutation then has difficulty approximating the null distribution. In such cases, if one chooses a larger $k$, one can better approximate the graph with an SBM, but a larger number of nodes is then required for block permutation to approximate the null distribution; on the other hand, if one chooses a smaller $k$, the SBM might not fit the graph well enough for block permutation to approximate the null.
With this in mind, we hypothesize that a generalization to the Degree-corrected SBM \cite{KarrerNewman2011} or the Random Dot Product Graph \citep{YoungScheinerman2007} might allow better modeling in such settings, provided a valid permutation procedure can be designed for these graph models. Such a generalization of the procedure is a potential future direction for the current work. \paragraph{Authors, Affiliations, and Acknowledgements} \mbox{} \\ $^1$ Johns Hopkins University $^2$ University of Delaware \vspace{5mm}
\section{Introduction}\label{Sec:Intro} Let $(\Om,\cF,\mathbb{P})$ be a complete probability space on which a standard one-dimensional Brownian motion $W=\{W(t);0\leqslant t<\i\}$ is defined, and let $\mathbb{F}=\{\cF_t\}_{t\ges0}$ be the natural filtration of $W$ augmented by all the $\mathbb{P}$-null sets in $\cF$. Consider the following controlled linear stochastic differential equation (SDE, for short) on a finite horizon $[t,T]$: \bel{state} dx(s) = [A(s)x(s)+B(s)u(s)]ds + [C(s)x(s)+D(s)u(s)]dW(s), \q s\in[t,T], \ee where $A,C:[0,T]\to\mathbb{R}^{n\times n}$ and $B,D:[0,T]\to\mathbb{R}^{n\times m}$, called the {\it coefficients} of the {\it state equation} \eqref{state}, are given deterministic functions. The solution $x=\{x(s);t\leqslant s\leqslant T\}$ of \eqref{state}, which takes values in $\mathbb{R}^n$, is called a {\it state process}, and the process $u=\{u(s);t\leqslant s\leqslant T\}$, which takes values in $\mathbb{R}^m$ and is $\mathbb{F}$-progressively measurable, is called a {\it control}. For a given initial condition $x(t)=\xi$, the state process $x$ is uniquely determined by the control $u$, and is often denoted by $x^{t,\xi,u}$ when it is necessary to underline the dependence on the {\it initial pair} $(t,\xi)$ and the control $u$. In this paper we shall assume that the coefficients of the state equation \eqref{state} satisfy the following condition: \begin{itemize} \item[{\setword{\bf(A1)}{(A1)}}] $A,C:[0,T]\to\mathbb{R}^{n\times n}$ and $B,D:[0,T]\to\mathbb{R}^{n\times m}$ are bounded, Lebesgue measurable functions. \end{itemize} According to the standard result for SDEs (see, for example, \cite[Chapter 1, Theorem 6.3]{Yong-Zhou 1999}), such a condition ensures that a unique $p$th power integrable solution exists for the SDE \eqref{state} whenever the {\it initial state} $x(t)=\xi$ and the control $u$ are $p$th power integrable.
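As a purely illustrative aside (not part of the analysis in this paper), a state trajectory of \eqref{state} for given coefficients and a given control can be simulated with an Euler--Maruyama discretization; the sketch below uses arbitrary placeholder coefficients with $n=2$ and $m=3$, chosen only to make the dimensions of the objects involved concrete.

\begin{verbatim}
import numpy as np

def simulate_state(A, B, C, D, u, x0, t, T, n_steps, rng):
    # Euler--Maruyama scheme for
    #   dx = [A(s)x + B(s)u(s)] ds + [C(s)x + D(s)u(s)] dW(s),   x(t) = x0,
    # with A, B, C, D, u given as callables of the time variable s.
    dt = (T - t) / n_steps
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    s = t
    for _ in range(n_steps):
        dW = rng.normal(scale=np.sqrt(dt))        # 1-d Brownian increment
        x = x + (A(s) @ x + B(s) @ u(s)) * dt + (C(s) @ x + D(s) @ u(s)) * dW
        path.append(x.copy())
        s += dt
    return np.array(path)

# Placeholder data: n = 2 states, m = 3 controls, constant coefficients,
# zero control; D = (I_2, 0) satisfies the nondegeneracy condition used later.
A = lambda s: np.array([[0.0, 1.0], [-1.0, 0.0]])
B = lambda s: 0.1 * np.ones((2, 3))
C = lambda s: 0.2 * np.eye(2)
D = lambda s: np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
u = lambda s: np.zeros(3)
path = simulate_state(A, B, C, D, u, x0=[1.0, 0.0], t=0.0, T=1.0,
                      n_steps=1000, rng=np.random.default_rng(0))
\end{verbatim}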
We are interested in the case $p=2$, in which the spaces of initial states, {\it admissible controls} and state processes are
\begin{align*}
L^2_{\cF_t}(\Om;\mathbb{R}^n) &\textstyle=\Big\{\xi:\Om\to\mathbb{R}^n\bigm|\xi~\hbox{is $\cF_t$-measurable with}~\mathbb{E}|\xi|^2<\i\Big\},\\
L_\mathbb{F}^2(t,T;\mathbb{R}^m) &\textstyle=\Big\{u:[t,T]\times\Om\to\mathbb{R}^m\bigm|u~\hbox{is $\mathbb{F}$-progressively measurable} \\
&\textstyle\hphantom{=\Big\{\ } \hbox{with}~\mathbb{E}\int_t^T|u(s)|^2ds<\i\Big\}, ~\hbox{and}\\
L_\mathbb{F}^2(\Om;C([t,T];\mathbb{R}^n)) &\textstyle=\Big\{x:[t,T]\times\Om\to\mathbb{R}^n\bigm|x~\hbox{is $\mathbb{F}$-adapted and continuous} \\
&\textstyle\hphantom{=\Big\{\ } \hbox{with}~\mathbb{E}\big[\sup_{t\leqslant s\leqslant T}|x(s)|^2\big]<\i\Big\},
\end{align*}
respectively. \medskip Let $F\in\mathbb{R}^{k\times n}$ $(k\leqslant n)$ be a matrix, and let $b\in L^2_{\cF_t}(\Om;\mathbb{R}^k)$ be a random variable.
We denote by $\cH(F,b)$ the {\it stochastic linear manifold} $$\{\xi\in L^2_{\cF_t}(\Om;\mathbb{R}^n): F\xi=b\}.$$ The problems of interest here are those for which the control $u$ is required to drive the system \eqref{state} to a particular state at the end of the interval $[t,T]$ from a given stochastic linear manifold $\cH(F,b)$ and the cost functional is of the quadratic form \begin{equation}\label{cost} J(t,u) = \mathbb{E}\left\{\lan Gx(t),x(t)\ran + \int_t^T\big[\lan Q(s)x(s),x(s)\ran+ \lan R(s)u(s),u(s)\ran\big] ds\right\}, \end{equation} where the {\it weighting matrices} $G$, $Q$, and $R$ are assumed to satisfy the following condition: \begin{itemize} \item[{\setword{\bf(A2)}{(A2)}}] $G\in\mathbb{R}^{n\times n}$ is a symmetric matrix; $Q:[0,T]\to\mathbb{R}^{n\times n}$ and $R:[0,T]\to\mathbb{R}^{m\times m}$ are bounded, symmetric functions. Moreover, for some real number $\d>0$, $$G\geqslant 0, \q Q(s)\ges0, \q R(s)\geqslant \d I_m, \q\ae~s\in[0,T].$$ \end{itemize} For a precise statement, we pose the following {\it constrained stochastic linear-quadratic (LQ, for short) optimal control problem}. \begin{taggedthm}{Problem (CLQ)}\label{Prob} For a given target $\eta\in L^2_{\cF_T}(\Om;\mathbb{R}^n)$, find a control $u^*\in L_\mathbb{F}^2(t,T;\mathbb{R}^m)$ such that the cost functional $J(t,u)$ is minimized over $L_\mathbb{F}^2(t,T;\mathbb{R}^m)$, subject to the following constraints on the initial and terminal states: \bel{constraint} x(t)\in\cH(F,b), \q x(T)=\eta.\ee \end{taggedthm} A control $u^*\in L_\mathbb{F}^2(t,T;\mathbb{R}^m)$ that minimizes $J(t,u)$ subject to \eqref{constraint} will be called an {\it optimal control} with respect to the target $\eta$; the corresponding state process will be called an {\it optimal state process}. If an initial state $\xi\in\cH(F,b)$ is transferred to the target $\eta$ by an optimal control, we call $\xi$ an {\it optimal initial state}. \medskip If the constraint \eqref{constraint} is absent, but the initial state $x(t)=\xi$ is given, \autoref{Prob} becomes a standard stochastic LQ optimal control problem.
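To make the cost functional \eqref{cost} concrete, the following sketch estimates $J(t,u)$ by Monte Carlo for a given open-loop control, using the same Euler--Maruyama discretization as in the sketch above; it ignores the constraint $x(T)=\eta$ and is only meant to show how the quadratic weight $G$ acts on the initial state while $Q$ and $R$ weight the running terms. All coefficient choices are again placeholders.

\begin{verbatim}
import numpy as np

def estimate_cost(A, B, C, D, G, Q, R, u, xi, t, T,
                  n_steps=500, n_paths=2000, seed=0):
    # Monte Carlo estimate of
    #   J(t,u) = E[ <G x(t), x(t)> + int_t^T ( <Q(s)x, x> + <R(s)u, u> ) ds ]
    # for a fixed open-loop control u; note that G weights the *initial* state,
    # since the terminal state is what Problem (CLQ) constrains.
    rng = np.random.default_rng(seed)
    dt = (T - t) / n_steps
    total = 0.0
    for _ in range(n_paths):
        x = np.array(xi, dtype=float)
        cost = x @ (G @ x)
        s = t
        for _ in range(n_steps):
            us = u(s)
            cost += (x @ (Q(s) @ x) + us @ (R(s) @ us)) * dt
            dW = rng.normal(scale=np.sqrt(dt))
            x = x + (A(s) @ x + B(s) @ us) * dt + (C(s) @ x + D(s) @ us) * dW
            s += dt
        total += cost
    return total / n_paths

# e.g. estimate_cost(A, B, C, D, np.eye(2), lambda s: np.eye(2),
#                    lambda s: np.eye(3), u, xi=[1.0, 0.0], t=0.0, T=1.0)
# with the placeholder coefficients from the previous sketch.
\end{verbatim}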
Standard stochastic LQ problems of this kind were initiated by Wonham \cite{Wonham 1968} and later investigated by many researchers; see, for example, Bismut \cite{Bismut 1978}, Bensoussan \cite{Bensoussan 1982}, Chen and Yong \cite{Chen-Yong 2001}, Ait Rami, Moore, and Zhou \cite{Ait Rami-Moore-Zhou 2002}, Tang \cite{Tang 2003}, Yu \cite{Yu 2013}, Sun, Li, and Yong \cite{Sun-Li-Yong 2016}, L\"{u}, Wang, and Zhang \cite{Lu-Wang-Zhang 2017}, Sun, Xiong, and Yong \cite{Sun-Xiong-Yong 2019}, Wang, Sun, and Yong \cite{Wang-Sun-Yong 2019}, and the references therein. In contrast, much less progress has been made on the constrained LQ problem for stochastic systems. This problem is particularly difficult in the stochastic setting since not only is one required to decide whether a state of the stochastic system can be transferred to another state, but in addition an optimal parameter must be evaluated. \medskip There have been some attempts at the constrained stochastic LQ optimal control problem in the special case of norm optimal control; see, for instance, Gashi \cite{Gashi 2015}, Wang and Zhang \cite{Wang-Zhang 2015}, and Wang et al. \cite{Wang-Yang-Yong-Yu 2017}. However, in these works the state process is required to start from a {\it particular point}, and the optimal control is either characterized {\it implicitly} in terms of {\it coupled} forward-backward stochastic differential equations (FBSDEs, for short), which are difficult to solve, or obtained explicitly but under the {\it strong} assumption that the stochastic system is {\it exactly controllable} (which means that a target can be reached from any initial state). \medskip This paper aims to provide a complete solution to \autoref{Prob}, a class of stochastic LQ optimal control problems with fixed terminal states. A distinctive feature of the problem under consideration is that the state process is allowed to start from a stochastic linear manifold $\cH(F,b)$, instead of a fixed initial state. Clearly, our problem contains the norm optimal control problem as a particular case. Another feature is that the stochastic system is {\it not} assumed to be exactly controllable. The initial states outside the stochastic linear manifold $\cH(F,b)$ are irrelevant to our problem, so figuring out when the target can be reached from $\cH(F,b)$ is enough to tackle \autoref{Prob}. \medskip The principal method adopted in the paper is a combination of Lagrange multipliers and unconstrained backward LQ problems. By introducing a parameter $\l$, the Lagrange multiplier, \autoref{Prob} is reduced to a parameterized unconstrained backward LQ problem, whose optimal control and value function $V_\l$ can be constructed explicitly using the solutions to a Riccati equation and a decoupled FBSDE. Then the optimal state process $x^*_\l$ of the derived backward LQ problem is shown to be also optimal for \autoref{Prob} if the parameter $\l$ is such that $x^*_\l(t)\in\cH(F,b)$. In order to find such a parameter, called an {\it optimal parameter}, a first idea is to solve the equation ${d\over d\l}V_\l=0$. However, this does not work well in our situation, due to the difficulty of computing the derivative of $V_\l$. Our approach to finding the optimal parameter is based on a refinement (\autoref{prop:controllability}) of Liu and Peng's result \cite[Theorem 2]{Liu-Peng 2010}. The key is to establish an equivalence between the controllability of the original system and that of a system involving $\Si$, the solution of a Riccati equation (\autoref{prop:x=hatx}).
By observing that the controllability Gramian of the new system is exactly $\Si(t)$ (\autoref{prop:Gramian-Si}), we show that an optimal parameter can be obtained by solving an algebraic equation (\autoref{thm:main}). \medskip The rest of the paper is organized as follows. In \autoref{Sec:Preliminary}, we collect some preliminary results. \autoref{Sec:controllability} is devoted to the study of controllability of stochastic linear systems. In \autoref{Sec:Lagrange}, using Lagrange multipliers, we reduce the problem to a parameterized unconstrained backward LQ problem and an optimal parameter selection problem. Finally, we discuss how to find an optimal parameter and present the complete solution to\autoref{Prob} in \autoref{Sec:Main-result}. \section{Preliminaries}\label{Sec:Preliminary} Let $\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{n\times m}$ be the Euclidean space consisting of $n\times m$ real matrices, and let $\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n=\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{n\times1}$. The inner product of $M,N\in\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{n\times m}$, denoted by $\lan M,N\ran$, is given by $\lan M,N\ran=\tr(M^\top N)$, where $M^\top$ is transpose of $M$ and $\tr(M^\top N)$ stands for the trace of $M^\top N$. This inner product induces the Frobenius norm $|M|=\sqrt{\tr(M^\top M)}$. Denote by $\mathbb{S}} \def\sS{\mathscr{S}} \def\fS{\mathfrak{S}} \def\cS{{\cal S}} \def\BS{{\bm S}} \def\Bs{{\bm s}^n$ the space of all symmetric $n\times n$ real matrices, and by $\mathbb{S}} \def\sS{\mathscr{S}} \def\fS{\mathfrak{S}} \def\cS{{\cal S}} \def\BS{{\bm S}} \def\Bs{{\bm s}^n_+$ the space of all symmetric positive definite $n\times n$ real matrices. For $\mathbb{S}} \def\sS{\mathscr{S}} \def\fS{\mathfrak{S}} \def\cS{{\cal S}} \def\BS{{\bm S}} \def\Bs{{\bm s}^n$-valued functions $M$ and $N$, we write $M\geqslant} \def\[{\Big[} %\def\tb{\textcolor{blue} N$ (respectively, $M>N$) if $M-N$ is positive semidefinite (respectively, positive definite) almost everywhere. The identity matrix of size $n$ is denoted by $I_n$. \medskip We now present some lemmas that are useful in the subsequent sections. Consider the linear BSDE \bel{Y-Formula}\left\{\begin{aligned} dY(s) &= \big[A(s)Y(s)+C(s)Z(s)+f(s)\big]ds + Z(s)dW(s), \q s\in[0,T], \\ Y(T) &= \eta. \end{aligned}\right.\ee The following result, coming from the idea of proving the well-posedness of linear BSDEs (see \cite[Chapter 7, Theorem 2.2]{Yong-Zhou 1999}), provides a formula for the first component of the adapted solution $(Y,Z)$ to the BSDE \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{Y-Formula}. \begin{lemma}\label{lemma:Y-formula} Let {\rm\ref{(A1)}} hold, and let $f\in L_\mathbb{F}} \def\sF{\mathscr{F}} \def\fF{\mathfrak{F}} \def\cF{{\cal F}} \def\BF{{\bm F}} \def\Bf{{\bm f}^2(0,T;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$, $\eta\in L_{\cF_T}^2(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$. 
Then the first component $Y$ of the adapted solution to \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{Y-Formula} has the following representation: $$ Y(t) = \mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\lt[\G(t,T)\eta-\int_t^T\G(t,s)f(s)ds\bigg|\cF_t\rt], \q t\in[0,T], $$ where $\G(t,s)\triangleq} \def\({\Big (} \def\diag{\hbox{\rm diag\,}\G(t)^{-1}\G(s)$ with $\G=\{\G(s);0\leqslant} \def\){\Big )} %\def\tc{\textcolor{red} s\leqslant} \def\){\Big )} %\def\tc{\textcolor{red} T\}$ being the solution to $$\left\{\begin{aligned} d\G(s) &= -\G(s)A(s)ds -\G(s)C(s)dW(s), \q s\in[0,T], \\ \G(0) &= I_n. \end{aligned}\right.$$ \end{lemma} \begin{proof} Let $\th=\G(T)\eta-\int_0^T\G(s)f(s)ds$. By It\^{o}'s formula, \begin{align*} d\G Y &= -\G AYds - \G CYdW + \G(AY + CZ + f)ds + \G ZdW - \G CZds \\ &= \G fds +\G(Z-CY)dW, \end{align*} from which it follows that \begin{align}\label{FY--17Feb18} \G(t)Y(t) &= \G(T)\eta - \int_t^T\G(s)f(s)ds - \int_t^T\G(s)\big[Z(s)-C(s)Y(s)\big]dW(s) \nn\\ &= \th + \int_0^t\G(s)f(s)ds - \int_t^T\G(s)\big[Z(s)-C(s)Y(s)\big]dW(s). \end{align} Note that $$\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\lt(\int_0^T\big|\G(s)[Z(s)-C(s)Y(s)]\big|^2ds\rt)^{1\over 2}<\i.$$ Hence, the process $$M(t)\equiv\int_0^t\G(s)\big[Z(s)-C(s)Y(s)\big]dW(s),\q 0\leqslant} \def\){\Big )} %\def\tc{\textcolor{red} t\leqslant} \def\){\Big )} %\def\tc{\textcolor{red} T$$ is a martingale, and by taking conditional expectations with respect to $\cF_t$ on both sides of \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{FY--17Feb18}, we obtain $$\G(t)Y(t) = \mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}[\th|\cF_t] + \int_0^t\G(s)f(s)ds = \mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\lt[\G(T)\eta-\int_t^T\G(s)f(s)ds\bigg|\cF_t\rt],$$ from which the desired result follows. \end{proof} We conclude this section with a simple but useful algebraic lemma. \begin{lemma}\label{lmm:range=range} Let $A\in\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{m\times n}$ and $B\in\cl{\mathbb{S}} \def\sS{\mathscr{S}} \def\fS{\mathfrak{S}} \def\cS{{\cal S}} \def\BS{{\bm S}} \def\Bs{{\bm s}^n_+}$. Then $ABA^\top$ and $AB$ have the same range space. \end{lemma} \begin{proof} For a matrix $M$, let $\sR(M)$ and $\sN(M)$ denote the range and kernel of $M$, respectively. Since $\sR(M)^\bot=\sN(M^\top)$ for any matrix $M$, it is suffice to prove $$\sN(ABA^\top)=\sN(BA^\top).$$ Clearly, $\sN(BA^\top)\subseteq\sN(ABA^\top)$. For the reverse inclusion, let $C\in\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{n\times n}$ be such that $B=CC^\top$. If $x\in\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^m$ is such that $ABA^\top x=0$, then $$\big|C^\top A^\top x\big|^2=x^\top ACC^\top A^\top x=x^\top ABA^\top x=0.$$ Thus, $C^\top A^\top x=0$ and hence $BA^\top x=CC^\top A^\top x=0$. This shows that $\sN(ABA^\top)\subseteq\sN(BA^\top)$. \end{proof} \section{Controllability of linear stochastic systems}\label{Sec:controllability} Consider the controlled linear stochastic differential system \begin{equation}\label{[ACBD]} dx(s)=[A(s)x(s)+B(s)u(s)]ds + [C(s)x(s)+D(s)u(s)]dW(s). 
\end{equation} Let $(t_0,x_0)\in [0,T)\times L^2_{\cF_{t_0}}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$ be an initial pair, and let $t_1\in(t_0,T]$ be the terminal time. We know by the standard result for SDEs (\cite[Chapter 1, Theorem 6.3]{Yong-Zhou 1999}) that a solution $x^{t_0,x_0,u}\in L_\mathbb{F}} \def\sF{\mathscr{F}} \def\fF{\mathfrak{F}} \def\cF{{\cal F}} \def\BF{{\bm F}} \def\Bf{{\bm f}^2(\Om;C([t_0,t_1];\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n))$ uniquely exists for any control $u\in L_\mathbb{F}} \def\sF{\mathscr{F}} \def\fF{\mathfrak{F}} \def\cF{{\cal F}} \def\BF{{\bm F}} \def\Bf{{\bm f}^2(t_0,t_1;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^m)$. We are now concerned with the question of finding a control such that a given target (terminal state) is reached on the terminal time. \begin{definition}\rm We say that a control $u\in L_\mathbb{F}} \def\sF{\mathscr{F}} \def\fF{\mathfrak{F}} \def\cF{{\cal F}} \def\BF{{\bm F}} \def\Bf{{\bm f}^2(t_0,t_1;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^m)$ {\it transfers the state of the system \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{[ACBD]} from $x_0\in L^2_{\cF_{t_0}}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$ at $t=t_0$ to $x_1\in L^2_{\cF_{t_1}}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$ at $t=t_1$} if $$x^{t_0,x_0,u}(t_1)=x_1$$ almost surely. We then also say that {\it $u$ transfers $(t_0,x_0)$ to $(t_1,x_1)$}, or that {\it $(t_1,x_1)$ can be reached from $(t_0,x_0)$ by $u$}. \end{definition} \begin{definition}\label{def:controllability}\rm System \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{[ACBD]} is called {\it exactly controllable on $[t_0,t_1]$}, if for any $x_0\in L^2_{\cF_{t_0}}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$ and any $x_1\in L^2_{\cF_{t_1}}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$ there exists a control $u\in L_\mathbb{F}} \def\sF{\mathscr{F}} \def\fF{\mathfrak{F}} \def\cF{{\cal F}} \def\BF{{\bm F}} \def\Bf{{\bm f}^2(t_0,t_1;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^m)$ transferring $(t_0,x_0)$ to $(t_1,x_1)$. \end{definition} It was shown in \cite{Peng 1994} and \cite{Liu-Peng 2010} that system \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{[ACBD]} is exactly controllable on some interval only if $D$ has full row rank and the number of columns of $D$ is greater than the number of rows of $D$ (i.e., $m>n$). Note that $\rank(D)=n$ means that $DD^\top$ is invertible. For technical reasons, in the sequel we shall impose, in addition to $m>n$, the following slightly stronger condition (which is usually referred to as the {\it nondegeneracy} condition): for some $\d>0$, \bel{rank(D)=n} D(s)D(s)^\top \geqslant} \def\[{\Big[} %\def\tb{\textcolor{blue}\d I_n, \q\forall s\in[0,T]. 
\ee This condition implies that we can find a bounded invertible function $M:[0,T]\to\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{m\times m}$ such that \bel{DM=} D(s)M(s)=(I_n,0_{n\times (m-n)}), \q\forall s\in[0,T].\ee In order to study the controllability of system \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{[ACBD]}, we write $B(s)M(s)=(K(s),L(s))$ with $K(s)$ and $L(s)$ taking values in $\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{n\times n}$ and $\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{n\times(m-n)}$, respectively, and introduce the following controlled system: \bel{sys-barX} d\bar x(s) = \big[\bar A(s)\bar x(s)+\bar B(s)\bar u(s)\big]ds + \bar D(s)\bar u(s)dW(s),\ee where \bel{barABD} \bar A=A-KC, \q \bar B=BM=(K,L), \q \bar D=DM=(I_n,0_{n\times (m-n)}). \ee Note that if we write the control $\bar u$ as the form $$\bar u(s) = \begin{pmatrix}z(s) \\ v(s)\end{pmatrix}; \q z(s)\in\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n,~v(s)\in\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{m-n},$$ the system \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{sys-barX} simplifies to \bel{[ACBD]*} d\bar x(s) = \big[\bar A(s)\bar x(s)+K(s)z(s)+L(s)v(s)\big]ds + z(s)dW(s).\ee The following result establishes a connection between the controllability of systems \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{[ACBD]} and \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{[ACBD]*}. \begin{proposition}\label{prop:[ACBD]-[ACBD]*} Let $0\leqslant} \def\){\Big )} %\def\tc{\textcolor{red} t_0<t_1\leqslant} \def\){\Big )} %\def\tc{\textcolor{red} T$, $x_0\in L^2_{\cF_{t_0}}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$ and $x_1\in L^2_{\cF_{t_1}}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$. For system \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{[ACBD]*}, a control $(z,v)\in L_\mathbb{F}} \def\sF{\mathscr{F}} \def\fF{\mathfrak{F}} \def\cF{{\cal F}} \def\BF{{\bm F}} \def\Bf{{\bm f}^2(t_0,t_1;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)\times L_\mathbb{F}} \def\sF{\mathscr{F}} \def\fF{\mathfrak{F}} \def\cF{{\cal F}} \def\BF{{\bm F}} \def\Bf{{\bm f}^2(t_0,t_1;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{m-n})$ transfers $(t_0,x_0)$ to $(t_1,x_1)$ if and only if the control defined by $$u(s)\triangleq} \def\({\Big (} \def\diag{\hbox{\rm diag\,} M(s)\begin{pmatrix} z(s)-C(s)\bar x(s) \\ v(s)\end{pmatrix},\q s\in[t_0,t_1]$$ does so for system \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{[ACBD]}, where $\bar x$ is the solution of \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{[ACBD]*} with initial state $x_0$. 
\end{proposition} \begin{proof} We first observe that \begin{alignat*}{2} \bar A(s)\bar x(s)+K(s)z(s)+L(s)v(s) &= A(s)\bar x(s)+K(s)[z(s)-C(s)\bar x(s)]+L(s)v(s)\\ &= A(s)\bar x(s)+B(s)M(s)\begin{pmatrix} z(s)-C(s)\bar x(s)\\v(s)\end{pmatrix}\\ &= A(s)\bar x(s)+B(s)u(s),\\ % C(s)\bar x(s)+D(s)u(s) &= C(s)\bar x(s)+D(s)M(s)\begin{pmatrix}z(s)-C(s)\bar x(s)\\v(s)\end{pmatrix}\\ &= C(s)\bar x(s)+(I_n,0_{n\times (m-n)})\begin{pmatrix}z(s)-C(s)\bar x(s)\\v(s)\end{pmatrix}\\ &= z(s). \end{alignat*} This means $\bar x$ also satisfies $$d\bar x(s)=[A(s)\bar x(s)+B(s)u(s)]ds + [C(s)\bar x(s)+D(s)u(s)]dW(s).$$ Thus, by the uniqueness of a solution, with the initial state $x_0$ and the control $u$, the solution $x$ of system \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{[ACBD]} coincides with $\bar x$. The result then follows immediately. \end{proof} From \autoref{prop:[ACBD]-[ACBD]*}, we see that the controllability of system \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{[ACBD]} is equivalent to that of system \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{[ACBD]*}. For the controllability of system \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{[ACBD]*}, we have the following characterization, which refines the result of Liu and Peng \cite[Theorem 2]{Liu-Peng 2010}. \begin{proposition}\label{prop:controllability} Let $0\leqslant} \def\){\Big )} %\def\tc{\textcolor{red} t_0<t_1\leqslant} \def\){\Big )} %\def\tc{\textcolor{red} T$, $x_0\in L^2_{\cF_{t_0}}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$ and $x_1\in L^2_{\cF_{t_1}}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$. 
There exists a control $(z,v)\in L_\mathbb{F}} \def\sF{\mathscr{F}} \def\fF{\mathfrak{F}} \def\cF{{\cal F}} \def\BF{{\bm F}} \def\Bf{{\bm f}^2(t_0,t_1;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)\times L_\mathbb{F}} \def\sF{\mathscr{F}} \def\fF{\mathfrak{F}} \def\cF{{\cal F}} \def\BF{{\bm F}} \def\Bf{{\bm f}^2(t_0,t_1;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{m-n})$ which transfers the state of system \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{[ACBD]*} from $x_0$ at $t=t_0$ to $x_1$ at $t=t_1$ if and only if $x_0-\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}[\F(t_0,t_1)x_1|\cF_{t_0}]$ belongs to the range space of \bel{Grammian} \Psi(t_0,t_1) \triangleq} \def\({\Big (} \def\diag{\hbox{\rm diag\,} \mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\lt[\int_{t_0}^{t_1}\F(t_0,s)L(s)[\F(t_0,s)L(s)]^\top ds\rt] \ee almost surely, that is, there exists an $\xi\in L^2_{\cF_{t_0}}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$ such that $$x_0-\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}[\F(t_0,t_1)x_1|\cF_{t_0}]=\Psi(t_0,t_1)\xi,\q\as,$$ where $\F(t,s)=\F(t)^{-1}\F(s)$ with $\F=\{\F(s);0\leqslant} \def\){\Big )} %\def\tc{\textcolor{red} s\leqslant} \def\){\Big )} %\def\tc{\textcolor{red} T\}$ being the solution to the following SDE for $\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{n\times n}$-valued processes: \bel{eqn:Phi}\left\{\begin{aligned} d\F(s) &= -\F(s)\bar A(s)ds-\F(s)K(s)dW(s), \q s\in[0,T],\\ \F(0) &= I_n. \end{aligned}\right.\ee \end{proposition} \begin{proof} {\it Sufficiency}. Suppose that there exists an $\xi\in L^2_{\cF_{t_0}}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$ such that $$x_0-\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}[\F(t_0,t_1)x_1|\cF_{t_0}]=\Psi(t_0,t_1)\xi,\q\as$$ Define $$v(s)=-[\F(t_0,s)L(s)]^\top\xi,\q s\in[t_0,t_1],$$ and let $(y_1,z_1)$ be the adapted solution to the following BSDE: $$\left\{\begin{aligned} dy_1(s) &= \big[\bar A(s)y_1(s)+K(s)z_1(s)+L(s)v(s)\big]ds+z_1(s)dW(s), \q s\in[t_0,t_1],\\ y_1(t_1) &= 0. \end{aligned}\right.$$ According to \autoref{lemma:Y-formula}, \begin{align*} y_1(t_0) &= -\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\lt[\int_{t_0}^{t_1}\F(t_0,s)L(s)v(s)ds\bigg|\cF_{t_0}\rt]\\ % &= \mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\lt[\int_{t_0}^{t_1}\F(t_0,s)L(s)[\F(t_0,s)L(s)]^\top\xi ds\bigg|\cF_{t_0}\rt]. \end{align*} Noting that $\xi$ is $\cF_{t_0}$-measurable and that $\F(t_0,s)$ is independent of $\cF_{t_0}$ for $s\geqslant} \def\[{\Big[} %\def\tb{\textcolor{blue} t$, we further obtain \begin{align*} y_1(t_0) &= \mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\lt[\int_{t_0}^{t_1}\F(t_0,s)L(s)[\F(t_0,s)L(s)]^\top ds\rt]\xi = \Psi(t_0,t_1)\xi \\ &= x_0-\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}[\F(t_0,t_1)x_1|\cF_{t_0}]. 
\end{align*} Now let $(y_2,z_2)$ be the adapted solution to the BSDE \bel{eqn:Y2&z2}\left\{\begin{aligned} dy_2(s) &= \big[\bar A(s)y_2(s)+K(s)z_2(s)\big]ds + z_2(s)dW(s), \q s\in[t_0,t_1],\\ y_2(t_1) &= x_1, \end{aligned}\right.\ee and define $$\bar x(s)=y_1(s)+y_2(s),\q z(s)=z_1(s)+z_2(s),\q s\in[t_0,t_1].$$ By \autoref{lemma:Y-formula}, $y_2(t_0)=\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}[\F(t_0,t_1)x_1|\cF_{t_0}]$, and thus, by linearity, $(\bar x,z,v)$ satisfies $$\left\{\begin{aligned} d\bar x(s) &= \big[\bar A(s)\bar x(s)+K(s)z(s)+L(s)v(s)\big]ds + z(s)dW(s), \q s\in[t_0,t_1],\\ \bar x(t_0) &= x_0, \q \bar x(t_1)=x_1. \end{aligned}\right.$$ This shows $(t_1,x_1)$ can be reached from $(t_0,x_0)$ by $(z,v)$. \medskip} \def\rt{\right} \def\ae{\hbox{\rm a.e.} {\it Necessity}. We prove the necessity by contradiction. Suppose that $(t_1,x_1)$ can be reached from $(t_0,x_0)$ by some control $(z,v)$ but there exists some $\Om^\prime\subseteq\Om$ with $\mathbb{P}} \def\sP{\mathscr{P}} \def\fP{\mathfrak{P}} \def\cP{{\cal P}} \def\BP{{\bm P}} \def\Bp{{\bm p}(\Om^\prime)>0$ such that $x_0(\om)-\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}[\F(t_0,t_1)x_1|\cF_{t_0}](\om)$ does not lie in the range space of $\Psi(t_0,t_1)$ for every $\om\in\Om^\prime$. Then we can find an $\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi\in L^2_{\cF_{t_0}}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$ such that $$\Psi(t_0,t_1)\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi=0,~\as,\q\hb{and}\q\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\big(\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi^\top\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi_0\big)>0,$$ where $\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi_0=x_0-\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}[\F(t_0,t_1)x_1|\cF_{t_0}]$. Let $\bar x$ be the corresponding state process. 
By applying the integration by parts formula to $\F\bar x$, we have $$\F(t_1)x_1-\F(t_0)x_0 =\int_{t_0}^{t_1}\F(s)L(s)v(s)ds + \int_{t_0}^{t_1}\F(s)\big[z(s)-K(s)\bar x(s)\big]dW(s).$$ Taking conditional expectations with respect to $\cF_{t_0}$ on both sides of the above, we get $$-\F(t_0)\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi_0 = \mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}[\F(t_1)x_1|\cF_{t_0}]-\F(t_0)x_0 = \mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\lt[\int_{t_0}^{t_1}\F(s)L(s)v(s)ds\bigg|\cF_{t_0}\rt], $$ from which it follows that \begin{align}\label{contradiction>0} 0 &< \mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\big(\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi^\top\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi_0\big) = -\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\lt(\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi^\top\F(t_0)^{-1}\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\lt[\int_{t_0}^{t_1}\F(s)L(s)v(s)ds\bigg|\cF_{t_0}\rt]\rt) \nn\\ % &= -\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\lt(\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi^\top\F(t_0)^{-1}\int_{t_0}^{t_1}\F(s)L(s)v(s)ds\rt) \nn\\ % &= -\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\lt(\int_{t_0}^{t_1}\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi^\top\F(t_0,s)L(s)v(s)ds\rt). \end{align} But, using the fact that $\Psi(t_0,t_1)\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi=0$ $\as$ and noting that $\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi$ is independent of $\F(t_0,s)$ for $s\geqslant} \def\[{\Big[} %\def\tb{\textcolor{blue} t_0$, we have \begin{align*} 0 &= \mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\big(\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi^\top\Psi(t_0,t_1)\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi\big) = \mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\int_{t_0}^{t_1}\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi^\top\F(t_0,s)L(s)[\F(t_0,s)L(s)]^\top\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi ds\\ % &= \mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\int_{t_0}^{t_1}\big|\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi^\top\F(t_0,s)L(s)\big|^2 ds, \end{align*} which implies the vanishing of $\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi^\top\F(t_0,s)L(s)$ and the contradiction of \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{contradiction>0}. \end{proof} \begin{remark}\rm The matrix $\Psi(t_0,t_1)$ defined by \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{Grammian} is called the {\it controllability Gramian} of system \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{[ACBD]*} over $[t_0,t_1]$. Note that $\Psi(t_0,t_1)$ is symmetric positive semidefinite. 
\end{remark} Propositions \ref{prop:[ACBD]-[ACBD]*} and \ref{prop:controllability} have some easy consequences which we summarize as follows. \begin{corollary}\label{coro:controllability} Let $0\leqslant} \def\){\Big )} %\def\tc{\textcolor{red} t_0<t_1\leqslant} \def\){\Big )} %\def\tc{\textcolor{red} T$, and let $\F$ be the solution to \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{eqn:Phi}. \begin{enumerate}[\indent\rm(i)] % \item System \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{[ACBD]} is exactly controllable on $[t_0,t_1]$ if and only if system \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{[ACBD]*} is so. % \item System \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{[ACBD]*} is exactly controllable on $[t_0,t_1]$ if and only if the controllability Gramian $\Psi(t_0,t_1)$ is positive definite. % \item Let $F\in\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{k\times n}$ and $b\in L^2_{\cF_{t_0}}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^k)$. For system \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{[ACBD]*}, there exists a point on the stochastic linear manifold % $$\cH(F,b) = \{\xi\in L^2_{\cF_{t_0}}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n): F\xi=b\}$$ % that can be transferred to $x_1\in L^2_{\cF_{t_1}}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$ at time $t=t_1$ if and only if there exist an $\xi\in L^2_{\cF_{t_0}}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$ such that % $$F\Psi(t_0,t_1)\xi = b-F\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}[\F(t_0,t_1)x_1|\cF_{t_0}].$$ % \end{enumerate} \end{corollary} \begin{proof} (i) It is a direct consequence of \autoref{prop:[ACBD]-[ACBD]*}. \medskip} \def\rt{\right} \def\ae{\hbox{\rm a.e.} (ii) If $\Psi(t_0,t_1)>0$, then obviously, $x_0-\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}[\F(t_0,t_1)x_1|\cF_{t_0}]$ belongs to $\sR(\Psi(t_0,t_1))$, the range of $\Psi(t_0,t_1)$, for all $x_0\in L^2_{\cF_{t_0}}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$ and all $x_1\in L^2_{\cF_{t_1}}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$. Thus, by \autoref{prop:controllability}, system \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{[ACBD]*} is exactly controllable on $[t_0,t_1]$. Conversely, if system \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{[ACBD]*} is exactly controllable on $[t_0,t_1]$, then for $x_1=0$ and any $x_0\in\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n$, $$x_0=x_0-\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}[\F(t_0,t_1)x_1|\cF_{t_0}]\in\sR(\Psi(t_0,t_1)),$$ which implies that $\Psi(t_0,t_1)$ has full rank and hence is positive definite. 
\medskip} \def\rt{\right} \def\ae{\hbox{\rm a.e.} (iii) By \autoref{prop:controllability} we know that a state $x_1\in L^2_{\cF_{t_1}}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$ can be reached at $t_1$ from some $x_0\in\cH(F,b)$ if and only if there exists an $\xi\in L^2_{\cF_{t_0}}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$ such that $$\Psi(t_0,t_1)\xi = x_0-\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}[\F(t_0,t_1)x_1|\cF_{t_0}].$$ Thus, the state $x_1$ can be reached from $\cH(F,b)$ if and only if the $\xi$ is such that $$F\{\Psi(t_0,t_1)\xi +\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}[\F(t_0,t_1)x_1|\cF_{t_0}]\} =b.$$ The desired result then follows readily. \end{proof} The construction in the proof of \autoref{prop:controllability} actually provides an explicit procedure for finding a control that accomplishes desired transfers. Let us recap and conclude this section. \begin{proposition}\label{prop:candidate} Let $0\leqslant} \def\){\Big )} %\def\tc{\textcolor{red} t_0<t_1\leqslant} \def\){\Big )} %\def\tc{\textcolor{red} T$ and $x_1\in L^2_{\cF_{t_1}}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$. Let $F\in\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{k\times n}$ and $b\in L^2_{\cF_{t_0}}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^k)$. If $\xi\in L^2_{\cF_{t_0}}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$ is such that $$F\Psi(t_0,t_1)\xi = b-F\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}[\F(t_0,t_1)x_1|\cF_{t_0}],$$ then with $$v(s) \triangleq} \def\({\Big (} \def\diag{\hbox{\rm diag\,} -L(s)^\top\F(t_0,s)^\top\xi,\q s\in[t_0,t_1],$$ and $z=\{z(s);t_0\leqslant} \def\){\Big )} %\def\tc{\textcolor{red} s\leqslant} \def\){\Big )} %\def\tc{\textcolor{red} t_1\}$ being the second component of the adapted solution to the BSDE $$\left\{\begin{aligned} dy(s) &= \big[\bar A(s)y(s)+K(s)z(s)+L(s)v(s)\big]ds + z(s)dW(s),\q s\in[t_0,t_1],\\ y(t_1) &= x_1, \end{aligned}\right.$$ $(z,v)$ transfers the state of the system \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{[ACBD]*} from $$x_0 = \Psi(t_0,t_1)\xi + \mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}[\F(t_0,t_1)x_1|\cF_{t_0}] \in\cH(F,b)$$ at $t=t_0$ to $x_1$ at $t=t_1$. \end{proposition} \section{Lagrange multipliers and unconstrained backward LQ problems}\label{Sec:Lagrange} We now return to\autoref{Prob}. Recall that the nondegeneracy condition \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{rank(D)=n} is assumed so that the target $\eta$ can be reached from a given stochastic linear manifold $\cH(F,b)$. Let $M$ be as in \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{DM=} and $\bar A$, $K$, $L$ be as in \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{barABD}. 
We have seen from \autoref{prop:[ACBD]-[ACBD]*} that systems \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{[ACBD]} and \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{[ACBD]*} share the same controllability. So by appropriate transformations, we may assume without loss of generality that the state equation \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{state} takes the form \bel{state*} dx(s) = [A(s)x(s)+K(s)z(s)+L(s)v(s)]ds + z(s)dW(s), \q s\in[t,T],\ee and that the cost functional \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{cost} takes the following form: \begin{align}\label{cost*} J(t,z,v) &= \mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\bigg\{\lan Gx(t),x(t)\ran +\int_t^T\[\lan Q(s)x(s),x(s)\ran \nn\\ &\hphantom} \def\nn{\nonumber} \def\cl{\overline{=\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\bigg\{\q} + \lan R(s)z(s),z(s)\ran + \lan N(s)v(s),v(s)\ran\]ds\bigg\}. \end{align} That is, the coefficients $B$ and $D$ of \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{state} are given by $$ B(s) = (K(s),L(s)), \q D(s) = (I_n,0_{n\times (m-n)}); \q s\in[0,T], $$ and the control $u$ is $\begin{pmatrix}z \\ v\end{pmatrix}$. In this case, with the given terminal state $\eta$, we may think of $v$ alone as the control and regard $(x,z)$ as the adapted solution to the BSDE \bel{B-state}\left\{\begin{aligned} dx(s) &= [A(s)x(s)+K(s)z(s)+L(s)v(s)]ds + z(s)dW(s), \q s\in[t,T],\\ x(T) &= \eta. \end{aligned}\right.\ee Further, since for given $\eta$, $z$ is uniquely decided by $v$, we can simply write the cost functional \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{cost*} as $J(t,v)$. Therefore, solving\autoref{Prob} is equivalent to finding an optimal control $v^*$ for the following constrained backward LQ problem. \begin{taggedthm}{Problem (CBLQ)}\label{CBLQ} For a given terminal state $\eta\in L^2_{\cF_T}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$, find a control $v^*\in L_\mathbb{F}} \def\sF{\mathscr{F}} \def\fF{\mathfrak{F}} \def\cF{{\cal F}} \def\BF{{\bm F}} \def\Bf{{\bm f}^2(t,T;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{m-n})$ such that the corresponding adapted solution $(x^*,z^*)$ of \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{B-state} satisfies $x^*(t)\in\cH(F,b)$, and \bel{mincJ} J(t,v^*)\leqslant} \def\){\Big )} %\def\tc{\textcolor{red} J(t,v), \q\forall v\in L_\mathbb{F}} \def\sF{\mathscr{F}} \def\fF{\mathfrak{F}} \def\cF{{\cal F}} \def\BF{{\bm F}} \def\Bf{{\bm f}^2(t,T;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{m-n}).\ee \end{taggedthm} For this reduced problem, we impose the following assumptions that are similar to the conditions \ref{(A1)} and \ref{(A2)}. \begin{itemize} \item[{\setword{(H1)}{(H1)}}] $A,K:[0,T]\to\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{n\times n}$ and $L:[0,T]\to\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{n\times(m-n)}$ are bounded measurable functions. 
\item[{\setword{(H2)}{(H2)}}] $G$ is a symmetric $n\times n$ matrix; $Q,R:[0,T]\to\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{n\times n}$ and $N:[0,T]\to\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{(m-n)\times(m-n)}$ are bounded and symmetric. Moreover, for some $\d>0$, $$G\geqslant} \def\[{\Big[} %\def\tb{\textcolor{blue} 0, \q Q(s)\ges0, \q R(s)\ges0, \q N(s)\geqslant} \def\[{\Big[} %\def\tb{\textcolor{blue}\d I_{m-n}, \q\ae~s\in[0,T].$$ \end{itemize} To find an optimal control for\autoref{CBLQ}, let $\l\in L^2_{\cF_t}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^k)$ be undetermined and define \begin{align}\label{cost-l} J_\l(t,v) &\triangleq} \def\({\Big (} \def\diag{\hbox{\rm diag\,} J(t,v) + 2\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\lan F^\top\l,x(t)\ran \nn\\ % &= \mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\bigg\{\lan Gx(t),x(t)\ran + 2\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\lan F^\top\l,x(t)\ran +\int_t^T\[\lan Q(s)x(s),x(s)\ran \nn\\ % &\hphantom} \def\nn{\nonumber} \def\cl{\overline{=\qq} +\lan R(s)z(s),z(s)\ran + \lan N(s)v(s),v(s)\ran\]ds\bigg\}. \end{align} Consider the following parameterized unconstrained backward LQ problem. \begin{taggedthm}{Problem (BLQ)$_\l$}\label{BLQ} For a given terminal state $\eta\in L^2_{\cF_T}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$, find a control $v^*\in L_\mathbb{F}} \def\sF{\mathscr{F}} \def\fF{\mathfrak{F}} \def\cF{{\cal F}} \def\BF{{\bm F}} \def\Bf{{\bm f}^2(t,T;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{m-n})$ such that \bel{minJl} J_\l(t,v^*)\leqslant} \def\){\Big )} %\def\tc{\textcolor{red} J_\l(t,v), \q\forall v\in L_\mathbb{F}} \def\sF{\mathscr{F}} \def\fF{\mathfrak{F}} \def\cF{{\cal F}} \def\BF{{\bm F}} \def\Bf{{\bm f}^2(t,T;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{m-n}),\ee subject to the backward state equation \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{B-state}. \end{taggedthm} If for some parameter $\l\in L^2_{\cF_t}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^k)$, the optimal control $v^*_\l$ of\autoref{BLQ} is such that the initial state of system \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{B-state} falls on the stochastic linear manifold $\cH(F,b)$, then intuitively we can convince ourselves that $v^*_\l$ is also optimal for\autoref{CBLQ}. In fact, we have the following result. \begin{proposition}\label{prop:relation} Let {\rm\ref{(H1)}--\ref{(H2)}} hold, and let $\eta\in L^2_{\cF_T}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$ be given. If $v^*_\l$ is an optimal control of\autoref{BLQ} such that the adapted solution $(x^*_\l,z^*_\l)$ of $$\left\{\begin{aligned} dx^*_\l(s) &= [A(s)x^*_\l(s)+K(s)z^*_\l(s)+L(s)v^*_\l(s)]ds + z^*_\l(s)dW(s), \q s\in[t,T],\\ x^*_\l(T) &= \eta, \end{aligned}\right.$$ satisfies $x^*_\l(t)\in\cH(F,b)$, then $v^*_\l$ is also optimal for\autoref{CBLQ}. 
\end{proposition} \begin{proof} Since $v^*_\l$ is optimal for\autoref{BLQ}, \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{minJl} holds. In particular, for any $v\in L_\mathbb{F}} \def\sF{\mathscr{F}} \def\fF{\mathfrak{F}} \def\cF{{\cal F}} \def\BF{{\bm F}} \def\Bf{{\bm f}^2(t,T;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{m-n})$ such that the initial state of system \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{B-state} falls on $\cH(F,b)$, we have \begin{align*} & J(t,v^*_\l)+2\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\lan F^\top\l,x^*_\l(t)\ran = J_\l(t,v^*_\l)\leqslant} \def\){\Big )} %\def\tc{\textcolor{red} J_\l(t,v) = J(t,v)+2\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\lan F^\top\l,x(t)\ran, \\ % & Fx(t)=Fx^*_\l(t)=b, \end{align*} from which it follows that \begin{align*} J(t,v^*_\l) &\leqslant} \def\){\Big )} %\def\tc{\textcolor{red} J(t,v)+2\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\lan F^\top\l,x(t)-x^*_\l(t)\ran \\ &= J(t,v)+2\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\lan \l,F[x(t)-x^*_\l(t)]\ran = J(t,v). \end{align*} This completes the proof. \end{proof} According to \autoref{prop:relation}, the procedure for finding the optimal control of our original\autoref{CBLQ} can be divided into two steps. \begin{enumerate}[\indent Step 1.] \item Construct the optimal control $v^*_\l$ for the parameterized unconstrained backward LQ problem. \item Select the parameter $\l$ such that the corresponding optimal state process $x^*_\l$ of\autoref{BLQ} satisfies $x^*_\l(t)\in\cH(F,b)$. \end{enumerate} For Step 1, we first present the following result, which characterizes the optimal control of\autoref{BLQ} in terms of FBSDEs. \begin{theorem}\label{thm:optmality} Let {\rm\ref{(H1)}--\ref{(H2)}} hold. Let $\l\in L^2_{\cF_t}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^k)$ and $\eta\in L^2_{\cF_T}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$ be given. 
Then a control $v^*\in L_\mathbb{F}} \def\sF{\mathscr{F}} \def\fF{\mathfrak{F}} \def\cF{{\cal F}} \def\BF{{\bm F}} \def\Bf{{\bm f}^2(t,T;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{m-n})$ is optimal for\autoref{BLQ} if and only if the adapted solution $(x^*,z^*,y^*)$ to the coupled FBSDE \bel{xzy-star}\left\{\begin{aligned} dx^*(s) &= (Ax^*+Kz^*+Lv^*)ds + z^*dW(s), \\ dy^*(s) &= (-A^\top y^* + Qx^*)ds + (-K^\top y^* +Rz^*)dW(s), \\ x^*(T) &= \eta, \q y^*(t)= Gx^*(t)+F^\top\l, \end{aligned}\right.\ee satisfies the following stationarity condition: \bel{stationarity} Nv^*-L^\top y^*=0, \q\ae~\hbox{on}~[t,T],~\as \ee \end{theorem} \begin{proof} First note that $v^*$ is optimal if and only if \begin{equation}\label{3.14-1} J_\l(t,v^*+\varepsilon} \def\L{\Lambda} \def\l{\lambda} \def\m{\mu} \def\n{\nu v) -J_\l(t,v^*)\ges0, \q\forall \varepsilon} \def\L{\Lambda} \def\l{\lambda} \def\m{\mu} \def\n{\nu\in\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r},~\forall v\in L_\mathbb{F}} \def\sF{\mathscr{F}} \def\fF{\mathfrak{F}} \def\cF{{\cal F}} \def\BF{{\bm F}} \def\Bf{{\bm f}^2(t,T;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{m-n}). \end{equation} For fixed but arbitrary $\varepsilon} \def\L{\Lambda} \def\l{\lambda} \def\m{\mu} \def\n{\nu\in\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}$ and $v\in L_\mathbb{F}} \def\sF{\mathscr{F}} \def\fF{\mathfrak{F}} \def\cF{{\cal F}} \def\BF{{\bm F}} \def\Bf{{\bm f}^2(t,T;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{m-n})$, we have by linearity that the adapted solution $(x_\varepsilon} \def\L{\Lambda} \def\l{\lambda} \def\m{\mu} \def\n{\nu,z_\varepsilon} \def\L{\Lambda} \def\l{\lambda} \def\m{\mu} \def\n{\nu)$ to $$\left\{\begin{aligned} dx_\varepsilon} \def\L{\Lambda} \def\l{\lambda} \def\m{\mu} \def\n{\nu(s) &= [Ax_\varepsilon} \def\L{\Lambda} \def\l{\lambda} \def\m{\mu} \def\n{\nu+Kz_\varepsilon} \def\L{\Lambda} \def\l{\lambda} \def\m{\mu} \def\n{\nu+L(v^*+\varepsilon} \def\L{\Lambda} \def\l{\lambda} \def\m{\mu} \def\n{\nu v)]ds + z_\varepsilon} \def\L{\Lambda} \def\l{\lambda} \def\m{\mu} \def\n{\nu dW(s), \q s\in[t,T],\\ x_\varepsilon} \def\L{\Lambda} \def\l{\lambda} \def\m{\mu} \def\n{\nu(T) &= \eta, \end{aligned}\right.$$ is the sum of $(x^*,z^*)$ and $\varepsilon} \def\L{\Lambda} \def\l{\lambda} \def\m{\mu} \def\n{\nu(x,z)$, where $(x,z)$ is the adapted solution to $$\left\{\begin{aligned} dx(s) &= (Ax+Kz+Lv)ds + z dW(s), \q s\in[t,T],\\ x(T) &= 0. 
\end{aligned}\right.$$ Then it follows by a straightforward computation that \begin{align*} J_\l(t,v^*+\varepsilon} \def\L{\Lambda} \def\l{\lambda} \def\m{\mu} \def\n{\nu v) &= \varepsilon} \def\L{\Lambda} \def\l{\lambda} \def\m{\mu} \def\n{\nu^2\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\bigg[\lan Gx(t),x(t)\ran +\int_t^T\(\lan Qx,x\ran + \lan Rz,z\ran + \lan Nv,v\ran\)ds\bigg] \\ % &\hphantom} \def\nn{\nonumber} \def\cl{\overline{=\ } +2\varepsilon} \def\L{\Lambda} \def\l{\lambda} \def\m{\mu} \def\n{\nu\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\bigg[\lan Gx^*(t)+F^\top\l,x(t)\ran +\int_t^T\(\lan Qx^*,x\ran + \lan Rz^*,z\ran + \lan Nv^*,v\ran\)ds\bigg] \\ % &\hphantom} \def\nn{\nonumber} \def\cl{\overline{=\ } +J_\l(t,v^*). \end{align*} Thus, \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{3.14-1} in turn is equivalent to \begin{align}\label{3.14-2} & \varepsilon} \def\L{\Lambda} \def\l{\lambda} \def\m{\mu} \def\n{\nu^2\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\bigg[\lan Gx(t),x(t)\ran +\int_t^T\(\lan Qx,x\ran + \lan Rz,z\ran + \lan Nv,v\ran\)ds\bigg] \nn\\ % &~ +2\varepsilon} \def\L{\Lambda} \def\l{\lambda} \def\m{\mu} \def\n{\nu\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\bigg[\lan Gx^*(t)+F^\top\l,x(t)\ran +\int_t^T\(\lan Qx^*,x\ran + \lan Rz^*,z\ran + \lan Nv^*,v\ran\)ds\bigg]\ges0 \end{align} for all $\varepsilon} \def\L{\Lambda} \def\l{\lambda} \def\m{\mu} \def\n{\nu\in\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}$ and all $v\in L_\mathbb{F}} \def\sF{\mathscr{F}} \def\fF{\mathfrak{F}} \def\cF{{\cal F}} \def\BF{{\bm F}} \def\Bf{{\bm f}^2(t,T;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{m-n})$. Since the term in the first square bracket is nonnegative by the assumption \ref{(H2)}, \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{3.14-2} holds for all $\varepsilon} \def\L{\Lambda} \def\l{\lambda} \def\m{\mu} \def\n{\nu\in\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}$ if and only if \begin{equation}\label{3.14-3} \mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\bigg[\lan Gx^*(t)+F^\top\l,x(t)\ran +\int_t^T\(\lan Qx^*,x\ran + \lan Rz^*,z\ran + \lan Nv^*,v\ran\)ds\bigg]=0. 
\end{equation} Now by applying It\^{o}'s rule to $s\mapsto\lan y^*(s),x(s)\ran$, we obtain \begin{align*} \mathbb{E}\lan Gx^*(t)+F^\top\l,x(t)\ran =\mathbb{E}\lan y^*(t),x(t)\ran = -\mathbb{E}\int_t^T\(\lan Qx^*,x\ran+\lan L^\top y^*,v\ran+\lan Rz^*,z\ran\)ds, \end{align*} substituting which into \eqref{3.14-3} yields $$\mathbb{E}\int_t^T\lan Nv^*- L^\top y^*,v\ran ds=0.$$ Since the above has to be true for all $v\in L_\mathbb{F}^2(t,T;\mathbb{R}^{m-n})$, \eqref{stationarity} follows. The sufficiency of \eqref{stationarity} can be proved by reversing the above argument. \end{proof} We call \eqref{xzy-star}, together with the stationarity condition \eqref{stationarity}, the {\it optimality system} for \autoref{BLQ}. Note that from \eqref{stationarity} we can represent the optimal control $v^*$ in terms of $y^*$ as $v^*= N^{-1}L^\top y^*$. Substituting this expression for $v^*$ brings a coupling into the FBSDE \eqref{xzy-star}. So in order to find the optimal control $v^*$, one actually needs to solve a coupled FBSDE. \medskip To construct an optimal control for \autoref{BLQ} from the optimality system \eqref{xzy-star}--\eqref{stationarity}, we now introduce the following Riccati-type equation: \bel{Ric-Sigma}\left\{\begin{aligned} & \dot\Si-\Si A^\top-A\Si-\Si Q\Si+LN^{-1}L^\top+K(I_n+\Si R)^{-1}\Si K^\top=0, \q s\in[0,T],\\ & \Si(T)=0.
\end{aligned}\right.\ee It was shown in \cite{Lim-Zhou 2001} (see also \cite{Li-Sun-Xiong 2017} for an alternative proof) that equation \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{Ric-Sigma} has a unique positive semidefinite solution $\Si\in C([0,T];\mathbb{S}} \def\sS{\mathscr{S}} \def\fS{\mathfrak{S}} \def\cS{{\cal S}} \def\BS{{\bm S}} \def\Bs{{\bm s}^n)$: $$\Si(s)^\top=\Si(s), \q \Si(s)\ges0; \q\forall s\in[0,T].$$ This allows us to consider the following BSDE: \bel{BSDE-f}\left\{\begin{aligned} d\f(s) &= [(A+\Si Q)\f +K(I_n+\Si R)^{-1}\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi]ds + \beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi dW(s), \q s\in[0,T],\\ \f(T) &= -\eta, \end{aligned}\right.\ee which, by the standard result for BSDEs, admits a unique adapted solution $$(\f,\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi)\in L_\mathbb{F}} \def\sF{\mathscr{F}} \def\fF{\mathfrak{F}} \def\cF{{\cal F}} \def\BF{{\bm F}} \def\Bf{{\bm f}^2(\Om;C([0,T];\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n))\times L_\mathbb{F}} \def\sF{\mathscr{F}} \def\fF{\mathfrak{F}} \def\cF{{\cal F}} \def\BF{{\bm F}} \def\Bf{{\bm f}^2(0,T;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n).$$ Consider further the following $(\f,\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi,\l)$-dependent SDE: \bel{SDE-y}\left\{\begin{aligned} dy(s) &= -[(A^\top\1n+Q\Si)y+Q\f]ds -(I_n+R\Si)^{-1}(K^\top y+R\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi)dW(s), ~s\in[t,T],\\ y(t) &= [I_n+G\Si(t)]^{-1}[F^\top\l-G\f(t)]. \end{aligned}\right.\ee Obviously, \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{SDE-y} is uniquely solvable. \begin{theorem}\label{thm:solution-BLQ-lamda} Let {\rm\ref{(H1)}--\ref{(H2)}} hold. Then\autoref{BLQ} admits a unique optimal control which is given by $$v^*_\l(s)=N(s)^{-1}L(s)^\top y(s), \q s\in[t,T],$$ where $y$ is the solution to the SDE \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{SDE-y}. \end{theorem} \begin{proof} Let $(x,z)$ be the adapted solution to the BSDE $$\left\{\begin{aligned} dx(s) &= (Ax+Kz+Lv^*_\l)ds + zdW(s), \q s\in[t,T],\\ x(T) &= \eta. \end{aligned}\right.$$ According to \autoref{thm:optmality}, it suffices to verify that the solution $y$ of \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{SDE-y} satisfies the SDE $$\left\{\begin{aligned} dy(s) &= (-A^\top y + Qx)ds + (-K^\top y+Rz)dW(s), \q s\in[t,T],\\ y(t) &= Gx(t)+F^\top\l. \end{aligned}\right.$$ This can be accomplished if we are able to show that \begin{equation}\label{Decoupling} x(s) = -[\Si(s)y(s)+\f(s)], \q z(s) = [I_n+\Si(s) R(s)]^{-1}[\Si(s) K(s)^\top y(s)-\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi(s)]. \end{equation} Indeed, if \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{Decoupling} holds, then the first relation gives $$Gx(t)+F^\top\l=-G\Si(t)y(t)-G\f(t)+F^\top\l,$$ which, together with the initial condition in \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{SDE-y}, implies that \begin{align*} y(t) &= -G\Si(t)y(t)+[I_n+G\Si(t)]y(t)=-G\Si(t)y(t)+F^\top\l-G\f(t) \\ &= Gx(t)+F^\top\l. 
\end{align*} Furthermore, $$-[(A^\top\1n+Q\Si)y + Q\f] = -A^\top y + Qx,$$ and using the second relation in \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{Decoupling} we obtain \begin{align*} K^\top y -(I_n+R\Si)^{-1}(K^\top y+R\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi) &= [I_n-(I_n+R\Si)^{-1}]K^\top y -(I_n+R\Si)^{-1}R\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi \\ % &= (I_n+R\Si)^{-1}R\Si K^\top y -(I_n+R\Si)^{-1}R\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi \\ % &= (I_n+R\Si)^{-1}R(\Si K^\top y -\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi) \\ % &= R(I_n+\Si R)^{-1}(\Si K^\top y -\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi) \\ % &= Rz, \end{align*} and hence $-(I_n+R\Si)^{-1}(K^\top y+R\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi)= -K^\top y+Rz$. \medskip In order to prove \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{Decoupling}, let us denote $$ \hat x(s) \triangleq} \def\({\Big (} \def\diag{\hbox{\rm diag\,} -[\Si(s)y(s)+\f(s)], \q \hat z(s) \triangleq} \def\({\Big (} \def\diag{\hbox{\rm diag\,} [I_n+\Si(s) R(s)]^{-1}[\Si(s) K(s)^\top y(s)-\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi(s)].$$ Thanks to the uniqueness of an adapted solution, our proof will be complete if we can show that $(\hat x,\hat z)$ satisfies the same BSDE as $(x,z)$. To this end, we first note that $\hat x(T)=-[\Si(T)y(T)+\f(T)]=\eta$. Moreover, by It\^{o}'s rule, \begin{align}\label{eqn:hatx} d\hat x &= d(-\Si y-\f)= -\dot\Si y ds -\Si dy -d\f \nn\\ % &= -\dot\Si y ds +\Si[(A^\top\1n+Q\Si)y+Q\f]ds +\Si(I_n+R\Si)^{-1}(K^\top y+R\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi)dW \nn\\ &\hphantom} \def\nn{\nonumber} \def\cl{\overline{=\ } -\1n[(A+\Si Q)\f +K(I_n+\Si R)^{-1}\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi]ds - \beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi dW \nn\\ % &= [(-\dot\Si+ \Si A^\top +\Si Q\Si)y -A\f-K(I_n+\Si R)^{-1}\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi]ds \nn\\ &\hphantom} \def\nn{\nonumber} \def\cl{\overline{=\ } +\1n\{\Si(I_n+R\Si)^{-1}K^\top y + [\Si(I_n+R\Si)^{-1}R-I_n]\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi\}dW. \end{align} Using \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{Ric-Sigma}, we can rewrite the drift term in \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{eqn:hatx} as \begin{align*} & (-\dot\Si+ \Si A^\top +\Si Q\Si)y -A\f-K(I_n+\Si R)^{-1}\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi \\ % &\q = [-A\Si + LN^{-1}L^\top +K(I_n+\Si R)^{-1}\Si K^\top]y -A\f-K(I_n+\Si R)^{-1}\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi \\ % &\q = -A(\Si y+\f) + LN^{-1}L^\top y + K(I_n+\Si R)^{-1}(\Si K^\top y-\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi) \\ % &\q = A\hat x + Lv^*_\l + K\hat z. \end{align*} Using the fact that $$\Si(I_n+R\Si)^{-1} = (I_n+\Si R)^{-1}\Si, \q \Si(I_n+R\Si)^{-1}R-I_n = -(I_n+\Si R)^{-1},$$ we can rewrite the diffusion term in \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{eqn:hatx} as \begin{align*} & \Si(I_n+R\Si)^{-1}K^\top y + [\Si(I_n+R\Si)^{-1}R-I_n]\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi \\ % &\q = (I_n+\Si R)^{-1}\Si K^\top y -(I_n+\Si R)^{-1}\beta} \def\D{\Delta} \def\d{\delta} \def\F{\Phi} \def\p{\phi \\ % &\q = \hat z. 
\end{align*} This shows that $(\hat x,\hat z)$ satisfies the same BSDE as $(x,z)$ and hence completes the proof. \end{proof} \section{Selection of optimal parameters}\label{Sec:Main-result} In this section we show how to find a $\l\in L^2_{\cF_t}(\Om;\mathbb{R}^k)$, called an {\it optimal parameter}, such that the corresponding optimal state process of \autoref{BLQ} satisfies $x^*_\l(t)\in\cH(F,b)$. It is worth pointing out that the usual method of Lagrange multipliers does not work efficiently in our situation, due to the difficulty in computing the derivative of $J_\l(t,v^*_\l)$ in $\l$. The key to our approach is to establish an equivalence relationship between the controllability of \eqref{state*} and a system involving $\Si$, the solution of the Riccati equation \eqref{Ric-Sigma}. It turns out that an optimal parameter exists and can be obtained by solving an algebraic equation. \medskip Recall that $\Si$ and $(\f,\beta)$ are the unique solutions to equations \eqref{Ric-Sigma} and \eqref{BSDE-f}, respectively. The main result of this section can be stated as follows. \begin{theorem}\label{thm:main} Let {\rm\ref{(H1)}--\ref{(H2)}} hold. If the state of system \eqref{state*} can be transferred to $(T,\eta)$ from the stochastic linear manifold $\cH(F,b)$, then the algebraic equation \bel{Eqn-lamda} \Big\{F[I_n+\Si(t)G]^{-1}\Si(t)F^\top\Big\}\l = -\Big\{ F[I_n+\Si(t)G]^{-1}\f(t)+b \Big\}\ee has a solution. Moreover, any solution $\l^*$ of \eqref{Eqn-lamda} is an optimal parameter, and the optimal controls $v^*$ of \autoref{CBLQ} are given by \begin{align*} v^*(s) &= N(s)^{-1}L(s)^\top y^*(s), \q s\in[t,T], \end{align*} where $y^*$ is the solution of \bel{eqn:y*}\left\{\begin{aligned} dy^*(s) &= -[(A^\top\1n+Q\Si)y^*+Q\f]ds -(I_n+R\Si)^{-1}(K^\top y^*+R\beta)dW, ~s\in[t,T],\\ y^*(t) &= [I_n+G\Si(t)]^{-1}[F^\top\l^*-G\f(t)]. \end{aligned}\right.\ee \end{theorem} In preparation for the proof of \autoref{thm:main}, let us consider the following system: \bel{hatx}d\hat x(s) = [\widehat A(s)\hat x(s)+\widehat K(s)\hat z(s)+\widehat L(s)\hat v(s)]ds+\hat z(s)dW(s),\ee where the coefficients are given by \begin{align}\label{hatAKL} \widehat A = A+\Si Q, \q \widehat K= K(I_n+\Si R)^{-1}, \q \widehat L = (LN^{-{1\over2}},\, -\Si Q^{1\over2},\, K(I_n+\Si R)^{-1}\Si R^{1\over2}). \end{align} The following result shows that the controllability of system \eqref{state*} is equivalent to that of system \eqref{hatx}. \begin{proposition}\label{prop:x=hatx} Let {\rm\ref{(H1)}--\ref{(H2)}} hold.
Let $0\leqslant} \def\){\Big )} %\def\tc{\textcolor{red} t_0<t_1\leqslant} \def\){\Big )} %\def\tc{\textcolor{red} T$, $x_0\in L^2_{\cF_{t_0}}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$ and $x_1\in L^2_{\cF_{t_1}}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)$. A control $(z,v)\in L_\mathbb{F}} \def\sF{\mathscr{F}} \def\fF{\mathfrak{F}} \def\cF{{\cal F}} \def\BF{{\bm F}} \def\Bf{{\bm f}^2(t_0,t_1;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^n)\times L_\mathbb{F}} \def\sF{\mathscr{F}} \def\fF{\mathfrak{F}} \def\cF{{\cal F}} \def\BF{{\bm F}} \def\Bf{{\bm f}^2(t_0,t_1;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^{m-n})$ transfers $(t_0,x_0)$ to $(t_1,x_1)$ for system \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{state*} if and only if the control $(\hat z,\hat v)$ defined by \bel{hatzv} \hat z(s)\triangleq} \def\({\Big (} \def\diag{\hbox{\rm diag\,} z(s), \q \hat v(s)\triangleq} \def\({\Big (} \def\diag{\hbox{\rm diag\,} \begin{pmatrix}[N(s)]^{1\over2}v(s)\\ [Q(s)]^{1\over2}x(s) \\ [R(s)]^{1\over2}z(s)\end{pmatrix}; \q s\in[t_0,t_1] \ee does so for system \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{hatx}, where $x$ is the solution of \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{state*} with respect to the initial pair $(t_0,x_0)$ and the control $(z,v)$. \end{proposition} \begin{proof} Let $(\hat z,\hat v)$ be defined by \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{hatzv} and $\hat x$ be the solution to \bel{hatx-5.21}\left\{\begin{aligned} d\hat x(s) &= [\widehat} \def\qq{\qquad} \def\1n{\negthinspace A(s)\hat x(s)+\widehat} \def\qq{\qquad} \def\1n{\negthinspace K(s)\hat z(s)+\widehat} \def\qq{\qquad} \def\1n{\negthinspace L(s)\hat v(s)]ds + \hat z(s)dW(s), \q s\in[t_0,t_1],\\ \hat x(t_0) &= x_0. \end{aligned}\right.\ee We prove the assertion by showing $\hat x=x$. Substituting \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{hatAKL} and \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{hatzv} into \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{hatx-5.21}, we have \bel{hatx=x}\left\{\begin{aligned} d\hat x(s) &= [A\hat x+\Si Q(\hat x-x)+Kz+Lv]ds + z dW(s), \q s\in[t_0,t_1],\\ \hat x(t_0) &= x_0. \end{aligned}\right.\ee Clearly, $x$ is also a solution of \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{hatx=x} and hence $x=\hat x$ by the uniqueness of a solution. \end{proof} Although the system \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{hatx} looks more complicated than \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{state*}, the controllability Gramian of \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{hatx} takes a simpler form, as shown by the following result. \begin{proposition}\label{prop:Gramian-Si} Let {\rm\ref{(H1)}--\ref{(H2)}} hold. For any $t\in[0,T]$, the controllability Gramian of system \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{hatx} over $[t,T]$ is $\Si(t)$. 
\end{proposition} \begin{proof} Let $\Pi=\{\Pi(s);0\leqslant s\leqslant T\}$ be the solution to the following SDE for $\mathbb{R}^{n\times n}$-valued processes: \bel{eqn:Pi}\lt\{\begin{aligned} d\Pi(s) &= -\Pi(s)\widehat A(s)ds-\Pi(s)\widehat K(s)dW(s),\q s\in[0,T],\\ \Pi(0) &= I_n, \end{aligned}\rt.\ee and let $\Pi(t,s)=\Pi(t)^{-1}\Pi(s)$. By \autoref{prop:controllability}, the controllability Gramian of system \eqref{hatx} over $[t,T]$ is $$\mathbb{E}\lt\{\int_t^T\Pi(t,s)\widehat L(s)\big[\Pi(t,s)\widehat L(s)\big]^\top ds\rt\}.$$ On the other hand, we have by It\^{o}'s rule that \begin{align*} d\(\Pi\Si\Pi^\top\) &= -\,\Pi(A+\Si Q)\Si\Pi^\top ds - \Pi K(I_n+\Si R)^{-1}\Si\Pi^\top dW \\ &\hphantom{=\ } + \Pi\dot\Si\Pi^\top ds - \Pi\Si(A+\Si Q)^\top\Pi^\top ds - \Pi\Si(I_n+R\Si)^{-1}K^\top\Pi^\top dW \\ &\hphantom{=\ } + \Pi K(I_n+\Si R)^{-1}\Si(I_n+R\Si)^{-1}K^\top\Pi^\top ds \\ &= -\,\Pi\[(A+\Si Q)\Si - \dot\Si + \Si(A+\Si Q)^\top \\ &\hphantom{=-\,\Pi\[~} - K(I_n+\Si R)^{-1}\Si(I_n+R\Si)^{-1}K^\top\]\Pi^\top ds \\ &\hphantom{=\ } - \Pi\[K(I_n+\Si R)^{-1}\Si + \Si(I_n+R\Si)^{-1}K^\top\]\Pi^\top dW \\ &= -\,\Pi\[LN^{-1}L^\top + \Si Q\Si + K(I_n+\Si R)^{-1}\Si K^\top \\ &\hphantom{=-\,\Pi\[~}- K(I_n+\Si R)^{-1}\Si(I_n+R\Si)^{-1}K^\top\]\Pi^\top ds \\ &\hphantom{=\ } - \Pi\[K(I_n+\Si R)^{-1}\Si + \Si(I_n+R\Si)^{-1}K^\top\]\Pi^\top dW. \end{align*} Integrating from $t$ to $T$ and taking conditional expectations with respect to $\cF_t$ on both sides, we obtain \begin{align*} \Pi(t)\Si(t)\Pi(t)^\top &= \mathbb{E}\bigg\{\int_t^T\Pi\[LN^{-1}L^\top + \Si Q\Si + K(I_n+\Si R)^{-1}\Si K^\top \\ &\hphantom{=\mathbb{E}\bigg\{\Pi\[~} - K(I_n+\Si R)^{-1}\Si(I_n+R\Si)^{-1}K^\top\]\Pi^\top ds\bigg|\cF_t\bigg\}. \end{align*} Observe that \begin{align*} & LN^{-1}L^\top + \Si Q\Si + K(I_n+\Si R)^{-1}\Si K^\top - K(I_n+\Si R)^{-1}\Si(I_n+R\Si)^{-1}K^\top \\ &\q= LN^{-1}L^\top + \Si Q\Si + K\[(I_n+\Si R)^{-1}\Si R\Si(I_n+R\Si)^{-1}\]K^\top \\ &\q= \widehat L \widehat L^\top, \end{align*} and that $\Pi(t,s)$ is independent of $\cF_t$ for $s\geqslant t$.
Then we have \begin{align*} \Si(t) &= \Pi(t)^{-1}\lt\{\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\int_t^T\Pi(s)\widehat} \def\qq{\qquad} \def\1n{\negthinspace L(s)\widehat} \def\qq{\qquad} \def\1n{\negthinspace L(s)^\top\Pi(s)^\top ds\bigg|\cF_t\rt\}\[\Pi(t)^{-1}\]^\top \\ &= \mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\lt\{\int_t^T\Pi(t,s)\widehat} \def\qq{\qquad} \def\1n{\negthinspace L(s)\big[\Pi(t,s)\widehat} \def\qq{\qquad} \def\1n{\negthinspace L(s)\big]^\top ds\bigg|\cF_t\rt\} \\ &= \mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}\lt\{\int_t^T\Pi(t,s)\widehat} \def\qq{\qquad} \def\1n{\negthinspace L(s)\big[\Pi(t,s)\widehat} \def\qq{\qquad} \def\1n{\negthinspace L(s)\big]^\top ds\rt\}. \end{align*} This completes the proof. \end{proof} {\it Proof of \autoref{thm:main}.} First note that the state of system \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{hatx} can also be transferred to $(T,\eta)$ from the stochastic linear manifold $\cH(F,b)$ (\autoref{prop:x=hatx}) and that the controllability Gramian of system \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{hatx} over $[t,T]$ is $\Si(t)$ (\autoref{prop:Gramian-Si}). Thus, by \autoref{coro:controllability} (iii), there exists $\xi\in L^2_{\cF_t}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^k)$ satisfying \bel{5.24-1} F\Si(t)\xi = b-F\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}[\Pi(t,T)\eta|\cF_t], \ee where $\Pi(t,T)=\Pi(t)^{-1}\Pi(T)$ with $\Pi=\{\Pi(s);0\leqslant} \def\){\Big )} %\def\tc{\textcolor{red} s\leqslant} \def\){\Big )} %\def\tc{\textcolor{red} T\}$ being the solution of \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{eqn:Pi}. Applying \autoref{lemma:Y-formula} to the BSDE \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{BSDE-f}, we obtain $$\mathbb{E}} \def\sE{\mathscr{E}} \def\fE{\mathfrak{E}} \def\cE{{\cal E}} \def\BE{{\bm E}} \def\Be{{\bm e}[\Pi(t,T)\eta|\cF_t] = -\f(t),$$ and hence \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{5.24-1} becomes $$F\Si(t)\xi = F\f(t) + b.$$ Now using the identity $$ I_n-\Si(t)[I_n+G\Si(t)]^{-1}G = [I_n+\Si(t)G]^{-1}, $$ it is straightforward to verify that $$\l = [I_n+G\Si(t)]\xi - G\f(t)$$ is a solution of $$\Big\{F\Si(t)[I_n+G\Si(t)]^{-1}\Big\}\l = F[I_n+\Si(t)G]^{-1}\f(t)+b.$$ That is, $F[I_n+\Si(t)G]^{-1}\f(t)+b$ lies in the range of $F\Si(t)[I_n+G\Si(t)]^{-1}$. Since $$F\Si(t)[I_n+G\Si(t)]^{-1} = F[I_n+\Si(t)G]^{-1}\Si(t)$$ and $F[I_n+\Si(t)G]^{-1}\Si(t)$ and $F[I_n+\Si(t)G]^{-1}\Si(t)F^\top$ have the same range (\autoref{lmm:range=range}), we see that $F[I_n+\Si(t)G]^{-1}\f(t)+b$ also lies in the range of $F[I_n+\Si(t)G]^{-1}\Si(t)F^\top$, which means the algebraic equation \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{Eqn-lamda} has a solution. \medskip For the second assertion, let $\l^*\in L^2_{\cF_t}(\Om;\mathbb{R}} \def\sR{\mathscr{R}} \def\fR{\mathfrak{R}} \def\cR{{\cal R}} \def\BR{{\bm R}} \def\Br{{\bm r}^k)$ and $y^*$ be the solution to the SDE \eqref} \def\Blan{\Big\lan} \def\esssup{\mathop{\rm esssup}{eqn:y*}. 
By \autoref{thm:solution-BLQ-lamda}, the process $$v^*(s) \triangleq N(s)^{-1}L(s)^\top y^*(s), \q s\in[t,T]$$ is the optimal control of Problem (BLQ)$_{\l^*}$. Further, let $(x^*,z^*)$ be the adapted solution to the BSDE $$\left\{\begin{aligned} dx^*(s) &= (Ax^* + Kz^* + Lv^*)ds + z^* dW(s), \q s\in[t,T],\\ x^*(T) &= \eta. \end{aligned}\right.$$ We see from the proof of \autoref{thm:solution-BLQ-lamda} that $(x^*,z^*)$ and $y^*$ have the following relation (recalling \eqref{Decoupling}): $$ x^*(s) = -[\Si(s)y^*(s)+\f(s)], \q z^*(s) = [I_n+\Si(s)R(s)]^{-1}[\Si(s)K(s)^\top y^*(s)-\beta(s)],$$ from which we obtain \begin{align}\label{x-lamada} x^*(t) &= -\Si(t)y^*(t) - \f(t) \nn\\ &= -\Si(t)[I_n+G\Si(t)]^{-1}[F^\top\l^*-G\f(t)] - \f(t) \nn\\ &= -[I_n+\Si(t)G]^{-1}\Si(t)[F^\top\l^*-G\f(t)] - \f(t) \nn\\ &= -[I_n+\Si(t)G]^{-1}\Si(t)F^\top\l^* + [I_n+\Si(t)G]^{-1}\Si(t)G\f(t) - \f(t) \nn\\ &= -[I_n+\Si(t)G]^{-1}\Si(t)F^\top\l^* - [I_n+\Si(t)G]^{-1}\f(t). \end{align} According to \autoref{prop:relation}, the optimal control $$v^*(s) =N(s)^{-1}L(s)^\top y^*(s), \q s\in[t,T]$$ of Problem (BLQ)$_{\l^*}$ is also optimal for \autoref{CBLQ} if and only if $x^*(t)\in\cH(F,b)$. Using \eqref{x-lamada}, we see the latter holds if and only if $\l^*$ is a solution of \eqref{Eqn-lamda}. $\hfill\qed$
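\medskip To make the constructive content of \autoref{thm:main} concrete, the following minimal numerical sketch (not part of the theoretical development; all routine and variable names are ours) outlines how the recipe could be carried out in the simplest special case of constant coefficient matrices and deterministic $\eta$ and $b$. In that case $\beta\equiv0$ and the BSDE \eqref{BSDE-f} reduces to a terminal-value ODE for $\f$; the Riccati equation \eqref{Ric-Sigma} and the $\f$-equation are integrated backward in time, the algebraic equation \eqref{Eqn-lamda} is solved in the least-squares sense (any solution is an optimal parameter), and the resulting initial value $y^*(t)$ seeds the SDE \eqref{eqn:y*}, from which $v^*=N^{-1}L^\top y^*$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def sigma_rhs(s, sig_flat, A, K, L, Q, R, N, n):
    # Riccati right-hand side:  dSigma/ds = Sigma A' + A Sigma + Sigma Q Sigma
    #                            - L N^{-1} L' - K (I + Sigma R)^{-1} Sigma K'
    S = sig_flat.reshape(n, n)
    M = np.linalg.solve(np.eye(n) + S @ R, S)      # (I + Sigma R)^{-1} Sigma
    dS = S @ A.T + A @ S + S @ Q @ S \
         - L @ np.linalg.solve(N, L.T) - K @ M @ K.T
    return dS.reshape(-1)

def optimal_parameter(t, T, A, K, L, Q, R, N, G, F, b, eta):
    n = A.shape[0]
    # Step 1: integrate the Riccati equation backward from Sigma(T) = 0.
    ric = solve_ivp(sigma_rhs, (T, t), np.zeros(n * n),
                    args=(A, K, L, Q, R, N, n), dense_output=True, rtol=1e-8)
    Sigma_t = ric.y[:, -1].reshape(n, n)
    # Step 2: deterministic eta => beta = 0 and phi solves the terminal-value
    # ODE  dphi/ds = (A + Sigma Q) phi,  phi(T) = -eta.
    def phi_rhs(s, phi):
        S = ric.sol(s).reshape(n, n)
        return (A + S @ Q) @ phi
    phi_t = solve_ivp(phi_rhs, (T, t), -np.asarray(eta, float),
                      rtol=1e-8).y[:, -1]
    # Step 3: solve the algebraic equation for the parameter lambda
    # (least squares; any solution is an optimal parameter).
    tmp = np.linalg.solve(np.eye(n) + Sigma_t @ G,
                          np.column_stack([Sigma_t @ F.T, phi_t]))
    lam = np.linalg.lstsq(F @ tmp[:, :-1], -(F @ tmp[:, -1] + b), rcond=None)[0]
    # Initial value of the adjoint process y*; y* then evolves by the SDE in
    # the theorem, and the optimal control is v*(s) = N^{-1} L' y*(s).
    y_t = np.linalg.solve(np.eye(n) + G @ Sigma_t, F.T @ lam - G @ phi_t)
    return Sigma_t, phi_t, lam, y_t
\end{verbatim}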
\section{Introduction} A central problem in scientific computing involves estimating parameters that describe mathematical models, such as initial conditions, boundary conditions, or material parameters. This is often addressed by using experimental measurements and a mathematical model to compute estimates of unknown model parameters. In other words, one can estimate the parameters by solving an inverse problem. Experimental design involves specifying the experimental setup for collecting measurement data, with the goal of accurately recovering the parameters of interest. As such, optimal experimental design (OED) is an important aspect of effective and efficient parameter estimation. Namely, in applications where collecting experimental measurements is expensive (e.g., because of budget, labor, or physical constraints), deploying experimental resources has to be done efficiently and in a parsimonious manner. Even when collecting large amounts of data is feasible, OED is still important; the computational cost of processing all the data may be prohibitive or a poorly designed experiment with many measurements may miss important information about the parameters of interest. To make matters concrete, we explain the inverse problem and the experimental design in the context of an application. Consider the transport of a contaminant in an urban environment or the subsurface. The forward problem involves forecasting the spread of the contaminant, whereas the inverse problem involves using the measurements of the contaminant concentration at discrete points in space and time to recover the source of the contaminant (i.e., the initial state). In this application, OED involves optimal placement of sensors, at which measurement data is collected, to reconstruct the initial state. We focus on OED for Bayesian linear inverse problems governed by PDEs. In our formulation of the OED problem, the goal is to find an optimal subset of sensors from a fixed array of ${n_s}$ candidate sensor sites. The experimental design is parameterized by assigning non-negative weights to each candidate sensor location. Ideally, we seek a binary weight vector $\vec{w}$; if $w_i = 1$, a sensor will be placed at the $i$th candidate location, and if $w_i = 0$, no sensor will be placed at the location. However, formulating an optimization problem over binary weight vectors leads to a problem with combinatorial complexity that is computationally prohibitive. A common approach to address this issue is to relax the binary requirement on design weights by letting the weights take values in the interval $[0, 1]$. The sparsity of the design will then be controlled using a penalty method; see e.g.,~\cite{HaberMagnantLuceroEtAl12, a14infinite,yu2018scalable}. This results in an optimization problem of the following form: \begin{equation}\label{equ:basic_prob} \min_{\vec{w} \in [0, 1]^{n_s}} \Phi(\vec{w}) + \gamma P(\vec{w}), \end{equation} where $\Phi$ denotes the design criterion, $\gamma > 0$ is a penalty parameter, and $P$ is a penalty function. Adopting a Bayesian approach to the inverse problem, the design criterion will be a measure of the uncertainty in the estimated parameters. In this article, we focus on a popular choice known as the A-optimal criterion~\cite{Ucinski05,ChalonerVerdinelli95}. That is, we seek a sensor configuration that results in a minimized average posterior variance. The design criterion, in this case, is given by the trace of the posterior covariance operator. 
One major challenge in solving~\cref{equ:basic_prob}, specifically for PDE-based inverse problems, is the computational cost of objective function and gradient evaluations. Namely, the posterior covariance operator $\mathbf{\Gamma}_\text{post}$ is dense, high-dimensional, and computationally challenging to explicitly form---computing applications of $\mathbf{\Gamma}_\text{post}$ to vectors requires solving multiple PDEs. Furthermore, these computations must be performed at each iteration of an optimization algorithm used to solve~\cref{equ:basic_prob}. To address this computational challenge, efficient and accurate matrix-free approaches for computing the OED objective and its gradient are needed. Another challenge in solving the OED problem~\cref{equ:basic_prob} is the need for a suitable penalty method that is computationally tractable and results in sparse and binary optimal weight vectors. This article is about methods for overcoming these computational challenges. \paragraph{Related work} For an extensive review of the OED literature, we refer the reader to~\cite{alexanderian2018dopt}. We focus here on works that are closely related to the present article. Algorithms for A-optimal designs for ill-posed linear inverse problems were proposed in~\cite{HaberHoreshTenorio08,HaberMagnantLuceroEtAl12,FJ_16} and more specifically for infinite-dimensional Bayesian linear inverse problems in~\cite{a14infinite}. In majority of these articles, Monte-Carlo trace estimators are used to approximate the A-optimal design criterion and its gradient. Also,~\cite{HaberMagnantLuceroEtAl12,a14infinite} advocate use of low-rank approximations using the Lanczos algorithm or the randomized SVD~\cite{Halko2011structure}. We refer to our previous work~\cite{saibaba2016randomized} for comparison of Monte Carlo trace estimators and those based on randomized subspace iteration; it was shown that the latter are significantly more accurate than Monte Carlo trace estimators. Regarding sparsity control, various techniques have been used to approximate the $\ell_0$-``norm'' to enforce sparse and binary designs. For example, \cite{HaberHoreshTenorio08,HaberHoreshTenorio10,HaberMagnantLuceroEtAl12} use the $\ell_1$-penalty function with an appropriate threshold to post-process the solution. In \cite{a14infinite}, a continuation approach is proposed that involves solving a sequence of optimization problems with non-convex penalty functions that approximate the $\ell_0$-``norm''. More recently, in~\cite{yu2018scalable}, a sum-up rounding approach is proposed to obtain binary optimal designs. \paragraph{Our approach and contributions} In this article, we make the following advances in methods for A-optimal sensor placements in infinite-dimensional Bayesian linear inverse problems: \begin{enumerate} \item We present efficient and accurate randomized estimators of the A-optimal criterion and its gradient, based on randomized subspace iteration. This is accompanied by a detailed algorithm that guides efficient implementations, discussion of computational cost, as well as theoretical error analysis; see~\cref{sec:criterion}. Our estimators are structure exploiting, in that they use the low-rank structure embedded in the posterior covariance operator. To quantify the accuracy of the estimators we present rigorous error analysis, significantly advancing the methods in~\cite{saibaba2016randomized}. A desirable feature of our analysis is that the bounds are independent of the dimension of the discretized inversion parameter. 
Furthermore, the computational cost {(measured in the number of PDE solves)} of the A-optimal objective and gradient using our proposed estimators is independent of the discretized parameter dimension. \item We propose a new algorithm for optimal sensor placement that is based on solving a sequence of reweighted $\ell_1$-optimization problems; see~\cref{sec:reweightl1}. An important benefit of this approach is that one works with convex penalty functions, and since the A-optimal criterion itself is a convex function of $\vec{w}$, in each step of the reweighted $\ell_1$ algorithm a convex optimization problem is solved. We derive this algorithm by applying the Majorization-Minimization principle to a novel penalty function that promotes binary designs. The solution of the reweighted $\ell_1$-optimization problems is accelerated by the efficient randomized estimators for the optimality criterion and its gradient. To our knowledge, the presented framework, based on reweighted $\ell_1$-minimization, is the first of its kind in the context of OED. \item Motivated by reducing computational cost, we propose a new criterion known as the modified A-optimal criterion; see~\cref{sec:mod_a}. This criterion is derived by considering a suitably weighted A-optimal criterion. We present randomized estimators, with complete error analysis, for computing the modified A-optimal criterion and its gradient. \end{enumerate} We illustrate the benefits of the proposed algorithms on a model problem from contaminant source identification. A comprehensive set of numerical experiments is provided to test various aspects of the presented approach; see~\cref{sec:numerics}. Finally, we remark that the randomized estimators and the reweighted $\ell_1$ approach for promoting sparse and binary weights are of independent interest beyond the application to OED. \section{Preliminaries}\label{sec:background} In this section, we recall the background material needed in the remainder of the article. \subsection{Bayesian linear inverse problems} We consider a linear inverse problem of estimating $m$ using the model \[ F m + \vec{\eta} = \vec{y}. \] Here $F$ is a linear parameter-to-observable map (also called the forward operator), $\vec{\eta}$ represents the measurement noise, and $\vec{y}$ is a vector of measurement data. The inversion parameter $m$ is an element of $\mathcal{V} = L^2(\mathcal{D})$, where $\mathcal{D}$ is a bounded domain. \subsubsection*{The setup of the inverse problem} To fully specify the inverse problem, we need to describe the prior law of $m$ and our choice of data likelihood. For the prior, we choose a Gaussian measure $\mu_{\mathup{pr}}=\GM{\ipar_{\mathup{pr}}}{\Gamma_{\mathup{pr}}}$. We assume the prior mean $\ipar_{\mathup{pr}}$ is a sufficiently regular element of $\mathcal{V}$ and that the covariance operator $\Gamma_{\mathup{pr}}:\mathcal{V} \to \mathcal{V}$ is a strictly positive self-adjoint trace-class operator. Following the developments in~\cite{Stuart10,Bui-ThanhGhattasMartinEtAl13,DashtiStuart17}, we use $\Gamma_{\mathup{pr}} = \mathcal{A}^{-2}$ with $\mathcal{A}$ taken to be a Laplacian-like operator~\cite{Stuart10}. This ensures that $\Gamma_{\mathup{pr}}$ is trace-class in two and three space dimensions. We consider the case where $F$ represents a time-dependent PDE and we assume observations are taken at ${n_s}$ sensor locations at ${n_t}$ points in time. Thus, the vector of experimental data $\vec{y}$ is an element of $\mathbb{R}^{{n_s}{n_t}}$.
An application of the parameter-to-observable map, $F:\mathcal{V} \to \mathbb{R}^{{n_s}{n_t}}$, involves a PDE solve followed by an application of a spatiotemporal observation operator. We assume a Gaussian distribution on the experimental noise, $\vec{\eta} \sim \GM{\vec{0}}{\vec{\Gamma}_{\!\mathup{noise}}}$. Given this choice of the noise model---additive and Gaussian---the likelihood probability density function (pdf) is \[ \pi_{\mathup{like}}(\vec{y} \mid m) \propto \exp\left\{ -\frac12 \big(Fm - \vec{y}\big)^\top\vec{\Gamma}_{\!\mathup{noise}}^{-1} \big(Fm - \vec{y}\big)\right\}. \] Furthermore, the solution of the Bayesian inverse problem---the posterior distribution law $\mu_{\mathup{post}}^{{\obs}}$---is given by the Gaussian measure $\mu_{\mathup{post}}^{{\obs}} = \GM{\ipar_{\mathup{post}}^\obs}{\Gamma_{\mathup{post}}}$ with \begin{equation}\label{equ:mean-cov} \ipar_{\mathup{post}}^\obs = \Gamma_{\mathup{post}}(F^*\vec{\Gamma}_{\!\mathup{noise}}^{-1}\vec{y} + \Gamma_{\mathup{pr}}^{-1}\ipar_{\mathup{pr}}), \qquad \Gamma_{\mathup{post}} = (F^* \vec{\Gamma}_{\!\mathup{noise}}^{-1} F + \Gamma_{\mathup{pr}}^{-1})^{-1}. \end{equation} Note that here the posterior mean $\ipar_{\mathup{post}}^\obs$ coincides with the maximum a posteriori probability (MAP) estimator. We refer to~\cite{Stuart10} for further details. \subsubsection*{Discretization} We use a continuous Galerkin finite element discretization approach for the governing PDEs, as well as the inverse problem. Specifically, our discretization of the Bayesian inverse problem follows the developments in~\cite{Bui-ThanhGhattasMartinEtAl13}. The discretized parameter space in the present case is $\mathcal{V}_n = \mathbb{R}^n$ equipped with the inner product $\mip{\cdot}{\cdot}$ and norm $\mnorm{\cdot} = \mip{\cdot}{\cdot}^{1/2}$, where $\M$ is the finite element mass matrix. Note that $\mip{\cdot}{\cdot}$ is the discretized $L^2(\mathcal{D})$ inner product. The discretized parameter-to-observable map is a linear transformation $\mathbf{F}:\mathcal{V}_n \to \mathbb{R}^{{n_s}{n_t}}$ {with adjoint $\mathbf{F}^*$ discussed below}. The discretized prior measure $\GM{\dpar_{\mathup{pr}}}{\prior{}}$ is obtained by discretizing the prior mean and covariance operator, and the discretized posterior measure is given by $\GM{\vec{m}_\text{post}}{\mathbf{\Gamma}_\text{post}}$, with \[ \mathbf{\Gamma}_\text{post} = \left(\mathbf{F}^*\vec{\Gamma}^{-1}_\text{noise}\mathbf{F} + \prior{-1}\right)^{-1}, \quad \dpar_{\mathup{post}}^\obs = \mathbf{\Gamma}_\text{post}\left(\mathbf{F}^*\vec{\Gamma}^{-1}_\text{noise}\vec{y}+\prior{-1}\vec{m}_\text{pr} \right). \] We point out that the operator $\mathbf{F}^*\vec{\Gamma}^{-1}_\text{noise}\mathbf{F}$ is the Hessian of the data-misfit cost functional whose minimizer is the MAP point, and is thus referred to as the data-misfit Hessian; see e.g.,~\cite{a14infinite}. It is important to keep track of the inner products in the domains and ranges of the linear mappings, appearing in the above expressions, when computing the respective adjoint operators. For the readers' convenience, in \cref{fig:adjoints}, we summarize the different spaces that are important, the respective inner products, and the adjoints of the linear transformations defined between these spaces. \begin{figure}[ht]\centering \includegraphics[width=.35\textwidth]{figs/comm.pdf} \caption{Different spaces, their inner products, and the adjoints of linear transformations between them. 
Here $\ip{\cdot}{\cdot}$ denotes the Euclidean inner product and $\mip{\cdot}{\cdot}$ is the mass-weighted inner product.} \label{fig:adjoints} \end{figure} Using the fact that $\prior{}$ is a self-adjoint operator on $\mathcal{V}_n$ and the form of the adjoint operator $\mathbf{F}^*$ (see \cref{fig:adjoints}), we can rewrite the expression for $\mathbf{\Gamma}_\text{post}$ as follows: \begin{equation*} \mathbf{\Gamma}_\text{post} = \prior{1/2}\M^{-1/2}\left(\mathbf{I}+{\mathbfcal{F}}^\top {\vec{\Gamma}}_\text{noise}^{-1}{\mathbfcal{F}}\right)^{-1}\M^{1/2}\prior{1/2}, \end{equation*} with { \begin{equation}\label{eqn:ff} {\mathbfcal{F}}=\mathbf{F}\prior{1/2}\M^{-1/2}. \end{equation} } Note that {the operator $\mathbfcal{H}_\text{m} = {\mathbfcal{F}}^\top {\vec{\Gamma}}_\text{noise}^{-1}{\mathbfcal{F}}$ is a symmetric positive semidefinite matrix}, and is a similarity transform of the prior-preconditioned data-misfit Hessian $\prior{1/2}\mathbf{F}^*\vec{\Gamma}^{-1}_\text{noise}\mathbf{F}\prior{1/2}$. In many applications (including the application considered in \cref{sec:numerics}), $\mathbfcal{H}_\text{m}$ has rapidly decaying eigenvalues and therefore, it can be approximated by a low-rank matrix. This is a key insight that will be exploited in our estimators for the OED criterion and its gradient. \subsection{Randomized subspace iteration algorithm}\label{ssec:rand} In this article, we develop and use randomized estimators to efficiently compute the design criteria and their derivatives. We first explain how to use randomized algorithms for computing low-rank approximations of a symmetric positive semidefinite matrix $\mathbf{A}\in\mb{R}^{n\times n}$. To draw connection with the previous subsection, in our application $\mathbf{A}$ will stand for $\mathbfcal{H}_\text{m}$. We first draw a random Gaussian matrix $\boldsymbol\Omega \in \mb{R}^{n\times \ell}$ (i.e., the entries are independent and identically distributed standard normal random variables). We then perform $q$ steps of subspace iteration on $\mathbf{A}$ with the starting guess $\boldsymbol\Omega$ to obtain the matrix $\mathbf{Y}$. If, for example, the matrix has rank $k \leq \ell$, or the eigenvalues decay sufficiently, then the range of $\mathbf{Y}$ is a good approximation to the range of $\mathbf{A}$ under these suitable conditions. This is the main insight behind randomized algorithms. We now show how to obtain a low-rank approximation of $\mathbf{A}$. A thin-QR factorization of $\mathbf{Y}$ is performed to obtain the matrix $\mathbf{Q}$, which has orthonormal columns. We then form the ``projected'' matrix $\mathbf{T} = \mathbf{Q}^\top\mathbf{A}\mathbf{Q}$ and obtain the low-rank approximation \begin{equation} \mathbf{A} \approx \mathbf{Q} \mathbf{T}\mathbf{Q}^\top. \label{approx} \end{equation} This low-rank approximation can be manipulated in many ways depending on the desired application. An alternative low-rank approximation can be computed using the Nystr\"om approximation, see e.g.,~\cite{Halko2011structure}. \begin{algorithm}[h!] \caption{Randomized subspace iteration.} \begin{algorithmic} \REQUIRE $\mathbf{A}\in\mathbb{R}^{n\times n}$ with target rank $k$, oversampling parameter $p\geq 2$, with $\ell\equiv k+p \leq n$, and $q\geq 1$ (number of subspace iterations). \ENSURE $\mathbf{Q} \in \mb{R}^{n\times \ell}, \mathbf{T}\in\mathbb{R}^{\ell\times\ell}$. \STATE \textbf{Draw} a standard Gaussian random matrix $\boldsymbol{\Omega}\in\mathbb{R}^{n\times \ell}$. 
\STATE \textbf{Compute} $\mathbf{Y}=\mathbf{A}^q\boldsymbol\Omega$. \STATE \textbf{Compute} thin QR decomposition $\mathbf{Y}=\mathbf{Q}\mathbf{R}.$ \STATE \textbf{Compute} $\mathbf{T}=\mathbf{Q}^\top\mathbf{A}\mathbf{Q}$. \end{algorithmic} \label{alg:randsubspace} \end{algorithm} In addition, once the matrix $\mathbf{T}$ is computed, it can be used in various ways. In \cite{saibaba2016randomized}, $\trace{\mathbf{T}}$ was used as an estimator for $\trace{\mathbf{A}}$, whereas $\log\det(\mathbf{I} + \mathbf{T})$ was used as estimator for $\log\det(\mathbf{I} + \mathbf{A})$. The main idea behind these estimators is that the eigenvalues of $\mathbf{T}$ are good approximations to the eigenvalues of $\mathbf{A}$, when $\mathbf{A}$ is sufficiently low-rank or has rapidly decaying eigenvalues. Our estimators for the A-optimal criterion and its gradient utilize the same idea but in a slightly different form. \subsection{A-optimal design of experiments}\label{ssec:oed} As mentioned in the introduction, an experimental design refers to a placement of sensors used to collect measurement data for the purposes of parameter inversion. Here we describe the basic setup of the optimization problem for finding an A-optimal design. \subsubsection*{Experimental design and A-optimal criterion} We seek to find an optimal subset of a network of ${n_s}$ candidate sensor locations, which collect measurements at ${n_t}$ points in time. The experimental design is parameterized by a vector of design weights $\vec{w} \in [0, 1]^{n_s}$. In the present work, we use the A-optimal criterion to find the optimal design. That is, we seek designs that minimize the average posterior variance, as quantified by $\trace{\mathbf{\Gamma}_\text{post}(\vec{w})}$. (The precise nature of the dependence of $\mathbf{\Gamma}_\text{post}$ on $\vec{w}$ will be explained below.) Note that $\trace{\mathbf{\Gamma}_\text{post}(\vec{w})} = \trace{ \mathbf{\Gamma}_\text{post}(\vec{w}) - \prior{}} + \trace{\prior{}}$ and thus, minimizing the trace of the posterior covariance operator is equivalent to minimizing \begin{equation}\label{estobj} \aopt(\vec{w}) \equiv \trace{ \mathbf{\Gamma}_\text{post}(\vec{w}) - \prior{}}. \end{equation} This is the objective function we seek to minimize for finding A-optimal designs. As seen below, this formulation of the A-optimal criterion is well suited for approximations via randomized matrix methods \cref{ssec:rand}. Note also that minimizing $\aopt(\vec{w})$ amounts to maximizing $\trace{\prior{}} - \trace{\mathbf{\Gamma}_\text{post}(\vec{w})}$, which can be thought of as a measure of uncertainty reduction. We can also understand~\cref{estobj} from a decision theoretic point of view. It is well known~\cite{ChalonerVerdinelli95, alexanderian2016bayesian,AttiaAlexanderianSaibaba18} that for Bayesian linear inverse problems with Gaussian prior and additive Gaussian noise models, $\trace{\mathbf{\Gamma}_\text{post}{}}$ coincides with Bayes risk (with respect to the $L^2$ loss function): \[\begin{aligned} \trace{\mathbf{\Gamma}_\text{post}{}} = &\> \mathbb{E}_{\mu_{\mathup{pr}}} \Big( \mathbb{E}_{\pi_{\mathup{like}}(\vec{y} \mid \vec{m})} \big(\mnorm{\dpar_{\mathup{post}}^\obs - \vec{m}}^2 \big) \Big)\\ = &\> \int_{\mathcal{V}_n} \int_{\mathbb{R}^{n_s n_t}} \mnorm{\dpar_{\mathup{post}}^\obs - \vec{m}}^2 \, \pi_{\mathup{like}}(\vec{y} \mid \vec{m})d\vec{y}\,\mu_{\mathup{pr}}(d\vec{m}) . 
\end{aligned}\] Here $\mu_{\mathup{pr}}$ denotes the discretized prior measure, $\mu_{\mathup{pr}} = \mathcal{N}(\vec{\dpar_{\mathup{pr}}}, \prior{})$. Using $$\int_{\mathcal{V}_n} \mnorm{\dpar_{\mathup{pr}} - \vec{m}}^2 \, \mu_{\mathup{pr}}(d\vec{m}) = \trace{\prior{}},$$ we see that \[ \trace{\mathbf{\Gamma}_\text{post}{} - \prior{}} = \mathbb{E}_{\mu_{\mathup{pr}}} \Big(\mathbb{E}_{\pi_{\mathup{like}}(\vec{y} \mid \vec{m})} \big(\mnorm{\dpar_{\mathup{post}}^\obs - \vec{m}}^2 - \mnorm{\dpar_{\mathup{pr}} - \vec{m}}^2 \big) \Big); \] this provides an alternate interpretation of $\aopt(\cdot)$ as a Bayes risk with respect to a modified loss function. \subsubsection*{Dependence of the A-optimal criterion to the design weights} We follow the same setup as~\cite{a14infinite}. Namely, the design weights enter the Bayesian inverse problem through the data likelihood, resulting in the $\mathbfcal{H}_\text{m}$ operator being dependent on $\vec{w}$. To weight the spatiotemporal observations, the matrix $\mathbf{W}$ is defined as: \[ \mathbf{W}=\sum_{i=1}^{n_s}w_i\mathbf{E}_i, \quad \text{with} \quad \mathbf{E}_i= \mathbf{I}_{n_t}\otimes \mathbf{e}_i\mathbf{e}_i^\top, \] where $\otimes$ is the Kronecker product. Therefore, $\mathbfcal{H}_\text{m}(\vec{w})$ is expressed as follows: \[ \mathbfcal{H}_\text{m}(\vec{w}) = {\mathbfcal{F}}^\top \mathbf{W}^{1/2}{\vec{\Gamma}}_\text{noise}^{-1}\mathbf{W}^{1/2}{\mathbfcal{F}}, \] We refer to~\cite{a14infinite} for details. This results in the $\vec{w}$ dependent posterior covariance operator: \[ \mathbf{\Gamma}_\text{post}(\vec{w}) = \prior{1/2}\M^{-1/2}\left(\mathbfcal{H}_\text{m}(\vec{w}) + \mathbf{I}\right)^{-1}\M^{1/2} \prior{1/2}. \] In our formulation, we assume uncorrelated observations across sensor locations and time which implies $\vec{\Gamma}_\text{noise}$ is a $n_s n_t \times n_s n_t$ block diagonal matrix, with $n_t$ blocks of the form $\mathsf{diag}\,(\sigma_1^2, \ldots, \sigma_{n_s}^2)$; here $\sigma_i^2$, $i = 1, \ldots, n_s$, denote the measurement noise at individual sensor locations. Using this structure for $\vec{\Gamma}_\text{noise}$, we define \begin{equation} \label{Wnoise} {\mathbf{W}^\text{noise}} \equiv \mathbf{W}^{1/2}{\vec{\Gamma}}_\text{noise}^{-1}\mathbf{W}^{1/2} = \sum_{i=1}^{n_\text{sens}}w_i\mathbf{E}_i^\text{noise}, \end{equation} with $\mathbf{E}_i^\text{noise} = \sigma_i^{-2}\mathbf{I}_{n_\text{time}}\otimes \mathbf{e}_i\mathbf{e}_i^\top$. Thus, we have $\mathbfcal{H}_\text{m}(\vec{w}) = {\mathbfcal{F}}^\top{\mathbf{W}^\text{noise}}{\mathbfcal{F}}$ and the A-optimal criterion can be written as \begin{equation}\label{equ:criterion} \begin{aligned} \aopt{(\vec{w})} &= \trace{\prior{1/2}\M^{-1/2}\big[\big(\mathbfcal{H}_\text{m}(\vec{w}) + \mathbf{I}\big)^{-1} - \mathbf{I}\big] \M^{1/2}\prior{1/2} } \\ &= \trace{ \big[\big(\mathbfcal{H}_\text{m}(\vec{w}) + \mathbf{I}\big)^{-1} - \mathbf{I}\big] \mathbf{Z}}, \end{aligned} \end{equation} with { \begin{equation} \label{eqn:Z} \mathbf{Z}\equiv\M^{1/2}\prior{}\M^{-1/2}. \end{equation} } Anticipating that we will use a gradient-based solver for solving \cref{opt}, we also need the gradient of $\aopt{(\vec{w})}$ which we now derive. 
Using Theorems B.17 and B.19 in~\cite{Ucinski05}, the partial derivatives of \cref{equ:criterion} with respect to $w_j$, $j = 1,\dots, n_s$, are \begin{equation} \label{equ:gradient} \begin{aligned} \partial_j \aopt(\vec{w}) =-\trace{(\mathbf{I}+\mathbfcal{H}_\text{m}(\vec{w}))^{-1}\partial_j \mathbfcal{H}_\text{m}(\vec{w})(\mathbf{I}+\mathbfcal{H}_\text{m}(\vec{w}))^{-1}\mathbf{Z}}. \end{aligned} \end{equation} (We have used the notation $\partial_j$ to denote $\frac{\partial}{\partial w_j}$.) Note that using the definition of $\mathbfcal{H}_\text{m}(\vec{w})$, we have $\partial_j \mathbfcal{H}_\text{m}(\vec{w})={\mathbfcal{F}}^\top\mathbf{E}_j^\text{noise}{\mathbfcal{F}}$, $j=1,\ldots,n_s$. \subsubsection*{The optimization problem for finding an A-optimal design} We now specialize the optimization problem \cref{equ:basic_prob} to the case of A-optimal sensor placement for linear inverse problems governed by time-dependent PDEs: \begin{equation} \min_{\vec{w}\in[0, 1]^{{n_s}}}\aopt(\vec{w})+\gamma P(\vec{w}). \label{opt} \end{equation} As explained before, to enable efficient solution methods for the above optimization problem we need (i) a numerical method for fast computation of $\aopt(\vec{w})$ and its gradient, and (ii) a choice of penalty function that promotes sparse and binary weights. The former is facilitated by the randomized subspace iteration approach outlined earlier (see \cref{sec:criterion}), and for the latter we present an approach based on reweighted $\ell_1$ minimization (see \cref{sec:reweightl1}). \section{Efficient computation of A-optimal criterion and its gradient}\label{sec:criterion} The computational cost of solving \cref{opt} is dominated by the PDE solves required in the OED objective and gradient evaluations; these operations need to be performed repeatedly when using an optimization algorithm for solving \cref{opt}. Therefore, to enable computing A-optimal designs for large-scale applications, efficient methods for objective and gradient computations are needed. In this section, we derive efficient and accurate randomized estimators for \cref{equ:criterion} and \cref{equ:gradient}. The proposed estimators are matrix-free---they require only applications of the (prior-preconditioned) forward operator ${\mathbfcal{F}}$ and its adjoint on vectors. Moreover, the cost of computing these estimators does not increase with the discretized parameter dimension. This is because our estimators exploit the low-rank structure of $\mathbfcal{H}_\text{m}(\vec{w})$, a problem property that is independent of the choice of discretization. We introduce our proposed randomized estimators for the A-optimal design criterion $\aopt{(\vec{w})}$ and its gradient $\nabla\aopt{(\vec{w})}$ in \cref{sec:aopt_estimators}, along with a detailed procedure for computing them. We analyze the errors associated with the proposed estimators in \cref{ssec:error}. \subsection{Randomized estimators for $\aopt(\vec{w})$ and its gradient} \label{sec:aopt_estimators} Consider the low-rank approximation of $\mathbfcal{H}_\text{m}(\vec{w})$ given by $\widehat{\mathbfcal{H}}_\text{m}(\vec{w})=\mathbf{Q}(\vec{w})\mathbf{T}(\vec{w})\mathbf{Q}^\top(\vec{w})$, with $\mathbf{Q}(\vec{w})$ and $\mathbf{T}(\vec{w})$ computed using \cref{alg:randsubspace}.
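To make the construction of $\mathbf{Q}$ and $\mathbf{T}$ concrete, the following minimal NumPy sketch mirrors \cref{alg:randsubspace}; the routine \texttt{apply\_Hm}, which returns $\mathbfcal{H}_\text{m}(\vec{w})$ applied to a block of vectors (each column requiring one forward and one adjoint PDE solve), is a hypothetical placeholder and not part of our implementation.
\begin{verbatim}
import numpy as np

def randomized_subspace_iteration(apply_Hm, n, ell, q=1, rng=None):
    # Sketch of the randomized subspace iteration: apply_Hm(X) is assumed
    # to return Hm @ X for an (n, m) block X, with Hm symmetric positive
    # semidefinite (here, the prior-preconditioned data-misfit Hessian).
    rng = np.random.default_rng() if rng is None else rng
    Omega = rng.standard_normal((n, ell))   # Gaussian sampling matrix
    Y = Omega
    for _ in range(q):                      # Y = Hm^q * Omega
        Y = apply_Hm(Y)
    Q, _ = np.linalg.qr(Y, mode="reduced")  # thin QR decomposition
    T = Q.T @ apply_Hm(Q)                   # T = Q^T Hm Q  (ell x ell)
    return Q, T
\end{verbatim}
The eigenvalues of the small matrix $\mathbf{T}$ approximate the dominant eigenvalues of $\mathbfcal{H}_\text{m}(\vec{w})$, which is all that the estimators below require.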
Replacing $\mathbfcal{H}_\text{m}(\vec{w})$ by its approximation and using the cyclic property of the trace, we obtain the estimator for the A-optimal criterion \cref{equ:criterion}: \begin{equation}\label{aoptest} \aoptrand{(\vec{w};\ell)}=\trace{\left((\mathbf{I}+\widehat{\mathbfcal{H}}_\text{m}(\vec{w}))^{-1}-\mathbf{I}\right) \mathbf{Z}}, \end{equation} where $\mathbf{Z}$ is as in~\cref{eqn:Z}. To derive an estimator for the gradient, once again, we replace $\mathbfcal{H}_\text{m}(\vec{w})$ with its low-rank approximation $\widehat{\mathbfcal{H}}_\text{m}(\vec{w})$ in \cref{equ:gradient} to obtain \begin{equation} \gradaoptrand{(\vec{w};\ell)} = -\trace{(\mathbf{I}+\widehat{\mathbfcal{H}}_\text{m}(\vec{w}))^{-1}{\mathbfcal{F}}^\top\mathbf{E}_j^\text{noise}{\mathbfcal{F}}(\mathbf{I}+\widehat{\mathbfcal{H}}_\text{m}(\vec{w}))^{-1}\mathbf{Z}}, \label{estgrad} \end{equation} for $j=1,\dots n_s$. \subsubsection*{Computational procedure} First, we discuss computation of the A-optimal criterion estimator using \cref{alg:randsubspace}. Typically, the algorithm can be used with $q=1$, due to rapid decay of eigenvalues of $\mathbfcal{H}_\text{m}(\vec{w})$. In this case, \cref{alg:randsubspace} requires $2\ell$ applications of $\mathbfcal{H}_\text{m}(\vec{w})$. Since each application of $\mathbfcal{H}_\text{m}(\vec{w})$ requires one ${\mathbfcal{F}}$ apply (forward solve) and one ${\mathbfcal{F}}^\top$ apply (adjoint solve), computing $\widehat{\mathbfcal{H}}_\text{m}(\vec{w})$ requires $4\ell$ PDE solves. Letting the spectral decomposition of the $\ell \times \ell$ matrix $\mathbf{T}(\vec{w})$ be given by $\mathbf{T}(\vec{w})=\mathbf{U}(\vec{w})\mathbf{\Lambda}_\mathbf{T}(\vec{w}) \mathbf{U}(\vec{w})^\top$ and denoting $\mathbf{V}(\vec{w})\equiv\mathbf{Q}(\vec{w})\mathbf{U}(\vec{w})$, we have $\widehat{\mathbfcal{H}}_\text{m}(\vec{w})=\mathbf{V}(\vec{w})\mathbf{\Lambda}_\mathbf{T}(\vec{w}) \mathbf{V}^\top(\vec{w})$. Applying the Sherman--Morrison--Woodbury formula \cite{meyer2000matrix} and the cyclic property of the trace to \cref{aoptest}, we obtain \begin{equation} \aoptrand{(\vec{w})}= -\trace{\mathbf{D}_\mathbf{T}(\vec{w}) \mathbf{V}^\top(\vec{w})\mathbf{Z} \mathbf{V}(\vec{w})}, \label{expandedobj} \end{equation} where $\mathbf{D}_\mathbf{T}(\vec{w})=\mathbf{\Lambda}_\mathbf{T}(\vec{w})(\mathbf{I}+\mathbf{\Lambda}_\mathbf{T}(\vec{w}))^{-1}$. To simplify notation, the dependence of $\vec{w}$ is suppressed for the operators used in computing the estimators for the remainder of the article; however, the notation is retained for $\mathbfcal{H}_\text{m}(\vec{w})$ and $\widehat{\mathbfcal{H}}_\text{m}(\vec{w})$. Next, we describe computation of the gradient estimator \cref{estgrad}. Here we assume $n_sn_t \leq n$; the extension to the case $ n_sn_t > n$ is straightforward and is omitted. Again, using the Woodbury formula and cyclic property of the trace, we rewrite \cref{estgrad} as \begin{equation} \gradaoptrand{(\vec{w};\ell)} = -\trace{{\mathbfcal{F}}(\mathbf{I} - \mathbf{V}\mathbf{D}_\mathbf{T}\mathbf{V}^\top)\mathbf{Z}(\mathbf{I} - \mathbf{V}\mathbf{D}_\mathbf{T}\mathbf{V}^\top){\mathbfcal{F}}^\top\mathbf{E}_j^\text{noise}} \label{gradexpand} \end{equation} for $j=1,\dots,n_s$. 
Expanding this expression, we obtain \begin{equation} \label{alt} \begin{aligned} \widehat{\partial_j\Phi}_\text{aopt}(\vec{w}) &= -\trace{\mathbf{Z}{\mathbfcal{F}}^\top\mathbf{E}_j^\text{noise}{\mathbfcal{F}}} + 2\trace{{\mathbfcal{F}}\mathbf{V}\mathbf{D}_\mathbf{T}\mathbf{V}^\top\mathbf{Z}{\mathbfcal{F}}^\top\mathbf{E}_j^\text{noise}} -\\ &\quad\trace{{\mathbfcal{F}}\mathbf{V}\mathbf{D}_\mathbf{T}\mathbf{V}^\top\mathbf{Z}\mathbf{V}\mathbf{D}_\mathbf{T}\mathbf{V}^\top{\mathbfcal{F}}^\top\mathbf{E}_j^\text{noise}}. \end{aligned} \end{equation} Note that the first term $s_j=-\trace{\mathbf{Z}{\mathbfcal{F}}^\top\mathbf{E}_j^\text{noise}{\mathbfcal{F}}} $ in \cref{alt} does not depend on the design $\vec{w}$, for $j=1,\dots,n_s$. As a result, this term can be precomputed and used in subsequent function evaluations. We expand $s_j$ to be \[\begin{aligned} s_j &=-\trace{\mathbf{Z}{\mathbfcal{F}}^\top\mathbf{E}_j^\text{noise}{\mathbfcal{F}}}\\ &=-\sum_{k=1}^{n_sn_t}\left({\mathbfcal{F}}^\top(\mathbf{E}_j^\text{noise})^{1/2}\widehat{\vec{e}}_k\right)^\top\mathbf{Z}\left({\mathbfcal{F}}^\top(\mathbf{E}_j^\text{noise})^{1/2}\widehat{\vec{e}}_k\right), \end{aligned}\] where $\widehat{\vec{e}}_k$ is the $k^\text{th}$ column of the identity matrix of size $n_sn_t$. Because there are only $n_t$ columns of $\mathbf{E}_j^\text{noise}$ with nonzero entries, the total cost to precompute $s_j$ for $j=1,\dots,n_s$ is $n_sn_t$ PDE solves. To compute the remaining terms in \cref{alt}, we exploit the fact that $\mathbf{V}$ has $\ell$ columns. Notice all the other occurrences of ${\mathbfcal{F}}$ and ${\mathbfcal{F}}^\top$ occur as a combination of ${\mathbfcal{F}}\mathbf{V}$ and ${\mathbfcal{F}}\mathbf{Z}\mathbf{V}$ (or of their transposes). Both of these terms require $\ell$ PDE solves to compute. As a result, the total cost to evaluate $\aoptrand{(\vec{w})}$ and $\widehat{\nabla\Phi}_\text{aopt}(\vec{w};\ell)$ is $4\ell$ PDE solves to apply \cref{alg:randsubspace} and $2\ell$ PDE solves to compute ${\mathbfcal{F}}\mathbf{V}$ and ${\mathbfcal{F}}\mathbf{Z}\mathbf{V}$. We detail the steps for computing our estimators for A-optimal criterion and its gradient in~\cref{alg:randobjgrad}. \begin{algorithm}[ht!] \caption{Randomized method for computing $\aoptrand{(\vec{w};\ell)}$ and $\widehat{\nabla \Phi}_\text{aopt}(\vec{w};\ell)$.} \begin{algorithmic}[1] \REQUIRE Target rank $k$, oversampling parameter $p \geq 0$, design $\vec{w}$, and $s_j$ for $j=1,\dots,n_s$. \ENSURE OED objective $\aoptrand{(\vec{w};\ell)}$ and gradient $\widehat{\nabla\Phi}_\text{aopt}(\vec{w};\ell)$. \STATE Apply \cref{alg:randsubspace} with $\ell = k + p$ and $q=1$ to obtain $\mathbf{T} \in \mathbb{R}^{\ell \times \ell}$ and $\mathbf{Q} \in \mathbb{R}^{n\times \ell}$. \STATE Compute eigendecomposition $[\mathbf{U},\mathbf{\Lambda}_\mathbf{T}]$ of $\mathbf{T}$. Let $\mathbf{D}_\mathbf{T}=\mathbf{\Lambda}_\mathbf{T}(\mathbf{I}+\mathbf{\Lambda}_\mathbf{T})^{-1}$. \FOR{$i=1$ to $\ell$} \STATE Compute $\vec{v_i} \!=\! \mathbf{Q} \vec{u_i}$, where $\vec{u_i}$ are the columns of $\mathbf{U}$. \ENDFOR \STATE Compute $$\aoptrand{(\vec{w};\ell)} = -\sum_{i=1}^\ell d_i\vec{v}_i^\top \mathbf{Z}\vec{v}_i,$$ where $d_i$ is the $i^\text{th}$ diagonal of $\mathbf{D}_\mathbf{T}$. \FOR{$i=1$ to $\ell$} \STATE Compute $\vec{a}_i={\mathbfcal{F}}\vec{v}_i$ and $\vec{b}_i={\mathbfcal{F}}\mathbf{Z}\vec{v}_i$. 
\ENDFOR \FOR{$j=1$ to $n_s$} \STATE Compute \[ \widehat{\partial_j \Phi}_\text{aopt}(\vec{w};\ell) = s_j+2\sum_{i=1}^\ell d_i\vec{b}_i^\top\mathbf{E}_j^\text{noise}\vec{a}_i - \sum_{i=1}^\ell \sum_{k=1}^\ell d_id_k\vec{a}_i^\top\mathbf{E}_j^\text{noise}\vec{a}_k\vec{v}_k^\top\mathbf{Z}\vec{v}_i. \] \ENDFOR \STATE Return $\aoptrand{(\vec{w};\ell)}$ and $\widehat{\nabla\Phi}_\text{aopt}(\vec{w};\ell)=[\widehat{\partial_1 \Phi}_\text{aopt}(\vec{w};\ell),\dots,\widehat{\partial_{n_s} \Phi}_\text{aopt}(\vec{w};\ell)]^\top$. \end{algorithmic} \label{alg:randobjgrad} \end{algorithm} \subsubsection*{Alternative approaches and summary of computational cost} A closely related variation of \cref{alg:randobjgrad} is obtained by replacing step 1 of the algorithm (i.e., randomized subspace iteration) by the solution of an eigenvalue problem to compute the dominant eigenvalues of $\mathbfcal{H}_\text{m}(\vec{w})$ ``exactly''. We refer to this method as Eig-$k$, where $k$ is the target rank of $\mathbfcal{H}_\text{m}(\vec{w})$. This idea was explored for computing Bayesian D-optimal designs in \cite{alexanderian2018dopt}. The resulting cost is similar to that of the randomized method: it would cost $\mathcal{O}(k)$ PDE solves per iteration to compute the spectral decomposition of $\mathbfcal{H}_\text{m}(\vec{w})$, plus $\min\{n_sn_t,n\}$ PDE solves to precompute $s_j$ for $j=1,\dots,n_s$. While both the randomized and Eig-$k$ methods provide a viable scheme for computing the A-optimal criterion, our randomized method can exploit parallelism to lower computational costs: each matrix-vector application with $\mathbfcal{H}_\text{m}(\vec{w})$ in \cref{alg:randsubspace} can be computed in parallel. However, if accurate eigenpairs of $\mathbfcal{H}_\text{m}(\vec{w})$ are of importance to the problem, one can choose to use the Eig-$k$ approach, at the cost of solving a more challenging eigenvalue problem. Another possibility, suitable for problems where the forward model does not depend on the design (as is the case in the present work), is to precompute a low-rank SVD of ${\mathbfcal{F}}$, which can then be applied as needed to compute the A-optimal criterion and its gradient. This \emph{frozen forward operator approach} has been explored in \cite{a14infinite,HaberMagnantLuceroEtAl12} for the A-optimal criterion and in \cite{alexanderian2018dopt} for the D-optimal criterion. The PDE cost of precomputing a low-rank approximation of ${\mathbfcal{F}}$ is $\mathcal{O}(k)$, with $k$ indicating the target rank. The frozen method is beneficial because no additional PDE solves are required when applying \cref{alg:randobjgrad}; however, this approach is not well suited to problems where ${\mathbfcal{F}}$ depends on $\vec{w}$, nor can the modeling errors associated with the PDE solves be controlled in subsequent evaluations without constructing another operator. Finally, if the problem size is not too large (i.e., in small-scale applications), one could explicitly construct the forward operator ${\mathbfcal{F}}$. This enables exact (up to floating point errors) computation of the objective and its gradient, at an upfront cost of $\min\{n_sn_t,n\}$ PDE solves. We summarize the computational cost of \cref{alg:randobjgrad} along with the other alternatives mentioned above in \cref{table:cost}. \begin{table}[h!]
\begin{center} \begin{tabular}{c|c|c|c} Method & $\Phi_\text{aopt}(\vec{w})$ and $\nabla\Phi_\text{aopt}(\vec{w})$ & Precompute & Storage Cost\\ \hline ``Exact'' & $-$ & $\min\{n_sn_t,n\}$ & $nn_sn_t$\\ Frozen & $-$ & $\mathcal{O}(k)$ & $(2n+1)k$\\ Eig-$k$ & $\mathcal{O}(k)$ & $\min\{n_sn_t,n\}$ & $n_s$\\ Randomized & $2(q+2)(k+p)$ & $\min\{n_sn_t,n\}$ & $n_s$ \end{tabular} \caption{Computational cost, measured in terms of PDE solves, of different methods for computing $\Phi_\text{aopt}(\vec{w})$ and $\nabla\Phi_\text{aopt}(\vec{w})$. Typically, $q=1$ in \cref{alg:randsubspace} is sufficient.} \label{table:cost} \end{center} \end{table} To summarize, the randomized method for computing the OED objective and its gradient presents several attractive features: it is well suited to large-scale applications; it is matrix-free, simple to implement, and simple to parallelize; and it exploits the low-rank structure in the inverse problem. Moreover, as is the case with the Eig-$k$ approach, the performance of the randomized subspace iteration does not degrade as the dimension of the parameter increases due to mesh refinement. This is because the randomized subspace iteration relies on spectral properties of the prior-preconditioned data misfit Hessian---a problem structure that is independent of discretization. \subsection{Error analysis}\label{ssec:error} Here we analyze the error associated with our estimators computed using \cref{alg:randobjgrad}. For fixed $\vec{w}\in[0,1]^{n_s}$, since $\mathbfcal{H}_\text{m}(\vec{w}) \in \mb{R}^{n\times n}$ is symmetric positive semidefinite, we can order its eigenvalues as $\lambda_1\geq\lambda_2\geq\dots\geq\lambda_n\geq 0$. Assuming that $\lambda_1, \ldots, \lambda_k$ are the dominant eigenvalues of $\mathbfcal{H}_\text{m}(\vec{w})$, we define $\mathbf{\Lambda}_1 = \mathsf{diag}\,(\lambda_1,\dots,\lambda_k)$ and $\mathbf{\Lambda}_2= \mathsf{diag}\,(\lambda_{k+1},\dots,\lambda_n)$, and we assume that the eigenvalue ratio satisfies $$\gamma_k = \|\mathbf{\Lambda}_2\|_2\|\mathbf{\Lambda}_1^{-1}\|_2 = \frac{\lambda_{k+1}}{\lambda_k} < 1.$$ We now present the error bounds for the objective function and its gradient. To this end, we define the constant $C$ as \begin{equation} C \equiv \frac{e^2(k+p)}{(p+1)^2}\left( \frac{1}{2\pi(p+1)}\right)^{\frac{2}{p+1}} \left( \mu + \sqrt{2}\right)^2 \left( \frac{p+1}{p-1}\right), \label{constant} \end{equation} with $r = \text{rank}(\mathbfcal{H}_\text{m}(\vec{w}))$ and $\mu \equiv \sqrt{r-k} + \sqrt{k+p}$. \begin{theorem} \label{objerror} Let $\aoptrand{(\vec{w};\ell)}$ and $\widehat{\nabla\Phi}_\text{aopt}(\vec{w};\ell)$ be approximations of the A-optimal objective function $\aopt{(\vec{w})}$ and its gradient $\nabla\aopt{(\vec{w})}$, respectively, computed using \cref{alg:randobjgrad} for fixed $\vec{w}\in[0,1]^{n_s}$. Recall that $k$ is the target rank of $\mathbfcal{H}_\text{m}(\vec{w})$, $p \geq 2$ is the oversampling parameter such that $k+p \leq n$, and $q \geq 1$ is the number of subspace iterations. Assume that $\gamma_k < 1$. Then, with $f(x) = x/(1+x)$, \begin{equation}\label{equ:aopt_obj_bound} \expectation{|\aopt{(\vec{w})}-\aoptrand{(\vec{w};\ell)}|}\leq \|\mathbf{Z}\|_2\left(\trace{f(\mathbf{\Lambda}_2)} + \trace{f(\gamma_k^{2q-1}C\mathbf{\Lambda}_2)}\right).
\end{equation} Furthermore, with $\mathbf{P}_j = {\mathbfcal{F}}^\top\mathbf{E}_j^\text{noise} {\mathbfcal{F}}$, for $j=1,\dots, n_s$, \begin{equation}\label{equ:aopt_grad_bound} \expectation{|\partial_j \aopt{(\vec{w})}-\partial_j \aoptrand{(\vec{w};\ell)}|}\leq \> 2\|\mathbf{Z}\|_2\|\mathbf{P}_j \|_2\left(\trace{f(\mathbf{\Lambda}_2)} + \trace{f(\gamma_k^{2q-1}C\mathbf{\Lambda}_2)}\right). \end{equation} \end{theorem} \begin{proof} See \cref{proofobjerror}. \end{proof} In light of \cref{objerror}, the estimators are exact when the target rank equals the rank of $\mathbfcal{H}_\text{m}(\vec{w})$, in which case $\mathbf{\Lambda}_2 = \mathbf{0}$ and the bounds vanish. If the eigenvalues decay rapidly, the bounds suggest that the estimators are accurate. Recall that $\text{rank}(\mathbfcal{H}_\text{m}(\vec{w}))\leq \min\{n_sn_t,n\}$ is the number of nonzero eigenvalues. Consequently, the bounds are independent of the discretization dimension $n$. \section{An optimization framework for finding binary designs}\label{sec:reweightl1} We seek A-optimal designs by solving an optimization problem of the form \begin{equation} \label{opt_relaxed} \min_{\vec{w}\in \mathcal{W}} \Phi(\vec{w})+\gamma P(\vec{w}), \quad \mathcal{W} = [0, 1]^{n_s}, \end{equation} where the design criterion $\Phi$ is either the A-optimal criterion $\aopt(\vec{w})$ or the modified A-optimal criterion $\moda(\vec{w})$ (see \cref{sec:mod_a}). In the previous sections, we laid out an efficient framework for computing accurate approximations to the A-optimal criterion and its gradient. We now discuss the choice of the penalty term $P$ and the algorithm for solving the optimization problem. The penalty term must promote two properties: sparsity, measured by the number of nonzero entries of the design vector, and binary designs, i.e., design vectors whose entries are either $1$ or $0$. One possibility for the penalty function is the $\ell_0$-``norm'', $P_{\ell_0}(\vec{w}) = \| \vec{w} \|_0$, which measures the number of nonzero entries in the design. However, the resulting optimization problem is challenging to solve due to its combinatorial complexity. A common practice is to replace the $\ell_0$-``norm'' penalty by the $\ell_1$-norm, $P_{\ell_1}(\vec{w}) = \| \vec{w} \|_1$. The penalty function $P_{\ell_1}$ has desirable features: it is a convex penalty function that promotes sparsity of the optimal design vector $\vec{w}$. However, the resulting design is sparse but not necessarily binary, and additional post-processing in the form of thresholding is necessary to enforce binary designs. In what follows, we introduce a suitable penalty function that enforces both sparsity and binary designs, together with an algorithm for solving the OED optimization problem based on the MM approach. The resulting algorithm takes the form of a reweighted $\ell_1$-minimization algorithm~\cite{Candes2008sparsity}. \subsection{Penalty functions} We propose the following penalty function \begin{equation} P_{\epsilon}(\vec{w})=\sum_{i=1}^{n_s}\displaystyle{\frac{|w_i|}{|w_i|+\epsilon}}, \quad \vec{w} \in \mathbb{R}^{n_s}, \label{pep} \end{equation} for a user-defined parameter $\epsilon>0$. This penalty function approximates $P_{\ell_0}$ for small values of $\epsilon$; however, as $\epsilon$ becomes smaller, the corresponding optimization problem becomes harder. To illustrate the choice of penalty functions, in~\cref{fig:norm} we plot $P_{0.05}$ along with $P_{\ell_0}$ and $P_{\ell_1}$, with $n_s = 1$.
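As a small illustration (a sketch, not part of our code base), the qualitative comparison in \cref{fig:norm} can be reproduced by evaluating the three penalties for a single weight swept over $[0,1]$:
\begin{verbatim}
import numpy as np

def p_eps(w, eps):
    # P_eps(w) = sum_i |w_i| / (|w_i| + eps): smooth surrogate for the l0-"norm"
    w = np.abs(np.asarray(w, dtype=float))
    return float(np.sum(w / (w + eps)))

def p_l1(w):
    return float(np.sum(np.abs(w)))

def p_l0(w):
    return float(np.count_nonzero(w))

for w in [0.0, 0.05, 0.25, 1.0]:
    print(w, p_l0([w]), p_l1([w]), round(p_eps([w], 0.05), 3))
\end{verbatim}
For $\epsilon = 0.05$, a weight of $w = 0.25$ already incurs a penalty of roughly $0.83$, close to the $\ell_0$ value of $1$, whereas the $\ell_1$ penalty is only $0.25$; this steepness near the origin is what discourages small intermediate weights.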
Using $P_\epsilon$ in the OED problem leads to the optimization problem, \begin{equation} \min_{\vec{w}\in\mathcal{W}}\Phi(\vec{w})+\gamma P_\epsilon(\vec{w}). \label{eqn:pepsilon} \end{equation} In \cref{eqn:pepsilon}, the absolute values in definition of $P_\epsilon$ can be dropped since we limit the search for optimal solutions in ${\mathcal{W}}$. Since $P_{\epsilon}(\vec{w})$ is concave, \cref{eqn:pepsilon} is a non-convex optimization problem. To tackle this, we adopt the majorization-minimization (MM) approach. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.5\textwidth]{figs/penalty_new.pdf} \end{center} \caption{Different choices of penalty functions with ${n_s} = 1$.} \label{fig:norm} \end{figure} \subsection{MM approach and reweighted $\ell_1$ algorithm} The idea behind the MM approach is to solve a sequence of optimization problems whose solutions converge to that of the original problem~\cite{hl2004MMAlgorithms,lange2016mm}. This sequence is generated by a carefully constructed surrogate that satisfies two properties---the surrogate must majorize the objective function for all values, and the surrogate must match the objective function at the current iterate. More specifically, suppose \[ J(\vec{w}) = \Phi(\vec{w}) + \gamma \sum_{i=1}^{n_s}\frac{w_i}{w_i+\epsilon}. \] Then the surrogate function $g(\vec{w}|\vec{w}^{(m)})$ at the current iterate $\vec{w}^{(m)}$ must satisfy \[ \begin{aligned} g(\vec{w}|\vec{w}^{(m)}) &\geq J(\vec{w}) \quad \forall\vec{w} \in \mathcal{W}, \\ g(\vec{w}^{(m)}|\vec{w}^{(m)}) &= J(\vec{w}^{(m)}). \end{aligned} \] Granted the existence of this surrogate function, to find the next iterate $\vec{w}^{(m+1)}$ we solve the optimization problem \begin{equation}\label{eqn:next1} \vec{w}^{(m+1)} = \argmin_{\vec{w} \in \mathcal{W}} g(\vec{w}|\vec{w}^{(m)}). \end{equation} To show the objective function decreases at the next iterate, observe that the next iterate $\vec{w}^{(m+1)}$ stays within the feasible region and use the two properties of the surrogate function \[ J(\vec{w}^{(m+1)}) \leq g(\vec{w}^{(m+1)}|\vec{w}^{(m)}) \leq g(\vec{w}^{(m)}|\vec{w}^{(m)})= J(\vec{w}^{(m)}). \] To construct this surrogate function, we use the fact that a concave function is below its tangent~\cite[Equation (4.7)]{lange2016mm}. Applying this to our concave penalty $P_\epsilon(\vec{w})$, we have \[ P_\epsilon(\vec{w}) \leq P_\epsilon(\vec{w}^{(m)}) + (\vec{w} - \vec{w}^{(m)})^\top \nabla_{\vec{w}} P_\epsilon(\vec{w}^{(m)}), \quad \text{for all } \vec{w} \in \mathcal{W}. \] With this majorization relation, we define the surrogate function to be \[ g(\vec{w}|\vec{w}^{(m)}) = \Phi(\vec{w}) + \gamma\left(P_\epsilon(\vec{w}^{(m)}) + (\vec{w} - \vec{w}^{(m)})^\top \nabla_{\vec{w}} P_\epsilon(\vec{w}^{(m)})\right)\] By dropping the terms that do not depend on $\vec{w}$, it can be readily verified that~\cref{eqn:next1} can be replaced by the equivalent problem \begin{equation}\label{eqn:next2} \begin{aligned} \vec{w}^{(m+1)} = &\> \argmin_{\vec{w} \in \mathcal{W}} \Phi(\vec{w}) + \gamma \sum_{i=1}^{n_s}\frac{\epsilon w_i}{(w_i^{(m)}+\epsilon)^2} \\ = & \> \argmin_{\vec{w} \in \mathcal{W}} \Phi(\vec{w}) + \gamma \| \vec{R}(\vec{w}^{(m)}) \vec{w}\|_1 , \end{aligned} \end{equation} where $\vec{R}(\vec{w}) = \text{diag}\left(\frac{\epsilon}{(w_1+\epsilon)^2},\dots,\frac{\epsilon}{(w_{n_s}+\epsilon)^2}\right)$. We see that \cref{eqn:next2} is of the form of a reweighted $\ell_1$-optimization problem. The details of the optimization procedure are given in \cref{alg:penalty}. 
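The structure of this MM iteration can be summarized in the following compact sketch; the routine \texttt{solve\_subproblem}, standing in for any convex solver of the weighted $\ell_1$ subproblem over $\mathcal{W}$ (with $\Phi$ and $\gamma$ folded into it), is an assumption of the sketch rather than an actual interface. The formal procedure is stated in \cref{alg:penalty} below.
\begin{verbatim}
import numpy as np

def reweighted_l1(solve_subproblem, n_s, eps, max_iter=30, tol=1e-6):
    # solve_subproblem(r) is assumed to return
    #   argmin_{w in [0,1]^{n_s}}  Phi(w) + gamma * sum_i r_i * w_i,
    # e.g. via an interior-point method.
    r = np.ones(n_s)                        # first pass: plain l1 penalty
    w_prev = np.full(n_s, np.inf)
    for _ in range(max_iter):
        w = solve_subproblem(r)             # convex weighted-l1 subproblem
        if np.linalg.norm(w - w_prev) <= tol:
            break
        r = eps / (np.abs(w) + eps) ** 2    # reweighting from current iterate
        w_prev = w
    return w
\end{verbatim}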
(We remark that, in \cref{alg:penalty}, other metrics measuring the difference between successive weight vectors can be used in step 3.) \begin{algorithm}[!ht] \caption{Reweighted $\ell_1$ Algorithm.} \begin{algorithmic}[1] \REQUIRE Initial guess $\vec{w}^{(0)} \in \mathbb{R}^{n_s}$, stopping tolerance tol, penalty parameters $\gamma,\epsilon \geq 0$. \ENSURE Optimal design $\vec{w}^*\in \mathbb{R}^{n_s}$. \STATE Initialize $m=1$. \STATE Compute $$\vec{w}^{(1)}=\argmin_{\vec{w}\in \mathcal{W}}\Phi(\vec{w})+\gamma\|\vec{w}\|_1.$$ \WHILE{$m<m_{max}$ and $\|\vec{w}^{(m)}-\vec{w}^{(m-1)}\|_2>\text{tol}$} \STATE Update $m=m+1$. \STATE Compute $$\vec{w}^{(m)}=\argmin_{\vec{w}\in\mathcal{W}}\Phi(\vec{w})+\gamma\sum_{i=1}^{n_s}r_i\cdot w_i.$$ \STATE Update $$r_i=\frac{\epsilon}{(|w_i^{(m-1)}|+\epsilon)^2}, \quad i=1,\ldots,n_s.$$ \ENDWHILE \STATE Return $\vec{w}^{(m)}=\vec{w}^*.$ \end{algorithmic} \label{alg:penalty} \end{algorithm} We conclude this section with a few remarks regarding this algorithm. In our application, $\Phi(\vec{w})$ is convex; therefore, each subproblem to update the design weights is also convex. To initialize the reweighted $\ell_1$ algorithm, we start with the weights $r_i = 1$, $i = 1, \ldots, {n_s}$. This ensures that, in the first step, we compute the solution of the $\ell_1$-penalized optimization problem. The subsequent reweighted $\ell_1$ iterations further promote binary designs. To solve the subproblems in each reweighted $\ell_1$ iteration we use an interior point algorithm; however, any appropriate convex optimization solver may be used. In \cref{sec:numerics}, we discuss our choice of $\epsilon$ for our application. It is also worth mentioning that, besides the penalty function $P_\epsilon$ used above, another possible choice is $\sum_{i=1}^{n_s}\arctan(|w_i|/\epsilon)$, which yields the weights $\frac{1}{|w_i|^2 + \epsilon^2}$. \section{Modified A-optimal criterion}\label{sec:mod_a} Motivated by reducing the computational cost of computing A-optimal designs, in this section we introduce a \emph{modified A-optimality} criterion. As mentioned in~\cite{ChalonerVerdinelli95}, we can consider a weighted A-optimal criterion $\trace{\boldsymbol\Gamma \mathbf{\Gamma}_\text{post} (\vec{w})}$, where $\boldsymbol\Gamma$ is a positive semidefinite weighting matrix. Similar to \cref{estobj}, we work with $\trace{ \boldsymbol{\Gamma}(\mathbf{\Gamma}_\text{post} - \prior{})}$, since the term $\trace{ \boldsymbol{\Gamma}(\prior{})}$ is independent of the weights $\vec{w}$. By choosing $\boldsymbol\Gamma = \prior{-1}$, we obtain the modified A-optimal criterion \begin{equation} \moda{(\vec{w})}\equiv \trace{\left(\mathbf{I}+\mathbfcal{H}_\text{m}(\vec{w})\right)^{-1} - \mathbf{I}}. \label{moda} \end{equation} Note that the expression for $\moda{(\vec{w})}$ remains meaningful in the infinite-dimensional limit. This can be seen by noting that \[ \moda{(\vec{w})} = \trace{\big(\mathbf{I}+\mathbfcal{H}_\text{m}(\vec{w})\big)^{-1} - \mathbf{I}} = -\trace{\mathbfcal{H}_\text{m}(\vec{w})\big(\mathbf{I} + \mathbfcal{H}_\text{m}(\vec{w})\big)^{-1}}, \] and using the fact that in the infinite-dimensional limit, for every $\vec{w} \in [0, 1]^{n_s}$, $\mathbfcal{H}_\text{m}(\vec{w})$ is trace class and $\big(\mathbf{I} + \mathbfcal{H}_\text{m}(\vec{w})\big)^{-1}$ is a bounded linear operator.
We show in our numerical results that the modified A-optimality criterion can be useful in practice, if a cheaper alternative to the A-optimal criterion is desired, and minimizing $\moda{}$ can provide designs that lead to small posterior uncertainty. \subsection{Derivation of estimators} Here we seek to improve the efficiency of the modified A-optimal criterion by computing a randomized estimator for the modified A-optimal criterion and its gradient. As in the previous derivation of our estimators, we replace $\mathbfcal{H}_\text{m}(\vec{w})$ by its low rank approximation to obtain the randomized estimator for the modified A-optimal criterion \begin{equation} \moda{(\vec{w})}\approx\trace{(\mathbf{I}+\widehat{\mathbfcal{H}}_\text{m}(\vec{w}))^{-1}-\mathbf{I}} \equiv \modarand{(\vec{w};\ell)}\label{estmoda}. \end{equation} Similarly, we use the same low-rank approximation ${\widehat{\mathbfcal{H}}_\text{m}}(\vec{w})$ in the gradient of the modified A-optimal criterion to obtain the randomized estimator \begin{equation}\widehat{\partial_j\Phi}_\text{mod}(\vec{w};\ell) = -\trace{(\mathbf{I} + {\widehat{\mathbfcal{H}}_\text{m}}(\vec{w}))^{-1}{\mathbfcal{F}}^T\mathbf{E}_j^\text{noise}{\mathbfcal{F}}(\mathbf{I} + {\widehat{\mathbfcal{H}}_\text{m}}(\vec{w}))^{-1}}, \label{estmodagrad} \end{equation} for $j=1,\dots,n_s$. \subsection{Computational procedure and cost} Using similar techniques as those described in \cref{sec:aopt_estimators}, we can write the estimator for the modified A-optimal criterion in terms of the eigenvalues of $\mathbf{T}$: \begin{equation} \modarand{(\vec{w};\ell)}=-\trace{\mathbf{D}_\mathbf{T}}. \label{modaexpand} \end{equation} Moreover, \begin{equation}\label{eq:modgradient} \begin{aligned} \widehat{\partial_j\Phi}_\text{mod}(\vec{w}) &= -\trace{{\mathbfcal{F}}^\top\mathbf{E}_j^\text{noise}{\mathbfcal{F}}} + 2\trace{{\mathbfcal{F}}\mathbf{V}\mathbf{D}_\mathbf{T}\mathbf{V}^\top{\mathbfcal{F}}^\top\mathbf{E}_j^\text{noise}} -\\ &\quad\trace{{\mathbfcal{F}}\mathbf{V}\mathbf{D}_\mathbf{T}^2\mathbf{V}^\top{\mathbfcal{F}}^\top\mathbf{E}_j^\text{noise}}. \end{aligned} \end{equation} with $\mathbf{D}_\mathbf{T}=\mathbf{\Lambda}_\mathbf{T}(\mathbf{I}+\mathbf{\Lambda}_\mathbf{T})^{-1}$ and $\mathbf{V}$ defined as in \cref{sec:aopt_estimators}. The procedure for computing the estimators for the modified A-optimal criterion follows the steps in \cref{alg:randobjgrad} closely. Instead of presenting an additional algorithm, we provide an overview of the computation of the estimators for the modified A-optimal criterion along with the associated computational cost in terms of the number of PDE solves. To evaluate the estimators for the modified A-optimal criterion, the only precomputation we perform is to obtain $$s_j=-\trace{{\mathbfcal{F}}^\top\mathbf{E}_j^\text{noise}{\mathbfcal{F}}}, \quad j = 1, \ldots, {n_s}.$$ This term appears in the estimator for the gradient and accumulates a total cost of $n_sn_t$ PDE solves. The remaining terms in the estimators depend on a design $\vec{w}$ and, in particular, the eigenvalues and eigenvectors of ${\widehat{\mathbfcal{H}}_\text{m}}(\vec{w})$. As with the estimators for the A-optimal criterion, the eigenvalues and eigenvectors of $\widehat{\mathbfcal{H}}_\text{m}(\vec{w})$ are obtained by \cref{alg:randsubspace}. Recall that the cost associated with \cref{alg:randsubspace}, with $q = 1$, is $4\ell$ PDE solves. Once the eigenvalues and eigenvectors are computed, \cref{estmoda} can be evaluated without any additional PDE solves. 
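Indeed, combining \cref{modaexpand} with $\mathbf{D}_\mathbf{T}=\mathbf{\Lambda}_\mathbf{T}(\mathbf{I}+\mathbf{\Lambda}_\mathbf{T})^{-1}$, and denoting by $\tilde\lambda_1,\dots,\tilde\lambda_\ell$ the eigenvalues of $\mathbf{T}$, the estimator reduces to the scalar sum
\[
\modarand{(\vec{w};\ell)} = -\trace{\mathbf{D}_\mathbf{T}} = -\sum_{i=1}^{\ell}\frac{\tilde\lambda_i}{1+\tilde\lambda_i},
\]
which involves no further applications of ${\mathbfcal{F}}$ or its adjoint.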
The remaining computational effort occurs in the evaluation of the gradient. Because of our modification to the A-optimal criterion, the expression \cref{eq:modgradient} is efficiently evaluated by computing ${\mathbfcal{F}}\mathbf{V}$. Therefore, the total cost of evaluating the estimators for the modified A-optimal criterion and its gradient is $5\ell$ PDE solves. From this computational cost analysis, we see that the modified A-optimal estimators require $\ell$ fewer PDE solves than the A-optimal estimators. \subsection{Error analysis} We now quantify the absolute error of our estimators with the following theorem. \begin{theorem} Let $\modarand{(\vec{w};\ell)}$ and $\widehat{\nabla\Phi}_\text{mod}(\vec{w};\ell)$ be the randomized estimators approximating the modified A-optimal objective function $\moda{(\vec{w})}$ and its gradient $\nabla\moda{(\vec{w})}$, respectively. Using the notation and assumptions of \cref{objerror}, for fixed $\vec{w}\in[0,1]^{n_s}$, $$\expectation{|\moda{(\vec{w})}-\modarand{(\vec{w};\ell)}|}\leq \trace{f(\mathbf{\Lambda}_2)} + \trace{f(\gamma_k^{2q-1}C\mathbf{\Lambda}_2)},$$ and for $j=1,\dots,n_s$, $$\expectation{|\partial_j \moda{(\vec{w})}-\widehat{\partial_j \Phi}_\text{mod}(\vec{w};\ell)|}\leq 2\|\mathbf{P}_j \|_2\left(\trace{f(\mathbf{\Lambda}_2)} + \trace{f(\gamma_k^{2q-1}C\mathbf{\Lambda}_2)}\right),$$ where $C$ is defined in \cref{constant}. \label{modobjerror} \end{theorem} \begin{proof} See \cref{proofmodobjerror}. \end{proof} Notice that the bounds presented in \cref{objerror} and \cref{modobjerror} differ by a factor of $\|\mathbf{Z}\|_2$: since the modified A-optimal criterion removes one application of $\prior{}$ from the computation of the A-optimal criterion, the bounds for the modified A-optimal criterion no longer carry the factor of $\|\mathbf{Z}\|_2$. \section{Numerical results}\label{sec:numerics} In this section, we present numerical results that test various aspects of the proposed methods. We begin with a brief description of the inverse advection-diffusion problem used to illustrate the proposed OED methods, in \cref{sec:model_problem}. The setup of our model problem is adapted from that in~\cite{a14infinite}, where further details about the forward and inverse problems can be found. In \cref{sec:model_problem}, we also describe the numerical methods used for solving the forward problem, as well as the optimization solver for the OED problem. In \cref{subsec:aoptnumerics}, we test the accuracy of our randomized estimators and illustrate our error bounds. Then, in \cref{subsec:rwl1numerics}, we investigate the performance of our proposed reweighted $\ell_1$-optimization approach. Next, we utilize the proposed optimization framework to compute A-optimal designs in \cref{subsec:computeOED}. Finally, in \cref{subsec:aoptvsmod}, we compare A-optimal sensor placements with those computed by minimizing the modified A-optimal criterion. \subsection{Model problem and solvers}\label{sec:model_problem} We consider a two-dimensional time-dependent advection-diffusion equation \[\begin{aligned} u_t-\kappa\Delta u+\vec{v}\cdot\nabla u = 0 & \hspace{2cm}\text{in }\mathcal{D}\times(0,T),\\ u(\cdot,0)=m & \hspace{2cm}\text{in }\mathcal{D},\\ \kappa\nabla u\cdot\vec{n} = 0 & \hspace{2cm}\text{on }\partial\mathcal{D}\times(0,T), \end{aligned} \] which models the transport of contaminants (e.g., in the atmosphere or the subsurface). Here $\kappa$ is the diffusion coefficient and is taken to be $\kappa = 0.01$.
The velocity field $\vec{v}$ is computed by solving the steady-state Navier--Stokes equations, as in~\cite{a14infinite}. The domain $\mathcal{D}$, depicted in \cref{domain}, is the unit square in $\mathbb{R}^2$ with the gray rectangles, modeling obstacles/buildings, removed. The boundary $\partial\mathcal{D}$ is the union of the outer boundary and the boundaries of the obstacles. The PDE is discretized using linear triangular continuous Galerkin finite elements in space and implicit Euler in time. We let the final simulation time be $T=5$. The inverse problem involves reconstructing the initial state $m$ from space-time point measurements of $u(\vec{x},t)$. We consider ${n_s} = 109$ candidate sensor locations distributed throughout the domain, indicated by hollow squares in \cref{domain}. Measurement data is collected from a subset of these locations at three observation times---$t=1,2,$ and $3.5$. To simulate noisy observations, 2\% noise is added to the simulated data. Recall that in~\cref{sec:background} we defined our prior covariance operator to be a Laplacian-like operator. Following~\cite{a14infinite}, an application of the square root of the prior on $s\in L^2(\mathcal{D})$ is $v=\mathcal{A}^{-1}s$, which satisfies the following weak form: \begin{equation}\label{eq:prior} \int_\mathcal{D}\theta\nabla v\cdot\nabla p + \alpha vp\,d\vec{x} = \int_\mathcal{D} sp\,d\vec{x}\qquad\text{for every }p\in H^1(\mathcal{D}). \end{equation} Here $\theta$ and $\alpha$ control the variance and correlation length and are chosen to be $\alpha = 0.1$ and $\theta = 0.002$, respectively. The optimization solver used for the OED problem is a quasi-Newton interior point method. Specifically, to solve each subproblem of \cref{alg:penalty}, we use \textsc{MATLAB}'s interior point solver provided by the \verb|fmincon| function; a BFGS approximation to the Hessian is used for the line search. We use a vector of all ones, $\vec{1}\in\mathbb{R}^{n_s}$, as the initial guess for the optimization solver. \begin{figure}[!ht] \begin{center} \includegraphics[width = 0.5\textwidth]{figs/design.png} \end{center} \caption{Domain with 109 candidate sensor locations.} \label{domain} \end{figure} For the numerical experiments presented in this section, the random matrix $\boldsymbol{\Omega}$ in \cref{alg:randsubspace} is fixed during the optimization process. However, because of the randomness, the accuracy of the estimators, and the optimal design thus obtained, may vary with different realizations of $\boldsymbol{\Omega}$. In additional numerical experiments (not reported here), the stochastic nature of the estimators resulted in only minor variability (at most one or two sensor locations) in the optimal experimental designs for a modest value of $\ell$. On the other hand, if $\ell$ is sufficiently large, we observed that the same design was obtained with different realizations of $\boldsymbol\Omega$. \subsection{Accuracy of estimators} \label{subsec:aoptnumerics} Here we examine the accuracy of the randomized estimators with respect to $\ell$, the number of columns in the sampling matrix $\boldsymbol{\Omega}$ in the randomized subspace iteration algorithm.
Specifically, we compute \[ \begin{aligned} e_1(\ell)= \frac{ |\aopt(\vec{w}) - \aoptrand(\vec{w}; \ell)|}{|\aopt(\vec{w})|}, \quad & e_2(\ell)= \frac{ \|\nabla\aopt(\vec{w}) - \widehat{\nabla\Phi}_\text{aopt}(\vec{w}; \ell)\|_2} {\|\nabla\aopt(\vec{w})\|_2},\\ e_3(\ell)= \frac{ | \moda(\vec{w}) - \modarand(\vec{w}; \ell)|}{|\moda(\vec{w})|}, \quad & e_4(\ell)= \frac{ \|\nabla\moda(\vec{w}) - \widehat{\nabla\Phi}_\text{mod}(\vec{w}; \ell) \|_2} {\|\nabla\moda(\vec{w})\|_2}, \end{aligned} \] with $\vec{w}$ taken to be a vector of all ones; that is, with all sensors activated. We let $\ell$ vary from $17$ to $327$, because the rank of $\mathbfcal{H}_\text{m}(\vec{w})$ is no larger than the number of observations taken, $n_sn_t=327$. \cref{relerr6} illustrates the relative error in the estimators for the A-optimal criterion and its gradient (left) and the modified A-optimal criterion and its gradient (right), as $\ell$ is varied. We observe that the error decreases rapidly with increasing $\ell$; this illustrates the accuracy and efficiency of our estimators. Next, we consider the absolute error in the estimators for the objective function and compare it with the theoretical bounds derived in \cref{objerror,modobjerror}. In \cref{bounds}, we compare the absolute error in the estimators with the bounds from \cref{objerror} and \cref{modobjerror}. As before, we take $\vec{w}=[1,1,\dots,1]^\top\in\mathbb{R}^{n_s}$. We observe that our error bound captures the general trend in the error. Moreover, the error bound for the modified A-optimality is better since it does not have the additional factor of $\|\mathbf{Z}\|_2$. \begin{figure}[!ht] \begin{center} \begin{subfigure}{.45\textwidth} \begin{center} \includegraphics[width=2.3in]{figs/aopt109relerror2.pdf} \end{center} \end{subfigure} \begin{subfigure}{.45\textwidth} \begin{center} \includegraphics[width=2.3in]{figs/moda109relerror2.pdf} \end{center} \end{subfigure} \caption{Relative error of the randomized estimators for the A-optimal criterion (left) and for the modified A-optimal criterion (right) for varying $\ell$.} \label{relerr6} \end{center} \end{figure} \begin{figure}[!ht] \begin{center} \begin{subfigure}{.45\textwidth} \begin{center} \includegraphics[width=2.3in]{figs/aoptbound_finalized.pdf} \end{center} \end{subfigure} \begin{subfigure}{.45\textwidth} \begin{center} \includegraphics[width=2.3in]{figs/modbound_finalized.pdf} \end{center} \end{subfigure} \caption{Absolute error bound for $\widehat{\Phi}_\text{aopt}(\vec{w})$ from \cref{objerror} (left) and for $\widehat{\Phi}_\text{mod}(\vec{w})$ from \cref{modobjerror} (right) for varying $\ell$.} \label{bounds} \end{center} \end{figure} \subsection{Performance of the reweighted $\ell_1$ algorithm} \label{subsec:rwl1numerics} We now consider solving \cref{opt} with \cref{alg:randobjgrad} and \cref{alg:penalty}. We first consider the choice of the value of $\epsilon$ in \cref{eqn:next2}. The user-defined parameter $\epsilon$ controls the steepness of the penalty function at the origin; see \cref{fig:norm}. However, we observed that if the penalty function is too steep, the optimization solver took more iterations without substantially altering the optimal designs. We found $\epsilon = 1/2^8$ to be sufficiently small in our numerical experiments, and we keep this value fixed for the remainder of the experiments. We perform two numerical experiments examining the impact of changing $\ell$ and the penalty parameter $\gamma$ (in \cref{opt}).
In the first experiment, we fix the penalty coefficient at an experimentally determined value of $\gamma=3$, and vary $\ell$; the results are recorded in \cref{aopttable}. Notice when $\ell\geq127$, the objective function value evaluated with the optimal solution, the number of active sensors, and number of subproblem solves do not change. This suggests that the randomized estimators are sufficiently accurate with $\ell = 127$ and this yields a substantial reduction in computational cost. \begin{table}[h!] \begin{center} \begin{tabular}{c|c|c|c|c} $\ell$ & subproblem solves & function count & function value & $ns_\text{active}$ \\ \hline 57 & 10 & 395 & 39.0176 & 38\\ 67 & 9 & 610 & 44.7219 & 30\\ 77 & 9 & 790 & 44.5484 & 29\\ 87 & 9 & 756 & 44.1142 & 30\\ 127 & 9 & 1015 & 44.1139 & 30\\ 207 & 9 & 978 & 44.1139 & 30\\ 307 & 9 & 970 & 44.1139 & 30\\ \end{tabular} \caption{Number of subproblem solves \cref{eqn:next2}, function evaluations, and active sensors for varying $\ell$ with the reweighted $\ell_1$ algorithm and $n_s=109$.} \label{aopttable} \end{center} \end{table} The second experiment involves varying $\gamma$, which indirectly controls the number of sensors in computed designs, with $\ell$ kept fixed. Here we fix $\ell=207$, which corresponds to an accuracy on the order of $10^{-7}$ for the A-optimal criterion (cf. \cref{relerr6}). In \cref{rwl1stats}, we report the design weights sorted in descending order, as $\gamma$ varies. This shows that the reweighted $\ell_1$ algorithm indeed produces binary designs for a range of penalty parameters. We also notice in \cref{rwl1stats2} that as $\gamma$ increases the sparsity increases (i.e., the number of active sensors $ns_\text{active}$ decreases) and the number of function evaluations increases. The right panel compares the cost of solving an $\ell_1$-penalized problem for the corresponding penalty parameter $\gamma$. Since this problem is the first iterate of the reweighted $\ell_1$ algorithm we see that an additional cost is required to obtain binary designs and this cost increases with increasing $\gamma$ (i.e., more sparse designs). \begin{figure}[!ht] \begin{center} \includegraphics[width=0.5\textwidth]{figs/binary109.pdf} \end{center} \caption{Optimal designs as a result of varying $\gamma$ with the reweighted $\ell_1$ algorithm for the A-optimal criterion. We set $\ell=207$.} \label{rwl1stats} \end{figure} \begin{figure}[!ht] \begin{center} \begin{subfigure}{.49\textwidth} \begin{center} \includegraphics[width=1.0\textwidth]{figs/data_subsolves.pdf} \end{center} \end{subfigure} \begin{subfigure}{.5\textwidth} \begin{center} \includegraphics[width=1.0\textwidth]{figs/data_funccount.pdf} \end{center} \end{subfigure} \caption{The effect of varying $\gamma$ on the reweighted $\ell_1$ algorithm for the A-optimal criterion. We set $\ell=207$.} \label{rwl1stats2} \end{center} \end{figure} \subsection{Computing optimal designs} \label{subsec:computeOED} In \cref{optdesign}~(left), we report an A-optimal sensor placement obtained using our optimization framework, with $\ell=207$ and $\gamma = 5$; the resulting optimal sensor locations, with $18$ active sensors, are superimposed on the posterior standard deviation field. While the design is computed to yield a minimal average variance of the posterior distribution, it is also important to consider the mean of this distribution. 
For completeness, in \cref{optdesign} we show the ``true'' initial condition~(middle panel), used to generate synthetic data, and the mean of the resulting posterior distribution for the design with $18$ active sensors~(right panel). \begin{figure}[!ht] \begin{center} \begin{subfigure}{.32\textwidth} \begin{center} \includegraphics[width=1.0\textwidth]{figs/var_5.png} \end{center} \end{subfigure} \begin{subfigure}{.30\textwidth} \begin{center} \includegraphics[width=0.91\textwidth]{figs/true_map_vary_gamma.png} \end{center} \end{subfigure} \begin{subfigure}{.32\textwidth} \begin{center} \includegraphics[width=1.0\textwidth]{figs/map_5.png} \end{center} \end{subfigure} \caption{Standard deviation computed using the optimal design indicated by the gray circles (left). True initial condition (middle) and initial condition reconstruction (right). The optimal design was computed using $\ell=207$, the reweighted $\ell_1$ algorithm, and $\gamma = 5$.} \label{optdesign} \end{center} \end{figure} With the $18$-sensor design, we also illustrate the resulting uncertainty reduction by comparing the prior and posterior standard deviation fields; see \cref{priorvspost}. \begin{figure}[!ht] \begin{center} \begin{subfigure}{.49\textwidth} \begin{center} \includegraphics[width=1.0\textwidth]{figs/prior_sd.png} \end{center} \end{subfigure} \begin{subfigure}{.5\textwidth} \begin{center} \includegraphics[width=1.0\textwidth]{figs/post_sd.png} \end{center} \end{subfigure} \caption{Comparison of the prior standard deviation field (left) with the posterior standard deviation field (right) computed using the optimal design indicated by the gray circles.} \label{priorvspost} \end{center} \end{figure} We now compare the designs obtained using the reweighted $\ell_1$ algorithm and our estimators against designs chosen at random, illustrating the effectiveness of the proposed A-optimal design strategy. Recall that varying $\gamma$ allows us to obtain optimal designs with different numbers of active sensors. For each value of $\gamma$, we use \cref{alg:randobjgrad} and \cref{alg:penalty} to compute an optimal design. We then draw $15$ random designs with the same number of active sensors as the optimal design obtained using our algorithms. To enable a consistent comparison, we evaluate the exact A-optimal criterion $\Phi_\text{aopt}(\vec{w})$ at the computed optimal designs and the random ones; the results are reported in the left panel of~\cref{compare}. The values corresponding to the computed optimal designs are indicated as dots on the black solid line. The values obtained from the random designs are indicated by the squares. We note that the designs computed with the reweighted $\ell_1$ algorithm consistently outperform the random designs, as expected. This observation is emphasized in the right panel of~\cref{compare}. Here, we compare the computed optimal design (when $\gamma=3$) with 1500 randomly generated designs using the exact trace of the posterior covariance. Again, the computed design results in a lower true A-optimal value than the random designs. \begin{figure}[!ht] \begin{center} \begin{subfigure}{.49\textwidth} \begin{center} \includegraphics[width=1.0\textwidth]{figs/rwlvsrand.pdf} \end{center} \end{subfigure} \begin{subfigure}{.5\textwidth} \begin{center} \includegraphics[width=1.0\textwidth]{figs/30sensors1500_paper.pdf} \end{center} \end{subfigure} \caption{The true A-optimal criterion computed using the optimal and randomly generated designs.
The optimal designs were computed using the A-optimal criterion and the reweighted $\ell_1$ algorithm for different values of $\gamma$ (left). Comparison of the computed optimal design for $\gamma=3$ with 1500 randomly generated designs (right).} \label{compare} \end{center} \end{figure} \subsection{Comparing A-optimal and modified A-optimal designs} \label{subsec:aoptvsmod} Here we provide a quantitative comparison of sensor placements obtained by minimizing the A-optimal and modified A-optimal criteria using our proposed algorithms. Specifically, for various values of $\gamma$, we solve \cref{opt} with both the A-optimal and modified A-optimal estimators to obtain two sets of designs. By varying $\gamma$, the resulting designs obtained with the A-optimal and modified A-optimal estimators have different numbers of active sensors. Using both sets of designs, we evaluate the exact A-optimal criterion $\Phi_\text{aopt}(\vec{w})$; the resulting values are displayed in \cref{avsmod}. Observe that in all cases the computed A-optimal and modified A-optimal designs lead to similar levels of average posterior variance. This suggests that the modified A-optimal criterion could be used as a surrogate for the A-optimal criterion: it decreases the overall number of PDE solves and yields designs that result in values of the average posterior variance close to those produced by A-optimal designs. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.5\textwidth]{figs/aoptvsmod2.pdf} \end{center} \caption{Comparison of designs obtained by minimizing the (approximate) A-optimal and modified A-optimal criteria. For each design, we report the exact trace of the corresponding posterior covariance operator.} \label{avsmod} \end{figure} \section{Conclusion}\label{sec:conclusions} We have established an efficient and flexible computational framework for A-optimal design of experiments in large-scale Bayesian linear inverse problems. The proposed randomized estimators for the OED objective and its gradient are accurate, efficient, and simple to implement and parallelize. Specifically, the randomized estimators exploit the low-rank structure in the inverse problem; namely, the low-rank structure of the prior-preconditioned data misfit Hessian---a common feature of ill-posed inverse problems. Our reweighted $\ell_1$-minimization strategy is tailored to sensor placement problems, where finding binary optimal design vectors is desirable. We also presented the modified A-optimal criterion, which is cheaper to compute and can provide designs that, while sub-optimal with respect to the A-optimal criterion, offer a systematic means of obtaining sensor placements with small posterior uncertainty levels. Open questions that we seek to explore in future work include the adaptive determination of the target rank $k$ within the optimization algorithm, to further reduce computational costs while ensuring sufficiently accurate estimates of the OED objective and gradient. Another possible line of inquiry is to use different low-rank approximations, such as Nystr\"om's method, and to extend the randomized estimators to approximate traces of matrix functions. We also seek to incorporate the randomized estimators in a suitable optimization framework for Bayesian nonlinear inverse problems. \section*{Acknowledgments} We would like to thank Eric Chi for useful discussions regarding the MM algorithm.
This material is based upon work supported in part by the National Science Foundation (NSF) award DMS-1745654. \begin{appendix} \section{Proofs of bounds} \subsection{Trace of matrix function} In the proofs below, we use the Loewner partial ordering~\cite[Chapter 7.7]{HoJ13}; we briefly recapitulate some main results that will be useful in our proof. Let $\mathbf{A},\mathbf{B} \in \mathbb{R}^{n\times n}$ be symmetric positive definite; then $\mathbf{A} \preceq \mathbf{B}$ means that $\mathbf{B} - \mathbf{A}$ is positive semidefinite. For any $\mathbf{S} \in \mathbb{R}^{n\times m}$, it also follows that $\mathbf{S}^\top \mathbf{A} \mathbf{S} \preceq \mathbf{S}^\top \mathbf{B} \mathbf{S}$. Let $\mathbf{U}\mathbf{\Lambda} \mathbf{U}^\top$ be the eigendecomposition of $\mathbf{A}$. Then, $f(\mathbf{A}) = \mathbf{U} f(\mathbf{\Lambda})\mathbf{U}^\top$ and $\trace{f(\mathbf{A})} = \sum_{i=1}^n f(\lambda_i)$. If $f$ is monotonically increasing then $\trace{f(\mathbf{A})} \leq \trace{f(\mathbf{B})}$ since $\mathbf{A} \preceq \mathbf{B}$ implies $\lambda_i(\mathbf{A}) \leq \lambda_i(\mathbf{B})$ for $i=1,\dots,n$. The following bound allows us to bound the trace of a matrix function in terms of its diagonal subblocks. \begin{lem}\label{lem:trace} Let $$\mathbf{A} = \bmat{\mathbf{A}_{11} & \mathbf{A}_{12} \\ \mathbf{A}_{12}^\top & \mathbf{A}_{22}}$$ be a symmetric positive definite matrix. Let $f$ be a nonnegative concave function on $[0,\infty)$. Then \[ \trace{f(\mathbf{A})} \leq \trace{f(\mathbf{A}_{11})} + \trace{f(\mathbf{A}_{22})}.\] \end{lem} \begin{proof} See Theorem 2.1 and Remark 2.4 in \cite{lee2011extension}.\end{proof} We are ready to state and prove our main result of this section, which is the key in proving \cref{objerror} and \ref{modobjerror}. \begin{thm}\label{thm:main} Let $\mathbf{A}\in\mathbb{R}^{n\times n}$ be a symmetric positive definite matrix with eigendecomposition \[ \mathbf{A} = \mathbf{U} \Lambda \mathbf{U}^\top = \bmat{\mathbf{U}_1 & \mathbf{U}_2} \bmat{\mathbf{\Lambda}_1 \\ & \mathbf{\Lambda}_2} \bmat{\mathbf{U}_1^\top \\ \mathbf{U}_2^\top}, \] where $\mathbf{\Lambda}_1 = \mathsf{diag}\,(\lambda_1,\dots,\lambda_k)$ and $\mathbf{\Lambda}_2 = \mathsf{diag}\,(\lambda_{k+1},\dots,\lambda_n)$ contain the eigenvalues arranged in descending order. Assume that the eigenvalue ratio $\gamma_k \equiv \frac{\lambda_{k+1}}{\lambda_k} < 1$. Let $k$ be the target rank, $p \geq 2$ be the oversampling parameter such that $\ell \equiv k+p \leq n$, and let $q \geq 1$ be the number of subspace iterations. Furthermore, assume that $\mathbf{Q} \in \mathbb{R}^{n\times \ell} $ and $\mathbf{T} \in \mathbb{R}^{\ell\times \ell}$ are computed using \cref{alg:randsubspace} and define $\widehat{\mathbf{A}} \equiv \mathbf{Q}\mathbf{T}\mathbf{Q}^\top$. Then \[ \begin{aligned} 0 \leq \expectation{\trace{(\mathbf{I}+\widehat{\mathbf{A}})^{-1}} - \trace{(\mathbf{I} + \mathbf{A})^{-1}}} \leq & \> \trace{f(\mathbf{\Lambda}_2)} + \trace{f(\gamma_k^{2q-1}C\mathbf{\Lambda}_2)}, \end{aligned}\] where $f=x/(1+x)$, and the constant $C$ is defined in \cref{constant}. \end{thm} \begin{proof} Suppose $\text{rank}(\mathbf{A}) = r$. Then, $\mathbf{A}$ has at most $r$ nonzero eigenvalues, and thus, we can define $\mathbf{\Lambda}_{r-k} = \text{diag}(\lambda_{k+1},\dots,\lambda_r)$ so that \begin{equation}\label{eqn:lamrk} \mathbf{\Lambda}_2 = \bmat{\mathbf{\Lambda}_{r-k} \\ & \mathbf{0}_{n-r-k}}.\end{equation} We split this proof into several steps. 
\paragraph{Step 0: Lower bound} Let $\tilde\lambda_1\geq \dots \geq \tilde\lambda_{\ell} \geq 0 $ be the eigenvalues of $\mathbf{T}$ (and also $\widehat{\mathbf{A}}$). By the Cauchy interlacing theorem (see \cite[Lemma 1]{saibaba2016randomized} for the specific version of the argument), $\lambda_i \geq \tilde\lambda_i $ for $i=1,\dots,\ell$. Using properties of the trace operator \[ \begin{aligned} \trace{(\mathbf{I}+\widehat{\mathbf{A}})^{-1}} - \trace{(\mathbf{I} + \mathbf{A})^{-1}} = & \> \sum_{i=1}^\ell \frac{1}{1 +\tilde\lambda_i} + (n-\ell) - \sum_{i=1}^n \frac{1}{1+\lambda_i}\\ = & \> \sum_{i=1}^\ell \frac{\lambda_i - \tilde\lambda_i}{(1+\lambda_i)(1+\tilde\lambda_i)} + \sum_{i=\ell+1}^n \frac{\lambda_i}{1 +\lambda_i}. \end{aligned} \] Since each term in the summation is nonnegative, the lower bound follows. \paragraph{Step 1. Trace of matrix function} We first write $\widehat{\mathbf{A}} = \mathbf{Q}\mathbf{T}\mathbf{Q}^\top = \mathbf{Q}\Q^\top\mathbf{A}\mathbf{Q}\Q^\top = \mathbf{P}_\mathbf{Q}\mathbf{A}\mathbf{P}_\mathbf{Q}$, where $\mathbf{P}_\mathbf{Q} = \mathbf{Q}\Q^\top$ is an orthogonal projection matrix onto the range of $\mathbf{Q}$. Since $\widehat{\mathbf{A}}$ has the same eigenvalues as $\mathbf{R} \equiv \mathbf{A}^{1/2}\mathbf{P}_\mathbf{Q}\mathbf{A}^{1/2}$~\cite[Theorem 1.3.22]{HoJ13}, \begin{equation}\label{eqn:interchange} \trace{(\mathbf{I} + \widehat{\mathbf{A}})^{-1}} = \trace{(\mathbf{I} + \mathbf{A}^{1/2}\mathbf{P}_\mathbf{Q}\mathbf{A}^{1/2})^{-1}}. \end{equation} Also, since $\mathbf{P}_\mathbf{Q} \preceq \mathbf{I}$, it follows that $\mathbf{R} =\mathbf{A}^{1/2}\mathbf{P}_\mathbf{Q}\mathbf{A}^{1/2}\preceq \mathbf{A}$ and $\mathbf{0} \preceq \mathbf{A} - \mathbf{R} $. Therefore, from the proof of~\cite[Lemma X.1.4]{bhatia1997matrix} and \cref{eqn:interchange}, \[ \trace{(\mathbf{I} + \widehat{\mathbf{A}})^{-1}} - \trace{(\mathbf{I}+\mathbf{A})^{-1}} \leq \trace{\left( \mathbf{I} - (\mathbf{I}+ \mathbf{A}- \mathbf{R})^{-1} \right)} = \trace{ f(\mathbf{A} - \mathbf{R})}, \] where $f(x)$ was defined in the statement of the theorem. \paragraph{Step 2. Reducing the dimensionality} Let $\mathbf{F}_\mathbf{S} \equiv \mathbf{\Lambda}_2^q \boldsymbol{\Omega}_2\boldsymbol{\Omega}_1^\dagger \mathbf{\Lambda}^{-q}$. In \cref{alg:randsubspace}, we compute $\mathbf{Y} = \mathbf{A}^q\boldsymbol{\Omega} $ and let $\mathbf{Y} = \mathbf{Q} \mathbf{R}_{\mathbf{Y}}$ be the thin QR factorization of $\mathbf{Y}$. Let $\mathbf{W}_\mathbf{Q} = \mathbf{R}_{\mathbf{Y}}\boldsymbol{\Omega}_1^\dagger \mathbf{\Lambda}_1^{-q}(\mathbf{I} + \mathbf{F}_{\mathbf{S}}^\top\mathbf{F}_{\mathbf{S}})^{-1/2} \in \mathbb{R}^{\ell \times k}$ be defined as in the proof of \cite[Theorem 6]{saibaba2016randomized}. It was also shown that $\mathbf{W}_\mathbf{Q}$ has orthonormal columns, so that \[ \mathbf{Q}\mathbf{W}_\mathbf{Q} = \mathbf{U} \bmat{\mathbf{I} \\ \mathbf{F}_\mathbf{S}} (\mathbf{I}+\mathbf{F}_\mathbf{S}^\top\mathbf{F}_\mathbf{S})^{-1/2} \in \mathbb{R}^{n\times k},\] has orthonormal columns. The following sequence of identities also hold: $\mathbf{W}_\mathbf{Q}\mathbf{W}_\mathbf{Q}^\top \preceq \mathbf{I}$, $\mathbf{Q}\mathbf{W}_\mathbf{Q}\mathbf{W}_\mathbf{Q}^\top\mathbf{Q}^\top \preceq \mathbf{Q}\Q^\top$, and \[ \mathbf{A} - \mathbf{R} \preceq \mathbf{A} - \mathbf{A}^{1/2}\mathbf{Q}\mathbf{W}_\mathbf{Q}\mathbf{W}_\mathbf{Q}^\top\mathbf{Q}^\top \mathbf{A}^{1/2} \equiv \mathbf{S} .\] Since $f(x)$ is a monotonic increasing function, $\trace{f(\mathbf{A}-\mathbf{R})}\leq \trace{f(\mathbf{S})}$. 
\paragraph{Step 3. Split into the diagonal blocks} We can rewrite $\mathbf{S}$ as \[ \mathbf{S} = \mathbf{U} \bmat{\mathbf{S}_1 & *\\ * &\mathbf{S}_2} \mathbf{U}^\top ,\] where $*$ represents blocks that can be ignored and \[ \mathbf{S}_1 \equiv \mathbf{\Lambda}_1^{1/2}(\mathbf{I}-(\mathbf{I}+\mathbf{F}_\mathbf{S}^\top\mathbf{F}_\mathbf{S})^{-1})\mathbf{\Lambda}_1^{1/2}, \qquad \mathbf{S}_2 = \mathbf{\Lambda}_2^{1/2}(\mathbf{I}-\mathbf{F}_\mathbf{S}(\mathbf{I}+\mathbf{F}^\top_\mathbf{S}\mathbf{F}_\mathbf{S})^{-1}\mathbf{F}_\mathbf{S}^\top)\mathbf{\Lambda}_2^{1/2}. \] We can invoke \cref{lem:trace}, since $f(x) = x/(1+x)$ is concave and nonnegative on $[0,\infty)$. Therefore, we have \[\trace{f(\mathbf{S})} \leq \trace{f(\mathbf{S}_1)} + \trace{f(\mathbf{S}_2)}.\] Note that the matrix $\mathbf{U}$ disappears because the trace is unitarily invariant. \paragraph{Step 4. Completing the structural bound} Using an SVD-based argument, it can be shown that $\mathbf{I} - (\mathbf{I}+\mathbf{F}_\mathbf{S}^\top\mathbf{F}_\mathbf{S} )^{-1} \preceq \mathbf{F}_\mathbf{S}^\top\mathbf{F}_\mathbf{S}$, so that \[\mathbf{S}_1 = \mathbf{\Lambda}_1^{1/2}(\mathbf{I}-(\mathbf{I}+\mathbf{F}_\mathbf{S}^\top\mathbf{F}_\mathbf{S})^{-1})\mathbf{\Lambda}_1^{1/2} \preceq \mathbf{\Lambda}_1^{1/2}\mathbf{F}_\mathbf{S}^\top\mathbf{F}_\mathbf{S}\mathbf{\Lambda}_1^{1/2}. \] Therefore, since $f$ is monotonically increasing, \begin{equation}\label{eqn:s1} \trace{f(\mathbf{S}_1)} \leq \> \trace{ f( \mathbf{\Lambda}_1^{1/2}\mathbf{F}_\mathbf{S}^\top\mathbf{F}_\mathbf{S}\mathbf{\Lambda}_1^{1/2}) } = \> \sum_{j=1}^k f\left( \lambda_j\left[\mathbf{\Lambda}_1^{1/2}\mathbf{F}_\mathbf{S}^\top\mathbf{F}_\mathbf{S}\mathbf{\Lambda}_1^{1/2}\right] \right) . \end{equation} Note that $\mathbf{F}_\mathbf{S}\mathbf{\Lambda}_1^{1/2}$ is $(n-k)\times k$ and therefore has at most $\min\{n-k,k\}$ nonzero singular values. Examining the structure of $\mathbf{F}_\mathbf{S}\mathbf{\Lambda}_1^{1/2}$ more closely, and using \cref{eqn:lamrk}, we can write \[ \begin{aligned} \mathbf{F}_\mathbf{S}\mathbf{\Lambda}_1^{1/2} = & \> \mathbf{\Lambda}_2^{1/2} \bmat{ \mathbf{\Lambda}_{r-k}^{q-1/2} \\ & \mathbf{0}} \boldsymbol{\Omega}_2\boldsymbol{\Omega}_1^\dagger \mathbf{\Lambda}_1^{-q + 1/2} \\ = & \> \mathbf{\Lambda}_2^{1/2} \bmat{ \mathbf{\Lambda}_{r-k}^{q-1/2} \widehat\boldsymbol{\Omega}_2\boldsymbol{\Omega}_1^\dagger \mathbf{\Lambda}_1^{-q + 1/2} \\ \mathbf{0}}, \end{aligned}\] where $\widehat\boldsymbol{\Omega}_2 \in \mathbb{R}^{(r-k)\times (k+p)}$ such that $ \boldsymbol{\Omega}_2 = \bmat{\widehat\boldsymbol{\Omega}_2 \\ \mathbf{*}}$. Using the multiplicative singular value inequalities~\cite[Equation (7.3.14)]{HoJ13} and repeated applications of the submultiplicative inequality, we obtain \begin{equation}\label{eqn:mult}\sigma_j(\mathbf{F}_\mathbf{S}\mathbf{\Lambda}_1^{1/2}) \leq \gamma_k^{q-1/2} \|\widehat{\boldsymbol{\Omega}}_2\boldsymbol{\Omega}_1^\dagger\|_2 \sigma_j(\mathbf{\Lambda}_2^{1/2}), \qquad j=1,\dots,\min\{k,n-k\}. \end{equation} The analysis splits into two cases: \begin{description} \item [Case 1: $k \leq n-k$.]
Since $f$ is monotonically increasing, using \cref{eqn:s1,eqn:mult} \[ \begin{aligned} \trace{f(\mathbf{S}_1)} \leq & \> \sum_{j=1}^k f\left( \lambda_j\left[\mathbf{\Lambda}_1^{1/2}\mathbf{F}_\mathbf{S}^\top\mathbf{F}_\mathbf{S}\mathbf{\Lambda}_1^{1/2}\right] \right) \leq \sum_{j=1}^k f\left(\gamma_k^{2q-1} \|\widehat\boldsymbol{\Omega}_2\boldsymbol{\Omega}_1^\dagger\|_2^2 \sigma_j^2(\mathbf{\Lambda}_2^{1/2})\right) \\ \leq &\> \sum_{j=1}^{n-k} f\left(\gamma_k^{2q-1} \|\widehat\boldsymbol{\Omega}_2\boldsymbol{\Omega}_1^\dagger\|_2^2 \sigma_j^2(\mathbf{\Lambda}_2^{1/2})\right) = \trace{f (\gamma_k^{2q-1} \|\widehat\boldsymbol{\Omega}_2\boldsymbol{\Omega}_1^\dagger\|_2^2 \mathbf{\Lambda}_2)}. \end{aligned} \] \item [Case 2: $k > n-k$.] Since $\mathbf{\Lambda}_1^{1/2}\mathbf{F}_\mathbf{S}^\top\mathbf{F}_\mathbf{S}\mathbf{\Lambda}_1^{1/2}$ has at most $n-k$ nonzero eigenvalues, use the fact that $f(0)=0$, along with \cref{eqn:s1,eqn:mult} to obtain \[ \begin{aligned}\trace{f(\mathbf{S}_1)} \leq & \> \sum_{j=1}^k f\left( \lambda_j\left[\mathbf{\Lambda}_1^{1/2}\mathbf{F}_\mathbf{S}^\top\mathbf{F}_\mathbf{S}\mathbf{\Lambda}_1^{1/2}\right] \right) = \sum_{j=1}^{n-k} f\left( \lambda_j\left[\mathbf{\Lambda}_1^{1/2}\mathbf{F}_\mathbf{S}^\top\mathbf{F}_\mathbf{S}\mathbf{\Lambda}_1^{1/2}\right] \right)\\ \leq &\> \sum_{j=1}^{n-k} f\left(\gamma_k^{2q-1} \|\widehat\boldsymbol{\Omega}_2\boldsymbol{\Omega}_1^\dagger\|_2^2 \sigma_j^2(\mathbf{\Lambda}_2^{1/2})\right) = \trace{f (\gamma_k^{2q-1} \|\widehat\boldsymbol{\Omega}_2\boldsymbol{\Omega}_1^\dagger\|_2^2 \mathbf{\Lambda}_2)}. \end{aligned} \] \end{description} To summarize, in both cases $\trace{f(\mathbf{S}_1)} \leq \trace{f (\gamma_k^{2q-1} \|\widehat\boldsymbol{\Omega}_2\boldsymbol{\Omega}_1^\dagger\|_2^2 \mathbf{\Lambda}_2)}.$ Similarly, since $\mathbf{0}\preceq \mathbf{F}_\mathbf{S}(\mathbf{I}+\mathbf{F}_\mathbf{S}^\top\mathbf{F}_\mathbf{S})^{-1}\mathbf{F}_\mathbf{S}^\top $, we can show \[ \mathbf{S}_2 = \> \mathbf{\Lambda}_2^{1/2}(\mathbf{I}-\mathbf{F}_\mathbf{S}(\mathbf{I}+\mathbf{F}_\mathbf{S}^\top\mathbf{F}_\mathbf{S})^{-1}\mathbf{F}_\mathbf{S}^\top)\mathbf{\Lambda}_2^{1/2} \preceq \mathbf{\Lambda}_2, \] so that $\trace{f(\mathbf{S}_2)} \leq \trace{f(\mathbf{\Lambda}_2)}$. Combine with step 3 to obtain \[ \trace{f(\mathbf{S})} \leq \trace{f(\mathbf{\Lambda}_2) } +\trace{f(\gamma_k^{2q-1} \|\widehat\boldsymbol{\Omega}_2\boldsymbol{\Omega}_1^\dagger\|_2^2 \mathbf{\Lambda}_2)}.\] Combine this with the results of steps 1 and 2, to obtain the structural bound \[ \trace{(\mathbf{I} + \widehat{\mathbf{A}})^{-1}} - \trace{(\mathbf{I}+\mathbf{A})^{-1}} \leq \trace{f(\gamma_k^{2q-1} \|\widehat\boldsymbol{\Omega}_2\boldsymbol{\Omega}_1^\dagger\|_2^2 \mathbf{\Lambda}_2)} + \trace{f(\mathbf{\Lambda}_2)} .\] \paragraph{Step 5. The expectation bound} Note that $\widehat\boldsymbol{\Omega}_2 \in\mathbb{R}^{(r-k)\times (k+p)}$ and $\boldsymbol{\Omega}_1 \in \mathbb{R}^{k\times (k+p)}$. From the proof of \cite[Theorem 1]{saibaba2016randomized}, we have $\mathbb{E}\,[\|\widehat\boldsymbol{\Omega}_2\boldsymbol{\Omega}_1^\dagger\|_2^2] \leq C$, where $C$ was defined in~\cref{constant}. 
By Jensen's inequality, using the fact that $f(x) = x/(1+x)$ is concave on $[0,\infty)$, we have \[ \begin{aligned} \expectation{\trace{(\mathbf{I}+\widehat{\mathbf{A}})^{-1}} - \trace{(\mathbf{I} + \mathbf{A})^{-1}}} \leq & \> \trace{f(\mathbf{\Lambda}_2)} + \expectation{\trace{f(\gamma_k^{2q-1} \|\widehat\boldsymbol{\Omega}_2\boldsymbol{\Omega}_1^\dagger\|_2^2 \mathbf{\Lambda}_2)}}\\ \leq & \> \trace{f(\mathbf{\Lambda}_2)} + \trace{f(\gamma_k^{2q-1}C\mathbf{\Lambda}_2)}. \end{aligned} \] Combining this with the lower bound (step 0) completes the proof. \end{proof} \subsection{Proof of \cref{objerror}} \label{proofobjerror} For the remaining discussion, recall the notation from \cref{objerror} \begin{equation} \mathbf{P}_j={\mathbfcal{F}}^\top\mathbf{E}_j^\text{noise} {\mathbfcal{F}}, \label{notation} \end{equation} where ${\mathbfcal{F}}$ and $\mathbf{E}_j^\text{noise}$ are defined in \cref{eqn:ff} and \cref{Wnoise}, respectively. We will also need \begin{lem}[See \cite{alexanderian2018dopt}] Let $\mathbf{A},\mathbf{B}\in\mathbb{R}^{n\times n}$ and let $\mathbf{B}$ be a symmetric positive semidefinite matrix. Then, we have $|\trace{\mathbf{A}\mathbf{B}}|\leq\|\mathbf{A}\|_2\trace{\mathbf{B}}$. \label{lemnormtr} \end{lem} \begin{proof}[Proof of \cref{objerror}] Recall our estimator $\aoptrand{(\vec{w};\ell)}$ from \cref{aoptest}. For fixed $\ell$, using \cref{lemnormtr} we have \[ \begin{aligned} \mathbb{E}|\aopt{(\vec{w})}-\aoptrand{(\vec{w};\ell)}|=& \> \mathbb{E}\left|\trace{(\mathbf{I}+\mathbfcal{H}_\text{m}(\vec{w}))^{-1}\mathbf{Z}-(\mathbf{I}+\widehat{\mathbfcal{H}}_\text{m}(\vec{w}))^{-1}\mathbf{Z}} \right|\\ \leq & \> \|\mathbf{Z}\|_2 \mathbb{E} |\trace{(\mathbf{I}+\mathbfcal{H}_\text{m}(\vec{w}))^{-1}-(\mathbf{I}+\widehat{\mathbfcal{H}}_\text{m}(\vec{w}))^{-1}}|. \end{aligned} \] Applying \cref{thm:main} establishes \cref{equ:aopt_obj_bound}. Next, we consider \cref{equ:aopt_grad_bound}. Recall the estimator $\gradaoptrand{(\vec{w};\ell)}$ from \cref{estgrad}. We can write the absolute error as \begin{multline*} |\partial_j \aopt{(\vec{w})}-\gradaoptrand{(\vec{w};\ell)}|\\ = |\trace{\big((\mathbf{I}+\mathbfcal{H}_\text{m}(\vec{w}))^{-1}\mathbf{P}_j(\mathbf{I} +\mathbfcal{H}_\text{m}(\vec{w}))^{-1} - (\mathbf{I}+\widehat{\mathbfcal{H}}_\text{m}(\vec{w}))^{-1}{\mathbf{P}}_j(\mathbf{I}+\widehat{\mathbfcal{H}}_\text{m}(\vec{w}))^{-1}\big)\mathbf{Z}}|, \end{multline*} where $\widehat{\mathbfcal{H}}_\text{m}(\vec{w})=\mathbf{Q}\mathbf{T}\mathbf{Q}^\top.$ We use the decomposition \begin{multline*} \big((\mathbf{I}+\mathbfcal{H}_\text{m}(\vec{w}))^{-1}\mathbf{P}_j(\mathbf{I} +\mathbfcal{H}_\text{m}(\vec{w}))^{-1} - (\mathbf{I}+\widehat{\mathbfcal{H}}_\text{m}(\vec{w}))^{-1}{\mathbf{P}}_j(\mathbf{I}+\widehat{\mathbfcal{H}}_\text{m}(\vec{w}))^{-1}\big)\mathbf{Z} \\ = -\left(\mathbf{D} \mathbf{P}_j (\mathbf{I} + \mathbfcal{H}_\text{m}(\vec{w}))^{-1} + (\mathbf{I}+\widehat{\mathbfcal{H}}_\text{m}(\vec{w}))^{-1}\mathbf{P}_j\mathbf{D}\right)\mathbf{Z}, \end{multline*} where $\mathbf{D} \equiv (\mathbf{I} +\widehat{\mathbfcal{H}}_\text{m}(\vec{w}))^{-1} - (\mathbf{I}+\mathbfcal{H}_\text{m}(\vec{w}))^{-1}$. Repeated application of \cref{lemnormtr} gives \[\begin{aligned} |\partial_j \aopt{(\vec{w})}-\gradaoptrand{(\vec{w};\ell)}| \leq & \> \|\mathbf{P}_j\|_2\|\mathbf{Z}\|_2 \left(\|(\mathbf{I} + \mathbfcal{H}_\text{m}(\vec{w}))^{-1}\|_2 \right. \\ & \> \qquad \left. + \|(\mathbf{I} + \widehat{\mathbfcal{H}}_\text{m}(\vec{w}))^{-1}\|_2 \right) \trace{\mathbf{D}}.
\end{aligned} \] Since $\mathbf{I} + \mathbfcal{H}_\text{m}(\vec{w})$ and $\mathbf{I}+\widehat{\mathbfcal{H}}_\text{m}(\vec{w})$ have eigenvalues greater than or equal to one, $\|(\mathbf{I} + \mathbfcal{H}_\text{m}(\vec{w}))^{-1}\|_2 + \|(\mathbf{I}+\widehat{\mathbfcal{H}}_\text{m}(\vec{w}))^{-1}\|_2\leq 2$. Finally, taking the expectation and applying \cref{thm:main}, we obtain the desired result. \end{proof} \subsection{Proof of \cref{modobjerror}} \label{proofmodobjerror} \begin{proof} The proof follows along the same lines as the proof of \cref{objerror}, except that in \cref{modobjerror} the matrix $\mathbf{Z}$ does not appear in the expressions. \end{proof} \end{appendix}
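As a small, self-contained illustration of the quantities in \cref{thm:main} (not part of the proofs above), the following NumPy sketch computes $\widehat{\mathbf{A}} = \mathbf{Q}\mathbf{T}\mathbf{Q}^\top$ for a synthetic symmetric positive definite matrix and compares the error $\trace{(\mathbf{I}+\widehat{\mathbf{A}})^{-1}} - \trace{(\mathbf{I}+\mathbf{A})^{-1}}$ against the deterministic part of the bound, $\trace{f(\mathbf{\Lambda}_2)}$. It assumes that \cref{alg:randsubspace} forms $\mathbf{Y}=\mathbf{A}^q\boldsymbol{\Omega}$ for a Gaussian $\boldsymbol{\Omega}$, takes a thin QR factorization $\mathbf{Y}=\mathbf{Q}\mathbf{R}_\mathbf{Y}$, and sets $\mathbf{T}=\mathbf{Q}^\top\mathbf{A}\mathbf{Q}$, consistent with Step 2; the test matrix, spectrum, and variable names are purely illustrative.
\begin{verbatim}
import numpy as np

def randomized_subspace(A, ell, q, rng):
    # Sketch of the randomized subspace iteration used above:
    # Y = A^q * Omega with Gaussian Omega, thin QR of Y, T = Q^T A Q.
    n = A.shape[0]
    Y = rng.standard_normal((n, ell))
    for _ in range(q):
        Y = A @ Y
    Q, _ = np.linalg.qr(Y)
    return Q, Q.T @ A @ Q

rng = np.random.default_rng(0)
n, k, p, q = 200, 10, 5, 1
# Synthetic SPD matrix with a rapidly decaying spectrum.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = 100.0 * 0.5 ** np.arange(n)
A = (U * lam) @ U.T

Q, T = randomized_subspace(A, k + p, q, rng)
Ahat = Q @ T @ Q.T

err = (np.trace(np.linalg.inv(np.eye(n) + Ahat))
       - np.trace(np.linalg.inv(np.eye(n) + A)))   # nonnegative (Step 0)
f = lambda x: x / (1.0 + x)
print(err, np.sum(f(lam[k:])))   # error vs. trace f(Lambda_2)
\end{verbatim}
The full expectation bound adds the term $\trace{f(\gamma_k^{2q-1}C\mathbf{\Lambda}_2)}$, which depends on the oversampling through the constant $C$.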
\section{Introduction} Reinforcement learning (RL) has had major success recently, in terms of mastering highly complex games like Go~\cite{silver2017mastering} and robotic control tasks. However, the ability of agents to transfer across tasks still remains an important problem to be solved towards artificial general intelligence. Transfer learning plays a crucial role in this; to give the agents the ability to adapt across tasks. In this work, we postulate that a key step towards transfer and multi-task learning is for the agent to be able to adapt and explore faster in the new task. Our goal is for the agent to explore unseen regions of the state space quickly in new tasks by re-using its past experiences. However, doing this in the on-policy setting is difficult, since the agent cannot re-use its past data as the task changes. Towards this, we provide an approach for faster exploration in the new task. We present a transfer learning strategy which fundamentally relies on Bayesian deep learning and the ability to represent a distribution over functions, as in \cite{bachman2018vfunc} \cite{garnelo2018neural}. Bayesian methods rely on modeling the uncertainty over value functions to represent the agent's belief of the environment. Recent work has shown that neural networks can be used to represent an uncertainty over the space of all possible functions \cite{bachman2018vfunc}. The idea of modeling a distribution over functions can be adapted in the RL setting to model a distribution over policies, such that we can also maximize the entropy over this distribution of policies. This is similar to maximum entropy exploration in RL, where instead of local entropy maximization, recent work maximizes the global entropy over the space of all possible sub-optimal policies. Our work relies on modeling a distribution over policies for transfer learning, where the pre-trained policies can be useful for learning the target policy in the transfer setup. We hypothesize that if for a given task, we can maximize the entropy of the distribution over policies, then this model can be useful for transfer. This is because, in the new tasks, each sampled policy would lead to a highly diverse trajectory, leading to faster coverage of the state space. We build upon a recent work named VFunc~\cite{bachman2018vfunc}, which presents a framework to represent a distribution over functions rather than the space of parameters. VFunc models the distribution $p(f, z)$, where $f$ is an element of the function space and $z$ is the latent variable. VFunc not only mitigates the intractable modeling of uncertainty in the parameter space but also leads to an efficient method of sampling of functions. \begin{figure*}[!t] \centering \includegraphics[width = \textwidth]{Figures/archFinal.png} \caption{\textbf{Schematic explaining Transfer Learning with VFunc:} We start with training the source environment using both VFunc+Policy Gradient (PG) and just Policy Gradient (PG) algorithms. PG algorithms correspond to either REINFORCE or A2C. The Source Model (SM) is then fine-tuned in the target environments to produce the Transferred Model (TM). We also train the target environment from scratch. Performance in terms of cumulative reward plots is then observed for all TM's. } \label{fig:formulation:arch} \end{figure*} VFunc can be viewed as the maximization of the entropy-regularized objectives of the form: $\mathcal{L}(f) = \mathbb{E}_{f\sim p(f)}\left[R(f)\right] + \lambda \cdot H(f)$. 
Here, $R(f)$ is a measure of the ``goodness'' of the function $f$, and $H(f)$ represents the entropy of $f\sim p(f)$. $R(f)$ can be the data log-likelihood in case of Bayesian deep learning or the reward in case of RL. Note that the efficient sampling of functions $f\sim p(f)$, by marginalizing $z$ from $p(f, z)$ in VFunc, can be directly used towards maximization of these objectives. Further, the joint distribution $p(f, z)$ can be utilized to obtain a variational lower bound for the entropy term $H(f)$. Thus, the approach of modeling $p(f, z)$ can be applied in an efficient and simple manner to a large number of Bayesian deep learning and RL problems. In reinforcement learning, the aim is to maximize a reward term $\mathbb{E}_{\pi\sim p(\pi)}\left[R(\pi)\right]$, measuring the goodness of the policy $\pi$, subject to large exploration in the policy space ensured by the term $H(p(\pi))$, which represents the entropy of the distribution over policies. Note that by varying the latent variable $z$, we can obtain different policies. To summarize, we propose a transfer learning approach based on modeling a distribution over policies. The entropy term in the optimization objective forces the model to consider a diverse set of functions while training. Since the learned policy parameters in the train (source) task are representative of the policy distribution, it improves the exploration of trajectories in the test (target) task. This offers benefits as compared to learning the target policy based on random explorations as the knowledge gained from the source task can be efficiently transferred to the target task to improve performance. Extensive experimentation on GridWorld and MiniGrid demonstrates that transfer to target environments by training with VFunc on the source environment not only results in improved convergence but also leads to diversity in learned policies. \section{Related Work} Significant work has been done investigating various approaches for transfer learning~\cite{taylor2009transfer,lazaric2012transfer}. These include meta-learning based transferable strategies~\cite{gupta2018meta}, learning decision states with information bottleneck for transfer~\cite{goyal2019infobot}, attention-based deep architectures for selective transfer~\cite{rajendran2015attend}, semi-Markov Decision Processes based approach~\cite{mehta2008transfer}, distillation based approach~\cite{distral}, and successor features based transfer~\cite{barreto2017successor}. In contrast, our approach utilizes diverse exploration in policy space via variational lower bound maximization of entropy-regularized objectives as the basis of transfer. There has also been some work that considers the problem of exploration strategies~\cite{schmidhuber1991curious,osband2016deep,houthooft2016variational}. However, these approaches do not carry out transfer of learning from source task to target tasks. In addition, recent works~\cite{haarnoja2018soft,neu2017unified,nachum2017bridging} demonstrate the advantages of entropy-regularized approaches. Also, there has been significant work on modeling policies with latent variables~\cite{hausman2018learning,florensa2017stochastic,haarnoja2018latent,eysenbach2018diversity}. In particular, the contribution of~\cite{hausman2018learning} is the policy gradient formulation for hierarchical policies with task-conditional latent variables. 
On the other hand, we rely on optimizing entropy-regularized objectives to induce diversity, with latent-variable-based modeling of the distribution over policies. \section{Method Description} In this section, we provide a brief description of VFunc \cite{bachman2018vfunc}, a deep generative model for modeling a distribution over the function space. Later, we describe how we use VFunc for transfer learning. \subsection{Modeling Distribution over Policies} VFunc \cite{bachman2018vfunc} consists of a prediction network for a stochastic mapping of inputs to outputs and a recognition network for encoding a function into a latent variable. The prediction network $p(y\mid x, z)$ maps the inputs $x$ to outputs $y$ conditioned on the latent variable $z$. In the case of RL problems, the underlying function $f$ is the policy, the inputs $x$ are the state representations and the outputs $y$ are the corresponding actions. The space of the latent variable $z$ is assigned a latent variable prior, denoted by $p(z)$. Analogously, the space of the functions (policies) $f$ is assigned a function prior $\bar{p}(f)$. The recognition network $q(z\mid f)$ learns a latent variable encoding for a policy $f$. Further, the variational lower bound on the entropy term is formulated using the recognition network. The optimization objective, which allows estimation of the true posterior $p(f\mid D)$ given a dataset $D$, is given as follows (taken directly from \cite{bachman2018vfunc}): \begin{equation} \label{eq:formulation:optObjective} \textbf{Max} \, \, \mathcal{L}(f) = \mathbb{E}_{f\sim p(f)}[ R(D\mid f) + \log \bar{p}(f)] + \lambda H(f) \end{equation} The data-dependent reward term $\mathbb{E}_{f\sim p(f)}[R(D\mid f)]$ can be computed via standard policy-gradient techniques like A2C or REINFORCE. The function prior $\bar{p}(f)$ can be chosen to be an unnormalized energy, i.e., an arbitrary regularizer function. The estimation of $H(f)$ is less straightforward. However, it can be achieved with the variational lower bound given below (taken directly from \cite{bachman2018vfunc}): \begin{equation} \label{eq:formulation:varLowBound} H(f) \geq H(z) + \mathbb{E}_{f, z\sim p(f, z)}[\log q(z\mid f)] + H(f\mid z) \end{equation} The latent variable prior $p(z)$ is chosen such that $H(z)$ is simple to estimate. The authors of \cite{bachman2018vfunc} model $p(y\mid x, z)$ as a Gaussian distribution so that estimating $H(f\mid z)$ becomes easier. Finally, the conditional cross-entropy term can be computed in terms of the recognition network $q(z\mid f)$ and the underlying joint distribution $p(f, z)$. Thus, using the lower bound in Eq.~\ref{eq:formulation:varLowBound}, we can compute a lower-bound estimate of the optimization objective. VFunc models a function $f$ as a set of $K$ input-output pairs, called a partially observed version of the function, denoted by $\hat{f}$. To generate a (partially observed) policy function given a latent variable, they sample $K$ state representations $\{x_k\}_{k = 1}^K$ from the space of inputs and sample actions $y_k \sim p(y\mid x_k, z)$. Thus, the partially observed policy $\hat{f}$ is given by the input-output pairs: $\hat{f} = \{(x_k, y_k)\}_{k = 1}^K$. On the other hand, to encode a partially observed policy $\hat{f} = \{(x_k, y_k)\}_{k = 1}^K$, they use the prediction network and a fixed default latent variable $\bar{z}$ to obtain the default output $\hat{y}_k$. Then, the loss function for each input-output pair is computed as $L(y_k, p(y\mid x_k, \bar{z}))$.
Back-propagating the sum of all such loss functions gives a gradient $\sum_{k = 1}^K\nabla_{\bar{z}}L(y_k, p(y\mid x_k, \bar{z}))$, which indicates how inaccurate the prediction network is when the fixed default latent variable is used for prediction. This gradient is the input to a multi-layer feed-forward network that essentially implements $q(z\mid \hat{f})$ and predicts a latent variable $z$ for $\hat{f}$. \begin{algorithm} \footnotesize \DontPrintSemicolon \SetAlgoLined \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \SetKwInOut{Nothing}{} \Input{\quad Training algorithm $\mathcal{A}\in\{\text{PG}, \text{VFunc+PG}\}$} \Input{\quad Initial Policy $\pi_\theta(a\mid s, z)$, parameterized with $\theta$} \Input{\quad Train and Test task distributions: $P_{\mathrm{train}}, P_{\mathrm{test}}$} \Output{\quad Trained Policy $\pi_{\theta^{\dagger}}(a\mid s, z)$ for the test task} \BlankLine \SetAlgoLined \Fn{Transfer ($\mathcal{A}, \pi_\theta(a\mid s, z), P_{\mathrm{train}}, P_{\mathrm{test}}$)}{ \vspace*{7pt} $\bullet$\ \textbf{Training Setup:} \vspace*{3pt} $\theta\leftarrow$\ random initialization Sample train task $\tau_{\mathrm{train}}\sim P_{\mathrm{train}}$ \For {\text{episodes $ = 1:N_{\mathrm{train}}$}}{ Train policy $\pi_{\theta}$ on task $\tau_{\mathrm{train}}$ with algorithm $\mathcal{A}$ } \textbf{end for} $\theta^\ast\leftarrow$\ Parameters after training on $\tau_{\mathrm{train}}$ \vspace*{7pt} $\bullet$\ \textbf{Transfer Setup:} \vspace*{3pt} $\theta \leftarrow \theta^\ast$\ (transfer) \textbf{or} $\theta\leftarrow$ random init. (train from scratch) Sample test task $\tau_{\mathrm{test}}\sim P_{\mathrm{test}}$ \For {\text{episodes $ = 1:N_{\mathrm{test}}$}}{ Train policy $\pi_{\theta}$ on task $\tau_{\mathrm{test}}$ with algorithm $\mathcal{A}$ } \textbf{end for} $\theta^\dagger\leftarrow$\ Parameters after training on $\tau_{\mathrm{test}}$ } \BlankLine \Return $\pi_{\theta^{\dagger}}(a\mid s, z)$\; \BlankLine \caption{Transfer Learning Via VFunc} \label{algorithm} \end{algorithm} \subsection{Transfer Learning with VFunc} For transfer learning with VFunc, we consider a set of tasks and the train and test task distributions\footnote{Here, train and test task distributions refer to source and target task distributions, respectively.}: $P_{\mathrm{train}}, P_{\mathrm{test}}$. In the training phase, a task $\tau_{\mathrm{train}}$ is sampled from $P_{\mathrm{train}}$. The parametrized policy $\pi_{\theta}$ is trained on the train task $\tau_{\mathrm{train}}$ with the VFunc algorithm~\cite{bachman2018vfunc} (using Eq.~\ref{eq:formulation:optObjective}) to obtain trained parameters $\theta^\ast$. Note that the training promotes exploration in the policy space due to the entropy term $H(f)$ in Eq.~\ref{eq:formulation:optObjective}. Thus, the learned parameters encode the knowledge of the policy distribution. For transfer, we sample a new test task $\tau_{\mathrm{test}}$ from $P_{\mathrm{test}}$. For training a policy on the test task, we initialize the policy parameters with $\theta^\ast$. Since the pre-trained parameters represent a good and diverse encoding of the policy distribution, they improve the exploration of trajectories in the test task. Instead of relying on random exploration of trajectories, which would be the case if the VFunc-based pre-trained parameters were not incorporated, policy learning benefits from better and faster trajectory exploration owing to the knowledge gained from the train task.
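The transfer procedure in Algorithm~\ref{algorithm} can be summarized by the following minimal Python sketch. The interfaces (\texttt{algorithm}, \texttt{policy}, and the task lists) are hypothetical placeholders standing in for a PG or VFunc+PG training loop and for the task distributions $P_{\mathrm{train}}$ and $P_{\mathrm{test}}$; they are not part of the VFunc code base.
\begin{verbatim}
import copy
import random

def transfer(algorithm, policy, train_tasks, test_tasks,
             n_train_episodes, n_test_episodes, from_scratch=False):
    # `algorithm` is a placeholder callable (policy, task) -> policy that
    # performs one episode of training with PG or VFunc+PG.
    init_policy = copy.deepcopy(policy)            # randomly initialized theta

    # Training setup: sample a source task and train the initial policy.
    train_task = random.choice(train_tasks)
    for _ in range(n_train_episodes):
        policy = algorithm(policy, train_task)
    pretrained = copy.deepcopy(policy)             # theta*

    # Transfer setup: reuse theta* or reinitialize (train from scratch).
    target_policy = copy.deepcopy(init_policy) if from_scratch else pretrained
    test_task = random.choice(test_tasks)
    for _ in range(n_test_episodes):
        target_policy = algorithm(target_policy, test_task)
    return target_policy                           # theta^dagger
\end{verbatim}
Runs pretrained with VFunc+PG, runs pretrained with plain PG, and the train-from-scratch baseline differ only in the choice of \texttt{algorithm} and the \texttt{from\_scratch} flag.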
Thus, we expect to observe faster learning and faster convergence to the maximum rewards in the test task. The results of the experiments, as described in the subsequent section, indeed validate this hypothesis. Our transfer learning algorithm is described in Algorithm~\ref{algorithm} and its schematic representation is depicted in Figure~\ref{fig:formulation:arch}. \section{Implementation} In order to test our hypothesis, we carried out multiple experiments on different environments and variable settings within each environment. In these experiments, we compared training with VFunc to training with vanilla policy gradient-approaches. We provide the details of the environments used and various experimental settings below. \subsection{Environments Used} \subsubsection{GridWorld} In this class of fully-observable environments, the task is to reach from a start position to a goal position in a rectangular grid. There is a reward of $+1$ on reaching the goal and a small penalty for the number of steps taken to reach the goal. Actions are deterministic and correspond to movement in the four directions. We experimented with six square gridworld domains of size 20 implemented with EasyMDP\footnote{Easy MDP: https://github.com/zafarali/emdp}. The details of each are provided in Figure \ref{fig:gridworld}. \begin{figure} \centering \begin{subfigure}[b]{0.15\textwidth} \includegraphics[width=\textwidth]{Figures/grid_pachinko_1.jpg} \caption{Grid-1} \label{fig:grid-1} \end{subfigure} ~ \begin{subfigure}[b]{0.15\textwidth} \includegraphics[width=\textwidth]{Figures/grid_pachinko_2.jpg} \caption{Grid-2} \label{fig:grid-2} \end{subfigure} ~ \begin{subfigure}[b]{0.15\textwidth} \includegraphics[width=\textwidth]{Figures/grid_pachinko_3.jpg} \caption{Grid-3} \label{fig:grid-3} \end{subfigure} ~ \begin{subfigure}[b]{0.15\textwidth} \includegraphics[width=\textwidth]{Figures/grid_pachinko_6.jpg} \caption{Grid-6} \label{fig:grid-6} \end{subfigure} ~ \begin{subfigure}[b]{0.15\textwidth} \includegraphics[width=\textwidth]{Figures/grid_pachinko_7.jpg} \caption{Grid-7} \label{fig:grid-7} \end{subfigure} ~ \begin{subfigure}[b]{0.15\textwidth} \includegraphics[width=\textwidth]{Figures/grid_pachinko_8.jpg} \caption{Grid-8} \label{fig:grid-8} \end{subfigure} \caption{Different GridWorld Environments Used. Red and Green squares correspond to start and goal positions, respectively. The Grid-1 is used as the \textbf{Source} environment and the rest as the \textbf{Target} environments.}\label{fig:gridworld} \end{figure} \subsubsection{MiniGrid} We also experimented with the partially-observable class of gridworld gym environments called MiniGrid~\cite{gym_minigrid}. Specifically, we used the Multi-Room Environment suite with 2 and 3 rooms of sizes 4 and 6. This environment has a series of connected rooms with doors that must be opened in order to get to the next room. The final room has the green goal square the agent must reach in order to get a reward of +1. There is a small penalty added for the number of steps to the reach the goal. There are two settings which we experimented with. In the first setting [\textbf{\textit{Dynamic}}], a different environment with the same configuration is chosen randomly at the beginning of each episode. The number of rooms and size of each room remains constant, but the positions of rooms, the way they are connected and the position of doors is dynamic. These factors make this suite of environments extremely difficult to solve using RL alone. 
In the second setting [\textbf{\textit{Static}}], the environment at the beginning of each episode is kept same by fixing the seed. \begin{figure*}[h] \centering \begin{subfigure}[h]{0.495\textwidth} \begin{subfigure}[b]{0.495\textwidth} \includegraphics[trim = {1.0cm 0 1.5cm 0}, clip, width=\textwidth]{Plots/Grid-2_with_VFunc_from_Grid-1_full.pdf} \label{fig:vfunc_grid_2} \end{subfigure} \begin{subfigure}[b]{0.495\textwidth} \includegraphics[trim = {1.0cm 0 1.5cm 0}, clip, width=\textwidth]{Plots/Grid-2_with_REINFORCE_from_Grid-1_full.pdf} \label{fig:r_grid_2} \end{subfigure} \vspace*{-1cm} \caption{Transfer on Grid-2} \label{transfer_grid_2} \end{subfigure} \centering \begin{subfigure}[h]{0.495\textwidth} \begin{subfigure}[b]{0.495\textwidth} \includegraphics[trim = {1.0cm 0 1.5cm 0}, clip, width=\textwidth]{Plots/Grid-3_with_VFunc_from_Grid-1_200.pdf} \label{fig:vfunc_grid_3} \end{subfigure} \begin{subfigure}[b]{0.495\textwidth} \includegraphics[trim = {1.0cm 0 1.5cm 0}, clip, width=\textwidth]{Plots/Grid-3_with_REINFORCE_from_Grid-1_200.pdf} \label{fig:r_grid_3} \end{subfigure} \vspace*{-1cm} \caption{Transfer on Grid-3} \label{transfer_grid_3} \end{subfigure} \bigskip \centering \begin{subfigure}[h]{0.495\textwidth} \begin{subfigure}[b]{0.495\textwidth} \includegraphics[trim = {1.0cm 0 1.5cm 0}, clip, width=\textwidth]{Plots/Grid-6_with_VFunc_from_Grid-1_2000.pdf} \label{fig:vfunc_grid_3} \end{subfigure} \begin{subfigure}[b]{0.495\textwidth} \includegraphics[trim = {1.0cm 0 1.5cm 0}, clip, width=\textwidth]{Plots/Grid-6_with_REINFORCE_from_Grid-1_3000.pdf} \label{fig:r_grid_3} \end{subfigure} \vspace*{-1cm} \caption{Transfer on Grid-6} \label{transfer_grid_6} \end{subfigure} \centering \begin{subfigure}[h]{0.495\textwidth} \begin{subfigure}[b]{0.495\textwidth} \includegraphics[trim = {1.0cm 0 1.5cm 0}, clip, width=\textwidth]{Plots/Grid-7_with_VFunc_from_Grid-1_2000.pdf} \label{fig:vfunc_grid_3} \end{subfigure} \begin{subfigure}[b]{0.495\textwidth} \includegraphics[trim = {1.0cm 0 1.5cm 0}, clip, width=\textwidth]{Plots/Grid-7_with_REINFORCE_from_Grid-1_2000.pdf} \label{fig:r_grid_3} \end{subfigure} \vspace*{-1cm} \caption{Transfer on Grid-7} \label{transfer_grid_7} \end{subfigure} \bigskip \centering \vspace*{-0.85cm} \begin{subfigure}[h]{0.495\textwidth} \begin{subfigure}[b]{0.495\textwidth} \includegraphics[trim = {1.0cm 0 1.5cm 0}, clip, width=\textwidth]{Plots/Grid-8_with_VFunc_from_Grid-1_full.pdf} \label{fig:vfunc_grid_3} \end{subfigure} \begin{subfigure}[b]{0.495\textwidth} \includegraphics[trim = {1.0cm 0 1.5cm 0}, clip, width=\textwidth]{Plots/Grid-8_with_REINFORCE_from_Grid-1_full.pdf} \label{fig:r_grid_3} \end{subfigure} \vspace*{-1cm} \caption{Transfer on Grid-8} \label{transfer_grid_8} \end{subfigure} \caption{\textbf{Transfer Results on GridWorld:} Source env = Grid-1. Blue curve = Pretrained with VFunc, Purple curve = training from scratch, Red curve = Pretrained with REINFORCE. In each subfigure, left and right figures corresponds to retraining using VFunc and REINFORCE in the target environment, respectively. 
In almost all cases, pretraining with VFunc leads to faster learning of target policy.} \label{fig:transfer_gridworld} \end{figure*} \begin{figure}[h] \centering \begin{subfigure}[h]{0.5\textwidth} \includegraphics[trim = {0.25cm 1.4cm 0.25cm 0.5cm}, clip, width=\textwidth]{Rows/Row1.png} \label{fig:row1} \vspace*{-0.5cm} \caption{Sample heatmaps for Transfer with REINFORCE on Grid-2} \end{subfigure} \bigskip \centering \begin{subfigure}[h]{0.5\textwidth} \includegraphics[trim = {0.25cm 1.4cm 0.25cm 0.5cm}, clip, width=\textwidth]{Rows/Row2.png} \label{fig:row2} \vspace*{-0.5cm} \caption{Sample heatmaps for Transfer with VFunc on Grid-3} \end{subfigure} \bigskip \centering \begin{subfigure}[h]{0.5\textwidth} \includegraphics[trim = {0.25cm 1.4cm 0.25cm 0.5cm}, clip, width=\textwidth]{Rows/Row3.png} \label{fig:row3} \vspace*{-0.5cm} \caption{Sample heatmaps for Transfer with REINFORCE on Grid-7} \end{subfigure} \caption{\textbf{Heatmaps showing State Visitation Frequencies for 16 sampled policies for GridWorld}. The first and second column in each row corresponds to the case when Grid-1(source) is trained with REINFORCE and VFunc, respectively. The third column corresponds to training from scratch. Pre-training with VFunc results in policies which find successful paths between the start and goal states and at the same time are more diverse.} \label{heatMapsGridWorld} \end{figure} \subsection{Experiments} \subsubsection{Comparison with REINFORCE on GridWorld} We built upon the code base provided for VFunc by~\cite{bachman2018vfunc} and modified it to carry out transfer on different environments. In all experiments we took Grid-1 \ref{fig:grid-1} as the source environment where training is carried out using both REINFORCE~\cite{williams1992simple} and VFunc and the corresponding weights are stored. We then tested the performance on five different target environments(Grid-2, Grid-3, Grid-6, Grid-7 and Grid-8) with varying levels of difficulty and variation from the source environment. As can be seen from Figure~\ref{fig:gridworld}, we test on target environments where we add a horizontal wall in Grid-2, more vertical walls in Grid-3 and a combination of both horizontal and vertical walls in Grid-6. Grid-7 and Grid-8 are same as Grid-3 and Grid-6 respectively, but with the position of the goal state changed. We trained the source environment with both REINFORCE and VFunc (See Algorithm 1 and description from Figure~\ref{fig:formulation:arch}). This resulted in two sets of weights which were then used to initialize the network weights in the target environment. We then retrained the target environment with both REINFORCE and VFunc. For better comparison, we also trained the target environment from scratch (without initializing the network weights from the source environment) using both REINFORCE and VFunc. \subsubsection{Comparison with A2C on MiniGrid} We implemented Advantage Actor-Critic (called A2C henceforth) by~\cite{mnih2016asynchronous} as the policy-gradient algorithm and VFunc on top of it. The source environment was N2S4, i.e., a series of two rooms each of size four which was trained both with A2C and VFunc. We observed the performance while transferring the weights to N2S6 (two rooms of size six) and N3S4 (three rooms of size four) as the target environments and retraining using both A2C and VFunc. As before, we also trained the target environments from scratch without any weights initialized from the weights learned during training on the source environment. 
We experimented with both the \textbf{\textit{Static}} and \textbf{\textit{Dynamic}} settings. \begin{figure*}[h] \centering \begin{subfigure}[h]{0.495\textwidth} \begin{subfigure}[b]{0.495\textwidth} \includegraphics[trim = {0.75cm 0 1.0cm 0}, clip, width=\textwidth]{Plots/N2S6_with_VFunc_from_N2S4_full.pdf} \label{fig:vfunc_n2s6} \end{subfigure} \begin{subfigure}[b]{0.495\textwidth} \includegraphics[trim = {0.75cm 0 1.0cm 0}, clip, width=\textwidth]{Plots/N2S6_with_A2C_from_N2S4_3000.pdf} \label{fig:a2c_n2s6} \end{subfigure} \vspace*{-1cm} \caption{Transfer on N2S6 (Dynamic)} \label{transfer_n2s6_d} \end{subfigure} \centering \begin{subfigure}[h]{0.495\textwidth} \begin{subfigure}[b]{0.495\textwidth} \includegraphics[trim = {0.75cm 0 1.0cm 0}, clip, width=\textwidth]{Plots/N3S4_with_VFunc_from_N2S4_full.pdf} \label{fig:vfunc_n3s4} \end{subfigure} \begin{subfigure}[b]{0.495\textwidth} \includegraphics[trim = {0.75cm 0 1.0cm 0}, clip, width=\textwidth]{Plots/N3S4_with_A2C_from_N2S4_full.pdf} \label{fig:a2c_n3s4} \end{subfigure} \vspace*{-1cm} \caption{Transfer on N3S4 (Dynamic)} \label{transfer_n3s4_d} \end{subfigure} \bigskip \centering \begin{subfigure}[h]{0.495\textwidth} \begin{subfigure}[b]{0.495\textwidth} \includegraphics[trim = {1.0cm 0 1.0cm 0}, clip, width=\textwidth]{Plots/N2S6_with_VFunc_from_N2S4_full_static.pdf} \label{fig:vfunc_n2s6_static} \end{subfigure} \begin{subfigure}[b]{0.495\textwidth} \includegraphics[trim = {1.0cm 0 1.0cm 0}, clip, width=\textwidth]{Plots/N2S6_with_A2C_from_N2S4_full_static.pdf} \label{fig:a2c_n2s6_static} \end{subfigure} \vspace*{-1cm} \caption{Transfer on N2S6 (Static)} \label{transfer_n2s6_s} \end{subfigure} \centering \begin{subfigure}[h]{0.495\textwidth} \begin{subfigure}[b]{0.495\textwidth} \includegraphics[trim = {1.0cm 0 1.0cm 0}, clip, width=\textwidth]{Plots/N3S4_with_VFunc_from_N2S4_full_static.pdf} \label{fig:vfunc_n3s4_static} \end{subfigure} \begin{subfigure}[b]{0.495\textwidth} \includegraphics[trim = {1.0cm 0 1.0cm 0}, clip, width=\textwidth]{Plots/N3S4_with_A2C_from_N2S4_full_static.pdf} \label{fig:a2c_n3s4_static} \end{subfigure} \vspace*{-1cm} \caption{Transfer on N3S4 (Static)} \label{transfer_n3s4_s} \end{subfigure} \caption{\textbf{Transfer Results on MiniGrid:} Source env = N2S4. Blue curve = Pretrained with VFunc, Purple curve = training from scratch, Red curve = Pretrained with A2C. In each subfigure, left and right figures corresponds to retraining using VFunc and A2C in the target environment, respectively. In both static and dynamic settings and both target environments, pretraining with VFunc is useful for learning target policy.} \label{fig:transfer_minigrid} \end{figure*} \section{Results and Discussions} \subsection{Results on GridWorld} The performance was measured by plotting the cumulative reward averaged across episodes and parallel processes with respect to the number of updates. We perform runs with 3 or more random seeds. Figures \ref{fig:transfer_gridworld} shows the results for transfer to Grid-2, Grid-3, Grid-6, Grid-7 and Grid-8 after training on Grid-1 as source; as well as training on these target environments from scratch. In the plots, the blue curve corresponds to the case when the weights of the source environment trained on VFunc are used to do transfer on the target environment. The red curve corresponds to the same case except that the weights of the source environment come from REINFORCE. The purple curves correspond to training from scratch on the target environment. 
Shaded regions around each curve reflect the variability across runs carried out with multiple seeds. In terms of difficulty, Grid-2 should be the hardest target for transfer learning, as it differs the most from the source environment (Grid-1). It is also difficult to learn from scratch, as there is only one unblocked path, on the far left, which leads from the start to the goal state. As we can see from Figure~\ref{transfer_grid_2}, transferring with the VFunc-trained weights learns quickly. Training from scratch on Grid-2 performs worst; in fact, it is not able to learn anything useful. On Grid-3 and Grid-6 (Figures \ref{transfer_grid_3} and \ref{transfer_grid_6}), VFunc performs best. Grid-7 is comparatively easy to solve due to its high resemblance to the source environment, and VFunc emerges as a clear winner here (Figure~\ref{transfer_grid_7}), as it readily transfers the knowledge present in the distribution of policies learned in the source environment to the target environment. Grid-8, which is of medium difficulty and has a changed goal state, makes VFunc a less attractive choice for transfer learning; in fact, training from scratch is better here (Figure~\ref{transfer_grid_8}). In Figure \ref{heatMapsGridWorld}, we also show sample heatmaps of the state visitation frequency corresponding to roll-outs for 16 different sampled values of the latent variable $z$ for Grid-2, Grid-3 and Grid-7. We see that transfer with the VFunc-pretrained model finds successful paths between the start and goal states in all cases. In some cases, pretraining with REINFORCE or training from scratch on the target environment is not able to discover any path between the start and goal states. We also see that the target policies learned by VFunc are quite diverse (as indicated by occasional blueness in the trajectory heatmaps), suggesting their usefulness for exploration in the target environment. \subsection{Results on MiniGrid} Figures \ref{transfer_n2s6_d} and \ref{transfer_n3s4_d} show the results of transfer experiments carried out on N2S6 and N3S4 using N2S4 as the source environment in the \textbf{\textit{Dynamic}} setting. Figures \ref{transfer_n2s6_s} and \ref{transfer_n3s4_s} show the same for the \textbf{\textit{Static}} setting. The description of the legends is the same as above. All experiments correspond to runs with 5 different random seeds. As can be seen from the plots, training on N2S4 with VFunc and transferring using VFunc converges in fewer updates for both settings and for both environments, N2S6 and N3S4. When we retrain with A2C on the target environment, the performance of VFunc is on par with that of A2C-trained weights on N2S4. Training from scratch in the target environments is not effective here. We believe that better hyperparameter tuning will help VFunc outperform A2C pretraining for N3S4. Moreover, transfer to more complex environments should showcase the benefits of VFunc more clearly. \section{Conclusions and Future Work} We observe that learning the target policy after pre-training with VFunc leads to faster convergence compared to pre-training with other policy-gradient techniques (such as REINFORCE and A2C) on the source task, as well as compared to training from scratch on the target task. The difference becomes more apparent as the difficulty of the environment increases, in terms of its variation from the source environment and of the environment dynamics (partial observability, degree of stochasticity, etc.).
The explanation for the improved transfer performance of VFunc is that, during training on the source environment, the model learns a distribution of good policies to explore in the target environment. This leads to faster exploration and, hence, faster convergence. In addition, the learned policies are more diverse. In the future, we wish to explore the safe-AI perspective \cite{jain2018safe}: since VFunc learns a distribution over policies rather than a single optimal policy, it should be able to learn a few policies that avoid dangerous states (states with a very high penalty). We plan to experiment with the AI Safety Gridworlds of \cite{leike2017ai} for this task.
\section{Introduction} Insight into the three-dimensional (3-D) structure of proteins is fundamentally important to help us understand their cellular functions, their roles in disease mechanisms, and for structure based development of pharmaceuticals. Recent advancements in cryogenic electron microscopy (cryo-EM), including better detector technologies and data processing techniques, have enabled high-resolution imaging of proteins and large biological complexes at the atomic scale \cite{callaway2020}. To construct a structural, atomically detailed model for a protein, typically tens of thousands of single-particle images are collected, sorted and aligned to reconstruct a 3-D density map volume. Next, atomic coordinates are built into the density map. The latter routine is known as the map-to-model process, which typically requires a considerable amount of human intervention and inspection, notwithstanding the availability of automated tools to aid the process \cite{demaio2016, terwilliger2018a, terwilliger2018b}. Despite significant progress in machine learning techniques in 2-D or 3-D object detection \cite{huang2017, he2018, bochkovskiy2020, carion2020} and protein folding \cite{senior2020}, deep learning approaches to modeling atomic coordinates into cryo-EM densities remain relatively unexplored. Multiple research groups have proposed convolutional neural networks (CNNs) for detecting amino acid residues in a cryo-EM density map, but either did not address the final map-to-model step \cite{li2016, rozanov2018, subramaniya2019, mostosi2020}, or use a conventional optimization algorithm to construct the final model (see Related Work). Conventional search algorithms have high time- and space-complexity, constituting a bottleneck for large protein complexes, and are unable to exploit rich structural information encoded in genetic information \cite{senior2020}. Here, we address these shortcomings by presenting an approach for protein structure determination from cryo-EM densities based entirely on neural networks. First, we use a 3D CNN with residual blocks \cite{he2015} we called RotamerNet to locate and predict amino acid and rotameric identities in the 3-D density map volume. Next, we apply a graph convolutional network (GCN) \cite{kipf2017} to create a graph embedding using the nodes with 3D structural information generated by RotamerNet. Inspired by the UniRep approach \cite{alley2019}, we then apply a bidirectional long short-term memory (LSTM) module to select and impose an ordering consistent with the protein sequence on the candidate amino acids, effectively generating a refined version of the graph with directed edges connecting amino acids (Fig. \ref{fig:intro}). We trained our LSTM on sequence data (UniRef50) alone, taking advantage of structural information encoded in the vast amount of genetic information \cite{senior2020}. In this paper we focus on the protein structure generation part with a GCN and an LSTM, which together we termed the Structure Generator. Our main contributions are: \begin{itemize} \item The first, to our knowledge, entirely neural network based approach to generate a protein structure from a set of candidate 3D rotameric identities and positions. \item Exploitation of genetic information learned from UniRef50 sequences to help generate a 3-D structure from cryo-EM data using a GCN embedding and a bidirectional LSTM. \end{itemize} \begin{figure}[htbp] \centerline{\includegraphics[scale=1]{schematic.png}} \caption{Overview of the map-to-model pipeline. 
The present work focuses on the bottom panel (shaded box), determining a structural model consistent with the protein sequence from candidate amino acid positions.} \label{fig:intro} \end{figure} \section{Related work} \subsection{Map-to-model for cryo-EM maps} Over the last few years, cryo-EM has evolved as a major experimental technique for determining novel structures of large proteins and their complexes. Computational techniques to process and analyze the data, and build protein structures are challenged by this avalanche of data. For example, widely-used {\it de novo} cryo-EM structure determination tools, e.g., \verb+phenix.map_to_model+ \cite{terwilliger2018a, terwilliger2018b} or \verb+rosettaCM+ \cite{Song2013} partially automate cryo-EM data interpretation and reconstruction, but typically take many hours to generate a preliminary model and can require significant manual intervention. The underlying algorithms are often decades old, and are difficult to adapt to faster (e.g. graph processing unit, GPU) architectures. It will be critical to modernize these approaches, and capitalize on recent advances of deep learning and GPUs to expedite this procedure. Several deep-learning based approaches for protein structure determination from cryo-EM data have been proposed. Li and coworkers \cite{li2016} introduced a CNN-based approach to annotate the secondary structure elements in a density map, an approach later also proposed by Subramaniya \textit{et al.} and Mostosi \textit{et al.} \cite{subramaniya2019, mostosi2020}. The feasibility of an end-to-end map-to-model pipeline with deep learning has also been explored. Xu and colleagues trained a number of 3-D CNNs with simulated data to localize and identify amino acid residues in a density map and use a Monte-Carlo Tree Search (MCTS) algorithm to build the protein backbone \cite{xu2019}. Using an entirely different architecture, Si and coworkers divided the map-to-model procedure into several tasks addressed by a cascade of CNNs. However, their procedure also relied on a conventional Tabu-Search algorithm to produce the final protein model \cite{si2020}. \subsection{Graph neural networks} Graph neural networks \cite{scarselli2009, kipf2017} are natural representations for molecular structures with atoms as nodes and covalent bonds as edges. Duvenaud and coworkers pioneered this approach using a GCN to learn molecular fingerprints, which are important in drug design \cite{duvenaud2015}. Numerous other applications of GCNs to predict or generate molecular properties can be found in the literature. For example, Li and colleagues demonstrated the utility of a generative GCN to construct 3-D molecules from SMILES strings, among other applications \cite{li2018}. \subsection{Long short-term memory} Recurrent neural networks (RNNs) are widely used in natural language processing tasks. Their architecture is designed to process, classify, or predict properties of sequences as input and can output sequences with desired properties \cite{graves2013}. Among many RNN architectures, the LSTM model was proposed to address the gradient vanishing problem for long sequences \cite{hochreiter1997} and a number of variants have since been studied to further increase its capacity, such as multi-layer and bidirectional LSTMs. LSTMs are often used in conjunction with other neural network models. 
An image captioning system, for example, can be realized by using a 2-D CNN that extracts high-dimensional features from an image and an LSTM that outputs a sentence describing the input image \cite{donahue2015}. \section{Method} \subsection{Model} In this section, we present the Structure Generator, a neural network model for protein model building consisting of a GCN and, subsequently, a bidirectional LSTM module. The input for the Structure Generator is a set of nodes labeled with 3-D coordinates and amino acid identity. To generate a set of candidate amino acids, we previously implemented RotamerNet (unpublished), a 3-D CNN based on the ResNet architecture \cite{he2015} that can identify amino acids and their rotameric identities in an EM map. This set of candidate amino acids is not constrained by the sequence, and their 3-D locations are determined based entirely on their density profiles. The set can contain false positives (an amino acid rotamer is proposed at a location where there is none) or false negatives (a correct amino acid rotamer is not identified). RotamerNet outputs an amino acid and rotamer identity together with proposed coordinates for its C$\alpha$ atom. In the remainder, we will only consider the amino acid identity. A node $v$ is a proposed amino acid identity together with its C$\alpha$ coordinates. Next, we generate a C$\alpha$ contact map for all predicted C$\alpha$ coordinate locations. We connect any two proposed C$\alpha$ atoms whose distance is less than a given threshold ($4.0 \mathrm{\AA{}}$) with an undirected edge. We represent the input with two matrices: an $m$ by $20$ matrix of node features and an $m$ by $m$ adjacency matrix that describes the connectivity between nodes. We generate a high-dimensional embedding for each node $v$, $\mathbf{H}_{\mathrm{node}} = \mathrm{GCN}(\mathbf{A}, \mathbf{F})$, where $\mathbf{H} = [h_{\mathrm{node}}^{(1)\intercal}, \ldots, h_{\mathrm{node}}^{(m)\intercal}]^{\intercal}$, $\mathbf{A}$ is the adjacency matrix with $a_{i,j} = 1$ for each neighbor pair $(i, j)$ or the node itself, i.e. $i=j$, and $\mathbf{F} = [f_{\mathrm{node}}^{(1)\intercal}, \ldots, f_{\mathrm{node}}^{(m)\intercal}]^{\intercal}$ are the input features. Features are generated with $f_{\mathrm{node}}^{(v)} = \mathrm{NN}(s^{(v)})$, where $s \in \mathbb{R}^{20}$ is the normalized softmax score vector for a node $v$ obtained from RotamerNet and $\mathrm{NN}\left(\cdot\right)$ denotes a single-layer neural network. We implemented the GCN module following \cite{kipf2017} (Fig. \ref{fig:archi}(a)). Note that the GCN can be applied in $T$ layers to propagate messages, thereby increasing the capacity of the network \cite{duvenaud2015, li2018}. As depicted in Fig. \ref{fig:archi}(a), in each GCN layer, messages propagate through edges, sharing the embedding of a node with its neighbors. For example, when $T=2$, $\mathbf{H}_{\mathrm{node}} = \mathrm{GCN}^{(2)}(\mathbf{A}, \mathrm{GCN}^{(1)}(\mathbf{A}, \mathbf{F}))$, and these two GCN layers can share the same set of parameters or have different sets. The Structure Generator then uses a bidirectional LSTM module as a decoder for refined protein chain generation. We use zero vectors for the initial hidden and cell states. At each time step $t$, an embedding of the amino acid at position $t$ in the sequence, $h_{\mathrm{seq}}^{(t)} = \mathrm{NN}\left(c_{\mathrm{seq}}^{(t)}\right)$, where $c_{\mathrm{seq}}^{(t)} \in \mathbb{R}^{20}$ is its one-hot encoding, is fed into the LSTM cell.
The cell output at each time step, $h_{P}^{(t)}$, can be viewed as the current graph representation for $P$, the protein structure to be built. A score $z_{\mathrm{node}}^{(v)} \in \mathbb{R}$ for a candidate node to be selected as the next node added to $P$ is determined by $z_{\mathrm{node}}^{(v)} = \mathrm{NN}\left(h_{\mathrm{add}}^{(v)} + h_{P}^{(t)}\right)$, where $h_{\mathrm{add}}^{(v)} = \mathrm{NN}\left(h^{(v)}\right)$. At each time step $t$, the node with the highest softmax score $p_{\mathrm{node}}^{(v)} = \exp(z_{\mathrm{node}}^{(v)}) / \sum_{v'} \exp(z_{\mathrm{node}}^{(v')})$ is selected and added to $P$. The selection process continues until the end of the sequence, $t=N$, where $N$ is the length of the sequence, is reached, at which point the cross-entropy loss is computed over the entire sequence in the training stage, or the position-wise accuracy in the inference stage. We implemented the decoder with a bidirectional LSTM, in which the outputs from one LSTM fed with the forward sequence and another fed with the backward sequence are concatenated to obtain $h_{P}^{(t)}$ for each time step $t$. We found that a bidirectional LSTM consistently outperformed a unidirectional LSTM. We also found that using the ensemble of inference results with a forward (from the N-terminus) and a backward (from the C-terminus) sequence further improves the accuracy. Fig. \ref{fig:archi}(b) illustrates the generation process with the sequence as the input at each time step (top) and the best corresponding node prediction as output (bottom). Importantly, the sequence information is used both in the training and inference stages to guide the protein modeling. \begin{figure}[htbp] \centerline{\includegraphics[scale=1]{architecture.png}} \caption{Architecture of the Structure Generator. (a) A graph convolutional network allows the embedding of each node to communicate through edges for $T$ rounds of propagation. (b) Protein sequence (SE...Q) is fed to the bidirectional LSTM to guide the modeling. The outputs from the forward ($f$) and backward ($b$) LSTM state at each time step are concatenated to predict the best node to add to the protein $P$. $N$ and $M$ are the length of the sequence and the number of nodes in the graph, respectively.} \label{fig:archi} \end{figure} \subsection{Training data} To train the Structure Generator, we randomly selected 1,000,000 and 100 sequences with lengths in $[50, 450]$ from the UniRef50 dataset \cite{suzek2015} for the training and validation sets, respectively. The remaining sequences (approximately 30 million) are kept untouched for future use. The median and mean sequence lengths in the validation set are $174$ and $201.3$, respectively. RotamerNet was trained on simulated density profiles of proteins, generated as follows. We selected high-quality protein structures from the Protein Data Bank, with resolution between $1.4$ and $1.8 \mathrm{\AA{}}$. We used \verb+phenix.fmodel+ to generate electron scattering factors with $10\%$ noise to simulate the cryo-EM density maps for $18,893$ protein structures, $98\%$ of which were used to train RotamerNet. The order of the amino acid residues in a given protein structure is shuffled. Because the UniRef50 dataset contains only protein sequences, we assumed perfect C$\alpha$-C$\alpha$ contact maps and simulated the input features, i.e., a normalized softmax score vector $s \in \mathbb{R}^{20}$ with $s_i = |\epsilon_i|$, $\epsilon_i \sim \mathcal{N}(0, 0.01)$, for $i \neq j$ and $s_j = 1 - \sum_{i \neq j} s_i$, where $j$ is the index corresponding to the ground-truth amino acid identity. The ground-truth sequence is used in both the training and inference stages.
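To make the graph construction and simulated features described above concrete, the following minimal NumPy sketch builds the C$\alpha$ contact-map adjacency matrix using the $4.0\,\mathrm{\AA{}}$ threshold, simulates the noisy softmax score vectors, and applies one graph-convolution propagation step in the spirit of \cite{kipf2017}. The coordinates, weights, and dimensions are illustrative placeholders rather than the trained model, and the sketch omits the bidirectional LSTM decoder.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, d_in, d_hid = 120, 20, 128       # nodes, amino acid types, embedding size

# Candidate C-alpha coordinates (placeholder values); two candidates are
# connected (and every node to itself) if their distance is below 4.0 Angstrom.
coords = rng.uniform(0.0, 50.0, size=(m, 3))
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
A = (dist < 4.0).astype(float)

# Simulated node features: noisy scores peaked at the true identity j.
j = rng.integers(0, d_in, size=m)
F = np.abs(rng.normal(0.0, 0.01, size=(m, d_in)))
F[np.arange(m), j] = 0.0
F[np.arange(m), j] = 1.0 - F.sum(axis=1)

# One GCN propagation step: H = relu(D^{-1/2} A D^{-1/2} F W).
deg = A.sum(axis=1)
A_norm = A / np.sqrt(np.outer(deg, deg))
W = 0.1 * rng.standard_normal((d_in, d_hid))    # untrained, illustrative weights
H = np.maximum(A_norm @ F @ W, 0.0)
print(H.shape)                                  # (m, d_hid) node embeddings
\end{verbatim}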
\subsection{Training the Structure Generator} We trained the Structure Generator with the ADAM optimizer with batch size $1$ and a learning rate of $0.001$ for the first 100,000 iterations, decreased to $0.0001$ for the rest. The sum of the cross-entropy losses at each sequence position, computed from the ground-truth node index $j_t$ and the vector of normalized scores $p^{(t)} \in \mathbb{R}^{m}$, \begin{equation} \mathrm{Loss} = -\sum_{t=1}^{n} \log p^{(t)}_{j_t}, \end{equation} where $n$ is the length of the ground-truth sequence and $m$ is the number of nodes in the raw graph, is calculated and back-propagated through the entire network, i.e., the LSTM and then the GCN. In the inference stage, the average accuracy \begin{equation} \mathrm{AA} = \frac{1}{K}\sum_{\mathrm{prot}} \frac{1}{N_{\mathrm{prot}}}\sum_{t=1}^{N_\mathrm{prot}} \mathbbm{1}(\hat{j}_t = j_t), \end{equation} i.e., the fraction of amino acids whose identity was predicted correctly, is used to evaluate the performance of the Structure Generator on a set of $K$ protein structures, where $N_{\mathrm{prot}}$ denotes the sequence length of a protein, and $j_t$ and $\hat{j}_t$ are the ground-truth and predicted node index for the $t$-th step in the LSTM, respectively. We trained the GCN with sequence embedding dimension $32$, node embedding dimension $128$ and LSTM hidden state dimension $2 \times 256$ ($256$ for each direction). During training, we added $(500 - n)$ dummy (false positive) nodes with random edges in each iteration to complicate the training data. \section{Results} We first examined the effect of the GCN on structure determination. We found that increasing the number of GCN layers can dramatically improve the average accuracy on the validation set. For example, using two rather than one GCN layer, i.e., going from $T=1$ to $T=2$, yields a $20\%$ improvement (Fig. \ref{fig:training}(a)). Encouraged by this improvement, we further trained the Structure Generator with $T=\{3,4\}$. Fig. \ref{fig:training}(a) shows the error rate ($1 - \textrm{AA}$) curves on the validation data for different numbers of GCN layers $T$. While increasing to $T=3$ gives another $0.3\%$ increase in average accuracy, $T=4$ adds only $0.04\%$ (Table \ref{table:ensemble}). This observation suggests that $T=2$, which can be interpreted as learning 5-mer spatial motifs in the graph (see the discussion in Section~\ref{sec:rotamernet}), is sufficient for the model to capture the implicit structural information in the graph and the sequence. In the remainder, unless stated otherwise, we fixed $T=2$ for inference in all experiments. Fig. \ref{fig:training}(b) shows the error counts for the 100 protein structures in the validation set as a function of sequence length, suggesting that the error counts increase only mildly with the length of the sequence. \begin{figure}[htbp] \centerline{\includegraphics[scale=1]{uniref50_training.png}} \caption{Validation results. $N=100$. (a) Error rate, defined as $1 - \textrm{average accuracy}$, vs. training iterations for GCNs with different numbers of layers. (b) Error counts, i.e. the number of incorrect amino acid assignments in a protein structure as a function of sequence length for the $T=2$ model.} \label{fig:training} \end{figure} To study the efficacy of the GCN, we also tested a GCN with $T=0$, i.e., the node features $\mathbf{F}$ are added directly to the LSTM outputs.
This model dramatically reduced the average accuracy to $0.0347$, which is approximately the probability of randomly selecting the correct node out of candidate nodes that have the amino acid identity matching the sequence input. This result indicates that a GCN embedding of RotamerNet's output is required for the LSTM to predict an ordered graph consistent with the sequence and the GCN. \begin{table}[] \centering \caption{\label{table:ensemble} Average accuracy on various validation datasets with different GCN layer and inference settings.} \begin{tabular}{rccrrrr} Dataset & Inference & \multicolumn{1}{r}{$T=0$} & $T=1$ & $T=2$ & $T=3$ & $T=4$ \\ \hhline{=======} \multirow{2}{*}{UniRef50} & Forward & \multicolumn{1}{r}{0.0347} & 0.7952 & 0.9949 & 0.9981 & 0.9985 \\ & Ensemble & \multicolumn{1}{r}{0.0347} & 0.8383 & 0.9954 & 0.9986 & 0.9987 \\ \hline \multirow{2}{*}{ProteinNet} & Forward & - & 0.8214 & 0.9899 & 0.9957 & 0.9960 \\ & Ensemble & - & 0.8577 & 0.9908 & 0.9960 & 0.9963 \\ \hline \multicolumn{1}{l}{\multirow{2}{*}{RotamerNet}} & Forward & - & 0.6162 & 0.7443 & 0.6912 & 0.6853 \\ \multicolumn{1}{l}{} & Ensemble & - & 0.6409 & 0.7538 & 0.7060 & 0.6965 \end{tabular} \end{table} \subsection{Performance of the Structure Generator on the ProteinNet dataset} Next, we evaluated the Structure Generator on the ProteinNet data set, a standardized machine learning sequence-structure dataset with standardized splits for the protein structure prediction and design community \cite{alquraishi2019}. The CASP12 ProteinNet validation set used here has 224 structures with sequence lengths ranging from 20 to 689, with median $163$ and mean $204.4$. We selected the same parameters as those for the UniRef50 dataset to generate simulated feature vectors and generated C$\alpha$ contact maps based on the backbone atom coordinates from ProteinNet. We note that a small number of C$\alpha$ coordinates are absent in ProteinNet owing to lack of experimental data. Compared to the UniRef50 validation set, the ProteinNet validation set is therefore more challenging, as edges in the input graph can be missing. Validation results on ProteinNet are given in the second row of Table \ref{table:ensemble}. Nonetheless, well over $50\%$ of the structures in the data set are correctly predicted without any errors (Fig. \ref{fig:proteinnet}(a)). Remarkably, the Structure Generator can correctly predict amino acids for which the C$\alpha$ records are missing. For example, atomic coordinates for the first three and last two amino acids of prosurvival protein A1 (PDB ID 2vog) are missing, but the Structure Generator can still completely reconstruct the protein model (Fig. \ref{fig:proteinnet}(b)). Several amino acids, for example glutamine (Q), occur multiple times in the sequence. As a result, the last two rows in the Structure Generator output have repeating patterns (Fig. \ref{fig:proteinnet}(c)), which did not prevent the Structure Generator from predicting the correct nodes for each of the positions corresponding to glutamine. \begin{figure}[htbp] \centerline{\includegraphics[scale=1]{proteinnet_val.png}} \caption{Test on the ProteinNet CASP12 validation set. (A) Error counts, i.e. number of incorrect amino acid assignments in a protein structure as a function of sequence length, and the histogram, with the $T=2$ model. (b) Contact map of prosurvival protein A1 (PDB ID 2vog). Green pixels are contacts and purple pixels indicate where C$\alpha$ coordinates are unknown and thus the contacts are missing. 
(c) The output of the Structure Generator on 2vog. Red means higher probability whereas blue means less likely.} \label{fig:proteinnet} \end{figure} \subsection{Performance of the Structure Generator on RotamerNet data\label{sec:rotamernet}} Finally, to demonstrate the utility of our approach with an upstream machine learning approach, we tested the Structure Generator on output from RotamerNet. The RotamerNet validation data set consists of the amino acid type classification scores for simulated cryo-EM density maps from $45$ protein structures of various lengths from $15$ to $278$ with various nominal resolution ranging from $1.4$ to 1.8$\mathrm{\AA{}}$. The average accuracy of the RotamerNet validation set is $0.852$, meaning that a non-trivial fraction of input features for the Structure Generator is noisy or incorrect. Fig. \ref{fig:rotamernet}(a) shows the RotamerNet and the Structure Generator accuracy of the $45$ proteins with various sequence lengths. Remarkably, while the performance of the Structure Generator is largely limited by the RotamerNet accuracy (data points beneath the dashed gray line in Fig. \ref{fig:rotamernet}(a)), as indicated by the correlation, a number of proteins have higher Structure Generator accuracy than RotamerNet accuracy, suggesting that the Structure Generator can tolerate and recover errors from upstream machine learning approaches. To understand the characteristics of the Structure Generator, we plot the confusion matrix for the C-terminal calponin homology domain of alpha-parvin (PDB ID 2vzg). Among those amino acids whose identity and position are correctly predicted by RotamerNet and the Structure Generator (Fig. \ref{fig:rotamernet}(b), blue dots on the diagonal), there are two red dots indicating that the prediction error from RotamerNet does not necessarily prevent the Structure Generator from making correct predictions. Again we point out that the Structure Generator has been trained only on UniRef50 sequences and simulated features, and has not been fine-tuned with the RotamerNet data. We anticipate that either doing so or training with the upstream model will further enhance the performance of the Structure Generator. We observe in Table \ref{table:ensemble} that the $T=2$ model performs best on the RotamerNet data. The Structure Generator relies on learning the correlation between the graph embedding and the motifs in the sequence. Increasing the number of GCN layers in principle allows the Structure Generator to recognize longer n-grams and spatial motifs of increased connectivity length. However, such correspondences will become increasingly noisy as lengths increase. Based on this observation, we therefore suggest that $T=2$ is a practical choice. \section{Conclusion} Building an atomic model into a map is a time- and labor-intensive step in single particle cryo-EM structure determination, and mostly relies on traditional search algorithms that cannot exploit recent advancements in GPU computing and deep learning. To address these shortcomings, we have presented the Structure Generator, a full-neural network pipeline that can build a protein structural model from a set of unordered candidate amino acids generated by other machine learning models. Our experiments show that a GCN can effectively encode the output from the upstream model as a graph while a bidirectional LSTM can precisely decode and generate a directed amino acid chain, even when the input contains false or erroneous entries. 
Our experiments suggest that a two-layer GCN is sufficient for processing the raw graph while preventing over-fitting to the training data. The Structure Generator exploits genetic information to guide the protein structure generation, and showed promising results on the RotamerNet data set without fine tuning. Training on the ProteinNet dataset and fine-tuning on the RotamerNet dataset will further enhance performance. While a practical machine learning model for cryo-EM map-to-model is still a work-in-progress, in part because of the lack of high-resolution experimental data \cite{nakane2020} for training, our proposed framework can complement the existing approaches and ultimately pave ways toward a fully trainable end-to-end machine learning map-to-model pipeline, making human intervention-free protein modelling in a fraction of a minute possible. \begin{figure}[t] \vspace{-4.5cm} \centerline{\includegraphics[scale=1]{rotamernet_results.png}} \caption{Results on the RotamerNet data. (a) Structure Generator accuracy vs. RotamerNet accuracy. Each data point represents a protein structure. The color code indicates the length of the structure. (b) Amino acid position of a select structure, PDB ID 2vzg, predicted by the Structure Generator. Red pixels are where RotamerNet made incorrect prediction on the amino acid type. Sequence on the top is derived from RotamerNet output and on the right is the ground truth.} \label{fig:rotamernet} \end{figure} \section*{Current affiliation} This work was initiated when S.H.d.O. and H.v.d.B. were at SLAC National Accelerator Laboratory. S.H.d.O. is currently at Frontier Medicines, CA, USA. In addition to his position at Atomwise, H.v.d.B. is on the faculty of the Department of Bioengineering and Therapeutic Sciences, University of California, San Francisco, CA, USA.
\section{Introduction} Two classfiles namely \file{cas-sc.cls} and \file{cas-dc.cls} were written for typesetting articles submitted in journals of Elsevier's Complex Article Service (CAS) workflow. \subsection{Usage} \begin{enumerate} \item \file{cas-sc.cls} for single column journals. \begin{vquote} \documentclass[<options>]{cas-sc} \end{vquote} \item \file{cas-dc.cls} for single column journals. \begin{vquote} \documentclass[<options>]{cas-dc} \end{vquote} \end{enumerate} and have an option longmktitle to handle long front matter. \section{Front matter} \begin{vquote} \title [mode = title]{This is a specimen $a_b$ title} \tnotemark[1,2] \tnotetext[1]{This document is the results of the research project funded by the National Science Foundation.} \tnotetext[2]{The second title footnote which is a longer text matter to fill through the whole text width and overflow into another line in the footnotes area of the first page.} \author[1,3]{CV Radhakrishnan}[type=editor, auid=000,bioid=1, prefix=Sir, role=Researcher, orcid=0000-0001-7511-2910] \cormark[1] \fnmark[1] \ead{[email protected]} \ead[url]{www.cvr.cc, [email protected]} \end{vquote} \begin{vquote} \credit{Conceptualization of this study, Methodology, Software} \address[1]{Elsevier B.V., Radarweg 29, 1043 NX Amsterdam, The Netherlands} \author[2,4]{Han Theh Thanh}[style=chinese] \author[2,3]{CV Rajagopal}[% role=Co-ordinator, suffix=Jr, ] \fnmark[2] \ead{[email protected]} \ead[URL]{www.sayahna.org} \credit{Data curation, Writing - Original draft preparation} \address[2]{Sayahna Foundation, Jagathy, Trivandrum 695014, India} \author[1,3]{Rishi T.} \cormark[2] \fnmark[1,3] \ead{[email protected]} \ead[URL]{www.stmdocs.in} \address[3]{STM Document Engineering Pvt Ltd., Mepukada, Malayinkil, Trivandrum 695571, India} \cortext[cor1]{Corresponding author} \cortext[cor2]{Principal corresponding author} \fntext[fn1]{This is the first author footnote. but is common to third author as well.} \fntext[fn2]{Another author footnote, this is a very long footnote and it should be a really long footnote. But this footnote is not yet sufficiently long enough to make two lines of footnote text.} \end{vquote} \begin{vquote} \nonumnote{This note has no numbers. In this work we demonstrate $a_b$ the formation Y\_1 of a new type of polariton on the interface between a cuprous oxide slab and a polystyrene micro-sphere placed on the slab. } \begin{abstract}[S U M M A R Y] This template helps you to create a properly formatted \LaTeX\ manuscript. \noindent\texttt{\textbackslash begin{abstract}} \dots \texttt{\textbackslash end{abstract}} and \verb+\begin{keyword}+ \verb+...+ \verb+\end{keyword}+ which contain the abstract and keywords respectively. Each keyword shall be separated by a \verb+\sep+ command. 
\end{abstract} \begin{keywords} quadrupole exciton \sep polariton \sep \WGM \sep \BEC \end{keywords} \maketitle \end{vquote} \begin{figure} \includegraphics[width=\textwidth]{sc-sample.pdf} \caption{Single column output (classfile: cas-sc.cls).} \end{figure} \begin{figure} \includegraphics[width=\textwidth]{dc-sample.pdf} \caption{Double column output (classfile: cas-dc.cls).} \end{figure} \subsection{Title} \verb+\title+ command have the below options: \begin{enumerate} \item \verb+title:+ Document title \item \verb+alt:+ Alternate title \item \verb+sub:+ Sub title \item \verb+trans:+ Translated title \item \verb+transsub:+ Translated sub title \end{enumerate} \begin{vquote} \title[mode=title]{This is a title} \title[mode=alt]{This is a alternate title} \title[mode=sub]{This is a sub title} \title[mode=trans]{This is a translated title} \title[mode=transsub]{This is a translated sub title} \end{vquote} \subsection{Author} \verb+\author+ command have the below options: \begin{enumerate} \item \verb+auid:+ Author id \item \verb+bioid:+ Biography id \item \verb+alt:+ Alternate author \item \verb+style:+ Style of author name chinese \item \verb+prefix:+ Prefix Sir \item \verb+suffix:+ Suffix \item \verb+degree:+ Degree \item \verb+role:+ Role \item \verb+orcid:+ ORCID \item \verb+collab:+ Collaboration \item \verb+anon:+ Anonymous author \item \verb+deceased:+ Deceased author \item \verb+twitter:+ Twitter account \item \verb+facebook:+ Facebook account \item \verb+linkedin:+ LinkedIn account \item \verb+plus:+ Google plus account \item \verb+gplus:+ Google plus account \end{enumerate} \begin{vquote} \author[1,3]{Author Name}[type=editor, auid=000,bioid=1, prefix=Sir, role=Researcher, orcid=0000-0001-7511-2910, facebook=<facebook id>, twitter=<twitter id>, linkedin=<linkedin id>, gplus=<gplus id>] \end{vquote} \subsection{Various Marks in the Front Matter} The front matter becomes complicated due to various kinds of notes and marks to the title and author names. Marks in the title will be denoted by a star ($\star$) mark; footnotes are denoted by super scripted Arabic numerals, corresponding author by of an Conformal asterisk (*) mark. \subsubsection{Title marks} Title mark can be entered by the command, \verb+\tnotemark[<num>]+ and the corresponding text can be entered with the command \verb+\tnotetext[<num>]+ \verb+{<text>}+. An example will be: \begin{vquote} \title[mode=title]{Leveraging social media news to predict stock index movement using RNN-boost} \tnotemark[1,2] \tnotetext[1]{This document is the results of the research project funded by the National Science Foundation.} \tnotetext[2]{The second title footnote which is a longer text matter to fill through the whole text width and overflow into another line in the footnotes area of the first page.} \end{vquote} \verb+\tnotetext+ and \verb+\tnotemark+ can be anywhere in the front matter, but shall be before \verb+\maketitle+ command. \subsubsection{Author marks} Author names can have many kinds of marks and notes: \begin{vquote} footnote mark : \fnmark[<num>] footnote text : \fntext[<num>]{<text>} affiliation mark : \author[<num>] email : \ead{<emailid>} url : \ead[url]{<url>} corresponding author mark : \cormark[<num>] corresponding author text : \cortext[<num>]{<text>} \end{vquote} \subsubsection{Other marks} At times, authors want footnotes which leave no marks in the author names. The note text shall be listed as part of the front matter notes. Class files provides \verb+\nonumnote+ for this purpose. 
The usage \begin{vquote} \nonumnote{<text>} \end{vquote} \noindent and should be entered anywhere before the \verb+\maketitle+ command for this to take effect. \subsection{Abstract and Keywords} Abstract shall be entered in an environment that starts with \verb+\begin{abstract}+ and ends with \verb+\end{abstract}+. Longer abstracts spanning more than one page is also possible in Class file even in double column mode. We need to invoke longmktitle option in the class loading line for this to happen smoothly. The key words are enclosed in a \verb+{keyword}+ environment. \begin{vquote} \begin{abstract} This is a abstract. \lipsum[3] \end{abstract} \begin{keywords} First keyword \sep Second keyword \sep Third keyword \sep Fourth keyword \end{keywords} \end{vquote} \section{Main Matter} \subsection{Tables} \subsubsection{Normal tables} \begin{vquote} \begin{table} \caption{This is a test caption.} \begin{tabular*}{\tblwidth}{@{} LLLL@{} } \toprule Col 1 & Col 2\\ \midrule 12345 & 12345\\ 12345 & 12345\\ 12345 & 12345\\ \bottomrule \end{tabular*} \end{table} \end{vquote} \subsubsection{Span tables} \begin{vquote} \begin{table*}[width=.9\textwidth,cols=4,pos=h] \caption{This is a test caption.} \begin{tabular*}{\tblwidth}{@{} LLLLLL@{} } \toprule Col 1 & Col 2 & Col 3 & Col4 & Col5 & Col6 & Col7\\ \midrule 12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\ 12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\ 12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\ \bottomrule \end{tabular*} \end{table*} \end{vquote} \subsection{Figures} \subsubsection{Normal figures} \begin{vquote} \begin{figure} \centering \includegraphics[scale=.75]{Fig1.pdf} \caption{The evanescent light - $1S$ quadrupole coupling ($g_{1,l}$) scaled to the bulk exciton-photon coupling ($g_{1,2}$). The size parameter $kr_{0}$ is denoted as $x$ and the \PMS is placed directly on the cuprous oxide sample ($\delta r=0$, See also Fig. \protect\ref{FIG:2}).} \label{FIG:1} \end{figure} \end{vquote} \subsubsection{Span figures} \begin{vquote} \begin{figure*} \centering \includegraphics[width=\textwidth,height=2in]{Fig2.pdf} \caption{Schematic of formation of the evanescent polariton on linear chain of \PMS. The actual dispersion is determined by the ratio of two coupling parameters such as exciton-\WGM coupling and \WGM-\WGM coupling between the microspheres.} \label{FIG:2} \end{figure*}\end{vquote} \subsection{Theorem and theorem like environments} CAS class file provides a few hooks to format theorems and theorem like environments with ease. All commands the options that are used with \verb+\newtheorem+ command will work exactly in the same manner. Class file provides three commands to format theorem or theorem like environments: \begin{enumerate} \item \verb+\newtheorem+ command formats a theorem in \LaTeX's default style with italicized font for theorem statement, bold weight for theorem heading and theorem number typeset at the right of theorem heading. It also optionally accepts an argument which will be printed as an extra heading in parentheses. 
Here is an example coding and output: \begin{vquote} \newtheorem{theorem}{Theorem} \begin{theorem}\label{thm} The \WGM evanescent field penetration depth into the cuprous oxide adjacent crystal is much larger than the \QE radius: \begin{equation*} \lambda_{1S}/2 \pi \left({\epsilon_{Cu2O}-1} \right)^{1/2} = 414 \mbox{ \AA} \gg a_B = 4.6 \mbox{ \AA} \end{equation*} \end{theorem} \end{vquote} \item \verb+\newdefinition+ command does exactly the same thing as with except that the body font is up-shape instead of italic. See the example below: \begin{vquote} \newdefinition{definition}{Definition} \begin{definition} The bulk and evanescent polaritons in cuprous oxide are formed through the quadrupole part of the light-matter interaction: \begin{equation*} H_{int} = \frac{i e }{m \omega_{1S}} {\bf E}_{i,s} \cdot {\bf p} \end{equation*} \end{definition} \end{vquote} \item \verb+\newproof+ command helps to define proof and custom proof environments without counters as provided in the example code. Given below is an example of proof of theorem kind. \begin{vquote} \newproof{pot}{Proof of Theorem \ref{thm}} \begin{pot} The photon part of the polariton trapped inside the \PMS moves as it would move in a micro-cavity of the effective modal volume $V \ll 4 \pi r_{0}^{3} /3$. Consequently, it can escape through the evanescent field. This evanescent field essentially has a quantum origin and is due to tunneling through the potential caused by dielectric mismatch on the \PMS surface. Therefore, we define the \emph{evanescent} polariton (\EP) as an evanescent light - \QE coherent superposition. \end{pot} \end{vquote} \end{enumerate} \subsection{Enumerated and Itemized Lists} CAS class files provides an extended list processing macros which makes the usage a bit more user friendly than the default LaTeX list macros. With an optional argument to the \verb+\begin{enumerate}+ command, you can change the list counter type and its attributes. You can see the coding and typeset copy. \begin{vquote} \begin{enumerate}[1.] \item The enumerate environment starts with an optional argument `1.' so that the item counter will be suffixed by a period as in the optional argument. \item If you provide a closing parenthesis to the number in the optional argument, the output will have closing parenthesis for all the item counters. \item You can use `(a)' for alphabetical counter and `(i)' for roman counter. \begin{enumerate}[a)] \item Another level of list with alphabetical counter. \item One more item before we start another. \begin{enumerate}[(i)] \item This item has roman numeral counter. \end{vquote} \begin{vquote} \item Another one before we close the third level. \end{enumerate} \item Third item in second level. \end{enumerate} \item All list items conclude with this step. \end{enumerate} \section{Biography} \verb+\bio+ command have the below options: \begin{enumerate} \item \verb+width:+ Width of the author photo (default is 1in). \item \verb+pos:+ Position of author photo. \end{enumerate} \begin{vquote} \bio[width=10mm,pos=l]{tuglogo.jpg} \textbf{Another Biography:} Recent experimental \cite{HARA:2005} and theoretical \cite{DEYCH:2006} studies have shown that the \WGM can travel along the chain as "heavy photons". 
Therefore the \WGM acquires the spatial dispersion, and the evanescent quadrupole polariton has the form (See Fig.\ref{FIG:3}): \endbio \end{vquote} \section[CRediT...]{CRediT authorship contribution statement} Give the authorship contribution after each author as \begin{vquote} \credit{Conceptualization of this study, Methodology, Software} \end{vquote} To print the details use \verb+\printcredits+ \begin{vquote} \author[1,3]{V. {{\=A}}nand Rawat}[auid=000, bioid=1, prefix=Sir, role=Researcher, orcid=0000-0001-7511-2910] \end{vquote} \begin{vquote} \cormark[1] \fnmark[1] \ead{[email protected]} \ead[url]{www.cvr.cc, www.tug.org.in} \credit{Conceptualization of this study, Methodology, Software} \address[1]{Indian \TeX{} Users Group, Trivandrum 695014, India} \author[2,4]{Han Theh Thanh}[style=chinese] \author[2,3]{T. Rishi Nair}[role=Co-ordinator, suffix=Jr] \fnmark[2] \ead{[email protected]} \ead[URL]{www.sayahna.org} \credit{Data curation, Writing - Original draft preparation} . . . . . . . . . \printcredits \end{vquote} \section{Bibliography} For CAS categories, two reference models are recommended. They are \file{model1-num-names.bst} and \file{model2-names.bst}. Former will format the reference list and their citations according to numbered scheme whereas the latter will format according name-date or author-year style. Authors are requested to choose any one of these according to the journal style. You may download these from The above bsts are available in the following location for you to download: \url{https://support.stmdocs.in/wiki/index.php?title=Model-wise_bibliographic_style_files} \hfill $\Box$ \end{document} \section{My Appendix} \section{Acknowledgement} This work was supported by the National Natural Science Foundation of China under project No. 62002084, and partially supported by a key program of fundamental research from Shenzhen Science and Technology Innovation Commission (No. ZX20210035), Singapore Ministry of Education Academic Research Fund Tier 1 (Award No. 2018-T1-002-069), the National Research Foundation, Prime Ministers Office, Singapore under its National Cybersecurity R\&D Program (Award No. NRF2018NCR-NCR005-0001), the Singapore National Research Foundation under NCR Award Number NRF2018NCR-NSOE003-0001, NRF Investigatorship NRFI06-2020-0022. \bibliographystyle{cas-model2-names} \section{Introduction} \add{Massive amount of source code are being produced in people's daily lives and works, thus bridging the gap between source code and natural language has become a practically useful but challenging task.} Mitigating such gap will enable the semantics of source code being connected to natural language, which is critical for solving many important tasks, such as commit message generation. In the life cycle of software development, the commit messages on version control systems (e.g., GitHub, GitLab) are essential for developers to document the abstract code changes in high-level natural language summaries. One example of code commit message is shown in Figure \ref{fig:example}, where a line of code has been updated for more generic exception handling. The line marked with ``+'' in green background is the newly added code while the line in red background marked with ``-'' indicates code been deleted, and the corresponding commit message is shown at the top. 
High-quality commit messages allow developers to comprehend the high-level intuition behind the software evolution without diving into the low-level implementation details, which can significantly ease the collaboration and maintenance of large-scale projects \cite{buse2010automatically}. \begin{figure} \vspace{0.15cm} \centering \includegraphics[width=0.48\textwidth]{figures/example.pdf} \caption{An example of code commit and its corresponding commit message.} \label{fig:example} \end{figure} \add{In practice, however, the quality of commit messages is not guaranteed.} Dyer et al. \cite{dyer2013boa} report in their study that around 14\% of the Java projects on SourceForge leave commit messages completely blank. \add{Developers' intentional or unintentional negligence due to their lack of time and motivation both result in the sacrifice of commit messages' quality, let alone writing meaningful yet concise commit messages requires developers to grasp the essential ideas behind the code changes and explicitly summarize them from a holistic perspective, which is a skill that relies heavily on individual developer's expertise.} Even for the experienced experts, writing high-quality summaries for massive code commits still poses considerably extra workload. Therefore, automatic generation of high-quality commit messages becomes necessitated and many approaches have been proposed to address the needs. At the earlier stage, researchers adopt pre-defined templates to generate commit messages from extracted information \cite{buse2010automatically,cortes2014automatically,linares2015changescribe,shen2016automatic}. However, these rule-based methods require human developers to manually define templates. For the code commits that do not match any of the pre-defined rules, their approaches may fail in generating meaningful commit messages. For example, in Shen et al.'s work \cite{shen2016automatic}, their defined rules can only handle four stereotypical types of code commits \add{straightforwardly} as filling in the template ``Add [added information] at [method name]'' for in-method sentence modifications. To solve this issue, later works \cite{huang2017mining, liu2018nngen} leverage information retrieval techniques to reuse existing commit messages for incoming code commits. In spite of the improved flexibility, the quality of retrieved messages is still constrained by inconsistent variable/function names. With the advancement of neural machine translation (NMT), recent researchers treat commit message generation as a code-to-text translation task and utilize deep neural networks to model the relationship between code commits and commit messages \cite{jiang2017automatically, loyola2017neural,xu2019commit,liu2019generating}, which are claimed to achieve the state-of-the-art performance on the benchmark. Despite the comparative successes of deep learning models in code commit message generation, all of these studies suffer from three critical limitations. First, existing research generally adopts static embedding methods for code representation, mapping a code token to an identical vector \add{representation} regardless of its context. However, code data are essentially different from textual data considering the semantic gap between source code and natural language. For example, a single token alone, if in textual data, can represent partial semantics, but usually cannot convey any meaningful information in source code without a context. 
Second, prior studies simply take the whole code commit snippet as input without attending explicitly to the changed fragments. Third, existing NMT models for commit message generation are all recurrent-based, which has been evidenced to suffer from long-term dependency issue \cite{DBLP:journals/corr/BahdanauCB14}. \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/workflow.pdf} \caption{Workflow of {CoreGen}\xspace's two-stage framework. } \label{fig:flow} \end{figure*} In this paper, we propose a novel two-stage framework for code commit message generation, named {CoreGen}\xspace, to address the above limitations. Inspired by the recent success of pre-trained language models \cite{peters2018deep, devlin2018bert, radford2019language, song2019mass}, we propose to model the code semantics with contextualized code representations, endowing one identical code token with different embeddings based on the respective contextual information. By training the model to predict code changes, the model is also guided to put more attention on the changed fragments rather than the whole commit snippets. At the second stage, the learned code representations are preserved, and further fine-tuned for downstream commit message generation. Both stages are implemented based on Transformer \add{to overcome the drawbacks of recurrent-based models}. Experimental results on benchmark dataset indicate that {CoreGen}\xspace achieves the new state-of-the-art on code commit message generation. The main contributions of our work are summarized as follows: \begin{itemize} \item We propose a two-stage framework named {CoreGen}\xspace that \add{first in the field highlights the divergence between the two categories of code commits, and effectively exploits contextualized code representations by predicting either the code changes or masked code fragment according to the nature of commits, which is built upon the Transformer model, yet can be easily adapted to other model architectures such as RNN. } \item We empirically show that {CoreGen}\xspace significantly outperforms previous state-of-the-art models with at least 28.18\% improvement on BLEU-4 score. Our in-depth studies and comparison experiments \add{further demonstrate {CoreGen}\xspace's superior usefulness in speeding up the model convergence and performing well under low-resource settings.} \item \add{We highlight {CoreGen}\xspace's potentials in generalizing to other low-resource tasks by adopting similar contextualized representation learning tasks, and a promising future research direction of improving {CoreGen}\xspace by modeling more complicated code structural information. We have released our implementation details publicly\footnote{https://github.com/Flitternie/CoreGen} to facilitate future research. } \end{itemize} The rest of the paper is structured as follows. Section \ref{sec:approach} introduces our proposed two-stage framework. Section \ref{sec:experiment} and Section \ref{sec:result} describe the experimental setups and results. Section \ref{sec:discussion} provides \add{some detailed discussion around {CoreGen}\xspace}. Finally, Section \ref{sec:literature} reviews the related works and Section \ref{sec:con} concludes the paper. \section{Approach}\label{sec:approach} In this section, we introduce our approach, \underline{Co}ntextualized Code \underline{Re}presentation Learning for Commit Message \underline{Gen}eration ({CoreGen}\xspace), a two-stage framework for commit message generation. 
An overview of {CoreGen}\xspace is shown in Figure \ref{fig:flow}. {CoreGen}\xspace first learns contextualized code representation for the two separate categories of code commits via their respective representation learning strategy at Stage I, as illustrated in the right part of Figure \ref{fig:model}, then fine-tunes the whole model for downstream commit message generation task at Stage II, as shown in the left of the same figure. \add{Unlike previous works that neglect the divergence between the two categories of code commits, we recognize such difference and deliberately propose separate representation learning strategies to achieve more effective exploitation of the code contextual information. Also, please note that {CoreGen}\xspace's framework is orthogonal to the selection of specific model architecture, and can be easily generalized to include other code representation learning tasks, as explained in details in Section \ref{sec:discussion}. } \begin{figure} \centering \subfloat[A code commit with explicit code changes]{\includegraphics[width=0.48\textwidth]{figures/type1.pdf}}\hfill \subfloat[A code commit with implicit binary file changes]{\includegraphics[width=0.48\textwidth]{figures/type2.pdf}} \caption{Examples of two code commit categories: \add{(a) explicit code changes, in which line-by-line code modification can be easily detected; and (b) implicit binary file changes, where content changes cannot be examined in details.} } \label{fig:types} \end{figure} \subsection{Stage I: Contextualized Code Representation Learning} Code commits can be naturally categorized into two types: one with explicit code changes and another with implicit binary file changes, by their respective features as illustrated in Figure \ref{fig:types}. \add{To enrich code representations with the contextual information for more accurate commit message generation,} for each code commit, {CoreGen}\xspace performs automatic categorization, then trains the Transformer via its corresponding representation learning task to exploit contextualized code representations. The details are elaborated as the following. \subsubsection{Code Changes Prediction} The first category of code commits includes explicit code changes such as line addition, deletion, or modification. Generally, the lines are marked with special tokens at the beginning, e.g., ``+'' for addition and ``-'' for deletion. These changed code statements, comparing to the unchanged part of source code, play a much more crucial role in code commit message generation, since commit messages, by definition, should be summarizing the changes instead of the whole code snippets. For example, in Figure \ref{fig:types}(a), the commit message is primarily describing the changed code fragments (i.e., the lines in colored background) rather than the whole snippet that implements the class methods. Therefore, code changes prediction is designated as the contextualized code representation learning task for this category of code commits. Given a code commit sequence $X$, we preprocess and split the source code sequence into code-before-change and code-after-change subsequences, denoted as $X^{\textit{before}}$ and $X^{\textit{after}}$ respectively, by locating the special tokens marked in the code commits. If explicit code changes are identified, we train the Transformer network to predict the changes by modeling the relationship between $X^{\textit{before}}$ and $X^{\textit{after}}$. 
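To make this preprocessing concrete, the sketch below shows one way a diff-style commit could be split into $X^{\textit{before}}$ and $X^{\textit{after}}$; the ``+''/``-'' line markers and the ``<nl>'' separator follow the paper's description, while the function itself and its handling of unchanged context lines are illustrative assumptions.
\begin{verbatim}
def split_commit(diff_lines):
    """Split a diff-style commit into (X_before, X_after) token lists.

    Lines marked with '-' belong only to the code before the change,
    lines marked with '+' only to the code after it, and unmarked
    context lines are kept in both subsequences.
    """
    before, after = [], []
    for line in diff_lines:
        tokens = line.split()
        if not tokens:
            continue
        if tokens[0] == "-":
            before.extend(tokens[1:] + ["<nl>"])
        elif tokens[0] == "+":
            after.extend(tokens[1:] + ["<nl>"])
        else:
            before.extend(tokens + ["<nl>"])
            after.extend(tokens + ["<nl>"])
    return before, after

# A commit is routed to code changes prediction when the two differ.
before, after = split_commit([
    "public void handle ( ) {",
    "- throw new IOException ( ) ;",
    "+ throw new Exception ( ) ;",
    "}",
])
has_explicit_changes = before != after   # True for this example
\end{verbatim}
Commits for which the two subsequences coincide (e.g., commits consisting only of binary file changes) fall into the second category described below.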
Transformer \cite{vaswani2017attention} is a self-attention-based encoder-decoder architecture that has achieved the state-of-the-art performance in many machine translation benchmarks. In general, the encoder module reads the input sequence as a sequence of hidden representations and the decoder module converts the hidden representations into an output sequence by generating one token at a time. Specifically, we feed the code-before-changes sequences into the Transformer as input to predict the corresponding code-after-change sequences, as illustrated in the top right part of Figure \ref{fig:model}. Log likelihood is used as the objective function: \begin{equation} \mathlarger{\mathcal{L}}_{a}=- \sum_{X\in \mathds{C}_1} \sum_{i}\log \mathcal{P}(x^{\textit{after}}_i|X^{\textit{before}};\theta)\ , \end{equation} where $x^{\textit{after}}_i$ represents the $i$-th code token in the code-after-changes sequence $X$ to be predicted, $\mathds{C}_1$ refers to the code commit subcorpora with explicit code changes, i.e., $X^{\textit{after}} \neq X^{\textit{before}}$, and $\theta$ represents the Transformer model parameters to be learned. By predicting the code changes from their respective contexts, we explicitly guide the Transformer to put more attention to the changed code fragments and build up connections between the contextual code tokens and changed code tokens, thereby enriching the representations of code changes with their contextual information. \subsubsection{Masked Code Fragment Prediction} Another category of commits include implicit binary file changes where detailed modifications inside the binary files are not visible. For example, in Figure \ref{fig:types}(b), two binary files are added in the commit while no content changes can be examined in detail. To model the context of file changes, we randomly mask a fragment of the code commit sequence and learn the contextualized code representations by predicting the masked tokens from the remaining ones. Instead of randomly masking only one token as in BERT \cite{devlin2018bert}, we mask a fragment of tokens for the Transformer to model the context, considering that one single token in code snippet is generally of limited semantics. Given a code commit sequence $X$, we split it into $n$ different lines $\{X^1, X^2, ..., X^n\}$ using the special token ``<nl>'' and randomly mask a certain fragment of the longest line denoted as $X^k$. Then we train the Transformer to predict the masked code fragment $X^k_{u:v}$ from its context $\{X^1, ..., X^k_{\setminus u:v}, ..., X^n\}$, as illustrated in the bottom right part of Figure \ref{fig:model}. Log likelihood is again used as the objective function: \begin{equation} \mathlarger{\mathcal{L}}_{b}=- \sum_{X\in \mathds{C}_2}\sum_{i=u}^{v} \log \mathcal{P}(x^k_{i}|X^1, ..., X^k_{\setminus u:v}, ..., X^n;\theta)\ , \end{equation} where $x^k_i$ represents the $i$-th token in the masked line $X^k$ to be predicted, $\mathds{C}_2$ refers to the code commit subcorpora with implicit file changes, i.e., $X^{\textit{after}} = X^{\textit{before}}$, and the mask length is determined together by a mask rate $\phi$ and the length of the longest line $|X^k|$: \begin{equation} |X^k_{u:v}|=\phi \cdot |X^k|\ . \end{equation} By predicting the masked code fragments based on their contexts, contextual information is incorporated into Transformer's embedding layer and encoder-decoder modules, which altogether produce contextualized code representations. 
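A minimal sketch of this corruption step is given below; it follows the description above (split on ``<nl>'', locate the longest line, mask a contiguous fragment of length $\phi \cdot |X^k|$), while the mask token string and the function itself are illustrative assumptions.
\begin{verbatim}
import random

MASK, NL = "<mask>", "<nl>"

def mask_longest_line(tokens, mask_rate=0.5, rng=random):
    """Mask a contiguous fragment of the longest line of a commit.

    Returns the corrupted token sequence and the masked fragment,
    which serves as the prediction target for the Transformer.
    """
    # Split the flat token sequence into lines on the <nl> separator.
    lines, line = [], []
    for tok in tokens:
        if tok == NL:
            lines.append(line)
            line = []
        else:
            line.append(tok)
    if line:
        lines.append(line)

    # Locate the longest line X^k and mask a random contiguous span.
    k = max(range(len(lines)), key=lambda i: len(lines[i]))
    span = max(1, int(mask_rate * len(lines[k])))
    u = rng.randrange(len(lines[k]) - span + 1)
    target = lines[k][u:u + span]
    lines[k][u:u + span] = [MASK] * span

    corrupted = [tok for l in lines for tok in l + [NL]]
    return corrupted, target
\end{verbatim}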
\\ Finally, the overall objective of the first stage's training can be expressed as: \begin{equation} \mathlarger{\mathcal{L}}_{\text{I}}(\theta;\mathds{C}) =\frac{1}{|\mathds{C}|}(\mathcal{L}_a+\mathcal{L}_b)\ , \end{equation} where $\mathds{C}$ refers to the entire training corpus that consists of \add{two categories of subcorpora} $\mathds{C}_1$ and $\mathds{C}_2$. As this stage ends, the learned contextualized representations of code commits are then transferred to Stage II for further fine-tuning. \subsection{Stage II: Downstream Commit Message Generation} At Stage II, we transfer the contextualized code representations along with the Transformer model parameters (i.e., $\theta$) learned from Stage I for downstream commit message generation training. The whole Transformer network is optimized throughout the fine-tuning process with back-propagation applied to all layers. Specifically, given a code commit sequence $X$, the model is fine-tuned to predict its corresponding commit message sequence $Y$ with the following objective function: \begin{equation} \mathlarger{\mathcal{L}}_{\text{II}}(\theta;\mathds{C})=-\frac{1}{|\mathds{C}|}\sum_{X\in \mathds{C}} \sum_{i} \log \mathcal{P}(y_i|X;\theta)\ , \end{equation} where $y_i$ represents the $i$-th commit message token to be generated, $\mathds{C}$ refers to the same training corpus as in Equation (4) and $\theta$ represents the model parameters that have been trained in Stage I. To ensure a complete parameter migration with all the contextual information maintained, \add{the model architecture in Stage II is kept consistent to Stage I. } \section{Experimental Setup}\label{sec:experiment} In this section, we describe the benchmark dataset, metrics, baseline models, and parameter settings used in our evaluation. \subsection{Dataset} We conduct evaluation experiments based on the benchmark dataset released by Liu et al. \cite{liu2018nngen}, which is a cleansed subset of Jiang et al.'s published dataset \cite{jiang2017automatically}. The original dataset contains $\sim$2M pairs of code commits and corresponding commit messages collected from popular Java projects in GitHub. Liu et al. further cleanse the dataset by tokenizing the code commit sequences with white space and punctuation, removing non-informative tokens (e.g., issue IDs and commit IDs), and filtering out the poorly-written commit messages \cite{liu2018nngen}. This leaves us $\sim$27k pairs of code commit and commit message, which \add{have been} split into training set, validation set and test set at an approximate ratio of 8:1:1. 
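For readers who wish to reproduce this preprocessing, the following sketch mimics the cleansing steps summarized above (whitespace/punctuation tokenization and removal of non-informative tokens); the regular expressions are illustrative assumptions, not Liu et al.'s original script.
\begin{verbatim}
import re

def tokenize_commit(text):
    """Tokenize a commit diff or message on whitespace and punctuation,
    dropping non-informative artifacts such as commit ids (long hex
    strings) and issue ids (e.g. "#1234")."""
    text = re.sub(r"\b[0-9a-f]{7,40}\b", " ", text)   # commit ids
    text = re.sub(r"#\d+", " ", text)                 # issue ids
    return re.findall(r"\w+|[^\w\s]", text)

tokens = tokenize_commit("Fix NPE in Foo.bar() (closes #42, commit 3fa1b2c)")
# ['Fix', 'NPE', 'in', 'Foo', '.', 'bar', '(', ')',
#  '(', 'closes', ',', 'commit', ')']
\end{verbatim}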
\begin{table*} \centering \caption{Comparison of {CoreGen}\xspace with the baseline models using different evaluation metrics.} \label{result} \scalebox{1.0}{ \begin{tabular}{cc|c|c|c|c|c} \toprule \multicolumn{2}{c|}{\multirow{1}{*}{\textbf{Model}}} & \multirow{1}{*}{\textbf{BLEU-4}} & \multirow{1}{*}{\textbf{ROUGE-1}} & \multirow{1}{*}{\textbf{ROUGE-2}} & \multirow{1}{*}{\textbf{ROUGE-L}} & \multirow{1}{*}{\textbf{METEOR}} \\ \midrule \multicolumn{1}{c|}{\multirow{3}{*}{\textbf{Baselines}}} & \multicolumn{1}{l|}{NMT} & 14.17 & 21.29 & 12.19 & 20.85 & 12.99 \\\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{NNGen} & 16.43 & 25.86 & 15.52 & 24.46 & 14.03 \\\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{PtrGNCMsg} & 9.78 & 23.66 & 9.61 & 23.67 & 11.41 \\ \midrule \multicolumn{1}{c|}{\multirow{2}{*}{\textbf{Ours}}} & \multicolumn{1}{l|}{{CoreGen}\xspace\textsubscript{II}} & 18.74 & 30.65 & 18.06 & 28.86 & 15.18 \\ \multicolumn{1}{c|}{} & \multicolumn{1}{l|}{{CoreGen}\xspace} & \textbf{21.06} & \textbf{32.87} & \textbf{20.17} & \textbf{30.85} & \textbf{16.53} \\ \bottomrule \end{tabular}} \end{table*} \subsection{Evaluation Metrics} We verify the effectiveness of {CoreGen}\xspace with automatic evaluation metrics that are widely used in natural language generation tasks, including BLEU-4, ROUGE and METEOR. BLEU-4 measures the 4-gram precision of a candidate to the reference while penalizes overly short sentences \cite{papineni2002bleu}. BLEU-4 is usually calculated at the corpus-level, which is demonstrated to be more correlated with human judgments than other evaluation metrics \cite{DBLP:conf/emnlp/LiuLSNCP16}. Thus, we use corpus-level BLEU-4 as one of our evaluation metrics. To mitigate BLUE-4's preference on long-length commit messages, we also employ ROUGE, a recall-oriented metric particularly proposed for summarization tasks, to evaluate the quality of generated commit messages \cite{lin2002rouge}. In this paper, we compute the ROUGE scores on unigram (ROUGE-1), bigram (ROUGE-2) and longest common subsequence (ROUGE-L) respectively. Taking advantages of the weighted F-score computation and penalty function on misordered tokens, METEOR is \add{another} natural language generation metric used in our experiments \cite{lavie2007meteor}. \subsection{Baseline Models} We compare the proposed {CoreGen}\xspace with the following baseline models in the experiments. For the sake of fairness, we apply the same cleansed benchmark dataset for evaluating the baseline models and {CoreGen}\xspace. \begin{itemize} \item \textbf{NMT.} NMT model uses an attentional RNN Encoder-Decoder architecture to translate code commits into commit messages \cite{loyola2017neural, jiang2017automatically}. Specifically, Jiang et al. \cite{jiang2017automatically} implement the NMT model using a TensorFlow built-in toolkit named Nematus \cite{sennrich2017nematus}. \item \textbf{NNGen.} NNGen is a retrieval-based model that leverages nearest neighbor algorithm to reuse existing commit messages \cite{liu2018nngen}. It represents each code commit sequence as a ``bags of words'' vector and then calculates the cosine similarity distance to retrieve top $k$ code commits from the database. The commit with the highest BLEU-4 score to the incoming commit is thereafter regarded as the nearest neighbor and the corresponding commit message is then output as the final result. 
\item \textbf{PtrGNCMsg.} PtrGNCMsg \cite{liu2019generating} is another RNN-based Encoder-Decoder model that adopts a pointer-generator network to deal with out-of-vocabulary (OOV) issue. At each prediction time step, the RNN decoder learns to either copy an existing token from the source sequence or generate a word from the fixed vocabulary, enabling the prediction of context-specific OOV tokens in commit message generation. \end{itemize} \subsection{Parameter Setting} We conducted experiments on different combinations of hyperparameters to optimize \add{{CoreGen}\xspace's end-to-end performance of on the validation set of the benchmark}. Specifically, we feed the code commits and commit messages into {CoreGen}\xspace with a shared vocabulary of 55,732 unique tokens. The input dimension of the tokens is set as 512. The input embeddings of the code tokens are randomly initialized at the beginning of Stage I, then get trained \add{throughout the contextualized code representation learning procedure. The learned embeddings are next transferred} for Stage II's downstream commit message generation, during which the code embeddings are further fine-tuned to be task-aware. For the Transformer, both the encoder and decoder modules are composed of 2 identical layers while each layer includes 6 parallel multi-attention heads. For training, we use Adam optimizer \cite{kingma2014adam} with batch size equals to 64 and the learning rate is adjusted dynamically in line with the original implementation with the warm-up step set to 4000 \cite{vaswani2017attention}. The mask rate $\phi$ for Stage I's \textit{Masked Code Fragment Prediction} is set to 0.5. \add{All these hyperparameter settings are tuned on the validation set.} A detailed analysis about the impact of hyperparameters on {CoreGen}\xspace's performance can be found in Section \ref{sec:setting}. \section{Experimental Results}\label{sec:result} \subsection{Result Analysis}\label{sec:main_result} Table \ref{result} shows the experimental results of our model and the baselines. {CoreGen}\xspace outperforms baseline models across all evaluation metrics with at least 28.18\%, 26.12\% and 17.82\% improvement on BLEU-4, ROUGE-L and METEOR scores, respectively. We attribute this to its effectiveness for attending to the critical segments of code snippets, i.e., the changed code fragments. \add{Besides, comparing to PtrGNCMsg that employs an extra pointer-generator network to copy the context-specific OOV tokens, {CoreGen}\xspace's superior performance further supports our claim that exploiting the code contextual information can achieve a more accurate modeling of the context-specific tokens (e.g., variable/function names), leading to an elegant solution to the OOV issue.} \begin{figure} \centering \includegraphics[width=.45\textwidth]{figures/converge.pdf} \caption{Convergence between Transformer and {CoreGen}\xspace. } \label{fig:conv} \end{figure} Besides, {CoreGen}\xspace's contextualized code representation learning procedure can also speed up model's convergence. In our experiment comparing vanilla Transformer and {CoreGen}\xspace on model's convergence along the Stage II training procedure, as Figure \ref{fig:conv} illustrates, {CoreGen}\xspace can converge faster to achieve equivalent generation quality as the vanilla Transformer model at 25 training epochs ahead. \add{In practice, collecting high-quality commit messages is difficult since substantial efforts are required for differentiating messages' quality \cite{liu2018nngen}. 
Therefore, to simulate real-life usage, we further validate {CoreGen}\xspace's generalization ability under low-resource settings. After using the whole training corpus for contextualized code representation learning, we adjust the amount of labels (i.e., commit messages) available for Stage II's supervised fine-tuning. As shown in Figure \ref{fig:lowres}, {CoreGen}\xspace outperforms the baseline models (annotated as the dotted lines) by making use of only 50\% of the labels. This inspiring result not only indicates the strong generalization ability of our proposed contextualized code representation learning strategies, but also suggests promising future research directions, such as training the contextualized code representation on larger corpus as a general solution to code-related tasks, especially when under the low-resource settings. } \subsection{Ablation Study} To further validate the usefulness of contextualized code representation learning, we also compare {CoreGen}\xspace with an ablated method {CoreGen}\xspace\textsubscript{II} that performs Stage II's downstream fine-tuning from scratch on Transformer and skips Stage I's representation learning procedure. \begin{figure} \centering \includegraphics[width=.35\textwidth]{figures/fig4.pdf} \caption{{CoreGen}\xspace's performance under low-resource settings. Dotted horizontal lines indicate the best performance achieved by baselines.} \label{fig:lowres} \end{figure} \begin{figure*} \centering \subfloat[Impact of mask rate]{\includegraphics[width=0.3\textwidth]{figures/fig1.pdf}} \subfloat[Impact of layer number ]{\includegraphics[width=0.3\textwidth]{figures/fig2.pdf}} \subfloat[Impact of head size]{\includegraphics[width=0.3\textwidth]{figures/fig3.pdf}} \caption{{CoreGen}\xspace's performance under different hyper-parameter settings. Best results can be achieved by setting the mask rate, layer number and head size to 0.5, 2 and 6, respectively. } \label{fig:parameter} \end{figure*} As we can observe from the comparison in Table \ref{result}, about half of the performance gain compared to the previous state-of-the-art comes from the contextualized code representation learning while the rest can be attributed to the advanced self-attentional model architecture of Transformer. Here, {CoreGen}\xspace\textsubscript{II}'s substantial improvement compared with the baselines also demonstrates Transformer's strengths over the traditional recurrent-based model architecture in the task domain of code commit message generation. However, a remarkable performance gap still exists between {CoreGen}\xspace\textsubscript{II} and {CoreGen}\xspace, which, again, affirms the necessities of contextualized code representation learning in {CoreGen}\xspace. \subsection{Parameter Sensitivity}\label{sec:setting} We further analyze the impact of three key parameters on {CoreGen}\xspace's performance, including mask rate $\phi$, layer number and head size. Figure \ref{fig:parameter} depicts the analysis results. Figure \ref{fig:parameter}(a) shows that the generation quality improves as the mask rate increases from 0.1 to 0.5, but deteriorates as the mask rate keeps increasing. This affirms our hypothesis that masking a continuous fragment \add{can model more code semantics} than masking only a single token, while the adverse impacts of overlarge mask rate can be contrarily explained by the lack of contextual information. 
We therefore set the mask rate in {CoreGen}\xspace to 0.5, meaning that 50\% of the tokens of the longest line are randomly masked for Stage I's representation learning. Figures \ref{fig:parameter}(b) and \ref{fig:parameter}(c) imply that, while a small layer number or head size reduces performance, an excessive number of layers or heads also harms the model's downstream generation quality. Therefore, in {CoreGen}\xspace, the Transformer's layer number and head size are set to 2 and 6, respectively, to save computation cost. \subsection{Analysis on the Effects of Data Deduplication} Upon further analysis, we notice that the cleansed dataset released by Liu et al. \cite{liu2018nngen} still contains overlapped code commits across the training, validation and test sets. The analysis results are presented in Table \ref{dataset}, where ``Identical Code Changes'' refers to commit records whose code changes also appear in the training set, and ``Completely Identical Entries'' refers to records that already appear in the training set with exactly the same code changes and commit messages. \nie{Since data duplication could adversely affect model performance \cite{allamanis2019adverse}, we then conduct evaluation on the deduplicated dataset, with results shown in Table \ref{supp-dataset}. We choose the best retrieval model NNGen \cite{liu2018nngen} and the best generative model NMT \cite{jiang2017automatically} as the baselines. As can be seen, our proposed method {CoreGen}\xspace still outperforms the baseline models by a significant margin on the deduplicated dataset, which again indicates the efficacy of our proposed contextualized code representation learning framework.} \begin{table} \centering \caption{Overlapped entries in the benchmark dataset.} \label{dataset} \scalebox{0.78}{ \begin{tabular}{|l|c|c|} \hline & \textbf{ Validation Set } & \textbf{ Test Set} \\ \hline \ Total Entries & 2,511 & 2,521 \\ \ Identical Code Changes & 267 (10.63\%) & 282 (11.19\%) \\ \ Completely Identical Entries & 119 (4.74\%) & 119 (4.72\%) \\ \hline \end{tabular}} \end{table} \begin{table} \centering \caption{Evaluation results on the deduplicated dataset.} \label{supp-dataset} \scalebox{0.75}{ \begin{tabular}{c|c|c|c|c|c} \toprule \multicolumn{1}{c|}{\multirow{1}{*}{\textbf{Model}}} & \multirow{1}{*}{\textbf{BLEU-4}} & \multirow{1}{*}{\textbf{ROUGE-1}} & \multirow{1}{*}{\textbf{ROUGE-2}} & \multirow{1}{*}{\textbf{ROUGE-L}} & \multirow{1}{*}{\textbf{METEOR}} \\ \midrule \multicolumn{1}{l|}{\ \ NMT } & 10.54 & 20.37 & 10.44 & 19.20 & 9.57 \\ \multicolumn{1}{l|}{\ \ NNGen } & 12.44 & 24.22 & 12.04 & 23.76 & 11.66 \\ \midrule \multicolumn{1}{l|}{\ \ {CoreGen}\xspace} & \textbf{15.86} & \textbf{27.31} & \textbf{15.09} & \textbf{25.46} & \textbf{13.38} \\ \bottomrule \end{tabular}} \end{table} \subsection{BERT-Based Approach Comparison} When trying to comprehend {CoreGen}\xspace's mechanism, readers may consider the contextualized code representation learning stage (Stage I) as a ``pre-training'' process and the downstream commit message generation stage (Stage II) as a fine-tuning process. However, we adopt the term ``contextualized'' instead of ``pre-trained'' here to distinguish the proposed approach from popular pre-training language models such as GPT \cite{radford2018improving, radford2019language}, BERT \cite{devlin2018bert}, etc.
The wording is mainly based on two reasons: 1) popular pre-training language models generally require huge amount of data as training corpus, while only limited size of high-quality data ($\sim$27k pairs of code commits and commit messages) are available in our scenario. 2) popular pre-training language models commonly facilitate multiple downstream tasks \cite{devlin2018bert,feng2020codebert}, while {CoreGen}\xspace is specifically designed for the commit message generation task. To prevent readers from misunderstanding that we are proposing a general-purpose pre-training approach, we avoid using the terms ``pre-training''/``pre-trained'' in the paper. To highlight the difference between the proposed {CoreGen}\xspace and BERT-like pre-trained models, we also compare {CoreGen}\xspace with the pretrained-BERT-based model \cite{DBLP:conf/iclr/ZhuXWHQZLL20}, named BERT-fused model. The BERT-fused model also uses Transformer as its base model and fuses the word representations extracted from a pre-trained BERT model with Transformer's encoder and decoder layers. We choose this work as the baseline for two reasons: 1) this work is a representative work that leverages pre-trained BERT model for neural machine translation and achieves state-of-the-art results on several machine translation benchmark datasets; and 2) this work also uses standard Transformer as the basic architecture similar to {CoreGen}\xspace, therefore eliminating the influence of basic architecture variations. We use the default hyperparameter settings to implement the BERT-fused model. The experimental results are illustrated in Table \ref{supp-bert}. We can observe that {CoreGen}\xspace significantly outperforms the BERT-based approach with an increase of 38.92\% in terms of BLEU-4 score. This indicates the effectiveness of {CoreGen}\xspace's specialized contextualized code representation learning strategies over BERT-based approaches in the task domain of commit message generation. \begin{table} \centering \caption{Comparison of {CoreGen}\xspace with BERT-based approach.} \label{supp-bert} \scalebox{0.75}{ \begin{tabular}{c|c|c|c|c|c} \toprule \multicolumn{1}{c|}{\multirow{1}{*}{\textbf{Model}}} & \multirow{1}{*}{\textbf{BLEU-4}} & \multirow{1}{*}{\textbf{ROUGE-1}} & \multirow{1}{*}{\textbf{ROUGE-2}} & \multirow{1}{*}{\textbf{ROUGE-L}} & \multirow{1}{*}{\textbf{METEOR}} \\ \midrule \multicolumn{1}{l|}{\ \ BERT-fused} & 15.16 & 25.81 & 14.98 & 24.42 & 13.43 \\ \midrule \multicolumn{1}{l|}{\ \ {CoreGen}\xspace} & \textbf{21.06} & \textbf{32.87} & \textbf{20.17} & \textbf{30.85} & \textbf{16.53} \\ \bottomrule \end{tabular}} \end{table} \subsection{Analysis of {CoreGen}\xspace with Combined Loss Function } In {CoreGen}\xspace, the loss functions of Stage I and Stage II are separated since their respective objectives are essentially different and the model is designed to be optimized in order, i.e., learning the code representations first and then generating commit messages based on the learnt representations. Combining the two losses may bring in undesirable noises along with the task-specific knowledge in training, leading to poor generation results. We conduct a comparison experiment where {CoreGen}\xspace is associated with a hybrid loss function, named as {CoreGen}\xspace\textsubscript{Hybrid}. 
Specifically, during training, with all the other experimental setups kept optimal, Transformer is optimized with a combined loss function: \begin{equation} \mathlarger{\mathcal{L}}(\theta;\mathds{C}) =\mathcal{L}_{\text{I}}+\mathcal{L}_{\text{II}} ,\end{equation} where $\mathcal{L}_{\text{I}}$ represents the loss function for Stage I and $\mathcal{L}_\text{II}$ represents the loss function for Stage II, so that the model simultaneously fits both tasks, i.e., contextualized code representation learning and commit message generation. The experimental results are depicted in Table \ref{supp-hybrid}. As can be seen, the performance of {CoreGen}\xspace declines dramatically when the two losses are integrated, which indicates the importance of optimizing the model with two separate loss functions sequentially for the task. \begin{table} \centering \caption{Comparison of our proposed training method, which separates the two-stage losses, with {CoreGen}\xspace trained with the combined loss (denoted as {CoreGen}\xspace\textsubscript{Hybrid}).} \label{supp-hybrid} \scalebox{0.73}{ \begin{tabular}{c|c|c|c|c|c} \toprule \multicolumn{1}{c|}{\multirow{1}{*}{\textbf{Model}}} & \multirow{1}{*}{\textbf{BLEU-4}} & \multirow{1}{*}{\textbf{ROUGE-1}} & \multirow{1}{*}{\textbf{ROUGE-2}} & \multirow{1}{*}{\textbf{ROUGE-L}} & \multirow{1}{*}{\textbf{METEOR}} \\ \midrule \multicolumn{1}{l|}{\ \ {CoreGen}\xspace\textsubscript{Hybrid}} & 15.41 & 22.15 & 11.04 & 20.71 & 13.79 \\ \midrule \multicolumn{1}{l|}{\ \ {CoreGen}\xspace} & \textbf{21.06} & \textbf{32.87} & \textbf{20.17} & \textbf{30.85} & \textbf{16.53} \\ \bottomrule \end{tabular}} \end{table} \subsection{Analysis of NMT with {CoreGen}\xspace Framework} The core idea of contextualized code representation learning in {CoreGen}\xspace can be flexibly incorporated into other sequence-to-sequence neural architectures. To validate the transferability of our proposed framework, we conduct one supplementary experiment where {CoreGen}\xspace's Transformer model is replaced with the basic NMT model in Jiang et al.'s work \cite{jiang2017automatically} while keeping the rest of the experimental setup unchanged. Specifically, the NMT model is first optimized by the objective function as described in Equation 4 in the paper, then further fine-tuned for downstream commit message generation with Jiang et al.'s default settings. We name this new baseline NMT\textsubscript{{CoreGen}\xspace}. The comparison results are detailed in Table \ref{supp-nmt-1}. As can be observed, by incorporating the contextualized code representation learning framework, NMT\textsubscript{{CoreGen}\xspace} achieves significantly better performance than the basic NMT model, presenting an increase of 26.75\% in terms of BLEU-4 score. This result further demonstrates the necessity of contextualized code representation learning for commit message generation, while the performance gap between NMT\textsubscript{{CoreGen}\xspace} and {CoreGen}\xspace again demonstrates Transformer's superiority over the traditional NMT model in this task domain.
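For clarity, the two training schemes compared above can be summarized with a minimal PyTorch-style sketch. The callables \texttt{stage1\_loss} and \texttt{stage2\_loss} stand in for the Stage I and Stage II objectives, and the data iterators are placeholders; this is an illustrative sketch rather than our exact implementation.
\begin{verbatim}
import torch

def train_two_stage(model, stage1_data, stage2_data,
                    stage1_loss, stage2_loss,
                    epochs_1, epochs_2, lr=1e-4):
    # Sequential optimization as in CoreGen: Stage I first, then Stage II.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs_1):               # Stage I: representation learning (L_I)
        for batch in stage1_data:
            opt.zero_grad()
            stage1_loss(model, batch).backward()
            opt.step()
    for _ in range(epochs_2):               # Stage II: commit message generation (L_II)
        for batch in stage2_data:
            opt.zero_grad()
            stage2_loss(model, batch).backward()
            opt.step()

def train_hybrid(model, data, stage1_loss, stage2_loss, epochs, lr=1e-4):
    # CoreGen_Hybrid: both objectives optimized jointly, L = L_I + L_II.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for batch in data:
            opt.zero_grad()
            (stage1_loss(model, batch) + stage2_loss(model, batch)).backward()
            opt.step()
\end{verbatim}
The only difference between the two routines is whether the two objectives are applied sequentially or summed into a single objective, which is exactly the factor isolated in the comparison with {CoreGen}\xspace\textsubscript{Hybrid}.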
\begin{table} \centering \caption{Analysis of Jiang et al.'s baseline model (NMT) adopting {CoreGen}\xspace's two-stage framework (denoted as NMT\textsubscript{{CoreGen}\xspace}).} \label{supp-nmt-1} \scalebox{0.65}{ \begin{tabular}{cc|c|c|c|c|c} \toprule \multicolumn{2}{c|}{\multirow{1}{*}{\textbf{Model}}} & \multirow{1}{*}{\textbf{BLEU-4}} & \multirow{1}{*}{\textbf{ROUGE-1}} & \multirow{1}{*}{\textbf{ROUGE-2}} & \multirow{1}{*}{\textbf{ROUGE-L}} & \multirow{1}{*}{\textbf{METEOR}} \\ \midrule \multicolumn{1}{c|}{\multirow{2}{*}{\textbf{Baselines}}} & \multicolumn{1}{l|}{NMT} & 14.17 & 23.12 & 14.36 & 22.09 & 12.54 \\\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{NMT\textsubscript{{CoreGen}\xspace}} & 17.96 & 24.99 & 14.07 & 23.70 & 14.28 \\ \midrule \multicolumn{1}{c|}{\textbf{Ours}}& \multicolumn{1}{l|}{{CoreGen}\xspace} & \textbf{21.06} & \textbf{32.87} & \textbf{20.17} & \textbf{30.85} & \textbf{16.53} \\ \bottomrule \end{tabular}} \end{table} \vspace{0.1cm} \subsection{Future Research Direction} Depending on the task nature, various representation learning methods can also be integrated to maximize the exploitation of code's contextual and structural information. In {CoreGen}\xspace, we propose \textit{Code Changes Prediction} and \textit{Masked Code Fragment Prediction} tasks to model the code contextual information corresponding to the two separate categories of code commits. In the future, these tasks can be further extended with more sophisticated and well-designed representation learning methodologies. For example, compared to natural language, the syntactic structure of code is more rigid. Tokens in the same code statement generally have stronger semantic relations than tokens from other statements. Therefore, we design an additional code representation learning task to model this in-statement code structural information. Specifically, inspired by the idea of pairwise code encoding \cite{ahmad2020transformer}, in Stage I, we additionally train the model to predict a randomly masked code token from the other tokens in the same code statement. Formally, given a source code sequence $X$ that can be split into a set of $n$ code statements $\{X^1, X^2, ..., X^n\}$, for each statement $X^i$, we randomly mask a token $x^i_u \in X^i$, then predict this masked token based on the remaining tokens from the same statement $\{x^i_1, ..., x^i_{u-1}, x^i_{u+1}, ..., x^i_{m}\}$ using the log-likelihood objective function: \begin{equation} \mathlarger{\mathcal{L}}_{3}=- \sum_{X\in \mathds{C}_1} \sum_{X^i\in X} \log \mathcal{P}(x^{i}_{u}|x^i_1, ..., x^i_{u-1}, x^i_{u+1}, ..., x^i_{m};\theta), \end{equation} where $\mathds{C}_1$ refers to the same commit subcorpus with explicit code changes as in Equation (1). In this way, the attention inside Transformer can be subtly guided to flow among the code tokens of the same statement, allowing the model to capture the code structural information more effectively and achieve more accurate contextualized code representation learning. The experimental results with the above method integrated are shown in Table \ref{supp-future}. As can be seen, the in-statement code structure modeling task further boosts {CoreGen}\xspace's performance on downstream commit message generation. This promising result suggests the great potential of {CoreGen}\xspace in more effectively exploiting code contextual and structural information when other representation learning tasks are integrated.
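To illustrate the data preparation behind this in-statement prediction task, the following is a minimal Python sketch; statement splitting and tokenization are assumed to be given, and the \texttt{<mask>} symbol and function name are illustrative placeholders rather than part of our released implementation.
\begin{verbatim}
import random

MASK = "<mask>"

def in_statement_masking(code_statements):
    # Build Stage I training pairs: for each code statement, mask one
    # randomly chosen token and keep it as the prediction target.
    pairs = []
    for tokens in code_statements:        # each statement is a list of code tokens
        if len(tokens) < 2:               # nothing to condition on in a 1-token statement
            continue
        u = random.randrange(len(tokens)) # position of the token to mask
        masked = tokens[:u] + [MASK] + tokens[u + 1:]
        pairs.append((masked, tokens[u])) # the model predicts tokens[u] from `masked`
    return pairs

# Example with two statements from a hypothetical code change
statements = [["int", "count", "=", "0", ";"],
              ["count", "+=", "delta", ";"]]
print(in_statement_masking(statements))
\end{verbatim}
Each resulting pair corresponds to one term of the summation in the objective above, with the masked statement as the conditioning context and the masked token as the prediction target.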
In the future, we will consider embedding code structural graphs, such as the control flow graph and the program dependency graph, for more accurate modeling of the code contextual information. \section{Discussion}\label{sec:discussion} \input{sections/4.0} \input{sections/4.1} \input{sections/4.2} \input{sections/4.3} \input{sections/4.4} \section{Related Works}\label{sec:literature} This section reviews the most related works and groups them into three lines: commit message generation, contextualized word representation, and code representation learning. \subsection{Commit Message Generation} Existing literature on commit message generation can be roughly divided by methodology into three categories: rule-based, retrieval-based and deep-learning-based. The earliest works in the field attempt to automate commit message generation by extracting information from the code commits and filling in pre-defined templates \cite{buse2010automatically, cortes2014automatically, linares2015changescribe, shen2016automatic}. Among them, Shen et al. \cite{shen2016automatic} use pre-defined formats to identify the commit type and generate commit messages based on corresponding templates. ChangeScribe \cite{linares2015changescribe} further takes the impact set of a commit into account when extracting core information from the code commits. Despite the involvement of prior knowledge, these rule-based methods can only handle code commits that match certain formats, and the produced commit messages only cover trivial commits. Therefore, later works leverage information retrieval techniques to allow more flexible commit message generation \cite{huang2017mining, liu2018nngen}. For example, Huang et al. \cite{huang2017mining} evaluate the similarity among code commits based on both syntactic and semantic analysis and reuse the message of the most similar commit as model output. NNGen \cite{liu2018nngen} generalizes the similarity measurement by calculating the cosine distance between bag-of-words vectors of the code commits, which also extends support to code commits with implicit binary file changes. However, retrieval-based approaches are still limited in two aspects: the variable/function names are usually not consistent in the retrieved message, and the generation performance relies heavily on the coverage of the database. By adopting deep neural networks to translate code commits into messages, deep-learning-based methods have gradually become the mainstream approach in this research field. Both Loyola et al. \cite{loyola2017neural} and Jiang et al. \cite{jiang2017automatically} propose to bridge the gap between code commits and commit messages with an attentional encoder-decoder framework. Loyola et al.'s later work \cite{loyola2018content} further takes intra-code documentation as a guiding element to improve the generation quality. Since deep learning models suffer heavily from context-specific tokens, CODISUM \cite{xu2019commit} and PtrGNCMsg \cite{liu2019generating} both attempt to mitigate the OOV issue by incorporating the copying mechanism, while the former fails to support code commits with implicit binary file changes. In all these methods, code contextual information is either neglected or built up using an additional network. \subsection{Contextualized Word Representation} Our work also relates closely to contextualized word representation methods.
Pioneering word representation methods keep the mapping function invariant across different sentences \cite{mikolov2013distributed, pennington2014glove, bojanowski2017enriching}. For example, Word2vec learns word embeddings with a skip-gram or continuous-bag-of-words (CBOW) model, both of which are based on distributed center-context word pair information \cite{mikolov2013distributed}. In comparison, GloVe produces word embeddings by factorizing the word co-occurrence matrix to leverage the global statistical information contained in a document \cite{pennington2014glove}. Although these methods can capture both syntactic and semantic meanings behind the words, the limitation of these static word embedding approaches lies mainly in two aspects: 1) these approaches do not leverage the information of the entire sentence, and the relationships learned from the center-context pairs are restricted to a fixed window size; and 2) these approaches fail to capture polysemy since the embedding tables are kept invariant across different contexts. In recent years, contextualized word representation methods have gained overwhelming dominance. Pre-trained on large unlabeled corpora, contextualized word representations can capture word sense, syntax, semantic roles and other information dynamically from the context, achieving state-of-the-art results on many downstream tasks including question answering, sentiment analysis, reading comprehension, etc.\ \cite{peters2018deep, radford2018improving, radford2019language, devlin2018bert}. Specifically, Peters et al. \cite{peters2018deep} derive the word representations from a bi-directional LSTM trained with a coupled language model objective on a large corpus. The GPT model proposed by OpenAI instead uses multi-layer Transformer decoders for the language model pre-training \cite{radford2018improving, radford2019language}. However, the left-to-right architecture of GPT models can be harmful for many token-level tasks where the contextual information from both directions is equally essential. Therefore, to alleviate the unidirectional nature of language models, Devlin et al. \cite{devlin2018bert} further pre-train a denoising auto-encoder using a brand new self-supervised learning task named ``masked language model''. By predicting the randomly masked word tokens from their contexts, contextualized word representations are embedded into the initialized model parameters for use in downstream tasks. Unlike static word representation methods that require an extra network for downstream task processing, these networks can be adapted to various downstream tasks with simple architecture modifications. \subsection{Code Representation Learning} Among previous works on code representation learning, traditional machine learning algorithms used to be the standard practice. In particular, by treating the code as a sequence of tokens, the n-gram language model was widely adopted for modeling source code for authorship classification \cite{frantzeskou2008examining}, repository mining \cite{allamanis2013mining}, convention detection \cite{allamanis2014learning}, etc. SVM is another common approach for representing programs, and has been applied to malicious code detection \cite{choi2011efficient} and code domain categorization \cite{linares2014using}. By further taking the syntax tree structure of code into consideration, Maddison \& Tarlow \cite{maddison2014structured} describe new generative models based on probabilistic context-free grammars, while Raychev et al.
\cite{raychev2016probabilistic} build up a probabilistic code model by learning decision trees specified in a domain-specific language called TGen. Recent advancements in deep learning models have also changed the way researchers represent code semantics. Token-based techniques process code as textual data and adopt RNN models to learn the code features together with downstream tasks \cite{raychev2014code}. Tree-based techniques transform syntax trees into vectors that are later formatted as model input. For example, for code defect prediction, Wang et al. \cite{wang2016automatically} leverage a deep belief network to learn the semantic code representations from abstract syntax tree (AST) nodes, while for code clone detection, White et al. \cite{white2016deep} use a recursive auto-encoder to exploit code syntactical information from ASTs. TBCNN \cite{mou2016convolutional} includes a tree-based convolution on ASTs to learn program vector representations. ASTNN \cite{zhang2019novel} decomposes large ASTs into sequences of small statement trees and finally learns the code representation from encoded statement vectors. The last category, graph-based techniques, constructs the entire syntax graph as model input. Allamanis et al. \cite{allamanis2017learning} leverage a Gated Graph Neural Network to represent both the syntactic and semantic structure of source code. Compared with these methods, our proposed approach focuses on learning contextualized code representation without using external ASTs or constructed graphs, which achieves a better balance between performance and usability for downstream commit message generation. Inspired by the success of the aforementioned pre-trained language models, SCELMo \cite{karampatsis2020scelmo} and CodeBERT \cite{feng2020codebert} propose to pre-train code representations on large unlabeled corpora. However, these works directly borrow the pre-training tasks from the original implementations without explicitly taking into account the semantic gap between source code and natural language. \section{Conclusion}\label{sec:con} Commit message generation is a necessary yet challenging task. In this paper, we proposed {CoreGen}\xspace, a two-stage framework that takes advantage of contextualized code representation learning to boost the downstream performance of commit message generation. Specifically, with regard to the two categories of code commits, we introduced two representation learning strategies, namely \textit{Code Changes Prediction} and \textit{Masked Code Fragment Prediction}, for the exploitation of code contextual information. Experimental results showed that {CoreGen}\xspace significantly outperforms competitive baselines and achieves the state of the art on the benchmark dataset. \add{{CoreGen}\xspace was also validated under low-resource settings, where high-quality commit messages were generated with only 50\% of the labels utilized during fine-tuning. This points to promising future directions of extending this contextualized code representation learning framework to larger code corpora and other similar code-related tasks, such as code summarization. Moreover, {CoreGen}\xspace's improvements after exploiting the in-statement code structure also demonstrate its great potential in integrating more complex code contextual and structural information in the future.}
\section{Introduction} \label{sec:introduction} \vspace*{3mm} {\it ``The challenge [of quantum software engineering] is to rework and extend the whole of classical software engineering into the quantum domain so that programmers can manipulate quantum programs with the same ease and confidence that they manipulate today's classical programs.''} \begin{flushright} \hspace*{1cm}{\bf Susan Stepney}, in the 2004 report of Grand Challenges in Computing Research~\cite{hoare2004grand}. \end{flushright} \vspace*{2mm} Quantum computing is a rapidly developing field that is expected to make breakthroughs in many areas~\cite{lanyon2010towards,barends2014superconducting,cross2015quantum,benedetti2016estimation,o2016scalable,olson2017quantum}. Quantum computing uses the principles of quantum mechanics to process information and has the potential to perform specific tasks much faster than classical computing. The basis of quantum computing is the quantum bit (qubit). Unlike a classical bit, which is assigned either 0 or 1, a qubit can be assigned a state that is a superposition of 0 and 1. Compared to classical algorithms, quantum algorithms have the potential to solve specific problems with exponential speedup. In the last two decades, there has been a large amount of literature that contributed to the development of quantum algorithms~\cite{deutsch1985quantum,grover1996fast,shor1999polynomial,mosca2008quantum,montanaro2016quantum,shao2019quantum}. Just like classical computing, quantum computing is applicable to applications in many disciplines and will have a wide impact~\cite{glanz1995quantum}. There are two types of applications where quantum computers are expected to outperform classical computers~\cite{martonosi2019next}. The first comprises problems that require large amounts of parallel computing, such as optimization~\cite{guerreschi2017practical,farhi2014quantum}, encryption~\cite{mosca2018cybersecurity}, big data analysis~\cite{rebentrost2014quantum}, and machine learning~\cite{dunjko2016quantum,biamonte2017quantum}. The other comprises problems that require efficient and accurate simulation of quantum systems in nature~\cite{feynman1982simulating,zalka1998efficient}, from areas such as physics~\cite{childs2018toward}, chemistry~\cite{reiher2017elucidating,mcardle2018quantum,olson2017quantum} and materials science~\cite{yang2017mixed,grimsley2019adaptive}. With the rapid development of impressive quantum hardware as well as the accessibility of universal quantum devices to researchers and professionals via Quantum-as-a-Service (QaaS)~\cite{leymann2020quantum,ball2020software}, it is high time to focus our attention on engineering quantum software systems to reap the benefits of quantum technology. Meanwhile, various application domains of quantum computing~\cite{wecker2013can,wecker2014gate,karalekas2020quantum,tura2020quantum} urgently need quantum software engineering methodologies and techniques to solve their specific problems. Therefore, we believe that there is an urgent need to build a community for quantum software engineering that focuses on devising methods, tools, and processes for developing quantum software systems efficiently. Quantum software development techniques aim at providing means for creating quantum software throughout the quantum software life cycle (refer to Section~\ref{subsec:QSLC}).
A number of quantum programming approaches are available, for instance, Scaffold~\cite{abhari2012scaffold,javadiabhari2015scaffcc}, Qiskit~\cite{ibm2017qiskit}, Q\#~\cite{svore2018q}, ProjectQ~\cite{projectq2017projectq}, and Quipper~\cite{green2013quipper}. Moreover, the concepts are also being applied at the earlier stages of quantum software development. For example, at the quantum software design stage, \cite{Perez-Delgado2020quantum,carmelo2013quantum,cartiere2016quantum} provide means for modeling and specifying quantum software systems, and \cite{sodhi2018quality} studies how the characteristics of quantum computing systems impact the quality attributes of quantum software architectures. With a variety of techniques available to a software engineer at each stage, the task of engineering a quantum software system poses significant challenges. At each development stage, the software engineer needs to employ the most suitable quantum software technique for the application being developed. The choice of technique can be dictated by various factors, including system requirements, organizational practices, constraints imposed by the tools or development environments, and the nature of quantum programming (mechanics). This implies that multiple techniques may be employed at each stage in conjunction with each other. As quantum software development techniques mature, there is a need for guidelines supporting the development of well-engineered quantum software systems. In this paper, we present a comprehensive survey of quantum software engineering. We summarize the aspects of previous work that have focused mainly on quantum software engineering issues, while simultaneously covering various approaches across different phases of the quantum software life cycle. We have organized the literature according to five different aspects: requirements, design, implementation, testing, and maintenance. Some papers may cover more than one aspect. For such papers, we refer to them in all the relevant sections to ensure completeness. Additionally, we identify challenges and opportunities for the emerging research community working at the intersection between methodologies and techniques for classical software engineering and problems in engineering quantum software systems. To ensure that our survey is self-contained, we have tried to include enough material to thoroughly guide software engineering researchers who are interested in techniques and methods for engineering quantum software systems. We also seek to provide quantum computing researchers with a complete survey of software engineering solutions to improve the development of quantum systems. There have been surveys related to some aspects of quantum software engineering. For example, a number of surveys~\cite{selinger2004brief,gay2006quantum,unruh2006quantum,rudiger2007quantum,jorrand2007programmer, sofge2008survey,miszczak2011models,ying2012quantum,valiron2013quantum,valiron2015programming,hietala2016quantum,chong2017programming,spivsiak2017quantum,zorzi2019quantum,garhwal2019quantum} have been proposed for quantum programming languages, which mainly focus on the language paradigms, implementations, semantics, and tools. Surveys related to quantum software development environments~\cite{roetteler2017design,larose2019overview, fingerhuth2018open, shaydulin2020making} have also been proposed, from various points of view.
However, as far as we know, no previous work has focused on a comprehensive survey of the whole life cycle of quantum software development, including quantum software requirements analysis, design, implementation, testing, and maintenance. In summary, the main contributions of this paper are as follows: \begin{itemize} \item{\bf Definition.} It defines the term ``quantum software engineering'' and introduces a quantum software life cycle model for supporting quantum software development. \item{\bf Survey.} It provides a comprehensive survey of quantum software engineering, spanning various phases of the quantum software life cycle, including quantum software requirements analysis, design, implementation, testing, and maintenance. \item{\bf Horizons.} It identifies challenges, opportunities, and promising research directions for quantum software engineering, intended to promote and stimulate further research. \end{itemize} The rest of this paper is organized as follows. The structure of this paper is also depicted in Table~\ref{table:structure}. Section~\ref{sec:background} reviews the fundamental terminology in quantum computing. Section \ref{sec:QSD} briefly introduces quantum software development, which includes quantum programming, a definition of ``quantum software engineering,'' and the quantum software life cycle for quantum software development. From Section~\ref{sec:requirement} to Section~\ref{sec:maintenance}, we survey the current state of the art on quantum software requirements analysis, design, implementation, testing, and maintenance, respectively. Section~\ref{sec:reuse} covers the current state of the art on quantum software reuse. Section~\ref{sec:challenge} discusses the challenges and opportunities for quantum software engineering. Related work is discussed in Section \ref{sec:work}, and concluding remarks are given in Section~\ref{sec:conclusion}. An extensive set of references is provided for readers wishing to pursue the matter further.
\begin{table*}[h] \centering \caption{A Table-like Structure for Paper Organization} \label{table:structure} \renewcommand\arraystretch{1.2} \footnotesize \begin{tabular}{|l|l|l|} \hline {\bf Introduction} \hspace{0.4mm} [Sec.\ref{sec:introduction}] & \multicolumn{2}{l|}{} \\ \hline \multirow{9}{*}{{\bf Background} \hspace{0.4mm} [Sec.\ref{sec:background}]} & \multicolumn{2}{l|}{Quantum bit (qubit) \hspace{0.5mm} [Sec.\ref{subsec:qubit}]} \\\cline{2-3} & \multirow{3}{*}{Quantum gate \hspace{0.5mm} [Sec.\ref{subsec:q-gate}]} & NOT gate \hspace{0.5mm} [Sec.\ref{subsubsec:not}] \\\cline{3-3} & & Hadamard gate \hspace{0.5mm} [Sec.\ref{subsubsec:hadamard}] \\\cline{3-3} & & Controlled NOT gate \hspace{0.5mm} [Sec.\ref{subsubsec:controlled}] \\\cline{2-3} & \multicolumn{2}{l|}{Quantum circuit \hspace{0.5mm} [Sec.\ref{subsec:q-circuit}]} \\\cline{2-3} & \multicolumn{2}{l|}{Superposition and entanglement \hspace{0.5mm} [Sec.\ref{subsec:q-entanglement}]} \\\cline{2-3} & \multicolumn{2}{l|}{Quantum measurement \hspace{0.5mm} [Sec.\ref{subsec:q-measurement}]} \\\cline{2-3} & \multicolumn{2}{l|}{Quantum algorithm \hspace{0.5mm} [Sec.\ref{subsec:q-algorithm}]} \\\cline{2-3} & \multicolumn{2}{l|}{Quantum computing \hspace{0.5mm} [Sec.\ref{subsec:q-computing}]} \\ \hline \multirow{7}{*}{\tabincell{l}{{\bf Quantum Software Engineering} \hspace{0.5mm} [Sec.\ref{sec:QSD}]}} & \multirow{3}{*}{\tabincell{l}{Quantum programming \hspace{0.5mm} [Sec.\ref{subsec:q-programming}]}} & Concepts of quantum programming \hspace{0.5mm} [Sec.\ref{subsubsec:concept}] \\\cline{3-3} & & Languages for quantum programming \hspace{0.5mm} [Sec.\ref{subsubsec:QPL-qpl}] \\\cline{3-3} & & Semantics of quantum programming \hspace{0.5mm} [Sec.\ref{subsubsec:QPL-semantics}] \\\cline{2-3} & \multicolumn{2}{l|}{Definition of quantum software engineering \hspace{0.5mm} [Sec.\ref{subsec:QSE-definition}]} \\\cline{2-3} & \multicolumn{2}{l|}{Quantum software engineering methods, tools, and processes \hspace{0.5mm} [Sec.\ref{subsec:QSE-process}]} \\\cline{2-3} & \multicolumn{2}{l|}{Generic view of quantum software engineering \hspace{0.5mm} [Sec.\ref{subsec:QSE-view}]} \\\cline{2-3} & \multicolumn{2}{l|}{Quantum software life cycle \hspace{0.5mm} [Sec.\ref{subsec:QSLC}]} \\ \hline {\bf Quantum Requirements Analysis} \hspace{0.4mm} [Sec.\ref{sec:requirement}] & \multicolumn{2}{l|}{} \\ \hline \multirow{4}{*}{{\bf Quantum Software Design} \hspace{0.5mm} [Sec.\ref{sec:design}]} & \multicolumn{2}{l|}{Quantum software modeling \hspace{0.5mm} [Sec.\ref{subsec:d-modelling}]} \\\cline{2-3} & \multicolumn{2}{l|}{Quantum software specification \hspace{0.5mm} [Sec.\ref{subsec:d-specification}]} \\\cline{2-3} & \multicolumn{2}{l|}{Pattern language for quantum software \hspace{0.5mm} [Sec.\ref{subsec:q-pattern}]} \\\cline{2-3} & \multicolumn{2}{l|}{Quality attributes \hspace{0.5mm} [Sec.\ref{subsec:d-architecture}]} \\ \hline {\bf Quantum Software Implementation} \hspace{0.4mm} [Sec.\ref{sec:implementation}] & \multicolumn{2}{l|}{} \\ \hline \multirow{13}{*}{{\bf Quantum Software Testing} \hspace{0.4mm} [Sec.\ref{sec:testing}]} & \multicolumn{2}{l|}{Bug types in quantum software \hspace{0.5mm} [Sec.\ref{subsec:bug-type}]} \\\cline{2-3} & \multirow{3}{*}{Assertions for quantum software \hspace{0.5mm} [Sec.\ref{subsec:q-assertion}]} & Invariant and inductive assertion \hspace{0.5mm} [Sec.\ref{subsubsec:invariant}] \\\cline{3-3} & & Applied quantum Hoare logic \hspace{0.5mm} [Sec.\ref{subsubsec:aQHL}] \\\cline{3-3} & & Assertion library for quantum software 
\hspace{0.5mm} [Sec.\ref{subsubsec:property}] \\\cline{2-3}
& \multirow{4}{*}{Quantum software testing \hspace{0.5mm} [Sec.\ref{subsec:q-testing}]} & Open problems on testing quantum software \hspace{0.5mm} [Sec.\ref{subsubsec:open-problem}] \\\cline{3-3} & & Fuzz testing \hspace{0.5mm} [Sec.\ref{subsubsec:fuzz-testing}] \\\cline{3-3} & & Property-based testing \hspace{0.5mm} [Sec.\ref{subsubsec:property-testing}] \\\cline{3-3} & & Functional, white-box, and model-based testing \hspace{0.5mm} [Sec.\ref{subsubsec:funtional-testing}] \\\cline{2-3} & \multirow{4}{*}{Quantum program debugging \hspace{0.5mm} [Sec.\ref{subsec:q-debugging}]} & Debugging tactics \hspace{0.5mm} [Sec.\ref{subsubsec:debugging-tactic}] \\\cline{3-3} & & Assertion-based debugging \hspace{0.5mm} [Sec.\ref{subsubsec:assertion-debugging}]\\\cline{3-3} & & Debugging quantum processes \hspace{0.5mm} [Sec.\ref{subsubsec:quantum-process}] \\\cline{3-3} & & Language support for debugging \hspace{0.5mm} [Sec.\ref{subsubsec:debugging-language}] \\\cline{2-3} & \multicolumn{2}{l|}{Quantum program analysis for bug detection \hspace{0.5mm} [Sec.\ref{subsec:q-analysis}]} \\\cline{2-3} \hline \tabincell{l}{{\bf Quantum Software Maintenance} \hspace{0.4mm} [Sec.\ref{sec:maintenance}]} & \multicolumn{2}{l|}{Reengineering classical information system to quantum computing \hspace{0.5mm} [Sec.\ref{subsec:reengineering}]} \\ \hline \multirow{3}{*}{\tabincell{l}{{\bf Quantum Software Reuse} \hspace{0.5mm} [Sec.\ref{sec:reuse}]}} & \multicolumn{2}{l|}{Quantum pattern reuse \hspace{0.5mm} [Sec.\ref{subsec:pattern-reuse}]} \\\cline{2-3} & \multicolumn{2}{l|}{Quantum circuit reuse \hspace{0.5mm} [Sec.\ref{subsec:circuit-reuse}]} \\\cline{2-3} & \multicolumn{2}{l|}{Quantum state reuse \hspace{0.5mm} [Sec.\ref{subsec:state-reuse}]} \\ \hline \multirow{12}{*}{\tabincell{l}{{\bf Challenges and Opportunities} \hspace{0.4mm} [Sec.\ref{sec:challenge}]}} & \multicolumn{2}{l|}{Quantum software requirements analysis \hspace{0.5mm} [Sec.\ref{subsec:co-requirement}]} \\\cline{2-3} &\multirow{2}{*}{\tabincell{l}{{Quantum software design} \hspace{0.5mm} [Sec.\ref{subsec:co-design}]}} & Quantum architectural design \hspace{0.5mm} [Sec.\ref{subsubsec:co-architectural}] \\\cline{3-3} & & Detailed quantum design \hspace{0.5mm} [Sec.\ref{subsubsec:co-detail}] \\\cline{3-3} & & Design models for quantum software \hspace{0.5mm} [Sec.\ref{subsubsec:co-model}] \\\cline{2-3} & \multicolumn{2}{l|}{Quantum software implementation \hspace{0.5mm} [Sec.\ref{subsec:co-implementation}]} \\\cline{2-3} & \multirow{5}{*}{Quantum software reliability [Sec.\ref{subsec:co-testing}]} & Fault model for quantum software \hspace{0.5mm} [Sec.\ref{subsubsec:co-fault}] \\\cline{3-3} & & Quantum software testing \hspace{0.5mm} [Sec.\ref{subsubsec:co-testing}] \\\cline{3-3} & & Quantum program debugging \hspace{0.5mm} [Sec.\ref{subsubsec:co-debugging}] \\\cline{3-3} & & Quantum software visualization \hspace{0.5mm} [Sec.\ref{subsubsec:co-visualization}] \\\cline{3-3} & & Quantum program verification \hspace{0.5mm} [Sec.\ref{subsubsec:co-verification}] \\\cline{2-3} & \multicolumn{2}{l|}{Quantum software maintenance \hspace{0.5mm} [Sec.\ref{subsec:co-maintenance}]} \\\cline{2-3} & \multicolumn{2}{l|}{Quantum software reuse \hspace{0.5mm} [Sec.\ref{subsec:co-reuse}]} \\ \hline \multirow{3}{*}{{\bf Related Work} \hspace{0.4mm} [Sec.\ref{sec:work}]} & \multicolumn{2}{l|}{Quantum programming languages \hspace{0.5mm} [Sec.\ref{subsec:QPL-survey}]} \\\cline{2-3} & \multicolumn{2}{l|}{Quantum
software engineering \hspace{0.5mm} [Sec.\ref{subsec:QSE}]} \\\cline{2-3} & \multicolumn{2}{l|}{Quantum software development environments \hspace{0.5mm} [Sec.\ref{subsec:IDE}]} \\ \hline {\bf Concluding Remarks} \hspace{0.4mm} [Sec.\ref{sec:conclusion}] & \multicolumn{2}{l|}{} \\ \hline \end{tabular} \end{table*} \section{Background} \label{sec:background} This section briefly introduces some basics of quantum mechanics, which form the basis of quantum computing. More detailed reading materials can be found in the books by Gruska~\cite{gruska1999quantum}, Nielsen \& Chuang~\cite{nielsen2002quantum}, and Nakahara~\cite{nakahara2008quantum}. Preskill's lecture notes~\cite{preskill2018lecture} are also a valuable resource. \subsection{Quantum Bit (Qubit)} \label{subsec:qubit} A classical bit is a binary unit of information used in classical computation. It can take two possible values, 0 or 1. A quantum bit (or qubit) is different from the classical bit in that its state is represented by a linear combination of two basis states in the quantum state space (a column vector of length 2). We can define two basis states |0$\rangle$ and |1$\rangle$, which can be described as follows: $$|0\rangle = \begin{bmatrix}1 \\0 \end{bmatrix} \hspace*{8mm} |1\rangle = \begin{bmatrix}0 \\1 \end{bmatrix}$$ \noindent States |0$\rangle$ and |1$\rangle$ are the computational basis states of the qubit. In other words, they form a basis of the quantum state space. Any qubit state |$e\rangle$ can be expressed as a linear combination of the two basis states as $\alpha$|0$\rangle$ + $\beta$|1$\rangle$, where $\alpha$ and $\beta$ are complex numbers and $|\alpha|^2+|\beta|^2 = 1$. This restriction is also called the {\it normalization condition}. For example, 0.6|0$\rangle$ + 0.8|1$\rangle$ is a qubit state, and $\frac{1-i}{2}|0\rangle + \frac{1+i}{2}|1\rangle$ is also a qubit state, but (1-$i$)|0$\rangle$ + (1+$i$)|1$\rangle$ is not a legitimate qubit state because it does not satisfy the normalization condition. Intuitively, qubits can be viewed as a superposition of classical bits. For example, 0.6|0$\rangle$ + 0.8|1$\rangle$ can be considered as a superposition state of |0$\rangle$ and |1$\rangle$, where the probability of observing |0$\rangle$ is $0.6^2 = 0.36$ and that of observing |1$\rangle$ is $0.8^2 = 0.64$. Note that $0.6^2 + 0.8^2 = 1$. \subsection{Quantum Gate} \label{subsec:q-gate} Just as a logic gate in a digital circuit can modify the state of a bit, a quantum gate can change the state of a qubit. A quantum gate can have only one input and one output (transition of a single quantum state), or it can have multiple inputs and multiple outputs (transition of multiple quantum states). The number of inputs and outputs should be equal because the operators need to be reversible, which means that no information can be lost in quantum computing. Here, we describe two quantum gates with a single input and output, and one quantum gate with multiple inputs and outputs. \subsubsection{\bf NOT Gate} \label{subsubsec:not} The NOT gate works on a single qubit. It exchanges the coefficients of the two basis vectors: $$NOT(\alpha |0\rangle + \beta |1\rangle) = \alpha |1\rangle + \beta |0\rangle$$ \noindent The quantum NOT gate is an extension of the NOT gate in classical digital circuits. A single-input, single-output quantum gate can be represented by a $2 \times 2$ matrix. The state of a qubit after passing through a quantum gate is obtained by left-multiplying the quantum state vector by the quantum gate matrix.
The quantum gate matrix corresponding to the NOT gate is $$X = \begin{bmatrix}0&1\\1&0\end{bmatrix}$$ \noindent Therefore, the result of a qubit passing through a NOT gate is $$X \begin{bmatrix}\alpha \\ \beta \end{bmatrix} = \begin{bmatrix}0&1\\1&0\end{bmatrix} \begin{bmatrix}\alpha \\ \beta \end{bmatrix} = \begin{bmatrix}\beta \\ \alpha \end{bmatrix}$$ \begin{figure*}[h!] \centerline{ \Qcircuit @C=0.8em @R=0.75em { \lstick{\ket{j_{1}}} & \gate{H} & \gate{R_{2}} & \gate{R_{3}} & \qw & \cdots & & \gate{R_n} & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \rstick{\ket{y_1}} \qw \\ \lstick{\ket{j_{2}}} & \qw & \ctrl{-1} & \qw & \qw & \qw & \qw & \qw & \gate{H} & \gate{R_2} & \qw & \cdots & & \gate{R_{n-1}} & \qw & \qw & \qw & \qw & \qw & \qw & \rstick{\ket{y_2}} \qw \\ \lstick{\ket{j_{3}}} & \qw & \qw & \ctrl{-2} & \qw & \qw & \qw & \qw & \qw & \ctrl{-1} & \qw & \qw & \qw & \qw & \gate{H} & \qw & \cdots & & \gate{R_{n-2}} & \qw & \rstick{\ket{y_3}} \qw \\ \lstick{\vdots } & & & & & \ddots & & & & & & \ddots & & & & & \ddots & & & & \rstick{\vdots } \\ \lstick{\ket{j_{n}}} & \qw & \qw & \qw & \qw & \qw & \qw & \ctrl{-4} & \qw & \qw & \qw & \qw & \qw & \ctrl{-3} & \qw & \qw & \qw & \qw & \ctrl{-2} & \gate{H} & \rstick{\ket{y_{n}}} \qw } } \caption{Quantum circuit for the quantum Fourier transform (QFT) algorithm.} \label{figure:QFT_circuit} \end{figure*} \subsubsection{\bf Hadamard Gate} \label{subsubsec:hadamard} The Hadamard gate also works on a single qubit; it recombines the coefficients of a quantum state into their sum and difference: $$H(\alpha |0\rangle + \beta |1\rangle) = \frac{\alpha + \beta}{\sqrt{2}}|0\rangle + \frac{\alpha - \beta}{\sqrt{2}}|1\rangle$$ This can be represented by a matrix as: $$H = \frac{\sqrt{2}}{2}\begin{bmatrix}1&1\\1&-1\end{bmatrix}$$ Although the Hadamard gate is not directly related to the AND and OR gates in classical digital circuits, it has important applications in many quantum algorithms. Interested readers can verify that after applying the Hadamard gate twice in a row, the quantum state returns to its original state; the NOT gate behaves in the same way. There can be an infinite variety of single-input, single-output quantum gates, as long as the result of left-multiplying the qubit state vector by the gate matrix still satisfies the normalization condition. \subsubsection{\bf Controlled NOT Gate} \label{subsubsec:controlled} Computer programs are full of conditional statements: if a condition holds, do one thing; otherwise, do another. In quantum computing, we also expect that the state of one qubit can be changed by another qubit, which requires a quantum gate with multiple inputs and outputs. The following is the controlled-NOT gate (CNOT gate). It has two inputs and two outputs. If the two qubits are taken as a whole, their state can be expressed by $$\alpha |00\rangle + \beta |01\rangle + \gamma |10\rangle + \theta |11\rangle$$ where $|00\rangle$, $|01\rangle$, $|10\rangle$, $|11\rangle$ are column vectors of length 4, which can be generated by taking tensor products of $|0\rangle$ and $|1\rangle$. This state also needs to satisfy the normalization condition, that is, $$|\alpha|^2 + |\beta|^2 + |\gamma|^2 + |\theta|^2 = 1$$ The CNOT gate is a two-qubit operation, where the first qubit is usually referred to as the control qubit and the second qubit as the target qubit.
When the control qubit is in state |0$\rangle$, the target qubit is left unchanged; when the control qubit is in state |1$\rangle$, a Pauli-X (NOT) gate is performed on the target qubit, while the control qubit itself remains unchanged. It can be expressed in mathematical formulas as follows: $$CNOT(\alpha |00\rangle + \beta |01\rangle + \gamma |10\rangle + \theta |11\rangle) = \alpha |00\rangle + \beta |01\rangle + \gamma |11\rangle + \theta |10\rangle$$ \noindent The action of the CNOT gate can be represented by the following matrix: $$CNOT = \begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&0&1\\0&0&1&0\end{bmatrix}$$ \begin{figure*}[t] \centerline{\includegraphics[width=0.8\linewidth]{architecture.png}} \caption{The architecture of a quantum computing system in~\cite{sodhi2018quality}.} \label{fig:architecture} \end{figure*} \subsection{Quantum Circuit} \label{subsec:q-circuit} Quantum circuits, also known as quantum logic circuits, are the most commonly used general-purpose quantum computing models; they abstractly represent circuits that operate on qubits. A quantum circuit is a collection of interconnected quantum gates. The actual structure of a quantum circuit, the number and type of gates, as well as the interconnection scheme, are all determined by the unitary transformation $U$ performed by the circuit. The result of a quantum circuit can be read out through quantum measurements. As an example, Figure~\ref{figure:QFT_circuit} shows the quantum circuit for the quantum Fourier transform (QFT) algorithm. The quantum gates used in the circuit are the Hadamard gate and the controlled phase gate $R_{m} = \begin{bmatrix}1 & 0\\0 & e^{\frac{2{\pi}i}{2^{m}}}\end{bmatrix}$. The black dots represent the control qubits. \subsection{Superposition and Entanglement} \label{subsec:q-entanglement} Quantum computers use the laws of quantum mechanics to provide a computation mechanism that is significantly different from classical machines. The first distinguishing feature of a quantum system is known as {\it superposition}~\cite{dirac1981principles,zeilinger1999experiment}, or, more formally, the superposition principle of quantum mechanics. A quantum system can be in all its possible states at once, rather than existing in one distinct state at any given time. For a quantum computer, this means that a quantum register exists in a superposition of all its possible configurations of 0's and 1's, which is different from a classical system, whose register contains only one value at any given time. Only when the system is observed does it collapse into an observable, deterministic classical state. Quantum systems may also exhibit {\it entanglement}~\cite{einstein1935can,schrodinger1935discussion}, which is a quantum mechanical phenomenon. A state is considered to be entangled if it cannot be broken down into its more basic parts. In other words, two distinct elements of a system are entangled if one part cannot be described without considering the other. A particularly interesting feature of quantum entanglement is that elements of a quantum system may be entangled even if they are separated by considerable space. Thus, measurements performed on one system can instantaneously influence other systems entangled with it. Quantum entanglement has applications in quantum computation~\cite{shor1999polynomial,nielsen2002quantum}, quantum cryptography~\cite{bennett2014quantum}, and quantum teleportation~\cite{gottesman1999quantum,jin2010experimental}.
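The gates introduced above can be checked numerically with a few lines of linear algebra. The following NumPy sketch is purely illustrative (it is not tied to any of the quantum programming frameworks surveyed later): it verifies that applying the Hadamard gate twice restores the original state, and shows how a Hadamard gate followed by a CNOT gate turns $|00\rangle$ into the entangled state $(|00\rangle + |11\rangle)/\sqrt{2}$.
\begin{verbatim}
import numpy as np

# Computational basis states and elementary gates as matrices
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)                      # NOT gate
H = (1 / np.sqrt(2)) * np.array([[1, 1], [1, -1]], dtype=complex)  # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Applying the Hadamard gate twice returns the original state
psi = 0.6 * ket0 + 0.8 * ket1
assert np.allclose(H @ (H @ psi), psi)

# Two-qubit states are built with the tensor (Kronecker) product
ket00 = np.kron(ket0, ket0)

# Hadamard on the control qubit, then CNOT: |00> becomes a Bell state
bell = CNOT @ np.kron(H, np.eye(2)) @ ket00
print(bell)               # approximately [0.707, 0, 0, 0.707]
print(np.abs(bell) ** 2)  # outcome probabilities: 0.5 for |00>, 0.5 for |11>
\end{verbatim}
The resulting Bell state cannot be written as a product of two single-qubit states, which is exactly the entanglement property described above.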
\subsection{Quantum Measurement} \label{subsec:q-measurement} From the introduction of the quantum gate above, we can see that a qubit is the superposition state of two quantum states |0$\rangle$ and |1$\rangle$; two qubits combined into a whole are in a superposition of four quantum states |00$\rangle$, |01$\rangle$, |10$\rangle$, and |11$\rangle$. From this analogy, $n$ qubits can be described by a superposition of $2^n$ basis states, which is a huge advantage over $n$ classical bits that have only one fixed state. However, the laws of physics also impose limitations: in order to know the exact state of a qubit, a measurement is needed, and the measurement causes the superposition state to collapse into a deterministic state. The information contained in a qubit after measurement is similar to a classical bit, and its value can only be 0 or 1. A superposed state can collapse to 0 or 1, the probability of which is determined by the coefficients $\alpha$ and $\beta$ in the superposed state. The probability that this superposed state collapses to |0$\rangle$ or |1$\rangle$ is $|\alpha|^2$ and $|\beta|^2$, respectively. Similarly, for a two-qubit system, the probability that the measurement results in |00$\rangle$, |01$\rangle$, |10$\rangle$, |11$\rangle$ is ${|\alpha|}^2$, ${|\beta|}^2$, ${|\gamma|}^2$, and ${|\theta|}^2$, respectively. This physical law causes the results of quantum computing to be non-deterministic. In actual applications, additional means are needed to verify the correctness of the output. For a quantum algorithm, if the superposition of the qubits can be narrowed down as much as possible in the step before measurement, correct results can be obtained with high probability. \subsection{Quantum Algorithm} \label{subsec:q-algorithm} Quantum algorithms are designed to solve classical problems in a probabilistic manner~\cite{montanaro2016quantum}. For example, the phase estimation algorithm (QPE)~\cite{kitaev2002classical} takes a unitary matrix $U$ and an eigenvector $|\psi\rangle$ as inputs, and calculates the eigenvalue of $U$ corresponding to $|\psi\rangle$. Another example is Grover's search algorithm~\cite{grover1996fast}: given a sparse non-zero function $f: \{0, \ldots, 2^{n}-1\} \rightarrow \{0,\ 1\}$, Grover's algorithm outputs, with high probability, a value $x$ such that $f(x) = 1$, using fewer queries than brute-force search in general. Quantum algorithms are usually designed to solve (classical) problems more efficiently than existing classical algorithms. A quantum algorithm is generally realized on a quantum circuit designed according to the parameters of the problem (such as the size of the instance), usually by iterating the following three steps: \begin{itemize} \item[(1)]{\it Memory initialization}: preparing the initial state (basis state) $|x\rangle$ with $x \in \mathbb{B}^{n}$ (classical register), \item[(2)]{\it Operation of the quantum circuit}: applying a quantum circuit consisting of polynomially many (in $n$) gates from some universal gate set, and \item[(3)]{\it Performing an elementary measurement}: measuring the quantum state to retrieve classical data. \end{itemize} Quantum circuits are regarded as a predictive tool that can provide some (classical) information probabilistically, from which the target result can be inferred. The fact that the probability is high enough is a direct consequence of the mathematical properties of the unitary mapping described by the quantum circuit.
The essence of quantum algorithms (and the reason for their efficiency) is the description of an efficient circuit that implements this unitary mapping; therefore, finding such a circuit is crucial in order to obtain the efficiency of the quantum algorithm. \begin{table*}[h] \begin{threeparttable} \centering \caption{A Brief and Historical Summary of Quantum Programming Languages} \label{table:QPLsummary} \renewcommand\arraystretch{1.0} \small \begin{tabular}{|c|c|c|c|c|c|} \hline {\bf Year} & {\bf Language} & {\bf Reference(s)} & {\bf Semantics} & {\bf Host Language} & {\bf Paradigm} \\ \hline 1996 & Quantum Lambda Calculi & \cite{maymin1996extending} & Denotational & Lambda calculus & Functional \\\hline 1998 & QCL & \cite{omer1998procedural,omer2000quantum,omer2003structured,omer2005classical} & & C & Imperative \\\hline 2000 & qGCL &\cite{sanders2000quantum,zuliani2001quantum,zuliani2001formal,zuliani2004non} & Operational & Pascal & Imperative \\\hline 2003 & $\lambda_{q}$ & \cite{van2003quantum,van2004lambda} & Operational & Lambda calculus & Functional \\\hline 2003 & Q language & \cite{bettelli2002architecture,bettelli2003toward} & & C++ & Imperative \\\hline 2004 & QFC (QPL) & \cite{selinger2004towards,selinger2004towards+,selinger2006lambda} & Denotational & Flowchart syntax (Textual syntax) & Functional \\\hline 2005 & QPAlg & \cite{jorrand2004quantum,lalire2004process} & & Process calculus & Other \\\hline 2005 & QML & \cite{altenkirch2005functional,altenkirch2005qml,grattage2006qml} & Denotational & Syntax similar to Haskell & Functional \\\hline 2004 & CQP & \cite{gay2004communicating,gay2005communicating,gay2006quantum} & Operational & Process calculus & Other \\\hline 2005 & cQPL &\cite{mauerer2005semantics} & Denotational & & Functional \\\hline 2006 & LanQ & \cite{mlnarik2006introduction,mlnarik2007quantum,mlnarik2007operational,mlnavrik2008semantics} & Operational & C & Imperative \\\hline 2008 & NDQJava & \cite{xu2008quantum} & & Java & Imperative \\\hline 2009 & Cove & \cite{purkeypile2009cove} & & C\# & Imperative \\\hline 2011 & QuECT & \cite{chakraborty2011quect} & & Java & Circuit \\\hline 2012 & Scaffold & \cite{abhari2012scaffold,javadiabhari2015scaffcc} & & C (C++)& Imperative \\\hline 2013 & QuaFL & \cite{lapets2013quafl}& & Haskell & Functional \\\hline 2013 & Quipper & \cite{green2013introduction,green2013quipper} & Operational & Haskell & Functional \\\hline 2013 & Chisel-Q & \cite{liu2013chisel} & & Scala & Imperative, functional \\\hline 2014 & LIQUi|$\rangle$ & \cite{wecker2014liqui} & Denotational & F\# & Functional \\\hline 2015 & Proto-Quipper & \cite{ross2015algebraic,rios2017categorical} & & Haskell & Functional \\\hline 2016 & QASM & \cite{pakin2016quantum} & & Assembly language & Imperative \\\hline 2016 & FJQuantum & \cite{feitosa2016fjquantum} & & Feather-weight Java & Imperative \\\hline 2016 & ProjectQ & \cite{haner2016high,projectq2017projectq,steiger2018projectq} & & Python & Imperative, functional \\\hline 2016 & pyQuil (Quil) & \cite{smith2016practical} & & Python & Imperative \\\hline 2017 & Forest & \cite{smith2016practical,regetti2017forest} & & Python & Declarative \\\hline 2017 & OpenQASM & \cite{cross2017open} & & Assembly language & Imperative \\\hline 2017 & qPCF &\cite{paolini2017mathsf,paolini2019qpcf}& & Lambda calculus & Functional \\\hline 2017 & QWIRE &\cite{paykin2017qwire} & & Coq proof assistant & Circuit \\\hline 2017 & cQASM & \cite{khammassi2018cqasm} & & Assembly language &
Imperative \\\hline 2017 & Qiskit & \cite{ibm2017qiskit,gadi_aleksandrowicz_2019_2562111} & & Python & Imperative, functional \\\hline 2018 & IQu & \cite{paolini2019quantum} & & Idealized Algol & Imperative \\\hline 2018 & Strawberry Fields & \cite{killoran2018strawberry,killoran2019strawberry} & & Python & Imperative, functional \\\hline 2018 & Blackbird & \cite{killoran2018strawberry,killoran2019strawberry} & & Python & Imperative, functional \\\hline 2018 & QuantumOptics.jl & \cite{kramer2018quantumoptics} & & Julia & Imperative \\\hline 2018 & Cirq & \cite{cirq2018google} & & Python & Imperative, functional \\\hline 2018 & Q\# & \cite{svore2018q} & & C\# & Imperative \\\hline 2018 & $Q|SI\rangle$ & \cite{liu2018q} & & .Net language & Imperative \\\hline 2020 & Silq & \cite{bichsel2020sliq} & & Python & Imperative, functional \\\hline \end{tabular} \begin{tablenotes} \item[$\ast$]{\bf Year}: The year the language was introduced, \item[$\ast$]{\bf Language}: The name of the language, \item[$\ast$]{\bf Reference(s)}: The main reference paper(s) of the language, \item[$\ast$]{\bf Semantics}: The type(s) of semantics for the language described by the authors in the reference(s), \item[$\ast$]{\bf Host language}: The classical language on (or to) which the language is based (or extended), \item[$\ast$]{\bf Paradigm}: We consider each language to belong to one of three types of paradigms: {\it imperative language}, {\it functional language}, and {\it circuit design language}. \end{tablenotes} \end{threeparttable} \end{table*} \subsection{Quantum Computing} \label{subsec:q-computing} The basic procedure of quantum computing is as follows: \begin{itemize} \item Start with a set of computational basis states (each qubit is initialized to $|0\rangle$ or $|1\rangle$ as input to the computation); \item Pass the qubits through a series of quantum gates according to a predetermined algorithm; \item A series of bits is obtained as a result of quantum measurement. \end{itemize} Note that the realization of each quantum gate requires precise physical manipulation of the underlying qubits. In~\cite{sodhi2018quality}, a general architecture of a quantum computing system has been proposed. As shown in Figure~\ref{fig:architecture}, the architecture comprises two parts: a {\it quantum layer} and a {\it classical layer}. The quantum layer contains purely quantum hardware and circuitry, and can be considered as comprising the quantum processing unit (QPU). The detailed composition of this layer is listed as follows. \begin{itemize} \item[$\bullet$] {\it Physical building blocks} include quantum hardware that typically makes use of superconducting loops for the physical realization of qubits, and the physical qubit coupler/interconnect circuitry and other elements that are needed for qubit addressing and control operations. \item[$\bullet$] {\it Quantum logic gates} comprise the physical circuitry that implements the quantum logic gates. \item[$\bullet$] {\it Quantum-classical interface} includes the hardware and software that provide the interface between classical computers and a QPU. \end{itemize} The classical layer consists of classical hardware and software, as shown in the following: \begin{itemize} \item[$\bullet$] {\it Quantum programming environment} provides the quantum assembly language that is necessary for instructing a QPU, the programming abstractions for writing quantum programs in a high-level programming language, and the simulator support, as well as IDEs.
\item[$\bullet$] {\it Business applications} include quantum software applications that are written to cater to business requirements. \end{itemize} \section{Quantum Software Engineering} \label{sec:QSD} Quantum computing is not only a technological advancement, but can also be considered a new general-purpose paradigm for software development, which can radically influence the way a software system is conceived and developed. This calls for new, quantum-specific software engineering approaches. In this section, we first introduce quantum programming and define quantum software engineering. This is followed by an introduction to a quantum software life cycle for quantum software development, which serves as a baseline for discussing the state of the art of quantum software engineering activities in the rest of the paper. \subsection{Quantum Programming} \label{subsec:q-programming} Quantum programming is the process of designing and building executable quantum computer programs to achieve a particular computing result~\cite{miszczak2012high,ying2016foundations}. Since quantum programming efforts predate the other quantum software development techniques, we focus first on quantum programming. Here, we briefly introduce the concepts, languages, and semantics of quantum programming. An excellent book written by Ying~\cite{ying2016foundations} covers similar material to this section, with more detailed discussions of the fundamentals of quantum programming. \subsubsection{\bf Concepts of Quantum Programming} \label{subsubsec:concept} A quantum program consists of blocks of code, each of which contains classical and quantum components. Quantum operations can be divided into {\it unitary} operations (which are reversible and preserve the norm of the operands) and {\it non-unitary} operations (which are not reversible and have probabilistic implementations). A quantum program executed on a quantum computer uses a quantum register of qubits to perform quantum operations, and a classical register of classical bits to record the measurements of the qubits' states and apply quantum operators conditionally~\cite{cross2017open}. Therefore, a typical quantum program usually consists of two types of instructions. One type, {\it classical instructions}, operates on the state of classical bits and applies conditional statements. The other, {\it quantum instructions}, operates on the state of qubits and measures the qubit values. \subsubsection{\bf Languages for Quantum Programming} \label{subsubsec:QPL-qpl} Early quantum programming language development efforts focused on exploring the quantum Turing Machine (QTM) model proposed by Deutsch~\cite{deutsch1985quantum}, but did not result in practical tools for programming quantum computers. This situation made the quantum circuit model quickly become the driving force for quantum programming. To move towards a practical language (rather than just designing circuits), Knill~\cite{knill1996conventions,knill2000encyclopedia} proposed a pseudocode notation for quantum programming and the model of a quantum random-access machine (QRAM), in which the quantum system is controlled by a classical computer. This model influenced the design of subsequent quantum programming languages. {\"O}mer~\cite{omer1998procedural,omer2000quantum,omer2003structured,omer2005classical} developed the first practical quantum programming language, QCL, with a C-like syntax in 1998.
Since then, many quantum programming languages have been designed and implemented in terms of different types of language paradigms for programming quantum computers, including qGCL~\cite{sanders2000quantum,zuliani2001quantum,zuliani2001formal,zuliani2004non}, LanQ~\cite{mlnarik2006introduction,mlnarik2007quantum}, Scaffold~\cite{abhari2012scaffold,javadiabhari2015scaffcc}, Q language~\cite{bettelli2002architecture,bettelli2003toward}, NDQJava~\cite{xu2008quantum}, Q\#~\cite{svore2018q}, $Q|SI\rangle$~\cite{liu2018q}, ProjectQ~\cite{haner2016high,steiger2018projectq}, and Qiskit~\cite{gadi_aleksandrowicz_2019_2562111} for imperative quantum programming languages. Quantum lambda calculi~\cite{maymin1996extending}, QFC (QPL)~\cite{selinger2004towards,selinger2004brief,selinger2006lambda}, QML~\cite{altenkirch2005functional,altenkirch2005qml,grattage2006qml}, cQPL~\cite{mauerer2005semantics}, ~\cite{vizzotto2005concurrent}, QuaFL~\cite{lapets2013quafl}, Quipper~\cite{green2013introduction,green2013quipper}, LIQUi|>~\cite{wecker2014liqui}, qPCF~\cite{paolini2017mathsf, paolini2019qpcf}, and IQu~\cite{paolini2019quantum} for functional quantum programming languages, and QPAlg~\cite{jorrand2004quantum}, qASM~\cite{pakin2016quantum}, QuECT~\cite{chakraborty2011quect}, QWire~\cite{paykin2017qwire}, Quil~\cite{smith2016practical} for other quantum programming languages paradigms. Table~\ref{table:QPLsummary} gives a summary of quantum programming languages since it first emerged in 1996. In the table, we summarize each quantum programming language according to six categories: {\it year}, {\it language}, {\it reference}, {\it semantics}, {\it host language}, and {\it language paradigm}. For more information regarding quantum programming languages, one can also refer to survey papers discussed in Section~\ref{sec:work}. \subsubsection{\bf Semantics for Quantum Programming} \label{subsubsec:QPL-semantics} Semantics for quantum programming have also been studied extensively, and recently many approaches~\cite{selinger2004towards,brunet2004dynamic,feng2005semantics,mauerer2005semantics,mlnarik2007operational, mlnavrik2008semantics,gay2010semantic,ying2012quantum,pagani2014applying,cho2016semantics,ying2016foundations,hasuo2017semantics,clairambault2019game} have been proposed for describing the semantics of quantum programming languages. These approaches can be classified into three categories: {\it operational semantics}, {\it denotational semantics}, and {\it axiomatic semantics}. Although there is no survey for the semantics of quantum programming languages, readers can also refer to some survey papers~\cite{selinger2004brief,gay2005bibliography,jorrand2007programmer,ying2012quantum} on quantum programming languages for the detailed discussions of the quantum language semantics issues. \subsection{Definition of Quantum Software Engineering} \label{subsec:QSE-definition} While quantum programming languages are exciting developments, coding is not the primary source of problems in quantum software development. Requirements and design problems are much more common and costly to correct. Therefore, the focus on quantum software development techniques should not be limited to quantum programming issues, but should also focus on other aspects of quantum software engineering. The promise quantum software development methodologies hold for overcoming complexity during analysis and design and accomplishing analysis and design reuse is really significant. 
If it is accepted that quantum software development is more than quantum programming, then a whole new approach, including a life cycle, must be adopted to address the other aspects of quantum software development. Software engineering is a problem-solving process with roots in behavioral science, engineering, project management, computer science, programming, cost management, and mathematics. According to~\cite{fuggetta2000software}, the software engineering process can be defined as follows: \vspace*{2mm} \noindent {\it "A software process can be defined as the coherent set of policies, organizational structures, technologies, procedures, and artifacts that are needed to conceive, develop, deploy, and maintain a software product."} \vspace*{2mm} \noindent For the definition of software engineering, the IEEE has developed a 'standard' definition~\cite{ieee1990ieee} for classical software engineering as follows: \vspace*{2mm} \noindent {\it "(1) The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software. (2) The study of approaches as in (1)."} \vspace*{2mm} Earlier, Fritz Bauer~\cite{naur1969software} had stated the definition of software engineering as follows: \vspace*{2mm} \noindent {\it "The establishment and use of sound engineering principles in order to obtain economically software that is reliable and works efficiently on real machines."} \vspace*{2mm} \noindent Although many definitions of software engineering~\cite{boehm1976software,andriole1993software,ieee1990ieee,finkelsteiin2000software,sommerville2011software,webster2020software} have been developed since then, Bauer's definition is still widely accepted~\cite{pressman2010software}, and it serves as the basis for our definition of quantum software engineering in this paper. In this paper, we define quantum software to include not only a set of executable quantum programs but also the associated supporting libraries and documents needed to develop, operate, and maintain quantum software. By defining quantum software from this broader perspective, we hope to emphasize the necessity of considering timely documentation as an integral part of the quantum software development process. Inspired by the definitions of classical software engineering, we define quantum software engineering as follows: \vspace*{2mm} \noindent {\it "Quantum software engineering is the use of sound engineering principles for the development, operation, and maintenance of quantum software and the associated documents to obtain economically quantum software that is reliable and works efficiently on quantum computers."} \vspace*{2mm} \noindent In this definition, we would like to highlight three important issues in quantum software engineering. First, it is important to apply "{\it sound engineering principles}" to quantum software development. Second, the quantum software should be built "{\it economically}." Finally, the quantum software should be "{\it reliable}" and needs to work "{\it efficiently}" on quantum computers. \subsection{Quantum Software Engineering Methods, Tools, and Processes} \label{subsec:QSE-process} Quantum software engineering, like its classical counterpart~\cite{dorfman1997software}, can be considered a layered technology, which contains three elements: {\it methods}, {\it tools}, and {\it processes}. Quantum software engineering {\it methods} provide the techniques for constructing quantum software.
They consist of a wide range of tasks, including the design of data structures, program architecture, and algorithm procedures, as well as coding, testing, and maintenance. Quantum software engineering {\it tools} provide automated or semi-automated support for these methods. Quantum software engineering {\it processes} are the foundation of quantum software engineering. Processes provide the glue that holds the methods and tools together and enable the rational and timely development of quantum software. They define the sequence in which methods are applied, the deliverables, the controls that support quality assurance and change coordination, and the milestones that enable quantum software managers to assess progress. Different ways of combining these three elements of quantum software engineering lead to different quantum software engineering models. The selection of a suitable model should be based on the nature of the project and the application, the methods and tools to be used, and the controls and deliverables that are required. Three typical examples discussed in~\cite{pressman2010software} for classical software engineering are the classical life cycle (or waterfall model), the prototyping model, and the evolutionary model. Each of these models could be extended to the domain of quantum computing to support the development of quantum software systems. In Section~\ref{subsec:QSLC}, we will introduce a life cycle for quantum software, which is inspired by the idea of the waterfall model from classical software engineering. \begin{figure*}[t] \centerline{\includegraphics[width=0.75\linewidth]{waterfall.png}} \caption{A quantum software life cycle.} \label{fig:c-lifecycle} \end{figure*} \subsection{A Generic View of Quantum Software Engineering} \label{subsec:QSE-view} Pressman~\cite{pressman2010software} introduced a generic view of classical software engineering, which can be extended to the field of quantum software engineering. Quantum software engineering can be regarded as a branch of systems engineering~\cite{everitt2016quantum,chestnut1967systems}, which involves the development of large and complex quantum software systems. To this end, the following issues must be discussed. \begin{itemize}[leftmargin=2em] \item What is the problem to be solved by quantum software? \item What characteristics of the quantum software are used to solve the problem? \item How will the quantum software (and the solution) be implemented? \item What method will be used to detect errors made in the design and construction of the quantum software? \item How will the quantum software be supported over the long term, as users request corrections, adaptations, and enhancements? \end{itemize} To adequately engineer quantum software, a quantum software engineering process must be defined. In this section, we consider the generic characteristics of the quantum software process. The work related to quantum software engineering can be categorized into three generic phases: the {\it definition}, {\it development}, and {\it support} phases. These phases are independent of the application domain, project size, or complexity. The definition phase focuses on {\it what}: the issues that quantum software developers must determine.
These issues include topics such as what information to process, what functions and performance are required, what interfaces to build, what design constraints exist, and what verification standards are needed to define a successful quantum software system. At this stage, there may be three sub-processes, including quantum software system analysis, quantum software project planning, and quantum software requirements analysis. The development phase focuses on {\it how}, which deals with the problems that quantum software developers try to describe. These issues include how to design quantum software architecture and related data structures, how to implement quantum program details, how to translate quantum software design into a quantum programming language, and how to conduct quantum software testing. This phase may also contain some sub-processes, including quantum software design, quantum software implementation, and quantum software testing. The maintenance phase focuses on {\it change}, which is related to error correction, adaptions required as the quantum software environment evolves, and modifications due to enhancements brought about by changes in customer requirements. The maintenance phase reapplies the definition and development phases, but it is carried out in the context of existing quantum software. \begin{table*}[h] \centering \caption{Brief Summary of the Patterns for Quantum Algorithms in~\cite{leymann2019towards}} \label{table:pattern} \renewcommand\arraystretch{1.1} \small \begin{tabular}{|l|p{9cm}|} \hline \multicolumn{1}{|c|}{\bf Pattern Type} & \multicolumn{1}{c|}{\bf Description} \\ \hline (1) Initialization (aka state preparation) & Initializing the input of a quantum register in a straightforward manner \\ \hline (2) Uniform superposition & \tabincell{l}{Creating an equally weighted superposition of all possible states of the qubits\\ of a quantum register} \\ \hline (3) Creating entanglement & Creating an entangled state \\ \hline (4) Function table & Computing a function table of a finite Boolean function \\ \hline (5) Oracle (aka black box) & Reusing the computation of another quantum algorithm \\ \hline (6) Uncompute (aka unentangling aka copy-uncompute) & Removing entanglement that resulted from a computation \\ \hline (7) Phase shift & Distinguishing important aspects of a state efficiently \\ \hline (8) Amplitude amplification & Increasing the probability of finding a solution \\ \hline (9) Speedup via verifying & Achieving a speedup when verifying a solution is simple \\ \hline (10) Quantum-classic split & Splitting a solution between a quantum computer and a classical computer \\ \hline \end{tabular} \end{table*} \subsection{\bf Quantum Software Life Cycle} \label{subsec:QSLC} A software life cycle model can be defined as a reference model for a software development process that supports the design and construction of high-quality software~\cite{ghezzi2002fundamentals}. A software life cycle provides a well-structured flow of phases that can help an organization to quickly produce high-quality software that is well-tested and ready for production use. Several software life cycle models have been proposed for classical software development, including the waterfall model~\cite{royce1970managing}, evolutionary model~\cite{hirsch1985evolutionary}, and the Spiral model~\cite{boehm1988spiral}. Among them, the most widely accepted life cycle model for classical software development is the waterfall life cycle model, which is sometimes called the classical model. 
Other models are often refinements of it. As a first step, this paper introduces a systematic, sequential life cycle model for the quantum software development process to support the design and construction of quantum software systems. A quantum software life cycle model begins at the system level and progresses through requirements analysis, design, implementation, testing, and maintenance. The model, however, is extensible in the sense that one can add more phases to it if necessary. Figure~\ref{fig:c-lifecycle} shows the life cycle, which encompasses the following phases, each flowing into the next. \begin{itemize}[leftmargin=3em] \item {\it Quantum software requirements analysis} \item {\it Quantum software design} \item {\it Quantum software implementation} \item {\it Quantum software testing} \item {\it Quantum software maintenance} \end{itemize} In the following, we briefly introduce each phase of the life cycle model from the perspective of quantum software development. \subsubsection{\bf Quantum Software Requirements Analysis} The quantum software life cycle model begins with the requirements analysis phase, in which the stakeholders discuss the requirements of the software that needs to be developed to achieve a goal. The requirements analysis phase aims to capture the detail of each requirement and to make sure that everyone understands the scope of the work and how each requirement is going to be fulfilled. The analysis creates a set of measurable quantum software requirements that specify, from the supplier's perspective, what characteristics, attributes, and functional and performance requirements the quantum software system is to possess in order to satisfy stakeholder requirements. Later life cycle phases, including design, implementation, testing, and maintenance, assume that requirements analysis continues throughout the quantum software life cycle. \subsubsection{\bf Quantum Software Design} Quantum software design is the second phase in the life cycle; it is the process of transforming user requirements into a form that guides the programmer during quantum software implementation (coding). As in classical software design~\cite{dorfman1997software}, quantum software design may involve two stages: {\it architectural design} and {\it detailed design}. {\it Architectural design} defines a collection of quantum software components, their functions, and their interactions (interfaces) to establish a framework for the development of a quantum software system. {\it Detailed design} refines and expands the architectural design to describe the internals of the quantum software components (the algorithms, processing logic, data structures, and data definitions). Detailed design is complete when the description is sufficient for implementation (coding). \subsubsection{\bf Quantum Software Implementation} After completing the requirements and design activities, the next phase of the life cycle is the implementation or development of quantum software. In this phase, developers start coding based on the requirements and design discussed in the previous phases. Developers also perform unit testing on each component to test the new code they write, review each other's code, create builds, and deploy the quantum software to the environment. This development cycle is repeated until the requirements are met. \subsubsection{\bf Quantum Software Testing} Testing is the last phase of the quantum software life cycle before the software is delivered to customers.
During testing, testers start to test the quantum software according to the requirements. The aim is to find defects within the software as well as to verify whether the software behaves as expected based on the documentation from the quantum software requirements analysis phase. \subsubsection{\bf Quantum Software Maintenance} Quantum software maintenance is the last phase of the quantum software life cycle, which represents all the modifications and updates made after the delivery of quantum software products. \section{Quantum Software Requirements Analysis} \label{sec:requirement} To the best of our knowledge, until recently, no research work has been proposed to address the issues on quantum software requirements analysis. However, as quantum software development resources are going to be accumulated, we believe that the development of methodologies, techniques, and tools to support quantum software requirements analysis will become a critical and inevitable issue. \begin{table*}[h] \begin{threeparttable} \centering \caption{The Effects of QCS characteristics on Quality Attributes~\cite{sodhi2018quality}} \label{table:QAs} \renewcommand\arraystretch{1.1} \small \begin{tabular}{|l|p{0.3cm}|p{0.3cm}|p{0.3cm}|p{0.3cm}|p{0.3cm}|p{0.3cm}|p{0.3cm}|p{0.3cm}|p{0.3cm}|p{0.3cm}|} \hline {\bf QCS Characteristics}\hspace*{2.6cm}\rotatebox{90}{\tabincell{l}{\bf{Quality}\\\bf{Attributes}}} & \rotatebox{90}{Availability} & \rotatebox{90}{Interoperability} & \rotatebox{90}{Maintainability} & \rotatebox{90}{Manageability} & \rotatebox{90}{Performance} & \rotatebox{90}{Reliability} & \rotatebox{90}{Scalability} & \rotatebox{90}{Security} & \rotatebox{90}{Testability} & \rotatebox{90}{Usability} \\ \hline {Platform heterogeneity} & U & $-$ & U & U & $-$ & U & $-$ & $-$ & U & $-$ \\ \hline {Special physical environment} & U & $-$ & $-$ & U & $-$ & U & U & U & U & $-$ \\ \hline {Large form factor} & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & U & $-$ & $-$ & $-$ \\ \hline {Higher energy efficiency} & $-$ & $-$ & $-$ & $-$ & F & $-$ & F & $-$ & $-$ & $-$ \\ \hline {Lower level of the programming abstractions} & U & $-$ & U & $-$ & $-$ & U & $-$ & $-$ & U & $-$ \\ \hline {Remote software development and deployment} & $-$ & $-$ & U & $-$ & $-$ & $-$ & $-$ & $-$ & U & $-$ \\ \hline {Dependency on quantum algorithms} & $-$ & U & U & $-$ & F & $-$ & F & $-$ & U & $-$ \\ \hline {Limited portability of software} & U & U & U & $-$ & $-$ & $-$ & U & $-$ & $-$ & $-$ \\ \hline {Limited quantum networking} & U & $-$ & $-$ & $-$ & U & U & U & $-$ & $-$ & $-$ \\ \hline {Lack of native quantum operating system} & $-$ & $-$ & $-$ & U & U & U & U & U & $-$ & $-$ \\ \hline {Fundamentally different programming model} & $-$ & U & U & $-$ & $-$ & U & $-$ & U & U & $-$ \\ \hline {Dependency on classical storage} & $-$ & $-$ & $-$ & U & U & U & U & $-$ & $-$ & $-$ \\ \hline \end{tabular} \begin{tablenotes} \item[$\ast$] Cell value indicates an impact on quality attributes: {\it "F":~Favorable, "U":~Unfavorable, "$-$":~Unknown/Neutral}. \end{tablenotes} \end{threeparttable} \end{table*} \section{Quantum Software Design} \label{sec:design} Developing quantum algorithms is considered extremely difficult comparing to classical counterparts due to some intrinsic features of quantum computing, such as superposition and entanglement. Hence, new design principles for quantum algorithms are strongly demanded. 
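To make these two features concrete, the short sketch below (a self-contained NumPy simulation of our own, not part of any design notation discussed in this section) prepares the entangled Bell state $(|00\rangle + |11\rangle)/\sqrt{2}$ and computes the reduced state of one of its qubits. The result is maximally mixed: neither qubit has a meaningful description in isolation, which is one reason why the component-wise design intuition of classical software does not carry over directly.

\begin{verbatim}
import numpy as np

# Bell state (|00> + |11>)/sqrt(2): a superposition that is also entangled.
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)

rho = np.outer(bell, bell.conj())          # density matrix of the 2-qubit state

# Reduced state of the first qubit: partial trace over the second qubit,
# using the index convention 2*i + b for the basis state |i b>.
rho_q0 = np.zeros((2, 2), dtype=complex)
for b in range(2):
    for i in range(2):
        for j in range(2):
            rho_q0[i, j] += rho[2 * i + b, 2 * j + b]

print(np.round(rho_q0, 3))                    # 0.5 * identity
print(np.real(np.trace(rho_q0 @ rho_q0)))     # purity 0.5: maximally mixed
\end{verbatim}

All the information in the Bell state lives in the correlations between the two qubits, not in either qubit alone.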
Although software design is recognized as a critical phase in classical software engineering, research~\cite{Perez-Delgado2020quantum,cartiere2016quantum,thompson2018quantum} on the principles and methodologies for quantum software design is only just starting. This section gives an overview of the state of the art in quantum software design. \subsection{Quantum Software Modelling} \label{subsec:d-modelling} The Unified Modeling Language (UML)~\cite{boochunified,rurnbaughunified,jacobson1999unified} is a general-purpose, well-known modeling language in the field of classical software engineering. It provides a standard way to visualize software designs throughout the classical software development life cycle. It therefore seems reasonable to extend the UML approach to support quantum system design. Recently, P\'{e}rez-Delgado and Perez-Gonzalez~\cite{Perez-Delgado2020quantum} presented an approach to extending the UML to model quantum software systems. The extension covers two types of UML diagrams: the {\it class diagram} and the {\it sequence diagram}. Extensions for the other three components (i.e., composite structure diagrams, activity diagrams, and state machine diagrams) still need to be studied. They also shared three fundamental observations from their work: \begin{itemize} \item The intrinsic difference between quantum and classical computation lies in how a computation achieves its goal, \item Quantum computation changes the very nature of information itself, and is much more productive and powerful than classical computation, and \item The classical vs. quantum nature of the information used by a module is an important consideration when discussing its internal implementation and interface. \end{itemize} Their observations, though not complete, can serve as a basis for further study of design methodologies for quantum software systems. \subsection{Quantum Software Specification} \label{subsec:d-specification} Quantum computing relies on quantum mechanics, a subject more familiar to physicists than to computer scientists and software engineers. Thus, we must be aware of the underlying theory before we reason about quantum computers, even though conveying the principles of quantum mechanics is not an easy task owing to their counter-intuitive nature. At present, reasoning about quantum computing typically relies on a mixture of linear algebra and Dirac notation, which again suits physicists better than computer scientists and software engineers~\cite{cartiere2016quantum}. Therefore, it is necessary to provide a more "intuitive" way to think about and write quantum algorithms, thereby simplifying the design and implementation of quantum software. This can be achieved, for example, by introducing a specification language that adopts the symbolism and reasoning principles of software engineering and takes on the role of Hilbert space algebra, allowing us to describe quantum structures and design quantum algorithms in a more natural way~\cite{grattage2006qml}. To this end, Cartiere~\cite{carmelo2013quantum,cartiere2016quantum} presented some work on defining a formal specification language for quantum algorithms. The goal is a formal specification language that can represent the basic notations of quantum computing in a form more accessible to quantum computing scientists and quantum software engineers.
The language is based on Z~\cite{abrial1980specification,woodcock1996using}, a well-studied formal specification language for classical software systems. The language can represent elementary quantum gates such as the {\it Identity gate}, {\it Pauli-X gate}, {\it Phase Shift gate}, {\it Pauli-Z gate}, {\it Hadamard gate}, and {\it C-NOT gate}, which perform unitary transformations, in support of writing quantum programs. Cartiere also showed how to use the language to specify some simple quantum algorithms, such as the Deutsch-Jozsa algorithm~\cite{deutsch1985quantum,deutsch1992rapid}. It is, however, unclear whether the language can be used to represent more complex quantum algorithms such as Shor's {\it integer factoring} quantum algorithm~\cite{shor1999polynomial} and Grover's {\it database search} quantum algorithm~\cite{grover1996fast}. \subsection{\bf Pattern Language for Quantum Software} \label{subsec:q-pattern} In classical computing, software engineers have long noticed that specific patterns recur and persist across different applications and systems. A pattern is a systematic design that captures the experience of experts on good designs or best practices, and records the essence of this wisdom in an easy-to-understand way for peers to use~\cite{alur2003core}. Patterns can be used to describe and reason about the relationship between a specific context, a problem, and its solutions in that context. A pattern language is a network of related patterns that may jointly contribute to an encompassing solution of a complex problem~\cite{alexander1977pattern,beck1987using}. Patterns and pattern languages are used in multiple design and engineering disciplines, especially for software architecture design~\cite{frank1996pattern,buschmann2007pattern,fowler2002patterns,volter2006software}. Leymann~\cite{leymann2019towards} proposed a pattern language for supporting the development of quantum algorithms. There is a need to document solutions for problems that recur in quantum software development, as observed in~\cite{nielsen2002quantum,lipton2014quantum}, which informally summarized some basic "tricks" used in quantum algorithms. The main contribution of Leymann's work is to systematize this knowledge so that it becomes a subject of a software engineering discipline for quantum algorithms. In~\cite{leymann2019towards}, Leymann identified ten types of basic patterns derived from existing quantum algorithms that are mainly based on gate models. Each pattern is described through eight elements: {\it name}, {\it intent}, {\it icon}, {\it problem statement}, {\it context}, {\it solution}, {\it known uses}, and {\it next}. These ten types of patterns are summarized in Table~\ref{table:pattern}. Additionally, Leymann discussed some issues on using these patterns from a software engineering perspective, and mentioned that the proposed pattern language can be stored as a pattern repository. In the repository, each pattern could be linked with a corresponding implementation in a concrete quantum programming language to support the programming of the pattern. Note that the patterns identified in~\cite{leymann2019towards} are in no way meant to be exhaustive, and more patterns should be identified in the future to make the patterns and the pattern language practically useful. \subsection{\bf Quality Attributes} \label{subsec:d-architecture} With practical quantum computing systems (QCSs) rapidly becoming a reality, it is desirable to make use of their real potential in software applications.
Therefore, it is crucial to determine the implications of QCSs for quantum software architecture, and some questions must be answered, such as "{\it What are the key characteristics of QCSs that are relevant to quantum software development?}" and "{\it In what way does a QCS affect the quality attributes (and non-functional requirements) of quantum software applications?}" To answer these questions, Sodhi~\cite{sodhi2018quality} presented an in-depth study of state-of-the-art QCSs to identify all the characteristics of a QCS that are relevant from a software architecture perspective, and investigated how these characteristics may affect the architecture of a quantum software application. As a first step, Sodhi surveyed related papers and software artifacts of QCSs and identified thirteen critical characteristics of QCSs. These include platform heterogeneity, special physical environment, large form factor, energy efficiency, lower level of the programming abstractions, remote software development and deployment, dependency on quantum algorithms, limited portability of software, limited quantum networking, lack of a native quantum operating system, limited multitasking and multiprocessing, a fundamentally different programming model, and dependency on classical storage. These key characteristics of QCSs form a basis for further study of how they could affect the quality attributes of the software architecture of a QCS. Quality attributes (QAs) are measurable or testable properties of a system that are used to indicate how well the system satisfies the needs of its stakeholders~\cite{bass2012software}. They therefore have a system-wide impact on the architecture, implementation, and operation of the system. QAs are usually categorized into various specialized areas, such as design qualities, runtime qualities, system qualities, user qualities, non-runtime qualities, architecture qualities, and business qualities~\cite{bourque2014guide,patterns2009microsoft}. It is recognized that when software applications have at least a reasonable level of quality attributes such as performance, reliability, scalability, security, and usability, the overall design and quality of the applications can be considered good~\cite{bass2000quality,bass2012software}. To investigate the impact of QCS characteristics on the QAs, Sodhi~\cite{sodhi2018quality} adapted a slightly expanded list of QAs, which includes availability, interoperability, maintainability, manageability, performance, reliability, scalability, security, testability, and usability. He discussed only those aspects of each QA that are relevant to determining how the QA is affected by the characteristics of QCSs. To do so, one takes each characteristic of a QCS and considers how it affects the various QAs. The impact of each characteristic on a QA is classified as {\it favorable}, {\it unfavorable}, or {\it neutral}. The impact of the various characteristics of QCSs on the QAs is summarized in Table~\ref{table:QAs}; for detailed discussions of these impacts, readers are referred to~\cite{sodhi2018quality}. Additionally, several issues should be further studied, such as identifying more characteristics of QCSs to extend the current list and investigating more QAs to obtain more complete insights into the problem studied in the paper.
In conclusion, Sodhi~\cite{sodhi2018quality} argued that, as quantum computing is undergoing rapid development, the evolution of the technology is likely to introduce additional concerns and factors that may affect the architecture of quantum software applications. \section{Quantum Software Implementation} \label{sec:implementation} This section will be brief, as much of the material has been covered in the survey (or overview) papers on quantum programming languages~\cite{selinger2004brief,gay2006quantum,sofge2008survey,miszczak2011models,ying2012quantum,valiron2013quantum,valiron2015programming,ying2016foundations,chong2017programming,spivsiak2017quantum,garhwal2019quantum,zorzi2019quantum}. The initial focus of quantum software development on the implementation level has resulted in a range of quantum programming approaches. It is crucial to identify a suitable quantum programming technique when implementing a quantum application. Most quantum programming techniques support one or more base programming languages. For example, languages are available for quantum programming in C (QCL), C++ (Scaffold), C\# (Q\#), Python (ProjectQ, Qiskit, Forest), F\# (LIQUi|$\rangle$), Scala (Chisel-Q), and Haskell (Quipper). Therefore, the first natural step in the choice of a suitable technique is to reduce the available set of techniques to those that support the base programming language to be employed in application development. If no suitable technique is available for the base programming language, a custom language needs to be implemented. \begin{table*}[h] \centering \caption{Bug types and their corresponding defense types for quantum software in~\cite{huang2018qdb,huang2019statistical}} \label{table:bugtype} \renewcommand\arraystretch{1.1} \small \begin{tabular}{|l|l|} \hline \multicolumn{1}{|c|}{\bf Bug Type} & \multicolumn{1}{c|}{\bf Defense Strategy} \\ \hline (1) Incorrect quantum initial values & Assertion checks for classical and superposition preconditions \\ \hline (2) Incorrect operations and transformations & Assertion checks for unit testing\\ \hline (3) Incorrect composition of operations using iteration & Assertion checks for classical intermediate states \\ \hline (4) Incorrect composition of operations using recursion & Assertion checks for entangled intermediate states\\ \hline (5) Incorrect composition of operations using mirroring & Assertion checks for product state postconditions\\ \hline (6) Incorrect classical input parameters & Assertion checks for classical postconditions \\ \hline (7) Incorrect deallocation of qubits & Assertions on algorithm postconditions\\ \hline \end{tabular} \end{table*} \section{Quantum Software Testing} \label{sec:testing} Quantum computers are powerful. However, since human intuition is much better adapted to the classical world than to the quantum world, programmers may make mistakes more frequently when designing programs for quantum computers than when programming their classical counterparts~\cite{ying2012floyd}. Furthermore, compared to classical computers, quantum computers have very different properties, such as quantum superposition, entanglement, and no-cloning~\cite{nielsen2002quantum}. Therefore, predicting the behavior of quantum software is difficult~\cite{chong2017programming,ying2012floyd}. As a result, new quantum software testing and debugging techniques need to be developed~\cite{stepney2006journeys}. Recently, research has begun to emerge on identifying bugs in quantum software and on testing and debugging it.
This section gives an overview of the state of the art in quantum software testing, understood in a broad sense, organized by quantum software {\it bug type}, {\it assertion}, {\it testing}, {\it debugging}, and {\it analysis}. \subsection{Bug Types in Quantum Software} \label{subsec:bug-type} A software bug is an abnormal program behavior that deviates from the program's specification~\cite{allen2002bug}. Fault models are a common concept in the software and hardware testing fields. To debug and test quantum software, it is essential to define fault models that help in deeply understanding the behavior of bugs in quantum programs. Several studies have recently considered this issue. To support quantum software debugging, Huang and Martonosi~\cite{huang2018qdb,huang2019statistical} studied the bug types of specific quantum programs. Based on their experience of implementing several quantum algorithms, they identified a number of types of bugs specific to quantum computing. These bug types include incorrect quantum initial values, incorrect operations and transformations, incorrect composition of operations using iteration, incorrect composition of operations using recursion, incorrect composition of operations using mirroring, incorrect classical input parameters, and incorrect deallocation of qubits. They also proposed some defense strategies for each type of bug. A detailed summary of these bugs and their corresponding strategies can be found in Table~\ref{table:bugtype}. \subsection{Assertions for Quantum Software} \label{subsec:q-assertion} An assertion is a statement about the expected behavior of a software component that must be verified during execution~\cite{foster2006assertion}. In software development, a programmer defines an assertion to ensure a specific program state at run time. Assertions have been used extensively for detecting runtime faults, documenting programmer intent, and formally reasoning about the correctness of classical programs~\cite{clarke2006historical}. Recently, several studies have been carried out on defining and identifying assertions in quantum software systems. \subsubsection{\bf Invariant and Inductive Assertions} \label{subsubsec:invariant} Ying {\it et al.}~\cite{ying2017invariants} studied how to define the notions of invariant and inductive assertion~\cite{floyd1993assigning} for quantum programs. They considered this issue in two different ways -- from the additive and the multiplicative perspectives. They proved that both additive and multiplicative invariants can be used to prove the partial correctness of quantum programs. They also showed that additive invariants can be derived from additively inductive assertions and can be generated through an SDP (Semidefinite Programming~\cite{vandenberghe1996semidefinite}) solver. However, how to generate multiplicative invariants is still an open problem that was not addressed in~\cite{ying2017invariants}, and it therefore still needs to be explored. \subsubsection{\bf Applied Quantum Hoare Logic} \label{subsubsec:aQHL} Hoare logic (also known as Floyd-Hoare logic)~\cite{hoare1969axiomatic, floyd1967assigning} is a formalism with a set of logic rules used for rigorous reasoning about the correctness of computer programs. Hoare logic has been extensively used for the verification, debugging, and testing of classical software~\cite{adrion1982validation,apt2019fifty}.
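As a reminder of the classical formalism that these quantum extensions build on, a Hoare triple has the form $$\{P\}\ S\ \{Q\},$$ asserting that if the precondition $P$ holds before the statement $S$ is executed, then the postcondition $Q$ holds when (and if) $S$ terminates; a standard instance is $\{x \geq 0\}\ x := x + 1\ \{x \geq 1\}$. The quantum generalizations discussed next keep this triple structure but replace the classical predicates $P$ and $Q$ with predicates over quantum states.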
Recently, researchers have extended Hoare logic to the quantum domain to support the formal verification of quantum programs~\cite{brunet2004dynamic,baltag2006lqp,d2006quantum,kakutani2009logic,ying2012floyd,unruh2019quantum,zhou2019applied}. Among them, Li {\it et al.}~\cite{zhou2019applied} introduced applied quantum Hoare logic (aQHL), which is a simplified version of quantum Hoare logic (QHL)~\cite{ying2012floyd}, with particular emphasis on supporting debugging and testing of the quantum programs. aQHL simplified two issues of QHL through: (1) binding QHL to a particular class of pre- and post-conditions (assertions), that is, projections, to reduce the complexity of quantum program verification and to provide a convenient way used for debugging and testing of quantum software. (2) providing additional rules for reasoning about the robustness of quantum programs. As aQHL can be used to specify assertions such as pre- and post-conditions, and invariants for quantum software, hopefully, it could provide a very efficient way to support assertion-based quantum software debugging and testing. Although no detail ~\cite{zhou2019applied} was given on how aQHL could be used to debug and test quantum software, we believe it could be a promising way for tracking errors in quantum software, similar to its classical counterpart. Moreover, besides aQHL, similar approaches that use other types of (dynamic) quantum logic~\cite{brunet2004dynamic,baltag2006lqp,kakutani2009logic} for specifying assertions for supporting debugging of quantum software should also be investigated. \subsubsection{\bf Assertion Library for Quantum Software} \label{subsubsec:property} As we will discuss in Section~\ref{subsec:q-testing}, Honarvar and Nagarajan~\cite{honarvar2020property} proposed a property-based approach to testing quantum programs in Q\# (developed by Microsoft)~\cite{svore2018q}. To this end, they developed a library of assertion methods that can be used to specify various properties (assertions) in Q\# programs for testing. These assertion methods include \verb+AssertProbability+, \verb+AssertEntangled+, \verb+AssertEqual+, \\\verb+AssertTeleported+ and \verb+AssertTransformed+. \verb+AssertProbability+ uses a robust statistical method to test the probability of observing a qubit in a given state. \verb+AssertEntangled+ takes two qubits as its arguments to test whether or not they are entangled. \verb+AssertEqual+ tests the equality of two states in quantum programs. \verb+AssertTeleported+ tests quantum teleportation as it is a significant protocol in the quantum realm. \verb+AssertTransformed+ tests the validity of any unitary transformation. The detailed description of each assertion method is summarized in Table~\ref{table:AssertionMethod}. Although those assertion methods are used initially for testing, they also have the potential to be used in quantum program debugging and runtime error checking. \begin{table*}[h] \centering \caption{Brief Summary and Comparison of Assertion Methods in~\cite{honarvar2020property} and Assertion Functions in~\cite{liu2020quantum}} \label{table:AssertionMethod} \renewcommand\arraystretch{1.2} \footnotesize \begin{tabular}{|l|p{9cm}|} \hline \multicolumn{1}{|c|}{\bf Assertion Methods~\cite{honarvar2020property}} & \multicolumn{1}{c|}{\bf Description} \\ \hline \verb+AssertProbability(q0,state,probability)+ & To take three arguments to test the expected \verb+probability+ of observing a qubit \verb+q0+ in a given \verb+state+ after measurement. 
\\ \hline \verb+AssertEntangled(q0,q1)+ & To take two qubits \verb+q0+ and \verb+q1+ as its arguments to test whether or not they are entangled. \\ \hline \verb+AssertEqual(q0,q1)+ & To take two qubits \verb+q0+ and \verb+q1+ as its arguments to compare the equality of these two states. \\ \hline \verb+AssertTeleported(q0,q1)+ & To take the sent and received qubits \verb+q0+, \verb+q1+ as its arguments to test quantum teleportation.\\ \hline \verb+AssertTransformed(q0,(+$\theta$ \verb+interval)(+$\phi$ \verb+interval))+ & To test the validity of any unitary transformation.\\ \hline \hline \multicolumn{1}{|c|}{\bf Assertion Functions~\cite{liu2020quantum}} & \multicolumn{1}{c|}{\bf Description} \\ \hline \verb+classical_assertion(circuit,qubitList,value)+ & To take three arguments to specify the quantum circuit under test, the list of qubits for assertion, and a particular classical value to assert for. \\ \hline \verb+entanglement_assertion(circuit,qubitList,flag)+ & To take three arguments specifying the quantum circuit under test, the list of qubits for assertion, and the type of entanglement. \\ \hline \verb+superposition_assertion(circuit,qubitList,phaseDict,flag)+ & To take four arguments specifying the quantum circuit under test, the list of qubits for assertion, the quantum state dictionary for the qubits, and a flag. \\ \hline \end{tabular} \end{table*} \subsection{Quantum Software Testing} \label{subsec:q-testing} Testing~\cite{myers1979art,beiser1983software} is the process of executing a program with the intent of finding errors, and it is a critical process for supporting quality assurance during software development. In this section, we review the state of the art in testing quantum software. \subsubsection{\bf Open Problems on Testing Quantum Software} \label{subsubsec:open-problem} Miranskyy and Zhang~\cite{miranskyy2019testing} discussed several challenges associated with white-box and black-box testing, as well as with the verification and validation of quantum programs. They showed that some existing software engineering practices are readily transferable to the quantum computing domain (e.g., code review), others are difficult to transfer (e.g., interactive debugging), and the rest have yet to be introduced (e.g., complexity-dependent placement of a validation program). Rather than proposing a particular testing method (or strategy) for quantum software, they tried to define the software engineering practices for quantum software. With this definition at hand, software engineering specialists in academia and industry can start exploring this fascinating area of quantum computing and expand their work to other areas of testing, as well as to the remaining phases of the quantum software life cycle. \subsubsection{\bf Fuzz Testing} \label{subsubsec:fuzz-testing} Fuzz testing (or fuzzing)~\cite{takanen2018fuzzing} is a software testing technique that feeds invalid or random data, called {\it fuzz}, into the software system to discover coding errors and security loopholes. Wang {\it et al.}~\cite{wang2018quanfuzz} adapted an existing technique from classical software testing called coverage-guided fuzzing (CGF)~\cite{zalewski2007american,craig2002systematic,serebryany2015libfuzzer} to test quantum software. They proposed \texttt{QuanFuzz}, a search-based test input generator for quantum software.
The basic idea of \texttt{QuanFuzz} is to define quantum-sensitive information with which to evaluate the test inputs for quantum software, and to use a matrix generator to generate test cases with higher coverage. The testing process of \texttt{QuanFuzz} consists of two steps: \begin{itemize} \item[(1)] Extract quantum-sensitive information from the quantum source code. Such information includes the measurement operations on the quantum registers and the sensitive branches associated with the measurement results. \item[(2)] Use the sensitive-information-guided algorithm to mutate the initial input matrix and select those matrices that improve the probability weight for a value of the quantum register to trigger the sensitive branch. \end{itemize} \noindent This process keeps iterating until the sensitive branch is triggered. They also implemented \texttt{QuanFuzz} based on $Q|SI\rangle$~\cite{liu2018q} and {\it Nrefactory} for quantum software simulation and code instrumentation, respectively, and evaluated \texttt{QuanFuzz} on seven quantum programs with different registers (containing 2 to 8 qubits), which are built-in benchmark programs from $Q|SI\rangle$. The experimental results showed that \texttt{QuanFuzz} can obtain 20\%$\sim$60\% more branch coverage than the classical test input generation method. \subsubsection{\bf Property-Based Testing} \label{subsubsec:property-testing} Property-based testing~\cite{fink1997property,claessen2011quickcheck} uses specifications of essential properties to produce testing criteria and procedures that focus on these properties in a systematic manner. Honarvar and Nagarajan~\cite{honarvar2020property} presented \texttt{QSharpCheck}, a property-based testing approach for quantum software written in Q\#. To this end, they defined a testing specification language to specify the properties of Q\# programs. The design of the language is inspired by several ideas, such as the syntax of Q\#, the quantum predicates and predicate transformers of D'Hondt and Panangaden~\cite{d2006quantum}, and the quantum Hoare logic of Ying~\cite{ying2012floyd}. In the language, a test is represented by four parts: test property name and parameters, allocation and setup, function call, and assert and de-allocate. They also identified several types of assertions that form the basis of the post-conditions and assertion types associated with the test specification language. Based on these specified properties, one can generate various types of test cases, run the test cases to test Q\# programs, and check the output results to see whether there are any problems within the programs. Moreover, they also discussed some basic considerations on the design and implementation of the \texttt{QSharpCheck} tool and carried out two case studies to demonstrate the effectiveness of \texttt{QSharpCheck} through mutation analysis of Q\# programs. \subsubsection{\bf Functional, White-box, and Model-based Testing} \label{subsubsec:funtional-testing} Usaola~\cite{usaolaquantum} proposed some ideas and identified some challenges in applying classical software testing to the quantum computing domain. He discussed how some prevalent strategies of classical software testing, such as {\it functional testing}, {\it white-box testing}, and {\it model-based testing}, can be applied to test quantum software. Functional testing~\cite{myers2011art} is a type of software testing that validates the software system against the functional requirements/specifications.
Usaola discussed some basic concepts regarding functional testing (such as test case and test suite) and showed that a process of functional testing for quantum software might consist of the following three steps: \vspace*{-0.5mm} \begin{itemize}[leftmargin=2em] \item[(1)] The initial situation of a quantum test case sets up the initial state of the qubits. \item[(2)] Similar to classical testing, the quantum circuit is executed. \item[(3)] The test suite records the obtained results in order to determine the most probable result. \end{itemize} \vspace*{-0.4mm} White-box testing~\cite{myers2011art} is a method that tests a software solution's internal structure, design, and coding. Among the different white-box testing methods, Usaola specifically studied mutation testing, which might be a good candidate technique for quantum software testing. He also gave a simple example program that adds two integer numbers, using IBM's Qiskit simulator, to show how mutation testing can be applied. However, how to define suitable quantum mutant operators to support quantum mutation testing has not been discussed. Model-based testing~\cite{utting2010practical} is a software testing technique in which the run-time behavior of the software under test is checked against predictions made by a model. To apply model-based testing to quantum software, one should first use some modeling language, for example, UML, to model the behavior of the quantum software. Then, based on the model, one can generate test cases for the software under test and perform the testing. As we discussed in Section~\ref{sec:design}, one may use Q-UML~\cite{Perez-Delgado2020quantum}, an extension of UML to the quantum domain, to model a quantum software system and thus support test case generation for the system. Q\#~\cite{svore2018q} is a quantum programming environment developed by Microsoft that supports quantum programming and quantum software development. Q\# offers tools for testing and debugging quantum programs~\cite{q2017testing}. The tools can be used to verify that quantum programs act as intended, as well as to diagnose an incorrect quantum program. Although testing is a widely used technique to guarantee the reliability of a software system, as Dijkstra stated in~\cite{dahl1972structured}, {\it testing can be used to show the presence of bugs, but never to show their absence}. Therefore, after testing, systematic techniques such as {\it debugging} are still needed to localize the bugs in the system. \subsection{Quantum Program Debugging} \label{subsec:q-debugging} The process of finding and fixing bugs is called debugging: the activity of analyzing a program to locate and correct errors by reasoning about the causal relations between the bugs and the errors detected in the program~\cite{bradley1985science,bentley1985programming}. Program debugging tools are essential for any programming environment~\cite{araki1991general,lencevicius2000advanced}. In the life cycle of software development, almost 25\% of maintenance effort is spent on debugging~\cite{lientz1980software}. Debugging methodologies and techniques are also crucial for quantum programming~\cite{chong2017programming}. This section gives an overview of quantum program debugging techniques from four aspects: {\it debugging tactics}, {\it debugging quantum processes}, {\it assertion-based debugging}, and {\it language support for debugging}.
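As a running illustration for the debugging techniques surveyed below, consider the following hypothetical buggy program, written as a plain NumPy simulation of our own rather than as code taken from any of the cited tools. The intended Bell-state preparation omits the Hadamard gate, an instance of the "incorrect operations and transformations" bug type of Table~\ref{table:bugtype}; because quantum outputs are probabilistic, the defect is visible only in the statistics of repeated runs, which is precisely what the assertion-based approaches described below aim to automate.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=7)

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def prepare_bell(buggy):
    """Intended result: (|00> + |11>)/sqrt(2). The buggy variant forgets H."""
    state = np.zeros(4, dtype=complex)
    state[0] = 1.0
    if not buggy:
        state = np.kron(H, I2) @ state   # the step the buggy version omits
    return CNOT @ state

def sample_counts(state, shots=1000):
    probs = np.abs(state) ** 2
    outcomes = rng.choice(4, size=shots, p=probs)
    return {format(k, "02b"): int((outcomes == k).sum()) for k in range(4)}

print(sample_counts(prepare_bell(buggy=False)))  # roughly 50/50 between '00' and '11'
print(sample_counts(prepare_bell(buggy=True)))   # all shots land on '00'
\end{verbatim}

A statistical assertion on the output distribution, or an entanglement assertion on the two qubits, would flag the buggy variant, whereas any single run of either version produces an unremarkable classical outcome.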
\subsubsection{\bf Debugging Tactics} \label{subsubsec:debugging-tactic} For classical program debugging, some well-established common tactics, including {\it backtracking}, {\it cause elimination}, and {\it brute force}~\cite{myers2011art,pressman2010software}, are widely used. These debugging tactics can also be used to support quantum program debugging. Miranskyy {\it et al.}~\cite{miranskyy2020your} analyzed debugging tactics for quantum programs and discussed the possibility of applying some well-established classical debugging techniques to the quantum domain. They mainly considered the three types of debugging tactics in classical debugging mentioned above, and concluded that {\it backtracking} tactics, especially those based on code reviews and inspections, could probably be applied to quantum debugging. This is also confirmed by discussions with practicing quantum software developers, who routinely use code reviews and inspections in their daily programming tasks~\cite{miranskyy2020your}. For {\it cause elimination} tactics, they pointed out that these could be naturally extended to the domain of quantum computing. First, one can set up a hypothesis and specify a root cause for the bug under study during quantum debugging, and then collect data and perform experiments based on the data to test the hypothesis. However, considering the probabilistic nature of a quantum program's behavior~\cite{nielsen2002quantum,miranskyy2019testing}, the final result of the quantum program should be assessed based on the distribution of the results obtained from multiple executions of the program. To address this, Miranskyy {\it et al.}~\cite{miranskyy2020your} suggested applying classical testing techniques for probabilistic programs~\cite{dutta2018testing,dutta2019storm} to the quantum computing domain. For {\it brute force} tactics, the most common tactic in classical debugging, applicability to quantum debugging depends on two situations: whether the quantum program is treated as a black box or as a white box. If we regard the quantum program as a black box, one could follow the classical debugging process: (1) first, trace the input and the output of the program; (2) then, record the input and output data in a log file; and (3) finally, analyze and compare these data against the expected values to see whether they are consistent. If we treat the quantum program as a white box, one has to consider how to capture execution traces during the execution of the program in order to perform interactive debugging. However, given quantum-specific features such as superposition, entanglement, and no-cloning, this is almost impossible. Miranskyy {\it et al.}~\cite{miranskyy2020your} discussed some scenarios involving those specific quantum features for which classical debugging cannot be applied, and suggested some potential solutions to these problems. Those solutions, however, are still immature and may be infeasible in practice. Therefore, much work remains to be done to develop solutions that support quantum program debugging. \subsubsection{\bf Assertion-based Debugging} \label{subsubsec:assertion-debugging} Several assertion-based approaches have been proposed recently for debugging quantum programs. As we described previously, Huang and Martonosi~\cite{huang2018qdb,huang2019statistical} summarized several types of bugs in quantum programs based on their experience of implementing a set of quantum programs (or algorithms~\cite{shor1999polynomial,grover1996fast,olson2017quantum,mcardle2018quantum}).
They also proposed some corresponding defense strategies in terms of programming and assertion to prevent such bugs during programming to develop bug-free quantum programs. This study is preliminary, and further, Huang and Martonosi~\cite{huang2019statistical} presented the statistical assertions for quantum programs based on statistical tests on some classical observations. These allow the programmers to decide if a quantum program state matches its expected value in one of classical, superposition, or entangled types of states. Based on this, they classified possible assertions in quantum software into three types: {\it classical assertion}, {\it superposition assertion}, and {\it entanglement assertion}. They extended an existing quantum programming language called Scaffold~\cite{abhari2012scaffold,javadiabhari2015scaffcc} with the ability to specify quantum assertions, and developed a tool to check these assertions in a quantum program simulator called QX~\cite{khammassi2017qx}. To demonstrate the effectiveness of their assertion-based approach, they performed three case studies on three benchmark quantum programs: factoring integers, database search, and quantum chemistry, each of which represents a different class of quantum algorithms. They also showed some results on what types of bugs are possible and laid out a strategy for using quantum programming patterns to place assertions and prevent bugs. Moreover, to validate the proposed approach, they cross-validated the quantum programs and the simulation results through the functional equivalent programs implemented in different quantum programming languages, including LIQUi|$\rangle$~\cite{roetteler2017design}, ProjectQ~\cite{haner2016high,steiger2018projectq}, and Q\#\cite{svore2018q}. Through this validation, they found and shared some results and insights on how language features of different quantum programming languages can help insert the quantum assertions in a suitable place in the program, or otherwise prevent bugs in the first place. Zhou and Byrd~\cite{zhou2019quantum} and Liu {\it et al.}~\cite{liu2020quantum} observed that the critical limitation of Huang and Martonosi's assertion-based approach~\cite{huang2019statistical} is that each measurement during debugging has to stop the execution of the program, and the assertions require aggregates of runs when the actual computation results are to be measured. To overcome this limitation, motivated by quantum error correction (QEC)~\cite{nielsen2002quantum,gottesman2010introduction} and nondestructive discrimination (NDD) ~\cite{jain2009secure}, they proposed an approach to constructing suitable quantum circuits to support runtime assertion check of quantum software~\cite{liu2020quantum,zhou2019quantum}. The key idea of this approach is to use ancilla qubits (some additional quantum bits) to collect the information of the qubits under test indirectly and to measure those ancilla qubits, rather than the original qubits under test directly. This can avoid disrupting the program execution during the assertion check at runtime. To this end, they identified three types of dynamic assertions, respectively, for classical values, entanglement, and superposition. {\it Assertion for classical value} is to ensure that the qubits are initialized to the correct value, or some intermediate classical results should satisfy some condition such as (|$\psi\rangle$==|0$\rangle$). 
{\it Assertion for entanglement} asserts that two or more qubits are in an entangled state, based on checking the parity of two qubits. {\it Assertion for superposition} asserts that Hadamard gates have placed the input qubits in the uniform superposition state, and also asserts an arbitrary superposition state represented as $|\psi\rangle = \sin(\frac{\theta}{2})|0\rangle + e^{i\varphi} \cos(\frac{\theta}{2})|1\rangle$. They also discussed an implementation of these assertion circuits based on Qiskit~\cite{ibm2017qiskit}, IBM's open-source framework for quantum computing, by integrating three types of assertion functions into the Qiskit development environment. Programmers can use these functions to instrument the necessary assertion circuits for classical, entanglement, and superposition states, and can therefore check the corresponding ancilla qubits for error detection. The three assertion functions are \verb+classical_assertion+, \verb+entanglement_assertion+, and \verb+superposition_assertion+. Table~\ref{table:AssertionMethod} summarizes and compares these assertion functions with the assertion methods proposed in~\cite{honarvar2020property}. The experimental results of several case studies have confirmed the effectiveness of the proposed approach for debugging, as well as for improving the success rate of quantum algorithms such as the Quantum Fourier Transform (QFT)~\cite{coppersmith2002approximate}, Quantum Phase Estimation (QPE)~\cite{kitaev2002classical}, and the Bernstein-Vazirani algorithm~\cite{bernstein1993quantum,bernstein1997quantum}.
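To convey the flavor of such ancilla-based runtime checks, the following is a minimal sketch of the parity-check idea in Qiskit-style Python. It is our illustration, not the \verb+entanglement_assertion+ API of~\cite{liu2020quantum,zhou2019quantum}: an ancilla accumulates the parity of the two qubits under test via CNOT gates, and only the ancilla is measured, so a pair that really is in the expected Bell state is left undisturbed.
\begin{verbatim}
from qiskit import (QuantumCircuit, QuantumRegister,
                    ClassicalRegister, Aer, execute)

data = QuantumRegister(2, 'data')      # qubits under test
anc = QuantumRegister(1, 'anc')        # ancilla used only for the assertion
c_anc = ClassicalRegister(1, 'c_anc')
c_data = ClassicalRegister(2, 'c_data')
qc = QuantumCircuit(data, anc, c_anc, c_data)

# Program fragment under test: prepare the Bell state (|00> + |11>)/sqrt(2).
qc.h(data[0])
qc.cx(data[0], data[1])

# Parity-check assertion: copy the parity of the data qubits onto the
# ancilla and measure only the ancilla.  In the expected Bell state the
# parity is 0 in every branch, so the data qubits are not disturbed.
qc.cx(data[0], anc[0])
qc.cx(data[1], anc[0])
qc.measure(anc, c_anc)

# The program then keeps using the data qubits; here we simply read them out.
qc.measure(data, c_data)

result = execute(qc, Aer.get_backend('qasm_simulator'), shots=1024).result()
# Any shot in which the ancilla bit reads 1 signals a violated parity assertion.
print(result.get_counts())
\end{verbatim}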
The assertion schemes proposed by Huang and Martonosi~\cite{huang2018qdb}, Zhou and Byrd~\cite{zhou2019quantum}, and Liu {\it et al.}~\cite{liu2020quantum}, however, still have limitations: they can only handle assertions over a limited set of quantum states and at limited assertion locations, which may make assertion-based debugging more difficult. To overcome this problem, Li {\it et al.}~\cite{li2019poq} proposed {\it Poq}, a projection-based runtime assertion checker for debugging quantum programs. {\it Poq} uses projections~\cite{birkhoff1936logic} to represent assertions, which, compared to classical representations, have more expressive power. Moreover, since a projection naturally matches a projective measurement, which does not affect the measured state when that state lies within the projected subspace~\cite{li2019poq}, it may reduce the testing overhead. {\it Poq} is able to assert sophisticated quantum algorithms.
The above work on tackling bugs in quantum programs via assertion checking is promising, but it usually has to check assertions dynamically at runtime, which could waste quantum computing (or simulation) resources. To overcome this, inspired by previous work on the verification of quantum programs such as quantum weakest preconditions~\cite{d2006quantum} and quantum Floyd-Hoare logic~\cite{ying2012floyd}, Singhal~\cite{singhalhoare} proposed encoding assertions into the static type system of a quantum programming language to help programmers write correct programs from the start. In this way, the hope is that programmers can encode some of the semantic properties they intend for their programs as specifications in their code, so that a type checker can ensure the correctness of some of those properties at compile time.
The basic idea is to extend the Hoare type of Hoare Type Theory (HTT)~\cite{nanevski2008hoare}, developed for classical programming languages, to quantum Hoare types, which can be encoded into a quantum programming language to support static checking, and even formal verification, of quantum software. A Hoare type, similar to a Hoare triple and written as $$\{P\}\ x: A\ \{Q\},$$ can be ascribed to a stateful computation if, when executed in a heap satisfying the precondition {\it P}, the computation either diverges or results in a heap satisfying the postcondition {\it Q} and returns a value of type {\it A}~\cite{nanevski2008hoare}. To support quantum circuits, the syntax of Hoare types is augmented with that of QWire, a quantum circuit language that supports quantum circuit programming and verification. In this way, the host language HTT is extended with QWire's wire type $W$, so that quantum circuits can be treated as data and used to specify properties of quantum programs as well. Here, $W$ and $A$ can be described as $$W\ ::=\ 1\ |\ bit\ |\ qbit\ |\ W_1 \otimes W_2$$ and $$A\ ::=\ \dots\ |\ Unit\ |\ Bool\ |\ A \times A\ |\ Circuit(W_1, W_2),$$ respectively. This work aims eventually to build a unified system supporting quantum programming, specification, and verification, but it is still preliminary, and more work remains to be done to realize this goal.
\subsubsection{\bf Debugging Quantum Processes} \label{subsubsec:quantum-process}
Since the measurement of a quantum system may cause its state to collapse, it is challenging to debug quantum programs by monitoring them. To address this problem, Li and Ying~\cite{li2014debugging} proposed an approach to debugging quantum processes through monitoring measurements. The basic idea is to develop a protocol that supports the debugging of quantum processes through monitoring without disturbing the states of the processes. Informally, the approach can be described as follows. Suppose that we have built a quantum system to execute some quantum process. The system is set to an initial state $|\psi_0\rangle$ and then evolves under a controlled Hamiltonian denoted by $H(t)$, which determines an anticipated trajectory $\{|\psi_{t}\rangle\}$ of system states. Since the time for the whole process is usually much longer than the time of a single quantum component (such as a gate) action, it may be considered infinite. If there is a bug in the process at time $t'$, the Hamiltonian actually governing the system will differ from $H(t)$ for $t \geq t'$ during the execution, which may cause errors in the system state; we therefore denote the actual (possibly mixed) state by $\rho_{t}$. To debug the process, we should try to find a nonzero projection operator $P$ of the system, as well as a sequence of time points $t_{1}, t_{2}, \dots$ (with $t_{n} \rightarrow \infty$), such that $P|\psi_{t_{n}}\rangle=0$ for all $n$. Here, $P|\psi_{t}\rangle=0$ means that nothing can be detected by $P$ if the system state is $|\psi_{t}\rangle$ as anticipated. So, to perform debugging, we can monitor the process at times $t_{1}, t_{2}, \dots$ using a measurement apparatus formalized by $P$, which is called a {\it monitoring measurement}; with probability $\mathrm{tr}(P\rho_{t_{n}})$, an erroneous state is detected at time $t_{n}$.
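As a minimal illustration of the protocol (our own example, not taken from~\cite{li2014debugging}), suppose the anticipated behavior is that an idle qubit remains in the state $|\psi_{t}\rangle = |0\rangle$ for all $t$. Choosing the projector $$P = |1\rangle\langle 1|, \qquad P|\psi_{t}\rangle = P|0\rangle = 0 \ \mbox{for all}\ t,$$ the monitoring measurement never disturbs a correct run; if a faulty component instead leaves the qubit in a state $\rho_{t_{n}}$ with $\langle 1|\rho_{t_{n}}|1\rangle = \varepsilon$, the measurement at $t_{n}$ flags an error with probability $\mathrm{tr}(P\rho_{t_{n}}) = \varepsilon$.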
If this really happens, an error is detected in the state, $t'$ most likely lies in $[t_{n-1}, t_{n}]$, and the relevant components should be carefully checked. In practice, the time points $t_{1}, t_{2}, \dots$ are determined by a classical program $S$. The critical point of this debugging approach is to find the required projector $P$; the condition $P|\psi_{t}\rangle=0$ guarantees that the anticipated process is not disturbed by $P$. It also implies that the debugging procedure is conclusive, in the sense that if the process runs correctly, no error will be reported. Besides this, the authors also gave a formal definition of the proposed debugging approach in the case of discrete-time evolution. As mentioned in the conclusion of the paper, the proposed approach can only handle the debugging of quantum processes with time-independent Hamiltonians; debugging with time-dependent Hamiltonians is still an open problem~\cite{li2014debugging}.
\subsubsection{\bf Language Support for Debugging} \label{subsubsec:debugging-language}
Programming languages with good design features, such as static type systems and modular programming, can help prevent bugs. Hietala {\it et al.}~\cite{hietala2019tracking} proposed a type-based approach to tracking errors in quantum programs at the language level. The approach uses the principle of quantum error-correcting codes~\cite{gottesman2010introduction} to encode each logical qubit in the program with a block of physical qubits, and then uses fault-tolerant operations to manipulate those encoded qubits in order to track and correct errors in the program. The goal is to determine whether the error correction operations that are added can successfully offset potential errors. To achieve this goal, the approach needs to track how errors propagate through the circuit and ensure that the cumulative number of errors on each encoded qubit does not exceed the threshold. To support this, the authors extended the QWIRE~\cite{paykin2017qwire} quantum circuit language with new error types and an error correction gate EC, interpreting QWIRE's \verb+Qubit+ type as an encoded qubit and assuming that all gates are fault-tolerant. Since QWIRE is seamlessly integrated into the Coq proof assistant~\cite{bertot2013interactive,team2019coq}, the language can utilize both the QWIRE and Coq type systems to check errors in the circuits of a program. A simple case study of the implementation was carried out and showed that the proposed language system could correctly type check a teleportation example, guaranteeing that every function produces no more than the specified number of errors given the specified number of errors on its input, and that the error threshold is never exceeded. Several issues are also discussed as future work, including (1) a possible extension of the current type system with probabilities, to check that all qubits have fewer than {\it t} (some threshold) errors with probability at least {\it p}, and (2) the development of a type system in which the qubit type represents physical qubits, rather than encoded qubits, and errors are tracked through fidelity.
\subsection{Quantum Program Analysis for Bug Detection} \label{subsec:q-analysis}
Program analysis utilizes static techniques to reliably compute information about the dynamic behavior of programs~\cite{nielson2015principles}. Example applications include compilers (for code improvement) and software validation (for detecting errors).
Several studies on program analysis for quantum programs have been carried out recently, and they can support bug detection in quantum programs. ScaffCC~\cite{javadiabhari2014scaffcc,javadiabhari2015scaffcc} is a scalable compiler framework developed at Princeton University for the quantum programming language Scaffold~\cite{abhari2012scaffold}. ScaffCC supports both compilation and compile-time analysis of Scaffold programs, to optimize the code and to detect possible errors in it. One analysis that ScaffCC supports is called {\it entanglement analysis}, which can conservatively identify each pair of qubits that might be entangled in the program. Such entanglement information can help a programmer to design algorithms and to debug. To perform entanglement analysis, ScaffCC exploits data-flow analysis techniques to automatically track entanglement within the code, annotating the output of the QASM-HL program, an intermediate representation of ScaffCC, to denote possibly entangled qubits. The analysis is conservative in the sense that it assumes that if two qubits interact, they are likely to have become entangled with each other. However, this might lead to some false positive entangled qubit pairs when the number of qubits is large. Moreover, entanglement analysis can also help to find the un-computing portions of a module, through analysis of un-entanglement, which can be created by applying inverse functions such as CNOT and Toffoli operations to the same set of control and target qubits.
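To convey the flavor of such a conservative analysis, the following is a minimal sketch of our own (not ScaffCC's implementation): possibly-entangled qubits are tracked with a union-find structure, and whenever two or more qubits appear together in a multi-qubit gate their sets are merged.
\begin{verbatim}
class EntanglementTracker:
    """Conservative tracking of possibly-entangled qubits (illustrative)."""

    def __init__(self, num_qubits):
        self.parent = list(range(num_qubits))   # union-find forest

    def _find(self, q):
        while self.parent[q] != q:
            self.parent[q] = self.parent[self.parent[q]]  # path compression
            q = self.parent[q]
        return q

    def apply_gate(self, qubits):
        """Record a gate; any multi-qubit gate may entangle its operands."""
        if len(qubits) > 1:
            roots = {self._find(q) for q in qubits}
            base = roots.pop()
            for r in roots:
                self.parent[r] = base

    def possibly_entangled(self, q1, q2):
        return self._find(q1) == self._find(q2)


# Example: H(0); CNOT(0,1); X(2).  Qubits 0 and 1 may be entangled, 2 is not.
tracker = EntanglementTracker(3)
tracker.apply_gate([0])        # H on qubit 0
tracker.apply_gate([0, 1])     # CNOT on qubits 0 and 1
tracker.apply_gate([2])        # X on qubit 2
print(tracker.possibly_entangled(0, 1))   # True (conservatively)
print(tracker.possibly_entangled(0, 2))   # False
\end{verbatim}
The over-approximation is visible in this sketch: qubits would still be reported as possibly entangled even after a later gate disentangles them, which is the kind of false positive mentioned above.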
QASM~\cite{pakin2016quantum} is an open-source Python tool that allows programmers to investigate how to map arbitrary computations onto a quadratic unconstrained binary optimization (QUBO)~\cite{wang2009analyzing} problem that runs on a quantum annealing D-Wave 2X system~\cite{d-wave2020d-wave}. QASM implements a simple heuristic algorithm and presents symbolic variable names as well as assembly-like code blocks that might be reusable. Developers may integrate the solution into the D-Wave system directly or by using a local simulator, which raises questions about the reliability and security of such a tool~\cite{alroum2017detecting}. Alsaadi {\it et al.}~\cite{alsaadi2019analyzing} proposed using static code analysis techniques to investigate security threats in the QASM code written in Python, and the results indicate that the D-Wave system might be vulnerable to multiple security breaches. They used flow-sensitive, inter-procedural, and context-sensitive data-flow analysis to uncover vulnerable points in the program, and several security threats were found. They found that most of the identified functions and modules, though well structured, are either undefined or unused. Also, it is hard to apply fixes because QASM is highly dependent on D-Wave's proprietary SAPI library (programmed in C++) and does not work correctly without it.
\section{Quantum Software Maintenance} \label{sec:maintenance}
Support for software maintenance is crucially important in the development of computer software. A motivation for this is the well-known fact that somewhere between 60$\%$ and 80$\%$ of the cost of software comes from the maintenance phase of software development~\cite{vandoren1997maintenance}. The primary purpose of quantum software maintenance is to modify and update software applications after delivery to correct faults and to improve performance.
Research in this area~\cite{Castillo2020reengineering,kruger2020quantum} is emerging, and it mainly focuses on the reengineering of existing classical software systems to integrate them with new quantum algorithms.
\subsection{Reengineering Classical Information Systems to Quantum Computing} \label{subsec:reengineering}
Software reengineering~\cite{chikofsky1990reverse,demeyer2002object} refers to the inspection and modification of a target software system through a series of activities such as design recovery, re-documentation, reverse engineering, and forward engineering. It aims to reconstruct the existing system in a new form in order to develop higher-quality and more maintainable software. Although significant progress has been made in the development of quantum computers, it is evident that quantum computers cannot be used for everything in the short term (due to their high initial cost, among other reasons). Instead, it is common to use quantum computers to solve some important problems by making specific calls from classical computers to remote quantum computers in the cloud~\cite{stepney2006journeys}. In this setting, most enterprises will need to integrate their first or future quantum algorithms with their existing classical information systems, and to migrate between them. Therefore, reengineering must be revisited to deal with the problems associated with the migration to quantum computing and the coming coexistence of classical and quantum software. To address this problem, P{\'{e}}rez{-}Castillo~\cite{Castillo2020reengineering} proposed a software modernization approach (model-driven reengineering)~\cite{seacord2003modernizing} to restructure classical systems together with existing or new quantum algorithms, so as to provide target systems that combine both classical and quantum information systems. In classical software engineering, software modernization has proved to be an effective mechanism for realizing the migration and evolution of software while retaining business knowledge. The proposed solution is systematic and based on existing, well-known standards such as the Unified Modelling Language (UML)~\cite{boochunified,rurnbaughunified,jacobson1999unified} and the Knowledge Discovery Metamodel (KDM)~\cite{perez2011knowledge}. The solution could help reduce the cost of developing new quantum information systems. Moreover, since it is based on international standards that represent the knowledge in an agnostic manner, it would be independent of any particular quantum programming language. Besides, P{\'{e}}rez{-}Castillo also pointed out that quantum technologies and programming have not yet been addressed with the techniques, good practices, and development methodologies of software engineering needed to meet quantum programs' needs.
Another issue regarding reengineering (maintenance) is how to integrate quantum computation primitives (for instance, quantum software components) into an existing classical software system. Since quantum computers (QC) are very different from previous technologies and methods, integrating QC into existing software systems requires solving problems not only at the level of algorithm implementation, but also many broader problems studied in software engineering~\cite{pressman2010software}. Quantum annealing~\cite{kadowaki1998quantum,finnila1994quantum,shin2014quantum} is a quantum-mechanical metaheuristic (a generic and approximate method) for solving combinatorial optimization and sampling problems.
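Quantum annealers are typically programmed by expressing the problem as a QUBO, i.e., minimizing a quadratic function over binary variables. As a small illustration of our own (not drawn from the case study discussed next), the logical constraint $z = x \wedge y$, a typical building block when translating SAT-like problems into this form, can be encoded as the penalty term $$f(x,y,z) = xy - 2z(x+y) + 3z,$$ which is zero exactly on the satisfying assignments and positive otherwise; a whole formula is handled by summing such penalties.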
Kr{\"{u}}ger and Mauerer~\cite{kruger2020quantum} performed a case study on how to augment a quantum software component implemented in a quantum annealer (QA) for the Boolean satisfiability problem (SAT)~\cite{cook1997finding}, to an existing software system. In this case study, they discussed the related quality measures of quantum components, and showed that the method of transforming the SAT into a QA, which is mathematically equivalent but structurally different, can lead to substantial differences in these qualities. Their research also showed that defining and testing specific quality attributes of QC components, as studied in~\cite{sodhi2018quality}, is one of the key challenges. Although these properties do not play a core role in classical engineering, they must be considered in the software architecture with quantum components. Their study may help readers form a realistic intuition about the ability, potential, and challenges of using quantum components to enhance classical software in the near and medium term. They also claimed that in the current development stage, this problem must be considered at a much lower level than the conventional abstraction level in classical software engineering. The research also implies that the ability to easily and smoothly replace functional components of classical software architecture with quantum components, just like in classical component-based software engineering~\cite{heineman2001component}, is crucial for the success of the project. \section{Quantum Software Reuse} \label{sec:reuse} Computer software can be systematically reused across the entire development life-cycle, i.e., requirement specification, design, and implementation~\cite{krueger1992software,frakes2005software}. It has its place even in the post-delivery stages of development, e.g., its continuing quality assessment or software maintenance. This section gives an overview of state of the art on quantum software reuse in several aspects, including {\it quantum pattern reuse}, {\it quantum circuit reuse}, and {\it quantum state reuse}. \subsection{Quantum Pattern Reuse} \label{subsec:pattern-reuse} As we discussed in Section~\ref{sec:design}, Leymann~\cite{leymann2019towards} identified some quantum patterns which can help develop quantum algorithms from the perspective of software reuse. They also plan to represent these quantum patterns in a pattern repository, which is a specialized database that stores pattern documents and manages the links between them. Such a quantum pattern repository allows a quantum software developer to query the database to find appropriate quantum patterns (e.g., to determine the entry pattern corresponding to a problem), supports browsing the content of each pattern document, and enables navigating between patterns based on the links between them. In this way, a quantum software developer could find suitable patterns from the repository, that may cross to several different domains, and compose them to solve a complex problem. \subsection{Quantum Circuit Reuse} \label{subsec:circuit-reuse} The design of efficient quantum circuits is an essential problem in quantum computation~\cite{williams1998automated,perkowski2003hierarchical}. For a given unitary matrix, it is a difficult task to find a highly optimized quantum circuit. There are a few known methods of quantum circuit design, which are mainly based on heuristic search techniques such as genetic algorithm and simulated annealing. 
However, these methods are limited to relatively small circuit sizes, and the solutions they generate are usually difficult to explain. To address these problems, Klappenecker and R\"{o}tteler~\cite{klappenecker2003quantum} presented a new design principle for quantum circuits that is based precisely on the idea of reusing existing quantum circuits in the construction of other quantum circuits. The unique characteristic of this design principle is that, assuming an effective set of quantum circuits is available, one can systematically construct efficient quantum circuits by reusing and combining a group of highly optimized quantum circuits. The sound mathematical basis of this design method allows meaningful and natural explanations of the generated circuits. They also suggested that, from a practical perspective, it would be interesting to build a database of medium-sized matrix groups with effective quantum circuits. A given transformation could then be looked up in this database using linear algebra, and automatically deriving quantum circuit implementations in this way could be an attractive possibility. Allouche {\it et al.}~\cite{allouche2017reuse} attempted to further extend the design-by-reuse method proposed by Klappenecker and R\"{o}tteler~\cite{klappenecker2003quantum} into a general framework that supports circuit synthesis through the reuse of existing quantum circuits. In their extension, the approach needs to find suitable groups for the implementation of new quantum circuits. They also identified some critical points that are necessary for constructing their extended method. For example, the method relies on how the information is distributed between the group and the coefficient matrix. When the group contains enough information, as with the Fourier transform power group, the coefficient matrix is easy to calculate, and the efficiency of quantum Fourier transform synthesis can be used to generate an effective circuit for non-trivial operators. Besides, they also studied some potential group candidates, such as the {\it projective Pauli group} and the {\it dihedral group}, for testing the proposed method.
\subsection{Quantum State Reuse} \label{subsec:state-reuse}
"{\it Some quantum states are hard to create and maintain, but are a valuable resource for computing. Twenty-first-century entrepreneurs could make a fortune selling disposable quantum states}~\cite{preskill1999plug}." To make this real, Preskill presented an interesting idea called {\it plug-in quantum software} for reusing the quantum states that make up a quantum software program~\cite{preskill1999plug}. The basic idea is that manufacturers can design a valuable quantum state and use a special-purpose device to make multiple copies of that quantum state; these copies can be tested to ensure their quality and stored until they are needed. Consumers can pay to download that state and plug it into their quantum computer for a performance boost. A candidate application of plug-in quantum software is to ensure the reliable operation of quantum computers, which is usually achieved by applying the principle of quantum error correction~\cite{shor1995scheme,steane1996error}. For each known quantum error correction scheme, some quantum gates in the fault-tolerant universal set are easy to implement, while others are difficult. The latter can be efficiently executed with quantum software, which can be prepared in advance and then consumed during the execution of the quantum gate.
The advantage of using software rather than hardware to implement a gate is that one can verify that the software is ready according to specification before use. If a problem is found with the software, it can be discarded or repaired. By contrast, if the hardware suffers a severe failure during the execution of a quantum gate, it may be difficult to recover. The idea of creating and preparing quantum states offline is not new; several studies~\cite{shor1994algorithms,knill1998resilient,gottesman1999demonstrating} have explored it. Among them, Gottesman and Chuang~\cite{gottesman1999demonstrating} proposed an interesting approach to preparing, using, and classifying quantum states. The goal of their approach is to make the design of fault-tolerant quantum computers more straightforward and methodical. The main idea is to use a generalized form of quantum teleportation, a simple technique to reduce the resources required to construct necessary quantum gates (states), such as the Pauli gates ($X$, $Y$ and $Z$), the Clifford group, the Toffoli gate, the $\pi$/8 gate (rotation about the $Z$-axis by an angle $\pi$/4), and the phase gate. Their construction relies on the ability to create certain ancilla states, which are independent of the data being manipulated and can therefore be prepared offline in advance. They are thus valuable general-purpose quantum resources, perhaps a kind of "quantum software," as mentioned by Gottesman and Chuang~\cite{gottesman1999demonstrating}, and may be considered commercial products that can be manufactured.
\section{Challenges and Opportunities} \label{sec:challenge}
Quantum computing is an emerging field in which new engineering methodologies need to be developed to address issues such as unforeseen changes in requirements, lack of expertise in software development, and the limited budgets that plague scientific software projects~\cite{shaydulin2020making}. The interdisciplinary nature of quantum computing adds to the complexity of the field. This section discusses the challenges and opportunities in the area of quantum software engineering.
\subsection{Quantum Software Requirements Analysis} \label{subsec:co-requirement}
Quantum software requirements analysis may include two stages: {\it stakeholder requirements definition} and {\it system requirements definition}. A key issue for the stakeholder requirements definition is to establish the critical performance requirements, so that it is possible to determine whether quantum technology provides enough benefits over classical technology. Similarly, the acceptability (size, cost, capability) of the user system must be established as early as possible to determine the feasibility of using quantum technology in the application under consideration. Still, the types of benefits that quantum systems may provide are not fully defined. Therefore, if existing technology can meet the needs, users may not consider quantum systems, especially given that the technical risks of using quantum systems may be high. However, research and the efforts expected to utilize quantum systems in the coming years will likely provide the knowledge needed to define the needs of quantum systems properly, so that they can be considered in the solution space. Therefore, in the process of defining stakeholder needs, it is necessary to ensure that quantum and non-quantum solutions can be adequately distinguished, so as not to ignore quantum benefits.
The system requirements definition transforms the stakeholder requirements into a set of (technical) system requirements that the proposed solution must meet. The resulting system requirements cover functional, non-functional, performance, process, and interface aspects, and include design constraints. However, due to the limitations of the available models, defining the design constraints that arise from quantum technology effects might also be problematic. Therefore, one of the main problems in the system requirements definition process is whether the models used to establish the relationships and correlations between different technical requirements are adequate and sufficient. This problem can only be solved by developing better models and adequately considering the effects of quantum technology.
\subsection{Quantum Software Design} \label{subsec:co-design}
This section discusses challenges and opportunities regarding quantum software design from the perspectives of {\it architectural design} and {\it detailed design}.
\subsubsection{\bf Quantum Architectural Design} \label{subsubsec:co-architectural}
The software architecture of a system defines its high-level structure, revealing its overall organization as a set of interacting components~\cite{perry1992foundations,mary1996software}. A well-defined architecture allows an engineer to reason about system properties at a high level of abstraction. The incorporation of quantum computing effects into software architectures may require new methods to define the attributes, features, functions, and constraints assigned to architectural elements. This will require quantum software architects to reach a consensus and establish standard representations for quantum software components. The extent to which new types of components are introduced at the architecture level depends on the level of abstraction required, which is as yet unclear. The functional architecture is unlikely to be affected by the introduction of quantum software components, though specific functions within the architecture may be new.
\vspace*{1.5mm} \noindent {\it Quantum Architectural Patterns (Styles).}\hspace*{1.3mm} A software architecture pattern provides a skeleton or template for the overall software architecture or high-level design of a software application~\cite{gomaa2011software}. Shaw and Garlan~\cite{mary1996software} described the architectural style (or pattern) of a software architecture, which is a recurring architecture used in a variety of software applications (see also~\cite{bass2000quality}). Architectural patterns can be classified into two main categories: {\it architectural structure patterns}, which focus on the static structure of the architecture, and {\it architectural communication patterns}, which focus on the dynamic communication among distributed components of the architecture. Currently, software architects do not have architectural patterns for quantum software systems that can be used to adequately deal with quantum issues when specifying the architecture of such systems. The current repositories of architectural patterns do not provide patterns that consider quantum issues. Therefore, to perform the architectural-level design of quantum systems, it is necessary to identify {\it quantum architectural patterns} that adequately model the quantum effects of the system.
\vspace*{1.5mm} \noindent {\it Quantum Architectural Description Languages.}\hspace*{1.3mm} Architectural description languages (ADLs)~\cite{clements1996survey} are formal languages that can be used to represent the architecture of a software system. They focus on the high-level structure of the overall application rather than the implementation details of any specific source module. Several ADLs have been proposed, such as Wright~\cite{robert1997formal}, Rapide~\cite{luckham1995specification}, UniCon~\cite{shaw1995abstractions}, Darwin~\cite{magee1996dynamic}, and ACME~\cite{garlan1997acme}, to support the formal representation of, and reasoning about, classical software architectures. Currently, software architects do not have tools to adequately deal with quantum issues when specifying the architecture of quantum software systems. Current ADLs do not provide primitives to specify quantum components and their connections with classical components. Using ADLs, software architects can specify the functionality of the system using components and the interaction between components using connectors. For the architectural-level design of quantum systems, it is therefore necessary to extend current ADLs into {\it quantum architectural description languages (qADLs)} that formally specify the architectures of quantum software systems. We believe that such a qADL should contain at least the following mechanisms: specifications of classical components with interfaces and connections between interfaces (already provided in classical ADLs), specifications of quantum components, and specifications of connectors between classical and quantum components. Also, a qADL should support the tasks of formal analysis, verification, and validation of quantum software architectures.\\
\noindent {\it Software Quality Attributes.}\hspace*{1.3mm} Software quality attributes~\cite{bass2000quality} refer to the non-functional requirements of software, which can have a profound effect on the quality of a software product. The software quality attributes of a system should be considered and evaluated during the development of the software architecture. These attributes relate to how the architecture addresses important non-functional requirements, such as performance, security, and maintainability. Other software quality attributes include modifiability, testability, traceability, scalability, reusability, and availability. As we summarized in Section~\ref{sec:design}, Sodhi~\cite{sodhi2018quality} identified some critical characteristics of quantum computing systems, and studied how these characteristics may affect the quality attributes of quantum software architectures. A more significant issue worth investigating is how to use these quality attributes to evaluate the quality of a quantum software architecture during architectural design.
\subsubsection{\bf Detailed Quantum Design} \label{subsubsec:co-detail}
The detailed quantum design provides information about the quantum software system as a whole, and about the individual quantum software components, to enable implementation. An essential aspect of the detailed design is the selection of the technologies required for each quantum software component. Substantially, the inclusion of quantum software components increases the search space available to the quantum software engineer when making technology selections, but it may also pose a challenge.
The development of suitable models and their incorporation into quantum software design frameworks should be an area of intensive research effort in the future. Another important consideration is that, at any particular stage in the quantum software life cycle, an appropriate level of quantum software modeling must be included. Moreover, it is crucial to recognize that the individual quantum software components may be developed by several or many different organizations.
\vspace*{1.5mm} \noindent {\it Quantum Design Patterns.}\hspace*{1.3mm} A design pattern describes a recurring design problem to be solved, a solution to the problem, and the context in which that solution works~\cite{gamma1995design,frank1996pattern}. Design patterns have evolved over a long period and provide the best solutions to specific problems faced during software development. Learning these patterns helps inexperienced developers to learn software design in an easier and faster way. Design patterns have proved highly useful within the object-oriented field and have helped to achieve good application designs through the reusability of validated components. These experiences tell us that it is crucial to identify the elements of good and reusable designs for quantum software, and to start formalizing people's experience with these designs through quantum design patterns. We hope that quantum design patterns, once identified, could benefit quantum software development, and a quantum pattern catalog for quantum software could be especially useful.
\subsubsection{\bf Design Models for Quantum Software} \label{subsubsec:co-model}
There is an absence of modeling techniques for quantum software system design. The existing models of quantum software systems~\cite{Perez-Delgado2020quantum} are generally simple extensions of classical modeling techniques. The lack of suitable models is probably one of the most significant difficulties facing quantum software engineering, particularly as this may impact the design, testing, and possibly maintenance parts of the quantum software life cycle. Design models for classical software engineering have been extensively studied from various perspectives. Among them, several notable design models are {\it data flow diagrams} (DFDs)~\cite{yourdon1979structured}, {\it entity-relationship diagrams} (ERDs)~\cite{chen1976entity}, and the {\it unified modeling language} (UML)~\cite{boochunified}. We believe that the first natural step is to investigate these well-established design models to see whether they could be extended to the quantum computing domain to support quantum software modeling, by incorporating quantum effects within the models appropriately.
\subsection{Quantum Software Implementation} \label{subsec:co-implementation}
Quantum software implementation refers to the development of quantum software entities that meet the requirements, architecture, and design. It is not uncommon to introduce new constraints during implementation, and these constraints must be reflected back into the requirements. The essential design characteristics for achieving subtle quantum effects likely cannot be compromised. Therefore, the process must include rules and tests to ensure that unwanted changes are not introduced. One can expect that reliability will become an important area of research in quantum software technologies. The principles derived from this research will provide a basis for the implementation strategy formulated during the implementation process.
In short, the implementation process will need to incorporate the principles of quantum software reliability engineering. Assessing the potential impact of implementation changes on the entire quantum software system requires modeling before agreeing to the changes. On the other hand, it seems challenging to introduce new quantum programming languages into widespread practice. Perhaps a promising way forward is to define the requirements for future high-level quantum programming, which may eventually lead to the development and widespread use of more efficient quantum programming languages. Another possible research direction could be to develop techniques and tools to automatically generate quantum code from quantum software design specifications.
\subsection{Quantum Software Reliability} \label{subsec:co-testing}
This section discusses the challenges and opportunities for quantum software reliability, including fault models, testing, debugging, verification, and visualization.
\subsubsection{\bf Fault Model for Quantum Software} \label{subsubsec:co-fault}
In general, a fault model refers to an engineering model of the things that may go wrong in the construction or operation of equipment, structures, or software~\cite{binder2000testing}. The unique features of quantum programming, such as superposition, entanglement, and no-cloning, do not occur in classical imperative, functional, or even multi-paradigm programming. Each feature can manifest new fault types that ultimately lead to quantum program failures. Therefore, to test quantum software, a fault model for quantum software is required. Such a fault model should be based on the peculiarities of quantum programs and defined through careful analysis of the unique constructs of quantum programming languages, reflecting an initial evaluation of the classes of potential faults. Such a fault model can be used to measure the fault-detection effectiveness of automatic test generation and selection techniques for quantum software. Although work on identifying bug types for quantum software is just beginning~\cite{huang2018qdb,huang2019statistical}, more study should be carried out to build a practical fault model for supporting quantum software testing and debugging.
\subsubsection{\bf Quantum Software Testing} \label{subsubsec:co-testing}
Systematic testing of quantum software systems must be based on fault models that reflect the structural and behavioral characteristics of quantum software. Criteria and strategies for testing quantum programs should be developed in terms of the fault model. As with its classical counterpart, quantum software testing must explore the following issues rigorously and provide answers to each of them in order to build effective and efficient testing frameworks for quantum software.
\begin{itemize}
\item[(1)] How to define test coverage criteria for quantum software?
\item[(2)] How to automatically and efficiently generate test cases for quantum software?
\item[(3)] How to evaluate the quality of test data for quantum software?
\item[(4)] How to test quantum software regressively?
\end{itemize}
Although some of these issues, such as test coverage criteria~\cite{wang2018quanfuzz} and test case generation~\cite{honarvar2020property}, have been addressed in current research, the results are still far from practical use for testing quantum software.
\subsubsection{\bf Quantum Program Debugging} \label{subsubsec:co-debugging}
A commonly used classical debugging technique is one in which a developer examines the program state by setting breakpoints~\cite{lazzerini1992program}. This technique, however, cannot be used to debug quantum software, since any inspection of a quantum register can cause it to decohere~\cite{wolf2012artificial}. A simple variant of this technique that entails making copies of the quantum register is similarly foiled by the physical impossibility of copying quantum objects. One possible way is to use multiple quantum registers that are prepared in the same way. However, the probabilistic nature of quantum computation should serve as a constant reminder to the debugger of quantum software that no two quantum registers can be assumed to be identical. It is evident that more investigation is needed here. Recently, several pieces of research~\cite{li2014debugging,huang2018qdb,huang2019statistical,liu2020quantum,zhou2019quantum,li2019poq} have shown promising initial results on debugging quantum software. However, it is still not clear what the appropriate debugging techniques for quantum computing are~\cite{mosca2019quantum}. As a result, new approaches still need to be developed. On the other hand, while assertion-based debugging~\cite{huang2018qdb,huang2019statistical,liu2020quantum,li2019poq} seems a promising way to debug quantum software, other kinds of classical debugging techniques, such as interactive debugging, static and dynamic analysis, code review and inspection, and post-mortem analysis, are also worth further investigation~\cite{mosca2019quantum}. A question that naturally arises is: {\it are these classical debugging techniques applicable to the quantum domain?} As an example, {\it Whyline}~\cite{ko2008debugging}, a novel interactive debugging paradigm for classical software, can reduce debugging time by allowing programmers to select "why" and "why not" questions to query the behavior of a program. We have not seen such a novel idea in the debugging of quantum software, but it would be worth exploring. An in-depth understanding of how quantum programmers perform debugging would provide an opportunity to develop new quantum debugging tools, and such work should build on previous research that demonstrates best practices in scientific computing~\cite{ashktorab2019thinking}. Another classical debugging paradigm that deserves attention is {\it algorithmic debugging}~\cite{shapiro1982algorithmic,shapiro1983algorithmic}. Algorithmic debugging is an interactive process in which the debugging system gains knowledge of the expected behavior of the program being debugged and uses this knowledge to locate errors. An algorithmic debugger can be invoked after noticing an externally visible symptom of a bug. It then executes the program and builds an execution trace tree at the procedure level, while saving useful trace information such as procedure names and input/output parameter values. After that, the debugger traverses the execution tree and interacts with the user by asking about the expected behavior of each procedure. The user can answer "yes" or "no" to give an assertion about the expected behavior of the procedure. Once certain conditions are satisfied, the search ends, and the location of the bug is identified. Algorithmic debugging thus provides a declarative way of debugging.
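As a minimal sketch of this classical procedure (our illustration, with hypothetical \verb+Call+ and oracle structures, not an existing tool), the following shows a top-down traversal of an execution trace tree in which an oracle plays the role of the user answering "yes" or "no" about each call:
\begin{verbatim}
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Call:
    """One node of the execution trace tree recorded during a failing run."""
    name: str
    children: List['Call'] = field(default_factory=list)

def algorithmic_debug(node: Call, is_expected) -> Optional[Call]:
    """Return the call blamed for the bug, or None if `node` was as expected."""
    if is_expected(node):
        return None
    for child in node.children:
        culprit = algorithmic_debug(child, is_expected)
        if culprit is not None:
            return culprit        # bug localized inside a sub-call
    return node                   # all sub-calls correct, so blame this call

# Example: square_sum(2, 3) returned 10 because square(3) wrongly returned 6.
trace = Call('square_sum(2,3)=10',
             [Call('square(2)=4'), Call('square(3)=6')])
oracle = lambda call: call.name == 'square(2)=4'   # only square(2) was correct
print(algorithmic_debug(trace, oracle).name)        # -> 'square(3)=6'
\end{verbatim}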
If applied to quantum software, it might offer abstractions that allow programmers to think at an algorithmic level with less concern for details such as control pulse generation, which may provide a possibility of overcoming the problems that quantum software debugging is facing~\cite{chong2017programming}.
\subsubsection{\bf Quantum Software Visualization} \label{subsubsec:co-visualization}
Understanding the behavior of quantum software is an important and nontrivial task, and software visualization~\cite{knight1999comprehension,stasko1998software}, a well-developed technique for understanding the behavior of classical software, may contribute significantly to it. Visualization of qubit states is particularly challenging, since the number of achievable quantum states is exponential~\cite{ashktorab2019thinking}. Recently, several methods have been proposed for visualizing the state of qubits using the Bloch sphere~\cite{wikipedia2020bloch,gidney2017visualizing} and matrices of Pauli expectation values (PEVs)~\cite{wiseman1993interpretation,gross2010quantum}, and for visualizing the transitions between qubit states using a two-qubit representation~\cite{ibm2020entanglion}. However, the explosion of the qubit state space necessitates the development of scalable visualizations that can intuitively help quantum software developers understand the states their systems are in, to facilitate the development, debugging, and validation of quantum algorithms~\cite{ashktorab2019thinking}.
\subsubsection{\bf Quantum Program Verification} \label{subsubsec:co-verification}
Verification plays an essential role in quantum programming, both in the short and the long term. Quantum programs are difficult to debug and test due to features such as superposition, entanglement, and no-cloning. To build confidence in the correctness of a quantum program, we need to develop verification methods. There has been substantial progress in verifying quantum programs~\cite{chadha2006reasoning,brunet2004dynamic,feng2007proof,ying2012floyd,ying2013verification,rand2019formal,rand2018formally,liu2019formal}. However, as Rand {\it et al.}~\cite{rand2019formal} pointed out, novel verification methods are still needed to deal with errors and to verify error-prone quantum programs with respect to the hardware we intend to run them on. Moreover, we also need approaches to verify quantum compilation~\cite{hietala2019verified}.
\subsection{Quantum Software Maintenance} \label{subsec:co-maintenance}
Software maintenance is an essential activity during software development. The objective is to modify the existing software product while preserving its integrity~\cite{bennett2000software}. Methods, techniques, and tools for classical software maintenance have been well studied and established~\cite{bennett2000software,grubb2003software,dorfman1997software,sommerville2011software}, but research on the maintenance of quantum software systems is just starting~\cite{Castillo2020reengineering}. We believe that any quantum software maintenance process should deal with at least the following three main issues:
\begin{itemize}
\item How to understand the existing quantum software?
\item How to modify the existing quantum software?
\item How to re-validate the modified quantum software?
\end{itemize}
Moreover, the maintenance process of quantum software systems likely needs to include the monitoring of the systems, so that fault diagnosis can be informed during the maintenance and evolution of the systems.
\subsection{Quantum Software Reuse} \label{subsec:co-reuse}
Component-based software engineering (CBSE) is an approach to software systems development based on reusing software components~\cite{kozaczynski1998component,heineman2001component}. A component here means a unit or a part of a model. CBSE emphasizes the design and construction of computer-based systems using reusable software components, or simply building software systems from pre-existing components~\cite{sommerville2011software}. Maintenance in CBSE can be easily accomplished by replacing a component or introducing a new one into the system~\cite{mili2001reuse}. As quantum software resources accumulate, it is crucial to develop methodologies, techniques, and tools for reusing quantum software systems. One promising direction, we believe, might be component-based quantum software engineering (CBQSE), which focuses on the design and development of quantum computer-based systems using reusable quantum (and classical) software components. Such an approach can save development time and also ease maintenance during the evolution of a quantum software system. Another possible research direction, as we discussed in Section~\ref{subsec:co-design}, is to build a quantum architectural (design) pattern catalog to support the reuse of quantum software entities efficiently and effectively.
\section{Related Work} \label{sec:work}
To the best of our knowledge, this work is the first comprehensive survey of research on quantum software engineering covering the various phases of its life cycle, including quantum software requirements analysis, design, implementation, testing, and maintenance. This section discusses some related work on quantum programming languages, quantum software engineering, and quantum software development environments.
\subsection{Quantum Programming Languages} \label{subsec:QPL-survey}
Software is a solution to a computational problem expressed in a formal programming language. The constructs of the language, and the tools that can be used to model, implement, and test software systems, affect the quality of the solution, including its correctness, reliability, readability, computational efficiency, and the efficiency of design and development. Therefore, the quantum programming language plays a vital role in the life cycle of quantum software development. Several comprehensive surveys on quantum programming languages have been carried out from different perspectives~\cite{selinger2004brief,gay2006quantum,unruh2006quantum,rudiger2007quantum,jorrand2007programmer, sofge2008survey,miszczak2011models,ying2012quantum,valiron2013quantum,valiron2015programming,hietala2016quantum,chong2017programming,spivsiak2017quantum,zorzi2019quantum,garhwal2019quantum}. Selinger~\cite{selinger2004brief} carried out the first survey on quantum programming languages in 2004; it uses a similar classification scheme and offers a different perspective on some of the issues in the field. Gay~\cite{gay2006quantum} published a comprehensive and detailed survey on quantum programming languages in 2006. He classified the central theme of each paper from the perspectives of programming language design, semantics, and compilation. Regarding programming language design, the paper considers imperative and functional programming languages, and for semantics, it focuses on denotational techniques.
Besides the survey, Gay also maintained an online bibliography as a resource for the community~\cite{gay2005bibliography}. Unruh~\cite{unruh2006quantum} also gave an overview of the state of the art in the development of quantum programming languages in 2006. In his overview, quantum programming languages are classified into two types, i.e., {\it practical programming languages} and {\it formal programming languages}. Practical programming languages, such as QCL~\cite{omer1998procedural,omer2000quantum} and Q~\cite{bettelli2003toward}, aim at practical applications such as simulation or the programming of actual quantum computers, while formal programming languages, such as QFC (QPL)~\cite{selinger2004towards,selinger2004brief,selinger2006lambda}, the quantum lambda calculus~\cite{maymin1996extending}, qGCL~\cite{sanders2000quantum,zuliani2001quantum,zuliani2001formal,zuliani2004non}, QPAlg~\cite{lalire2004process}, and CQP~\cite{gay2004communicating,gay2005communicating}, concentrate on how to model the semantics of a quantum program. The survey also pointed out some open problems and challenges regarding practical and formal programming languages. For practical languages, the main challenges are how to design powerful language constructs that abstract away from the low-level model of quantum circuits towards high-level programming, and how to develop compilers and optimizers that obtain the best results from the available resources. For formal languages, the main challenges lie in how to design expressive, easy-to-read quantum languages with well-defined semantics, and how to develop effective methods for verifying quantum programs written in these languages. R\"{u}diger~\cite{rudiger2007quantum} presented a comprehensive survey of quantum programming languages from a pragmatic perspective in 2006. He first gave an introduction to the necessary notations of quantum theory and quantum computation, and then discussed the goals and design issues for quantum programming languages. He claimed that quantum programming languages should enable programmers to reason about the structures of quantum algorithms and programs, and that a well-designed quantum programming language should help in finding new quantum algorithms. The survey also discussed several concrete quantum programming languages, such as pseudocode~\cite{knill1996conventions,knill2000encyclopedia}, QCL~\cite{omer1998procedural,omer2000quantum}, and the Q language~\cite{bettelli2002architecture,bettelli2003toward}, in detail, and compared these three languages from the perspective of language paradigms and semantics. Jorrand's survey~\cite{jorrand2007programmer} on the quantum computing paradigm started from a brief overview of the principles that underlie quantum computing and introduced the breakthroughs achieved by the quantum computing field, including quantum algorithms and teleportation. The main focus of the survey is on the quantum computation paradigm, that is, quantum programming languages and their semantics. Three styles of quantum programming languages are discussed in the survey: {\it imperative}, {\it functional}, and {\it parallel and distributed}.
For each style, several languages are mentioned and compared: QCL~\cite{omer1998procedural,omer2000quantum} and qGCL~\cite{sanders2000quantum,zuliani2001quantum,zuliani2001formal,zuliani2004non} for the imperative style; the approaches by van Tonder~\cite{van2003quantum} and Girard~\cite{girard2004between} and Selinger's QPL~\cite{selinger2004towards,selinger2004brief,selinger2006lambda} for the functional style; and CQP~\cite{gay2004communicating,gay2005communicating} and QPAlg~\cite{lalire2004process} for the parallel and distributed style. The survey also discussed the semantics of quantum programming languages from the operational, axiomatic, and denotational perspectives. Sofge~\cite{sofge2008survey} gave a brief survey of quantum programming languages from the perspective of history, methods, and tools in 2008. He introduced a taxonomy of quantum programming languages divided into (1) imperative quantum programming languages, (2) functional quantum programming languages, and (3) other quantum programming language paradigms, which include some types of mathematical formalism not intended for computer execution. Based on this taxonomy, a brief survey is given to reflect the state of the art of quantum programming languages up to 2007. Some challenges and a path forward for quantum programming languages are also given in the survey. Ying {\it et al.}~\cite{ying2012quantum} presented a survey on programming methodology for quantum computing in 2012. Their survey discussed issues regarding the design of sequential and concurrent quantum programming languages and their semantics and implementations, with more emphasis on the work conducted by the authors. Notably, in addition to discussing the quantum programming languages themselves, they also reviewed some formal verification approaches for quantum programs and protocols, and discussed the potential applications of these quantum languages and verification approaches in quantum computing engineering. One of the most recent surveys of quantum programming languages was presented by Garhwal {\it et al.}~\cite{garhwal2019quantum} in 2019; it gives an overview of the state of the art in the field, focusing on actual high-level quantum programming languages, their features, and comparisons among them. They classified quantum programming languages into five categories: quantum imperative programming languages, quantum functional languages, quantum circuit languages, multi-paradigm languages, and quantum object-oriented languages. In the survey, they tried to answer the following research questions~\cite{garhwal2019quantum}:
\begin{itemize}[leftmargin=2em]
\item What are the different types of quantum languages?
\item What are the recent trends in developing quantum languages?
\item What are the most popular publication venues for quantum programming?
\item What are the most cited papers in the area of quantum languages?
\item Which major companies, groups, institutes, and universities are working on the development of quantum languages?
\end{itemize}
They concluded the survey by pointing out some recent trends, results, and future work in the area. Another recent survey~\cite{zorzi2019quantum}, presented by Zorzi, focuses on the QRAM (Quantum Random Access Machine) architectural model. The survey is organized from a twofold perspective, theoretical and concrete, and identifies the main problems one has to face in quantum language design.
The survey tried to find evidence from current research to answer the following fundamental questions~\cite{zorzi2019quantum} regarding quantum language design and provided some possible answers to them. \begin{itemize}[leftmargin=2em] \item What is the architectural model the language refers to? \item How to manage quantum data (which cannot be duplicated, due to the no-cloning property, and so need a form of linear treatment)? \item What class of quantum functions does one aim to program (expressive power)? \end{itemize} Besides those discussed above, several other surveys on quantum programming languages from different viewpoints are given in~\cite{miszczak2011models,valiron2013quantum,valiron2015programming,hietala2016quantum,spivsiak2017quantum,chong2017programming}. However, although these surveys give very comprehensive pictures of the state of the art of research on quantum programming languages, they lack a discussion of the other phases of the quantum software development life cycle, such as requirements, design, testing, and maintenance. \subsection{Quantum Software Engineering} \label{subsec:QSE} The term {\it quantum software engineering} was originally coined by John Clark and Susan Stepney~\cite{clark2002quantum} at the Workshop on Grand Challenges for Computing Research organized by Tony Hoare and Malcolm Atkinson, in 2002. Since then, there has been extensive research on the different aspects of quantum software engineering, as surveyed in this paper. Although no survey of quantum software engineering has been presented until now, some papers~\cite{barbosa2020software,PiattiniPPHSHGP2020} have discussed the challenges that quantum computing faces during quantum software development. \subsubsection{\bf Journeys in Non-Classical Computation} \label{subsubsec:QSE-nonclassical} In their seminal papers titled "Journeys in non-classical computation I and II"~\cite{stepney2005journeys,stepney2006journeys}, Stepney {\it et al.} presented a grand challenge for quantum software engineering: to develop a mature discipline of quantum software engineering for fully exploiting commercial quantum computer hardware. They claimed in~\cite{stepney2006journeys} that "{\it The whole of classical software engineering needs to be reworked and extended into the quantum domain}.", and introduced this challenge from different perspectives of quantum software engineering, including foundations, quantum computational models, languages and compilers, methods and tools, as well as novel quantum possibilities. Here, we briefly list some challenge problems for each aspect mentioned in~\cite{stepney2005journeys,stepney2006journeys}: \begin{itemize}[leftmargin=2em] \item {\it Foundations}: how to develop metaphors and models of quantum computing which one can use to design and reason about quantum algorithms without considering the details of the quantum machine, unitary matrices, etc. \item {\it Quantum computational models}: how to generalize various classical formalisms to the quantum realm in different ways. \item {\it Languages and compilers}: how to determine the fundamental building blocks of quantum programming; how to design suitable assembly-level and high-level quantum programming languages and the compilers of these languages; how to develop suitable reasoning systems and refinement calculi for these languages.
\item {\it Methods and tools}: how to develop powerful simulation systems so that one can perform computational experiments and validate the designs of the languages and algorithms; how to discover what high-level structuring techniques and architectures are suitable for quantum software; how to develop novel debugging and testing techniques for quantum computing, given that quantum execution is in principle unobservable. \item {\it Novel quantum possibilities}: how to extend quantum software engineering to encompass those new issues arising from quantum mechanics, which cannot even be simulated by discrete deterministic classical computers. \end{itemize} We believe that the software engineering and quantum computing communities should pay more attention to the issues listed above, to facilitate the construction of an overall picture of the field of quantum software engineering. \subsubsection{\bf A landscape of Research Challenges for Quantum Software Engineering} \label{subsubsec:QSE-landscape} Barbosa~\cite{barbosa2020software} presented a landscape of research challenges to overcome in order to bring software engineering practice to quantum computing, along with potential research directions in several aspects of the problem, including models, architectures, and properties. He believed that it is time to discuss an agenda for a solid, rigorous software engineering discipline for quantum systems. He claimed that any roadmap for such a discipline should contain three main aspects: \begin{itemize} \item[(1)] How quantum software systems are modeled. \item[(2)] How the models of these systems are composed. \item[(3)] How the properties of these systems' behaviors can be predicted, specified, and verified. \end{itemize} He also pointed out some challenges and research directions in terms of the models, architectures, and properties of quantum systems, respectively. Among those directions, one particularly interesting issue worth exploring is how to extend contract-based design, a successful paradigm in classical software engineering, to the quantum domain. The paper is presented in a form that emphasizes the formal aspects of quantum software engineering principles. \subsubsection{\bf The Talavera Manifesto for Quantum Software Engineering and Programming} \label{subsubsec:QSE-manifesto} Piattini {\it et al.}~\cite{PiattiniPPHSHGP2020} presented the Talavera Manifesto for quantum software engineering and programming, which is the result of the discussions and different viewpoints of academic and industry practitioners who gathered at QANSWER, the 1st International Workshop on Quantum Software Engineering \& Programming, promoted by aQuantum at the School of Computer Science, on the Talavera de la Reina campus of the University of Castilla-La Mancha. The manifesto collects some principles and commitments about the field of quantum software engineering and programming, as well as some calls for action. They believe that quantum software engineering has a necessary contribution to make to the success of quantum computing, and that it is time to develop quantum software by applying or adapting the well-established principles and methodologies of the classical software engineering field, which may include processes, methods, techniques, and practices.
Concretely, they listed some principles and commitments regarding quantum software engineering, such as quantum software {\it process}, {\it reengineering}, {\it requirements}, {\it evolution}, {\it testing and debugging}, {\it reuse}, {\it security and privacy}, and {\it management}. They also issued calls for action to the stakeholders, including software practitioners, researchers, educators, government and funding agencies, quantum technology vendors, professional associations, customers, and users, who may contribute to the field of quantum software engineering. However, the manifesto contains no detailed discussion of the current status of research in the field of quantum software engineering. \subsection{Quantum Software Development Environments} \label{subsec:IDE} Roetteler {\it et al.}~\cite{roetteler2017design} presented a survey of recent progress in building a quantum software framework that can compile quantum algorithms from high-level descriptions down to physical quantum gates implementable on fault-tolerant quantum computers. In the survey, they discussed why tools such as compilation and design automation are essential to meet the enormous challenges of building scalable quantum computers. They also introduced a library developed with the LIQUi|$\rangle$~\cite{wecker2014liqui} programming language, including reversible circuits for arithmetic and new quantum methods that rely on a quantum computer architecture allowing the probabilistic execution of quantum gates. This library, in some cases, can reduce time and space overhead. Also, the survey highlights why these libraries are useful for implementing many quantum algorithms. Finally, the tool \verb+Revs+ was investigated. This tool can help to compile high-level irreversible programs into low-level reversible circuits with resource efficiency, while trying to optimize the memory footprint of the resulting reversible network. Construction of the tool is motivated by the fact that the availability of qubits will be limited in the foreseeable future. LaRose~\cite{larose2019overview} presented an overview and comparison of gate-level quantum software platforms. The overview mainly focuses on four new software platforms in quantum computing, namely Forest (pyQuil) from Rigetti~\cite{smith2016practical,regetti2017forest}, Qiskit from IBM~\cite{ibm2017qiskit}, ProjectQ from ETH Zurich~\cite{projectq2017projectq,steiger2018projectq}, and the Quantum Development Kit (Q\#) from Microsoft~\cite{svore2018q}. Forest, Qiskit, and ProjectQ allow users to access real quantum computing devices, while the Quantum Development Kit only allows access to a simulator. In the overview, each platform is discussed and summarized from six aspects: requirements and installation, documentation and tutorials, quantum programming language syntax, quantum assembly/instruction language, quantum hardware, and simulator capabilities, with links to the documentation and tutorial sources of each package. The overview also compares the platforms along additional aspects, including library support, quantum hardware, and quantum compilers, and lists some notable and useful features of each platform. The purpose of this overview is to provide users with the essential information on each platform to help them select a suitable one for starting to explore quantum programming.
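To give a concrete flavour of what gate-level programming looks like across these platforms, the following minimal example (our own illustrative sketch, not taken from the cited overview) builds and draws a two-qubit Bell-state circuit with Qiskit, one of the four platforms compared; the other platforms express essentially the same circuit through similar gate-level primitives.
\begin{verbatim}
# Illustrative sketch (not from the cited overview): a two-qubit Bell-state
# circuit written with Qiskit, one of the gate-level platforms compared above.
from qiskit import QuantumCircuit

bell = QuantumCircuit(2, 2)     # two qubits, two classical bits
bell.h(0)                       # Hadamard: put qubit 0 into superposition
bell.cx(0, 1)                   # CNOT: entangle qubit 0 with qubit 1
bell.measure([0, 1], [0, 1])    # measure both qubits into the classical bits

print(bell.draw())              # text drawing of the resulting circuit
\end{verbatim}
Running such a sketch on a simulator or on real hardware is then a matter of submitting the circuit through a platform-specific backend interface, which is precisely one of the aspects along which the overview compares the platforms.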
In addition to the four platforms mentioned, the overview also briefly introduced several quantum programming environments, including Quipper~\cite{green2013introduction,green2013quipper}, Scaffold~\cite{abhari2012scaffold}, and QCL~\cite{omer1998procedural}. Fingerhuth {\it et al.}~\cite{fingerhuth2018open} presented an exhaustive review of open-source software projects in quantum computing. The review covers all stages of the quantum toolchain, from quantum hardware interfaces through quantum compilers to implementations of quantum algorithms, and also the full spectrum of quantum computing paradigms, which include quantum annealing~\cite{kadowaki1998quantum,finnila1994quantum,shin2014quantum} and discrete and continuous-variable gate-model quantum computing~\cite{ortiz2017continuous}. For each project, the evaluation covers features including documentation, licensing, the choice of programming language, compliance with software engineering specifications, and the culture of the project. Fingerhuth {\it et al.} also discussed some of their findings: though the diversity of the projects is fascinating, only a few projects attract external developers, and even many commercially supported frameworks have flaws in software engineering. Based on these findings, the authors highlighted some best practices that could foster a more active community around quantum computing software, welcome newcomers to the field, and also ensure high-quality, well-documented code. Shaydulin {\it et al.}~\cite{shaydulin2020making} surveyed open-source quantum computing projects from a different perspective, focusing mainly on the contributors to quantum software projects. They observed that one of the main problems in quantum computing is the lack of understanding of what training is required for success in the quantum computing field. To answer this question, they collected data on 148 contributors to three open-source quantum computing projects hosted on GitHub, including Qiskit~\cite{gadi_aleksandrowicz_2019_2562111} by IBM, PyQuil/Grove~\cite{smith2016practical} by Rigetti, and Cirq~\cite{cirq2018google} by Google. They studied the successful contributors to these projects, from the field as well as from the wider quantum computing community, to understand the background and training that contributed to their success. These observations can help develop educational resources targeted at preparing a new generation of quantum computing researchers and practitioners. Their study could have a positive effect on bringing software engineering methodologies and techniques to the quantum domain. All these surveys focus specifically on quantum software development environments, one of the essential aspects of quantum software development, but not on the whole life cycle of quantum software development as presented in this paper. \section{Concluding Remarks} \label{sec:conclusion} This paper presented a comprehensive survey of the field of quantum software engineering, spanning all the phases of the quantum software life cycle, and covered the crucial issue of quantum software reuse as well. The paper also discussed some challenges and opportunities in the field. In its short history, quantum software development has been driven by the emergence of quantum programming languages~\cite{omer1998procedural,gay2005bibliography,green2013quipper,ibm2017qiskit,svore2018q}. In other words, quantum software development has mostly been synonymous with quantum programming.
While such languages have helped popularise the subject, this is not a healthy position in the longer term. In particular, it is vital that a complete software engineering discipline emerges for quantum software development. This paper is intended as a stepping stone in this direction. It has examined the state of the art in engineering support for quantum software systems, looking at areas such as requirements, design, implementation, testing, and maintenance in the quantum software life cycle. The crucial issue of quantum software reuse has also been considered. The evidence presented suggests that rapid progress is being made in these areas. It would be wrong, though, to claim maturity in the topics addressed by this paper; although many techniques have emerged, further experience is required to appreciate their relative strengths and to consolidate proposals into a small number of critical approaches. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \label{sec:intro} Hartree Fock (HF) theory \cite{Szabo-Ostland,MolElecStruc} is so immensely useful in large part due to the rigorous and convenient link it provides between a qualitatively correct many-electron description and an affordable and more intuitive one-electron equation. The link it makes is rigorous in that, when solved, its one-electron equation guarantees that the many-electron description underneath it is optimal in a variational sense, meaning that the energy is made stationary with respect to changes in the wave function. The link is also convenient, because many-electron properties like the energy can be evaluated in terms of inexpensive one-electron quantities, and because solving a one-electron equation, even one with mean field operators that must be brought to self-consistency, is in most cases easier and less expensive than a direct minimization of the many-electron energy. The fact that this useful link is possible at all owes much to the simplicity of the Slater determinant many-electron wave function on which HF theory is built. Essentially, the Slater determinant is as close as we can get to a truly mean field, correlation-free Hartree product ansatz while still capturing the important effects of Pauli correlation. Happily, this single step away from a product state does not prevent a useful and intuitive formulation in terms of a self-consistent one-electron equation in which mean field operators account for electron-electron coulomb repulsion. In this paper, we will show how excited state mean field (ESMF) theory \cite{Shea2018} can also be formulated in terms of a one-electron mean field equation that, when solved self consistently, produces optimal orbitals. As in HF theory, this formulation is possible thanks to the ansatz hewing closely to the mean field limit: ESMF takes only one additional step away from a truly mean field product state by adding the open-shell correlation that arises in an excitation on top of the Pauli correlations already present in the ground state. Perhaps most importantly, the resulting one-electron equation that determines the optimal orbitals can, like the Roothaan equations, be solved by iteratively updating a set of mean field operators until they are self-consistent with the orbital shapes. As we will see, when accelerated by direct inversion in the iterative subspace (DIIS), \cite{pulay1982diis} this self consistent field (SCF) approach brings the orbital optimization cost down to within a factor of two of HF theory, and significantly lowers the overall cost of ESMF theory compared to previous approaches. Given that ESMF offers a powerful platform upon which to construct excited-state-specific correlation theories \cite{Zhao2019dft, Shea2020gvp, Clune2020topesmp2} and that it has recently been shown to out-compete other low-cost methods like configuration interaction singles (CIS) and density functional theory in the prediction of charge density changes, \cite{zhao2020esmf} this acceleration of the theory and simplification of its implementation should prove broadly useful. While recent work has provided an improved ability to optimize the ESMF ansatz via the nonlinear minimization of a generalized variational principle (GVP), \cite{Shea2020gvp, zhao2020esmf} the current lack of an SCF formulation stands in sharp contrast to the general state of affairs for methods based on Slater determinants. 
Even in contexts outside of standard HF for ground states, SCF procedures are the norm rather than the exception when it comes to optimizing Slater determinants' orbitals. Indeed, among many others, the $\Delta$SCF, \cite{bagus1965scf,Pitzer1976scf,argen1991xray,Gill2009dscf} restricted open-shell Kohn Sham, \cite{Shaik1999,Kowalczyk2013} constrained density functional theory, \cite{VanVoorhis2005cdft} ensemble density functional theory, \cite{theophilou1979,kohn1986quailocal,gross1988density} projected HF, \cite{jimenez2012PHF} and $\sigma$-SCF \cite{vanVoorhis2017sigmaSCF,Voorhis2019} methods all favor SCF optimization approaches. Although the direct minimization of a GVP or the norm of the energy gradient \cite{hait2020oo} offers protection against a Slater determinant's ``variational collapse'' to the ground state or lower excited states, this rigorous safety comes at some cost to efficiency. It is not for nothing that direct energy minimization methods, although available, \cite{VanVoorhis2002gdm} are not the default HF optimization methods in quantum chemistry codes. In cases where they prove stable, SCF approaches are typically more efficient. In the case of the ESMF ansatz, an SCF approach is also at risk of collapse to an undesired state, but, even in such troublesome cases, a brief relaxation of the orbitals by SCF may still offer a low-cost head start for the direct minimization of a GVP. In cases where an SCF approach to ESMF is stable, history strongly suggests that it will be more efficient than nonlinear minimization. In short, our preliminary data agree with history's suggestion. \section{Theory} \label{sec:theory} \subsection{Hartree-Fock Theory} \label{sec::hf} To understand how an SCF formulation of ESMF theory comes about, it is useful to first review the formulation of HF theory and in particular how its condition for optimal orbitals can be written as a commutator between a mean field operator and a one-body reduced density matrix (RDM). In HF theory, the energy of the Slater determinant $\Psi_{SD}$ is made stationary with respect to changes in the orbital variables, which is the Slater determinant's approximation of the more general condition that an exact energy eigenstate will have an energy that is stationary with respect to any infinitesimal variation in the wave function. For convenience, and without loss of generality, the molecular orbitals are constrained by Lagrange multipliers to be orthonormal. \cite{Szabo-Ostland} For Restricted Hartree Fock (RHF), the resulting Lagrangian \begin{align} \label{eqn:rhf_lagrangian} L_{RHF} &= E_{RHF} + 2 \hspace{0.5mm} \mathrm{tr} \left[ (\bm{I} - \bm{C}^T \bm{S} \bm{C}) \bm{\epsilon} \right] \end{align} in which $\bm{C}$ is the matrix whose columns hold the molecular orbital coefficients, $\bm{S}$ is the atomic orbital overlap matrix, $\bm{I}$ is the identity matrix, $\bm{\epsilon}$ is the symmetric matrix of Lagrange multipliers, $\mathrm{tr}[]$ is the matrix trace operation, and $E_{RHF}$ is the RHF energy (given below), is then made stationary by setting derivatives with respect to $\bm{C}$ equal to zero.
After some rearrangement, \cite{Szabo-Ostland} this condition can be formulated into the famous Roothaan equations, \begin{align} \label{eqn:rhf_roothaan} \big( \bm{h} + \bm{W}\left[\bm{A}\right] \big) \bm{C} = \bm{S} \bm{C} \bm{\epsilon} \end{align} in which $\bm{h}$ is the matrix representation of the one-electron components of the Hamiltonian in the atomic orbital basis and $\bm{W}$ is interpreted as a mean field approximation for electron-electron repulsions. Of course, this mean field repulsion depends on the orbital shapes, causing the operator $\bm{W}$ to be a function of $\bm{A}$, the Aufbau determinant's 1-body $\alpha$-spin RDM. In what comes below we will consider RDMs and other matrices in both the atomic orbital (AO) and molecular orbital (MO) bases, and will adopt the notation that a matrix with no superscript (e.g.\ $\bm{A}$) refers to the AO representation, while the MO representation is explicitly denoted as such (e.g.\ $\bm{A}^{(MO)}$). The closed-shell Aufbau determinant's RDM has the form \begin{align} \label{eqn:rhf_rdm} \bm{A}^{(MO)} = \bm{I}_o \qquad \bm{A} = \bm{C} \bm{A}^{(MO)} \bm{C}^T \end{align} where the matrix $\bm{I}_o$ has ones on the first $n_o$ elements of its diagonal and zeros elsewhere ($n_o$ is the number of occupied molecular orbitals). Although in many contexts it is useful to separate the restricted HF (RHF) mean field electron-electron repulsion operator $\bm{W}[\bm{A}]=2\bm{J}[\bm{A}]-\bm{K}[\bm{A}]$ into its ``coulomb'' $\bm{J}$ and ``exchange'' $\bm{K}$ components, \begin{align} \label{eqn:J} J[\mathcal{\gamma}]_{pq}&=\sum_{rs}{\mathcal{\gamma}_{rs}(rs|pq)} \\ \label{eqn:K} K[\mathcal{\gamma}]_{pq}&=\sum_{rs}{\mathcal{\gamma}_{rs}(pr|qs)}, \end{align} defined here using the two-electron integrals in 1122 order, this separation is not necessary at present and so we will work instead in terms of the combined mean field operator $\bm{W}$. Now, while the Roothaan equation has both an intuitive appeal as a one-electron Schr\"{o}dinger equation and a practical appeal as a convenient setup for an SCF cycle based on the efficient numerical diagonalization of a symmetric generalized eigenvalue problem, it is not the only way to formulate HF theory's central requirement of Lagrangian stationarity. Noting that only the first $n_o$ columns of $\bm{C}$ affect the ansatz, we can right-multiply Eq.\ (\ref{eqn:rhf_roothaan}) by $\bm{I}_o=\bm{A}^{(MO)}$ to focus our attention on them while at the same time left-multiplying by $\bm{C}^T$ to eliminate the overlap matrix, which results in \begin{align} \label{eqn:rhf_modified_roothan} \bm{F}^{(MO)} \bm{A}^{(MO)} = \bm{C}^T \big( \bm{h} + \bm{W} \big) \bm{C} \bm{A}^{(MO)} = \bm{\epsilon} \bm{I}_o \end{align} where we have made the usual definition of the Fock operator. \begin{align} \label{eqn:rhf_fock_op} \bm{F}^{(MO)} = \bm{C}^T\bm{F}\bm{C} = \bm{C}^T \big( \bm{h} + \bm{W} \big) \bm{C} \end{align} If we ensure that we work in the canonical representation, \cite{Szabo-Ostland} the matrix $\bm{\epsilon}$ will be diagonal, and so Eq.\ (\ref{eqn:rhf_modified_roothan}) essentially says that the product $\bm{F}^{(MO)} \hspace{0.3mm} \bm{A}^{(MO)}$ must produce a symmetric matrix. We may enforce this requirement by setting the difference between this product and its transpose equal to zero, which leads to a commutator condition for Lagrangian stationarity that can be used as an alternative to the Roothaan equation when optimizing orbitals. 
\cite{McWeeny, pulay1982diis} \begin{align} \label{eqn:rhf_commutator} \big[ \hspace{0.6mm} \bm{C}^T\bm{F}\bm{C}, \hspace{0.6mm} \bm{A}^{(MO)} \hspace{0.6mm} \big] = 0 \end{align} If we consider the HF energy expression \begin{align} \label{eqn:rhf_energy} E_{RHF} = \mathrm{tr}\big[ (2\bm{h} + \bm{W}) \bm{A} \big] \end{align} alongside the Fock operator definition $\bm{F}=\bm{h}+\bm{W}$, we see a nice connection between the commutator condition and the energy. Specifically, if one halves the one-electron component of the mean field operator whose trace with the density yields the energy, the resulting operator ($\bm{F}$ in this case) must, when put in the MO basis, commute with the MO basis representation of the density matrix in order for the Lagrangian to be stationary. With this connection pointed out, we now turn our attention to ESMF theory, where a generalization of Eq.\ (\ref{eqn:rhf_commutator}) yields a useful SCF formulation for orbital optimization. \subsection{Excited State Mean Field Theory} \label{sec::esmf} Like HF theory, the energy expression for the ESMF ansatz for a singlet excited state can be written in terms of traces between mean field operators and density-like matrices. In particular, if we take the simple version of the singlet ESMF ansatz in which the Aufbau coefficient is set to zero, \begin{align} \label{eqn:esmf_ansatz} \left. |\Psi_{ESMF}\right\rangle = \sum_{ia} t_{ia} \left|\hspace{0.3mm}{}^{a_{\uparrow}}_{i_{\uparrow}}\right\rangle + t_{ia} \left|\hspace{0.3mm}{}^{a_{\downarrow}}_{i_{\downarrow}}\right\rangle, \end{align} where $\bm{t}$ is the matrix of CIS-like configuration interaction coefficients and $\left|\hspace{0.3mm}{}^{a\uparrow}_{i\uparrow}\right\rangle$ is the Slater determinant resulting from an $\hspace{1mm}i\rightarrow a\hspace{1mm}$ $\alpha$-spin excitation out of the Aufbau determinant (note we do not say the HF determinant, as we are not in the HF MO basis), then the ESMF singlet energy amounts to four traces between mean field operators and density-like matrices. \begin{align} \notag E_{ESMF} = \hphantom{+} \hspace{0.5mm} \mathrm{tr}\big[& \hspace{0.7mm} ( \hspace{0.5mm} 2\bm{h} \hspace{0.3mm} + \bm{W}[\bm{A}] \hspace{0.7mm} ) \hspace{1mm} \mathcal{\gamma} \hspace{1mm} \big] + \hspace{0.5mm} \mathrm{tr}\big[ \hspace{0.5mm} \bm{W}[\bm{D}] \hspace{0.5mm} \bm{A} \hspace{0.5mm} \big] \\ \label{eqn:esmf_energy} + \hspace{0.5mm} \mathrm{tr}\big[& \hspace{0.5mm} \bm{W}[\bm{T}] \hspace{0.5mm} \bm{T}^T \hspace{0.5mm} \big] + \hspace{0.5mm} \mathrm{tr}\big[ \hspace{0.5mm} (\bm{W}[\bm{T}])^T \hspace{0.5mm} \bm{T} \hspace{0.5mm} \big] \end{align} Here $\mathcal{\gamma}$ is the one-body alpha-spin RDM for the ESMF ansatz. \begin{align} \label{eqn:esmf_rdm} \gamma^{(MO)} = \bm{I}_o + \left(\begin{array}{c|c} -\bm{t}\hspace{0.4mm}\bm{t}^T & 0 \\ \hline 0 & \bm{t}^T\bm{t}\rule{0pt}{3.8mm} \end{array}\right) \qquad \gamma = \bm{C} \gamma^{(MO)} \bm{C}^T \end{align} The matrix $\bm{A}$ is the Aufbau determinant's one-body RDM, as in Eq.\ (\ref{eqn:rhf_rdm}). The difference between these density matrices we define as $\bm{D}=\gamma-\bm{A}$. Finally, $\bm{T}$ is the non-symmetric matrix that, in its MO representation, has the $\alpha$-spin transition density matrix between the Aufbau determinant and the ESMF ansatz (which is as for CIS just $\bm{t}\hspace{0.2mm}$) in its upper-right corner. 
\begin{align} \label{eqn:esmf_tdm} \bm{T}^{(MO)} = \left(\begin{array}{c|c} 0 & \bm{t} \\ \hline 0 & 0 \rule{0pt}{3.8mm} \end{array}\right) \qquad \bm{T}=\bm{C}\bm{T}^{(MO)}\bm{C}^T \end{align} With the ESMF energy written in terms of one-body mean field operators and density-like matrices, we can now present our central result, in which the stationarity conditions for the ESMF Lagrangian \begin{align} \label{eqn:esmf_lagrangian} L_{ESMF} = E_{ESMF} + 2 \hspace{0.5mm} \mathrm{tr} \left[ (\bm{I} - \bm{C}^T \bm{S} \bm{C}) \bm{\epsilon} \right] \end{align} with respect to orbital variations are written in a one-electron equation that admits an SCF-style solution. We begin, as in HF theory, by setting the (somewhat messy) derivatives $\partial L_{ESMF} / \partial \bm{C}$ equal to zero. With some care, this condition can be organized into \begin{align} \notag &\big( \bm{h}+\bm{W}[\bm{A}] \big) C \gamma^{(MO)} + \bm{W}[\bm{D}] C A^{(MO)} \\ &\hspace{6mm} + \bm{W}[\bm{T}] C (T^{(MO)})^T + (\bm{W}[\bm{T}])^T C T^{(MO)} = \bm{S} \bm{C} \bm{\epsilon} \label{eqn:esmf_roothaan} \end{align} whose structure is similar to but also notably different from the analogous HF expression in Eq.\ (\ref{eqn:rhf_roothaan}). The formal difference is that there are now four terms on the left hand side, one for each trace in the energy expression. The practical difference is that the ESMF equation is not an eigenvalue problem, and it is not obvious that it can be reorganized into one due to the incompatible kernels of the matrices $\gamma^{(MO)}$, $A^{(MO)}$, and $T^{(MO)}$. Thus, it is at present not clear whether this ESMF equation can offer the same spectral information that the Roothaan equation provides for HF. Nonetheless, for orbital optimization, we have found a convenient alternative by transforming this stationary condition into commutator form by following the same steps that took us from Eq.\ (\ref{eqn:rhf_roothaan}) to Eq.\ (\ref{eqn:rhf_commutator}) in HF theory. Defining $\bm{F}_A=\bm{h}+\bm{W}[\bm{A}]$, the result is that the Lagrangian stationary condition can be written as \begin{align} \notag 0 = & \hspace{0.3mm} \big[ \hspace{1mm} \bm{C}^T\bm{F}_A\bm{C}, \hspace{1mm} \mathcal{\gamma}^{(MO)} \hspace{1mm} \big] \\ \notag & + \big[ \hspace{1mm} \bm{C}^T\bm{W}[\bm{D}]\bm{C}, \hspace{1mm} \bm{A}^{(MO)} \hspace{1mm} \big] \\ \notag & + \big[ \hspace{1mm} \bm{C}^T\bm{W}[\bm{T}]\bm{C}, \hspace{1mm} (\bm{T}^{(MO)})^T \hspace{1mm} \big] \\ & + \big[ \hspace{1mm} \bm{C}^T(\bm{W}[\bm{T}])^T\bm{C}, \hspace{1mm} \bm{T}^{(MO)} \hspace{1mm} \big]. \label{eqn:esmf_commutator} \end{align} It is interesting that the same pattern holds as in the HF case: the commutator condition has one commutator per trace in the energy expression, and the mean field operators (with any one-electron parts halved) are again paired with the same density-like matrices as in the energy traces. We find this pattern especially interesting in light of the fact that it does not simply follow that each trace produces one commutator. Instead, cancellations of terms coming from derivatives on different traces are needed to arrive at the commutators above, and so we do wonder whether this is a happy accident or whether there is an underlying reason to expect such cancellations. \subsection{Self Consistent Solution} \label{sec::scs} Either way, Eq.\ (\ref{eqn:esmf_commutator}) forms the basis for an efficient SCF optimization of the ESMF orbitals. 
Assuming that we are a small orbital rotation away from stationarity, we insert the rotation $\bm{C}\rightarrow\bm{C}\mathrm{exp}(\bm{X})$ into our commutator condition and then expand the exponential and drop all terms higher than linear order in the anti-symmetric matrix $\bm{X}$. The result is a linear equation for $\bm{X}$ (see Eq.\ (\ref{eqn:linear_eq_x}) in the Supplementary Material) which we solve via the iterative GMRES method. Note that, if desired, one can control the maximum step size in $\bm{X}$ by simply stopping the GMRES iterations early if the norm of $\bm{X}$ grows beyond a user-supplied threshold. This may be desirable, as we did after all assume that only a small rotation was needed and our linearization of the equation prevents us from trusting any proposed rotation that is large in magnitude. In parallel to SCF HF theory, which holds $\bm{F}$ fixed while solving the Roothaan equation for new orbitals, we hold $\bm{F}_A$, $\bm{W}[\bm{D}]$, and $\bm{W}[\bm{T}]$ fixed while solving our linear equation. Thus, although the modified GMRES solver is not as efficient as the dense eigenvalue solvers used for HF theory, it remains relatively inexpensive as it does not involve any Fock builds and so does not have to access the two-electron integrals. (\begin{small}\textit{Technical note: in practice, we can speed up the GMRES solver considerably by preconditioning it with a diagonal approximation to the linear transformation that is set to one for $\bm{X}$ elements in the occupied-occupied and virtual-virtual blocks (since these are expected to play little role in the orbital relaxation) and, in the other blocks, replaces $\bm{C}^T\bm{F}_A\bm{C}$ with its diagonal, replaces $\gamma^{(MO)}$ with $\bm{I}_o$, and neglects $\bm{W}[\bm{D}]$ and $\bm{W}[\bm{T}]$ (see Supplementary Material for the explicit form). DIIS is also effective when we take Eq.\ (\ref{eqn:esmf_commutator}) transformed into the AO basis as the error vector and the $\bm{F}_A$, $\bm{W}[\bm{D}]$ and $\bm{W}[\bm{T}]$ matrices as the DIIS parameters. We use both of these accelerations in all calculations.}\end{small}) Only after the linear equation is solved and the orbitals are updated do we rebuild the three mean field operators, and so each overall SCF iteration requires just three Fock builds, which, as they can be done during the same loop over the two-electron integrals, come at a cost that is not much different than HF theory's single Fock build. This arrangement contrasts sharply with the nine Fock builds and two integral loops that are necessary to form the analytic derivative of the energy with respect to $\bm{C}$ that is used in descent-based orbital optimization. \cite{zhao2020esmf} In summary, the ESMF orbitals, like the HF orbitals, can be optimized particularly efficiently via the self-consistent solution of a one-electron mean field equation. Although this exciting result makes clear that the ESMF ansatz really does hew closely enough to the mean field product-state limit for one-electron mathematics to be of use, there are a number of questions we should now address. First, and we will go into more detail on this point in the next paragraph, is the SCF approach actually faster than descent? The answer, at least in simple systems, is a resounding yes. Second, what of the configuration interaction coefficients $\bm{t}$?
At present, we optimize them in a two-step approach, in which we go back and forth between orbital SCF solutions and CIS calculations (taking care to include the new terms that arise for CIS when not in the HF MO basis) until the energy stops changing. In future, more sophisticated approaches that provide approximate coupling between these optimizations may be possible, as has long been true in multi-reference theory. \cite{kreplin2020mcscf} Third, what physical roles can we ascribe to the different mean field operators that appear in the SCF approach to ESMF? The operator $\bm{F}_A$ obviously carries the lion's share of the electron-electron repulsion, as it is the only mean field operator derived from a many-electron density matrix. Indeed, $\bm{W}[\bm{D}]$ and $\bm{W}[\bm{T}]$ represent repulsion from one-electron densities, and so they cannot provide the bulk of the electron-electron repulsion. Thus, we suggest that it is useful to view $\bm{F}_A$ as a good starting point that includes the various repulsions between electrons not involved in the excitation but that gets the repulsions affected by the excitation wrong. $\bm{W}[\bm{D}]$ and $\bm{W}[\bm{T}]$ then act as single-electron-density corrections to this starting point. If one considers the simple case in which we ignore all electrons other than the pair involved in the excitation (e.g.\ consider the HOMO/LUMO excitation in H$_2$), then a close inspection reveals that $\bm{W}[\bm{D}]$ eliminates the spurious HOMO-HOMO repulsion that is present in the first trace of the energy expression, while the $\bm{W}[\bm{T}]$ terms bring the excited electron pair's repulsion energy into alignment with the actual repulsion energy that results from the singlet's equal superposition of two open-shell determinants. \begin{table}[t] \caption{\label{tab:h2o_homo_lumo}Convergence of SCF- and GVP-based ESMF for the HOMO/LUMO excitation of cc-pVDZ H$_2$O. Initial values for $\bm{t}$ and $\bm{C}$ are set to the two-determinant HOMO/LUMO open shell singlet and the RHF orbitals, respectively. For SCF, the two-step method toggled between CIS and SCF calculations, with CIS going first. As the guess is quite good in this system, the GVP optimization set $\mu=0$ right away and so amounted to a BFGS minimization of the energy gradient norm. At various points during each optimization (measured both by the cumulative number of loops over the TEIs and by the wall time) we report the energy error $\Delta E$ compared to the fully converged energy. Both calculations used a single core on a 2015 MacBook Air. } \begin{tabular}{c c c c c c c} \hline\hline \multicolumn{3}{c}{SCF ESMF} & $\quad$ & \multicolumn{3}{c}{GVP ESMF \rule{0pt}{3.2mm}} \\ TEI Loops & Time (s) & $\Delta E$ (a.u.) & & TEI Loops & Time (s) & $\Delta E$ (a.u.) \\ \hline 10 & 0.007 & 0.062605 & & 76 & 0.397 & 0.003761 \rule{0pt}{3.2mm} \\ 20 & 0.025 & 0.000032 & & 150 & 0.783 & 0.000654 \\ 30 & 0.033 & 0.000004 & & 226 & 1.187 & 0.000184 \\ 40 & 0.054 & 0.000000 & & 300 & 1.579 & 0.000001 \\ \hline\hline \vspace{0.1mm} \end{tabular} \end{table} \begin{table}[t] \caption{\label{tab:orb_timing}Total time in seconds and number of iterations $n_i$ taken for the orbital optimization in the ground state (for RHF) or the excited state (for SCF-based ESMF) to get within 5$\mu$E$_h$ of its fully converged value. The RHF and ESMF methods rely on the same underlying Fock build code, both use DIIS, and both used one core on a 2015 MacBook Air. 
For ESMF, only the orbitals are optimized, with $\bm{t}$ set to the HOMO/LUMO open-shell singlet and the initial guess for $\bm{C}$ set to the RHF orbitals. For RHF, the eigen-orbitals of the one-electron Hamiltonian were used as the initial guess for $\bm{C}$. Times do not include the generation of one- and two-electron AO integrals, which are the same for both methods. } \begin{tabular}{l c c c c c} \hline\hline Molecule \hspace{8mm} & \hspace{1mm} Basis \hspace{1mm} & \hspace{1mm} RHF (s) \hspace{0mm} & \hspace{0mm} $n_i$ \hspace{1mm} & \hspace{2mm} ESMF (s) \hspace{0mm}\rule{0pt}{3.2mm} & \hspace{0mm} $n_i$ \hspace{1mm} \\ \hline water & cc-pVTZ & 0.087 & 8 & 0.185 & 6\rule{0pt}{3.2mm} \\ formaldehyde & cc-pVTZ & 0.424 & 11 & 0.862 & 8 \\ ethylene & cc-pVTZ & 0.903 & 8 & 1.735 & 6 \\ toluene & cc-pVDZ & 4.366 & 19 & 6.835 & 11 \\ \hline\hline \vspace{0.1mm} \end{tabular} \end{table} \section{Results} \label{sec:results} \subsection{Efficiency Comparisons} \label{sec::efficiency} Returning now to the question of practical efficiency, we report in Table \ref{tab:h2o_homo_lumo} the convergence of the energy for the HOMO/LUMO excitation in the water molecule for both SCF-based and GVP descent-based ESMF (note all geometries can be found in the Supplementary Material). Whether one measures by the number of times the expensive two-electron integral (TEI) access must be performed or by the wall time, the two-step SCF approach is dramatically more efficient than GVP-based descent in this case. (The keen-eyed observer will notice that in the SCF case, the TEI loop count and the wall time do not increase at the same rate, which is due to the CIS iterations having many fewer matrix operations to do as compared to SCF in between each access of the TEIs.) If we focus in on just the orbital optimization, as shown in Table \ref{tab:orb_timing}, we find that the SCF approach for ESMF is almost as efficient as ground state HF theory. In practice, of course, we also want to optimize $\bm{t}$, and for now we rely on the two-step approach, as used in Table \ref{tab:h2o_homo_lumo}. While the SCF approach has clear advantages in simple cases, the GVP is still expected to be essential for cases in which the SCF approach may not be stable. For example, without implementing an interior root solver or freezing an open core (and we have not done either), Davidson-based CIS would be problematic for a core excitation. However, as shown in Table \ref{tab:h2o_core}, a combination of an initial SCF optimization of the orbitals followed by a full GVP optimization of $\bm{t}$ and $\bm{C}$ together is quite effective. In this case, the SCF approach brings the energy close to its final value, converging to an energy that is too low by 54 $\mu$E$_h$ (remember, excited states do not have any upper bound guarantee, even when a variational principle like energy stationarity or the GVP is in use). From this excellent starting point, the GVP's combined optimization of $\bm{C}$ and $\bm{t}$ converges quickly to the final energy, needing just ten gradient evaluations to get within 1 $\mu$E$_h$. 
In contrast, if the initial SCF orbital optimization is omitted, the GVP coupled optimization requires hundreds of gradient evaluations (exactly how many depends on the choice for $\omega$ and how $\mu$ is stepped down to zero) \cite{Shea2020gvp} to reach the same level of convergence, and was only able to converge to the correct state at all by setting $\mu$ to 0.5 and $\omega$ 0.08 E$_h$ lower than the final energy for the initial iterations to avoid converging to a higher-energy core excitation. Especially interesting is the fact that, if we move to the aug-cc-pVTZ basis, the ESMF predictions for the two lowest core excitations in H$_2$O are 534.3 and 536.2 eV, which are quite close to the experimental values \cite{schirmer1993} of 534.0 and 535.9 eV and which match the delta between them even more closely. Thus, even in cases where the SCF approach would be difficult to use on its own, it can offer significant benefits in partnership with direct minimization. \begin{table}[t] \caption{\label{tab:h2o_core}Convergence of the energy for the lowest singlet core excited state of H$_2$O in the aug-cc-pVDZ basis. Initial values for $\bm{t}$ and $\bm{C}$ are set to the two-determinant 1s$\rightarrow$LUMO open shell singlet and the RHF orbitals, respectively. An initial SCF optimization converged after 10 iterations (involving one TEI loop each), after which GVP-based BFGS descent (again with $\mu$ set immediately to zero) was started from the SCF result (the GVP requires 2 TEI loops per gradient evaluation). We report the energy error $\Delta E$ compared to the fully converged energy as a function of the cumulative wall time and the cumulative number of TEI loops. The calculation used a single core on a 2015 MacBook Air. } \begin{tabular}{c c r} \hline\hline TEI Loops & \hspace{2mm} Time (s) \hspace{2mm} & $\Delta E$ (a.u.) \rule{0pt}{3.2mm} \\ \hline \multicolumn{3}{l}{Start with SCF: \rule{0pt}{3.2mm}} \\ 5 & 0.163 & 0.008698 \hspace{0.3mm} \\ 10 & 0.267 & -0.000054 \hspace{0.3mm} \\ \multicolumn{3}{l}{Switch to GVP:} \\ 20 & 0.435 & 0.000002 \hspace{0.3mm} \\ 30 & 0.604 & 0.000001 \hspace{0.3mm} \\ \hline\hline \vspace{0.1mm} \end{tabular} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{pycm_orbs.pdf} \caption{ Donor (a) and acceptor (b) orbitals for the lowest charge transfer state in the PYCM molecule as predicted by ESMF. The excited state SCF calculation took just two and a half times as long as the RHF calculation. \label{fig:pycm_orbs} } \end{figure} \subsection{PYCM} \label{sec::pycm} To verify that the benefits of the SCF approach are not confined to smaller molecules, we exhibit its use on a charge transfer state in the PYCM molecule that Subotnik used to demonstrate CIS's bias against charge transfer states. \cite{Subotnik2011} Working in a cc-pVDZ basis for the heavy atoms and 6-31G for hydrogen, we consider the lowest charge transfer state, for which we provide iteration-by-iteration convergence details in the Supplementary Material. In Figure \ref{fig:pycm_orbs}, we plot the ESMF prediction for the donor and acceptor orbitals, which in this case are just the relaxed HOMO and LUMO orbitals as the $\bm{t}$ matrix coefficients are strongly dominated by the HOMO$\rightarrow$LUMO transition. We see that this state transfers charge from the $\pi$ bonding orbital on the methylated ethylene moiety to the $\pi^{*}$ orbital on the cyano-substituted ethylene moiety. 
Aside from the efficiency of the SCF solver in this case (it takes just two-and-a-half times as long as RHF when using the same Fock build code) it is interesting to compare the prediction against that of CIS, which is the analogous theory when orbital relaxation is ignored. CIS predicts a 7.30 eV excitation energy for the lowest state in which this charge transfer transition plays a significant role, whereas ESMF predicts a 4.82 eV excitation energy. This multiple-eV energy lowering after orbital relaxation serves as a stark reminder of how important these relaxations are for charge transfer states. \section{Conclusion} \label{sec:conclusion} In conclusion, orbital optimization in ESMF theory can be formulated in terms of a one-electron equation in which mean field operators provide electron-electron repulsion and which is brought to self-consistency through an efficient iterative process that closely mirrors ground state HF theory. In particular, it is possible to formulate the excited state many-electron energy in terms of four traces between density matrices and mean field operators, and the central commutator condition likewise contains four commutators between these density matrices and their partner mean field operators. In a sense, this is a straightforward extension of the HF case, where only one trace and one commutator are needed. As has long been true for Slater determinants, the SCF approach to the ESMF orbitals appears to be significantly more efficient than quasi-Newton methods, at least in cases where the SCF iteration converges stably to the desired state. Looking forward, it will be interesting to see if, as in the ground state case, the SCF approach admits Kohn-Sham-style density functionals and whether the optimization of the excitation coefficients can be more tightly coupled to the optimization of the orbitals. $\vspace{1mm}$ \noindent {\small \textbf{SUPPLEMENTARY MATERIAL}} $\vspace{1mm}$ See supplementary material for additional mathematical details, additional calculation details, and molecular geometries. $\vspace{1mm}$ \noindent {\small \textbf{ACKNOWLEDGEMENTS}} $\vspace{1mm}$ This work was supported by the Early Career Research Program of the Office of Science, Office of Basic Energy Sciences, the U.S. Department of Energy, grant No.\ {DE-SC0017869}. While final timing calculations were carried out on a laptop, many preliminary calculations with earlier pilot code were performed using the Berkeley Research Computing Savio cluster. $\vspace{1mm}$ \noindent \textit{Data Availability Statement} --- The data that supports the findings of this study are available within the article and its supplementary material.
\section{Introduction} Since Darwin's and Mendel's works on the theory of evolution by natural selection~\cite{darwin2004origin} and on the laws of biological inheritance~\cite{mendel1866versuche}, respectively, evolutionary theories have focused mainly on the role of selection acting on randomly-generated genetic material in the origination of \emph{phenotypic diversification}---and finally \emph{speciation}. This way of thinking---summarised by the so-called ``Neo-Darwinian Synthesis'' or ``Modern Synthesis''---fostered the idea that the responsibility for the generation and the subsequent establishment of ``evolutionary novelty'' was the prerogative of genetic material. So, the differential survival and reproduction success of biological organisms has been ascribed to the genotype. Parallel to the development of the ``Modern Synthesis'', Mayr~\cite{Mayr2091} argued that: ``[...] it is the phenotype which is the part of the individual that is \emph{visible} to selection.'' Mayr's argument, together with Waddington's work on the \emph{Epigenetic Landscape}~\cite{waddington1942epigenotype}, has paved the way for the formulation of a theoretical framework according to which the phenotype---and not the genotype---, the environment~\footnote{With the term \emph{environment} here we refer---without losing generality---to all external perturbations by which a subject can be influenced; some of these could be the external world itself, organisms of the same or other species, etc.}, and above all the development process play a primary role in the origin of novelty from an evolutionary point of view. The epigenetic landscape metaphor, which finds a formal basis in dynamical systems theory, stresses the concept that there is no trivial deterministic mapping between genotype and phenotype~\cite{huang2012molecular}. It is the dynamics of the complex network of interactions among genes, and between genes and the environment, that determines the stable expression patterns and so ultimately affects phenotype determination. Therefore, the ensemble of dynamics that can be generated by the genes composing the organism's genetic code represents a source of diversification that can explain the birth of new phenotypes and, consequently, their affirmation on the evolutionary scale. It is important here to emphasise the role of the environment in constraining and shaping these actual dynamics. In biology, the capacity of a genotype to produce different phenotypes depending on the environment in which it is located is defined as \emph{phenotypic plasticity}~\cite{PFENNIG2010459,kelly2011phenotypic}, or \emph{developmental plasticity} if the differences emerge during development~\cite{fusco-minelli-2010,gilbert2016developmental}. The specific dynamics that shapes an organism's phenotype during its development is indeed the response to various influences, among which we inevitably find the external environment, other organisms and noise~\cite{longo_how_2018}. More generally, these external agents influence the process of regulation: they might destabilise previously reached (meta)stable patterns of gene expression and induce a reconfiguration of the network dynamics, able to accommodate and possibly give appropriate responses to the new state of the external environment. In other words, they stimulate the process of construction of a new internal model of the external world.
Biologists call this process \emph{developmental recombination}; the works~\cite{West-Eberhard6543,PFENNIG2010459} present reasons and evidence why this process is held responsible for the origin of differences between species. Noteworthy is the hypothesis that phenotypic plasticity would allow the crossing of the valleys present in the fitness landscape, a crossing that would be precluded to evolution-by-mutation since the valleys' phenotypes would be selectively disadvantageous. Therefore, even if mutations (random or not) contribute to the creation of diversification by modifying the topology of gene regulatory networks, and therefore the constraints imposed on their dynamics, they are not a necessary condition for phenotypic plasticity and, in light of the previous discussion, play the role of supporting actors. They are, however, implicated in the \emph{genetic accommodation} process~\cite{West-Eberhard6543}, that is, the process following the selection of a phenotypic variant with a genetic component, or the case in which a reorganisation of the genotype allows individuals of subsequent generations to reach the same phenotype at a lower cost~\cite{bateson_gluckman_2011}, in terms of time, resources, etc. From a cybernetics point of view~\cite{wiener1948cybernetics,ashby-cybernetics}, this capability of producing an internal model of the external world, which is provided by phenotypic plasticity, is of great importance. In abstract terms, this process makes it possible for an organism to compress the wealth of information coming from the external world into an internal representation that values only the pieces of information relevant for the organism's survival; on the basis of this internal model the organism acts so as to achieve its tasks, \textit{in primis} to attain homeostasis, i.e.\ maintaining its \textit{essential variables} within physiological ranges~\cite{design-for-a-brain}. We can then state that phenotypic plasticity is not only a vital property for living organisms, but may also be of great value for artificial ones. Therefore, a fundamental question arises as to what the \emph{generic properties} are that allow organisms to exhibit the phenotypic plasticity observed during development and so attain an effective level of adaptivity. If these properties are found, on the one hand they may provide us with insights into the mechanisms underlying the adaptive behaviours of organisms during their development; on the other hand, given the reported relevance that the development process may have for evolutionary-scale changes~\cite{arthur_2004}, they can be the key to understanding the onset of the differences, and at the same time the common traits, between species~\footnote{Evolutionary Developmental Biology (evo-devo) was born around the end of the twentieth century with the intention of answering these and other related questions. In particular, it focuses on the role of the developmental process, and the effects of its alteration, on evolutionary changes~\cite{Hall2012,WallaceEvoDevo}.}. An approach based on generic properties provides an alternative to comparative studies between different species, which, although they have led to great results (see above all the discoveries of the \emph{homeobox} and the \emph{Pax6} gene~\cite{Gehring2007,WallaceEvoDevo,Hall2012,Xu383}), have the limitation of being highly costly and not easily generalisable.
In addition, general properties supporting phenotypic plasticity may provide an effective design principle for artificial systems capable of adapting. To this aim, we believe it is necessary to start from the known and most relevant properties of organisms and check whether they can also provide plausible hypotheses for the construction of general principles that can first explain phenotypic plasticity and then hopefully bring us to link development and evolution. We believe that one of these principles can be found in \textit{criticality}. A long-standing conjecture in complex systems science---the \emph{criticality hypothesis}---emphasises the optimal balance between robustness and adaptiveness of those systems that are in a dynamical regime between order and chaos~\cite{Kauf93,Kau1996}.\footnote{A recent account of dynamical criticality can be found in~\cite{roli2018dynamical,munoz2018colloquium}.} Theoretical studies on the properties of such systems and a body of empirical evidence led to a reshaping of this conjecture into ``life exists at the edge of chaos''~\cite{Lan1990,Pac1988}, or, in the field of information processing, into ``computation at the edge of chaos''~\cite{CruYou1990,Pro2013}. Among the most remarkable experimental studies that have brought evidence for the criticality hypothesis, we focus our attention on those that belong to the field of biology, since we are interested in biological development and evolution. Many papers conclude that biological cells---or more precisely their gene networks---operate in the critical dynamical regime. This has been repeatedly corroborated by different authors and with the help of different techniques, models and working hypotheses. The comparison of time sequences of microarray gene-expression data against data generated by random Boolean network (RBN) models~\cite{kauffman69} led Shmulevich and others~\cite{shmulevich2005eukaryotic} to conclude that the genetic regulatory network of HeLa cells---a eukaryotic cancer cell line---operates in the ordered or critical regime, but not in the chaotic one. In~\cite{DanielsWalkerCriticality}, by exploiting the CellCollective~\cite{helikar2012cell} database of Boolean models of real regulatory networks, the authors showed, by using the \emph{sensitivities} measure~\cite{Sensitivities}, that all the networks examined were critical or near-critical. Further, Serra and Villani showed that the Boolean networks that best fit the knock-out avalanches in the yeast \textit{S.~cerevisiae} are ordered, but very close to the critical boundary. Many noteworthy papers~\cite{beggsCriticality,BeggsBrain,ChialvoNeural} that focus on analyses of the brain, mainly making use of models, brought evidence supporting the view that the brain also works in a critical condition. As an example, evidence concerning models of \textit{C.~elegans}' nervous system activity in free locomotion conditions shows that its brain functioning has some signature of criticality~\cite{cElegansIzquierdo}. At the same time, there is evidence suggesting that organisms operating in a critical condition are the most advantaged, evolutionarily speaking. Aldana et al.~\cite{AldBalKauRes2007} pointed out that a well-known model of biological genetic networks, RBNs (formally introduced in the following), in the critical regime shows the properties of robustness and, in particular, \emph{evolvability} at the same time.
More in detail, they introduced network \emph{mutations} (see experimental details in the original paper) and assessed the degree to which the original attractors (those exhibited before mutations) are retained and, simultaneously, the capacity to give rise to new attractors. Although both ordered and critical networks proved capable of retaining the original attractors with high probability, critical networks were those with the greatest tendency to produce new attractors, and thus \emph{evolvability}. Torres-Sosa and others~\cite{TorresSosa} reached the same conclusion by following a slightly different path. By modelling natural selection acting on Boolean networks as an evolutionary algorithm with mutation and gene duplication operators, they showed that dynamical criticality emerges as a consequence of the attractor landscape evolvability property at the evolutionary level. A remarkable property of critical systems is that they can reliably respond to inputs while being capable of reacting with a wide repertoire of possible actions~\cite{kauffman2000investigations}: this functionality is essential for organisms that must select, filter and compress the information coming from the environment that is relevant for their life. At this point, we wonder whether criticality can foster the emergence of phenotypic plasticity and, if so, whether it can at the same time be responsible for establishing the robustness and adaptivity properties characteristic of organisms produced by both phylogenesis and ontogenesis. In this paper we begin to tackle these fundamental, open questions by investigating whether dynamical criticality can favour phenotypic plasticity. Indeed, if this were the case, not only could we bring further evidence to the ``criticality hypothesis'', but we could also start to shed light on the relationship between phenotypic plasticity, criticality and evolution. \section{Creation of novelty in robotics} In this work, we make use of robotic agents as a proxy to start investigating the questions raised in the previous section. The robotics literature is full of examples in which techniques inspired by the natural world are used to allow robots---or swarms of robots---to perform complex tasks~\cite{bonabeau1999swarm,nolfi-evorobot-book,braccini2017applications}. Dually, we can think of employing artificial agents to represent or mimic natural dynamics and therefore to investigate issues and open questions otherwise impossible to study. Indeed, robot development and design costs are very low (especially considering the possibilities offered by simulation) and are therefore ideal for these analyses. Although this artificial approach has intrinsic limits---its results are not easily generalisable \emph{as they are} to the natural counterpart---it can provide us with new clues, hypotheses or different perspectives that could lead to the formation of new models or theories, besides suggesting specific experiments to be undertaken. Artificial devices have already proven able to give rise to the emergence of diversity and the creation of novelty. In this regard, Gordon Pask in the 1950s conducted a remarkable experiment involving truly evolvable hardware~\cite{pask1958physical,pask1960natural,cariani1993evolve}. Pask built an electrochemical device with emerging sensory abilities.
In particular, the experimental structure he created---composed of electrodes immersed in a ferrous sulphate solution---was able to evolve from scratch the ability to recognise sounds or magnetic fields. In other words, the assembly~\footnote{It can be considered an example of an evolutionary robotic device~\cite{cariani1992some}.} developed its own sensors from scratch and therefore its own \emph{relevance criteria} with respect to the outside world. Inspired by Pask's works, Peter Cariani proposed a classification of the kinds of adaptive behaviours attainable by physical devices~\cite{cariani1992some,Cariani2008EmergenceAC}. He calls \emph{nonadaptive robotic devices} those devices that are not able to modify their internal structure based on their past experiences. \emph{Adaptive computational devices} is instead the category for devices that can change their computational module, if advantageous; they can, however, only improve the mapping between their fixed inputs and outputs. With the term \emph{structurally adaptive devices} he refers to devices capable of constructing new hardware, sensors and effectors for themselves. As Cariani points out, this is analogous to the biological evolution of organs. Through the building of sensors and effectors, and so with the freedom to gather and manipulate the kind of information needed to perform a given task, robotic agents can create new semantic states: \emph{relevance criteria} different from those that the designer may have initially imposed on the robot. Cariani identifies this last category as a necessary condition for the construction of agents with \emph{epistemic autonomy}: in this condition, the agent is completely autonomous also as regards the creation of new perspectives or points of view on itself, on the world in which it is immersed and on the relationship between them. Although they follow a more abstract approach than Pask's pioneering work, attempts to evolve sensors in robotic agents are present in the literature. For example, in the work~\cite{mark_framework_1998}, both the number and size of sensors are evolved in a simulated environment, finding a preliminary correlation between the number of sensors (in this case artificial eyes) and the complexity of the task achieved. A different---and in some ways more general---perspective is pursued by those approaches that deal with the coevolution of agents' bodies and brains~\cite{lund_evolving_1997,Dellaert_1996,bongard_evolving_2003}. Although many of these are inspired mainly by cellular processes and biological development, they are \emph{offline} approaches. There are works that try to apply some elements of evolution in robotics in an online setting~\cite{bredeche2018embodied}, as well as a kind of online epigenetic adaptation~\cite{brawer2017epigenetic}.\footnote{Of course, we do not consider here all the works concerning generic online adaptation of some parameters of the robotic system.} The relevance of development in the design of artificial agents has so far been underestimated; only recently have its prospective role and the properties that can derive from it (phenotypic plasticity above all) begun to emerge in the robotics field~\cite{hunt_phenotypic_2020,jin2011morphogenetic}. The development process in the design of artificial entities, by analogy with its biological counterpart, can be represented by an \emph{online adaptation process}~\footnote{The one we propose in the following sections is just one possible example of it.}.
According to the biological evidence reported above and preliminary experiments conducted in artificial contexts, this mechanism has the potential to bring out novelties in the behaviour of artificial agents. In this context we define \emph{novelty} as the behaviour or outcome reached by the artificial agents that would not have been contemplated by the model the designer has of the robot. Indeed, the robot's actual perceptual experiences and real contingencies give the agent the opportunity to overcome and override the initial designer's constraints and biases. To conclude, development-inspired (online) approaches are viable alternatives, or complementary tools, to techniques for the automatic design of robotic agents. \section{Proof of Concept} In light of the previous discussion, we \emph{(i)} propose an \emph{online} adaptive mechanism capable of giving rise to the observable phenotypic plasticity property typical of biological organisms without requiring mutations; \emph{(ii)} start analysing the conditions under which robotic agents---using the mechanism referred to in the previous point---obtain a measurable advantage. Firstly, we briefly introduce the Boolean network model, since it represents the actual substrate on which our mechanism is based. Boolean networks (BNs) were introduced by Kauffman~\cite{kauffman69} as an abstract model of gene regulatory networks. They are discrete-state and discrete-time non-linear dynamical systems capable of complex dynamics. Formally, a BN is represented by a directed graph with $n$ nodes, each having a variable number $k$ of incoming nodes, a Boolean variable $x_i$, $i = 1,\ldots,n$, and a Boolean function $f_i(x_{i_1},\ldots,x_{i_k})$. They have received much attention not only for their capacity to reproduce important properties of biological cells~\cite{shmulevich2005eukaryotic,roli2018dynamical,serra2006,helikar2012cell,DanielsWalkerCriticality} but also for their capacity to express rich dynamics combined with their relatively compact description, characteristics that make them appealing also for artificial applications. For this reason, so-called Boolean network robotics takes advantage of them by employing BN instances as control software in robots. Some examples of the remarkable results that can be obtained through their employment are reported in the following works~\cite{roli2012preliminary,roli-aiia2015,RoliAttractorLandscape}. The approach we propose---which is grounded in BN-robotics---consists of using a Boolean network as the robot program. Its dynamics, in a way similar to that of gene regulatory networks in biological cells~\cite{braccini2017applications}, determines the behaviour of the robot and ultimately its final phenotype. The word \emph{phenotype} is used in this context with a more generic meaning than its biological counterpart: regardless of the specific physical characteristics of the robot, it identifies the overall observable behaviour achieved by the artificial agent. As illustrated in~\cite{roli2011design}, the first step to take when designing robot control software based on Boolean networks is to choose the coupling between the nodes and the robot actuators and sensors. Usually, this mapping is chosen at design time and stays the same during all the design, simulation and, possibly, real-world application phases.
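To make the BN model just introduced concrete, the following minimal sketch generates a random BN with fixed in-degree $k$ and bias $b$ and applies a synchronous update; it is purely illustrative (the function and variable names are ours and do not refer to the code actually used in our experiments).

\begin{verbatim}
import numpy as np

def random_bn(n, k, bias, rng):
    # Each node gets k distinct input nodes chosen at random and a truth
    # table whose 2^k entries are 1 with probability `bias`.
    inputs = np.array([rng.choice(n, size=k, replace=False) for _ in range(n)])
    tables = (rng.random((n, 2 ** k)) < bias).astype(np.uint8)
    return inputs, tables

def bn_step(state, inputs, tables):
    # Synchronous update: node i reads its k inputs, interprets them as a
    # binary index into its truth table, and takes the stored value.
    idx = np.zeros(len(state), dtype=np.int64)
    for j in range(inputs.shape[1]):
        idx = (idx << 1) | state[inputs[:, j]]
    return tables[np.arange(len(state)), idx]

rng = np.random.default_rng(0)
inputs, tables = random_bn(n=100, k=3, bias=0.21, rng=rng)  # critical-like bias
state = np.zeros(100, dtype=np.uint8)                       # all-zero start
for _ in range(10):
    state = bn_step(state, inputs, tables)
\end{verbatim}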
The mapping itself can be subject to optimisation during the design phase, but once the desired/optimal level of robot performance is reached---according to a defined fitness function---it does not undergo any further variation. These approaches are referred to as offline design methods. With the intention of conferring the property of phenotypic plasticity observed in the development phase of biological organisms (see \emph{(i)}), we propose a novel \emph{online} adaptive mechanism for the design of control software for robots~\footnote{In the present discussion, we will refer only to robots with fixed morphology, although this mechanism finds natural application in self-assembling robots, which can indeed build their own sensors and truly capture their own relevance criteria.}. The BN chosen as control software for the robotic agent is generated once and remains unchanged during the robot's whole life. What distinguishes our approach from past ones is that what changes is the coupling between the BN nodes and the sensors of the agent~\footnote{Although this mechanism is abstract enough to contemplate variations of both sensors and actuators, in this discussion we will consider varying only the former.}. The coupling changes are not externally imposed by the designer: the robot determines which couplings are more suitable for it by continually interacting with the environment in which it is located. The task chosen for our proof of concept is navigation with obstacle avoidance. The robot, equipped with proximity sensors, must therefore try to move in its environment, represented by an arena, and at the same time avoid colliding with the obstacles present in it. This problem can be formally considered as a dynamic classification problem. Indeed, the artificial agent is faced with a problem of classification of time series which are not independent of the agent's behaviour, since they are conditioned by the dynamics belonging to its \emph{sensorimotor loop}~\cite{lungarella2001robots}. Through a designer-defined objective function, we provide a figure of merit assessing the degree of adaptation attained by the robot. This function acts as a selective pressure and guides the robot's adaptation process. It should be considered an abstraction of the rewarding mechanisms (both intrinsic and extrinsic) that characterise adaptation in natural and artificial systems. In Figure~\ref{fig:img_adaptive_sensors_image_pp}, we see a schematic representation of two consecutive steps of the process. \begin{figure}[h!] \centering \includegraphics[width=.9\textwidth]{image_pp.pdf} \caption{Schematic representation of two consecutive steps of the proposed online adaptive mechanism. The topology of the network, the number of sensors, as well as the nodes coupled with them are used only for illustrative purposes and do not reflect the conditions used in our experiments.} \label{fig:img_adaptive_sensors_image_pp} \end{figure} Albeit in a more abstract form, this mechanism takes inspiration from Pask's evolvable and self-organising device. Here, the space of possibilities among which the robot can choose is not open-ended, like the one available in Pask's experiment, but is limited by the possible set of dynamics of the Boolean network, the number of sensors and the coupling combinations between the two. At the same time, it can be considered an artificial counterpart of the adaptive behaviour without mutations present in the development phases of biological organisms.
Indeed, the robot exploits the feedback it receives from the environment and consequently tries to re-organise the \emph{raw genetic material} it owns. In doing so, it does not modify the Boolean network functions or topology, but rather uses the BN's intrinsic information processing capabilities. In addition, our adaptive mechanism resembles a step that takes place in the biological phenomenon of neuroplasticity, or brain plasticity~\cite{neuroplasticity}. In neuroplasticity, the creation of synaptic connections and changes to neurones occur mainly during the development phase. However, the process of refinement of the neural network that starts at birth plays a crucial role. This refinement occurs as a function of the environmental stimuli and feedback the individual receives from its activities and interactions with the environment. The different ensembles of cognitive skills, sensorimotor coordination and, in general, all processes influenced by the brain's activities which an individual develops through experience represent the sets of possible phenotypes. At a very high level of abstraction, our ``scrambling phase''---the phase that changes the coupling between BN nodes and robot sensors---acts as the activity-driven refinement mechanism found in the child's brain. A further investigative idea behind this experiment, expressed in point \emph{(ii)}, is to start finding out what general principles govern the best performing robotic agents, and therefore both promote and take advantage of the phenotypic plasticity characteristic of our adaptive mechanism. Fortunately, the literature relating to Boolean networks provides us with a wide range of both theoretical and experimental results to start from. A natural starting point for an analysis of the differential performances obtained through the use of Boolean network models is that concerning the dynamical regimes in which they operate. So, in the next sections we investigate which Boolean network dynamical regime---ordered, critical or chaotic---provides an advantage for robots equipped with the adaptive mechanism we have just introduced. \subsection{Experimental setting} \begin{figure}[t] \centering \includegraphics[scale=0.5]{footbot.png} \caption{The robot used in the experiments.} \label{fig:footbot} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.5]{arena.png} \caption{The arena used in the experiments.} \label{fig:arena} \end{figure} In our experiments we used a robot model equipped with 24 proximity sensors (evenly placed along its main circumference) and controlled by two motorised wheels (see figure~\ref{fig:footbot}). The robot moves inside a square arena, delimited by walls, with a central box (see figure~\ref{fig:arena}). The goal we want the robot to achieve is to move as fast as possible around the central box without colliding with the walls or the box itself. The robot is controlled by a BN. The coupling between the BN and the robot is as follows: two nodes are randomly chosen and their values are taken to control the two motors, which can be either ON (node with value 1) or OFF (node with value 0) and drive the wheels at constant speed. The sensor readings return a value in $[0,1]$ and are therefore binarised by a simple step function with threshold $\theta$\footnote{In our experiments we set $\theta = 0.1$.}: if the sensor value is greater than $\theta$, then the BN node is set to 1, otherwise it is set to 0.
The 24 sensors are randomly associated with 24 randomly chosen nodes in the network, excluding the output ones. At each network update, the binarised values from the sensors overwrite the current values of the corresponding nodes, so as to provide an external signal to the BN. The adaptive mechanism consists in randomly rewiring $q$ connections between sensors and BN nodes (excluding output nodes, of course). The actual value of $q$ is randomly chosen at each iteration in $\{1,2,\ldots,6\}$. The robot is then run for $T=1200$ steps (corresponding to $120$ seconds of real time); if the current binding enables the robot to perform better, then it is kept, otherwise it is rejected and the previous one is taken as the basis for a new perturbation. We remark that the binding between proximity sensors and BN ``input'' nodes is the only change made to the network: in this way we address the question as to what extent a random BN can indeed provide a sufficient bouquet of behaviours to enable a robot to adapt to a given (minimally cognitive) task. BNs are generated with $n$ nodes, $k=3$ inputs per node and random Boolean functions defined by means of the bias $b$, i.e. $b$ is the probability of assigning a 1 to a truth table entry. In the experiments we tested $n \in \{100,1000\}$ and $b \in \{0.1,0.21,0.5,0.79,0.9\}$. According to~\cite{sole_critical_points}, random BNs with $k=3$ generated with bias equal to 0.1 or 0.9 are likely to be ordered, those with bias equal to 0.5 are likely to be chaotic, and biases equal to 0.21 or 0.79 characterise criticality.\footnote{Along the critical line, $k$ and $b$ are linked by the relation $k = \dfrac{1}{2b(1-b)}$.} Only the BN nodes controlling the wheels have their functions randomly chosen with bias $0.5$; this is to avoid naively conditioning the behaviour of the robot, which would tend to be always moving (resp. resting) for high biases (resp. low biases). This choice has, in any case, a negligible effect on the overall dynamical regime of the network. The performance is evaluated by an objective function that is accumulated along the robot execution steps and then normalised. The function is defined as follows: \begin{center} \begin{math} F = (1-p_{max}) \; (1-|v_l-v_r|) \; \frac{(v_l+v_r)}{2} \end{math} \end{center} \noindent where $p_{max}$ is the maximal value returned among the proximity sensors, and $v_l$ and $v_r$ are the binarised values used to control the left and right motor, respectively. The intuition behind the function is to favour fast and as-straight-as-possible trajectories far from the obstacles~\cite{nolfi-evorobot-book}.
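For illustration, the core of the adaptation loop can be sketched as follows. The actual experiments use an ARGoS/Lua controller, so this Python fragment only mirrors the logic; in particular, the \texttt{evaluate} callback, which stands in for a full trial of $T$ steps, and all names are placeholders of ours.

\begin{verbatim}
import numpy as np

THETA = 0.1          # binarisation threshold for the proximity readings
N_SENSORS = 24

def binarise(readings, theta=THETA):
    # Proximity readings lie in [0, 1]; values above theta become 1.
    return (np.asarray(readings) > theta).astype(np.uint8)

def objective_step(p_max, v_l, v_r):
    # Per-step reward F = (1 - p_max) * (1 - |v_l - v_r|) * (v_l + v_r) / 2.
    return (1.0 - p_max) * (1.0 - abs(v_l - v_r)) * (v_l + v_r) / 2.0

def adapt(evaluate, coupling, candidate_nodes, n_iters, rng):
    # Greedy adaptation of the sensor-to-node coupling: rewire q sensors,
    # keep the new binding only if the trial score improves.
    best = evaluate(coupling)
    for _ in range(n_iters):
        trial = coupling.copy()
        q = rng.integers(1, 7)                          # q in {1, ..., 6}
        for s in rng.choice(N_SENSORS, size=q, replace=False):
            trial[s] = rng.choice(candidate_nodes)      # output nodes excluded
        score = evaluate(trial)                         # one trial of T steps
        if score > best:
            coupling, best = trial, score
    return coupling, best
\end{verbatim}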
Experiments are run in simulation with ARGoS~\cite{pinciroli2012-argos}.\footnote{The controller has been implemented in Lua and is available from the authors upon request; the raw data of the results are available as well.} \subsection{Results} \begin{figure}[t] \centering \includegraphics[scale=0.38]{boxplot-100_title.pdf} \includegraphics[scale=0.38]{boxplot-1000_title.pdf} \caption{Boxplots summarising the performance as a function of BN bias for BNs with $n=100$ and $n=1000$.} \label{fig:xor} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.38]{boxplot-100-dual_title.pdf} \includegraphics[scale=0.38]{boxplot-1000-dual_title.pdf} \caption{Boxplots summarising the performance as a function of BN bias for BNs with $n=100$ and $n=1000$ for robots controlled by BNs with a dual encoding (i.e., 0 denotes that an obstacle is detected).} \label{fig:xor-dual} \end{figure} We ran 1000 random replicas for each configuration of BN parameters and collected statistics on the best performance attained after a trial of $1.44 \times 10^4$ seconds (corresponding to $1.44 \times 10^5$ steps in total). In order to avoid variance due to the initialisation of the network, all nodes are initially set to 0. Since the evaluation function cannot be maximal across the whole run, as the robot must in any case turn to remain in the arena and avoid the obstacles, values of $F$ greater than 0.7 correspond to a good performance. As we can observe in figure~\ref{fig:xor}, despite the simple adaptation mechanism, a large fraction of BNs attains a good performance. Notably, critical networks attain the best performance---this result is striking for large BNs ($n=1000$). In particular, for $n=100$ the results achieved with $b=0.21$ are significantly better (Wilcoxon test, with $\alpha=0.05$) than all the other cases, with the exception of $b=0.5$, for which we could not reject the null hypothesis. As for $n=1000$, the case with $b=0.21$ is significantly better than all the other ones. We observe, however, that just one of the two bias values corresponding to the critical regime corresponds to a good performance. The reason is that in our experiment the symmetry between 0 and 1 is broken, because a 1 means that an obstacle is detected. To test the robot in the dual condition, we ran the same experiments with a negative convention on the values (if the obstacle is near, then the node is set to 0; similarly, the wheels are activated if the corresponding output node state is 0\footnote{We kept this dual condition uniformly across all the choices, even if, the bias of output nodes being 0.5, the encoding has no effect on average on the wheels.}). As expected, the results (see figure~\ref{fig:xor-dual}) are perfectly specular to the previous ones (and the same results of the statistical test hold). \section{Discussion}\label{discussion} The picture emerging from our experiments is neat: one bias value characterises the best overall performance and this value is one of the two along the critical line. The reason for the asymmetry between the two critical bias values has to be ascribed to the symmetry breaking introduced by the binarisation of the sensor values. In any case, the remarkable observation is that random BNs generated with a bias corresponding to the critical regime adapt better than the other kinds of BNs.
Since the adaptive mechanism only acts on the mapping between sensors and input nodes, the dynamical regime of the BNs is preserved; therefore, we have further evidence that critical BNs achieve the best performance in discriminating the external signals. One might ask to what extent the adaptive mechanism we have implemented can be said to be a case of phenotypic plasticity. To answer this question we first observe that, in our setting, adaptation involves only the way external information is filtered by the robot, and so it concerns the sensing module of the system; second, this adaptation takes place during the ``life'' of the individual and is based on a feedback that rewards specific behaviours (i.e. those favouring wandering with collision avoidance), without changing the actual ``genetic'' structure. In other words, our mechanism mimics a kind of sensory development tailored to the specific environment: in general, the robot can be coupled with the environment in a huge number of possible combinations, each constraining the system to express a particular behaviour (phenotype); the mapping between sensor readings and network nodes is the result of the embodied adaptation of the sensory-motor loop and manifests one particular phenotype, emerged from the interaction between the robot and the environment. \section{Conclusion}\label{conclusions} In this work we have shown that robots controlled by critical BNs subject to sensor adaptation achieve the highest level of performance in wandering behaviour with collision avoidance. The sensor adaptation mechanism used in this work consists in varying the coupling between robot sensors and BN nodes---whose values are then set by the binarised readings of the associated sensors. Other possible adaptive mechanisms can be chosen, e.g. varying the coupling between BN nodes and robot actuators, and they can also be combined; in general, structural modifications of the BN are also possible, such as those acting on the Boolean functions, the network topology and also the network size. These adaptive mechanisms are the subject of ongoing work, and preliminary results suggest that sensor and actuator adaptation mechanisms are considerably more effective than structural ones, and that critical BNs again attain superior performance. As a next step, we plan to investigate the relation between the criticality of the controlling BNs, their performance and the maximisation of some information-theoretic measures, such as predictive information~\cite{ay2008predictive}, integrated information~\cite{edlund2011integrated} and transfer entropy~\cite{lizier2008information}. Besides providing evidence for the \textit{criticality hypothesis}, the results we have presented make it possible to speculate further: criticality may be a property that enables phenotypic plasticity---at least as far as sensory adaptation is concerned. We believe that this outcome provides a motivation for deeper investigations, which may be primarily conducted in simulation or in any case with artificial systems. Nevertheless, we also envisage the possibility of devising wet experiments, in which the dynamical regime of an organism is externally controlled and its ability to exhibit phenotypic plasticity can be estimated. In addition, a mechanism like the one we have introduced may be an effective tool for tuning artificial systems to the specific environment in which they have to operate.
As a futuristic application, we imagine the construction of miniaturised robots that can accomplish missions precluded to humans, such as being inoculated into higher organisms to repair them, or recovering polluted environments. In fact, recent technological advances have made it possible to build incredibly small robots, down to the size of tens of nanometres. The current smallest robots---built from biological matter---can perform only a few predetermined actions,\footnote{See, e.g. the recent prominent case of Xenobots~\cite{xenobots}.} and therefore cannot attain the level of adaptivity and robustness needed for a complex mission. On the other hand, Artificial Intelligence software has recently made tremendous advances and has been proven capable of learning and accomplishing difficult tasks with a high degree of reliability. This software, however, cannot be run on such tiny robots. A viable way of filling this gap is provided by control programs based on unconventional computation, such as those derived from cell dynamics models, where phenotypic plasticity may play an important role.
\section{Introduction} Over the last two decades, fast multipole methods (FMMs) and related hierarchical fast algorithms have become widespread for computing $N$-body interactions in computational chemistry, astrophysics, acoustics, fluid dynamics, and electromagnetics. In the time-harmonic, acoustic setting, a typical computation of interest is the evaluation of \begin{equation} F_m = \sum_{\substack{ n=1 \\ n\neq m}}^N \sigma_n \, G_k(\bx_m,\bx_n) \label{fmmpt} \end{equation} where \begin{equation} \label{eq:greenfunhelm} G_k(\bx,\by) = \frac{e^{ik|\bx-\by|}}{4\pi|\bx-\by|} \end{equation} is the free-space Green's function for the Helmholtz equation \[ \Delta u + k^2 u = 0. \] Direct calculation of~\cref{fmmpt} requires~$\mathcal O(N^2)$ operations, while the FMM requires~$\mathcal O(N)$ work in the low frequency regime~\cite{greengard_huang_etc} and~$\mathcal O(N \log N)$ work in the high frequency regime~\cite{wideband3d}. When solving boundary value problems for partial differential equations (PDEs) in three dimensions, such sums arise in the discretization of layer potentials defined on a surface~$\Gamma$. These potentials take the form: \begin{equation} u(\bx) = \int_\Gamma K(\bx,\bx') \, \sigma(\bx') \, da(\bx'). \label{layerpotdef} \end{equation} In \eqref{layerpotdef}, $K(\bx,\bx')$ is a Green's function for the PDE of interest, such as~\eqref{eq:greenfunhelm} or one of its directional derivatives. As a result, the governing equation is automatically satisfied, and it remains only to enforce the desired boundary condition. With a suitable choice for the kernel $K(\bx,\bx')$, this often leads to a Fredholm integral equation of the form \begin{equation} \sigma(\bx) + \int_\Gamma K(\bx,\bx') \, \sigma(\bx') \, da(\bx') = f(\bx), \qquad \text{for } \bx \in \Gamma. \label{fredholmeq} \end{equation} As we shall see below, this can be discretized with high-order accuracy using a suitable Nystr\"om method \cite{Atkinson95,atkinson_1997} \begin{equation} \sigma_i + w_{ii} \sigma_i + \sum_{j\neq i} w_{ij} \, K(\bx_i,\bx_j) \, \sigma_j \, = f(\bx_i). \end{equation} Here, $\bx_i$ and $w_{ij}$ are the quadrature nodes and weights, respectively, while $\sigma_i$ is an approximation to the true value~$\sigma(\bx_i)$. If the quadrature weights~$w_{ij}$ did not depend on the target location, i.e.~$w_{ij} = w_j$, then the above sum is a standard $N$-body calculation of the form~\eqref{fmmpt}. Unfortunately, when the integral equation comes from a layer potential corresponding to an elliptic PDE, the kernel~$K$ is typically singular or weakly singular so that simple high-order rules for smooth functions fail. Assuming the surface~$\Gamma$ is defined as the union of many smooth patches~$\Gamma_j$ (each with its own parameterization), high-order quadrature schemes require an analysis of the distance of the \emph{target} $\bx_i$ from each {\em patch}. More precisely, for a given target location~$\bx_i$ on the boundary, the integral in \eqref{layerpotdef} or \eqref{fredholmeq} can be split into three pieces: a self-interaction integral, a near field integral, and a far field integral: \begin{multline} \int_\Gamma K(\bx_i,\bx') \, \sigma(\bx') \, da(\bx') = \int_{\text{Self}(\bx_i)} K(\bx_i,\bx' ) \, \sigma(\bx') \, da(\bx') \, + \\ \int_{\text{Near}(\bx_i)} K(\bx_i,\bx' ) \, \sigma(\bx') \, da(\bx') \, + \int_{\text{Far}(\bx_i)} K(\bx_i,\bx' ) \, \sigma(\bx') \, da(\bx'). \end{multline} This splitting into target-dependent regions is essential for maintaining high-order accuracy. 
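Before defining these regions precisely, we note for reference that the unaccelerated evaluation of the point sum~\cref{fmmpt} is the following $\mathcal O(N^2)$ computation, which the FMM is designed to replace; the sketch below is purely illustrative and the names are ours.

\begin{verbatim}
import numpy as np

def helmholtz_direct(x, sigma, k):
    # x: (N, 3) points, sigma: (N,) complex strengths, k: wavenumber.
    # Returns F_m = sum_{n != m} sigma_n exp(ik|x_m - x_n|)/(4 pi |x_m - x_n|).
    diff = x[:, None, :] - x[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(r, 1.0)                  # placeholder; self term removed below
    G = np.exp(1j * k * r) / (4.0 * np.pi * r)
    np.fill_diagonal(G, 0.0)                  # exclude the n = m term
    return G @ sigma
\end{verbatim}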
The region $\text{Self}(\bx_i)$ is simply the patch on which $\bx_i$ itself lies. The integral over this patch involves a singular integrand (due to the kernel~$K$). The \emph{Near} field is defined precisely in Section~\ref{sec:local}, but consists of patches close enough to $\bx_i$ such that the integrand is \emph{nearly singular} even though it is formally smooth. The \emph{Far} region consists of all other patches, sufficiently far from~$\bx_i$ such that high-order quadratures for smooth functions can be applied. \begin{definition} \label{farfielddef} Suppose that $\bx_i$ is in the far field of a patch $\Gamma_m$, and that \begin{equation} \sum_{j=1}^M w_{j} \, K(\bx_i,\bs_j) \, \sigma_j \, \approx \int_{\Gamma_m} K(\bx_i,\bx' ) \, \sigma(\bx') \, da(\bx') \end{equation} to the desired precision. Then $\bs_j$ and $w_{j}$ will be referred to as the {\em far field} quadrature nodes and weights. Note that these are independent of $\bx_i$. \end{definition} A related task in the solution process is evaluating the computed solution $u(\bx)$ at target locations~$\bx$ which are off-surface, but possibly arbitrarily close to the surface. In this case as well, while the integrands are formally smooth, evaluating the integral presents a similar challenge owing to the nearly singular behavior of the integrand. The evaluation of the potential can still be split into two pieces in a similar manner: a near field integral and a far field integral \begin{multline} \int_\Gamma K(\bx,\bx') \, \sigma(\bx') \, da(\bx') = \int_{\text{Near}(\bx)} K(\bx,\bx' ) \, \sigma(\bx') \, da(\bx') \\ + \int_{\text{Far}(\bx)} K(\bx,\bx' ) \, \sigma(\bx') \, da(\bx'). \end{multline} The strategy outlined above can be used for evaluating the contributions of the near and far regions for targets off-surface as well. While methods exist for the Self and Near calculations, the use of a fast algorithm such as the FMM requires coupling these somewhat complicated quadrature schemes to the Cartesian oct-tree data structures that divide up space into a hierarchy of regular, adaptively refined cubes. Unfortunately, the surface patches~$\Gamma_m$ of the domain boundary~$\Gamma$ may be of vastly different sizes and do not, in general, conform to a spatial subdivision strategy based on the density of quadrature nodes as points in $\mathbb{R}^3$. That is, many patches (curvilinear triangles) $\Gamma_j$ are likely to cross leaf node boundaries in the oct-tree structure. If this were not the case, then far field interactions within the FMM could be handled by computing multipole expansions from entire patches, followed by direct calculation of the Self and Near quadratures (analogous to the direct, near neighbor calculations in a point-based FMM). Thus, one of the issues we address here concerns modifications of the FMM so that speed is conserved for far field interactions, but in a manner where the overhead for near field quadrature corrections is modest, even when the surface patches are nonuniform. Furthermore, while we will restrict our attention here to Nystr\"om-style discretizations, the same concerns must be addressed for collocation and Galerkin-type methods. Similar issues arise when coupling adaptive mesh refinement (AMR) data structures to complicated geometries using Cartesian cut-cell methods~\cite{meshgenabm, Bell1994,johanssencolella}. In the present paper, we develop an efficient algorithm which allows for the straightforward coupling of adaptive FMM data structures with locally corrected quadrature schemes. 
Our goal is to achieve $\mathcal O(N)$ or $\mathcal O (N \log N)$ performance for surfaces with $O(N)$ discretizations points, depending on whether one is in the low or high frequency regime, respectively. Moreover, we would like the constant implicit in this notation to be as close to the performance of point-based FMMs as possible. We will concentrate on the use of generalized Gaussian quadrature rules~\cite{bremer_2012c,bremer_2013,bremer-2015,bremer} for the self interactions on curvilinear triangles, and adaptive integration for the Near region (nearly singular interactions). It is, perhaps, surprising that adaptive integration on surface patches can be competitive with other schemes such as Quadrature By Expansion (QBX), singularity subtraction, or coordinate transformations \cite{bruno2001fast,bruno_garza_2020,malhotra19,erichsen1998quadrature,Siegel2018ALT,Wala2018,Wala2020,ying}. The key is that we have developed a careful, precision and geometry-dependent hierarchy of interpolators on each patch, after which the adaptive integration step is inexpensive when amortized over all relevant targets. As a side-effect, our scheme also provides rapid access to entries of the fully discretized system matrix which is essential for fast direct solvers. \begin{remark} Generalized Gaussian quadrature was already coupled to an FMM in \cite{bremer_2012c}, but the question of how to design a robust algorithm that is insensitive to large variation in triangle dimensions was not directly addressed. In some sense, the present paper is devoted to two separate issues raised in \cite{bremer_2012c}: the first is to accelerate adaptive quadrature itself, and the second is to describe an FMM implementation that works for multi-scale discretizations. \end{remark} \begin{remark} It is worth noting that most locally corrected quadrature schemes, such as Duffy transformations \cite{Duffy}, are designed for a target that is mapped to the origin of a local coordinate system or the vertex of a triangular patch and, hence, are suitable only for self interactions as defined above. Near interactions are not addressed. An exception is Quadrature by Expansion (QBX) which provides a systematic, uniform procedure for computing layer potentials using only smooth quadratures and extrapolation \cite{klockner_2013,Siegel2018ALT,Wala2018}. These have been successfully coupled to FMMs in \cite{Wala2018,Wala2020}. Many aspects of the FMM modifications described here can be used in conjunction with QBX instead. Another exception is Erichsen-Sauter rules \cite{erichsen1998quadrature}, which do include schemes for adjacent panels, but appear to be best suited for modest accuracy. \end{remark} The paper is organized as follows: in Section~\ref{sec:prelim}, some basic facts regarding polynomial approximation and integration on triangles are presented. In Section~\ref{sec:surface}, we describe the classical boundary integral equation for acoustic scattering from a \emph{sound-soft} boundary, governed by the Helmholtz equation, as well as discretization and integration methods for curvilinear surfaces. Section~\ref{sec:local} provides the algorithmic details involved in locally corrected quadrature schemes. The coupling of these quadrature schemes to FMMs is presented in Section~\ref{sec:fmmcoupling}, and numerical examples demonstrating the performance of the scheme are presented in Section~\ref{sec:examples}. 
Finally, in Section~\ref{sec:conclusions}, we discuss avenues for further research, and the application of our scheme to fast direct solvers. \section{Interpolation and integration on triangles} \label{sec:prelim} For the sake of simplicity, we assume that we are given a surface triangulation represented as a collection of charts $\bX^j = \bX^j(u,v)$, which map the standard right triangle \begin{equation} T_{0} = \{ (u,v): u\geq 0\,, v\geq 0 \,, u+v \leq 1 \} \subset \bbR^{2} \label{eq:stdsimplex} \end{equation} to the surface patch $\Gamma_j$. All discretization and integration is done over $T_{0}$, incorporating the mapping function $\bX^j$ and its derivatives as needed. In this section, we summarize the basic polynomial interpolation and integration rules we will use for smooth functions $f: T_{0} \to \bbR$. A useful spectral basis is given by the orthogonal polynomials on $T_{0}$, known as Koornwinder polynomials~\cite{koornwinder_1975}. They are described analytically by the formula: \begin{equation} K_{nm}(u,v) = c_{nm} \, (1-v)^{m} \, P_{n-m}^{(0,2m+1)}(1-2v) \, P_{m}\left( \frac{2u+v-1}{1-v} \right) , \qquad m\leq n , \end{equation} where~$P_{n}^{(a,b)}$ is the Jacobi polynomial of degree~$n$ with parameters~$(a,b)$, $P_{m}$~is the Legendre polynomial of degree~$m$, and~$c_{nm}$ is a normalization constant such that \begin{equation} \int_{T_{0}} \vert K_{nm}(u,v) \vert^2 \, du \, dv = 1. \end{equation} For convenience, our definition is slightly different from that in~\cite{koornwinder_1975}. It is easy to see that there are~$n_{p} = p(p+1)/2$ Koornwinder polynomials of total degree less than~$p$. By a straightforward change of variables, and using the orthogonality relationships for Legendre and Jacobi polynomials, it is easy to show that \begin{equation} \int_{T_0} K_{nm}(u,v) \, K_{n'm'}(u,v) \, du \, dv = 0, \qquad \text{for } (n,m)\neq (n',m'). \end{equation} The Koornwinder polynomials form a complete basis for~$L^2(T_0)$, and can easily be used to approximate smooth functions on~$T_0$ to arbitrary precision. \subsection{Polynomial approximation and integration} \label{sec:approx} As in standard spectral approximation methods for functions defined on intervals or tensor products of intervals \cite{trefethen2013approx}, smooth functions~$f$ defined on~$T_0$ can be interpolated, approximated, and integrated using a Koornwinder polynomial basis. To this end, suppose that~$f$ is defined by a $p$th-order Koornwinder expansion with coefficients $c_{nm}$: \begin{equation} f(u,v) = \sum_{n=0}^{p-1} \sum_{m=0}^{n} c_{nm} \, K_{nm}(u,v). \end{equation} Then, the square matrix~$\matrixsym{U}$ that maps the~$n_p$ coefficients in the above expansion to values of~$f$ at a selection of~$n_p$ interpolation nodes, denoted by~$(u_j,v_j) \in T_0$, has elements \begin{equation} \elem{U}_{nm,j} = K_{nm}(u_j,v_j) \label{eq:defumat}. \end{equation} Let the matrix~$\matrixsym{V} = \matrixsym{U}^{-1}$. Then~$\matrixsym{V}$ maps values of~$f$ at the interpolation nodes~$(u_j,v_j)$ to coefficients in a Koornwinder polynomial expansion. Suppose now that $f:T_{0}\to \bbR$ is an arbitrary smooth function (not necessarily a polynomial) and let the values of~$f$ at the interpolation points be denoted by~$f_{i} = f(u_{i},v_{i})$. Then, a $p$th-order approximation to $f$ is given by \begin{equation} f(u,v) \approx \sum_{n=0}^{p-1}\sum_{m=0}^{n} c_{nm} \, K_{nm}(u,v) , \end{equation} where \begin{equation} \label{eq:v2c} c_{nm} = \sum_{j=1}^{n_{p}} \elem{V}_{(nm),j} \, f_{j} .
\end{equation} We define the \emph{conditioning of the interpolation procedure} as the condition number of the matrices~$\matrixsym{U}$ or $\matrixsym{V}$. Much like interpolation operators on the interval, these matrices are not well-conditioned for arbitrary selections of nodes~$(u_j,v_j)$, as we will briefly discuss in the next two sections. An $n$-point quadrature rule for computing the integral of a function~$f$ on~$T_0$ is a collection of nodes and weights, $(u_{i},v_{i})$, $w_{i}$, $i=1,2,\ldots n$, such that \begin{equation} \begin{aligned} \int_{T_{0}} f(u,v) \, du \, dv &= \int_{0}^{1} \int_{0}^{1-u} f(u,v) \, du \,dv \\ &\approx \sum_{i=1}^{n} w_{i} \, f_{i}, \end{aligned} \end{equation} where $f_{i} = f(u_{i},v_{i})$ is the value of $f$ at the $i$th quadrature node. The accuracy of such a quadrature rule is very dependent on the choice of the quadrature nodes; if the rule is to be exact for a selection of~$n$ functions, then the values of~$w_i$ are determined wholly by the selection of~$(u_i,v_i)$. Since the node selection provides additional degrees of freedom, Gaussian-type quadrature rules are possible. On the triangle, a quadrature rule would be perfectly Gaussian if it integrated~$3n$ functions exactly, since there are~$3n$ parameters (the two coordinates of the nodes, and the weights). In one dimension, it is well-known that choosing the nodes as the roots of a suitable orthogonal polynomial leads to a perfect Gaussian rule \cite{trefethen2013approx}. In two dimensions, such perfect rules do not exist, but approximately Gaussian quadrature rules can be constructed. \subsection{High-order quadrature rules on the simplex~$T_0$} First described in~2010~\cite{xiao2010numerical}, what we will refer to as Xiao-Gimbutas quadratures are a set of Gaussian-like rules obtained through the solution of a nonlinear least-squares problem using Newton's method. The resulting nodes are contained in the interior of~$T_0$, and all the weights are positive. Various kinds of symmetry can also be specified. For a given $p>0$, these rules are designed to use the minimum number of nodes with positive weights so that the resulting quadrature rule is \emph{exact} for all polynomials of total degree~$<p$. As noted above, there are~$n_p = p(p+1)/2$ such polynomials. While not perfect Gaussian rules, the Xiao-Gimbutas quadratures achieve remarkably high-order. A rule with 48 weights and nodes, for example, is exact for polynomials of degree 16, of which there are $n_{16} = 136$. The rule has only $3 \times 48 = 144$ free parameters. For the sake of convenience we would like the quadrature nodes to serve as interpolation/approximation nodes as well. There are, however, far fewer Xiao-Gimbutas nodes than functions we would like to interpolate (namely $n_p$). Instead of using an even higher order Xiao-Gimbutas rule, with at least $n_p$ nodes, we choose an alternative quadrature scheme, introduced by Vioreanu and Rokhlin in 2014~\cite{vioreanu_2014}. The Vioreanu-Rokhlin nodes of order~$p$, obtained via a similar optimization procedure, are a collection of~$n_p$ nodes which can be used simultaneously for high-order polynomial interpolation, approximation, and integration on $T_{0}$. We refer the reader to~\cite{vioreanu_2014} for a thorough discussion. For our purposes, it suffices to note that the interpolation operators computed using these nodes are extremely well-conditioned. 
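Before turning to the quadrature properties of these nodes, the interpolation machinery of Section~\ref{sec:approx} can be summarized in a short sketch; it is illustrative only: the normalization constants $c_{nm}$ are omitted (which merely rescales the basis and does not change the interpolant), the row/column ordering is ours, and the tabulated Vioreanu-Rokhlin nodes are assumed to be supplied externally.

\begin{verbatim}
import numpy as np
from scipy.special import eval_jacobi, eval_legendre

def koornwinder(n, m, u, v):
    # Unnormalised K_nm(u, v) on the standard simplex T_0.
    u, v = np.asarray(u, float), np.asarray(v, float)
    w = np.where(v < 1.0, 1.0 - v, 1.0)       # guard the (removable) v = 1 point
    return (1.0 - v) ** m \
        * eval_jacobi(n - m, 0, 2 * m + 1, 1.0 - 2.0 * v) \
        * eval_legendre(m, (2.0 * u + v - 1.0) / w)

def interp_matrices(nodes, p):
    # nodes: (n_p, 2) interpolation nodes inside T_0 with n_p = p(p+1)/2,
    # e.g. tabulated Vioreanu-Rokhlin nodes (assumed available).
    # Rows of U correspond to nodes, columns to pairs (n, m) with m <= n < p.
    u, v = nodes[:, 0], nodes[:, 1]
    cols = [koornwinder(n, m, u, v) for n in range(p) for m in range(n + 1)]
    U = np.stack(cols, axis=1)                # values = U @ coefficients
    V = np.linalg.inv(U)                      # coefficients = V @ values
    return U, V
\end{verbatim}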
As a quadrature rule, the nodes and weights are Gaussian-like; they have positive weights and integrate more polynomials than there are nodes in the quadrature. For example, the Vioreanu-Rokhlin rule that interpolates polynomials of degree $p=16$, with $n_{16}=136$ nodes, integrates all $378$ polynomials of degree up to $p' = 27$. A perfect Gaussian rule would integrate~$3n_p = 408$ functions exactly. The relationship between the order of the interpolation scheme~$p$ and the order of the quadrature $p'$ is somewhat complicated, and obtained empirically~\cite{vioreanu_2014}. \section{Acoustic scattering from a sound-soft boundary} \label{sec:surface} Let~$\Omega$ be a bounded region in~$\bbR^{3}$, with smooth boundary~$\partial\Omega = \Gamma$. Given a function~$f$ defined on~$\Gamma$, a function~$u$ defined in $\bbR^{3} \setminus \Omega$ is said to satisfy the exterior Dirichlet problem for the Helmholtz equation if \begin{equation}\label{eq:extdir} \begin{aligned} (\Delta + k^2) \, u &= 0 &\qquad &\text{in } \bbR^{3} \setminus \Omega,\\ u &= f & &\text{on } \Gamma, \\ \lim_{r \to \infty} \, r \left(\frac{\partial u}{\partial r} - ik u \right) &= 0. & & \end{aligned} \end{equation} In acoustics, Dirichlet problems such as this arise when~$\partial\Omega$ is \emph{sound-soft} and $f = - u^{in}$, where $u^{in}$ is an impinging acoustic wave. A standard approach for solving the Dirichlet problem is to let \begin{equation} \label{eq:urep} u = \mathcal D_{k}[\sigma] - ik \, \mathcal S_{k}[\sigma], \end{equation} where~$\sigma$ is an unknown density function defined on~$\Gamma$. Here~$\mathcal S$ and~$\mathcal D$ are the single layer and double layer operators, respectively, given by \begin{align} \mathcal S_{k}[\sigma](\bx) &= \int_{\Gamma} G_{k}(\bx,\by) \, \sigma(\by) \, da(\by), \label{eq:sldef} \\ \mathcal D_{k}[\sigma](\bx) &= \int_{\Gamma} \left( \bn(\by) \cdot \nabla_{\by}G_{k}(\bx,\by) \right) \, \sigma(\by) \, da(\by), \label{eq:dldef} \end{align} where $\bn(\by)$ is the outward normal at~$\by \in \Gamma$, and $G_k(\bx,\by)$ is given by~\eqref{eq:greenfunhelm}. The representation~\eqref{eq:urep} automatically satisfies the Helmholtz equation and the radiation condition in~\eqref{eq:extdir}. Imposing the boundary condition, and using standard jump relations for layer potentials~\cite{colton_kress}, we obtain the following second-kind integral equation along~$\Gamma$ for the density $\sigma$: \begin{equation} \label{eq:inteq} \frac{1}{2}\sigma(\bx) + \mathcal D_{k}[\sigma](\bx) - ik \, \mathcal S_{k}[\sigma](\bx) = f(\bx), \qquad \bx \in \Gamma. \end{equation} This involves a slight abuse of notation: for $\bx \in \Gamma$, $\mathcal D_{k}[\sigma](\bx)$ should be evaluated in the principal value sense. When solving~\eqref{eq:inteq}, the accurate evaluation of the layer potentials $\mathcal S_{k}[\sigma]$, $\mathcal D_{k}[\sigma]$ on~$\Gamma$ is essential for either direct or iterative solvers. We will focus here on the evaluation of the single layer potential $\mathcal S[\sigma]$, assuming~$\sigma$ is known. Only minor modifications are needed to address the double layer potential, as well as other scalar or vector-valued layer potentials that arise in electrostatics, elastostatics, viscous flow, or electromagnetics.
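For the reader's convenience, we recall how~\eqref{eq:inteq} follows from~\eqref{eq:urep} via the standard exterior jump relations (see, e.g.,~\cite{colton_kress}): as $\bx \to \bx_{0} \in \Gamma$ from the exterior, the side into which $\bn$ points, the single layer potential is continuous, while \begin{equation} \lim_{\bx \to \bx_{0}} \mathcal D_{k}[\sigma](\bx) = \frac{1}{2}\sigma(\bx_{0}) + \mathcal D_{k}[\sigma](\bx_{0}), \end{equation} with the operator on the right understood in the principal value sense. Substituting the representation~\eqref{eq:urep} into the boundary condition $u = f$ on~$\Gamma$ then yields~\eqref{eq:inteq}.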
\subsection{Surface parameterizations} While some simple boundaries (such as spheres, ellipsoids and tori) can be described by global parameterizations, in general it is necessary to describe a complicated surface~$\Gamma$ as a collection of surface patches, each of which is referred to as a \emph{chart}. The collection of charts whose union defines $\Gamma$ will be referred to as an \emph{atlas}. More precisely, we assume that the surface is the disjoint union of patches~$\Gamma_j$ \begin{equation} \Gamma = \cup_{j=1}^{\Npat} \, \Gamma_{j}, \end{equation} and that the patch~$\Gamma_{j}$ is parameterized by a non-degenerate chart~$\bX^j: T_{0} \to \Gamma_{j}$, where $T_{0}$ is the standard simplex~\eqref{eq:stdsimplex}. Given these charts~$\bX^j$, a local coordinate system can be defined on patch~$\Gamma_j$ by taking its partial derivatives. For this, we define \begin{equation} \bX^j_u \equiv \frac{\partial \bX^j}{\partial u}, \qquad \bX^j_v \equiv \frac{\partial \bX^j}{\partial v}, \qquad \bn^j \equiv \bX^j_u \times \bX^j_v. \end{equation} Finally, we assume that the triplet~$\bX^j_u$, $\bX^j_v$, $\bn^j$ forms a right-handed coordinate system with~$\bn^j$ pointing into the unbounded region $\bbR^3 \setminus \Omega$. In general, these vectors are neither orthogonal nor of unit length. The area element on the patch~$\Gamma_j$ is determined by the Jacobian~$J^j$, \begin{equation}\label{eq:loc} \begin{aligned} da(\bX^j) &= \vert \bX^j_u \times \bX^j_v \vert \, du \, dv \\ &= J^j(u,v) \, du \, dv. \end{aligned} \end{equation} \subsection{Discretization and integration} \label{sec:disc} If~$f$ is a function defined on~$\Gamma$, then its integral can be decomposed as a sum over patches: \begin{equation}\label{eq:intsplit} \begin{aligned} \int_{\Gamma} f(\bx) \, da(\bx) &= \sum_{j=1}^{\Npat} \int_{\Gamma_{j}} f(\bx) \, da(\bx) \\ &= \sum_{j=1}^{\Npat} \int_{T_{0}} f\left(\bX^{j}(u,v)\right) \, J^{j}(u,v) \, du dv \\ &= \sum_{j=1}^{\Npat} \int_{u=0}^{1} \int_{v=0}^{1-u} f\left(\bX^{j}(u,v)\right) \, J^{j}(u,v) \, du dv, \end{aligned} \end{equation} where the Jacobian is given in~\eqref{eq:loc}. If, in addition, $f$ is smooth, then each of the integrals on~$T_{0}$ in~\eqref{eq:intsplit} can be approximated using Xiao-Gimbutas or Vioreanu-Rokhlin quadrature rules, as discussed in Section~\ref{sec:prelim}. Using the latter. we have \begin{equation} \int_{\Gamma} f(\bx) \, da(\bx) \approx \sum_{j=1}^{\Npat} \sum_{\ell=1}^{n_p} w_\ell \, f\left(\bX^{j}(u_\ell,v_\ell)\right) \, J^{j}(u_\ell,v_\ell), \end{equation} where~$n_p$ is the number of nodes in the quadrature, which varies depending on the desired order of accuracy. For a patch~$\Gamma_{j}$ and a target $\bx$, however, the integrand appearing in $S_{k}[\sigma](\bx)$ is only smooth when $\bx$ is in the far field. When $\bx$ is either on $\Gamma_j$ or nearby, we will need to modify our quadrature approach, as described at the outset. Furthermore, in practice, we will only be given approximations of the charts~$\bX^j$ and the function~$f$ (or the density~$\sigma$) on each patch to finite order. In what follows, we define a $p$th-order approximation as one for which the truncation error is~$\mathcal O(h^p)$, where $h$ is, say, the diameter of the patch $\Gamma_j$. For a scalar function~$f$, it can be approximated to $p$th-order in~$L^2(T_0)$ using Koornwinder polynomials as \begin{equation} f(u,v) \approx \sum_{n+m < p} f_{nm} \, K_{nm}(u,v). 
\end{equation} The coefficients~$f_{nm}$ can be computed using the values-to-coefficients matrix~$\matrixsym{V}$, as described in Section~\ref{sec:approx}. The charts~$\bX^j$ will generally be approximated using a vector version of the above formula: \begin{equation} \begin{aligned} \bX^j(u,v) &\approx \sum_{n+m <p} \begin{pmatrix} x^j_{nm} \\ y^j_{nm} \\ z^j_{nm} \end{pmatrix} K_{nm}(u,v) \\ &=\sum_{n+m <p} \bx_{nm}^j \, K_{nm}(u,v) \end{aligned} \end{equation} It is important to note that even if the charts~$\bX^j$ are approximated to accuracy~$\epsilon$, i.e. \begin{equation} \bX^j(u,v) = \sum_{n+m <p} \bx_{nm}^j \, K_{nm}(u,v) + \mathcal O(\epsilon), \end{equation} it does not follow that the Jacobian~$J^j$ will be evaluated to precision~$\epsilon$ as well. The function~$J^j$ is non-linear and usually requires a higher-order approximation than the individual components of the chart itself. This cannot be avoided unless analytic derivative information is provided for each patch~$\Gamma_j$. This affects the accuracy of numerical approximations to surface integrals for a fixed set of patches (but not the asymptotic convergence rate). \begin{remark} In the following, we use the same order of discretization for representing the layer potential densities $\sigma$ and the surface information, i.e. the charts $\bX^{j}$ and their derivatives $\bX^{j}_{u}$, and $\bX^{j}_{v}$. While this choice is made for convenience of notation and software implementation, our approach to evaluating layer potentials extends in a straightforward manner to the case where different orders of discretization are used for representing the surface information and layer potentials densities. \end{remark} \section{Locally corrected quadratures} \label{sec:local} For a target location $\bx \in \Gamma_j$, let us first consider the {\em self interaction} \begin{equation} \Sself[\sigma](\bx) = \int_{\Gamma_{j}} G_k(\bx,\by) \, \sigma(\by) \, da(\by). \label{sselfdef} \end{equation} In~\cite{bremer_2012c,bremer_2013}, the authors designed quadrature rules for exactly this purpose, under the assumption that~$\sigma$ and~$J^j$ are well-approximated by polynomials (and therefore representable by Koornwinder expansions). The quadrature schemes in these papers involve a rather intricate set of transformations but yield a set of precomputed tables which, when composed with the chart $\bX^j$, yield the desired high-order accuracy. Briefly, for any $\bx \in \Gamma_{j}$, and all $\sigma$ of the form $\sigma(u,v) = \sum_{nm} c_{nm} K_{nm}(u,v)$, there exist $N(\bx)$ nodes $(u_{\bx,\ell},v_{\bx,\ell}) \in T_{0}$ and associated quadrature weights $w_{\bx,\ell}$, such that \begin{multline} \label{eq:errestself} \bigg | \int_{u=0}^{1} \int_{v=0}^{1-u} G_{k}(\bx, \bX^{j}(u,v)) \, \sigma(u,v) \, J^{j}(u,v) \, du \, dv \\ - \sum_{\ell=1}^{N(\bx)} G_{k}(\bx, \bX^{j}(u_{\bx,\ell},v_{\bx,\ell})) \, \sigma(u_{\bx,\ell},v_{\bx,\ell}) \, J^{j}(u_{\bx,\ell},v_{\bx,\ell}) w_{\bx,\ell} \bigg | \leq \varepsilon \cdot \| \sigma \|_{\mathbb{L}^{2}(\Gamma_{j})}. \end{multline} As mentioned in the introduction, in the original paper~\cite{bremer_2012c}, which was focused on quadrature design, a simple coupling to FMMs was mentioned that relied on the underlying discretization being uniformly high-order. Near field interactions were done using \emph{on-the-fly} adaptive integration. In their subsequent paper~\cite{bremer_2013}, this type of expensive adaptive integration was used for all non-self interactions (i.e. 
no FMM-type acceleration was used at all). Such an approach cannot be directly accelerated with standard FMMs since the effective quadrature weights are functions of both the source and target locations. Recall that for $\bx \in \bbR \setminus \Gamma$, we split the single layer potential $\mathcal S_{k}[\sigma](\bx)$ into two pieces: \begin{equation} \label{eq:near-far-split-off} \begin{aligned} \mathcal S_{k}[\sigma](\bx) &= \int_{\Gamma} G_k(\bx,\by) \, \sigma(\by) \, da(\by) \\ &=\sum_{\ell=1}^{\Npat} \int_{\Gamma_{\ell}} G_k(\bx,\by) \, \sigma(\by) \, da(\by) = \Snear[\sigma](\bx) + \Sfar[\sigma](\bx) \, , \end{aligned} \end{equation} and when $\bx$ lies {\em on} the boundary, say on patch $\Gamma_j$, then the single layer potential $\mathcal S[\sigma](\bx)$ is split into three pieces: \begin{equation} \label{eq:near-far-split-on} \mathcal S_{k}[\sigma](\bx) = \Sself[\sigma](\bx) + \Snear[\sigma](\bx) + \Sfar[\sigma](\bx), \end{equation} where $\Sself[\sigma](\bx)$ is defined in \eqref{sselfdef}, and the near and far regions associated with a target and the corresponding definitions of $\Snear[\sigma](\bx)$ and $\Sfar[\sigma](\bx)$ are described below. For this, it turns out to be easier to first take the point of view of a patch rather than a target. Let~$\bc_{j}$ denote the centroid of the patch~$\Gamma_{j}$, \begin{equation} \begin{aligned} \bc_{j} &= \int_{\Gamma_{j}} \bx \, da(\bx) \\ &= \int_{T_{0}} \bX^{j}(u,v) \, du \, dv , \end{aligned} \end{equation} and let \begin{equation} \label{eq:rjdef} R_{j} = \min_{R>0} \{ R \mid \, \Gamma_{j} \subset B_{R}(\bc_{j}) \}, \end{equation} where~$B_{R}(\bc_j)$ is the ball of radius~$R$ centered at~$\bc_j$. That is, $B_{R_j}(\bc_j)$ is the ball of minimal radius containing the patch $\Gamma_j$. Letting $\eta > 1$ be a free parameter for the moment, we define the near field of the patch $\Gamma_{j}$, denoted by $N_{\eta}(\Gamma_{j})$, to be the set of points that do not lie on $\Gamma_j$ but are within the ball $B_{\eta R_j}(\bc_j)$ (see Fig.~\ref{fig:near-field-def}). Thus, \begin{equation} N_{\eta}(\Gamma_{j}) = \{\bx \in \bbR^3 \setminus \Gamma_j \mid \, d(\bc_{j},\bx) \leq \eta R_{j} \} \, . \end{equation} \begin{figure}[t] \centering \includegraphics[width=0.5\linewidth]{nearfieldfig.pdf} \caption{The smallest sphere containing surface patch~$\Gamma_j$ centered at~${\bf c}_j$ and the near field region $N_\eta(\Gamma_j)$. $\eta>1$ is a free parameter whose selection is based on the order of accuracy of the far field quadrature.} \label{fig:near-field-def} \end{figure} Given the collection of near field regions of the form $N_{\eta}(\Gamma_{j})$, let $T_{\eta}(\bx)$ denote the dual list: that is, the collection of patches $\Gamma_{j}$ for which the point $\bx \in N_{\eta}(\Gamma_{j})$, \begin{equation} T_{\eta}(\bx) = \{ \Gamma_{j} \mid \bx \in N_{\eta}(\Gamma_{j}) \}. \end{equation} Similarly, for $\bx \in \bbR^3 \setminus \Gamma$, we denote the far field of $\bx$ by \begin{equation} F_{\eta}(\bx) = \{ \Gamma_{j} \mid \bx \notin N_{\eta}(\Gamma_{j}) \}. \end{equation} When $\bx \in \Gamma_i$ is a boundary point, we let \begin{equation} F_{\eta}(\bx) = \{ \Gamma_{j}, j \neq i \mid \bx \notin N_{\eta}(\Gamma_{j}) \}. 
\end{equation} Then referring to the near-far split of the layer potential for targets off and on-surface described in~\cref{eq:near-far-split-off}, and~\cref{eq:near-far-split-on}, respectively, the near and far part of the layer potentials $\Snear$, and $\Sfar$ are given by \begin{equation} \Snear[\sigma](\bx) = \sum_{\Gamma_{\ell} \in T_{\eta}(\bx)} \int_{\Gamma_{\ell}} G_k(\bx, \by) \, \sigma(\by) \, da(\by) \end{equation} and \begin{equation} \Sfar[\sigma](\bx) = \sum_{\Gamma_{\ell} \in F_{\eta}(\bx)} \int_{\Gamma_{\ell}} G_k(\bx, \by) \, \sigma(\by) \, da(\by). \end{equation} As noted in the beginning of the section, $\Sself[\sigma](\bx)$ can be computed using the generalized Gaussian quadratures of \cite{bremer_2012c,bremer_2013}. By virtue of their separation from the source patches, all of the integrands in $\Sfar$ are smooth and can be computed using either Vioreanu-Rokhlin or Xiao-Gimbutas quadrature rules, with weights that are independent of the target location~$\bx$. The accuracy of these rules, which is affected by the free parameter $\eta$, is discussed in Section~\ref{sec:etachoice}. It remains only to develop an efficient scheme for evaluating the integrals which define~$\Snear[\sigma]$. At present, there do not exist quadrature rules that are capable of simultaneously accounting for the singularity in the Green's function and the local geometric variation in an efficient manner. The approach developed below involves a judicious combination of precomputation and a greedy, adaptive algorithm applied, for every target point~$\bx$, to each~$\Gamma_{\ell}\in T_{\eta}(\bx)$. Once the near field quadratures have been computed, they can be saved using only $O(N)$ storage. When solving an integral equation iteratively, this can be used to accelerate subsequent applications of the integral operator. \begin{remark} The evaluation of near field quadratures does {\em not} affect the overall complexity of computing layer potentials, assuming~$\eta$ is not too large. This follows from the fact that, for each target~$\bx$, there are only~$O(1)$ patches contained in~$T_{\eta}(\bx)$. Thus, the cost of all near field contributions is of the order~$O(N)$. Since there can be several patches in the near field, however, this computation tends to be the rate limiting step in the overall quadrature generation procedure. \end{remark} \begin{remark} Without entering into a detailed literature review, it should be noted that coordinate transformation methods such as those in \cite{bruno2001fast,malhotra19,erichsen1998quadrature,ying} can also be used for computing near field interactions for surface targets. However, these methods don't apply easily to off-surface evaluation. An alternative to our procedure is Quadrature by Expansion (QBX) \cite{klockner_2013,Siegel2018ALT,Wala2018,Wala2020}, which handles singular and nearly-singular integrals in a unified manner and (like the method of this paper) works both on and off surface. There are distinct trade-offs to be made in QBX-based schemes and the scheme presented here. In the end, the best method will be determined by accuracy, efficiency and ease of use. At present, we have found the adaptive quadrature approach to be the most robust and fastest in terms of overall performance. \end{remark} \subsection{Selecting the near field cutoff \label{sec:etachoice}} In this section, we discuss the choice of the parameter $\eta$, which defines the near field for each patch~(see Fig.~\ref{fig:near-field-def}). 
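Before turning to the selection of $\eta$ itself, it may help to make the bookkeeping above concrete. The following sketch (illustrative only; the function name and array layout are ours and not part of any reference implementation) builds the near field lists $N_{\eta}(\Gamma_j)$ and their duals $T_{\eta}(\bx)$ from the patch centroids, the enclosing-sphere radii, and the target locations:
\begin{verbatim}
import numpy as np

def near_field_lists(centroids, radii, targets, eta):
    """Build N_eta(Gamma_j) for every patch and the dual lists T_eta(x).

    centroids : (npatches, 3) array of patch centroids c_j
    radii     : (npatches,)   array of enclosing-sphere radii R_j
    targets   : (ntargets, 3) array of target locations x
    eta       : near field cutoff parameter (eta > 1)

    Targets lying on patch j itself should be excluded from N_eta(Gamma_j),
    since they are handled by the self interaction; that step is omitted here.
    """
    near = []                                  # near[j]: target indices in N_eta(Gamma_j)
    dual = [[] for _ in range(len(targets))]   # dual[i]: patches j with x_i in N_eta(Gamma_j)
    for j, (c, R) in enumerate(zip(centroids, radii)):
        dist = np.linalg.norm(targets - c, axis=1)
        idx = np.flatnonzero(dist <= eta * R)
        near.append(idx)
        for i in idx:
            dual[i].append(j)
    return near, dual
\end{verbatim}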
Once $\eta$ is fixed, accuracy considerations will determine whether the interpolation nodes used on each patch are sufficient for accurate calculation of the far field $\Sfar$, or whether we will need to increase the order of the far field quadrature. As $\eta$ increases, the near field for each patch obviously grows, so that the number of targets for which we will apply specialized quadrature increases. Since we would like to store these near field quadratures for the purpose of repeated application of the integral operator, both the storage and CPU requirements also grow accordingly. On the other hand, as $\eta$ decreases, the integrand in $\Sfar$ becomes less smooth, and the number of quadrature nodes needed to achieve the desired precision in the far field will grow. If it exceeds the number of original nodes $n_p$ to achieve $p$th-order convergence, we will have to {\em oversample} the layer potential density~$\sigma$. That is, we will have to interpolate~$\sigma$ to a larger number of quadrature nodes. This increases the computational cost of evaluating the far field via the FMM. Balancing the cost of the near field and far field interactions sets the optimal value for~$\eta$. Based on extensive numerical experiments, we have found that for the highest order methods ($p>8$), $\eta = 1.25$ works well. For orders of accuracy $4<p\leq 8$, we recommend $\eta = 2$, and for the lowest orders of accuracy $p\leq 4$, we recommend $\eta = 2.75$. \subsection{Oversampling via $p-$refinement} \label{sec:oversamp} For a patch $\Gamma_{j}$, suppose that an order $q$ Vioreanu-Rokhlin quadrature rule is the smallest order quadrature rule which accurately computes the contribution of $\Gamma_{j}$ to all targets $\bx \not \in N_{\eta}(\Gamma_{j})$ to the desired precision $\varepsilon$. Then, the oversampling factor for $\Gamma_{j}$ is defined as the ratio of $q(q+1)/(p(p+1))$. For a given $\eta$, rather than trying to estimate the oversampling factor needed for $\Sfar$ analytically, we compute the far field quadrature order $q$ needed for a specified precision numerically. For each patch $\Gamma_{j}$, we first identify the $10$ farthest targets in $N_{\eta}(\Gamma_{j})$. If the cardinality of $N_{\eta}(\Gamma_{j})$ is less than $20$, we choose the farthest $|N_{\eta}(\Gamma_{j})|/2$ targets from the list, and append $15$ randomly chosen targets on the boundary of the sphere $\partial B_{\eta R_{j}} (\boldsymbol{c}_{j})$. We denote this set of targets by $F(\Gamma_{j})$. For the $nm$-th Koornwinder polynomial $K_{nm}$, let \begin{equation} I^j_{nm} (\bx) = \int_{T_{0}} G_{k}(\bx, \bX^{j}(u,v)) \, K_{nm}(u,v) \, J^{j}(u,v) \, du \, dv , \end{equation} and let $\tilde{I}^j_{nm,q}(\bx)$ denote the approximation to the integral computed using the $q$th-order Vioreanu-Rokhlin quadrature. Then, the far field order $q_{j}$ for patch $\Gamma_{j}$ is chosen according to the following criterion: $q_j$ is the smallest $q$ such that all of the integrals $\tilde{I}^j_{nm,q}(\bx)$, for $0\leq m\leq n \leq p$ and $\bx \in F(\Gamma_{j})$, agree to a prescribed tolerance~$\varepsilon d_{j}/\| \matrixsym{V} \|$ with the corresponding integrals obtained from using a~$(q+1)$th-order Vioreanu-Rokhlin quadrature. 
Here, $d_{j}$ is given by \begin{equation} d_{j} = \min_{\ell=1\ldots n_{p}} \sqrt{J^{j}(u_{\ell},v_{\ell}) w_{\ell}}, \end{equation} where $u_{\ell},v_{\ell}$ are the order $p$ Vioreanu-Rokhlin nodes and $w_{\ell}$ are the corresponding weights and $\| \matrixsym{V} \|$ is the operator norm of the values to interpolation matrix $\matrixsym{V}$ defined in~\cref{sec:approx}. That is to say, \begin{equation} \label{eq:qj} q_{j} = \min_{q} \text{ such that } \max_{\bx \in F(\Gamma_{j})} \sqrt{\sum_{n+m<p} |\tilde{I}^j_{nm,q}(\bx) - \tilde{I}^j_{nm,q+1}(\bx)|^2} \leq \varepsilon \frac{d_{j}}{\| \matrixsym{V} \|} \,. \end{equation} This seemingly arbitrary choice of scaling $\varepsilon$ allows us to obtain an estimate for the relative error of the contribution of $\Gamma_{j}$ to the layer potential in an $L^{2}$ sense, and will be clarified in the error analysis at the end of the section. It should be noted that $d_{j}$ scales proportionally to a linear dimension of $\Gamma_{j}$ (for example, like $R_{j}$). On the other hand, if $\bx \in \mathbb{R}^{3} \setminus N_{\eta}(\Gamma_{j})$, the kernel $G_{k}(\bx,\bX^{j}(u,v))$ in the integrand of $I^{j}_{nm}(\bx)$ is smoother than the corresponding kernel for $\bx\in F(\Gamma_{j})$. Thus, the above result also implies that for all $\bx \in \mathbb{R}^{3} \setminus \Gamma_{j}$, \begin{equation} \sqrt{\sum_{n+m<p}\left| \tilde{I}^{j}_{nm,q}(\bx) - I^{j}_{nm} (\bx)\right|^2} \leq \varepsilon \frac{d_{j}}{\| \matrixsym{V} \|} \, . \end{equation} However, for analyzing the error in evaluating the layer potential, we wish to obtain an estimate for the contribution of a discretized patch $\Gamma_{j}$ to the layer potential $\mathcal S_{k}[\sigma](\bx)$ denoted by $L_{j}(\bx)$, \begin{equation} \label{eq:fi} L_{j}(\bx) = \int_{u=0}^{1} \int_{v=0}^{1-u} G_{k}(\bx, \bX^{j}(u,v)) \, \sigma(u,v) \, J^{j}(u,v) \, du \, dv \,. \end{equation} Since the density~$\sigma$ is known on patch~$\Gamma_{j}$ through its samples~$\sigma^{j}_{\ell}$ located at the Vioreanu-Rokhlin nodes, we have that \begin{equation} \label{eq:sdef} \sigma(u,v) = \sum_{n+m < p} s^{j}_{nm} \, K_{nm}(u,v), \qquad \text{where} \qquad s^{j}_{nm} = \sum_{\ell = 1}^{n_p} V_{(nm),\ell} \, \sigma^j_\ell , \end{equation} with~$\matrixsym{V}$ the values-to-coefficients matrix in~\eqref{eq:v2c}. Inserting the above expression into~\eqref{eq:fi} we have: \begin{equation} \label{eq:ldef} \begin{aligned} L_{j}(\bx) &=\sum_{n+m < p} s^{j}_{nm} \int_{u=0}^{1} \int_{v=0}^{1-u} G_{k}(\bx, \bX^{j}(u,v)) \, K_{nm}(u,v) \, J^{j}(u,v) \, du \, dv \\ &= \sum_{n+m < p} s^{j}_{nm} \, I^j_{nm}(\bx) \\ &= \sum_{\ell = 1}^{n_p} \left( \sum_{n+m < p} V_{(nm),\ell} \, I^j_{nm}(\bx) \right) \sigma^j_\ell . \end{aligned} \end{equation} Suppose next that $\tilde{L}_{j,q_{j}}(\bx)$ denotes an approximation to $L_{j}(\bx)$, where each of the integrals $I_{nm}^{j}$ are replaced by $\tilde{I}_{nm,q_{j}}(\bx)$. Let $w_{j,\ell} = \sqrt{J^{j}(u_{\ell},v_{\ell}) w_{\ell}}$, and let $\matrixsym{W}$ be the diagonal matrix whose entries are $w_{j,\ell}$, $\ell=1,2\ldots n_{p}$, and let $e_{nm} = I^{j}_{nm}(\bx) - \tilde{I}^{j}_{nm,q_{j}}(\bx)$, $n+m<p$. 
Then for all $\bx \in \mathbb{R}^{3} \setminus N_{\eta}(\Gamma_{j})$, it follows that \begin{equation} \label{eq:errestfar} \begin{aligned} |L_{j}(\bx) - \tilde{L}_{j,q_{j}}(\bx) | &= \boldsymbol{e}^{T} \matrixsym{V} \matrixsym{W}^{-1} \matrixsym{W} \begin{bmatrix} \sigma^{j}_{1} \\ \vdots \\ \sigma^{j}_{n_{p}} \end{bmatrix} \\ & \leq \frac{\varepsilon d_{j}}{\| \matrixsym{V} \|} \cdot \| \matrixsym{V} \| \cdot \| \matrixsym{W}^{-1} \| \sqrt{\sum_{\ell=1}^{n_{p}} |\sigma^{j}_{\ell}|^2 w_{j,\ell}^2}\\ & \leq \varepsilon \left( \| \sigma \|_{\mathbb{L}^{2}(\Gamma_{j})} + O(\varepsilon) \right) \, , \end{aligned} \end{equation} The last inequality follows from the fact that $\| \matrixsym{W}^{-1} \| = 1/d_{j}$, and that \begin{equation} \sum_{\ell=1}^{n_{p}} |\sigma^{j}_{\ell}|^2 w_{j,\ell}^2 = \sum_{\ell=1}^{n_{p}} |\sigma^{j}_{\ell}|^2 w_{\ell} J^{j}(u_{\ell},v_{\ell}) = \int_{\Gamma_{j}} |\sigma(\by)|^2 \, da(\by) + O(\varepsilon) \,. \end{equation} \begin{remark} The same procedure as described above directly applies to the double layer potential with kernel \mbox{$K(\bx,\by) = \bn(\by) \cdot \nabla_{\by}G_{k}(\bx,\by)$}. For the normal derivative of the single layer potential, with kernel \mbox{$K(\bx,\by) = \bn(\bx) \cdot \nabla_{\bx}G_{k}(\bx,\by)$}, the procedure above can't be applied since $\bn(\bx)$ isn't well-defined at off surface target points. However, the operator is simply the adjoint of the double layer, and therefore we use the same~$q_j$ as estimated for that case. \end{remark} \begin{remark} The $q_j$ computed in equation~\eqref{eq:qj} will of course depend on the kernel $G_k$, and any normalization factors. The oversampling factors were computed based on the Green's function $G_k(r) = e^{ikr}/(4\pi r)$. \end{remark} \begin{remark} A simple calculation shows that \begin{equation} L_{j,q_{j}} = \sum_{\ell=1}^{n_{q_{j}}} G_{k}(\bx, \bX^{j}(u_{\ell},v_{\ell})) J^{j}(u_{\ell},v_{\ell}) w_{\ell} \tilde{\sigma}^{j}_{\ell} \, , \end{equation} where $u_{\ell},v_{\ell}$ now are the order $q_{j}$ Vioreanu-Rokhlin nodes on $T_{0}$, $w_{\ell}$ the corresponding quadrature weights, and $\tilde{\sigma}^{j}_{\ell}$ is the interpolated density obtained by evaluating \begin{equation} \tilde{\sigma}^{j}_{\ell} = \sum_{n+m<p} s^{j}_{nm} K_{nm}(u_{\ell},v_{\ell}) \, , \end{equation} where $s^{j}_{nm}$ is defined in~\cref{eq:sdef}. Since the same nodes $\bX^{j}(u_{\ell},v_{\ell})$ on $\Gamma_{j}$ can be used for all $\bx \in \mathbb{R}^{3} \setminus N_{\eta}(\Gamma_{j})$, the far part of the layer potential evaluation can be trivially coupled to fast multipole methods. \end{remark} \subsection{Near field quadrature} \label{sec:nearfieldquad} Finally, we turn our attention to the evaluation of~$\Snear[\sigma](\bx)$, for which the integrands are nearly-singular and we wish to develop a \emph{high performance} variant of adaptive integration. Let us consider a patch $\Gamma_{j} \in T_{\eta}(\bx)$, and the integral $L_{j}(\bx)$ defined in~\cref{eq:fi}, which is also a near field integral for all $\bx \in N_{\eta}(\Gamma_{j})$. It follows from~\cref{eq:ldef} that \begin{equation} \begin{aligned} L_{j}(\bx) &= \sum_{\ell = 1}^{n_p} \left( \sum_{n+m < p} V_{(nm),\ell} \, I^j_{nm}(\bx) \right) \sigma^j_\ell \\ &= \sum_{\ell = 1}^{n_p} a^j_\ell(\bx) \, \sigma^j_\ell. \end{aligned} \end{equation} The numbers~$a^j_\ell(\bx)$ are the matrix entries which map the function values~$\sigma^j_\ell$ on patch~$\Gamma_j$ to the induced near field potential at location~$\bx$. 
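Once these entries have been tabulated for all targets in $N_{\eta}(\Gamma_j)$, applying the near field correction reduces to one small dense matrix-vector product per patch, with a total storage cost of $\sum_{j} N_{near}(j)\, n_p$ entries. The following minimal sketch (the array layout and names are hypothetical, chosen purely for illustration) shows this application step:
\begin{verbatim}
import numpy as np

def apply_near_corrections(near_mats, near_targets, sigma_samples, pot):
    # near_mats[j]     : (N_near_j, n_p) precomputed entries a^j_l(x) for patch j
    # near_targets[j]  : (N_near_j,) global indices of the targets in N_eta(Gamma_j)
    # sigma_samples[j] : (n_p,) density samples sigma^j_l on patch Gamma_j
    # pot              : (N_targets,) complex array accumulating the potential
    for A, idx, sig in zip(near_mats, near_targets, sigma_samples):
        if len(idx) > 0:
            pot[idx] += A @ sig   # one small matvec per near-field patch
    return pot
\end{verbatim}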
If we approximate each~$I^j_{nm}(\bx)$ by~$\tilde I^j_{nm}(\bx)$ to precision~$\frac{\varepsilon d_{j}}{\| \matrixsym{V} \|}$, then using the same error analysis as in~\cref{sec:oversamp}, we have that \begin{equation} \label{eq:errestnear} \left| \sum_{\ell = 1}^{n_p} a^j_\ell(\bx) \, \sigma^j_\ell - \int_{u=0}^{1} \int_{v=0}^{1-u} G_{k}(\bx, \bX^{j}(u,v)) \, \sigma(u,v) \, J^{j}(u,v) \, du \, dv \right| \leq \varepsilon \| \sigma \|_{\mathbb{L}^{2}(\Gamma_{j})} \, . \end{equation} We compute~$\tilde I^j_{nm}(\bx)$ by adaptive integration on $T_{0}$. That is, for precision $\varepsilon$, we compute the integral on $T_{0}$ using $q$th-order Vioreanu-Rokhlin nodes, and compare it to the integral obtained by \begin{enumerate} \item marking the midpoint of each edge of $T_0$, \item subdividing $T_{0}$ into $4$ smaller right triangles, which we will call its {\em descendants}, and \item using $q$th order Vioreanu-Rokhlin nodes on each descendant. \end{enumerate} The subdivision process is repeated until, for each triangle $T$, its contribution to the total integral agrees with the contribution computed using its descendants with an error less than~$\varepsilon \cdot |T|/|T_{0}|$. Done naively, this adaptive integration process dominates the cost of quadrature generation because of the large number of targets in $N_{\eta}(\Gamma_{j})$. Note, however, that as we vary~$n,m$, for a fixed target~$\bx$, the integrand of $I^j_{nm}(\bx)$ includes the same kernel values~$G(\bx,\bX^{j}(u,v))$. Moreover, the adaptive grids generated for different targets have significant commonality. Thus, we can reuse the function values of~$\bX^{j}(u,v)$,~$K_{nm}(u,v)$ and~$J^{j}(u,v)$ if they have already been computed on any descendant triangle (see Fig.~\ref{fig:multipletargpot}). The resulting scheme incurs very little increase in storage requirements; this significantly improves the overall performance. \begin{figure} \centering \includegraphics[width=\linewidth]{comp-reutilization2.pdf} \caption{Adaptive integration grids used for the red target (left), blue target (center). The black grid (right) is the common set of triangles in both of the grids for which the function values of $\bX^{j},K_{nm}$, and $J^{j}$ are reutilized.} \label{fig:multipletargpot} \end{figure} \begin{remark} Adaptive integration often results in much greater accuracy than requested. With this in mind, we set $\varepsilon$ in the termination criterion to be somewhat larger than the precision requested. Our choice is based on extensive numerical experimentation and the full set of parameters used in our implementation is available at~\url{https://gitlab.com/fastalgorithms/fmm3dbie}. \end{remark} \begin{remark} To further improve the performance of computing~$I^{j}_{nm}(\bx)$, we make use of {\em two} parameters:~$\eta$ and~$\eta_1 < \eta$. We only use adaptive integration for the nearest targets, inside $N_{\eta_1}(\Gamma_j)$. For targets $\bx \in N_{\eta}(\Gamma_{j}) \setminus N_{\eta_{1}}(\Gamma_{j}) $, we use a single oversampled quadrature without any adaptivity. This is slightly more expensive in terms of function evaluations, but eliminates the branching queries of adaptive quadrature and allows the use of highly optimized linear algebra libraries. From extensive numerical experiments, we have found that $\eta_{1} = 1.25$ provides a significant speedup. 
\end{remark}
\subsection{Error Analysis}
There are two sources of error in the computation of $\mathcal S_{k}[\sigma]$: the discretization error due to discretizing the charts $\bX^{j}$ using order $p$ Vioreanu-Rokhlin nodes, and the quadrature error due to using different quadrature rules for evaluating integrals over the discretized patches. As shown in~\cite{Atkinson95}, the discretization error can be bounded by
\begin{multline} \label{eq:errdisc} \left| \mathcal S_{k}[\sigma](\bx) - \sum_{j=1}^{\Npat} \int_{u=0}^{1} \int_{v=0}^{1-u} G_{k}(\bx, \bX^{j}(u,v)) \sigma(u,v) J^{j}(u,v) \, du \, dv \right| \\ = \left| \mathcal S_{k}[\sigma](\bx) - \sum_{j=1}^{\Npat} L_{j}(\bx) \right| \leq C h^{p}, \end{multline}
where $h = \max_{j} R_{j}$ and $C$ is a domain-dependent constant. For a given $\bx \in \Gamma_{m}$, our method approximates the layer potential as
\begin{multline} I(\bx) = \sum_{\ell=1}^{N(\bx)} G_{k}(\bx, \bX^{m}(u_{\bx,\ell},v_{\bx,\ell})) \, \sigma(u_{\bx,\ell},v_{\bx,\ell}) \, J^{m}(u_{\bx,\ell},v_{\bx,\ell}) w_{\bx,\ell} \ + \\ \sum_{\Gamma_{j} \in T_{\eta}(\bx)} \sum_{\ell=1}^{n_{p}} a_{\ell}^{j}(\bx) \sigma^{j}_{\ell} \ + \sum_{\Gamma_{j} \in F_{\eta}(\bx)} \sum_{\ell=1}^{n_{q_{j}}} G_{k}(\bx, \bX^{j}(u_{\ell,q_{j}},v_{\ell,q_{j}})) J^{j}(u_{\ell,q_{j}},v_{\ell,q_{j}}) w_{\ell,q_{j}} \tilde{\sigma}^{j}_{\ell} \, , \end{multline}
whose three terms are approximations to $\Sself$, $\Snear$, and $\Sfar$, respectively. Here $(u_{\bx,\ell},v_{\bx,\ell}), w_{\bx,\ell}$ are the auxiliary nodes and weights on the self patch, $a^{j}_{\ell}(\bx)$ are the near field quadrature corrections computed via adaptive integration, $\tilde{\sigma}^{j}_{\ell}$ is the oversampled density, and $(u_{\ell,q},v_{\ell,q}), w_{\ell,q}$, $\ell=1,2,\ldots n_{q}$ are the order $q$ Vioreanu-Rokhlin nodes on $T_{0}$. Using the estimates in~\cref{eq:errestself,eq:errestnear,eq:errestfar}, combined with~\cref{eq:errdisc}, we conclude that
\begin{equation} \left| \mathcal S_{k}[\sigma](\bx) - I(\bx) \right| \leq C h^{p} + \varepsilon \| \sigma \|_{\mathbb{L}^{2}(\Gamma)} \,. \end{equation}
\begin{remark} As shown in~\cite{Atkinson95}, the discretization error for evaluating $\mathcal D_{k}[\sigma]$ is $O(h^{p-1})$. The quadrature error analysis remains the same and we can evaluate $\mathcal D_{k}[\sigma](\bx)$ with accuracy $O(h^{p-1}) + \varepsilon \| \sigma \|_{\mathbb{L}^{2}(\Gamma)}$. \end{remark}
\section{Coupling quadratures to FMMs} \label{sec:fmmcoupling}
For a complete description of three-dimensional FMMs applied to sums of the form \eqref{fmmpt}, we refer the reader to the original papers \cite{fmm2,wideband3d,greengard-1997,greengard-huang}. In order to understand the modifications needed for evaluating layer potentials, however, we will need to make reference to the adaptive oct-tree data structures on which the FMM is built. We briefly summarize that construction here.
\subsection{Level-restricted, adaptive oct-trees}
Suppose for the moment that we are given a collection of $N$ points, contained in a cube $C$. We will superimpose on $C$ a hierarchy of refinements as follows: the root of the tree is $C$ itself and is defined as {\em level 0}. Level $l+1$ is obtained from level $l$ recursively by subdividing each cube at level $l$ into eight equal parts, so long as the number of points in that cube at level $l$ is greater than some specified parameter $s$. The eight cubes created in the above step are referred to as the children of the subdivided box; conversely, the box which was divided is referred to as their parent.
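A minimal sketch of this recursive subdivision is given below (the function and the dictionary-based box record are ours, purely for illustration; the level restriction discussed next and the patch-tethering modification described below are omitted):
\begin{verbatim}
import numpy as np

def build_octree(points, idx, center, half_width, s=50, level=0, boxes=None):
    # points     : (n, 3) array of all point coordinates
    # idx        : indices of the points contained in the current box
    # center     : (3,) center of the current box; half_width: half its side length
    # s          : a box is subdivided only if it contains more than s points
    if boxes is None:
        boxes = []
    is_leaf = len(idx) <= s
    boxes.append({"level": level, "center": center, "half_width": half_width,
                  "indices": idx, "is_leaf": is_leaf})
    if is_leaf:
        return boxes
    # assign each point to one of the eight children via three sign bits
    octant = ((points[idx] > center) * np.array([1, 2, 4])).sum(axis=1)
    for child in range(8):
        sub = idx[octant == child]
        shift = (np.array([child & 1, (child >> 1) & 1, (child >> 2) & 1]) - 0.5) * half_width
        build_octree(points, sub, center + shift, half_width / 2, s, level + 1, boxes)
    return boxes

# usage: boxes = build_octree(pts, np.arange(len(pts)), np.zeros(3), 0.5)
\end{verbatim}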
When the refinement has terminated, $C$ is covered by disjoint childless boxes at various levels of the hierarchy (depending on the local density of the given points). These childless boxes are referred to as leaf nodes. For any box $D$ in the hierarchy, other boxes at the same level that touch $D$ are called its {\em colleagues}. For simplicity, we assume that the oct-tree satisfies a standard restriction, namely that two leaf nodes which share a boundary point must be no more than one refinement level apart. In creating the adaptive data structure as described above, it is very likely that the level-restriction criterion is not met. Fortunately, assuming that the tree constructed to this point has $O(N)$ leaf nodes and that its depth is of the order $O(\log N)$, it is straightforward to enforce the level restriction in a second step requiring~$O(N \log N)$ effort with only a modest amount of additional refinement~\cite{treebook}.
\subsection{Precomputation}
To reiterate, on input, we assume we are given a surface~$\Gamma$ consisting of (curvilinear) triangles~$\Gamma_j$,
\begin{equation} \Gamma = \cup_{j=1}^{\Npat} \, \Gamma_{j}, \end{equation}
each given to the desired order of accuracy~$p$. Each~$\Gamma_j$ is then discretized using~$n_p$ points which are the images under the map~$\bX^j: T_0 \to \bbR^3$ of the Vioreanu-Rokhlin nodes on the standard simplex~$T_0$. We will refer to these as the {\em discretization nodes}, on which we assume that samples of the density~$\sigma$ are known. The total number of such points is $N = \Npat \times n_p$. As above, we let~$\bc_j$ denote the centroid of the $j$th patch and $R_j$ the radius of the smallest sphere centered at $\bc_j$ that contains~$\Gamma_j$. We assume there are $N_T$ targets, which could be either the discretization nodes themselves, a collection of off-surface points, or both. In coupling the FMM to local quadratures, we need to determine, for each surface patch, which targets are in its near field and what order Vioreanu-Rokhlin quadratures are needed for the far field computation. Both are controlled by the parameter $\eta$, as discussed in \Cref{sec:etachoice}. The default value for $\eta$ is $2.75$, $2$, or $1.25$ depending on whether the desired order of accuracy is $p \leq 4$, $4 < p \leq 8$ or $p> 8$, respectively. The first step is to build an adaptive oct-tree based on the patch centroids $\{ \bc_j \}$ and the target locations, with one minor modification. That modification is to prevent triangle centroids associated with large triangles from propagating to fine levels during the tree construction. For this, suppose $\bc_j$ is in some box, denoted $D(\bc_j)$, at level $l$, and let $d$ denote the linear dimension of $D(\bc_j)$. If $2 \eta \, R_j > d$, then we leave the centroid associated with $D(\bc_j)$, while allowing smaller triangles and/or targets to be associated with the children. We will say that $\Gamma_j$ is {\em tethered} at level $l$. The near field for each patch is now easy to determine. For each triangle $\Gamma_j$, let $D(\bc_j)$ denote the box to which the triangle centroid is associated: either a leaf node or the box at a coarser level $l$ if $\Gamma_j$ is tethered there. Clearly, if $D(\bc_{j})$ is not a leaf box, then the near field region $B_{\eta R_j}(\bc_j)$ is contained within $D(\bc_j)$ and its colleagues.
If $D(\bc_{j})$ is a leaf box, then the near field region $B_{\eta R_j}(\bc_j)$ is contained within the union of $D(\bc_{j})$, its colleagues, and leaf boxes which are larger in size than $D(\bc_{j})$ and which share a boundary with $D(\bc_{j})$. Scanning those colleagues, all targets $\bx$ that do not lie on $\Gamma_j$ itself and satisfy the criterion \[ |\bx-\bc_{j}| < \eta R_{j} \] are assigned to the {\em near field list} for $\Gamma_j$. One can then compute the near field quadratures using the method of \Cref{sec:nearfieldquad} for each point in the target list. This requires storing a matrix of dimension $N_{near}(j) \times n_{p}$, where $N_{near}(j)$ is the size of the target list. We will denote this matrix by ${\cal N}_j$. Assuming one wishes to evaluate the layer potential on surface, we also need to compute the self interactions for each triangle using the generalized Gaussian quadrature scheme of \cite{bremer_2012c,bremer_2013}, as described in \Cref{sec:local}. This requires storing an $n_p \times n_p$ matrix for each patch, which we will denote by ${\cal S}_j$. Once the near field work has been carried out, the far field quadrature order $q_j$ is determined, as described in \Cref{sec:oversamp}. One can then interpolate from the $n_p$ discretization nodes on $\Gamma_j$ to the $n_{q_j}$ quadrature nodes on $\Gamma_j$ using the Koornwinder basis for interpolation. We will denote by $N_{over}$ the total number of oversampled points used: $N_{over} = \sum_{i = 1}^{\Npat} n_{q_i}$. \subsection{Fast evaluation of layer potentials} The simplest FMM-based scheme for evaluating a layer potential is to call the point-based FMM in the form \eqref{fmmpt}, with $N_{over}$ sources and $N_T$ targets. For every target, if it is in the near field of patch $\Gamma_j$, one subtracts the contribution made in the naive, point-based FMM calculation from the $n_q$ oversampled points on that patch. The potential at the target can then be incremented by the appropriate, near field quadrature-corrected interactions, using the stored matrix ${\cal N}_j$. If the target is on surface (one of the discretization nodes on $\Gamma_j$ itself), the correct self interaction is obtained from the precomputed matrix ${\cal S}_j$. We refer to the algorithm above as the \emph{subtract-and-add} method. It has the drawback that it could suffer from catastrophic cancellation for dense discretizations with highly adaptive oct-trees since the near field point contributions within the naive FMM call are spurious and could be much greater in magnitude than the correct contributions. (In practice, we have not detected any such loss of accuracy, at least for single or double layer potentials.) For readers familiar with the FMM, it is clear that one could avoid the need to compute and then subtract spurious contributions, by disabling the direct (near neighbor) interaction step in the FMM. When looping over the leaf nodes, for each source-target pair, one can first determine whether the source is on a patch for which the target is in the far field. If it is, carry out the direct interaction. If it is in the near field, omit the direct interaction. When the FMM step is completed, the subsequent processing takes place as before, but there is no need to subtract any spurious contributions. It is, perhaps, surprising that the \emph{subtract-and-add} method is faster in our current implementation, even though more flops are executed. 
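To fix ideas, the post-processing used in the subtract-and-add variant can be sketched as follows (illustrative Python with hypothetical array layouts; the self-interaction correction via ${\cal S}_j$ is applied in the same manner and omitted for brevity):
\begin{verbatim}
import numpy as np

def helmholtz_kernel(x, y, k):
    # G_k(x, y) = exp(ik|x - y|) / (4 pi |x - y|); assumes no coincident points
    r = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    return np.exp(1j * k * r) / (4 * np.pi * r)

def subtract_and_add(pot_fmm, targets, k, over_src, over_chg, patch_of_src,
                     near_targets, near_mats, sigma_samples):
    # pot_fmm       : (N_T,) naive point-FMM potential from ALL oversampled sources
    # over_src      : (N_over, 3) oversampled nodes; over_chg their smooth-rule
    #                 "charges", i.e. J^j * w * (interpolated density)
    # patch_of_src  : (N_over,) patch index of each oversampled node
    # near_targets  : near_targets[j] = indices of targets in N_eta(Gamma_j)
    # near_mats     : near_mats[j]    = precomputed correction matrix for patch j
    # sigma_samples : sigma_samples[j] = density samples at the n_p nodes of patch j
    pot = pot_fmm.copy()
    for j, idx in enumerate(near_targets):
        if len(idx) == 0:
            continue
        src = np.flatnonzero(patch_of_src == j)
        # subtract the spurious smooth-rule contribution of patch j at its near targets
        pot[idx] -= helmholtz_kernel(targets[idx], over_src[src], k) @ over_chg[src]
        # add back the locally corrected near field quadrature
        pot[idx] += near_mats[j] @ sigma_samples[j]
    return pot
\end{verbatim}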
This speed advantage is largely due to the logical overhead of the modified direct-interaction loop and the bottlenecks it introduces in loop unrolling and other compiler-level code optimizations.
\begin{remark} The adaptive oct-tree used in the point FMM is different from the one used for determining the near field of the patches. The latter is constructed based on centroid and target locations, while the point FMM oct-tree is constructed based on \emph{oversampled} source and target locations. Thus, different termination criteria can be chosen for the construction of these oct-trees in order to optimize the performance of the separate tasks. Since the additional processing required for evaluating layer potentials is decoupled from the algorithm used for accelerating the far field interactions, one could use any fast hierarchical algorithm like the FMM, an FFT-based scheme like fast Ewald summation, or a multigrid-type PDE solver. \end{remark}
\section{Numerical examples} \label{sec:examples}
In this section, we illustrate the performance of our approach. For Examples \ref{subsec-cpu}, \ref{subsec-aspect-ratio}, and \ref{subsec-num-order}, we consider a twisted torus as the geometry (typical of stellarator design in plasma physics applications). The boundary $\Gamma$ is parameterized by $\bX: [0,2\pi]^2 \to \Gamma$ with
\begin{equation} \bX(u,v) = \sum_{i=-1}^{2} \sum_{j=-1}^{1} \delta_{i,j} \begin{bmatrix} \cos{v} \cos{((1-i)\, u+j\, v)} \\ \sin{v} \cos{((1-i) \, u+j \, v)} \\ \sin{((1-i)\, u+j \, v)} \end{bmatrix} \, , \end{equation}
where the non-zero coefficients are $\delta_{-1,-1}=0.17$, $\delta_{-1,0} = 0.11$, $\delta_{0,0}=1$, $\delta_{1,0}=4.5$, $\delta_{2,0}=-0.25$, $\delta_{0,1} = 0.07$, and $\delta_{2,1}= -0.45$. (See Fig. \ref{fig:stell-disc}.) The code was implemented in Fortran and compiled using the GNU Fortran 10.2.0 compiler. We use the point-based FMMs from the FMM3D package (\url{https://github.com/flatironinstitute/FMM3D}). All CPU timings in these examples were obtained on a laptop using a single core of an Intel i5 2.3~GHz processor.
\begin{figure} \centering \includegraphics[width=0.6\linewidth]{stellarator.pdf} \caption{The boundary of a stellarator-like geometry. The surface is colored by its $z$-coordinate.} \label{fig:stell-disc} \end{figure}
We use the following metrics to demonstrate the performance of our approach. As above, for discretization order $p$, we let $n_p = p(p+1)/2$, and we let $q_{j}$ denote the far-field quadrature order for $\Gamma_{j}$. The user-specified precision is denoted by $\varepsilon$. Recall that the total number of discretization points on the boundary is denoted by $N = \Npat \cdot n_p$ and that the total number of oversampled nodes is denoted by $\Nover = \sum_{j=1}^{\Npat} n_{q_j}$. We define the oversampling parameter by $\alpha= \Nover/N$. The memory requirements per discretization node for storing all interactions in $\Snear$ are given by
\begin{equation} m = \frac{n_p \left( \sum_{j=1}^{\Npat} N_{near}(j) + n_p \right)}{N}. \end{equation}
This accounts for both off-surface targets and on-surface evaluation. Let $\tinit$ denote the time required to precompute all near field quadrature corrections, and let $\tlp$ denote the time for evaluating the layer potential given the precomputed near field quadratures. Then the quantities $\sinit= N/\tinit$ and $\slp=N/\tlp$ are the speeds of the corresponding steps, measured in points processed per second. One feature of the surface triangulation that has some influence on speed is the aspect ratio of the patches.
Letting $\sigma_{1},\sigma_{2}$ be the eigenvalues of the first fundamental form of $\Gamma_{j}$, we define its aspect ratio by \begin{equation} a_{j} = \sqrt{\frac{\int_{\Gamma_{j}} \left(\frac{\sigma_{1}}{\sigma_{2}}\right)^2 da(\by)}{\int_{\Gamma_{j}} da(\by)}} \, . \end{equation} We let $\amax = \max_{j} a_{j}$ and $\aavg = \sum_{j} a_{j}/\Npat$, the maximum aspect ratio and the average aspect ratio over all triangles, respectively. \subsection{Memory and oversampling requirements \label{subsec-cpu}} To illustrate the performance of our method as a function of the order of accuracy $p$ and the requested precision $\varepsilon$, we consider the evaluation of the single layer potential $\mathcal S[\sigma]$ with frequency $k=1$ on the stellarator geometry discretized with $\Npat=2400$ (the diameter of the stellarator with $k=1$ is approximately 1.7 wavelengths). In~\Cref{tab:numerical-perf-res}, we tabulate the memory requirements per point $m$, the oversampling $\alpha$, and the speeds $\sinit$ and $\slp$, as we vary $p$ and $\varepsilon$. The scheme behaves as expected: for fixed $p$, as $\varepsilon \rightarrow 0$, $\alpha$ increases while $\sinit$ and $\slp$ decrease. The oversampling parameter $\alpha$ depends on both $p$ and $\varepsilon$, as discussed in \Cref{sec:oversamp}. \afterpage{ \input{perf-table-res.tex} \input{asp-table-res4.tex} \begin{figure} \centering \begin{subfigure}[t]{.45\linewidth} \centering \includegraphics[width=.95\linewidth]{conv-green.pdf} \caption{Relative $L^2$ error in Green's identity, denoted by $\varepsilon_{g}$.} \end{subfigure} \quad \begin{subfigure}[t]{.45\linewidth} \centering \includegraphics[width=.95\linewidth]{conv-cfie.pdf} \caption{Relative $L^{\infty}$ error in solution to integral equation, denoted by $\varepsilon_{a}$.} \end{subfigure} \caption{Relative errors of layer potential evaluations. In both figures, the dashed colored lines are reference curves for $h^{p-1}$ with the corresponding $p$. The dashed black line is a reference line for the specified tolerance~$\varepsilon$. } \label{fig:ooc} \end{figure} \clearpage } \subsection{Effect of aspect ratio \label{subsec-aspect-ratio}} To investigate the effect of triangle quality on the performance of our method, we vary the average aspect ratio of the discretization. The task at hand is again to compute $\mathcal S[\sigma]$ with $k=1$ on the stellarator, while varying the triangulation without a significant change in the total number of patches. In~\Cref{tab:numerical-asp-res}, we tabulate $\aavg$, $\Npat$, $\alpha$, $m$, $\sinit$, and $\slp$ for $p=4$ and $\varepsilon = 5 \cdot 10^{-7}$. We note that (except for the oversampling parameter), the performance of the approach deteriorates as the average aspect ratio of the discretization is increased, especially in the precomputation phase. \subsection{Order of convergence \label{subsec-num-order}} To demonstrate the accuracy of our approach, we consider two tests. First, we verify Green's identity along the surface: \begin{equation} \frac{u}{2} = \mathcal S_{k} \left[\frac{\partial u}{\partial n}\right] - \mathcal D_{k}[u] \, , \end{equation} where $u$ is the solution to the Helmholtz equation in the interior of the domain $\Omega$ generated by a point source located in the exterior. The second test is to use the combined field representation \eqref{eq:urep} to solve the Dirichlet problem for an unknown density~$\sigma$. 
With the right-hand side in the corresponding integral equation \eqref{eq:inteq} taken to be a known solution $u$, $\sigma$ satisfies \begin{equation} \frac{\sigma}{2} + \mathcal D_{k}[\sigma] - ik \mathcal S_{k}[\sigma] = u, \qquad \text{on } \Gamma. \end{equation} We can then check that $u$ is correctly reproduced at any point in the exterior. For both of these tests, we discretize the stellarator using $\Npat = 600$, $2400$, and $9600$, and compute the layer potentials with a tolerance of $\varepsilon = 5 \times 10^{-7}$ for both the quadratures and the FMM. In~\Cref{fig:ooc}, we plot the relative $L^{2}$ error in Green's identity $\varepsilon_{g}$ (left), and the relative $L^{\infty}$ error at a point in the interior $\varepsilon_{a}$ (right) as we vary the order of discretization. Note that in both tests, the errors decrease at the rate $h^{p-1}$ until the tolerance $\varepsilon$ is reached. This is consistent with the analysis in \cite{Atkinson95}. \subsection{Large-scale examples} We demonstrate the performance of our solver on several large-scale problems. We first solve for the electrostatic field induced by an interdigitated capacitor, followed by Dirichlet and Neumann boundary value problems governed by the Helmholtz equation in the exterior of an aircraft. Our last example involves scattering in a medium with multiple sound speeds, modeled after a Fresnel lens. The results in this section were obtained using an Intel Xeon Gold 6128 Desktop with 24 cores. \subsubsection*{Interdigitated capacitor} \label{subsec:cap} A challenging problem in electrostatics is the calculation of the capacitance of a configuration of two perfect compact conductors with complicated contours, which may also be close to touching. (See~\cref{fig:cap}.) The capacitance is defined as the ratio $C = Q/V$, where $V$ is the potential difference between the conductors, $Q$ is the total charge held on one conductor and $-Q$ is the total charge held on the other. In simulations, $C$ can be computed in two ways. First, one can solve the Dirichlet problem for the electrostatic potential $u$, with $u= 0$ on one conductor and $u = 1$ on the other. From the computed solution, the total charge can be obtained via the integral~\cite{jackson} \begin{equation} Q = \int_\Gamma \frac{\partial u}{\partial n}(\bx') \, da(\bx'). \end{equation} The capacitance is then $C = Q/1 = Q$. A second (equivalent) approach, which we will take here, is to place a net charge~$q_{1} = -1$ on one conductor~$\Omega_1$ with boundary $\Gamma_{1}$ and a net charge of~$q_{2} = 1$ on the other conductor~$\Omega_2$ with boundary $\Gamma_{2}$. One can then determine the corresponding potential difference by solving the following boundary value problem for the potential~$u$ in the domain~$E$ exterior to $\Omega_1$ and $\Omega_2$, i.e. the domain~$E = \bbR^3 \setminus (\Omega_1 \cup \Omega_2)$: \begin{equation} \begin{aligned} \Delta u &= 0, &\quad &\bx \in E , \\ u &= V_{i}, & &\bx \text{ on } \Gamma_{i}, , \\ -\int_{\Gamma_{j}} \frac{\partial u}{\partial n} \, da &= q_{i}, & & \\ u &\to 0 & &\text{ as } |\bx| \to \infty. \end{aligned} \end{equation} Here, the constants $V_{1}$, $V_{2}$ are unknowns as well as the potential $u$. The \emph{elastance} of the system is then given by $P = (V_{2}-V_{1})/(q_{2}-q_{1}) = (V_{2}-V_{1})/2$. It is the inverse of the corresponding capacitance $C = 1/P = 2/(V_{2}-V_{1})$. 
This formulation (which can involve more than two conductors) is generally referred to as the \emph{elastance} problem (see~\cite{elastance-ref} and the references therein). PDEs of this type where Dirichlet data is specified up to an unknown constant are sometimes called modified Dirichlet problems~\cite{mikhlin-book}. We represent the solution $u$ using the combined field representation, \begin{equation} u = S_{0}[\rho] + D_{0}[\rho] , \end{equation} where~$\rho$ is an unknown density. Imposing the boundary conditions on~$\Gamma_{i}$, $\rho$ must satisfy the integral equation \begin{equation} \begin{aligned} \rho/2 + D_{0}[\rho] + S_{0}[\rho] &= V_{i}, &\quad &\bx \text{ on } \Gamma_{i} , \\ \int_{\Gamma_{i}} \rho &= -q_{i}, & &i=1,2 . \end{aligned} \label{eq:cap-inteq} \end{equation} We discretize the surface~$\Gamma$ with~$\Npat=29,888$ and~$p=4$, and then solve the resulting linear system of size $N=298,882$ using GMRES. We set the quadrature tolerance $\varepsilon=5\times 10^{-7}$. For this setup, $\tinit=6.7$s, $\alpha=2.98$, $m=154.2$. GMRES converged to a relative residual of $5 \times 10^{-7}$ in 25 iterations, and the solution was obtained in $128.4$s. The reference capacitance for the system was computed by refining each patch until it had converged to 5 significant digits, given by $2237.1$. The relative error in the computed capacitance was $2.2 \times 10^{-4}$. In~\cref{fig:cap}, we plot the computational mesh and the solution $\rho$ on the surface of the conductors. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{capacitor-final.pdf} \caption{(left) A 4th-order computational mesh of the boundary, and (right) the solution $\rho$ to~\cref{eq:cap-inteq}} \label{fig:cap} \end{figure} \subsubsection*{Scattering from an airplane (sound-soft)} \label{subsec:plane} In this section we demonstrate the performance of our method on a moderate frequency acoustic scattering problem. The model airplane is $49.3$ wavelengths long, with a wingspan of $49.2$ wavelengths and a vertical height of $13.7$ wavelengths, which we assume has a sound-soft boundary, satisfying Dirichlet boundary conditions (see section \ref{sec:surface}). The plane also has several multiscale features: 2 antennae on the top of the fuselage, and 1 \emph{control unit} on the bottom of the fuselage. (See \Cref{fig:plane}.) The plane is discretized with $\Npat=125,344$, and $p=4$ resulting in $N=1,253,440$ discretization points. The ratio of the largest to the smallest patch size, measured by the enclosing sphere radius $R_{j}$ in~\cref{eq:rjdef}, is $483.9$. In~\cref{fig:hist-plane}, we plot a histogram of the patch sizes $R_{j}$, and the aspect ratios of the patches on the plane. The worst case patch has an aspect ratio of $35$, but only $322$ patches out of the total $125,344$ have an aspect ratio of greater than $10$. \begin{figure}[b] \begin{center} \includegraphics[width=0.75\linewidth]{plane-hist.pdf} \end{center} \caption{Histogram of size of patches $R_{j}$ (left), and aspect ratio $a_{j}$ right.} \label{fig:hist-plane} \end{figure} In order to have an analytic reference solution, we assume the Dirichlet boundary data $u|_\Gamma$ for the governing exterior Helmholtz equation is generated using a collection of $123$ interior sources, $19$ of which are in the tail. 
Using a combined field representation \[ u = \mathcal D_{k}[\sigma] - ik \, \mathcal S_{k}[\sigma], \] imposing the Dirichlet condition yields the combined field integral equation for the unknown density~$\sigma$: \begin{equation} \frac{\sigma}{2} + \mathcal D_{k}[\sigma] - ik \mathcal S_{k}[\sigma] = u|_\Gamma \, . \label{eq:plane-inteq} \end{equation} For a quadrature tolerance of $5 \times 10^{-7}$, $\tinit=75.86$s, the oversampling factor $\alpha=4.51$, and the memory cost per discretization point is $m=149.6$. GMRES converged to a relative residual of $5 \times 10^{-7}$ in $59$ iterations, and the solution was obtained in $4,694$s. We plot the solution on a $301 \times 301$ lattice of targets on a slice which cuts through the wing edge, whose normal is given by $(0,0,1)$. For targets in the interior of the airplane, we set the error to $1\times 10^{-6}$ since the computed solution there is not meaningful. The layer potential evaluation at all targets required only $\tinit + \tquad = 104.37$s. In~\cref{fig:plane}, we plot the density $\sigma$ on the airplane surface and the relative error on the slice with $301 \times 301$ targets. The maximum relative error at all targets is $5 \times 10^{-4}$. \begin{figure}[t!] \begin{center} \includegraphics[width=\linewidth]{plane-dir-final.pdf} \end{center} \caption{The solution $\sigma$ to~\cref{eq:plane-inteq}, and the relative error in the solution at a grid of $301\times301$ targets on a horizontal slice intersecting the wing edge. Zoomed in views of one antenna (top right) and the control unit (bottom left) indicate the extent of the fine multiscale features.} \label{fig:plane} \end{figure} \subsubsection*{Scattering from an airplane (sound-hard)} \label{subsec:plane2} In this section we solve the Helmholtz equation in the exterior of the plane, assuming Neumann boundary conditions instead. These arise in the modeling of sound-hard scatterers \cite{colton_kress}. We will use the following regularized combined field integral representation of~\cite{bruno_2012} for the solution: \begin{equation} \label{eq:plane-rep_hard} u=\mathcal S_{k}[\sigma]+i\alpha \mathcal D_{k}\left[ \mathcal S_{i|k|}[\sigma] \right]. \end{equation} Applying the boundary condition $\partial u/\partial n=g$ along~$\Gamma$ leads to the second-kind integral equation: \begin{equation} \label{eq:plane-int_hard_far} -\frac{\sigma}{2} + \mathcal S'_{k}[\sigma] + i\alpha \mathcal D'_{k}\left[ \mathcal S_{i|k|}[\sigma] \right] = g. \end{equation} Using a well-known Calder\'on identity for the operator $\mathcal D'_{i|k|} \mathcal S_{i|k|}$~\cite{nedelec}, this equation can be re-written so as to avoid the application of the hypersingular operator~$\mathcal D_k'$: \begin{equation} \label{eq:plane-int_hard_self} -\left( \frac{2+i\alpha}{4} \right) \sigma + \mathcal S'_{k}[\sigma] + i\alpha \left( \mathcal D'_{k}-\mathcal D'_{i|k|} \right) \left[ \mathcal S_{i|k|}[\sigma] \right] + i\alpha \mathcal S'^2_{i|k|}[\sigma] = g. \end{equation} A number of different possibilities for the regularizing operator are available depending on the frequency range (see~\cite{colton_kress_inverse, vico_2014}). Examining the above integral equation, it is clear that a total of four separate FMM calls and four local quadrature corrections will be needed. In order to have an analytic reference solution, we generate the Neumann boundary data $u|_\Gamma$ for the governing exterior Helmholtz equation by using the same~$123$ interior sources as in the Dirichlet problem. 
For a quadrature tolerance of $5 \times 10^{-7}$, $\tinit=387.4$s, the oversampling factor $\alpha=4.51$, and the memory cost per discretization point is $m=598.3$. GMRES converged to a relative residual of $5 \times 10^{-7}$ in $35$ iterations, and the solution was obtained in $7617$s. We plot the solution on the $301 \times 301$ lattice of targets used for the Dirichlet problem. As before, for targets in the interior of the airplane, we set the error to $1\times 10^{-6}$ since the computed solution there is not meaningful. The layer potential evaluation at all targets required only $\tinit + \tquad = 159.6$s. In~\cref{fig:plane-neu}, we plot the induced density~$\sigma$ on the airplane surface and the relative error on the slice with $301 \times 301$ targets. The maximum relative error at all targets is $5 \times 10^{-4}$. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{plane-neu-final.pdf} \end{center} \caption{The solution~$\sigma$ to integral equation~\eqref{eq:plane-int_hard_self}, and the relative error in the solution to the PDE at a grid of~$301\times301$ targets on a horizontal slice intersecting the wing edge. Zoomed in views of one antenna (top right) and the control unit (bottom left) indicate the extent of the fine multiscale features.} \label{fig:plane-neu} \end{figure} \subsubsection*{Scattering through a Fresnel lens (multiple sound speeds)} \label{subsec:lens} In this section we solve the Helmholtz transmission problem, i.e. scattering through media with various sound speeds, in a Fresnel lens geometry (see~\Cref{fig:fresnel}). The Helmholtz parameter for the interior region (the lens) is~$k=\omega\sqrt{\epsilon\mu}$ and for the exterior region (free space) is $k_0=\omega\sqrt{\epsilon_0\mu_0}$. We assume that the known incoming field~$u^{\In}$ exists only in the exterior, so that the total field in the exterior is given by~$u_t = u_0 + u^{\In}$ where~$u_0$ is the scattered field. In the interior, the total field is merely $u_t = u$, with~$u$ the scattered field. In the piecewise constant sound speed setup, we enforce the following transmission conditions across interfaces: \begin{equation} \begin{aligned} u_0-u&=-u^{\In}|_\Gamma, \\ \frac{1}{\epsilon_0}\frac{\partial u_0}{\partial n}-\frac{1}{\epsilon_1}\frac{\partial u}{\partial n}&=-\frac{1}{\epsilon_0}\frac{\partial u^{\In}}{\partial n}\Big|_\Gamma. \end{aligned} \end{equation} The scattered field in the exterior region,~$u_0$, and in the interior region,~$u$, are represented as, respectively~\cite{colton_kress}: \begin{equation} \begin{aligned} u_0&=\epsilon_0\mathcal D_{k_0}[\rho]+ \epsilon_0^2\mathcal S_{k_0}[\sigma], \\ u&=\epsilon\mathcal D_{k}[\rho]+ \epsilon^2\mathcal S_{k}[\sigma]. \end{aligned} \end{equation} This leads to the system of boundary integral equations along~$\Gamma$ \begin{equation} \begin{aligned} \Big(\frac{\epsilon_0+\epsilon}{2}\Big)\rho+\Big(\epsilon_0\mathcal D_{k_0}-\epsilon\mathcal D_{k}\Big)[\rho]+\Big(\epsilon_0^2\mathcal S_{k_0}-\epsilon^2\mathcal S_{k}\Big)[\sigma]& = -u^{\In},\\ \Big(\frac{\epsilon_0+\epsilon}{2}\Big)\sigma-\Big(\mathcal D'_{k_0}-\mathcal D'_{k}\Big)[\rho]-\Big(\epsilon_0\mathcal S'_{k_0}-\epsilon\mathcal S'_{k}\Big)[\sigma]&= - \frac{1}{\epsilon_0} \frac{\partial u^{\In}}{\partial n}. \end{aligned} \label{eq:trans-sys} \end{equation} In the following example, the Fresnel lens has~$\epsilon=2$,~$\mu=1$ and as usual, in free-space~$\epsilon_0=\mu_0=1$. 
The angular frequency is set to be~$\omega=1+\sqrt{2}$, and the annular step size in the lens equals 1. Relative to the exterior wavenumber, the Fresnel lens is 19.11 wavelengths in diameter and has a height of 0.84 wavelengths. The geometry was designed in GiD~\cite{gid}, and a 4th-order curvilinear mesh was constructed using the method described in~\cite{vico2020mesh}. The mesh consists of 62,792 curvilinear triangles, each discretized to 5th order yielding a total of 941,880 discretization points. See Figure~\ref{fig:fresnel}. We solve the transmission problem in response to an incoming plane wave $u^{inc}=e^{ik_{0}z}$. For a quadrature tolerance of $5 \times 10^{-7}$, $\tinit=238.2$s, the oversampling factor $\alpha=3.07$, and the memory cost per discretization point is $m=762.0$. GMRES converged to a relative residual of $5 \times 10^{-7}$ in $294$ iterations, and the solution was obtained in $23029$s. In~\cref{fig:fresnel} we plot the absolute value of the total field $|u_{t}| = |u_{0} + u^{inc}|$ on a $1000 \times 1000$ lattice of targets in the $yz$ plane in the exterior region. The layer potential evaluation at all targets required only $\tinit+\tquad=81.1$s. We also plot the real part of the density $\rho$ on the surface of the lens. The accuracy of the solution is estimated by solving a transmission problem whose boundary data is computed using known solutions to the Helmholtz equation in the interior and exterior (the interior Helmholtz solution is the potential due to a point source in the exterior and the exterior Helmholtz solution is the potential due to a point source in the interior). The maximum relative error for the computed solution at the same target grid as above is $8.4\times 10^{-5}$. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{fresnel-final.pdf} \caption{On the left, we illustrate the triangulation of the lens surface. On the right is a plot of the real part of the solution $\rho$ to~\cref{eq:trans-sys} on the surface of the lens and the absolute value of the total field above the lens where the central focusing of the beam is clearly visible.} \label{fig:fresnel} \end{figure} \section{Conclusions} \label{sec:conclusions} In this paper, we have presented a robust, high-order accurate method for the evaluation of layer potentials on surfaces in three dimensions. While our examples have focused on the Laplace and Helmholtz equations, the underlying methodology extends naturally to other problems in mathematical physics where the governing Green's function is singular -- thereby requiring specialized quadrature schemes -- but compatible with fast multipole acceleration in the far field. To determine the highest performance scheme, we implemented generalized Gaussian quadrature \cite{bremer_2012c,bremer_2013,bremer-2015,bremer}, QBX \cite{klockner_2013,Siegel2018ALT,Wala2018,Wala2020}, and coordinate transformation schemes similar to \cite{bruno2001fast,malhotra19,ying}. After various code optimizations (at least for surfaces defined as collections of curved, triangular patches), we found that generalized Gaussian quadrature with careful reuse of precomputed, hierarchical, adaptive interpolation tables was most efficient, as illustrated in the preceding section. It may be that an even better local quadrature scheme emerges in the future (in particular, extensions of the idea presented in~\cite{wu2020corrected} look quite promising). In that case, as discussed in \Cref{sec:fmmcoupling}, coupling to the FMM will be fundamentally unchanged. 
A useful feature of locally corrected quadrature rules (of any kind) is that the procedure is trivial to parallelize by assigning a different patch to each computational thread. Thus, acceleration on multi-core or high performance platforms is straightforward. Finally, although we limited ourselves here to evaluating layer potentials and solving integral equations iteratively, we note that our quadrature generation scheme and oct-tree data structure are compatible for use with fast direct solvers \cite{bremer-2015,greengard-2009,ho-2012,martinsson-2005,borm_2015, guo_2016,liu_2016,coulier_2015,minden_2016}. These require access to small blocks of the system matrix in an effort to find a compressed representation of the inverse. There are still a number of open questions that remain to be addressed, including the development of rules for surfaces with edges and corners and the coupling of layer potential codes with volume integral codes to solve inhomogeneous or variable-coefficient partial differential equations using integral equation methods. These are all ongoing areas of research. \section*{Acknowledgments} \label{sec:ack} We would like to thank Alex Barnett and Lise-Marie Imbert-G\'erard for many useful discussions, and Jim Bremer and Zydrunas Gimbutas for sharing several useful quadrature codes. We also gratefully acknowledge the support of the NVIDIA Corporation with the donation of a Quadro P6000, used for some of the visualizations presented in this research. L. Greengard was supported in part by the Office of Naval Research under award number~\#N00014-18-1-2307. M. O'Neil was supported in part by the Office of Naval Research under award numbers~\#N00014-17-1-2059,~\#N00014-17-1-2451, and~\#N00014-18-1-2307. F. Vico was supported in part by the Office of Naval Research under award number~\#N00014-18-2307, the Generalitat Valenciana under award number AICO/2019/018, and by the Spanish Ministry of Science and Innovation (Ministerio Ciencia e Innovaci\'{o}n) under award number PID2019-107885GB-C32. We would also like to thank the anonymous referees for many helpful comments that led to a much-improved manuscript. \bibliographystyle{elsarticle-num}
\section{Introduction}
Simulating the dynamics of quantum many-body systems is a central challenge in Physics, Chemistry and the Material Sciences as well as in other areas of science and technology. While for classical algorithms this task is in general intractable, quantum circuits offer a way around the classical bottlenecks by `circuitizing' the time evolution of the system in question. However, present-day quantum computing devices allow for the programming of only small and noisy quantum circuits, a state of affairs that places severe constraints on the types of applications these devices may be used for in practice. The qubit and gate costs of circuitization procedures have therefore rightfully become key factors in determining the feasibility of any potential application, and increasingly efficient algorithms are continuously being devised. We propose a novel approach to resource-efficient Hamiltonian dynamics simulations on quantum circuits that, we argue, offers certain advantages over state-of-the-art quantum simulation algorithms~\cite{Berry1,2018arXiv180500675H}, advantages that translate directly to a shorter algorithm runtime (see Sec.~\ref{sec:comp} for a detailed comparison). We accomplish this by utilizing a series expansion of the quantum time-evolution operator in its off-diagonal elements, wherein the operator is expanded around its diagonal component~\cite{ODE,ODE2,pmr}. This expansion allows one to effectively integrate out the diagonal component of the evolution, thereby reducing the overall gate and qubit complexities of the algorithm as compared to existing methods. In our approach, the time evolution is broken up into identical short-time segments, each of which is accurately approximated using a number of terms in the off-diagonal series that is logarithmic in the inverse of the required precision. Each segment is then executed with the help of the linear combination of unitaries (LCU) lemma~\cite{Berry1}. Our algorithm enables the simulation of a wide range of realistic models, including systems of spins, bosons or fermions. The paper is organized as follows. In Sec.~\ref{sec:off}, we introduce the off-diagonal expansion insofar as it applies to the time-evolution operator. In Sec.~\ref{sec:qtea}, we present the Hamiltonian dynamics algorithm that we construct based on the expansion, and in Sec.~\ref{sec:comp} we provide a comparison between the present scheme and two of the leading approaches to quantum simulations, the Taylor-series based approach of Berry {\it et al.}~\cite{Berry1} and the interaction-picture representation approach devised by Low and Wiebe~\cite{2018arXiv180500675H}. We examine several examples in some detail. A summary and some conclusions are given in Sec.~\ref{sec:summary}.
\section{Off-diagonal series expansion of the time-evolution operator\label{sec:off}}
We next derive an expansion of the time evolution operator based on the off-diagonal series expansion recently introduced in Refs.~\cite{ODE,ODE2,pmr} in the context of quantum Monte Carlo simulations. While we focus in what follows on time-independent Hamiltonians for simplicity, we note that an extension of the following derivation to include time-dependent Hamiltonians also exists~\cite{timeDepHamSim}.
\subsection{Permutation matrix representation of the Hamiltonian} We begin by casting the Hamiltonian in the form \beq \label{eq:basic} H=\sum_{i=0}^M D_i P_i= D_0+ \sum_{i=1}^M D_i P_i \,, \eeq where the $D_i$ operators are diagonal in some known basis, which we will refer to as the computational basis and denote by $\{ |z\rangle \}$, $P_0 :=\mathbb{1}$, and the $P_i$ operators (for $i>0$) are permutation operators, i.e., \hbox{$P_i| z \rangle=| z'(i,z) \rangle$} where $z'\neq z$, i.e., they do not have any fixed points (equivalently, their diagonal elements are all zero). While the above formulation may appear restrictive, it is important to note that any Hamiltonian can be written in this form. In particular, for models of spin-$1/2$ particles (qubits), the $D_i$'s are diagonal in the Pauli-$Z$ basis, and the $P_i$'s are tensor products of Pauli-$X$ operators, $P_i \in \{\mathbb{1},X\}^{\otimes N}$ where $N$ is the number of spins. We will refer to the principal diagonal matrix $D_0$ as the diagonal component of the Hamiltonian, while the set $\{D_i P_i \}_{i=1}^M$ of off-diagonal operators (in the computational basis) gives the system its `off-diagonal dimension'. We will call `diagonal energies' the (real) numbers obtained by acting with $D_0$ on computational basis states: \hbox{$D_0 | z \rangle = E_z| z \rangle$}. Similarly, by applying the generalized permutation operator $D_i P_i$ on a basis state, we obtain \hbox{$D_i P_i | z \rangle = d_i(z')| z' \rangle$}, where $d_i(z')$ will be in general a complex number ($z'$ depends on $z$ and $i$). With these notations in hand, we move on to discuss the off-diagonal series expansion of the time-evolution operator. \subsection{Expansion of the time-evolution operator} We next consider the evolution of a state under a time-independent Hamiltonian $H$ for time $t$. We expand the time evolution operator $\e^{-i H t}$ using the off-diagonal series expansion. We first consider the action of $\e^{-i H t}$ on a basis state $|z\rangle$: \begin{eqnarray} \e^{-i H t} |z\rangle= \sum_{n=0}^{\infty}\frac{(-i t)^n}{n!} H^n | z \rangle =\sum_{n=0}^{\infty}\frac{(-i t)^n}{n!} \Big(\sum_{i=0}^M D_i P_i\Big)^n | z \rangle = \sum_{n=0}^{\infty} \frac{(-i t)^n}{n!} \sum_{{S}_j^{(n)} \in \mathcal{S}_{n}} {S}_j^{(n)} | z \rangle \,, \end{eqnarray} where in the last step we have also expanded the multinomial $(\sum_{i} D_i P_i)^n$, and $\mathcal{S}_{n}$ denotes the set of all $(M+1)^n$ operators that appear in the expansion of the multinomial $(\sum_{i} D_i P_i)^n$. We proceed by `stripping' all the diagonal operators off the sequences ${S}_j^{(n)}$. We do so by evaluating their action on the relevant basis states, leaving only the off-diagonal operators unevaluated inside the sequence (for example, for the $n=2$ sequence $D_1P_1D_0 $ we write $D_1P_1D_0 \ket{z}=E_z D_1P_1\ket{z}=E_z D_1\ket{z_1}=E_{z}d_1(z_1)\ket{z_1}=E_{z}d_1(z_1)P_1\ket{z}$, where $\ket{z_1}=P_1\ket{z}$). Collecting all terms together, we arrive at: \begin{eqnarray}\label{eq:snsq} \e^{-i H t} |z\rangle= \sum_{q=0}^{\infty} \sum_{{\bf{i}}_q} d_{{\bf i}_q} P_{{\bf{i}}_q} | z \rangle \Bigl( \sum_{n=q}^{\infty} \frac{(-i t)^n}{n!} {\!\!\!\!\!\!\!\!}\sum_{\substack{k_0,\ldots,k_q \\\text{s.t.}\sum_i k_i=n-q} }{\!\!\!\!\!\!\!\!}{E^{k_0}_{z} \cdots E^{k_{q}}_{z_{q}}} \Bigr)\,, \end{eqnarray} where the boldfaced index ${\bf i}_q = (i_1,\ldots,i_q)$ is a tuple of indices $i_j$, with $j=1,\ldots, q$, each ranging from $1$ to $M$, and $P_{{\bf i}_q} := P_{i_q} \cdots P_{i_2}P_{i_1}$. 
In addition, similar to the diagonal energy $E_{z}=\langle z | D_0|z\rangle$, we denote by $E_{z_j}=\langle z_j | D_0|z_j\rangle$ the energies of the states $|z\rangle,|z_1\rangle, \ldots, |z_q\rangle$ obtained from the action of the ordered $P_{i_j}$ operators appearing in the sequence $P_{{\bf i}_q}$ on $|z\rangle$, then on $|z_1\rangle$, and so forth. Explicitly, $P_{i_1}|z\rangle=|z_1\rangle, P_{i_2}|z_1\rangle=|z_2\rangle$, etc. (Note that the sequence of states, and similarly the energies, should actually be denoted $|z_1(z,i_1)\rangle, |z_2(z,i_1,i_2)\rangle, \ldots$. For conciseness we will be using the abbreviations $|z_1\rangle,|z_2\rangle,\ldots$) Last, we have denoted $d_{{\bf i}_q}=\prod_{j=1}^q d_{i_j}(z_j)$ where \beq\label{eq:dj} d_{i_j}(z_j) = \langle z_j | D_{i_j}|z_j\rangle \eeq can be considered the `hopping strength' of $P_{i_j}$ with respect to $|z_j\rangle$ (see Ref.~\cite{ODE} for a complete and detailed derivation). The infinite sum in parentheses in Eq.~(\ref{eq:snsq}) evaluates to the efficiently calculable \emph{divided-differences} representation~\cite{dd:67,deboor:05} \beq \sum_{n=q}^{\infty} \frac{(-i t)^n}{n!} {\!\!\!\!\!\!\!\!}\sum_{\substack{k_0,\ldots,k_q \\\text{s.t.}\sum_i k_i=n-q} }{\!\!\!\!\!\!\!\!}{E^{k_0}_{z} \cdots E^{k_{q}}_{z_{q}}} = \e^{-i t [E_{z},\ldots,E_{z_q}]} \,, \eeq where the complex coefficient $\e^{-i t [E_{z},\ldots,E_{z_q}]}$ is the \emph{divided difference of the exponential function} over the multi-set of the energies $\{E_{z},\ldots, E_{z_q}\}$~\cite{dd:67,deboor:05} (more details can be found in Appendix~\ref{app:dd}). We may therefore write \beq \e^{-i H t}|z\rangle = V_z(t) \ket{z}\,, \eeq where \beq\label{eq:Vz} V_z(t)=\sum_{q=0}^{\infty} \sum_{{\bf i}_q} \alpha_{{\bf i}_q}^{(z)}(t) P_{{\bf i}_q} \eeq and where we have denoted \beq \alpha_{{\bf i}_q}^{(z)}(t) =\e^{-i t [E_z,\ldots,E_{z_q}]} d_{{\bf i}_q}. \eeq (In the special case of $q=0$, $\alpha_{0}^{(z)}(t)=\e^{-i t E_z}$.) In Appendix~\ref{app:ddDelta}, we show that one can pull out a global phase from $\e^{-i t [E_z,\ldots,E_{z_q}]}$ to obtain $\e^{-i t E_z} \e^{-i t [\Delta E_z,\ldots,\Delta E_{z_q}]}$ where \hbox{$\Delta E_{z_j} = E_{z_j}-E_z$} (and specifically $\Delta E_{z} = 0$). Therefore, we can write $\alpha_{{\bf i}_q}^{(z)}(t)$ as: \beq\label{eq:ddDelta} \alpha_{{\bf i}_q}^{(z)}(t) =\e^{-i t E_z} \e^{-i t [\Delta E_z,\ldots,\Delta E_{z_q}]} d_{{\bf i}_q} \,, \eeq where the divided-difference inputs are now energy differences rather than total diagonal energies. \section{The Hamiltonian dynamics algorithm\label{sec:qtea}} \subsection{Preliminaries}\label{sec:pre} We first set some definitions and notations that will be used in the description of the algorithm. We denote the max norm of a matrix $A$ by \hbox{$\Vert A\Vert_{\rm max}=\max_{i,j}|A_{ij}|$}, where $A_{ij}$ are the matrix elements of $A$ in the computational basis. For every diagonal matrix $D_i$ (with $i>0$) we define the bounds $\Gamma_i \geq \Vert D_i\Vert_{\rm max}$, and denote $\Gamma_{{\bf i}_q} = \prod_{j=1}^q \Gamma_{i_j}$. We define the dimensionless time $T= t\Gamma$ with $\Gamma=\sum_{i=1}^M \Gamma_i$, the repetition number $r=\lceil T/\ln(2) \rceil$, and the short time interval $\Delta t = t/r \approx \ln(2) /\sum_{i=1}^M \Gamma_i$. \subsection{Decomposition to short-time evolutions} To simulate the time evolution of $\e^{-i H t}$, we execute $r$ times in succession a short-time circuit for the operator \beq U=\e^{-i H \Delta t} \,. 
\eeq Hereafter we omit the explicit dependence on $\Delta t$ for brevity. We write \begin{align} U&=U \sum_z |z\rangle \langle z|= \sum_z U|z\rangle \langle z|=\sum_z V_z|z\rangle \langle z|, \end{align} where $V_z$ is given by Eq.~\eqref{eq:Vz} upon replacing $t$ with $\Delta t$. We can rewrite $U$ as follows: \begin{align}\label{eq:udt} &U= \sum_z \e^{-i \Delta t E_z} \sum_{q=0}^{\infty} \sum_{{\bf i}_q} \e^{-i \Delta t [\Delta E_z,\ldots,\Delta E_{z_q}]} d_{{\bf i}_q}P_{{\bf i}_q} |z\rangle \langle z| \nonumber\\ &= \Big(\sum_z \sum_{q=0}^{\infty} \sum_{{\bf i}_q} \e^{-i \Delta t [\Delta E_z,\ldots,\Delta E_{z_q}]} d_{{\bf i}_q} P_{{\bf i}_q} |z\rangle \langle z|\Big) \e^{-i \Delta t D_0} := U_{{\rm od}} \e^{-i \Delta t D_0}\,. \end{align} {We thus find that the off-diagonal expansion enables the effective decoupling of the evolution due to the diagonal part of the Hamiltonian from the evolution due to its off-diagonal part, allowing us to write $U$ as a product of $U_{{\rm od}}$ and $\e^{-i \Delta t D_0}$. In the special case where the off-diagonal part of the Hamiltonian is zero (thus, $d_{{\bf i}_q}=0$ for all ${{\bf i}_q}$), our method reduces directly to simulating diagonal Hamiltonians on a quantum computer.} The circuit implementation of the {diagonal} unitary $\e^{-i \Delta t D_0}$ can be done with a gate cost ${\cal O}(C_{D_0})$ where $C_{D_0}$ is the gate cost of calculating a matrix element of $D_0$~\cite{NielsenChuang} (see Appendix~\ref{app:UD0} for more details). {This cost depends only on the locality of $D_0$, and is independent of its norm}. To simulate $U_{{\rm od}}$ we will use the LCU technique~\cite{Berry1}, starting by writing $U_{{\rm od}}$ as a sum of unitary operators. To do that, we first note that \hbox{$\vert\e^{-i \Delta t [\Delta E_{z_q},\ldots,\Delta E_{z}]}\vert\leq \Delta t^q/ q!$} (this follows from the mean-value theorem for divided differences~\cite{deboor:05}). In addition, $d_{{\bf i}_q}/\Gamma_{{\bf i}_q}$ are complex numbers lying inside the unit circle. Therefore, the norm of the complex number \beq\label{eq:beta} \beta_{{\bf i}_q}^{(z)} =\frac{q!}{\Gamma_{{\bf i}_q}\Delta t^q} \e^{-i \Delta t [\Delta E_{z},\ldots,\Delta E_{z_q}]} d_{{\bf i}_q} \eeq is not larger than 1. We can thus write $\beta_{{\bf i}_q}^{(z)}$ as the average of two phases \begin{align} &\beta_{{\bf i}_q}^{(z)} =\cos \phi_{{\bf i}_q}^{(z)} \e^{i \chi_{{\bf i}_q}^{(z)}}= \frac{1}{2} \Big( \e^{i (\chi_{{\bf i}_q}^{(z)} +\phi_{{\bf i}_q}^{(z)})}+ \e^{i (\chi_{{\bf i}_q}^{(z)} -\phi_{{\bf i}_q}^{(z)})} \Big). \end{align} Using this notation, we can write $U_{\rm od}$ as \beq\label{eq:sum of unitary} U_{\rm od}= \sum_{k=0,1} \sum_{q=0}^{\infty} \sum_{{\bf i}_q} \frac{\Gamma_{{\bf i}_q} \Delta t^q}{2q!} U_{{\bf i}_q}^{(k)} \,, \eeq where \begin{align} U_{{\bf i}_q}^{(k)}&= \sum_{z} \e^{i (\chi_{{\bf i}_q}^{(z)} +(-1)^k\phi_{{\bf i}_q}^{(z)})} P_{{\bf i}_q} |z\rangle \langle z|=P_{{\bf i}_q}\Phi_{{\bf i}_q}^{(k)} \,, \end{align} and \hbox{$\Phi_{{\bf i}_q}^{(k)}=\sum_{z} \e^{i (\chi_{{\bf i}_q}^{(z)} +(-1)^k\phi_{{\bf i}_q}^{(z)})} |z\rangle \langle z|$} is a (diagonal) unitary transformation. Since $P_{{\bf i}_q}$ is a bona-fide permutation matrix, it follows that $U_{{\bf i}_q}^{(k)}$ is a unitary transformation. Thus, Eq.~\eqref{eq:sum of unitary} is the short-time off-diagonal evolution operator $U_{\rm od}$ represented as a linear combination of unitary transformations. 
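To make the quantities entering this decomposition concrete, the following short Python sketch (our own illustration, not part of the algorithm's circuit) evaluates the divided-difference coefficient $\e^{-i \Delta t [\Delta E_z,\ldots,\Delta E_{z_q}]}$ with a naive recursive table, forms the coefficient $\beta_{{\bf i}_q}^{(z)}$ of Eq.~\eqref{eq:beta}, and checks its decomposition into the average of two phases; the recursion assumes pairwise-distinct inputs (repeated energies would require a confluent variant).
\begin{verbatim}
import numpy as np
from math import factorial

def dd_exp(energies, dt):
    # Divided difference of f(E) = exp(-1j*dt*E) over `energies`,
    # via the standard recursive table; assumes distinct inputs.
    xs = np.asarray(energies, dtype=float)
    table = np.exp(-1j * dt * xs).astype(complex)
    for level in range(1, len(xs)):
        table = (table[1:] - table[:-1]) / (xs[level:] - xs[:-level])
    return table[0]

def beta_phases(delta_E, d_iq, Gamma_iq, dt):
    # beta coefficient and its two-phase decomposition.
    q = len(delta_E) - 1          # delta_E = [0, dE_{z1}, ..., dE_{zq}]
    beta = factorial(q) / (Gamma_iq * dt**q) * dd_exp(delta_E, dt) * d_iq
    chi = np.angle(beta)
    phi = np.arccos(np.clip(abs(beta), 0.0, 1.0))
    # check: beta = (exp(i(chi+phi)) + exp(i(chi-phi))) / 2
    assert np.isclose(beta, 0.5*(np.exp(1j*(chi+phi)) + np.exp(1j*(chi-phi))))
    return beta, chi, phi
\end{verbatim}
In the algorithm itself these phases are not precomputed classically for every basis state; they are evaluated coherently by the controlled-phase construction of Sec.~\ref{sec:cut}. The sketch is only meant to illustrate the quantities involved.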
\subsection{The LCU setup} To simulate the evolution under $U_{\rm od}$ on a finite-size circuit, we truncate the series, Eq.~\eqref{eq:sum of unitary}, at some maximal order $Q$, which leads to the approximation \beq\label{eq:tildeU} \widetilde{U}_{\rm od} =\sum_{k=0,1} \sum_{q=0}^{Q} \sum_{{\bf i}_q} \frac{\Gamma_{{\bf i}_q} \Delta t^q}{2q!} U_{{\bf i}_q}^{(k)} \,. \eeq Since the coefficients of the off-diagonal operator expansion fall factorially with $q$ (similar to the truncation of the Taylor series in Ref.~\cite{Berry1}), setting \beq\label{eq:Q} Q={\cal O}\Bigl(\frac{\log (T/\epsilon)}{\log \log (T/\epsilon)}\Bigr) \,, \eeq ensures\footnote{Formally, Eq.~\eqref{eq:Q} should read $Q={\cal O}\Bigl(\frac{\log (T/\epsilon)}{W( \log (T/\epsilon))}\Bigr)$ where $W(x)$ is the Lambert $W$ function~\cite{Corless1996}. The Lambert $W$ function can be approximated as $W(x)=\log x - \log \log x +o(1)$. } that the error per evolution segment is smaller than $\epsilon/r$: \beq \sum_{q=Q+1}^\infty\frac1{q!}\Bigl(\frac{T}{r}\Bigr)^q=\sum_{q=Q+1}^\infty\frac{\ln(2)^q}{q!}\leq\frac{\epsilon}{r} \,, \eeq {where the last step follows from the inequality $q!\geq(q/e)^q$}. This choice ensures that the overall error is bounded by $\epsilon$ (as measured by the spectral-norm of the difference between the approximation and the true dynamics). We next provide the details of the circuit we implement to execute the LCU routine and the resource costs associated with it. \subsubsection{State preparation} The first ingredient of the LCU is the preparation of the state \begin{align}\label{eq:psi0} |\psi_0\rangle &=\frac1{\sqrt{s}} \sum_{q=0}^{Q} \sum_{{\bf i}_q}\sqrt{\Gamma_{{\bf i}_q}\frac{ \Delta t^q}{q!}} |{\bf i}_q\rangle\Bigl(\frac{\ket{0}+\ket{1}}{\sqrt{2}}\Bigr)\, \end{align} where \hbox{$|{\bf i}_q\rangle=\ket{i_1}\cdots\ket{i_q}\ket{0}^{\otimes(Q-q)}$} is shorthand for $Q$ quantum registers, each of which has dimension $M$ (equivalently, a quantum register with \hbox{$\lceil Q \log (M+1)\rceil$} qubits). In addition, since $\sum_{{\bf i}_q} \Gamma_{{\bf i}_q} = (\sum_i \Gamma_i)^q$, \beq\label{eq:s2} s=\sum_{q=0}^{Q} \frac{ \Delta t^q}{q!}\sum_{{\bf i}_q}\Gamma_{{\bf i}_q}=\sum_{q=0}^{Q}\frac{ (\sum_i \Gamma_i\Delta t)^q}{q!}\approx 2\,, \eeq by construction {[recall that $\sum_{i=1}^M \Gamma_i \Delta t \approx \ln(2)$]}. We construct $\ket{\psi_0}$ in two steps: starting with the state $|0\rangle^{\otimes Q}$, we transform the first register to the normalized version of \beq \ket{0}+\sqrt{\sum_{q=1}^Q\frac{(\Gamma\Delta t)^q}{q!}} \ket{1}, \eeq where $\Gamma=\sum_{i=1}^M \Gamma_i$. Then the $\ket{0}$ state of the $q$-th register ($q=2,\ldots,Q$) is transformed to the normalized version of \beq \sqrt{\frac{(\Gamma\Delta t)^{q-1}}{(q-1)!}}\ket{0}+\sqrt{\sum_{q'=q}^Q\frac{(\Gamma\Delta t)^{q'}}{q'!}} \ket{1}, \eeq conditioned on the $(q-1)$-th register being in the $\ket{1}$ state. The resulting state, up to normalization, is \beq |0\rangle^{\otimes Q} \to \sum_{q=0}^{Q} \sqrt{\frac{(\Gamma\Delta t)^q}{q!}} |1\rangle^{\otimes q}|0\rangle^{\otimes (Q-q)}. \eeq The gate cost of this step is ${\cal O}(Q)$. Next, we act on each of the registers with a unitary transformation that takes a $\ket{1}$ state to the normalized version of $\sum_{i=1}^{M}\sqrt{ \Gamma_i}\ket{i}$. Finally, we apply a Hadamard transformation on the last (qubit) register, resulting in the state $|\psi_0\rangle$. The gate cost of this step is ${\cal O}(M)$~\cite{1629135}. 
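As a quick numerical sanity check of the normalization $s\approx 2$ and of the amplitudes entering $|\psi_0\rangle$, one may evaluate them directly; the parameter values in the following sketch are illustrative choices of ours.
\begin{verbatim}
import numpy as np
from math import factorial, log

Gammas = np.array([0.7, 0.5, 0.3])   # illustrative bounds Gamma_1..Gamma_M
Gamma  = Gammas.sum()
dt     = log(2) / Gamma              # short-time step, so that Gamma*dt = ln 2
Q      = 6                           # truncation order

# normalization s = sum_q (Gamma*dt)^q / q!  ~ 2 by construction
s = sum((Gamma * dt)**q / factorial(q) for q in range(Q + 1))
print(round(s, 5))                   # ~ 2.0

def amplitude(i_q):
    # unnormalized amplitude sqrt(Gamma_{i_q} * dt^q / q!) attached to |i_q>
    q = len(i_q)
    Gamma_iq = float(np.prod(Gammas[list(i_q)])) if q else 1.0
    return np.sqrt(Gamma_iq * dt**q / factorial(q))
\end{verbatim}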
Denoting the unitary transformation that takes $|0\rangle^{\otimes Q+1}$ to $|\psi_0\rangle$ by $B$, we find that the gate cost of $B$ is ${\cal O}(M Q)$~\cite{Berry1}. \subsubsection{Controlled-unitary transformation\label{sec:cut}} The second ingredient of the LCU routine is the construction of the controlled operation \beq\label{eq:CU} U_C|{\bf i}_q\rangle|k\rangle|z\rangle= |{\bf i}_q\rangle|k\rangle U_{{\bf i}_q}^{(k)} |z\rangle=|{\bf i}_q\rangle|k\rangle P_{{\bf i}_q}\Phi_{{\bf i}_q}^{(k)}|z\rangle\,, \eeq where $|k\rangle$ is a single qubit {ancillary} state in the computational basis. The number of ancilla qubits here is $\lceil Q \log (M+1)\rceil+1$. Equation~\eqref{eq:CU} indicates that $U_C$ can be carried out in two steps: a controlled-phase operation ($U_{C\Phi}$) followed by a controlled-permutation operation ($U_{CP}$). The controlled-phase operation $U_{C \Phi}$ requires a somewhat intricate calculation of non-trivial phases. We therefore carry out the required algebra with the help of additional ancillary registers and then `push' the results into phases. The latter step is done by employing the unitary \begin{align} &U_{\text{ph}}|\varphi\rangle=\e^{-i \varphi}|\varphi\rangle \,, \end{align} whose implementation cost depends only on the precision with which we specify $\varphi$ and is independent of Hamiltonian parameters~\cite{NielsenChuang} (for completeness we provide an explicit construction of $U_{\text{ph}}$ in Appendix~\ref{app:Uph}). With the help of the (controlled) unitary transformation \beq\label{eq:F} U_{\chi\phi}|{\bf i}_q\rangle|k\rangle|z\rangle\ket{0}= |{\bf i}_q\rangle|k\rangle|z\rangle|\chi_{{\bf i}_q}^{(z)} +(-1)^k\phi_{{\bf i}_q}^{(z)}\rangle \,, \eeq we can write \beq U_{C{\Phi}}=U_{\chi\phi}^\dagger (\mathbb{1} \otimes U_{\text{ph}}) U_{\chi\phi} \,, \eeq so that \beq\label{eq:ucphi} U_{C{\Phi}}|{\bf i}_q\rangle|k\rangle|z\rangle=|{\bf i}_q\rangle|k\rangle\Phi_{{\bf i}_q}^{(k)}|z\rangle \,. \eeq This is illustrated in Fig.~\ref{fig:UCphi}. \begin{figure}[h!] \begin{center} \hspace{1em}\Qcircuit @C=1em @R=0.2em @!R{ \ket{{\bf i}_q} & & \ctrl{3}&\qw& \ctrl{3}&\qw \\ \ket{k} & & \ctrl{2}&\qw&\ctrl{2}&\qw\\ \ket{z} & & \ctrl{1}&\qw&\ctrl{1}&\qw & &\Phi_{{\bf i}_q}^{(k)} \ket{z} \\ \ket{0} & & \gate{U_{\chi\phi}}&\gate{U_{\rm ph}}&\gate{U_{\chi\phi}^\dagger}&\qw } \end{center} \caption{A circuit description of the controlled phase $U_{C\Phi}$ in terms of $U_{\chi\phi}$ and $U_{\rm ph}$. } \label{fig:UCphi} \end{figure} Note that $U_{\chi\phi}$ is a `classical' calculation sending computational basis states to computational basis states. We provide an explicit construction of $U_{\chi\phi}$ in Appendix~\ref{app:Phi}. We find that its gate and qubit costs are ${\cal O}(Q^2 +Q M (C_{\Delta D_0}+k_{\rm od} +\log M))$ and ${\cal O}(Q)$, respectively, where $C_{\Delta D_0}$ is the cost of calculating the change in diagonal energy due to the action of a permutation operator and {$k_{\rm od}$ is an upper bound on the `off-diagonal locality', i.e., the locality of the $P_i$'s}~\cite{PhysRevA.52.3457,Berry1}. The construction of $U_{C P}$ is carried out by a repeated execution of the simpler unitary transformation \hbox{$U_p|i\rangle|z\rangle = |i\rangle P_i|z\rangle$}. Recall that $P_i$ are the off-diagonal permutation operators that appear in the Hamiltonian. The gate cost of $U_p$ is therefore ${\cal O}(M (k_{\rm od} +\log M))$. For spin models, each $P_i$ is a tensor product of up to $k_{\rm od}$ Pauli $X$ operators. 
Applying this transformation to the $Q$ ancilla quantum registers, we obtain \hbox{$|{\bf i}_q\rangle|z\rangle\to|{\bf i}_q\rangle P_{{\bf i}_q}|z\rangle$} with a gate cost of \hbox{${\cal O}(Q M (k_{\rm od} + \log M))$}. A sketch of the circuit is given in Fig.~\ref{fig:Cp}. We can thus conclude that the total gate cost of implementing $U_C$ is ${\cal O}(Q^2 +Q M (C_{\Delta D_0}+k_{\rm od} +\log M))$. \begin{figure}[h!] \begin{center} \hspace{1em}\Qcircuit @C=1em @R=0.2em @!R { \ket{i_1} & & \ctrl{4}&\qw&\qw&\qw &\qw&\qw&\qw &\\ \ket{i_2} & & \qw&\ctrl{3}&\qw&\qw &\qw&\qw&\qw &\\ \vdots & & & & & & & & \\ \ket{i_Q} & & \qw&\qw&\qw&\qw &\qw&\ctrl{1}&\qw& \\ \ket{z} & & \gate{U_p}&\gate{U_p}&\qw&\cdots & &\gate{U_p}&\qw& & P_{{\bf i}_q}\ket{z} } \end{center} \caption{A circuit description of $U_{C P}$.} \label{fig:Cp} \end{figure} \subsubsection{Oblivious amplitude amplification} To realize $\widetilde{U}_{\textrm{od}}$, the LCU technique calls for the execution of a combination of the state preparation unitary $B$ and the controlled-unitary transformation $U_C$ which together form an oblivious amplitude amplification (OAA) procedure~\cite{Berry1}. Let $\ket{\psi}$ be the current state of the system, then under the action of \hbox{$W=B^\dagger U_C B$}, the state becomes \begin{align} W\ket{0}^{\otimes Q+1}\ket{\psi}=\frac1{s}\ket{0}^{\otimes Q+1}\widetilde{U}_{\rm od}\ket{\psi}+\sqrt{1-\frac1{s^2}}\ket{\Psi^\perp}, \end{align} such that $\ket{\Psi^\perp}$ is supported on a subspace orthogonal to $\ket{0}^{\otimes Q+1}$. If $s=2$ and $\widetilde{U}_{\rm od}$ is unitary then the OAA ensures that \begin{align}\label{eq:A} A\ket{0}^{\otimes Q+1}\ket{\psi}&=\ket{0}^{\otimes Q+1}\widetilde{U}_{\rm od}\ket{\psi}, \end{align} where $A = -W R W^\dagger R W$ and $R=1-2 (\ket{0}\bra{0})^{\otimes Q+1}$. Under these conditions, the action of $W \, (\mathbb{1}\otimes\e^{-i\Delta t D_0})$ on the state at time $t$, namely $\ket{\psi(t)}$, advances it by one time step to $\ket{\psi(t+\Delta t)}$. This is illustrated in Fig.~\ref{fig:OAA}. \begin{figure}[h!] \begin{center} \hspace{1cm} \Qcircuit @C=0.7em @R=0.3em @!R{ \ket{\psi(t)} &&&\ghost{\e^{-i\Delta t D_0}}&\qw& \ghost{W}&\qw&\ghost{W^\dagger}&\qw&\ghost{-W} &\qw&&&&\ket{\psi(t+\Delta t)}\\ \ket{0} &&&\multigate{-1}{\e^{-i\Delta t D_0}}&\qw& \ghost{W}&\ghost{R}&\ghost{W^\dagger}&\ghost{R} &\ghost{-W}&\qw \\ \ket{0} &&&\qw&\qw& \multigate{-2}{W}&\multigate{-1}{R}&\multigate{-2}{W^\dagger}&\multigate{-1}{R} & \multigate{-2}{-W}&\qw } \end{center} \caption{A circuit diagram for a single short-time evolution step $U=\e^{-i H \Delta t}$. The bottom register consists of $Q$ sub-registers, each of which containing $\log M$ qubits. The middle line is a single-qubit register.} \label{fig:OAA} \end{figure} In Ref~\cite{Berry1}, a robust version of OAA was given for the case of non-unitary $\widetilde{U}_{\rm od}$ and $s \neq 2$. It is shown that if $\vert s - 2\vert = {\cal O}(\delta)$ and $\Vert \widetilde{U}_{\rm od}-U\Vert ={\cal O}(\delta)$, where $U$ is the (ideal) unitary transformation then \beq \Vert{\tr_{\rm anc}(PA\ket{0}^{\otimes Q+1}\ket{\psi})-U\ket{\psi}\bra{\psi}U^\dagger}\Vert={\cal O}(\delta)\,, \eeq where $\tr_{\rm anc}$ stands for trace over the ancilla registers {[recall that $s \approx 2$ as per Eq.~(\ref{eq:s2})]}. Thus the overall error after $r$ repetitions is ${\cal O}(r\delta)$, so we require $\delta={\cal O}(\epsilon/r)$ to obtain an overall error of ${\cal O}(\epsilon)$. 
These conditions are satisfied by setting $\Delta t$ as in Sec.~\ref{sec:pre} and choosing $Q$ as in Eq.~\eqref{eq:Q}. For convenience, we provide a glossary of symbols in Table~\ref{tbl:glossary}. A summary of the gate and qubit costs of the simulation circuit and the various sub-routines used to construct it is given in Table~\ref{tbl:resource}. \vspace{0.3cm} \begin{table*}[t!] \begin{center} \begin{tabular}{ |c|l|} \hline Symbol&Meaning\\ \hline $M$ & number of off-diagonal terms, cf.~Eq.~\eqref{eq:basic} \\ $\Gamma_i$ & max-norm of $D_i$, $i=1,\ldots,M$ \\ $T=t\sum_{i=1}^{M} \Gamma_i$ & dimensionless time \\ $Q$ & off-diagonal series expansion truncation order, $Q={\cal O}\Bigl(\frac{\log (T/\epsilon)}{\log \log (T/\epsilon)}\Bigr)$\\ $k_{\rm d}$ & locality of $D_0$ \\ $k_{\rm od}$ & upper bound on locality of $P_i$ \\ $C_{D_0}$ & cost of calculating a diagonal energy (a single $D_0$ matrix element) \\ $C_{\Delta D_0}$ & cost of calculating the change to a diagonal energy due to the \\ ~&action of a $P_i$ \\ $C_{D}$ & cost of calculating a single $D_i$ matrix element ($i\neq0$) \\ \hline \end{tabular} \end{center} \caption{\label{tbl:glossary}{\bf Glossary of symbols.} } \end{table*} \begin{table*}[t!] \begin{center} \begin{tabular}{ |c||c|c|c| } \hline Unitary &Description&Gate cost&Qubit cost\\ \hline \hline $\e^{-i\Delta t H}$ & short-time evolution&${\cal O}(C_{D_0}+Q^2 +Q M (C_{\Delta D_0}+k_{\rm od} +\log M))$ & ${\cal O}(Q\log M)$ \\\hline $\e^{-i\Delta t D_0}$ & diagonal evolution &${\cal O}(C_{D_0})$ & ${\cal O}(1)$ \\\hline $W$ & $W=B^{\dagger} U_C B$ &${\cal O}(Q^2 +Q M (C_{\Delta D_0}+k_{\rm od} +\log M))$& ${\cal O}(Q\log M)$ \\\hline $B$ & LCU state preparation &${\cal O}(QM)$ & ${\cal O}(Q\log M)$ \\\hline $U_C$ & LCU controlled unitary & ${\cal O}(Q^2 +Q M (C_{\Delta D_0}+k_{\rm od} +\log M))$ & ${\cal O}(Q\log M)$ \\\hline $U_{CP}$ & controlled permutation &${\cal O}(QM(k_{\rm od}+\log M))$ & ${\cal O}(Q\log M)$ \\\hline $U_{C\Phi}$ & controlled phase & ${\cal O}(Q^2 + Q M (C_{\Delta D_0}+k_{\rm od} +\log M))$ & ${\cal O}(Q\log M)$ \\\hline \end{tabular} \end{center} \caption{\label{tbl:resource}{\bf A summary of resources for the circuit and the various sub-routines.} } \end{table*} \section{Comparison to existing approaches and examples\label{sec:comp}} {In this section, we compare the resource costs of our algorithm against those of two state-of-the-art existing approaches, and further provide a brief analysis of the complexity of our algorithm for a number of physical models.} {Since our approach is based on an application of the LCU technique, we first compare the resource costs of our algorithm's LCU sub-routine against that of the Taylor series-based method of Berry~{\it et al.}~\cite{Berry1}.} One of the main differences in costs between the two series expansions stems from the different way in which the Hamiltonians are decomposed. In the Taylor series-based LCU, the Hamiltonian is written as a sum of unitary operators $H=\sum_{i=1}^L c_i U_i$. For qubit Hamiltonians, these unitary operators will generally be tensor products of single-qubit Pauli operators (although of course in some cases, more compact decompositions can be found). The off-diagonal decomposition, on the other hand, casts the Hamiltonian as a sum of generalized permutation operators, as given in Eq.~\eqref{eq:basic}; a representation that is generally considerably more compact. (For example, for qubit Hamiltonians, all operators that flip the same subset of qubits are grouped together.) 
This in turn implies that the number of terms in the decomposition of the Hamiltonian will generally be considerably smaller in the off-diagonal representation (i.e., $M \ll L$). This difference directly translates to reduced gate and qubit costs (a summary is given in Table~\ref{tbl:resource}). Another key difference is in the respective dimensionless time constants. In the off-diagonal expansion approach the dimensionless time constant is given by \hbox{$T=t \sum_{i=1}^M \Gamma_i$}, while in the Taylor series approach it is \hbox{$T'=t\sum_{i=1}^L c_i$}. In both approaches the dimensionless time determines the cutoff of the respective expansions, and controls the overall gate and qubit costs of the algorithm. Indeed, as we show below, in general one has \hbox{$\sum_{i=1}^M \Gamma_i \ll \sum_{i=1}^L c_i$}, which directly translates to a reduced simulation cost in favor of the off-diagonal expansion. To be more quantitative, we provide an explicit comparison between the off-diagonal and Taylor expansions for a few spin models in Table~\ref{tbl:comp}. The `price' we pay for the above savings is the additional ${\cal O}(Q^2)$ operations per time step required for calculating the divided-difference coefficient. {However, we note, that since $Q$ scales logarithmically with $T$, and $T$ is typically much smaller than $T'$, the advantages arising from the use of divided differences asymptotically outweigh this added complexity.} \begin{table*}[t!] \begin{small} \begin{center} \begin{tabular}{ |c|c|c| } \hline Hamiltonian&\multicolumn{2}{|c|}{$H=\sum_{ij} J_{ij} Z_i Z_j$}\\\hline Method & this paper &Taylor series LCU~\cite{Berry1}\\\hline No. of LCU unitaries &0& $N^2$ \\\hline Dimensionless time ($T$)& 0& $t\sum_{ij} |J_{ij}|$ \\\hline Comments & $H$ is diagonal &- \\ \hline \end{tabular} \\\vspace{0.3cm} \begin{tabular}{ |c|c|c| } \hline Hamiltonian&\multicolumn{2}{|c|}{$H=\sum_{ij} J_{ij} Z_i Z_j +\sum_{ij} \tilde{J}_{ij} Z_i X_j $} \\ \hline Method & this paper &Taylor series LCU~\cite{Berry1}\\\hline No. of LCU unitaries & $N+1$ & $2N^2$ \\\hline Dimensionless time ($T$)& $t{\sum_{j}|\sum_i\tilde{J}_{ij}|}$& $t\sum_{ij} (|J_{ij}|+|\tilde{J}_{ij}|)$ \\\hline Comments & $D_0=\sum_{ij} J_{ij} Z_i Z_j $ &- \\ ~ & $D_j=\sum_{i} J_{ij} Z_i$ &~ \\ \hline \end{tabular} \\\vspace{0.3cm} \begin{tabular}{ |c|c|c| } \hline Hamiltonian&\multicolumn{2}{|c|}{$H=\sum_{ijk} J_{ijk} Z_i Z_j Z_k+\sum_{ijk} \tilde{J}_{ijk} Z_i Z_j X_k $ }\\ \hline Method & this paper &Taylor series LCU~\cite{Berry1}\\\hline No. of LCU unitaries &$N$& $2N^3$ \\\hline Dimensionless time ($T$)& $t{\sum_k|\sum_{ij}\tilde{J}_{ijk}|}$ & $t\sum_{ijk} (|J_{ijk}|+|\tilde{J}_{ijk}|)$ \\\hline Comments & $D_0=\sum_{ijk} J_{ijk} Z_i Z_j Z_k $&- \\ ~ & $D_k=\sum_{ij} \tilde{J}_{ijk} Z_i Z_j$ &~ \\ \hline \end{tabular} \end{center} \end{small} \caption{\label{tbl:comp}{\bf A comparison between the proposed method and the Taylor series-based approach~\cite{Berry1}.} In the table, $N$ denotes the number of qubits. {The table illustrates two important features of the proposed method as compared to the Taylor series-based approach for Hamiltonians written in the Pauli basis. Firstly, the Taylor series-based approach treats diagonal and off-diagonal components of the Hamiltonian in the LCU algorithm on an equal footing, while our method requires only the off-diagonal part as an input to the LCU algorithm. This is shown in the row labeled by `No. of LCU unitaries'. 
Secondly, in the Taylor series-based approach, each Pauli operator in the decomposition of $H$ is considered as a unitary, leading to a dimensionless time that is proportional to the sum of the absolute values of all the coefficients in the decomposition. In our approach on the other hand, all the diagonal operators that act in the same way on basis states are grouped into a single diagonal operator ($D_j$ in the table). Therefore, in our algorithm, the dimensionless time is proportional to the sum of the norm of all `grouped' diagonal operators (sans the diagonal component of the Hamiltonian). Due to this grouping, the dimensionless time of the present method will be in general extensively smaller than that of the Taylor series-based method. Having a smaller dimensionless time translates to savings in gate and qubit resources as well as to a shorter runtime of the algorithm.}} \end{table*} {As an alternative to the Taylor series-based algorithm, recently Low and Wiebe~\cite{2018arXiv180500675H} have proposed a framework within which the dynamics is formulated in the interaction picture using a (truncated) Dyson series expansion. There, the time-ordered multi-dimensional integrals of the Dyson series are approximated via Riemann sums and implemented using control registers, ridding the simulation cost of most of its dependence on the diagonal component of the Hamiltonian. Our algorithm is similar in this way to the interaction picture approach, as the off-diagonal series expansion may be viewed as explicitly integrating the Dyson integrals (the reader is referred to Refs.~\cite{2020arXiv201009888K,ODE2} for more details pertaining to the relation between the off-diagonal series expansion and the Dyson series). There are however a few notable differences between the two algorithms, that translate to differences in resource scaling. The main difference is that the cost of the interaction picture approach still has a poly-logarithmic dependence on the norm of the diagonal part of the Hamiltonian while in our method the dynamics due to the diagonal part of the Hamiltonian is completely decoupled from that of its off-diagonal part. This decoupling ensures that our algorithm has no dependence on the norm of the diagonal component of the Hamiltonian. In addition, the poly-logarithmic dependence on the various problem parameters in the interaction picture algorithm is obtained under certain assumptions on the gate cost of implementing specific unitary oracles. The power of the logarithmic polynomial ($\gamma$ in Ref.~\cite{2018arXiv180500675H}) is left undetermined in general. As mentioned above in the context of the Taylor series-based algorithm, the price paid for the decoupling is an additional ${\cal O}(Q^2)$ operations per time step, with $Q$, the expansion order, scaling logarithmically with the algorithm's dimensionless time $T$, which does not depend on the diagonal norm of the Hamiltonian.} In the next subsections, we briefly analyze the off-diagonal circuit complexity for three models of scientific interest: the (Fermi-)Hubbard model, that of electronic structure and the Schwinger model. \subsection{The Fermi-Hubbard model} We first examine the asymptotic cost of implementing the Fermi-Hubbard model~\cite{Hubbard}, which serves as a model of high-temperature superconductors. 
The Fermi-Hubbard Hamiltonian is given by \begin{eqnarray}\label{eq:FH-H-intro} H =U \sum_{i=1}^{N} a^{\dagger}_{i \uparrow} a_{i \uparrow} a^{\dagger}_{i \downarrow} a_{i \downarrow} - t_{\textrm h} \sum_{\langle i j\rangle\sigma} \left(a^{\dagger}_{i \sigma} a_{j \sigma} + a^{\dagger}_{j\sigma} a_{i \sigma}\right) \,, \end{eqnarray} describing $N$ electrons with spin $\sigma \in \{\uparrow,\downarrow\}$ hopping between neighboring sites on a $d$-dimensional hyper-cubic lattice whose adjacency matrix is given by $\langle i j\rangle$ with hopping strength $t_{\textrm h}$. In addition, the model has an on-site interaction term with strength $U$ between opposite-spin electrons occupying the same site. The Fermi-Hubbard model can be mapped to qubits in a number of different ways~\cite{jwt,BK02,Verstraete_2005,DK20}. For concreteness, we consider the Jordan-Wigner transformation (JWT)~\cite{jwt} which maps the second-quantized operator $a_{j \sigma}$ to an operator on $j$ qubits according to \beq a_{j \sigma} \to\left( \prod_{k=1}^{j-1} Z_{k \sigma} \right)\frac{X_{j \sigma}-i Y_{j \sigma}}{2} \eeq so that \hbox{$a^{\dagger}_{j \sigma} a_{j \sigma} = (\mathbb{1}+Z_{j \sigma})/2$}. To write the Fermi-Hubbard Hamiltonian in the form of Eq.\eqref{eq:basic}, we rewrite the JWT as \begin{align} a_{j \sigma} \to\left( \prod_{k=1}^{j-1} Z_{k \sigma} \right)\frac{\mathbb{1}+Z_{j \sigma}}{2}X_{j \sigma}\,, \quad a_{j \sigma}^\dagger \to\left( \prod_{k=1}^{j-1} Z_{k \sigma} \right)\frac{\mathbb{1}-Z_{j \sigma}}{2}X_{j \sigma}. \end{align} Applying the transformation to the Hamiltonian, Eq.~\eqref{eq:FH-H-intro}, we arrive at: \beq\label{eq:fh_ham_off} H= D_0 +\sum_{\langle i j \rangle\sigma} D_{ij\sigma} X_{i\sigma} X_{j\sigma} \,, \eeq where we have identified \begin{align} D_0 = \frac{U}{4} \sum_{j=1}^N (\mathbb{1}+Z_{j \uparrow})(\mathbb{1}+Z_{j \downarrow}) \quad {\textrm{and}} \quad D_{ij\sigma} = -\frac1{2} t_{\textrm h} \prod_{k=i}^{j} Z_{k\sigma}\,. \end{align} The product structure of $D_{ij\sigma}$ implies that their max-norm is simply given by $t_{\textrm h}$ for all $i,j,\sigma$. The number of off-diagonal terms is $M=N d$. Therefore the dimensionless time $T$ of the simulation algorithm is $T=t M t_{\textrm h}=t N d t_{\textrm h}$. For comparison, in the Taylor series decomposition, the number of terms in the Hamiltonian is $L=3N+2 M$, and the dimensionless time parameter is $T' \sim 3 N U t +2 T$. Note that due to the independence of $T$ on the on-site repulsion strength $U$, the off-diagonal expansion algorithm offers a favorable scaling as compared to the Taylor series-based LCU in the Mott-insulating regime $U\gg t_{\textrm h}$. \subsection{Hamiltonian simulation of electronic structure} Another model of major practical relevance is the simulation of electronic structure in the framework of which the stationary properties of electrons interacting via Coulomb forces in an external potential are of interest. This problem was recently analyzed in detail in Ref.~\cite{PhysRevX.8.011044}, where a `plane wave dual basis Hamiltonian' formulation was proposed, which diagonalizes the potential operator leading to a Hamiltonian representation with ${\cal O}(N^2)$ second-quantized terms, where $N$ is the number of basis functions. 
Using JWT to map the model to qubits, one arrives at \begin{align} \label{eq:jw_ham} H & = \sum_{\substack{p, \sigma \\ \nu \neq 0}}\left(\frac{\pi}{\Omega \, k_\nu^2} - \frac{k_\nu^2}{4 \, N} + \frac{2\pi}{\Omega} \sum_{j}\zeta_j \frac{\cos\left[k_\nu \cdot \left(R_j-r_p\right)\right]}{k_\nu^2}\right) Z_{p,\sigma}\\ & + \frac{\pi}{2\,\Omega } \sum_{\substack{(p, \sigma) \neq (q, \sigma') \\ \nu \neq 0}} \frac{\cos \left[k_\nu \cdot r_{p-q}\right]}{k_\nu^2} Z_{p,\sigma} Z_{q,\sigma'}+ \sum_{\nu \neq 0} \left(\frac{k_\nu^2}{2}- \frac{\pi \, N}{\Omega \, k_\nu^2} \right) \mathbb{1}\nonumber\\ & + \frac{1}{4\, N} \sum_{\substack{p \neq q \\ \nu\neq0, \sigma}} k_\nu^2 \cos \left[k_\nu \cdot r_{q - p} \right] \left(X_{p,\sigma} Z_{p + 1,\sigma} \cdots Z_{q - 1,\sigma} X_{q,\sigma} + Y_{p,\sigma} Z_{p + 1,\sigma} \cdots Z_{q - 1,\sigma} Y_{q,\sigma} \right)\nonumber, \end{align} where, $R_j$ and $r_p$ denote nuclei and electron coordinates, respectively, $\zeta_j$ are nuclei charges and $k_{\nu}$ is a vector of the plane wave frequencies at the $\nu$-th harmonic of the computational cell in three dimensions whose volume we denote by $\Omega$ (see Ref.~\cite{PhysRevX.8.011044}). The permutation matrix representation dictates that we write the Hamiltonian above as \beq\label{eq:es_ham_off} H= D_0 + \sum_{p \neq q, \sigma} D_{pq\sigma} X_{p\sigma} X_{q\sigma} \eeq where all the diagonal terms are grouped together to form \begin{align} \label{eq:jw_ham_D0} D_0 & = \sum_{\substack{p, \sigma \\ \nu \neq 0}}\left(\frac{\pi}{\Omega \, k_\nu^2} - \frac{k_\nu^2}{4 \, N} + \frac{2\pi}{\Omega} \sum_{j}\zeta_j \frac{\cos\left[k_\nu \cdot \left(R_j-r_p\right)\right]}{k_\nu^2}\right) Z_{p,\sigma}\\ &+ \frac{\pi}{2\,\Omega } \sum_{\substack{(p, \sigma) \neq (q, \sigma') \nu \neq 0}} \frac{\cos \left[k_\nu \cdot r_{p-q}\right]}{k_\nu^2} Z_{p,\sigma} Z_{q,\sigma'} + \sum_{\nu \neq 0} \left(\frac{k_\nu^2}{2}- \frac{\pi \, N}{\Omega \, k_\nu^2} \right) \mathbb{1}\nonumber, \end{align} and are integrated out of the LCU. Off-diagonal ($p\neq q$) terms are also grouped as \begin{eqnarray} D_{pq\sigma} = \frac{1}{4 N} \sum_{\nu\neq0} k_\nu^2 \cos \left[k_\nu \cdot r_{q - p} \right] \left( Z_{p + 1,\sigma} \cdots Z_{q - 1,\sigma} \right) \left( \mathbb{1}_{pq}+Z_{p\sigma} Z_{q \sigma}\right)\,. \end{eqnarray} We notice that in the off-diagonal representation, the Hamiltonian of Eq.~\eqref{eq:es_ham_off} has a structure similar to that of Eq.~\eqref{eq:fh_ham_off}, with $k_{\rm od}=2$. Similar to the Fermi-Hubbard model, each $D_{ij\sigma}$ has a product structure and their max-norm is simply given by $\frac{1}{2 N} \vert\sum_{\nu} k_\nu^2 \cos \left[k_\nu \cdot r_{q - p} \right] \vert$ for all $p,q,\sigma$. The number of terms in the off-diagonal part of the Hamiltonian in this representation is $M=2(N^2-N)$, and thus the dimensionless time $T$ of the simulation algorithm is \beq T=t (N-1) \sum_{p \neq q} \Bigl\vert\sum_{\nu\neq0} k_\nu^2 \cos \left[k_\nu \cdot r_{q - p} \right] \Bigr\vert. 
\eeq For comparison, in the Taylor series-based LCU approach the number of terms in the Hamiltonian is $L=2N+6(N^2-N)$, and the dimensionless time parameter is \begin{align} T'= t\Biggl( \sum_{\substack{p \neq q \\ \nu \neq 0}} \Bigl(\frac{\pi}{\Omega } \frac{1}{k_\nu^2}+ \frac{1}{2\, N} k_\nu^2\Bigr) \Bigl\vert\cos \left[k_\nu \cdot r_{q - p} \right] \Bigr\vert+ 2 \sum_{\substack{p \\ \nu \neq 0}}\Bigl\vert\frac{\pi}{\Omega \, k_\nu^2} - \frac{k_\nu^2}{4 \, N} + \frac{2\pi}{\Omega} \sum_{j}\zeta_j \frac{\cos\left[k_\nu \cdot \left(R_j-r_p\right)\right]}{k_\nu^2} \Bigr\vert\Biggr). \end{align} In particular, the dimensionless parameter in the current scheme depends only on the magnitude of the two-electron interaction and can take values much smaller than $T'$ due to a `destructive interference' of the cosine terms evaluated at different values of $[k_\nu \cdot r_{q - p}]$. \subsection{The Schwinger model} The Schwinger model~\cite{PhysRev.128.2425} is an Abelian low-dimensional gauge theory describing two-dimensional (one spatial plus time) Euclidean quantum electrodynamics with a Dirac fermion. Despite being a simplified model, the theory exhibits rich properties, similar to those seen in more complex theories such as QCD (e.g., confinement and spontaneous symmetry breaking). The model can be converted to an equivalent spin model~\cite{PhysRevResearch.2.023015,sc1,sc2} whose Hamiltonian is \begin{eqnarray} H = \frac1{2 a^2 g^2}\sum_{i=1}^{N-1} (X_i X_{i+1} + Y_i Y_{i+1})+\frac{m}{a g^2} \sum_{i=1}^N (-1)^i Z_i +\sum_{i=1}^{N-1} \left[\epsilon_0\mathbb{1}+\frac{1}{2}\sum_{j=1}^i \left( Z_j+(-\mathbb{1})^j\right) \right]^2\,, \end{eqnarray} where $\epsilon_0$ is a constant (that can be set to zero), $g, m$ and $a$ are the fermion-gauge field coupling, mass and lattice spacing, respectively and $N$ is the number of lattice sites. In permutation matrix representation, the Hamiltonian is written as \hbox{$H=D_0 + \sum_i D_i X_i X_{i+1}$} where the diagonal component $D_0$ is given by \beq D_0=\frac{m}{a g^2} \sum_{i=1}^N (-1)^i Z_i+\sum_{i=1}^{N-1} \left[\epsilon_0\mathbb{1}+\frac{1}{2}\sum_{j=1}^i \left( Z_j+(-\mathbb{1})^j\right) \right]^2 \eeq and \hbox{$D_i =1/(2 a^2 g^2)(\mathbb{1}-Z_i Z_{i+1})$}. It follows then that the number of off-diagonal terms is $M=N$ and the off-diagonal dimensionless time is $T=t N/(2 a^2 g^2)$. For comparison, in the Taylor series-based LCU approach the number of terms $L$ to which the Hamiltonian is decomposed is proportional to $N^2$ due to the diagonal term, and the dimensionless time parameter $T'$ scales as {${\cal O} (t(N^2 + m N/(a g^2)+ N/(a^2 g^2)))$}. We thus find that the off-diagonal formulation provides in this case a scaling advantage over a Taylor series-based approach. \section{Summary and conclusions\label{sec:summary}} We proposed a quantum algorithm for simulating the dynamics of general time-independent Hamiltonians. Our approach consisted of expanding the time evolution operator using an off-diagonal series; a parameter-free Trotter error-free method that was recently developed in the context of quantum Monte Carlo simulations~\cite{ODE,ODE2,pmr}. This expansion enabled us to simulate the time evolution of states under general Hamiltonians using alternating segments of diagonal and off-diagonal evolutions, with the latter implemented using the LCU technique~\cite{Berry1}. 
We argued that our scheme provides considerable savings in gate and qubit costs for certain classes of Hamiltonians, specifically Hamiltonians that are represented in a basis in which the diagonal component is dominant. In fact, we find that for optimal savings one should choose the basis of representation such that the norm of the off-diagonal component of the Hamiltonian is minimal. In this work, we focused only on time-independent Hamiltonians. The algorithm can be extended to the time-dependent case by writing the time-evolution operator in a Dyson series and appropriately discretizing the Dyson time integrals~\cite{timeDepHamSim}. We believe that further improvements to our algorithm can likely be made. In Appendix~\ref{app:alt}, we provide a slightly modified representation of the Hamiltonian which simplifies, to an extent, the circuit construction, specifically the implementation of the `classical' calculation $U_{\chi \phi}$, which requires ${\cal O}(Q)$ additional ancilla qubits beyond those required by the LCU. It would not be unreasonable to assume that it is possible to encode the entire classical calculation directly into phases, eliminating this extra cost. \section*{Acknowledgements} We thank Eleanor Rieffel for useful discussions and Yi-Hsiang Chen for valuable comments. Work by AK (quantum algorithm development) was supported by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES) under Award DE-SC0020280. Work by IH (off-diagonal series expansion and resource analysis) was supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research (ASCR) Quantum Computing Application Teams (QCATS) program, under field work proposal number ERKJ347. Part of this work was done while AK was at the Joint Center for Quantum Information and Computer Science (QuICS) at the University of Maryland. \bibliographystyle{unsrtnat}
\section{Introduction}\label{sec:intro} \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{figs/fig1.pdf} \captionsetup{font=small} \caption{ Overview of the method and latent space representation. We start from an original image $I_o$ that can be edited $t(\cdot)$ in various ways: its feature extraction $f(t(I_o))$ spawns the shaded region in the embedding space. The edited versions should be recoverable by nearest neighbor search on quantized representations. In the regular (non-active) case, $f(I_o)$ is quantized by the index as \includegraphics[width=0.5em]{figs/assets/c1.pdf}. When the image is edited, $t(I_o)$ switches cells and the closest neighbor returned by the index is the wrong one \includegraphics[width=0.5em]{figs/assets/c2.pdf}. In active indexing: $I_o$ is modified in an imperceptible way to generate $I^\star$ such that $f(I^\star)$ is further away from the boundary. When edited copies $f(t(I^\star))$ are queried, retrieval errors are significantly reduced. \label{fig:fig1}} \end{figure} The traceability of images on a media sharing platform is a challenge: they are widely used, easily edited and disseminated both inside and outside the platform. In this paper, we tackle the corresponding task of Image Copy Detection (ICD), \textit{i.e}.\@ finding whether an image already exists in the database; and if so, give back its identifier. ICD methods power reverse search engines, photography service providers checking copyrights, or media platforms moderating and tracking down malicious content (\textit{e.g}.\@ Microsoft's \cite{photodna} or Apple's \cite{neuralhash}). Image identification systems have to be robust to identify images that are edited (cropping, colorimetric change, JPEG compression \ldots) after their release~\citep{douze2021disc, wang2022benchmark}. The common approach for content-based image retrieval reduces images to high dimensional vectors, referred to as \emph{representations}. Early representations used for retrieval were hand-crafted features such as color histograms~\citep{swain1991color}, GIST~\citep{oliva2001modeling}, or Fisher Vectors~\citep{perronnin2010large}. As of now, a large body of work on self-supervised learning focuses on producing discriminative representations with deep neural networks, which has inspired recent ICD systems. In fact, \emph{all} submissions to the NeurIPS2021 Image Similarity challenge~\citep{papakipos2022results} exploit neural networks. They are trained to provide invariance to potential image transformations, akin to data augmentation in self-supervised learning. Scalability is another key requirement of image similarity search: searching must be fast on large-scale databases, which exhaustive vector comparisons cannot do. In practice, ICD engines leverage approximate neighbor search algorithms, that trade search accuracy against scalability. Approximate similarity search algorithms speed up the search by \emph{not} computing the exact distance between all representations in the dataset~\citep{johnson2019faiss, guo2020scann}. First they lower the number of scored items by partitioning the representation space, and evaluate the distances of only a few subsets. Second, they reduce the computational cost of similarity evaluation with quantization or binarization. These mechanisms make indexing methods subject to the curse of dimensionality. In particular, in high-dimensional spaces, vector representations lie close to boundaries of the partition~\citep{bohm2001searching}. 
Since edited versions of an original image have noisy vector representations, they sometimes fall into different subsets or are not quantized the same way by the index. All in all, this makes approximate similarity search very sensitive to perturbations of the edited image representations, which causes images to evade detection. In this paper, we introduce a method that improves similarity search on large databases, provided that the platform or photo provider can modify the images before their release (see Fig.~\ref{fig:fig1}). We put the popular saying ``attack is the best form of defense'' into practice by applying image perturbations and drawing inspiration from adversarial attacks. Indeed, representations produced with neural networks are subject to \emph{adversarial examples}~\citep{szegedy2013intriguing}: small perturbations of the input image can lead to very different vector representations, making it possible to create adversarial queries that fool image retrieval systems~\citep{liu2019whos, tolias2019targeted,dolhansky2020adversarial}. In contrast, we modify an image to make it \emph{more} indexing friendly. With minimal changes in the image domain, the image representation is pushed towards the center of the indexing partition, raising the odds that edited versions will remain in the same subset. This property is obtained by minimizing an indexation loss by gradient descent back to the image pixels, as for adversarial examples. For indexing structures based on product quantization~\citep{jegou2010pq}, this strategy amounts to pushing the representation closer to its quantized codeword, in which case the indexation loss is simply measured by the reconstruction error. Since the image quality is an important constraint here, the perturbation is shaped by perceptual filters to remain invisible to the human eye. Our contributions are: \begin{itemize}[leftmargin=1cm,itemsep=0cm,topsep=-0.1cm] \item a new approach to improve ICD and retrieval, when images can be changed before release; \item an adversarial image optimization scheme that adds minimal perceptual perturbations to images in order to reduce reconstruction errors, and improve vector representations for indexing; \item experimental evidence that the method significantly improves index performance. \end{itemize} \section{Related Work} \label{sec:related} \paragraph{Image watermarking} hides a message in a host image, such that it can be reliably decoded even if the host image is edited. Early methods directly embed the watermark signal in the spatial or transform domain like DCT or DWT~\citep{cox2007digital}. Recently, deep-learning based methods jointly train an encoder and a decoder to learn how to watermark images~\citep{zhu2018hidden,ahmadi2020redmark,zhang2020udh}. Watermarking is an alternative technology for ICD. Our method bridges indexing and watermarking, where the image is modified before publication. Regarding retrieval performance, active indexing is more robust than watermarking. Indeed, the embedded signal reinforces the structure naturally present in the original image, whereas watermarking has to hide a large secret keyed signal independent of the original feature. App. \ref{sec:watermarking} provides a more thorough discussion and experimental results comparing indexing and watermarking. \paragraph{Active fingerprint} is more related to our work. As far as we know, this concept was invented by Voloshynovskiy \textit{et al}.\@ ~\cite{voloshynovskiy2012active}. 
They consider that the image $I\in \mathbb{R}^N$ is mapped to $x \in \mathbb{R}^N$ by an invertible transform $W$ such that $WW^\top = \mathrm{Id}$. The binary fingerprint is obtained by taking the sign of the projections of $x$ onto a set of vectors $b_1,\ldots, b_L \in \mathbb{R}^N$ (à la LSH). Then, they change $x$ to strengthen the amplitude of these projections so that their signs become more robust to noise. They recover $I^\star$ with $W^\top$. This scheme is applied to image patches in~\citep{7533094} where the performance is measured as a bit error rate after JPEG compression. Our paper adapts this idea from fingerprinting to indexing, with modern deep learning representations and state-of-the-art indexing techniques. The range of transformations is also much broader and includes geometric transforms. \section{Preliminaries: Representation Learning and Indexing} For the sake of simplicity, the exposition focuses on image representations from SSCD networks~\citep{pizzi2022sscd} and the indexing technique IVF-PQ~\citep{jegou2010pq}, since both are typically used for ICD. Extensions to other methods can be found in Sec.~\ref{sec:generalization}. \subsection{Deep descriptor learning} Metric embedding learning aims to learn a mapping $f: \mathbb{R}^{c\times h\times w} \to \mathbb{R}^d$, such that measuring the similarity between images $I$ and $I'$ amounts to computing the distance $\norm{f(I) - f(I')}$. In recent works, $f$ is typically a neural network trained with self-supervision on raw data to learn metrically meaningful representations. Methods include contrastive learning~\citep{chen2020simclr}, self-distillation~\citep{grill2020bootstrap, caron2021dino}, or masking random patches of images~\citep{he2022masked, assran2022masked}. In particular, SSCD~\citep{pizzi2022sscd} is a training method specialized for ICD. It employs the contrastive self-supervised method SimCLR~\citep{chen2020simclr} and entropy regularization~\citep{sablayrolles2018catalyser} to improve the distribution of the representations. \subsection{Indexing} Given a dataset $\mathcal X = \{x_i\}_{i=1}^{n}\subset \mathbb{R}^d$ of $d$-dimensional vector representations extracted from $n$ images and a query vector $x_q$, we consider the indexing task that addresses the problem: \begin{align} x^* := \mathop{\mathrm{argmin}}_{x \in \mathcal X} \; \norm{x - x_q}. \end{align} This exact nearest neighbor search is not tractable over large-scale databases. Approximate search algorithms lower the number of scored items thanks to space partitioning and/or accelerate the computations of distances thanks to quantization and pre-computation. \paragraph{Space partitioning and cell-probe algorithms.} As a first approximation, nearest neighbors are sought only within a fraction of $\mathcal{X}$: at indexing time, $\mathcal{X}$ is partitioned into $\mathcal X = \bigcup_{i=1}^{b} \mathcal{X}_i$. At search time, an algorithm $Q: \mathbb{R}^d \to \{1,..,b\}^{k'}$ determines a subset of ${k'}$ buckets in which to search, such that ${k'}=|Q(x_q)| \ll b$, yielding the approximation: \begin{align} \mathop{\mathrm{argmin}}_{x \in \mathcal X} \; \norm{x-x_q} \approx \mathop{\mathrm{argmin}}_{x \in \bigcup_{i\in Q(x_q)} \mathcal{X}_i} \; \norm{x-x_q}. \end{align} A well-known partition is the KD-tree~\citep{bentley1975kdtree} that divides the space along predetermined directions. 
Subsequently, locality sensitive hashing (LSH)~\citep{indyk1998lsh, gionis1999lsh} and derivatives~\citep{datar2004lsh,pauleve2010locality} employ various hash functions for bucket assignment, which implicitly partitions the space. We focus on the popular clustering and Inverted Files methods~\citep{sivic2003video}, herein denoted by IVF. They employ a codebook $\mathcal{C} = \{c_i\}_{i=1}^{k}\subset\mathbb{R}^d$ of $k$ centroids (also called ``visual words'' in a local descriptor context), for instance learned with k-means over a training set of representations. Then, $Q$ associates $x$ to its nearest centroid $q_\mathrm{c}(x)$ such that the induced partition is the set of the $k$ Voronoï cells. When indexing $x$, the IVF stores $x$ in the bucket associated with $c_i=q_\mathrm{c}(x)$. When querying $x_q$, IVF searches only the ${k'}$ buckets associated with centroids $c_i$ nearest to $x_q$. \paragraph{Efficient metric computation and product quantization.} Another approximation comes from compressed-domain distance estimation. Vector Quantization (VQ) maps a representation $x \in \mathbb{R}^d$ to a codeword $q_\mathrm{f}(x) \in \mathcal{C} = \{C_i\}_{i=1}^{K}$. The function $q_\mathrm{f}$ is often referred to as a \emph{quantizer} and $C_i$ as a \emph{reproduction value}. The vector $x$ is then stored as an integer in $\{1, .., K\}$ corresponding to $q_\mathrm{f}(x)$. The distance between $x$ and query $x_q$ is approximated by $\norm{q_\mathrm{f}(x) - x_q}$, which is an ``asymmetric'' distance computation (ADC) because the query is not compressed. This leads to: \begin{align} \mathop{\mathrm{argmin}}_{x \in \mathcal X} \; \norm{x-x_q} \approx \mathop{\mathrm{argmin}}_{x \in \mathcal X} \; \norm{q_\mathrm{f}(x)- x_q} . \end{align} Binary quantizers (a.k.a.\@ sketches~\citep{charikar2002similarity}) lead to efficient computations but inaccurate distance estimates~\citep{weiss2008spectral}. Product Quantization (PQ)~\citep{jegou2010pq} or derivatives \cite{ge2013optimized} offer better estimates. In PQ, a vector $x\in \mathbb{R}^d$ is split into $m$ subvectors in $\mathbb{R}^{d/m}$: $x=(x^1, \ldots, x^m)$. The product quantizer then quantizes the subvectors: $q_\mathrm{f}: x \mapsto (q^1(x^1), \ldots, q^m(x^m))$. If each subquantizer $q^j$ has $K_s$ reproduction values, the resulting quantizer $q_\mathrm{f}$ has a large number $K=(K_s)^m$ of reproduction values. The squared distance estimate is decomposed as: \begin{align} \norm{q_\mathrm{f}(x)-x_q}^2 = \sum_{j=1}^m \norm{q^j(x^j)-x_q^j}^2. \end{align} This is efficient since $x$ is stored by the index as $q_\mathrm{f} (x)$ which has $m\log_2 K_s$ bits, and since summands can be precomputed without requiring decompression at search time. \section{Active Indexing}\label{section:method} Active indexing takes as input an image $I_o$, adds the image representation to the index and outputs an activated image $I^\star$ with better traceability properties for the index. It makes the feature representation produced by the neural network more compliant with the indexing structure. The activated image is the one that is disseminated on the platform; therefore, the alteration must not degrade the perceived quality of the image. Images are activated by an optimization on their pixels. 
The general optimization problem reads: \begin{align} I^\star := \mathop{\mathrm{argmin}}_{I \in \mathcal{C}(I_o)} \; \mathcal{L}\left(I;I_o\right), \label{eq:active_image} \end{align} where $\mathcal{L}$ is an indexation loss dependent on the indexing structure, $\mathcal{C}(I_o)$ is the set of images perceptually close to $I_o$. Algorithm~\ref{alg:1} and Figure~\ref{fig:fig1} provide an overview of active indexing. \begin{wrapfigure}{R}{0.45\textwidth} \vspace{-0.7cm} \resizebox{1.0\linewidth}{!}{ \begin{minipage}{0.5\textwidth} \begin{algorithm}[H] \caption{Active indexing for IVF-PQ} \label{alg:1} \begin{algorithmic} \State \textbf{Input}: $I_o$: original image; $f$: feature extractor; \State Add $x_o = f(I_o)$ to Index, get $q(x_o)$; \State Initialize $\delta_0 = 0_{(c\times h\times w)}$; \For{$t = 0, ..., N-1$} \State $I_t \gets I_o + \alpha \,.\, H_{\mathrm{JND}}(I_o) \odot \mathrm{tanh}(\delta_t)$ \State $x_{t}\gets f(I_{t})$ \State $\mathcal{L} \gets \mathcal L _{\mathrm{f}} (x_{t}, q(x_o)) + \lambda \mathcal L _{\mathrm{i}} (\delta_t)$ \State $\delta_{t+1} \gets \delta_t + \eta \times \mathrm{Adam}(\mathcal{L})$ \EndFor \State \textbf{Output}: $I^\star=I_N$ activated image \end{algorithmic} \end{algorithm} \end{minipage} } \end{wrapfigure} \subsection{Image optimization dedicated to IVF-PQ (``activation'')} The indexing structure IVF-PQ involves a coarse quantizer $q_\mathrm{c}$ built with k-means clustering for space partitioning, and a fine product quantizer $q_\mathrm{f}$ on the residual vectors, such that a vector $x \in \mathbb{R}^d$ is approximated by $q(x) = q_\mathrm{c}(x) + q_\mathrm{f}\left( x-q_\mathrm{c}(x) \right)$. We solve the optimization problem~\eqref{eq:active_image} by iterative gradient descent, back-propagating through the neural network back to the image. The method is classically used in adversarial example generation~\citep{szegedy2013intriguing, carlini2017c&w} and watermarking~\citep{vukotic2020classification, fernandez2022sslwatermarking}. Given an original image $I_o$, the loss is an aggregation of the following objectives: \begin{align} \label{eq:objective} & \mathcal L _{\mathrm{f}} (x,q(x_o)) = \norm{x - q(x_o)}^2 \textrm{\qquad with } x_o = f(I_o) ,\, x = f(I) \\ & \mathcal L _{\mathrm{i}} (I,I_o) = \norm{I - I_o}^2. \end{align} $ \mathcal L _{\mathrm{i}} $ is a regularization on the image distortion. $ \mathcal L _{\mathrm{f}} $ is the indexation loss that operates on the representation space. $ \mathcal L _{\mathrm{f}} $ is the Euclidean distance between $x$ and the target $q(x_o)$ and its goal is to push the image feature towards $q(x_o)$. With IVF-PQ as index, the representation of the activated image gets closer to the quantized version of the original representation, but also closer to the coarse centroid. Finally, the losses are combined as $\mathcal{L}(I;I_o) = \mathcal L _{\mathrm{f}} (x,q(x_o)) + \lambda \mathcal L _{\mathrm{i}} (I,I_o)$. \subsection{Perceptual attenuation} It is common to optimize a perturbation $\delta$ added to the image, rather than the image itself. The adversarial example literature often considers perceptual constraints in the form of an $\ell_p$-norm bound applied on $\delta$ (\cite{madry2017towards} use $\norm{\delta}_\infty < \varepsilon = 8/255$). Although a smaller $\varepsilon$ makes the perturbation less visible, this constraint is not optimal for the human visual system (HVS), \textit{e.g}.\@ perturbations are more noticeable on flat than on textured areas of the image (see App.~\ref{subsec:linf}). 
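For concreteness, a minimal PyTorch-style sketch of the activation loop of Algorithm~\ref{alg:1} is given below; the perceptual heatmap $H_{\mathrm{JND}}$ it consumes is described in the next paragraph, and the function and argument names, as well as the shapes and batching conventions, are our own illustrative choices rather than the reference implementation.
\begin{verbatim}
import torch

def activate(image, extractor, target, jnd_map,
             alpha=3.0, lam=1.0, lr=1.0, steps=10):
    # image:     original image I_o (tensor, shapes are schematic)
    # extractor: frozen, differentiable feature extractor f (e.g. SSCD)
    # target:    quantized representation q(f(I_o)) read from the index
    # jnd_map:   perceptual heatmap H_JND(I_o), same shape as image
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        candidate = image + alpha * jnd_map * torch.tanh(delta)
        feat = extractor(candidate)
        loss_f = (feat - target).pow(2).sum()      # push f(I) towards q(x_o)
        loss_i = (candidate - image).pow(2).sum()  # distortion regularization
        loss = loss_f + lam * loss_i
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (image + alpha * jnd_map * torch.tanh(delta)).detach()
\end{verbatim}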
We employ a handcrafted perceptual attenuation model based on a Just Noticeable Difference (JND) map~\citep{wu2017enhanced}, which adjusts the perturbation intensity according to luminance and contrast masking. Given an image $I$, the JND map $H_{\mathrm{JND}}(I)\in \mathbb{R}^{c\times h \times w}$ models the minimum difference perceivable by the HVS at each pixel, and additionally rescales the perturbation channel-wise since the human eye is more sensitive to red and green than to blue color shifts (see App.~\ref{sec:perceptual} for details). The relation linking the image $I$ fed to $f$, the perturbation $\delta$ being optimized, and the original image $I_o$ reads:
\begin{align}
I = I_o + \alpha \,.\, H_{\mathrm{JND}}(I_o) \odot \mathrm{tanh}(\delta),
\label{eq:scaling}
\end{align}
with $\alpha$ a global scaling parameter that controls the strength of the perturbation and $\odot$ the pointwise multiplication. Coupled with the regularization $ \mathcal L _{\mathrm{i}} $~\eqref{eq:objective}, it enforces that the activated image is perceptually similar to the original, \textit{i.e}.\@ $I^\star\in \mathcal{C}(I_o)$ as required in~\eqref{eq:active_image}.

\subsection{Impact on the indexing performance}
Figure~\ref{fig:fig1} illustrates that the representation of the activated image gets closer to the reproduction value $q(f(I_o))$, and farther away from the Voronoï boundary. This is expected to make image similarity search more robust because (1) it decreases the probability that $x=f(t(I_o))$ ``falls'' outside the bucket; and (2) it lowers the distance between $x$ and $q(x)$, improving the PQ distance estimate.
Besides, by design, the representation stored by the index is invariant to the activation. Formally stated, consider two images $I$, $J$, and one activated version $J^\star$ together with their representations $x,y,y^\star$. When querying $x=f(I)$, the distance estimate is $\norm{q(y^\star)- x} = \norm{q(y)- x}$, so the index is oblivious to the change $J\rightarrow J^\star$. This means that the structure can index passive and activated images at the same time. Retrieval of activated images is more accurate, but the performance on passive images does not change. This compatibility property makes it possible to select only a subset of images to activate, and also to activate already-indexed images at any time.

\section{Experimental Results}
\subsection{Experimental setup}\label{sec:experimental}
\paragraph{Dataset.} We use DISC21~\citep{douze2021disc}, a dataset dedicated to ICD. It includes 1M reference images and 50k query images, 10k of which are true copies from reference images. A disjoint 1M-image set with the same distribution as the reference images is given for training. Image resolutions range from 200$\times$200 to 1024$\times$1024 pixels (most of the images are around 1024$\times$768 pixels).
The queries used in our experiments are \emph{not} the queries in DISC21, since we need to control the image transformations in our experiments, and most transformations of DISC21 were done manually, so they are not reproducible. Our queries are transformations of images \emph{after active indexing}. These transformations range from simple attacks like rotation to more realistic social network transformations, similar to those used to create the original DISC21 queries (see App. \ref{subsec:dataset}).
\paragraph{Metrics.}
For retrieval, our main metric is Recall $1$@$1$ ($R$@$1$\@ for simplicity), which corresponds to the proportion of positive queries for which the top-1 retrieved result is the reference image.
For copy detection, we use the same metric as the NeurIPS Image Similarity Challenge~\citep{douze2021disc}. We retrieve the $k=10$ most similar database features for every query, and we declare a pair a match if the distance is below a threshold $\tau$. To evaluate detection efficiency, we use the above-mentioned 10k matching queries together with 40k negative queries (\textit{i.e}.\@ not included in the database). We use precision and recall, as well as the area under the precision-recall curve, which is equivalent to the micro average precision (\emph{$\mu$AP}). While $R$@$1$\@ only measures the ranking quality of the index, $\mu$AP takes into account the confidence of a match.
As for image quality metrics, we use the Peak Signal-to-Noise Ratio (PSNR), defined as $10\log_{10} \left( 255^2 / \mathrm{MSE}(I, I') \right)$, as well as SSIM~\citep{wang2004ssim} and the norm $\norm{I-I'}_\infty$.
\paragraph{Implementation details.}\label{par:details}
The evaluation procedure is: (1) we train an index on the 1M training images; (2) index the 1M reference images; (3) activate (or not) 10k images from this reference set; (4) at search time, we use the index to get the closest neighbors (and their distances) of transformed versions of these 10k query images. Unless stated otherwise, we use an IVF4096,PQ8x8 index (IVF quantizer with 4096 centroids, and PQ with 8 subquantizers of $2^8$ centroids), and use only one probe in the IVF search for shortlist selection ($k'=1$). Compared to a realistic setting, we deliberately use an indexing method that severely degrades the learned representations, in order to showcase and analyze the effect of active indexing.
For feature extraction, we use an SSCD model with a ResNet50 trunk~\citep{he2016resnet}. It takes images resized to 288$\times$288 and generates normalized representations in $\mathbb{R}^{512}$.
Optimization~\eqref{eq:active_image} is done with the Adam optimizer~\citep{kingma2014adam}; the learning rate is set to $1$, the number of iterations to $N=10$ and the regularization to $\lambda=1$. In~\eqref{eq:scaling}, the distortion scaling is set to $\alpha=3$ (leading to an average PSNR around $43$~dB). In this setup, activating 128 images takes around 6s ($\approx$ 40ms/image) with a 32GB GPU. It can be sped up at the cost of some accuracy (see App.~\ref{sec:speedup}).

\subsection{Active vs. Passive}\label{sec:act_vs_passive}
This section compares the retrieval performance of active and passive indexing. We evaluate $R$@1 when different transformations are applied to the 10k reference images before search. The ``Passive'' lines of Tab.~\ref{tab:act_vs_pas_retrieval} show how the IVF-PQ degrades the recall. This is expected, but the IVF-PQ also accelerates search 500$\times$ and the index is 256$\times$ more compact, which is necessary for large-scale applications.
Edited images are retrieved more often when they were activated for the index: the increase reaches $+0.6$ in $R$@$1$\@ for strong brightness and contrast changes, close to the results of the brute-force search. We also notice that the performance of the active IVF-PQ$^{k'=1}$ is approximately the same as that of the passive IVF-PQ$^{k'=16}$, meaning that the search can be made more efficient at equal performance. For the IVF-PQ$^\dagger$ that does less approximation in the search (but is slower and takes more memory), retrieval on activated images is also improved, though to a lesser extent.
\begin{table}[t]
\centering
\captionsetup{font=small}
\caption{ Comparison of the index performance between activated and passive images. The search is done on a 1M image set and $R$@1 is averaged over 10k query images submitted to different transformations before search. \textbf{Random}: randomly apply 1 to 4 transformations. \textbf{Avg.}: average over the transformations presented in the table (details in App. \ref{subsec:transformations}). \textbf{No index}: exhaustive brute-force nearest neighbor search. \textbf{IVF-PQ}: \textsc{IVF4096,PQ8x8} index with $k'$=1 (16 for \textbf{IVF-PQ}$^{16}$). \textbf{IVF-PQ}$^\dagger$: \textsc{IVF512,PQ32x8} with $k'=32$. }
\label{tab:act_vs_pas_retrieval}
\vspace{-0.3cm}
\resizebox{1.0\linewidth}{!}{
\begingroup
\setlength{\tabcolsep}{4pt}
\def\arraystretch{1.1}
\begin{tabular}{ l |l| l| c| *{15}{p{0.04\textwidth}}}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{\rot{Search (ms)}} & \multicolumn{1}{c}{\rot{Bytes/vector}} & \multicolumn{1}{c}{\rot{Activated}} & \rot{Identity} & \rot{Contr. 0.5} & \rot{Contr. 2.0} & \rot{Bright. 0.5} & \rot{Bright. 2.0} & \rot{Hue 0.2} & \rot{Blur 2.0} & \rot{JPEG 50} & \rot{Rot. 25} & \rot{Rot. 90} & \rot{Crop 0.5} & \rot{Resi. 0.5} & \rot{Meme} & \rot{Random} & \rot{Avg.} \\ \midrule
No index & 252 & 2048 & \ding{55} & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 0.90 & 0.99 \\ \midrule
& & & \ding{55} & 1.00 & 0.73 & 0.39 & 0.73 & 0.28 & 0.62 & 0.48 & 0.72 & 0.07 & 0.14 & 0.14 & 0.72 & 0.14 & 0.13 & 0.45 \\
\rowcolor{apricot!30} \cellcolor{white!0} \multirow{-2}{*}{IVF-PQ} & \multirow{-2}{*}{0.38} \cellcolor{white!0} & \multirow{-2}{*}{8} \cellcolor{white!0} & \checkmark & 1.00 & 1.00 & 0.96 & 1.00 & 0.92 & 1.00 & 0.96 & 0.99 & 0.10 & 0.50 & 0.29 & 1.00 & 0.43 & 0.32 & 0.75 \\ \midrule
& & & \ding{55} & 1.00 & 1.00 & 0.90 & 1.00 & 0.78 & 0.99 & 0.95 & 0.99 & 0.35 & 0.57 & 0.57 & 1.00 & 0.56 & 0.39 & 0.79 \\
\rowcolor{apricot!30} \cellcolor{white!0} \multirow{-2}{*}{IVF-PQ$^{16}$} & \multirow{-2}{*}{0.42} \cellcolor{white!0} & \multirow{-2}{*}{8} \cellcolor{white!0} & \checkmark & 1.00 & 1.00 & 1.00 & 1.00 & 0.98 & 1.00 & 1.00 & 1.00 & 0.43 & 0.88 & 0.75 & 1.00 & 0.84 & 0.50 & 0.88 \\ \midrule
& & & \ding{55} & 1.00 & 1.00 & 0.99 & 1.00 & 0.95 & 1.00 & 0.99 & 1.00 & 0.72 & 0.87 & 0.88 & 1.00 & 0.87 & 0.61 & 0.92 \\
\rowcolor{apricot!30} \cellcolor{white!0} \multirow{-2}{*}{IVF-PQ$^\dagger$} & \multirow{-2}{*}{1.9} \cellcolor{white!0} & \multirow{-2}{*}{32} \cellcolor{white!0} & \checkmark & 1.00 & 1.00 & 0.99 & 1.00 & 0.98 & 1.00 & 1.00 & 1.00 & 0.75 & 0.92 & 0.91 & 1.00 & 0.92 & 0.63 & 0.94 \\ \bottomrule
\end{tabular}
\endgroup
}
\vspace{-0.3cm}
\end{table}
As for copy detection, Figure~\ref{fig:prc} gives the precision-recall curves obtained for a sliding value of $\tau$, and the corresponding $\mu$AP. Again, we observe a significant increase ($\times 2$) in $\mu$AP with active indexing. Note that the detection performance is much weaker than that of the brute-force search, even in the active case, because of the strong approximation made by space partitioning (more details in Sec.~\ref{sec:space_partitioning}). Examples of activated images are given in Fig.~\ref{fig:qualitative_short} (more in App. \ref{sec:more_qualitative}), while the image quality metrics are as follows: PSNR$=43.8\pm 2.2$~dB, SSIM$=0.98 \pm 0.01$, and $\norm{I-I'}_\infty=14.5 \pm 1.2$. These results are computed on 10k images; the $\pm$ indicates the standard deviation.
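For reference, the $\mu$AP is the area under the precision-recall curve computed over a sliding threshold $\tau$. The sketch below uses toy synthetic distances and keeps a single candidate match per query, whereas the actual evaluation aggregates the retrieved pairs of the 10k matching and 40k negative queries; it is only meant to illustrate the metric.
\begin{verbatim}
import numpy as np
from sklearn.metrics import average_precision_score, precision_recall_curve

rng = np.random.default_rng(0)
# Toy stand-ins: distance of the best retrieved match for each query, and a
# binary label telling whether that match is the true reference image.
pos = rng.normal(0.8, 0.3, 10000)         # matching queries: smaller distances
neg = rng.normal(1.5, 0.3, 40000)         # negative queries
distances = np.concatenate([pos, neg])
labels = np.concatenate([np.ones(10000), np.zeros(40000)])

scores = -distances                        # smaller distance = higher confidence
mu_ap = average_precision_score(labels, scores)
precision, recall, thresholds = precision_recall_curve(labels, scores)
print(f"muAP = {mu_ap:.3f}")
\end{verbatim}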
\subsection{Image quality trade-off}
For a fixed index and neural extractor, the performance of active indexing mainly depends on the scaling $\alpha$ that controls the activated image quality. In Fig. \ref{fig:psnr}, we repeat the previous experiment for different values of $\alpha$ and plot the $\mu$AP against the average PSNR. As expected, a lower PSNR implies a better $\mu$AP. For instance, at PSNR 30~dB, the $\mu$AP is three times higher than in the passive case. Indeed, for strong perturbations the objective function of \eqref{eq:objective} can be further lowered, further reducing the gap between representations and their quantized counterparts.
\begin{figure}[b]
\begin{minipage}{0.4\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{figs/exp/psnr_loss_muap.pdf}
\captionsetup{font=small}
\caption{PSNR trade-off. As the PSNR decreases, the {\color{orange}$\mu$AP\@ (orange)} gets better, because the {\color{blue} distance (blue)} between activated representations $x$ and $q(x)$ decreases.}
\label{fig:psnr}
\end{minipage}\hfill
\begin{minipage}{0.55\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{figs/exp/qualitative_short.jpg}
\captionsetup{font=small}
\caption[Caption]{Activated images. \emph{Left:} reference from DISC (\href{http://www.flickr.com/photos/11805179@N04/1816673349/}{R000643.jpg} and \href{http://www.flickr.com/photos/36449457@N00/8585081884/}{R000761.jpg}), \emph{middle:} activated image, \emph{right:} pixel-wise difference.}
\label{fig:qualitative_short}
\end{minipage}
\end{figure}

\subsection{Generalization}\label{sec:generalization}
\paragraph{Generalization to other neural feature extractors.}
We first reproduce the experiment of Sec.~\ref{par:details} with different extractors, which cover distinct training methods and architectures. Among them, we evaluate a ResNext101~\citep{xie2017aggregated} trained with SSCD~\citep{pizzi2022sscd}, a larger network than the ResNet50 used in our main experiments; the winner of the descriptor track of the NeurIPS ISC, \textsc{Lyakaap}-dt1~\citep{yokoo2021isc}, which uses an EfficientNetv2 architecture~\citep{tan2021efficientnetv2}; and networks from DINO~\citep{caron2021dino}, based either on ResNet50 or on ViT~\citep{dosovitskiy2020vit}, like the ViT-S model~\citep{touvron2021training}.
Table~\ref{tab:extractors} presents the $R$@$1$\@ obtained on 10k activated images when applying different transformations before search. The $R$@$1$\@ is better for activated images for all transformations and all neural networks. The average improvement over all transformations ranges from $+0.12$ for DINO ViT-s to $+0.30$ for SSCD ResNet50.
\begin{table}[t]
\centering
\captionsetup{font=small}
\caption{ $R$@$1$\@ for different transformations before search. We use our method to activate images for indexing with IVF-PQ, with different neural networks used as feature extractors. }
\label{tab:extractors}
\vspace{-0.3cm}
\resizebox{1.0\linewidth}{!}{
\begingroup
\setlength{\tabcolsep}{4pt}
\def\arraystretch{1.1}
\begin{tabular}{ l| l| c| *{15}{p{0.04\textwidth}}}
\multicolumn{1}{c}{\rot{Name}} & \multicolumn{1}{c}{\rot{Arch.}} & \multicolumn{1}{l}{\rot{Activated}} & \rot{Identity} & \rot{Contr. 0.5} & \rot{Contr. 2.0} & \rot{Bright. 0.5} & \rot{Bright. 2.0} & \rot{Hue 0.2} & \rot{Blur 2.0} & \rot{JPEG 50} & \rot{Rot. 25} & \rot{Rot. 90} & \rot{Crop 0.5} & \rot{Resi. 0.5} & \rot{Meme} & \rot{Random} & \rot{Avg.} \\ \midrule
& & \ding{55} & 1.00 & 0.73 & 0.39 & 0.73 & 0.28 & 0.62 & 0.48 & 0.72 & 0.07 & 0.14 & 0.14 & 0.72 & 0.14 & 0.13 & 0.45 \\
\rowcolor{apricot!30} \cellcolor{white!0} & \multirow{-2}{*}{ResNet50} \cellcolor{white!0}& \checkmark & 1.00 & 1.00 & 0.96 & 1.00 & 0.92 & 1.00 & 0.96 & 0.99 & 0.10 & 0.50 & 0.29 & 1.00 & 0.43 & 0.32 & 0.75 \\ \cmidrule{2-18}
& & \ding{55} & 1.00 & 0.88 & 0.68 & 0.88 & 0.57 & 0.84 & 0.46 & 0.79 & 0.46 & 0.63 & 0.53 & 0.80 & 0.48 & 0.28 & 0.66 \\
\rowcolor{apricot!30} \cellcolor{white!0} \multirow{-4}{*}{SSCD} & \multirow{-2}{*}{ResNext101} \cellcolor{white!0} & \checkmark & 1.00 & 1.00 & 0.96 & 1.00 & 0.90 & 0.99 & 0.77 & 0.97 & 0.53 & 0.85 & 0.64 & 1.00 & 0.74 & 0.37 & 0.84 \\ \midrule
& & \ding{55} & 1.00 & 0.66 & 0.65 & 0.65 & 0.52 & 0.71 & 0.52 & 0.82 & 0.07 & 0.20 & 0.51 & 0.84 & 0.62 & 0.18 & 0.57 \\
\rowcolor{apricot!30} \cellcolor{white!0} & \multirow{-2}{*}{ResNet50} \cellcolor{white!0} & \checkmark & 1.00 & 0.99 & 0.88 & 0.99 & 0.75 & 0.93 & 0.72 & 0.94 & 0.08 & 0.25 & 0.57 & 0.99 & 0.82 & 0.23 & 0.72 \\ \cmidrule{2-18}
& & \ding{55} & 1.00 & 0.89 & 0.71 & 0.86 & 0.64 & 0.75 & 0.74 & 0.90 & 0.14 & 0.18 & 0.57 & 0.88 & 0.61 & 0.25 & 0.65 \\
\rowcolor{apricot!30} \cellcolor{white!0} \multirow{-4}{*}{DINO} & \multirow{-2}{*}{ViT-s} \cellcolor{white!0} & \checkmark & 1.00 & 0.99 & 0.94 & 0.99 & 0.92 & 0.98 & 0.89 & 0.99 & 0.15 & 0.28 & 0.63 & 0.99 & 0.77 & 0.32 & 0.77 \\ \midrule
& & \ding{55} & 1.00 & 0.25 & 0.08 & 0.16 & 0.01 & 0.51 & 0.54 & 0.84 & 0.18 & 0.16 & 0.23 & 0.79 & 0.16 & 0.18 & 0.36 \\
\rowcolor{apricot!30} \cellcolor{white!0} \multirow{-2}{*}{ISC-dt1} & \multirow{-2}{*}{EffNetv2} \cellcolor{white!0} & \checkmark & 1.00 & 0.57 & 0.16 & 0.33 & 0.01 & 0.88 & 0.79 & 0.97 & 0.20 & 0.24 & 0.29 & 0.97 & 0.26 & 0.26 & 0.49 \\ \bottomrule
\end{tabular}
\endgroup
}
\end{table}
\vspace*{-0.1cm}
\paragraph{Generalization to other indexes.}
The method easily generalizes to other types of indexing structures, the only difference being in the indexation loss $ \mathcal L _{\mathrm{f}} $~\eqref{eq:objective}. We present some of them below:
\begin{itemize}[leftmargin=0.5cm,itemsep=0cm,topsep=-0.1cm]
\item \textbf{PQ and OPQ}.\quad In PQ~\citep{jegou2010pq}, a vector $x \in \mathbb{R}^d$ is approximated by $q_\mathrm{f}(x)$. $ \mathcal L _{\mathrm{f}} $ reads $\norm{x-q_\mathrm{f}(x_o)}$. In OPQ~\citep{ge2013optimized}, vectors are rotated by a matrix $R$ before codeword assignment, such that $RR^\top = I$. $ \mathcal L _{\mathrm{f}} $ becomes $\norm{x-R^\top q_\mathrm{f}(Rx_o)}$.
\item \textbf{IVF.} \quad Here, we only do space partitioning. Employing $ \mathcal L _{\mathrm{f}} = \norm{x- q_\mathrm{c} (x_o)}$ (``pushing towards the cluster centroid'') decreases the odds of $x$ falling in the wrong cell (see Sec.~\ref{sec:space_partitioning}). In this case, an issue could be that similar representations are all pushed towards the same centroid, which would make them less discriminative. Empirically, we found that this does not happen, because the perceptual constraint in the image domain prevents features from getting too close to one another.
\item \textbf{LSH.} \quad Locality Sensitive Hashing maps $x\in \mathbb{R}^d$ to a binary hash $b(x)\in \{-1,1\}^L$. It is commonly done with projections against a set of vectors, which gives, for $j \in \{1,\ldots,L\}$, $b_j(x) = \mathrm{sign} (w_j^\top x)$.
The objective $ \mathcal L _{\mathrm{f}} = -\frac{1}{L} \sum_{j} b_j(x_o)\, w_j^\top x$ pushes $x$ along the LSH directions and improves the robustness of the hash.
\end{itemize}\vspace*{0.2cm}
Table~\ref{tab:indexes} presents the $R$@$1$\@ and $\mu$AP\@ obtained on the 50k query set. Again, results are always better in the active scenario. We remark that active indexing has more impact on space partitioning techniques: the improvement for IVF is higher than for PQ and the LSH binary sketches. As expected, the impact is smaller when the indexing method is more accurate.
\begin{table}[h]
\centering
\resizebox{0.6\linewidth}{!}{
\begingroup
\setlength{\tabcolsep}{3pt}
\begin{tabular}{ c|c |cc|cc}
\toprule
\multirow{2}{*}{Index} & \multirow{2}{*}{Search time} & \multicolumn{2}{c|}{$R$@$1$\@ avg.} & \multicolumn{2}{c}{$\mu$AP\@ } \\
& & Passive & Activated & Passive & Activated \\ \midrule
IVF 1024 & 0.32 ms & 0.47 & \textbf{0.83} & 0.16 & \textbf{0.43} \\
OPQ 8x8 & 5.71 ms & 0.92 & \textbf{0.94} & 0.48 & \textbf{0.55} \\
PCA64, LSH & 0.99 ms & 0.72 & \textbf{0.83} & 0.25 & \textbf{0.39} \\
\bottomrule
\end{tabular}
\endgroup
}
\captionsetup{font=small}
\caption{\makebox{$R$@$1$\@ averaged over the transformations presented in Tab.~\ref{tab:act_vs_pas_retrieval} and $\mu$AP\@ for different indexing structures}}
\label{tab:indexes}
\end{table}

\section{Analyses}\label{sec:analyses}
We provide insights on the method for IVF-PQ, considering the effects of quantization and space partitioning. For an image $I$ whose representation is $x=f(I)\in\mathbb{R}^d$, $\hat{x}$ denotes the representation of a transformed version, $\hat{x} = f(t(I))\in\mathbb{R}^d$, and $x^\star$ the representation of the activated image $I^\star$. For details on the images and the implementation used in the experimental validations, see Sec. \ref{sec:experimental}.
\begin{figure}[b!]
\begin{minipage}{0.45\textwidth}
\centering
\vspace{3pt}
\includegraphics[width=1.05\linewidth, trim={0 0.8em 0 0em},clip]{figs/exp/prc.pdf}
\captionsetup{font=small}
\caption{Precision-recall curve for ICD with 50k queries and 1M reference images (more details on the experimental setup in Sec. \ref{sec:experimental}). $p_\mathrm{f}^{\mathrm{ivf}}$ is the probability of failure of the IVF (Sec. \ref{sec:space_partitioning}). }
\label{fig:prc}
\end{minipage}\hfill
\begin{minipage}{0.51\textwidth}
\centering
\includegraphics[width=0.8\textwidth, trim={0 0.15cm 0 0.23cm}, clip]{figs/exp/distances.pdf}
\captionsetup{font=small}
\caption{ Histograms of distance estimates (Sec. \ref{sec:quantization}). With active indexing, $\|x- q(x)\|^2$ is reduced ({\color{blue}$\leftarrow$}), inducing a shift ({\color{orange}$\leftarrow$}) in the distribution of $\|\hat{x}- q(x)\|^2$, where $t(I)$ is a hue-shifted version of $I$. $y$ is a random query. }
\label{fig:dists}
\end{minipage}
\end{figure}

\subsection{Product quantization: impact on distance estimate}\label{sec:quantization}
We start by analyzing the distance estimate considered by the index:
\begin{equation}
\|\hat{x}-q(x)\|^2 = \|x-q(x)\|^2 + \|\hat{x}-x\|^2 + 2 (x-q(x))^\top (\hat{x}-x).
\label{eq:distance}
\end{equation}
The activation aims to reduce the first term, \textit{i.e}.\@ the quantization error $\|x- q(x)\|^2$, which in turn reduces $\|\hat{x}- q(x)\|^2$.
Figure~\ref{fig:dists} shows in blue the empirical distributions of $\|x- q(x)\|^2$ (passive) and $\|x^\star- q(x)\|^2$ (activated). As expected, the latter has a lower mean, but also a larger variance.
Several factors may explain this larger variance: \emph{i)} the strength of the perturbation (due to the HVS model $H_{\mathrm{JND}}$ in~\eqref{eq:scaling}), \emph{ii)} the sensitivity of the feature extractor $\|\nabla_x f(x)\|$ (some features are easier to push than others), and \emph{iii)} the shapes and sizes of the Voronoï cells of PQ.
The second term of \eqref{eq:distance} models the impact of the image transformation in the feature space. Comparing the orange and blue distributions in Fig.~\ref{fig:dists}, we see that it has a positive mean, and that the shift is bigger for activated images.
We can assume that the third term has zero expectation for two reasons: \emph{i)} the noise $\hat{x}-x$ is independent of $q(x)$ and centered around 0, and \emph{ii)} in the high-resolution quantization regime, the quantization noise $x-q(x)$ is independent of $x$ and centered on 0. Thus, this term only increases the variance. Since $x^\star-q(x)$ has a smaller norm, this increase is smaller for activated images.
All in all, $\|\hat{x}^\star-q(x)\|^2$ has a lower mean but a larger variance than its passive counterpart $\|\hat{x}-q(x)\|^2$. Nevertheless, the decrease of the mean is so large that it compensates for the larger variance. The orange distribution in active indexing is also further away from the green distribution of negative pairs, \textit{i.e}.\@ the distances between an indexed vector $q(x)$ and an independent query $y$.

\subsection{Space partitioning: impact on the IVF probability of failure}\label{sec:space_partitioning}
We denote by $p_\mathrm{f} := \mathbb{P}(q_\mathrm{c}(x) \neq q_\mathrm{c} (\hat{x}) )$ the probability that $\hat{x}$ is assigned to a wrong bucket by the IVF assignment $q_\mathrm{c}$. In the single-probe search ($k'=1$), the recall (the probability that a pair is detected when it is a true match, for a given threshold $\tau$ on the distance) is upper-bounded by $1 - p_\mathrm{f}$:
\begin{align}
R_\tau = \mathbb{P} \left(\{ q_\mathrm{c}(\hat{x}) = q_\mathrm{c}(x) \} \cap \{ \| \hat{x}-q(x)\| < \tau \} \right) \leq \mathbb{P} \left(\{ q_\mathrm{c}(\hat{x}) = q_\mathrm{c}(x) \} \right) =1 - p_\mathrm{f}.
\end{align}
In other words, even with a high threshold $\tau \rightarrow \infty$ (and low precision), the detection misses representations that ought to be matched, with probability $p_\mathrm{f}$. This explains the sharp drop at recall $R=0.13$ in Fig.~\ref{fig:prc}, and this is why it is crucial to decrease $p_\mathrm{f}$.
The effect of active indexing is to reduce $\|\hat{x}-q_\mathrm{c}(x)\|$, thereby reducing $p_\mathrm{f}$ and increasing the upper bound on $R$: the drop shifts towards $R=0.32$.
This reasoning suggests that pushing $x$ towards $q_\mathrm{c}(x)$ would decrease $p_\mathrm{f}$ even more efficiently. This would make the IVF more robust to transformations, but it may jeopardize the PQ search because the features of activated images would be packed close together. In a way, our strategy, which pushes $x$ towards $q(x)$, distributes the improvement between the IVF and the PQ search.

\section{Conclusion \& Discussion}
We introduce a way to improve ICD in large-scale settings, when images can be changed before release. It leverages an optimization scheme, similar to adversarial examples, that modifies images so that (1) their representations are better suited for indexing, and (2) the perturbation is invisible to the human eye. We provide grounded analyses of the method and show that it significantly improves the retrieval performance of activated images, for a number of neural extractors and indexing structures.
Activating images takes time (on the order of 10~ms per image), but one advantage is that the database may contain both active and passive images: active indexing does not spoil the performance of passive indexing and vice-versa. This is good for legacy compliance and also opens the door to flexible digital asset management strategies (actively indexing only images of particular importance).
The method has several limitations. First, it is not agnostic to the indexing structure and extractor that are used by the similarity search. Second, an adversary could break the indexing system in several ways. In a black-box setting (no knowledge of the indexing structure and neural network extractor), adversarial purification~\citep{shi2021online} could get rid of the perturbation that activated the image. In a semi-white-box setting (knowledge of the feature extractor), targeted mismatch attacks against passive indexing like~\cite{tolias2019targeted} may also work. Adversarial training~\citep{madry2017towards} could be a defense. For instance, it would be interesting to know whether adversarial training prevents active indexing, or whether the perceptual perturbation used in our method is still able to push features in the latent space of a robust and defended neural network.

\newpage
\subsection*{Ethics Statement}
\paragraph{Societal impact statement.}
Content tracing is a double-edged sword. On the one hand, it allows media platforms to more accurately track malicious content (pornographic, terrorist, or violent images, \textit{e.g}.\@ with Apple's NeuralHash and Microsoft's PhotoDNA) and to protect copyright (\textit{e.g}.\@ YouTube's Content ID). On the other hand, it can be used as a means of societal and political censorship, to restrict the free speech of specific communities. However, we still believe that research needs to advance to improve global moderation on the internet. We also believe that the advantages better copy detection could bring outweigh its drawbacks.
\paragraph{Environmental impact statement.}
We roughly estimate the total GPU time used for running all our experiments at $200$ GPU-days, or $\approx 5000$ GPU-hours. Experiments were conducted using a private infrastructure and we estimate the total emissions to be on the order of one ton of CO$_2$eq. Estimations were conducted using the \href{https://mlco2.github.io/impact#compute}{Machine Learning Impact calculator} presented in \cite{lacoste2019quantifying}. This approximation does not account for memory storage, CPU-hours, the production cost of GPUs/CPUs, etc., nor for the environmental cost of training the neural networks used as feature extractors. Although the cost of the experiments and of the method is high, it could possibly allow a reduction of the computations needed in large data centers thanks to the improved performance of indexing structures.
\subsection*{Reproducibility Statement}
\emph{The implementation will be made available.} Models used for feature extraction (\href{https://github.com/facebookresearch/sscd-copy-detection/}{SSCD}, \href{https://github.com/facebookresearch/dino}{DINO}, \href{https://github.com/lyakaap/ISC21-Descriptor-Track-1st}{ISC-dt1}) can be downloaded from their respective repositories. It builds upon the open-source PyTorch~\citep{paszke2019pytorch} and FAISS~\citep{johnson2019faiss} libraries. The main dataset used in the experiments (DISC21) can be freely downloaded from its webpage \href{https://ai.facebook.com/datasets/disc21-dataset/}{https://ai.facebook.com/datasets/disc21-dataset/}.
Dataset processing is described in App. \ref{subsec:dataset}.

\section{Details on the Perceptual Attenuation Model}\label{sec:perceptual}
\subsection{Just Noticeable Difference map}
\begin{figure}[b]
\centering
\includegraphics[width= 0.23\textwidth]{figs/qualitative/ref.jpg}
\hspace{0.1\textwidth}
\captionsetup{font=small}
\includegraphics[width= 0.23\textwidth]{figs/qualitative/heatmap.png}
\caption[Caption]{A reference image $I$ from DISC21 (\href{http://www.flickr.com/photos/61368956@N00/5060849004/}{R002815.jpg}), and the associated perceptual heatmap $H_{\mathrm{JND}}(I)$.}
\label{fig:heatmap}
\vspace*{-0.5cm}
\end{figure}
The maximum change that the human visual system (HVS) cannot perceive is sometimes referred to as the just noticeable difference (JND)~\cite{krueger1989reconciling}. It is used in many applications, such as image/video watermarking, compression, and quality assessment (JND is also used in audio). JND models in the pixel domain directly calculate the JND at each pixel location (\textit{i.e}.\@ how much pixel difference is perceivable by the HVS). The JND map that we use is based on the work of \cite{chou1995perceptually}. We use this model for its simplicity, its efficiency and its good qualitative results. More complex HVS models could also be used if even higher imperceptibility is needed (\cite{watson1993dct, yang2005just, zhang2008just, jiang2022jnd} to cite a few).
The JND map takes into account two characteristics of the HVS, namely the luminance adaptation (LA) and the contrast masking (CM) phenomena. We follow the same notations as \cite{wu2017enhanced}. The CM map $\mathcal{M}_C$ is a function of the image gradient magnitude $\mathcal{C}_l$ (the Sobel filter of the image):
\begin{equation}
\mathcal{M}_C(x) = 0.115 \times \frac{\alpha \cdot \mathcal{C}_l(x)^{2.4}} { \mathcal{C}_l(x)^{2} + \beta^2} \textrm{\quad , with \,} \mathcal{C}_l = \sqrt{ \nabla_x I(x)^2 + \nabla_y I(x)^2},
\end{equation}
where $x$ is the pixel location, $I(x)$ the image intensity, $\alpha = 16$, and $\beta = 26$. It is an increasing function of $\mathcal{C}_l$: the stronger the gradient is at $x$, the more the image masks a local perturbation, and the higher the noticeable pixel difference is.
LA takes into account the fact that the HVS presents different sensitivities to the background luminance (\textit{e.g}.\@ it is less sensitive in dark backgrounds). It is modeled as:
\begin{align}
\mathcal{L}_A (x) = \begin{cases} \displaystyle 17 \times \left( 1-\sqrt{\frac{B(x)}{127}} \right) & \textrm{\quad if\,} B(x)<127 \\ \displaystyle \frac{3 \times \left( B(x) - 127 \right)}{128} +3 & \textrm{\quad if\,}B(x)\geq 127, \end{cases}
\end{align}
where $B(x)$ is the background luminance, calculated as the mean luminance value of a local patch centered on $x$. Finally, both effects are combined with a nonlinear additivity model:
\begin{equation}
H_{\mathrm{JND}} = \mathcal{L}_A + \mathcal{M}_C - C \cdot \min \{ \mathcal{L}_A, \mathcal{M}_C \},
\end{equation}
where $C$ is set to $0.3$ and determines the overlapping effect. For color images, the final RGB heatmap is $H_{\mathrm{JND}} = [\alpha_R H, \alpha_G H, \alpha_B H]$, where $(\alpha_{R}, \alpha_{G}, \alpha_{B})$ are inversely proportional to the mixing coefficients for the luminance: $(\alpha_{R}, \alpha_{G}, \alpha_{B}) = 0.072 / (0.299, 0.587, 0.114)$.
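As an illustration, the JND map defined above could be computed along the following lines. This sketch follows the equations of this section; the $5\times5$ averaging patch for the background luminance, the BT.601 luminance weights and the unnormalized Sobel operator are assumptions of the sketch, not specifications taken from \cite{wu2017enhanced}.
\begin{verbatim}
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def jnd_heatmap(img_rgb):
    # img_rgb: float array of shape (h, w, 3) with values in [0, 255].
    # Luminance (BT.601 weights, matching the channel scaling below).
    lum = 0.299*img_rgb[..., 0] + 0.587*img_rgb[..., 1] + 0.114*img_rgb[..., 2]

    # Contrast masking: increasing function of the Sobel gradient magnitude.
    grad = np.hypot(sobel(lum, axis=0), sobel(lum, axis=1))
    alpha, beta = 16.0, 26.0
    m_c = 0.115 * alpha * grad**2.4 / (grad**2 + beta**2)

    # Luminance adaptation from the local background luminance (5x5 patch).
    b = uniform_filter(lum, size=5)
    l_a = np.where(b < 127,
                   17.0 * (1.0 - np.sqrt(b / 127.0)),
                   3.0 * (b - 127.0) / 128.0 + 3.0)

    # Nonlinear additivity model (C = 0.3 controls the overlapping effect).
    h = l_a + m_c - 0.3 * np.minimum(l_a, m_c)

    # Channel-wise scaling: the eye is less sensitive to blue shifts.
    chan = 0.072 / np.array([0.299, 0.587, 0.114])
    return h[..., None] * chan        # shape (h, w, 3)
\end{verbatim}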
\subsection{Comparison with $\ell_\infty$ Constraint Embedding}\label{subsec:linf}
Figure~\ref{fig:linf_vs_perc} shows the same image activated using either the $\ell_\infty$ constraint (commonly used in the adversarial attack literature) or our perceptual constraint based on the JND model explained above. Even with a very small $\varepsilon$ ($4/255$ in the example below), the perturbation is visible, especially in the flat regions of the image, such as the sea or the sky. \cite{laidlaw2021perceptual} also show that the $\ell_\infty$ norm is not a good perceptual constraint. They use the LPIPS loss~\citep{zhang2018unreasonable} as a surrogate for the HVS to develop more imperceptible adversarial attacks. Although a similar approach could be used here, we found that at this small level of image distortion, the LPIPS did not capture CM and LA as well as the handcrafted perceptual models found in the compression and watermarking literature.
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width= 0.48\textwidth]{figs/qualitative/linf4_psnr36,4.png}
\includegraphics[width= 0.48\textwidth]{figs/qualitative/diff_linf.png}
\captionsetup{font=small}
\caption{$\ell_\infty=4$, $\mathrm{PSNR}=36.4$~dB, $\mathrm{SSIM}=0.91$}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width= 0.48\textwidth]{figs/qualitative/linf24_psnr34,4.png}
\includegraphics[width= 0.48\textwidth]{figs/qualitative/diff_perc.png}
\captionsetup{font=small}
\caption{$\ell_\infty=23$, $\mathrm{PSNR}=34.4$~dB, $\mathrm{SSIM}=0.94$}
\end{subfigure}
\captionsetup{font=small}
\caption{ Activated images, either with (a) the $\ell_\infty \leq 4$ constraint or with (b) our perceptual model (best viewed on screen). We give the corresponding measures between the original and the activated image, as well as the pixel-wise difference. The perturbation on the right is much less perceptible thanks to the perceptual model, even though its $\ell_\infty$ distance to the original image is much higher. }
\label{fig:linf_vs_perc}
\end{figure}

\section{More Experimental Details}
\subsection{Dataset}\label{subsec:dataset}
The DISC21 dataset was designed for the Image Similarity Challenge~\citep{douze2021disc} and can be downloaded from the dataset webpage: \href{https://ai.facebook.com/datasets/disc21-dataset/}{https://ai.facebook.com/datasets/disc21-dataset/}. We want to test performance on edited versions of activated images, but in the DISC21 query set the transformations are already applied to the images; therefore, the query set cannot be used as is. We create a first test set, ``Ref10k'', by selecting the 10k images from the reference set that were originally used to generate the queries (the ``dev queries'' from the downloadable version). We also re-create a query set, ``Query50k''. To stay as close as possible to the original setup, we use the same images that were used to generate the queries in DISC21. Edited images are generated using the AugLy library~\citep{papakipos2022augly}, following the guidelines given in the ``Automatic Transformations'' section of the DISC21 paper. Therefore, the main difference between the query set used in our experiments and the original one is that ours does not include manual augmentations.
\subsection{Transformations seen at test time}\label{subsec:transformations}
They cover spatial transformations (crops, rotation, etc.), pixel-value transformations (contrast, hue, JPEG, etc.) and ``everyday life'' transformations produced with the AugLy augmentations.
All transformations are illustrated in Fig.~\ref{fig:all_transformations}. The parameters of all transformations are the ones of the torchvision library~\citep{marcel2010torchvision}, except for crop and resize, whose parameters represent area ratios. For the Gaussian blur transformation, we alternatively use $\sigma$, the scaling factor in the exponential, or the kernel size $k_b$ (in torchvision, $k_b = (\sigma-0.35)/0.15$).
The ``Random'' transformation is the one used to build the 50k query set. A series of 1 to 4 simple AugLy transformations is picked at random, with the probability skewed towards a higher number. The possible transformations include pixel-level and geometric ones, as well as embedding the image in a screenshot of a social network GUI.
\begin{table}[t]
\centering
\captionsetup{font=small}
\caption{Illustration of all transformations evaluated in Tab.~\ref{tab:act_vs_pas_retrieval}.}
\label{fig:all_transformations}
\begin{tabular}{*{5}{l}}
Identity & Contrast 0.5 & Contrast 2.0 & Brightness 0.5 & Brightness 2.0 \\
\begin{minipage}{.16\linewidth}\includegraphics[width=\linewidth]{figs/attacks/none.jpg}\end{minipage} &
\begin{minipage}{.16\linewidth}\includegraphics[width=\linewidth]{figs/attacks/contrast1.jpg}\end{minipage} &
\begin{minipage}{.16\linewidth}\includegraphics[width=\linewidth]{figs/attacks/contrast2.jpg}\end{minipage} &
\begin{minipage}{.16\linewidth}\includegraphics[width=\linewidth]{figs/attacks/brightness1.jpg}\end{minipage} &
\begin{minipage}{.16\linewidth}\includegraphics[width=\linewidth]{figs/attacks/brightness2.jpg}\end{minipage} \\ \\
Hue 0.2 & Blur 2.0 & JPEG 50 & Rotation 25 & Rotation 90 \\
\begin{minipage}{.16\linewidth}\includegraphics[width=\linewidth]{figs/attacks/hue.jpg}\end{minipage} &
\begin{minipage}{.16\linewidth}\includegraphics[width=\linewidth]{figs/attacks/blur.jpg}\end{minipage} &
\begin{minipage}{.16\linewidth}\includegraphics[width=\linewidth]{figs/attacks/jpeg.jpg}\end{minipage} &
\begin{minipage}{.16\linewidth}\includegraphics[width=\linewidth]{figs/attacks/rotation1.jpg}\end{minipage} &
\begin{minipage}{.16\linewidth}\includegraphics[width=\linewidth]{figs/attacks/rotation2.jpg}\end{minipage} \\ \\
Crop 0.5 & Resize 0.5 & Meme & Random \\
\begin{minipage}{.16\linewidth}\includegraphics[width=\linewidth]{figs/attacks/centercrop.jpg}\end{minipage} &
\begin{minipage}{.16\linewidth}\includegraphics[width=\linewidth]{figs/attacks/resize.jpg}\end{minipage} &
\begin{minipage}{.12\linewidth}\includegraphics[width=\linewidth]{figs/attacks/meme.jpg}\end{minipage} &
\begin{minipage}{.16\linewidth}\includegraphics[width=1.7\linewidth]{figs/attacks/auto.jpg}\end{minipage} & \\
\end{tabular}
\end{table}
\begin{figure}[b!]
\centering
\includegraphics[width=0.8\linewidth,trim={0 0.4cm 0 0.2cm}, clip]{figs/exp/transformations.pdf}
\vspace*{-0.5cm}
\captionsetup{font=small}
\caption{ Average $R$@1 over 10k images indexed with IVF-PQ. }
\label{fig:tranformations}
\end{figure}

\section{More Experimental Results}
\subsection{Detailed metrics on different image transformations}
In Fig.~\ref{fig:tranformations}, we evaluate the average $R$@$1$\@ over the 10k images from the reference dataset. The experimental setup is the same as for Tab.~\ref{tab:act_vs_pas_retrieval}, but more transformation parameters are evaluated. As expected, the stronger the transformation, the lower the retrieval performance. The decrease in performance is significantly reduced with activated images.
\vspace*{-0.3cm}
\subsection{Additional ablations}
\paragraph{Speeding-up the optimization.}\label{sec:speedup}
In our experiments, the optimization is done using 10 iterations of gradient descent, which takes approximately 40ms/image. If the indexation time matters (often it does not, and only the search time does), it can be reduced at the cost of some accuracy. We activated 10k reference images, with the same IVF-PQ index presented in Sec.~\ref{sec:act_vs_passive}, using only one step of gradient descent with a higher learning rate. Activation times are averaged over the images. The $R$@$1$\@ results in Tab.~\ref{tab:speedup} indicate that this speed-up of the image optimization has a small cost in retrieval accuracy. Specifically, it reduces the $R$@$1$\@ for unedited images. The reason is that the learning rate is too high: it can cause the representation to be pushed too far and to leave the indexing cell. This is why a higher number of steps and a lower learning rate are used in practice. If activation time is a bottleneck, it can however be useful to use fewer optimization steps.
\begin{table}[h]
\centering
\captionsetup{font=small}
\caption{$R$@$1$\@ for different transformations applied before search, with either 1 step at learning rate 10, or 10 steps at learning rate 1.}
\label{tab:speedup}
\vspace*{-0.5cm}
\resizebox{0.95\linewidth}{!}{
\begingroup
\setlength{\tabcolsep}{4pt}
\def\arraystretch{1.1}
\begin{tabular}{ l|l| *{15}{c}}
\multicolumn{1}{c}{} & \multicolumn{1}{l}{\rot{Activ. time}} & \rot{Identity} & \rot{Contr. 0.5} & \rot{Contr. 2.0} & \rot{Bright. 0.5} & \rot{Bright. 2.0} & \rot{Hue 0.2} & \rot{Blur 2.0} & \rot{JPEG 50} & \rot{Rot. 25} & \rot{Rot. 90} & \rot{Crop 0.5} & \rot{Resi. 0.5} & \rot{Meme} & \rot{Random} & \rot{Avg.} \\ \midrule
Passive & - & 1.00 & 0.73 & 0.39 & 0.73 & 0.28 & 0.62 & 0.48 & 0.72 & 0.07 & 0.14 & 0.14 & 0.72 & 0.14 & 0.13 & 0.45 \\
\rowcolor{apricot!30} lr=1 - 10 steps & 39.8 ms/img & 1.00 & 1.00 & 0.96 & 1.00 & 0.92 & 1.00 & 0.96 & 0.99 & 0.10 & 0.50 & 0.29 & 1.00 & 0.43 & 0.32 & 0.75 \\
lr=10 - 1 step & 4.3 ms/img & 0.99 & 0.99 & 0.92 & 0.99 & 0.84 & 0.99 & 0.95 & 0.99 & 0.10 & 0.39 & 0.25 & 0.99 & 0.36 & 0.27 & 0.72 \\
\bottomrule
\end{tabular}
\endgroup
}
\vspace*{-0.3cm}
\end{table}
\paragraph{Data augmentation at indexing time and EoT.}
Expectation over Transformations~\citep{athalye2018eot} was originally designed to create adversarial attacks that are robust to a set of image transformations. We follow a similar approach to improve the robustness of the activated image against a set of augmentations $\mathcal{T}$. At each optimization step, we randomly sample $A$ augmentations $\{t_i\}_{i=1}^A$ in $\mathcal{T}$ and consider the average loss $ \mathcal L = \sum_{i=1}^{A} \mathcal{L}(I,t_i;I_o) /A $. In our experiments, $\mathcal{T}$ encompasses rotations, Gaussian blurs, color jitters and a differentiable approximation of the JPEG compression~\cite{shin2017jpeg}. $A$ is set to $8$ and we always include the un-augmented image in the chosen set of augmentations. We activated 10k reference images, with the same IVF-PQ as in Sec.~\ref{sec:act_vs_passive}, with or without EoT. Table \ref{tab:eot} shows the average $R$@$1$\@ performance over the images submitted to different transformations before search. EoT brings a small improvement, especially on transformations where the base performance is low (\textit{e.g}.\@ rotations or crops here).
However, it comes at a higher computational cost, since each gradient descent iteration needs $A$ passes through the network, and since fewer images can be jointly activated due to GPU memory limitations (we need to store and back-propagate through $A$ transformations for every image). If the time needed to index or activate an image is not a bottleneck, using EoT can therefore be useful. Otherwise, it is not worth the computational cost.
\begin{table}[h]
\centering
\captionsetup{font=small}
\caption{$R$@$1$\@ for different transformations applied before search, with or without EoT when activating the images.}
\label{tab:eot}
\vspace*{-0.5cm}
\resizebox{0.95\linewidth}{!}{
\begingroup
\setlength{\tabcolsep}{4pt}
\def\arraystretch{1.1}
\begin{tabular}{ l |l| *{15}{c}}
\multicolumn{1}{c}{} & \multicolumn{1}{l}{\rot{Activ. time}} & \rot{Identity} & \rot{Contr. 0.5} & \rot{Contr. 2.0} & \rot{Bright. 0.5} & \rot{Bright. 2.0} & \rot{Hue 0.2} & \rot{Blur 2.0} & \rot{JPEG 50} & \rot{Rot. 25} & \rot{Rot. 90} & \rot{Crop 0.5} & \rot{Resi. 0.5} & \rot{Meme} & \rot{Random} & \rot{Avg.} \\ \midrule
Without EoT & 40 ms & 1.00 & 1.00 & 0.96 & 1.00 & 0.92 & 1.00 & 0.96 & 0.99 & 0.10 & 0.50 & 0.29 & 1.00 & 0.43 & 0.32 & 0.75 \\
\rowcolor{apricot!30} With EoT & 870 ms & 1.00 & 1.00 & 0.95 & 1.00 & 0.92 & 1.00 & 0.95 & 0.99 & 0.14 & 0.64 & 0.33 & 1.00 & 0.45 & 0.33 & 0.76 \\
\bottomrule
\end{tabular}
\endgroup
}
\vspace*{-0.8cm}
\end{table}

\pagebreak
\section{Active Indexing vs. Watermarking}\label{sec:watermarking}
\paragraph{Discussion.}
Watermarking and active indexing both modify images for tracing and authentication; however, there are significant differences between them. Watermarking embeds arbitrary information into the image: the information can be a message, a copyright, a user ID, etc. In contrast, active indexing modifies the image to improve the efficiency of the search engine. Watermarking also focuses on the control of the false positive rate of copyright detection, \textit{i.e}.\@ a bound on the probability that a random image carries the same message as the watermarked one (up to a certain distance).
Although watermarking considers different settings than indexing methods, it could also be leveraged to facilitate the re-identification of near-duplicate images. In this supplemental section, we use it to address a use case similar to the one addressed in this paper by our active indexing approach. In this scenario, the watermark encoder embeds binary identifiers into the database images. The decoded identifier is then directly mapped to an image (as the index of a list of images).
\paragraph{Experimental setup.}
In the rest of the section, we compare active indexing against recent watermarking techniques based on deep learning.
\begin{itemize}[leftmargin=0.5cm,itemsep=0cm,topsep=-0.1cm]
\item For indexing, we use the same setting as in Sec.~\ref{sec:experimental} (IVF-PQ index with 1M reference images). When searching for an image, we look up the closest neighbor with the help of the index.
\item For watermarking, we encode $20$-bit messages into the images, which allows representing $2^{20}\approx 10^6$ images (the number of reference images). When searching for an image, we use the watermark decoder to get back an identifier and the corresponding image in the database.
\end{itemize}
As before, we use $R$@$1$\@ as the evaluation metric. For indexing, it corresponds to the accuracy of the top-1 search result.
For watermarking, the $R$@$1$\@ also corresponds to the word accuracy of the decoding, that is the proportion of images where the message is perfectly decoded. Indeed, with $20$-bit encoding almost all messages have an associated image in the reference set, so an error on a single bit causes a mis-identification (there is no error correction\footnote{In order to provide error correction capabilities, one needs longer messages. This makes it more difficult to insert bits: in our experiments, with 64 bits we observe a drastic increase of the watermarking bit error rate. }). We use two state-of-the-art watermarking methods based on deep learning: SSL Watermarking~\citep{fernandez2022sslwatermarking}, which also uses an adversarial-like optimization to embed messages, and HiDDeN~\citep{zhu2018hidden}, which encodes and decodes messages thanks to Conv-BN-ReLU networks. The only difference with the original methods is that their perturbation $\delta$ is modulated by the handcrafted perceptual attenuation model presented in App.~\ref{sec:perceptual}. This approximately gives the same image quality, thereby allowing for a direct comparison between active indexing and watermarking. \paragraph{Results.} Tab.~\ref{tab:watermarking} compares the $R$@$1$\@ when different transformations are applied before search or decoding. Our active indexing method is overall the best by a large margin. For some transformations, watermarking methods are not as effective as passive indexing, yet for some others, like crops for HiDDeN, the watermarks are more robust. \begin{table}[h] \centering \captionsetup{font=small} \caption{$R$@$1$\@ for different transformations applied before search, when using either watermarking or active indexing. Results are averaged on 1k images. Best result is in \textbf{bold} and second best in \textit{italic}. } \label{tab:watermarking} \resizebox{0.99\linewidth}{!}{ \begingroup \setlength{\tabcolsep}{4pt} \def1.1{1.1} \begin{tabular}{ p{4.0cm}| *{14}{c}c} \multicolumn{1}{c}{} & \rot{Identity} & \rot{Contr. 0.5} & \rot{Contr. 2.0} & \rot{Bright. 0.5} & \rot{Bright. 2.0} & \rot{Hue 0.2} & \rot{Blur 2.0} & \rot{JPEG 50} & \rot{Rot. 25} & \rot{Rot. 90} & \rot{Crop 0.5} & \rot{Resi. 0.5} & \rot{Meme} & \rot{Random} & \rot{Avg.} \\ \midrule Passive indexing & \bf 1.00 & 0.73 & 0.39 & 0.73 & 0.28 & 0.62 & 0.48 & \it 0.72 & \it 0.07 & 0.14 & 0.14 & \it 0.72 & 0.14 & 0.13 & 0.45 \\ Active indexing (ours) & \bf 1.00 & \bf 1.00 & \bf 0.96 & \bf 1.00 & \bf 0.92 & \bf 1.00 & \bf 0.96 & \bf 0.99 & \bf 0.10 & \bf 0.50 & \it 0.29 & \bf 1.00 & 0.43 & \bf 0.32 & \bf 0.75 \\ \midrule \parbox{4.0cm}{SSL Watermarking \citep{fernandez2022sslwatermarking}} & \bf 1.00 & \it 0.98 & \it 0.53 & \it 0.98 & \it 0.63 & \it 0.85 & 0.13 & 0.00 & 0.00 & \it 0.15 & 0.11 & 0.00 & \it 0.46 & 0.07 & 0.42 \\ \midrule \parbox{3.5cm}{HiDDeN\footnote{} \citep{zhu2018hidden}} & \it 0.94 & 0.87 & 0.36 & 0.85 & 0.55 & 0.00 & \it 0.81 & 0.00 & 0.00 & 0.00 & \bf 0.92 & 0.44 & \bf 0.77 & \it 0.16 & \it 0.48 \\ \bottomrule \end{tabular} \endgroup } \end{table} \footnotetext{Our implementation. As reported in other papers from the literature, results of the original paper are hard to reproduce. Therefore to make it work better, our model is trained on higher resolution images (224$\times$224), with a payload of $20$-bits, instead of 30 bits embedded into 128$\times$128. Afterwards, the same network is used on images of arbitrary resolutions, to predict the image distortion which is later rescaled as in Eq.~\eqref{eq:scaling}. 
In this setting, the watermark cannot always be inserted (6\% failure).}
\newpage
\section{More Qualitative Results}\label{sec:more_qualitative}
Figure~\ref{fig:more_qualitative2} gives more examples of activated images from the DISC dataset, using the same parameters as in Sec.~\ref{sec:act_vs_passive}. The perturbation is very hard to notice (if not invisible), even in flat areas of the images, because the perceptual model concentrates the perturbation on textured areas. We also see that the perturbation forms a regular pattern. This is due to the (bilinear) image resize that happens before feature extraction.
Figure~\ref{fig:more_qualitative} shows an example of an image activated at several values of the perturbation strength $\alpha$ of Eq.~\eqref{eq:scaling} (for instance, for $\alpha=20$ the image has a PSNR of $27$~dB and for $\alpha=1$ a PSNR of $49$~dB). The higher the $\alpha$, the more visible the perturbation induced by the activation. Nevertheless, even at low PSNR values ($<35$~dB), it is hard to tell whether an image is activated or not.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\textwidth]{figs/qualitative/psnr/psnr.pdf}
\captionsetup{font=small}
\caption{Example of one activated image at different levels of $\alpha$.}
\label{fig:more_qualitative}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth, trim={0 0.4cm 0 0.4cm}, clip]{figs/qualitative/diffs/qual_0.jpg}
\includegraphics[width=0.9\textwidth, trim={0 0.4cm 0 0.4cm}, clip]{figs/qualitative/diffs/qual_2.jpg}
\includegraphics[width=0.9\textwidth, trim={0 0.4cm 0 0.4cm}, clip]{figs/qualitative/diffs/qual_3.jpg}
\includegraphics[width=0.9\textwidth, trim={0 0.4cm 0 0.4cm}, clip]{figs/qualitative/diffs/qual_4.jpg}
\includegraphics[width=0.9\textwidth, trim={0 0.4cm 0 0.4cm}, clip]{figs/qualitative/diffs/qual_1.jpg}
\captionsetup{font=small}
\caption{Examples of activated images for $\alpha=3.0$. (Left) original images, (Middle) activated images, (Right) pixel-wise difference. Images are \href{http://www.flickr.com/photos/87738888@N03/8039927199/}{R000005.jpg}, \href{http://www.flickr.com/photos/79653482@N00/7766743380/}{R000045.jpg}, \href{http://www.flickr.com/photos/12420018@N03/5452964454/}{R000076.jpg}, \href{http://www.flickr.com/photos/56594044@N06/5933951717/}{R000172.jpg} and \href{http://www.flickr.com/photos/31369133@N04/5570867046/}{R000396.jpg}. }
\label{fig:more_qualitative2}
\end{figure}
\section{Introduction}
In the evolution of commercial cellular networks from 4G to next-generation solutions, multiple-input multiple-output (MIMO) technology constitutes a salient milestone \cite{Yang2015,Zheng2015,Xie2017,Wang2016,Wang2020,Ma2021,Wang2019,Wan2021,Fang2017}, leading to significant spectral and power efficiency improvements. However, MIMO techniques require accurate channel state information (CSI) \cite{Palomar03,XingTSP201501,Xie2016,Wu2019,Jiang2010,Xu2022}. Therefore, pilot-aided channel estimation plays an important role in multi-antenna system design \cite{Hassibi2003,Yuan2018,You2015,Jin2016,Qi2015,Wu2020}. Based on the noise-contaminated pilot observations, the MIMO channel matrix can be estimated by relying on performance metrics such as the least squares error \cite{Biguesh2006}, the minimum mean squared error (MSE) \cite{Soysal2010,Soysal2010-2}, the mutual information (MI) \cite{Coldrey2007,Coldrey2008}, etc. Then, by leveraging the estimated CSI, MIMO transceivers can be designed for optimizing the overall system performance \cite{Pastore2016,Soysal2010,Soysal2010-2}.

The specific choice of the training sequence has a pivotal impact on the performance of channel estimation for MIMO systems \cite{Ma2017,Yuan2018}, and the key task of channel estimation is to recover the MIMO channel matrix rather than a scalar parameter. Hence, MIMO channel estimation constitutes a multiple-parameter estimation problem, where the training sequence optimization relies either on signal-processing-oriented metrics \cite{Wong2004,Shariati2014} or on information-theoretic metrics \cite{Gu2019}. It has been shown based on matrix-monotonic optimization \cite{XingTSP201501} that a diverse variety of training sequence optimization techniques relying on heterogeneous performance metrics intrinsically aim for maximizing a matrix-valued signal-to-noise ratio (SNR) during the channel estimation procedure.

It is well known that MIMO transceivers are capable of striking a trade-off between the spatial diversity gain and the spatial multiplexing gain attained by the MIMO channels. When multiple data streams are transmitted simultaneously from the source to the destination, the transceiver optimization constitutes a multi-objective optimization problem \cite{Xing2021,Xing2021-2,Fei2017}. Given the CSI, diverse performance metrics, such as the capacity or the sum MSE of the estimation, may be optimized \cite{Palomar03}. In order to accommodate as many optimization objective functions (OFs) as possible, majorization theory is used in \cite{Marshall79,Palomar03} to formulate unified objective functions~\footnote{Explicitly, we will use the adjective ``unified'' upon referring to unifying diverse objective functions, and the adjective ``joint'' upon referring to the optimization of both the training sequence and precoder.}. Recently, the framework of matrix-monotonic optimization was proposed for MIMO transceiver optimization relying on diverse OFs \cite{XingTSP201501,Xing2021,Xing2021-2}. Again, as pointed out in \cite{XingTSP201501}, the MIMO precoder optimization task is reminiscent of maximizing a matrix-valued SNR in the data transmission procedure.

When channel estimation is more accurate, at first sight the illusion might arise that the subsequent data transmission will exhibit an improved performance. However, when either the pilot overhead or the pilot power is increased, either the effective throughput or the power (or potentially both) available for data transmission has to be reduced for the sake of a fair comparison.
This tradeoff reveals that the joint optimization of the transmit precoder (TPC) and of the training sequence is of critical importance for MIMO systems. Hence, there is a rich body of literature on joint training and transceiver optimization \cite{Pastore2016,Soysal2010}, but in most of the related literature, the joint optimization tends to rely on a single performance metric. For example, joint TPC and training sequence optimization relying on MI maximization was extensively studied in \cite{Pastore2016,Soysal2010,Soysal2010-2}. However, using a joint training sequence and transceiver weight optimization framework relying on multiple performance metrics, rather than a single one like the MI in~\cite{Pastore2016,Soysal2010,Soysal2010-2}, has important theoretical and practical benefits. Having said that, there is a distinct paucity of literature on this topic, probably because the joint optimization is much more challenging than its separate counterparts. It becomes particularly challenging when both the time-domain correlation and the inter-element correlation at the channel's transmit side are taken into account \cite{Pastore2016}. The main challenge arises from the complex performance analysis of the associated random matrix variables \cite{Soysal2010,Soysal2010-2,Hassibi2003}. The distinct contributions of this paper are shown in Table~\ref{comparison}.
\begin{table}[h!]
\centering
\vspace{-3mm}
\caption{Key attributes of this paper compared to the existing MIMO joint designs of training sequences and transceivers under power constraints}
\vspace{-2mm}
\label{comparison}
\begin{tabular}{| l | c | c | c |}
\hline
 & \cite{Soysal2010} & \cite{Pastore2016} & Ours \\
\hline\hline
transmit-side correlation channel model & $\surd$ & $\surd$ & $\surd$ \\
\hline
statistical CSI known at transmitter & $\surd$ & $\surd$ & $\surd$ \\
\hline
estimated CSI known at transmitter & & & $\surd$ \\
\hline
different objective functions & & $\surd$ & $\surd$ \\
\hline
maximum antenna number in simulation & 3 & 2 & 16 \\
\hline
\end{tabular}
\vspace{-1mm}
\end{table}
A close scrutiny reveals that the nature of training optimization is quite similar to that of transceiver optimization \cite{XingTSP201501}. Hence, we are inspired to investigate the joint optimization of the training sequence design and the TPC design based on the matrix-monotonic optimization framework of \cite{XingTSP201501}. A novel matrix-monotonic optimization technique, termed joint matrix-monotonic optimization, is proposed for the joint optimization of the training sequence and of a linear TPC, which involves a pair of matrix variables, namely the training sequence matrix and the linear TPC matrix. Our main contributions are as follows:
\begin{itemize}
\item A whole suite of performance metrics is considered in the joint optimization of linear transceivers and training sequences, including the effective MI, the effective sum MSE, the effective weighted MI, the effective weighted MSE, as well as the effective general Schur-convex and Schur-concave functions. Furthermore, a range of practical settings is considered. Specifically, for channel estimation, the power of the amplifier is limited; hence, the estimation accuracy can only be improved by increasing the length of the training sequence instead of increasing the power of the amplifiers.
\item For the joint optimization of linear TPCs and training sequences, the spatial correlation of the antennas experienced at the transmitter is taken into account.
Moreover, both statistical CSI and estimated CSI are discussed. More explicitly, in the former scenario the transmitter only has statistical CSI, while the receiver relies on the estimated CSI. By contrast, in the latter case, the transmitter and the receiver have the same estimated CSI, i.e., the feedback channel between the transmitter and the receiver is assumed to be perfect. \item The matrix-monotonic optimization framework of \cite{XingTSP201501}, derived for separate transceiver optimization or training optimization, is extended here to the new joint matrix-monotonic optimization concept, which jointly optimizes the linear TPC and the training sequence. Based on our new joint matrix-monotonic optimization, the optimal structures of the linear TPC matrix and the training sequence matrix are derived. Therefore, the resultant joint optimization problems can be significantly simplified. Although the original joint optimization problems are complex, the proposed solution is of appealingly low complexity, which provides valuable guidelines for practical system designs. \end{itemize} The rest of the paper is organized as follows. In Section~\ref{Section_Signal_Model}, the signal models of channel estimation and information transmission are provided. The general joint training sequence and linear transceiver optimization problem is formulated and solved in Section~\ref{Section_Joint_Optimization}. Specifically, the joint optimization of the training sequence and transceiver is investigated in Section~\ref{Section_Transmitter_Statistical} when the transmitter has only statistical CSI. By contrast, the transmitter relying on estimated CSI is investigated in Section~\ref{Section_Transmitter_Estimated_CSI}. Our simulation results are discussed in Section~\ref{Section_Simulation}, while our conclusions are offered in Section~\ref{Section_Conclusions}. Fig.~\ref{flow} illustrates the flow of the mathematical analysis. \begin{figure}[t] \centering \includegraphics[width=7.2cm]{flow.eps} \vspace{-3mm} \caption{Flow of the mathematical analysis. } \vspace{-5mm} \label{flow} \end{figure} \noindent \textbf{Notation:} In order to clarify the following mathematical derivations, we introduce the notation, symbols and definitions used throughout this paper. Referring to the fundamental matrix operations, the symbols ${\boldsymbol{Z}}^{\rm{H}}$ and ${\boldsymbol{Z}}^{\rm{T}}$ denote the Hermitian transpose and the transpose of a general matrix ${\boldsymbol{Z}}$, respectively. The trace and determinant of a square matrix ${\boldsymbol{Z}}$ are denoted as ${\rm{Tr}}({\boldsymbol{Z}})$ and $|{\boldsymbol{Z}}|$, respectively. For a positive semidefinite matrix $\bm{Z}$, the matrix ${\boldsymbol{Z}}^{\frac{1}{2}}$ or ${\boldsymbol{Z}}^{{1}/{2}}$ is the Hermitian square root of ${\boldsymbol{Z}}$, which is also positive semidefinite. The mathematical notation ${z^ + }$ represents $\max \{ 0,z\} $. For matrix decompositions in this paper, the notations ${\boldsymbol \Lambda} \searrow $ and ${\boldsymbol \Lambda} \nearrow $ represent rectangular or square diagonal matrices with their diagonal elements sorted in decreasing and increasing order, respectively. \section{Signal Models for Channel Estimation and Data Transmission} \label{Section_Signal_Model} A typical point-to-point MIMO system is considered.
Specifically, an $N_{\rm{R}} \times N_{\rm{T}}$ MIMO channel $\bm{H}$ associated with transmit-side spatial antenna correlation is investigated, where the channel $\bm{H}$ can be formulated as in \cite{Pastore2016,Xing2021,Soysal2010}: \begin{align}\label{Channel_Model} {\bm{H}}={\bm{H}}_{\rm{W}}{\bm \Psi}^{1/2}, \end{align}where the entries of ${\bm{H}}_{\rm{W}}$ are independent and identically distributed (i.i.d.) Gaussian random variables having zero mean and unit variance. In the signal model (\ref{Channel_Model}), the positive definite matrix ${\bm \Psi}$ represents the transmit correlation matrix. Both channel estimation and data transmission are considered assuming the channel model (\ref{Channel_Model}), which are performed consecutively. In the first phase, training sequences are transmitted. Based on the noise-contaminated training signal observations, the channel matrix is estimated at the receiver. In the second phase, based on the estimated channel matrix, the transmit precoded signals are constructed and transmitted, which are then recovered at the receiver. In the following, we discuss the signal models of these two phases respectively. \subsection{Channel Estimation and Training Optimization} For the training based channel estimation, the training sequence $\bm{X}$ is transmitted to the receivers, yielding the received signal sequence of \begin{align} \label{Channel_Estimation_Model} {\bm{Y}}={\bm{H}}{\bm{X}}+{\bm{N}}, \end{align} where $\bm{N}$ is the additive noise matrix at the destination. Based on the signal model in (\ref{Channel_Estimation_Model}), the key task is to recover the channel matrix $\bm{H}$ from the noise-contaminated observation $\bm{Y}$ as accurately as possible. For a linear channel estimator, the estimated channel equals \cite{Pastore2016} \begin{align} \label{Estimated_Channel} {\bm{\widehat H}}={\bm{Y}}{\bm{G}}_{\rm{E}}, \end{align} where the estimated channel matrix ${\bm{\widehat H}}$ and the true channel matrix ${\bm{H}}$ satisfy \begin{align}\label{Error_Model} {\bm{H}}=&{\bm{\widehat H}}+ \Delta{\bm{H}}, \end{align} where $\Delta\bm{H}$ is the estimation error. Then the MSE matrix of the corresponding channel estimation can be expressed as \begin{align} \label{MMSE_Matrix} \bm{E}_{\rm{MSE}}= {\mathbb{E}}\{\Delta\bm{H}^{\rm{H}}\Delta\bm{H} \} =\ &({\bm{I}} -{\bm{X}}{\bm{G}}_{\rm{E}})^{\rm{H}}{\bm{R}}_{\rm{H}} ({\bm{I}}-{\bm{X}}{\bm{G}}_{\rm{E}}) +{\bm{G}}_{\rm{E}}^{\rm{H}}{\bm{R}}_{\rm{N}}{\bm{G}}_{\rm{E}}, \end{align}where the channel's correlation matrix ${\bm{R}}_{\rm{H}}$ and the noise covariance matrix ${\bm{R}}_{\rm{N}}$ are defined as \begin{align} {\bm{R}}_{\rm{H}}&={\mathbb{E}}\{{\bm{H}}^{\rm{H}}{\bm{H}}\}=N_{\rm{R}}{\bm \Psi}, \ \bm{R}_{\rm{N}} =\mathbb{E} \{\bm{N}^{\rm{H}} \bm{N}\}, \end{align} where the scalar $N_{\rm{R}}$ denotes the number of receive antennas. In order to minimize $\bm{E}_{\rm{MSE}}$, the optimal ${\bm{G}}_{\rm{E}}$ can be chosen as the linear minimum mean squared error (LMMSE) channel estimator as in \cite{Kay93,Ding09}: \begin{align}\label{LMMSE_Estimator} {\bm{G}}_{{\rm{E}},{\text{Opt}}}=&({\bm{X}}^{\rm{H}}{\bm{R}}_{\rm{H}}{\bm{X}}+{\bm{R}}_{\rm{N}})^{-1} {\bm{X}}^{\rm{H}}{\bm{R}}_{\rm{H}}. 
\end{align}Therefore, the channel estimation MSE matrix $\bm{E}_{\rm{MSE}}$ in (\ref{MMSE_Matrix}) can be reformulated as \begin{align} \label{MSE_Matrix_Channel_Estimation} \bm{E}_{\rm{MSE}}=\mathbb{E}\{ \Delta{\bm{H}}^{\rm{H}} \Delta{\bm{H}} \} =&({\bm{R}}_{\rm{H}}^{-1}+{\bm{X}}{\bm{R}}_{\rm{N}}^{-1}{\bm{X}}^{\rm{H}})^{-1} =N_{\rm{R}}\left({\bm{\Psi}}^{-1}+N_{\rm{R}}{\bm{X}}{\bm{R}}_{\rm{N}}^{-1}{\bm{X}}^{\rm{H}}\right)^{-1} . \end{align} Meanwhile, based on the LMMSE estimator in (\ref{LMMSE_Estimator}), the resultant channel estimation error $\Delta\bm{H}$ can be written in the following form \cite{Kay93} \begin{align} \Delta\bm{H}=\Delta\bm{H}_{\rm{W}}{\bm \Phi}^{1/2}, \end{align} where $\Delta\bm{H}_{\rm{W}}$ is a random matrix whose elements are i.i.d. Gaussian distributed with zero mean and unit variance, while ${\bm \Phi}$ is given in (\ref{MSE_Matrix_Channel_Estimation}) as \begin{align} {\bm \Phi} =& \left({\bm{\Psi}}^{-1}+N_{\rm{R}}{\bm{X}}{\bm{R}}_{\rm{N}}^{-1} {\bm{X}}^{\rm{H}}\right)^{-1}. \end{align} Moreover, upon applying the LMMSE channel estimator of (\ref{LMMSE_Estimator}), the correlation matrix of the estimated channel ${\mathbb{E}}\{\bm{\widehat H}^{\rm{H}}\bm{\widehat H}\}$ equals \begin{align} {\mathbb{E}}\{\bm{\widehat H}^{\rm{H}}\bm{\widehat H}\} =\! {\bm{R}}_{\rm{H}}\!-\! ({\bm{R}}_{\rm{H}}^{-1}\!+ \! {\bm{X}}{\bm{R}}_{\rm{N}}^{-1}{\bm{X}}^{\rm{H}})^{-1} =\! N_{\rm{R}}{\bm{\Psi}}\! - \! N_{\rm{R}}\left({\bm{\Psi}}^{-1}\!+\!N_{\rm{R}}{\bm{X}}{\bm{R}}_{\rm{N}}^{-1} {\bm{X}}^{\rm{H}}\right)^{-1} \triangleq {\bm \Pi}. \end{align} When the LMMSE channel estimator is adopted, the channel estimation MSE matrix $\bm{E}_{\rm{MSE}}$ is a function of the training sequence $\bm{X}$. Therefore, the performance of channel estimation may indeed be improved by optimizing the choice of the training sequence $\bm{X}$ \cite{XingTSP201501}. The most widely used performance metrics are the MI \cite{Gu2019} and MSE \cite{XingTSP201501}. In the following, these two classic training optimization techniques are reviewed in order to reveal the general optimal structure of the training sequence $\bm{X}$. As for MI maximization, the corresponding training optimization problem is formulated as in \cite{Coldrey2007}, \cite{Coldrey2008}, \cite{Gu2019}: \begin{align} \textbf{P. 1:} \ \ \max_{{\bm{X}}} \ \ & {\rm{log}}\ {\rm{det}}\big( {\bm{R}}_{\rm{H}}^{-1}+{\bm{X}}{\bm{R}}_{\rm{N}}^{-1}{\bm{X}}^{\rm{H}}\big), \ {\rm{s.t.}} \ {\rm{Tr}}({\bm{X}}{\bm{X}}^{\rm{H}})\le P_{\rm{T}}T_{\rm{T}}, \end{align} where $P_{\rm{T}}$ and $T_{\rm{T}}$ are the power and the time interval of channel estimation, respectively, and $P_{\rm{T}}T_{\rm{T}}$ is the energy allocated to the channel estimation. On the other hand, the training optimization aiming for minimizing the sum MSE can be formulated in the following form \cite{Wong2004} \begin{align} \textbf{P. 2:} \ \ \min_{{\bm{X}}} \ \ & {\rm{Tr}}[ ({\bm{R}}_{\rm{H}}^{-1}+{\bm{X}}{\bm{R}}_{\rm{N}}^{-1} {\bm{X}}^{\rm{H}})^{-1}], \ {\rm{s.t.}} \ {\rm{Tr}}({\bm{X}}{\bm{X}}^{\rm{H}})\le P_{\rm{T}}T_{\rm{T}}. \end{align}Based on \textbf{P. 1} and \textbf{P. 2}, it may be inferred that using different performance metrics for MIMO channel estimation results in different optimal sequences, because multiple parameters have to be estimated in MIMO channel estimation. Different performance metrics result in different tradeoffs. Therefore, a general MIMO training optimization results in a multi-objective optimization problem. 
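Before elaborating on this multi-objective view, the following minimal numerical sketch (in Python, relying on NumPy, and not part of the system model above) illustrates the LMMSE channel estimator of (\ref{LMMSE_Estimator}): it evaluates the generic MSE expression of (\ref{MMSE_Matrix}) at the LMMSE solution and confirms that it coincides with the closed form of (\ref{MSE_Matrix_Channel_Estimation}). The antenna numbers, the exponential correlation model, the white training-noise covariance and the unoptimized training sequence are purely illustrative assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N_T, N_R, T_T = 4, 4, 8          # illustrative antenna numbers and training length

# Exponential transmit-correlation matrix (illustrative assumption)
Psi = 0.5 ** np.abs(np.subtract.outer(np.arange(N_T), np.arange(N_T)))
R_H = N_R * Psi                  # R_H = E{H^H H} = N_R * Psi
R_N = np.eye(T_T)                # assumed training-noise covariance E{N^H N}

# An arbitrary (unoptimized) complex training sequence X of size N_T x T_T
X = (rng.standard_normal((N_T, T_T))
     + 1j * rng.standard_normal((N_T, T_T))) / np.sqrt(2)

# LMMSE estimator G_E = (X^H R_H X + R_N)^{-1} X^H R_H
G_E = np.linalg.solve(X.conj().T @ R_H @ X + R_N, X.conj().T @ R_H)

# Closed-form MSE matrix (R_H^{-1} + X R_N^{-1} X^H)^{-1}
E_closed = np.linalg.inv(np.linalg.inv(R_H) + X @ np.linalg.inv(R_N) @ X.conj().T)

# Generic quadratic-form MSE matrix evaluated at the LMMSE estimator
I = np.eye(N_T)
E_quad = ((I - X @ G_E).conj().T @ R_H @ (I - X @ G_E)
          + G_E.conj().T @ R_N @ G_E)

assert np.allclose(E_closed, E_quad)   # both expressions coincide for the LMMSE G_E
\end{verbatim}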
From a multi-objective optimization perspective, all the optimal solutions of the training optimization associated with different performance metrics are the Pareto optimal\footnote{Pareto optimality refers to the fact that, in a multi-objective optimization problem, it is impossible to improve any of the objectives without degrading at least one of the other objectives.} solutions of the following matrix-monotonic optimization problem \cite{XingTSP201501} \begin{align} \textbf{P. 3:} \ \ \max_{\bm{X}} \ \ & {\bm{X}}{\bm{R}}_{\rm{N}}^{-1}{\bm{X}}^{\rm{H}}, \ {\rm{s.t.}} \ {\rm{Tr}}({\bm{X}}{\bm{X}}^{\rm{H}}) \le P_{\rm{T}}T_{\rm{T}}. \end{align} We would like to highlight that in the above optimization problem, the OF is a positive semi-definite matrix, which may be viewed as a matrix-valued SNR in the channel estimation procedure \cite{XingTSP201501}. Bearing in mind that the constraint in \textbf{P. 3} is unitary invariant, the matrix-monotonic optimization \textbf{P. 3} is of a vector optimization nature, which is a function of the vector consisting of the eigenvalues of ${\bm{X}}{\bm{R}}_{\rm{N}}^{-1}{\bm{X}}^{\rm{H}}$. The different Pareto optimal solutions of \textbf{P. 3} achieve different levels of fairness among the different accuracies of the multiple estimated parameters. It is worth noting that, from a practical implementation perspective, the channel estimation performance is usually improved based on gathering more energy at the receiver instead of increasing the transmit power, because a practical power amplifier has a limited linear operating region. In other words, the channel estimation accuracy is improved by increasing the number of columns in $\bm{X}$, instead of increasing the amplitudes of the elements of $\bm{X}$. Based on the matrix-monotonic optimization framework of \cite{XingTSP201501}, we have the following conclusions for training optimization. \noindent \textbf{Conclusion 1:} In MIMO channel estimation relying on an LMMSE estimator, the optimal training sequences satisfy the following structure \begin{align} {\bm{X}}_{\rm{opt}}={\bm{U}}_{\bm{X}}{\bm \Lambda}_{\bm{X}}{\bm{U}}_{{\bm{R}}_{\rm{N}}}^{\rm{H}}, \end{align} where the unitary matrix ${\bm{U}}_{{\bm{R}}_{\rm{N}}}$ is defined based on the eigenvalue decomposition (EVD) as \begin{align} {\bm{R}}_{\rm{N}}^{-1}={\bm{U}}_{{\bm{R}}_{\rm{N}}}{\bm \Lambda}_{{\bm{R}}_{\rm{N}}}^{-1}{\bm{U}}_{{\bm{R}}_{\rm{N}}}^{\rm{H}} \ \text{with} \ {\bm \Lambda}_{{\bm{R}}_{\rm{N}}}^{-1}\searrow. \end{align}The unitary matrix ${\bm{U}}_{\bm{X}}$ is determined by the specific performance metrics. Finally, ${\boldsymbol \Lambda}_{\bm{X}}$ is a rectangular diagonal matrix, which is also determined by the specific performance metric. \noindent \textbf{Conclusion 2:} Referring to ${\bm{U}}_{\bm{X}}$ for both MI maximization and MSE minimization, the optimal ${\bm{U}}_{\bm{X}}$ obeys \cite{XingTSP201501} \begin{align} {\bm{U}}_{\bm{X}}={\bm{U}}_{\bm{\Psi}}, \end{align}where the unitary matrix ${\bm{U}}_{\bm{\Psi}}$ is defined based on the following EVD: \begin{align} {\bm{\Psi}}={\bm{U}}_{\bm{\Psi}}{\bm \Lambda}_{\bm{\Psi}}{\bm{U}}_{\bm{\Psi}}^{\rm{H}} \ \text{with} \ {\bm \Lambda}_{\bm{\Psi}}\searrow. \end{align} Based on \textbf{Conclusion 2}, the structure ${\bm{U}}_{\bm{X}}={\bm{U}}_{\bm{\Psi}}$ is applied in the general joint optimization of the training and the transceiver in Section~\ref{Section_Joint_Optimization}. 
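As a numerical illustration of \textbf{Conclusion 1} and \textbf{Conclusion 2}, the following sketch (Python/NumPy) constructs a training sequence having the structure ${\bm{X}}={\bm{U}}_{\bm{\Psi}}{\bm \Lambda}_{\bm{X}}{\bm{U}}_{{\bm{R}}_{\rm{N}}}^{\rm{H}}$ and verifies that ${\bm{\Psi}}^{-1}+{\bm{X}}{\bm{R}}_{\rm{N}}^{-1}{\bm{X}}^{\rm{H}}$ then becomes diagonal in the eigenbasis of ${\bm{\Psi}}$, so that the matrix-valued training design collapses to a scalar power allocation over the diagonal of ${\bm \Lambda}_{\bm{X}}$. The correlation model, the coloured noise covariance and the uniform diagonal loading are illustrative assumptions rather than the metric-dependent optimal choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N_T, T_T, P_T = 4, 6, 10.0       # illustrative dimensions and training power budget

# Assumed transmit-correlation and (coloured) training-noise covariance matrices
Psi = 0.6 ** np.abs(np.subtract.outer(np.arange(N_T), np.arange(N_T)))
A = rng.standard_normal((T_T, T_T))
R_N = A @ A.T + np.eye(T_T)

# EVD of Psi, reordered so that Lambda_Psi is decreasing (Conclusion 2)
U_Psi = np.linalg.eigh(Psi)[1][:, ::-1]
# EVD of R_N; eigh's ascending order of R_N gives decreasing eigenvalues of R_N^{-1}
U_RN = np.linalg.eigh(R_N)[1]

# Structured training sequence X = U_Psi * Lambda_X * U_RN^H,
# with a simple uniform diagonal loading meeting Tr(X X^H) = P_T * T_T
Lam_X = np.zeros((N_T, T_T))
np.fill_diagonal(Lam_X, np.sqrt(P_T * T_T / N_T))
X = U_Psi @ Lam_X @ U_RN.T

# With this structure, Psi^{-1} + X R_N^{-1} X^H is diagonal in the eigenbasis of Psi,
# i.e. the matrix-valued design reduces to allocating power over N_T scalar modes.
M = np.linalg.inv(Psi) + X @ np.linalg.inv(R_N) @ X.T
D = U_Psi.T @ M @ U_Psi
assert np.allclose(D, np.diag(np.diag(D)), atol=1e-9)
\end{verbatim}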
Moreover, it is also worth noting that at high SNRs\footnote{Channel estimation is performed invariably at high SNR.} the MI maximization \textbf{P. 1} has the optimal solution of ${\bm \Lambda}_{\bm{X}}=\bm{I}$ i.e., uniformly allocating the powers along spatial directions \cite{XingTSP201501}. By contrast, for MSE minimization, the optimal ${\bm \Lambda}_{\bm{X}}$ is the water-filling solution \cite{XingTSP201501}. Undoubtedly, uniform power allocation is the simplest scheme for training optimization, which can be adopted as a suboptimal power allocation scheme along the spatial directions. \subsection{Transceiver Optimization} Upon assuming that the estimated CSI is available at the transmitter with the aid of a perfect feedback channel, the transmitted signal is preprocessed by a channel-dependent TPC and the signal ${\bm{y}}$ received at the destination is of the following form \cite{Palomar03}: \begin{align}\label{Signal_Model_Transceiver} {\bm{y}}&={\bm{H}}{\bm{F}}{\bm{s}}+{\bm{n}}, \end{align}where ${\bm{F}}$ is the TPC matrix, ${\bm{s}}$ is the signal vector and ${\bm{n}}$ is the additive receiver noise having the covariance matrix ${\bm{R}}_{{\bm{n}}}=\sigma_{\rm{N}}^2{\bm{I}}$. The covariance matrix of ${\bm{s}}$ is assumed to be an identity matrix, i.e., ${\mathbb{E}}\{{\bm{s}}{\bm{s}}^{\rm{H}}\}={\bm{I}}$. In data transmission, based on the channel estimation error model of (\ref{Error_Model}), the signal model of (\ref{Signal_Model_Transceiver}) used for data transmission is rewritten as \begin{align}\label{Signal_Model_Data_T} {\bm{y}}&={\bm{\widehat H}}{\bm{F}}{\bm{s}}+\Delta\bm{H}{\bm{F}}{\bm{s}}+{\bm{n}} ={\bm{\widehat H}}{\bm{F}}{\bm{s}}+\underbrace{\Delta\bm{H}_{\rm{W}}\bm{\Phi}^{\frac{1}{2}} {\bm{F}}{\bm{s}}+{\bm{n}}}_{\triangleq \bm{v}}, \end{align}where $\bm{v}$ is the equivalent noise that consists of the additive noise and the channel estimation error. The covariance matrix $\bm{R}_{\bm{v}}$ of the equivalent noise $\bm{v}$ can be expressed as \cite{Pastore2016,Ding09} \begin{align} \bm{R}_{\bm{v}}=[\sigma_{\rm{N}}^2+{\rm{Tr}}(\bm{F}\bm{F}^{\rm{H}}\bm{\Phi})]\bm{I}. \end{align} As for the transceiver optimization, the receiver has the information including the estimated channel matrix $\bm{\widehat H}$ and the covariance matrix $\bm{R}_{\bm{v}}$. Again, at the transmitter, we assume that either the estimated CSI ${\bm{\widehat H}}$ or only the statistical CSI given in (\ref{Channel_Model}) is available. In the following, both of these two cases are discussed. Note that similarly to training optimization, for the MIMO TPC designs there are also diverse performance metrics that correspond to distinctly different OFs \cite{Palomar03,XingTSP201501}. First, the TPC is designed by maximizing the capacity or MI. The corresponding optimization problem is given by \cite{Palomar03} \begin{align} \textbf{P. 4:}\! \max_{{\bm{F}}} \! \mathbb{E}\!\left\{\!{\rm{log}}\det\!\left({\bm{I}}\!\!+\!\!{\bm{F}}^{\rm{ H}}{\bm{\widehat H}}^{\rm{H}}{\bm{R}}_{\bm{v}}^{-1}{\bm{\widehat H}} {\bm{F}}\!\right)\!\right\} \! {\rm{s.t.}} \bm{R}_{\bm{v}}\!=\![\sigma_{\rm{N}}^2\!\!+\!\!{\rm{Tr}}(\bm{F}\bm{F}^{\rm{H}}\bm{\Phi})]\bm{I}, {\rm{Tr}}({\bm{F}}{\bm{F}}^{\rm{H}}) \!\le \! P_{\rm{D}}, \end{align}where $ P_{\rm{D}}$ is the maximum power during the data transmission. In \textbf{P. 4}, when the transmitter has the same CSI at the receiver, the expectation $\mathbb{E}\{\cdot\}$ can be removed. 
Otherwise, when only statistical CSI is available at the transmitter, $\mathbb{E}\{\cdot\}$ is performed over ${\bm{\widehat H}}$ for the optimization of ${\bm{F}}$. On the other hand, the TPC optimization aiming for minimizing the sum MSE may be written as \cite{Palomar03} \begin{align} \textbf{P. 5:} \! \min_{{\bm{F}}} \! \mathbb{E}\!\left\{{\rm{Tr}}[\!({\bm{I}}\!+\!{\bm{F}}^{\rm{H}}{\bm{\widehat H}}^{\rm{H}}{\bm{R}}_{{\bm{v}}}^{-1}{\bm{\widehat H}}{\bm{F}}\!)^{-1}]\! \right\}\! {\rm{s.t.}} \bm{R}_{\bm{v}}\!=\![\sigma_{\rm{N}}^2\!+\!{\rm{Tr}}(\bm{F}\bm{F}^{\rm{H}}\bm{\Phi})]\bm{I}, {\rm{Tr}}({\bm{F}}{\bm{F}}^{\rm{H}}) \!\le\! P_{\rm{D}}. \end{align} In order to extend the optimization \textbf{P. 4} to a general case, a weighted MI is formulated as \cite{Xing2021} \begin{align} \textbf{P. 6:} \! \max_{{\bm{F}}} \! \mathbb{E}\!\left\{ \! {\rm{log}}\det\!\left(\!{\bm{I}}\!\!+\!\!{\bm{A}}^{\rm{H}}{\bm{F}}^{\rm{H}}{\bm{\widehat H}}^{\rm{H}}{\bm{R}}_{\bm{v}}^{-1}{\bm{\widehat H}} {\bm{F}}{\bm{A}}\!\right)\! \right\} \! {\rm{s.t.}} \bm{R}_{\bm{v}}\!\!=\!\![\sigma_{\rm{N}}^2\!\!+\!\!{\rm{Tr}}(\bm{F}\bm{F}^{\rm{H}}\bm{\Phi})]\bm{I}, \! \! {\rm{Tr}}(\!{\bm{F}}{\bm{F}}^{\rm{H}}\!)\! \!\le\! \! P_{\rm{D}}, \end{align}where $\bm{A}$ is a complex matrix of appropriate dimensions. In \textbf{P. 6}, it can be observed that the weighting matrix $\bm{A}$ is applied to the matrix-valued SNR. Similarly, the optimization problem of weighted MSE minimization can be written as \cite{XingTSP201501} \begin{align} \textbf{P. 7:} \! \min_{{\bm{F}}}\! \mathbb{E}\left\{ \! {\rm{Tr}}\![\!\bm{W}(\!{\bm{I}}\!+\!{\bm{F}}^{\rm{H}}{\bm{\widehat H}}^{\rm{H}}{\bm{R}}_{{\bm{v}}}^{-1}{\bm{\widehat H}}{\bm{F}})^{-1}\!]\! \right\} {\rm{s.t.}} \bm{R}_{\bm{v}}\!=\![\sigma_{\rm{N}}^2\!\!+\!\!{\rm{Tr}}(\bm{F}\bm{F}^{\rm{H}}\bm{\Phi})]\bm{I},\!\! {\rm{Tr}}({\bm{F}}{\bm{F}}^{\rm{H}})\! \le\! P_{\rm{D}}, \end{align} where the positive semidefinite matrix $\bm{W}$ is the weighting matrix. Generally speaking, for linear transceiver optimization having additively Schur-convex objectives, the problem can be represented as \cite{Palomar03,XingTSP201501,Xing2021} \begin{align} \textbf{P. 8:}\! \min_{{\bm{F}}}\! \mathbb{E}\!\left\{ \! f_{{\rm{Convex}}}^{\rm{A\!-\!Schur}}\!\left(\!{\bf{d}}\![\!({\bm{F}}^{\rm{H}}{\bm{\widehat H}}^{\rm{H}} {\bm{R}}_{{\bm{v}}}^{-1}{\bm{\widehat H}}{\bm{F}}\!\!+\!\!{\bf{I}}\!)^{-1}\!]\! \right)\! \right\}\!\! {\rm{s.t.}} \! \! \bm{R}_{\bm{v}}\!\!=\!\![\sigma_{\rm{N}}^2\!\!+\!\!{\rm{Tr}}(\bm{F}\bm{F}^{\rm{H}}\bm{\Phi})]\bm{I}, \!\! {\rm{Tr}}({\bm{F}}{\bm{F}}^{\rm{H}})\!\!\le\!\! P_{\rm{D}}, \end{align} where $f_{{\rm{Convex}}}^{\rm{A\!-\!Schur}}(\cdot)$ is an additively Schur-convex function of the vector consisting of the diagonal elements of $({\bm{F}}^{\rm{H}}{\bm{\widehat H}}^{\rm{H}} {\bm{R}}_{{\bm{v}}}^{-1}{\bm{\widehat H}}{\bm{F}}\!+\!{\bf{I}})^{-1}$. The symbol ${\bf{d}}(\bm{Z})$ represents the vector consisting of the diagonal elements of $\bm{Z}$. On the other hand, the linear transceiver optimization having additively Schur-concave objective is given by \cite{Palomar03,XingTSP201501,Xing2021} \begin{align} \textbf{P. 9:}\! \min_{{\bm{F}}}\! \mathbb{E}\!\left\{ \! f_{{\rm{Concave}}}^{\rm{A\!-\!Schur}}\!\left(\!{\bf{d}}\![\!({\bm{F}}^{\rm{H}}{\bm{\widehat H}}^{\rm{H}} {\bm{R}}_{{\bm{v}}}^{-1}{\bm{\widehat H}}{\bm{F}}\!\!+\!\!{\bm{I}}\!)^{-1}\!] \!\right)\! \right\}\! \! {\rm{s.t.}}\! \! 
\bm{R}_{\bm{v}}\!=\![\sigma_{\rm{N}}^2\!\!+\!\!{\rm{Tr}}\!(\bm{F}\bm{F}^{\rm{H}}\bm{\Phi})\!]\bm{I}, \!\!{\rm{Tr}}\!({\bm{F}}{\bm{F}}^{\rm{H}}\!)\!\le\! P_{\rm{D}}, \end{align} where $f_{{\rm{Concave}}}^{\rm{A\!-\!Schur}}(\cdot)$ is an additively Schur-concave function of the vector consisting of the diagonal elements of $({\bm{F}}^{\rm{H}}{\bm{\widehat H}}^{\rm{H}} {\bm{R}}_{{\bm{v}}}^{-1}{\bm{\widehat H}}{\bm{F}}+{\bf{I}})^{-1}$. In order to accommodate the OFs in \textbf{P. 4} to \textbf{P. 9}, the unified general optimization problem is formulated as follows \begin{align} \textbf{P. 10:} \max_{{\bm{F}}} f_{\rm{unified}}\!\left(\!{\bm{F}}^{\rm{H}}{\bm{\widehat H}}^{\rm{H}}\bm{R}_{\bm{v}}^{-1}{\bm{\widehat H}}{\bm{F}}\!\right)\! {\rm{s.t.}} \bm{R}_{\bm{v}}\!=\![\sigma_{\rm{N}}^2\!\!+\!\!{\rm{Tr}}\!(\bm{F}\bm{F}^{\rm{H}}\bm{\Phi})\!]\bm{I}, \!{\rm{Tr}}\! ({\bm{F}}{\bm{F}}^{\rm{H}})\!\le\! P_{\rm{D}}. \end{align} It is worth noting that \textbf{P. 10} aims for maximizing a performance metric, while some optimization problems from the set of \textbf{P. 4} to \textbf{P. 9} aim for minimizing some performance metrics. In order to make the mathematical formulas consistent, when the optimization problem considered aims for minimizing a nonnegative function, its OF is replaced by its inverse function. When the optimization problem considered aims for minimizing a negative function, its OF is replaced by its negative counterpart. It should be highlighted that the expectation operation is contained in \textbf{P. 10}, and has either different mathematical meanings when the transmitter has either estimated CSI or statistical CSI. Specifically, when the transmitter relies on estimated CSI, the TPC $\bm{F}$ is optimized based on a specific realization of ${\bm{\widehat H}}$ and then the expectation operation is applied over the resultant OF. On the other hand, when the transmitter has only statistical information, the TPC $\bm{F}$ is optimized over the whole distribution of ${\bm{\widehat H}}$, instead of a realization of ${\bm{\widehat H}}$. As for the TPC optimizations from the set \textbf{P. 4} to \textbf{P. 9}, when the estimated CSI is available at the transmitter, the optimal solutions are the Pareto optimal solutions of the following matrix-monotonic optimization \cite{XingTSP201501} \begin{align} \textbf{P. 11:} \max_{{\bm{F}}} {\bm{F}}^{\rm{H}}{\bm{\widehat H}}^{\rm{H}}{\bm{R}}_{\bm{v}}^{-1}{\bm{\widehat H}}{\bm{F}} \ {\rm{s.t.}} \ \bm{R}_{\bm{v}}=[\sigma_{\rm{N}}^2+{\rm{Tr}}(\bm{F}\bm{F}^{\rm{H}}\bm{\Phi})]\bm{I}, {\rm{Tr}}({\bm{F}}{\bm{F}}^{\rm{H}})\le P_{\rm{D}}. \end{align} On the other hand, when the transmitter only has statistical information, the optimal solutions of \textbf{P. 4} to \textbf{P. 9} aim for maximizing the distribution of ${\bm{F}}^{\rm{H}}{\bm{\widehat H}}^{\rm{H}}{\bm{R}}_{\bm{v}}^{-1}{\bm{\widehat H}}{\bm{F}}$. When ${\bm{\widehat H}}$ has only column correlations, maximizing the distribution of a random positive semidefinite matrix is equivalent to maximizing its expectation, i.e., ${\bm{F}}^{\rm{H}}\mathbb{E}\{{\bm{\widehat H}}^{\rm{H}}{\bm{R}}_{\bm{v}}^{-1}{\bm{\widehat H}}\}{\bm{F}}$ \cite{TrainingXing}. This is because when the distribution is maximized, for any given realization in the original distribution, it is always possible to find a realization from the optimized distribution, which is larger than the original realization and has the same probability density function (pdf). The detailed proof can be found in \cite{TrainingXing}. 
As a result, when ${\bm{\widehat H}}$ has only column correlations and the transmitter has only statistical CSI, the optimal solutions of \textbf{P. 4} to \textbf{P. 9} are the Pareto optimal solutions of the following matrix-monotonic optimization \cite{Xing2021} \begin{align} \textbf{P. 12:} \max_{{\bm{F}}} {\bm{F}}^{\rm{H}}\mathbb{E}\{{\bm{\widehat H}}^{\rm{H}}{\bm{R}}_{\bm{v}}^{-1}{\bm{\widehat H}}\}{\bm{F}} \ {\rm{s.t.}} \bm{R}_{\bm{v}}=[\sigma_{\rm{N}}^2+{\rm{Tr}}(\bm{F}\bm{F}^{\rm{H}}\bm{\Phi})]\bm{I}, {\rm{Tr}}({\bm{F}}{\bm{F}}^{\rm{H}})\le P_{\rm{D}}. \end{align} \textbf{P. 12} can be viewed as the optimization of the distribution of ${\bm{F}}^{\rm{H}}{\bm{\widehat H}}^{\rm{H}}{\bm{R}}_{\bm{v}}^{-1}{\bm{\widehat H}}{\bm{F}}$. When multiple data streams are transmitted, the MIMO TPC optimization is a multi-objective optimization problem. Generally speaking, there is no solution that is optimal for all OFs. Clearly, the channel estimation procedure has a substantial impact on the TPC optimization. Note that in \textbf{P. 10} and \textbf{P. 11}, the positive definite matrix $\bm{\Phi}$ is a function of the training sequence $\bm{X}$. Moreover, it is also worth highlighting that the communication resources are limited, i.e., $(T-T_{\rm{T}})P_{\rm{D}}+P_{\rm{T}}T_{\rm{T}}\le E_{\rm{total}}$, where $T$ is the channel's coherence time. The more resources are allocated to channel estimation, the more accurately the channel matrix can be estimated, but fewer resources are left for data transmission. As a result, there exist tradeoffs between the optimizations of \textbf{P. 3} and \textbf{P. 10}. The focus of our work is hence to jointly optimize these two matrix-monotonic optimization problems. \section{Unified Optimization of the Training Sequence and the TPC} \label{Section_Joint_Optimization} Based on the discussions in Section~\ref{Section_Signal_Model}, when perfect CSI is available at the transmitter, the matrix-monotonic optimization problem \textbf{P. 10} can be simplified to the following matrix-monotonic optimization \cite{XingTSP201501} \begin{align} \label{MM_Precoder} \max_{\bm{F}} \ \ & {\bm{F}}^{\rm{H}}{\bm{H}}^{\rm{H}}{\bm{R}}_{\bm{n}}^{-1}{\bm{H}}{\bm{F}}, \ {\rm{s.t.}} \ {\rm{Tr}}({\bm{F}}{\bm{F}}^{\rm{H}}) \le P_{{\rm{D}}}, \end{align} which is of the same form as \textbf{P. 3}. This fact reveals that there exists a duality between the TPC optimization and the training sequence optimization \cite{XingTSP201501}, albeit the constraints in \textbf{P. 3} and \textbf{P. 10} have slightly different physical meanings. Specifically, the constraint in \textbf{P. 3} guarantees having limited total energy for channel estimation. On the other hand, the constraint in \textbf{P. 10} assures that the maximum power is lower than a specific threshold. \subsection{Unified Optimization Problems} For the joint optimization of the training sequence $\bm{X}$ and the TPC $\bm{F}$, a joint performance metric is a function of both $\bm{X}$ and $\bm{F}$. Again, here the training optimization is carried out by only using the statistical CSI. By contrast, the TPC optimization is carried out by using either the statistical CSI or the estimated CSI. As a result, for the joint optimization considered, the performance metric should be an average performance that is independent of the instantaneous CSI. The corresponding unified joint optimization of the training sequence and the TPC is written in the following form \begin{align} \textbf{P.
13:} \ \max_{{\bm{F}},{\bm{X}},T_{\rm{T}}} \ \ & \frac{T-T_{\rm{T}}}{T} f_{\rm{unified}}\left(\frac{{\bm{F}}^{\rm{H}}{\bm{\widehat H}}^{\rm{H}}{\bm{\widehat H}}{\bm{F}}}{\sigma_{\rm{N}}^2+{\rm{Tr}}({\bm \Phi}{\bm{F}}{\bm{F}}^{\rm{H}})}\right) \nonumber \\ \ {\rm{s.t.}} \ & \bm{\Phi}=\left({\bm{\Psi}}^{-1}+N_{\rm{R}}{\bm{X}}{\bm{R}}_{\bm{N}}^{-1} {\bm{X}}^{\rm{H}}\right)^{-1} \nonumber \\ & {\rm{Tr}}(\bm{F}\bm{F}^{\rm{H}}) \le P_{\rm{D}}, \ {\rm{Tr}}(\bm{X}\bm{X}^{\rm{H}}) \le P_{\rm{T}}T_{\rm{T}}, \ (T-T_{\rm{T}})P_{\rm{D}}+P_{\rm{T}}T_{\rm{T}}\le E_{\rm{total}}. \end{align} In contrast to \textbf{P. 10}, there are three optimization variables in \textbf{P. 13}, namely ${\bm{F}}$, ${\bm{X}}$, and $T_{\rm{T}}$. The discrete time interval $T_{\rm{T}}$ is the training length, which must satisfy $1\le T_{\rm{T}} \le T\!-\!1$. The scalar $T_{\rm{T}}$ determines the amount of resources allocated to channel estimation and hence also the resources left for data transmission. After introducing $T_{\rm{T}}$ in the OF, \textbf{P. 13} aims for striking a compelling trade-off between the channel estimation accuracy and data transmission efficiency, which is distinctly different from \textbf{P. 10}. In order to boldly differentiate the traditional designs given by \textbf{P. 10}, the performance metrics in \textbf{P. 13} given by \textbf{P. 4} to \textbf{P. 9} are termed as the effective metrics. For example, when considering the MI maximization problem \textbf{P. 4}, \textbf{P. 13} aims for maximizing the \textit{effective MI} (termed as achievable rate in \cite{Pastore2016}) instead of the original MI. The corresponding OF is \begin{align}\label{Objective_Effective_MI} \frac{T-T_{\rm{T}}}{T}f_{\rm{unified}}\left(\frac{{\bm{ F}}^{\rm{H}}\bm{\widehat H}^{\rm{H}}{\bm{\widehat H}} {\bm{F}}}{\sigma_{\rm{N}}^2+{\rm{Tr}}({\bm \Phi}{\bm{F}}{\bm{F}}^{\rm{H}})}\right) &=\frac{T-T_{\rm{T}}}{T} {\mathbb{E}\left\{{\rm{log \ det}}\left(\bm{I} +\frac{{\bm{F}}^{\rm{H}}{\bm{\widehat H}}^{\rm{H}}{\bm{\widehat H}}{\bm{F}}}{\sigma_{\rm{N}}^2+{\rm{Tr}}({\bm \Phi}{\bm{F}}{\bm{F}}^{\rm{H}})}\right)\right\}}. \end{align} The OF in (\ref{Objective_Effective_MI}) simultaneously maximizes the MI and data transmission time interval $T-T_{\rm{T}}$. As discussed above, for the optimization problems \textbf{P. 4} to \textbf{P. 9}, there are several ones aiming for minimizing a specific performance metric. When these kind of optimizations are unified in \textbf{P. 13}, the OFs are replaced by their corresponding inverse functions. For example, for the weighted MSE minimization of \textbf{P. 7}, the corresponding OF of \textbf{P. 13} is formulated as the inverse of the weighted MSE, i.e., \begin{align} \frac{T-T_{\rm{T}}}{T}f_{\rm{unified}}\left(\frac{{\bm{ F}}^{\rm{H}}\bm{\widehat H}^{\rm{H}}{\bm{\widehat H}} {\bm{F}}}{\sigma_{\rm{N}}^2+{\rm{Tr}}({\bm \Phi}{\bm{F}}{\bm{F}}^{\rm{H}})}\right) &=\frac{T-T_{\rm{T}}}{T} \frac{1}{\mathbb{E}\left\{{\rm{Tr}}\left[\bm{W}\left(\bm{I} +\frac{{\bm{F}}^{\rm{H}}{\bm{\widehat H}}^{\rm{H}}{\bm{\widehat H}}{\bm{F}}}{\sigma_{\rm{N}}^2+{\rm{Tr}}({\bm \Phi}{\bm{F}}{\bm{F}}^{\rm{H}})}\right)^{-1}\right]\right\}}. 
\end{align} Here, similar to the concept of effective MI, the following term \begin{align} \frac{T}{T-T_{\rm{T}}}\mathbb{E}\left\{{\rm{Tr}}\left[\bm{W}\left(\bm{I} +\frac{{\bm{F}}^{\rm{H}}{\bm{\widehat H}}^{\rm{H}}{\bm{\widehat H}}{\bm{F}}}{\sigma_{\rm{N}}^2+{\rm{Tr}}({\bm \Phi}{\bm{F}}{\bm{F}}^{\rm{H}})}\right)^{-1}\right]\right\} \end{align}may be viewed as the effective weighted MSE performance. Minimizing the effective MSE aims for simultaneously minimizing the traditional MSE and maximizing data transmission time interval. \subsection{Joint Optimization for Statistical CSI at the Transmitter} \label{Section_Transmitter_Statistical} In this section, we investigate the case in which the transmitter only has statistical CSI but the destination has more accurate estimated CSI. Here we would like to highlight that in contrast to most of the existing studies on transceiver optimization \cite{Xing2021}, in the joint optimization of a linear TPC and training sequence, only the average performances are considered instead of their counterparts relying on the instantaneous CSI. Strictly speaking, only the upper bounds of the system performance are investigated. Following the rationale in \cite{Pastore2016}, the expectation operations are moved into the OFs of \textbf{P. 4} to \textbf{P. 9}. The resultant joint optimization problem becomes \begin{align} \textbf{P. 14:} \ \ \max_{{\bm{F}},{\bm{X}},T_{\rm{T}}} \ \ & \frac{T-T_{\rm{T}}}{T}f_{\rm{unified}}\left(\frac{{\bm{ F}}^{\rm{H}}\mathbb{E}\{\bm{\widehat H}^{\rm{H}}{\bm{\widehat H}} \}{\bm{F}}}{\sigma_{\rm{N}}^2+{\rm{Tr}}({\bm \Phi}{\bm{F}}{\bm{F}}^{\rm{H}})}\right) \nonumber \\ \ {\rm{s.t.}} \ \ & \bm{\Phi}=\left({\bm{\Psi}}^{-1}+N_{\rm{R}}{\bm{X}}{\bm{R}}_{\bm{N}}^{-1} {\bm{X}}^{\rm{H}}\right)^{-1} \nonumber \\ & {\rm{Tr}}(\bm{F}\bm{F}^{\rm{H}}) \le P_{\rm{D}}, \ {\rm{Tr}}(\bm{X}\bm{X}^{\rm{H}}) \le P_{\rm{T}}T_{\rm{T}}, \ (T-T_{\rm{T}})P_{\rm{D}}+P_{\rm{T}}T_{\rm{T}}\le E_{\rm{total}}. \end{align} In the OF of \textbf{P. 14}, the average matrix-valued SNR becomes: \begin{align} & \frac{{\bm{ F}}^{\rm{H}}\mathbb{E}\{\bm{\widehat H}^{\rm{H}}{\bm{\widehat H}} \}{\bm{F}}}{\sigma_{\rm{N}}^2+{\rm{Tr}}({\bm \Phi}{\bm{F}}{\bm{F}}^{\rm{H}})}= \frac{{\bm{ F}}^{\rm{H}}\bm{\Pi}{\bm{F}}}{\sigma_{\rm{N}}^2+{\rm{Tr}}({\bm \Phi}{\bm{F}}{\bm{F}}^{\rm{H}})}, \ \text{with} \ \bm{\Pi}=N_{\rm{R}}{\bm{\Psi}}\!-\!N_{\rm{R}}{\bm \Phi}. \end{align} It can be concluded that \textbf{P. 14} maximizes this matrix-valued SNR. As a result, our joint matrix-monotonic optimization is formulated as \begin{align}\label{Bi-Matrix-Monotonic} \textbf{P. 15:} \ \max_{{\bm{F}},{\bm{X}}} \ & \frac{{\bm{ F}}^{\rm{H}}\bm{\Pi}{\bm{F}}}{\sigma_{\rm{N}}^2+{\rm{Tr}}({\bm \Phi}{\bm{F}}{\bm{F}}^{\rm{H}})}\nonumber \\ \ {\rm{s.t.}} \ & \bm{\Pi}=N_{\rm{R}}{\bm{\Psi}}\!-\!N_{\rm{R}}{\bm \Phi}, \ \ \bm{\Phi}=\left({\bm{\Psi}}^{-1}+N_{\rm{R}}{\bm{X}}{\bm{R}}_{\bm{N}}^{-1} {\bm{X}}^{\rm{H}}\right)^{-1} \nonumber \\ & {\rm{Tr}}(\bm{F}\bm{F}^{\rm{H}}) \le P_{\rm{D}}, \ {\rm{Tr}}(\bm{X}\bm{X}^{\rm{H}}) \le P_{\rm{T}}T_{\rm{T}}, \ (T-T_{\rm{T}})P_{\rm{D}}+P_{\rm{T}}T_{\rm{T}}\le E_{\rm{total}}. \end{align} Explicitly this joint matrix-monotonic optimization is intrinsically different from traditional single matrix-monotonic optimization applied separately for transceiver design and training design, since a pair of optimization variables, namely the linear TPC $\bm{F}$ and the training sequence $\bm{X}$, are jointly optimized. The optimal solutions of \textbf{P. 
14} belong to the Pareto-optimal solution set of \textbf{P. 15}. We would like to point out that \textbf{P. 13} optimizes the distribution of ${\bm{F}}^{\rm{H}}{\bm{\widehat H}}^{\rm{H}}{\bm{R}}_{\bm{v}}^{-1}{\bm{\widehat H}}{\bm{F}}$ by relying on the statistical CSI available at the transmitter. Provided that ${\bm{\widehat H}}$ has only column correlations, there exists an equivalence between optimizing the distribution of a random positive semidefinite matrix and maximizing its expectation, i.e., ${\bm{F}}^{\rm{H}}\mathbb{E}\{{\bm{\widehat H}}^{\rm{H}}{\bm{R}}_{\bm{v}}^{-1}{\bm{\widehat H}}\}{\bm{F}}$ \cite{TrainingXing}. As a result, in \textbf{P. 14}, there is an approximation, since the expectation operation is moved into the OF. However, based on our discussion concerning \textbf{P. 12}, this approximation does not change the fact that the optimal solutions of \textbf{P. 13} belong to the Pareto-optimal solution set of \textbf{P. 15}. The approximation in \textbf{P. 14} might, however, change the specific correspondence between the optimal solutions of \textbf{P. 13} and the Pareto-optimal solutions of \textbf{P. 15}. It is worth noting that the constraint ${\rm{Tr}}({\bm{F}}{\bm{F}}^{\rm{H}})\le P_{\rm{D}}$ is equivalent to the following constraint \begin{align} \label{power_constraint_1} \sigma_{\rm{N}}^2{\rm{Tr}}({\bm{F}}{\bm{F}}^{\rm{H}}) \le \sigma_{\rm{N}}^2P_{\rm{D}}, \end{align} which is also equivalent to the following one \begin{align} \label{power_constraint_2} \sigma_{\rm{N}}^2{\rm{Tr}}({\bm{F}}{\bm{F}}^{\rm{H}})+P_{\rm{D}}{\rm{Tr}}({\bm \Phi}{\bm{F}}{\bm{F}}^{\rm{H}})\le \sigma_{\rm{N}}^2P_{\rm{D}}+P_{\rm{D}}{\rm{Tr}}({\boldsymbol \Phi}{\bm{F}}{\bm{F}}^{\rm{H}}). \end{align}The power constraint (\ref{power_constraint_2}) is furthermore equivalent to the following constraint \begin{align} \label{power_constraint_3} \frac{{\rm{Tr}}[(\sigma_{\rm{N}}^2{\bm{I}}+P_{\rm{D}}{\boldsymbol \Phi}){\bm{F}}{\bm{F}}^{\rm{H}}]}{\sigma_{\rm{N}}^2+{\rm{Tr}}({\boldsymbol \Phi}{\bm{F}}{\bm{F}}^{\rm{H}})} \le P_{\rm{D}}. \end{align} Here, the equivalence means that (\ref{power_constraint_3}) can be derived from (\ref{power_constraint_1}) and vice versa. Therefore, based on the equivalence between (\ref{power_constraint_1}) and (\ref{power_constraint_3}), the joint matrix-monotonic optimization problem \textbf{P. 15} is equivalent to the following one \begin{align} \label{Matrix_Monotonic_AA} \textbf{P. 16:} \ \max_{{\bm{F}},{\bm{X}}} \ \ &\frac{{\bm{ F}}^{\rm{H}}{\bm \Pi}{\bm{F}}}{\sigma_{\rm{N}}^2+{\rm{Tr}}({\bm \Phi}{\bm{F}}{\bm{F}}^{\rm{H}})}\nonumber \\ {\rm{s.t.}} \ & \bm{\Pi}=N_{\rm{R}}{\bm{\Psi}}\!-\!N_{\rm{R}}{\bm \Phi}, \ \ \bm{\Phi}=\left({\bm{\Psi}}^{-1}+N_{\rm{R}}{\bm{X}}{\bm{R}}_{\rm{N}}^{-1} {\bm{X}}^{\rm{H}}\right)^{-1}, \ {\rm{Tr}}(\bm{X}\bm{X}^{\rm{H}}) \le P_{\rm{T}}T_{\rm{T}}, \nonumber \\ &\frac{{\rm{Tr}}[(\sigma_{\rm{N}}^2{\bm{I}}+P_{\rm{D}}{\boldsymbol \Phi}){\bm{F}}{\bm{F}}^{\rm{H}}]}{\sigma_{\rm{N}}^2+{\rm{Tr}}({\boldsymbol \Phi}{\bm{F}}{\bm{F}}^{\rm{H}})}\le P_{\rm{D}}, \ (T-T_{\rm{T}})P_{\rm{D}}+P_{\rm{T}}T_{\rm{T}}\le E_{\rm{total}}. \end{align} Upon defining the following new matrix variable \begin{align} \label{Definition_F} {\bm{\widetilde F}}={(\sigma_{\rm{N}}^2{\bm{I}}+P_{\rm{D}}{\bm \Phi})^{1/2}}{\left[{\sigma_{\rm{N}}^2+{\rm{Tr}}({\bm \Phi}{\bm{F}}{\bm{F}}^{\rm{H}})}\right]^{-1/2}}\!{\bm{F}}, \end{align}the optimization problem \textbf{P. 16} can be simplified into the following one \begin{align} \textbf{P.
17:} \ \max_{{\bm{\widetilde F}},\bm{X}} \ \ & {\bm{\widetilde F}}^{\rm{H}}(\sigma_{\rm{N}}^2{\bm{I}}+P_{\rm{D}}{\boldsymbol \Phi})^{-1/2}(N_{\rm{R}}\bm{\Psi}-N_{\rm{R}}{\bm{\Phi}})(\sigma_{\rm{N}}^2{\bm{I}}+P_{\rm{D}}{\boldsymbol \Phi})^{-1/2}{\bm{\widetilde F}}\nonumber \\ \ {\rm{s.t.}} \ &\bm{\Phi}=\left({\bm{\Psi}}^{-1}+N_{\rm{R}}{\bm{X}}{\bm{R}}_{\rm{N}}^{-1} {\bm{X}}^{\rm{H}}\right)^{-1} \nonumber \\ & {\rm{Tr}}({\bm{\widetilde F}}{\bm{\widetilde F}}^{\rm{H}})\le P_{\rm{D}}, \ {\rm{Tr}}(\bm{X}\bm{X}^{\rm{H}}) \le P_{\rm{T}}T_{\rm{T}}, \ (T-T_{\rm{T}})P_{\rm{D}}+P_{\rm{T}}T_{\rm{T}}\le E_{\rm{total}}. \end{align}Based on the fundamental results of matrix-monotonic optimization \cite{XingTSP201501,Xing2021}, we have the following theorem concerning the optimal ${\bm{\widetilde F}}$. \noindent \textbf{Theorem 1:} The Pareto-optimal solution of ${\bm{\widetilde F}}$ for \textbf{P. 17} satisfies the following structure \begin{align} {\bm{\widetilde F}}_{\rm{opt}}={\bm{V}}_{\bm{\mathcal{H}}}{\bm \Lambda}_{\bm{\widetilde F}}{\bm{U}}_{\bm{\widetilde F}}^{\rm{H}}, \end{align}where ${\bm{V}}_{\bm{\mathcal{H}}}$ and ${\bm{U}}_{\bm{\widetilde F}}$ are unitary matrices and ${\bm \Lambda}_{\bm{\widetilde F}}$ is a diagonal matrix. The unitary matrix ${\bm{V}}_{\bm{\mathcal{H}}}$ is defined based on the following SVD \begin{align} (N_{\rm{R}}\bm{\Psi}-N_{\rm{R}}{\bm{\Phi}})^{1/2}(\sigma_{\rm{N}}^2{\bm{I}}+P_{\rm{D}}{\boldsymbol \Phi})^{-1/2}={\bm{U}}_{\bm{\mathcal{H}}}{\bm \Lambda}_{\bm{\mathcal{H}}} {\bm{V}}_{\bm{\mathcal{H}}}^{\rm{H}} \ \text{with} \ {\bm \Lambda}_{\bm{\mathcal{H}}} \searrow. \end{align} On the other hand, the unitary matrix ${\bm{U}}_{\bm{\widetilde F}}$ is determined by the specific OFs \cite{XingTSP201501}. In the following, the optimal solutions of ${\bm{U}}_{\bm{\widetilde F}}$ are enumerated briefly and the detailed derivations are provided in \cite{XingTSP201501}. \noindent \textbf{Conclusion 3:} For the optimization problems \textbf{P. 4} to \textbf{P. 9}, the corresponding optimal values of ${\bm{U}}_{\bm{\widetilde F}}$ are listed as follows: \begin{align} & \textbf{P. 4:} \ {\bm{U}}_{{\bm{\widetilde F}},{\rm{opt}}}\!=\!{\bm{U}}_{\rm{Arb}}; \ \textbf{P. 5:} \ {\bm{U}}_{{\bm{\widetilde F}},{\rm{opt}}}\!=\!{\bm{I}}; \ \textbf{P. 6:} \ {\bm{U}}_{{\bm{\widetilde F}},{\rm{opt}}}\!=\!{\bm{U}}_{\bm{A}}; \nonumber \\ &\textbf{P. 7:} \ {\bm{U}}_{{\bm{\widetilde F}},{\rm{opt}}}\!=\! {\bm{U}}_{\bm{W}}; \ \textbf{P. 8:} \ {\bm{U}}_{{\bm{\widetilde F}},{\rm{opt}}}\!=\!{\bm{U}}_{\rm{DFT}}; \ \textbf{P. 9:} \ {\bm{U}}_{{\bm{\widetilde F}},{\rm{opt}}}\!\!=\!\!{\bm{I}}, \end{align}where the unitary matrices ${\bm{U}}_{\bm{A}}$ and ${\bm{U}}_{\bm{W}}$ are defined based on the following EVDs \begin{align} & \bm{A}\bm{A}^{\rm{H}}=\bm{U}_{\bm{A}}\bm{\Lambda}_{\bm{A}}\bm{U}_{\bm{A}}^{\rm{H}} \ \text{with} \ {\bm \Lambda}_{\bm{A}} \searrow, \ \bm{W}=\bm{U}_{\bm{W}}\bm{\Lambda}_{\bm{W}}\bm{U}_{\bm{W}}^{\rm{H}} \ \text{with} \ {\bm \Lambda}_{\bm{W}} \searrow. \end{align} Moreover, ${\bm{U}}_{\rm{Arb}}$ denotes an arbitrary unitary matrix of appropriate dimensions, while ${\bm{U}}_{\rm{DFT}}$ represents a DFT unitary matrix of suitable dimensions.
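As a numerical illustration of \textbf{Theorem 1}, the following sketch (Python/NumPy) forms the equivalent channel $(N_{\rm{R}}\bm{\Psi}-N_{\rm{R}}{\bm{\Phi}})^{1/2}(\sigma_{\rm{N}}^2{\bm{I}}+P_{\rm{D}}{\boldsymbol \Phi})^{-1/2}$, extracts ${\bm{V}}_{\bm{\mathcal{H}}}$ from its SVD and assembles ${\bm{\widetilde F}}={\bm{V}}_{\bm{\mathcal{H}}}{\bm \Lambda}_{\bm{\widetilde F}}{\bm{U}}_{\bm{\widetilde F}}^{\rm{H}}$ with ${\bm{U}}_{\bm{\widetilde F}}={\bm{I}}$, i.e. the sum-MSE choice of \textbf{Conclusion 3}. The final check confirms that the matrix-valued OF of \textbf{P. 17} then becomes diagonal, so that only a scalar power allocation over the eigenmodes remains. The dimensions, power budgets, white training noise ${\bm{R}}_{\rm{N}}={\bm{I}}$ and uniform loading ${\bm \Lambda}_{\bm{\widetilde F}}$ are illustrative assumptions.
\begin{verbatim}
import numpy as np

def psd_sqrt(M, power=0.5):
    # Hermitian (inverse) square root of a positive definite matrix via its EVD
    w, U = np.linalg.eigh(M)
    return (U * np.clip(w, 1e-12, None) ** power) @ U.conj().T

rng = np.random.default_rng(2)
N_T, N_R, T_T = 4, 4, 6
sigma2_N, P_D, P_T = 1.0, 5.0, 5.0          # illustrative noise level and power budgets

# Assumed transmit correlation and a fixed structured training sequence giving Phi
Psi = 0.6 ** np.abs(np.subtract.outer(np.arange(N_T), np.arange(N_T)))
X = np.sqrt(P_T * T_T / N_T) * np.eye(N_T, T_T)          # simple illustrative training
Phi = np.linalg.inv(np.linalg.inv(Psi) + N_R * X @ X.T)  # white training noise R_N = I
Pi = N_R * Psi - N_R * Phi

# Equivalent channel of P.17 and its SVD, yielding V_H of Theorem 1
K = psd_sqrt(Pi, 0.5) @ psd_sqrt(sigma2_N * np.eye(N_T) + P_D * Phi, -0.5)
_, lam_H, V_H_h = np.linalg.svd(K)                       # K = U_H diag(lam_H) V_H^H
V_H = V_H_h.conj().T

# Theorem 1 structure with U_F = I (sum-MSE choice) and a uniform feasible loading
lam_F = np.sqrt(np.full(N_T, P_D / N_T))                 # Tr(F~ F~^H) = P_D
F_tilde = V_H @ np.diag(lam_F)

# The matrix-valued objective of P.17 is diagonalised by this structure
G = F_tilde.conj().T @ K.conj().T @ K @ F_tilde
assert np.allclose(G, np.diag(np.diag(G)), atol=1e-9)
assert np.allclose(np.diag(G), lam_F**2 * lam_H**2)
\end{verbatim}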
According to the definition of ${\bm{\widetilde F}}$ in (\ref{Definition_F}), the following equality always holds \begin{align} (\sigma_{\rm{N}}^2{\bm{I}}+P_{\rm{D}}{\bm \Phi})^{-1/2}{\bm{\widetilde F}}=\left[{\sigma_{\rm{N}}^2+{\rm{Tr}}({\bm \Phi}{\bm{F}}{\bm{F}}^{\rm{H}})}\right]^{-1/2}{\bm{F}}, \end{align} based on which the following equality can be derived \begin{align} \frac{1}{{\sigma}_{\rm{N}}^2\!+\!{\rm{Tr}}\!(\!{\bm\Phi}{\bm{F}}{\bm{F}}^{\rm{H}})} \!\!=\!\!\frac{{\rm{Tr}}\![\!(\sigma_{\rm{N}}^2{\bm{I}}\!+\!P_{\rm{D}}{\bm \Phi})^{-1/2}{\bm{\widetilde F}}{\bm{\widetilde F}}^{\rm{H}}(\sigma_{\rm{N}}^2{\bm{I}}\!+\!P_{\rm{D}}{\bm\Phi}\!)^{-1/2}\!]}{{\rm{Tr}}({\bm{F}}{\bm{F}}^{\rm{H}})}\!\! =\!\!\frac{{\rm{Tr}}[(\sigma_{\rm{N}}^2{\bm{I}}\!+\!P_{\rm{D}}{\bm \Phi})^{-1}{\bm{\widetilde F}}{\bm{\widetilde F}}^{\rm{H}}]}{P_{\rm{D}}}, \end{align}where the second equality is due to the fact that for the optimal ${\bm{F}}$, we have ${\rm{Tr}}({\bm{F}}{\bm{F}}^{\rm{H}})=P_{\rm{D}}$. Finally, when the optimal solution ${\bm{\widetilde F}}$ has been computed, the optimal ${\bm{F}}$ equals \begin{align} {\bm{F}}_{\rm{opt}}=\sqrt{\frac{P_{\rm{D}}}{{\rm{Tr}}[(\sigma_{\rm{N}}^2{\bm{I}}+P_{\rm{D}}{\bm \Phi})^{-1}{\bm{\widetilde F}}{\bm{\widetilde F}}^{\rm{H}}]}}(\sigma_{\rm{N}}^2{\bm{I}}+P_{\rm{D}}{\bm \Phi})^{-1/2}{\bm{\widetilde F}}_{\rm{opt}}. \end{align} Based on \textbf{Conclusion 1}, \textbf{Conclusion 2} and \textbf{Theorem 1}, we define the following variables and parameters \begin{align} \big[\bm{\Lambda}_{\bm{\widetilde F }}^{\rm{T}}\bm{\Lambda}_{\bm{\widetilde F }}\big]_{i,i}=f_i^2, \ \big[\bm{\Lambda}_{\bm{\Psi}}\big]_{i,i}=\psi_i, \ \big[\bm{\Lambda}_{\bm{X }}\bm{\Lambda}_{\bm{X }}^{\rm{T}}\big]_{i,i}=x_i^2, \ \big[{\bm \Lambda}_{{\bm{R}}_{\rm{N}}}^{-1} \big]_{i,i}=1/\sigma_{i}^2. \end{align} The joint matrix-monotonic optimization problem \textbf{P. 17} is equivalent to the following one: \begin{align} \textbf{P. 18:} \ \max_{\{f_i^2\},\{x_i^2\}} \ \ & {\bm{U}}_{\bm{\widetilde F}}\bm{\Lambda}_{\rm{IN}}{\bm{U}}_{\bm{\widetilde F}}^{\rm{H}} \nonumber \\ \ {\rm{s.t.}} \ & \big[\bm{\Lambda}_{\rm{IN}}\big]_{i,i}= \frac{N_{\rm{R}}^2f_i^2x_i^2 \psi_i/\sigma_i^2}{\sigma_{\rm{N}}^2\psi_i^{-1}+N_{\rm{R}}\sigma_{\rm{N}}^2x_i^2/\sigma_i^2+P_{\rm{D}}} \nonumber \\ & \sum_{i=1}^{N_{\rm{Data}}} f_i^2 \le P_{\rm{D}}, \ \sum_{i}^{N_{\rm{Data}}}x_i^2 \le P_{\rm{T}}T_{\rm{T}}, \ \ (T-T_{\rm{T}})P_{\rm{D}}+P_{\rm{T}}T_{\rm{T}}\le E_{\rm{total}}. \end{align} Based on this, the joint optimization problem \textbf{P. 14} can be further rewritten as \begin{align} \textbf{P. 19:} \ \max_{\{f_i^2\},\{x_i^2\},T_{\rm{T}}} \ \ & \frac{T-T_{\rm{T}}}{T} f_{\rm{unified}}\left(\left\{ \frac{N_{\rm{R}}^2f_i^2x_i^2 \psi_i/\sigma_i^2}{\sigma_{\rm{N}}^2\psi_i^{-1}+N_{\rm{R}}\sigma_{\rm{N}}^2x_i^2/\sigma_i^2+P_{\rm{D}}} \right\}_{i=1}^{N_{\rm{Data}}} \right) \nonumber \\ \ {\rm{s.t.}} \ & \sum_{i=1}^{N_{\rm{Data}}} f_i^2 \le P_{\rm{D}}, \ \sum_{i}^{N_{\rm{Data}}}x_i^2 \le P_{\rm{T}}T_{\rm{T}}, \ (T-T_{\rm{T}})P_{\rm{D}}+P_{\rm{T}}T_{\rm{T}}\le E_{\rm{total}}. \end{align} The specific optimal solutions of \textbf{P. 19} are determined by the particular mathematical formulas of the OF in \textbf{P. 19}. In the following, two specific examples are given to show how to solve \textbf{P. 19}. Their methodology may also be applied to other OFs. 
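To conclude this subsection, the following minimal sketch (Python/NumPy) indicates how the scalarized formulation \textbf{P. 19} may be explored numerically: it sweeps the discrete training length $T_{\rm{T}}$, discards any value violating the energy constraint, and evaluates an effective-MI-style OF in the spirit of (\ref{Objective_Effective_MI}) under uniform, hence generally suboptimal, loadings $f_i^2=P_{\rm{D}}/N_{\rm{Data}}$ and $x_i^2=P_{\rm{T}}T_{\rm{T}}/N_{\rm{Data}}$. All numerical values are illustrative assumptions and do not correspond to the optimized allocations derived in the next subsection.
\begin{verbatim}
import numpy as np

# Illustrative system parameters (assumed values)
N_R, N_data, T = 4, 4, 64
sigma2_N, P_D, P_T = 1.0, 10.0, 20.0
E_total = 12.0 * T                        # overall energy budget (assumed)
psi = np.array([2.0, 1.3, 0.8, 0.4])      # eigenvalues of Psi (assumed, decreasing)
sigma2 = np.ones(N_data)                  # noise eigenvalues sigma_i^2 (assumed white)

def effective_snr(f2, x2):
    # Scalarised per-mode SNR appearing in P.18 and P.19
    return (N_R**2 * f2 * x2 * psi / sigma2) / (
        sigma2_N / psi + N_R * sigma2_N * x2 / sigma2 + P_D)

best = None
for T_T in range(1, T):                   # discrete one-dimensional search over T_T
    if (T - T_T) * P_D + P_T * T_T > E_total:
        continue                          # energy budget violated
    f2 = np.full(N_data, P_D / N_data)           # uniform data-power loading (baseline)
    x2 = np.full(N_data, P_T * T_T / N_data)     # uniform training-power loading (baseline)
    obj = (T - T_T) / T * np.sum(np.log2(1.0 + effective_snr(f2, x2)))
    if best is None or obj > best[1]:
        best = (T_T, obj)

print("baseline training length and effective MI:", best)
\end{verbatim}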
\subsection{Specific Examples for Statistical CSI} In this subsection, some specific joint optimization problems are investigated one-by-one to show how to jointly optimize the training sequence and the transceiver. \noindent \textbf{Effective MI Maximization:} Based on the results derived above, the joint optimization problem \textbf{P. 14} is finally simplified into the following form for the optimization problem of effective MI maximization \begin{align} \textbf{P. 20:} \ \max_{\{f_i^2\},\{x_i^2\},T_{\rm{T}}} \ \ & \frac{T-T_{\rm{T}}}{T}\sum_{i=1}^{N_{\rm{Data}}}\log\left(1+ \frac{N_{\rm{R}}^2f_i^2x_i^2 \psi_i/\sigma_i^2}{\sigma_{\rm{N}}^2\psi_i^{-1}+N_{\rm{R}}\sigma_{\rm{N}}^2x_i^2/\sigma_i^2+P_{\rm{D}}} \right) \nonumber \\ \ {\rm{s.t.}} \ & \sum_{i=1}^{N_{\rm{Data}}} f_i^2 \le P_{\rm{D}}, \ \sum_{i=1}^{N_{\rm{Data}}}x_i^2 \le P_{\rm{T}}T_{\rm{T}}, \ (T-T_{\rm{T}})P_{\rm{D}}+P_{\rm{T}}T_{\rm{T}}\le E_{\rm{total}}. \end{align} \textbf{P. 20} can be optimized in an alternating manner with respect to $\{f_i^2\}$, $\{x_i^2\}$, and $T_{\rm{T}}$. When $T_{\rm{T}}$ is given, for fixed $x_i^2$ values, the optimal solutions of $f_i^2$ are the following water-filling solutions \cite{Boyd04} \begin{align} f_i^2=\left( \frac{1}{\mu_f}-\frac{1}{h_i} \right)^{+}, \end{align} where $\mu_f$ is the Lagrange multiplier corresponding to the constraint $\sum_{i=1}^{N_{\rm{Data}}} f_i^2 \le P_{\rm{D}}$. The parameter $h_i$ is defined as \begin{align} h_i=\frac{N_{\rm{R}}^2x_i^2\psi_i/\sigma_i^2} {\sigma_{\rm{N}}^2\psi_i^{-1}+N_{\rm{R}}\sigma_{\rm{N}}^2x_i^2/ \sigma_i^2+P_{\rm{D}}}.\end{align} Then, for fixed $f_i^2$ values, the optimal solutions of $x_i^2$ are the following water-filling solutions: \begin{align} x_i^2=\left(\frac{-c_i(a_i+2b_i)+\sqrt{c_i^2(a_i+2b_i)^2-4(a_i+b_i)b_i(c_i^2-a_ic_i/\mu_x)}} {2(a_i+b_i)b_i}\right)^{+}, \end{align} where the parameters $a_i$, $b_i$ and $c_i$ are defined as follows \begin{align}\label{Definition_a_b_c} a_i=N_{\rm{R}}^2f_i^2\psi_i/\sigma_i^2, \ b_i=N_{\rm{R}}\sigma_{\rm{N}}^2/\sigma_i^2, \ c_i= \sigma_{\rm{N}}^2\psi_i^{-1}+P_{\rm{D}}. \end{align} Finally, the optimization of $T_{\rm{T}}$ can be carried out using a one-dimensional search \cite{Pastore2016,Soysal2010}, given its discrete nature. \noindent \textbf{Effective Weighted MSE Minimization:} According to the previous discussions, the joint optimization problem \textbf{P. 14} aims for maximizing a performance metric. For the effective weighted MSE minimization, the OF of the joint optimization problem \textbf{P. 14} is formulated as the reciprocal of the effective weighted MSE, i.e., \begin{align} \frac{T-T_{\rm{T}}}{T}f_{\rm{unified}}\left(\frac{{\bm{ F}}^{\rm{H}}\mathbb{E}\{\bm{\widehat H}^{\rm{H}}{\bm{\widehat H}} \}{\bm{F}}}{\sigma_{\rm{N}}^2+{\rm{Tr}}({\bm \Phi}{\bm{F}}{\bm{F}}^{\rm{H}})}\right)=\frac{T-T_{\rm{T}}}{T}\frac{1}{{\rm{Tr}}\left[\bm{W}\left(\bm{I}+\frac{{\bm{F}}^{\rm{H}}\mathbb{E}\{{\bm{\widehat H}}^{\rm{H}}{\bm{\widehat H}}\}{\bm{F}}}{\sigma_{\rm{N}}^2+{\rm{Tr}}({\bm \Phi}{\bm{F}}{\bm{F}}^{\rm{H}})}\right)^{-1}\right]}, \end{align} based on which the joint optimization problem \textbf{P. 14} with respect to the effective weighted MSE performance finally becomes equivalent to \begin{align} \textbf{P.
21:} \ \ \min_{\{f_i^2\},\{x_i^2\},T_{\rm{T}}} \ \ & \frac{T}{T-T_{\rm{T}}}\sum_{i=1}^{N_{\rm{Data}}}\frac{w_i}{1+ \frac{N_{\rm{R}}^2f_i^2x_i^2\psi_i/\sigma_i^2}{\sigma_{\rm{N}}^2\psi_i^{-1}+N_{\rm{R}}\sigma_{\rm{N}}^2x_i^2/\sigma_i^2+P_{\rm{D}}} } \nonumber \\ \ {\rm{s.t.}} \ \ & \sum_{i=1}^{N_{\rm{Data}}} f_i^2 \le P_{\rm{D}}, \ \sum_{i=1}^{N_{\rm{Data}}}x_i^2 \le P_{\rm{T}}T_{\rm{T}}, \ (T-T_{\rm{T}})P_{\rm{D}}+P_{\rm{T}}T_{\rm{T}}\le E_{\rm{total}}. \end{align} \textbf{P. 21} can also be optimized in an alternating manner with respect to $\{f_i^2\}$, $\{x_i^2\}$, and $T_{\rm{T}}$. When $T_{\rm{T}}$ is given, for fixed $x_i^2$ values, the optimal solutions of $f_i^2$ are the following water-filling solutions: \begin{align} f_i^2=\left( \sqrt{\frac{w_i}{\mu_f h_i} }- \frac{1}{h_i}\right)^{+}, \end{align} where $\mu_f$ is the Lagrange multiplier corresponding to the constraint $\sum_{i=1}^{N_{\rm{Data}}} f_i^2 \le P_{\rm{D}}$. On the other hand, for fixed $f_i^2$ values, the optimal solutions of $x_i^2$ are the following water-filling solutions: \begin{align} x_i^2=\left(\frac{1}{a_i+b_i}\sqrt{\frac{w_ia_ic_i}{\mu_x}}-\frac{c_i}{a_i+b_i}\right)^{+}, \end{align} where $a_i$, $b_i$ and $c_i$ are defined in (\ref{Definition_a_b_c}). Similar to the effective MI maximization, the optimization of $T_{\rm{T}}$ can also be performed using a one-dimensional search, given its discrete nature. \subsection{Joint Optimization using the Receiver's Estimated CSI at the Transmitter} \label{Section_Transmitter_Estimated_CSI} In this section, we investigate the case in which the transmitter and the receiver both have the same estimated CSI, assuming that a feedback channel is available between them. In this case, based on the above mathematical formulation along with our TPC optimization, the joint optimization problem can be further rewritten as follows: \begin{align} \textbf{P. 22:} \ \max_{{\bm{F}},{\bm{X}},T_{\rm{T}}} \ & \frac{T-T_{\rm{T}}}{T} f_{\rm{unified}}\left(\frac{{\bm{F}}^{\rm{H}}({\bm{\Psi}}-{\boldsymbol \Phi})^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H}}_{\rm{W}}({\bm{\Psi}}-{\boldsymbol \Phi})^{1/2}{\bm{F}}}{\sigma_{\rm{N}}^2+{\rm{Tr}}({\boldsymbol \Phi}{\bm{F}}{\bm{F}}^{\rm{H}})}\right) \nonumber \\ \ {\rm{s.t.}} \ & \bm{\Phi}=\left({\bm{\Psi}}^{-1}+N_{\rm{R}}{\bm{X}}{\bm{R}}_{\bm{N}}^{-1} {\bm{X}}^{\rm{H}}\right)^{-1}, {\rm{Tr}}(\bm{F}\bm{F}^{\rm{H}}) \le P_{\rm{D}}, {\rm{Tr}}(\bm{X}\bm{X}^{\rm{H}}) \le P_{\rm{T}}T_{\rm{T}}, \nonumber \\ & (T-T_{\rm{T}})P_{\rm{D}}+P_{\rm{T}}T_{\rm{T}}\le E_{\rm{total}}. \end{align} This is equivalent to the following by replacing the sum power constraint ${\rm{Tr}}(\bm{F}\bm{F}^{\rm{H}}) \le P_{\rm{D}}$ by (\ref{power_constraint_3}) \begin{align} \textbf{P. 23:} \ \max_{{\bm{F}},{\bm{X}},T_{\rm{T}}} \ \ & \frac{T-T_{\rm{T}}}{T} f_{\rm{unified}}\left(\frac{{\bm{F}}^{\rm{H}}({\bm{\Psi}}-{\boldsymbol \Phi})^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H}}_{\rm{W}}({\bm{\Psi}}-{\boldsymbol \Phi})^{1/2}{\bm{F}}}{\sigma_{\rm{N}}^2+{\rm{Tr}}({\boldsymbol \Phi}{\bm{F}}{\bm{F}}^{\rm{H}})}\right) \nonumber \\ \ {\rm{s.t.}} \ \ & \bm{\Phi}=\left({\bm{\Psi}}^{-1}+N_{\rm{R}}{\bm{X}}{\bm{R}}_{\bm{N}}^{-1} {\bm{X}}^{\rm{H}}\right)^{-1}, \frac{{\rm{Tr}}[(\sigma_{\rm{N}}^2{\bm{I}}+P_{\rm{D}}{\boldsymbol \Phi}){\bm{F}}{\bm{F}}^{\rm{H}}]}{\sigma_{\rm{N}}^2+{\rm{Tr}}({\boldsymbol \Phi}{\bm{F}}{\bm{F}}^{\rm{H}})} \le P_{\rm{D}}, \nonumber \\ & {\rm{Tr}}(\bm{X}\bm{X}^{\rm{H}}) \le P_{\rm{T}}T_{\rm{T}}, (T-T_{\rm{T}})P_{\rm{D}}+P_{\rm{T}}T_{\rm{T}}\le E_{\rm{total}}.
\end{align} Similar to \textbf{P. 17}, based on the definition of ${\bm{\widetilde F}}$ in (\ref{Definition_F}), \textbf{P. 23} can further be reformulated as \begin{align} \textbf{P. 24:} \ \max_{{\bm{\widetilde F}},{\bm{X}},T_{\rm{T}}} \ \ & \frac{T-T_{\rm{T}}}{T} f_{\rm{unified}}\big({\bm{\widetilde F}}^{\rm{H}}(\sigma_{\rm{N}}^2{\bf{I}}+P_{\rm{D}}{\boldsymbol \Phi})^{-1/2}({\bm{\Psi}}-{\boldsymbol \Phi})^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\nonumber \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times ({\bm{\Psi}}-{\boldsymbol \Phi})^{1/2}(\sigma_{\rm{N}}^2{\bf{I}}+P_{\rm{D}}{\boldsymbol \Phi})^{-1/2}{\bm{\widetilde F}}\big) \nonumber \\ \ {\rm{s.t.}} \ \ & \bm{\Phi}=\left({\bm{\Psi}}^{-1}+N_{\rm{R}}{\bm{X}}{\bm{R}}_{\bm{N}}^{-1} {\bm{X}}^{\rm{H}}\right)^{-1} \nonumber \\ & {\rm{Tr}}({\bm{\widetilde F}}{\bm{\widetilde F}}^{\rm{H}})\le P_{\rm{D}}, \ {\rm{Tr}}(\bm{X}\bm{X}^{\rm{H}}) \le P_{\rm{T}}T_{\rm{T}}, \ (T-T_{\rm{T}})P_{\rm{D}}+P_{\rm{T}}T_{\rm{T}}\le E_{\rm{total}}. \end{align} By exploiting that all elements of $\bm{H}_{\rm{W}}$ are i.i.d. random variables, \textbf{P. 24} is equivalent to \begin{align} \textbf{P. 25:} \ \max_{{\bm{\widetilde F}},{\bm{X}},T_{\rm{T}}} \ \ & \frac{T-T_{\rm{T}}}{T} f_{\rm{unified}}\big({\bm{\widetilde F}}^{\rm{H}}\bm{\Sigma}^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Sigma}^{1/2} {\bm{\widetilde F}}\big) \nonumber \\ \ {\rm{s.t.}} \ \ & \bm{\Sigma}\!=\!(\sigma_{\rm{N}}^2{\bf{I}}+P_{\rm{D}}{\boldsymbol \Phi})^{-1/2}({\bm{\Psi}}-{\boldsymbol \Phi})(\sigma_{\rm{N}}^2{\bf{I}}+P_{\rm{D}}{\boldsymbol \Phi})^{-1/2},\bm{\Phi}\!=\!\left({\bm{\Psi}}^{-1}+N_{\rm{R}}{\bm{X}}{\bm{R}}_{\bm{N}}^{-1} {\bm{X}}^{\rm{H}}\right)^{-1} \nonumber \\ & {\rm{Tr}}({\bm{\widetilde F}}{\bm{\widetilde F}}^{\rm{H}})\le P_{\rm{D}}, \ {\rm{Tr}}(\bm{X}\bm{X}^{\rm{H}}) \le P_{\rm{T}}T_{\rm{T}}, \ (T-T_{\rm{T}})P_{\rm{D}}+P_{\rm{T}}T_{\rm{T}}\le E_{\rm{total}}. \end{align} Based on our matrix-monotonic optimization framework, the OF of \textbf{P. 25} equals \begin{align}\label{objective_function_distribution} f_{\rm{unified}}\big( {\bm{\widetilde F}}^{\rm{H}} \bm{\Sigma}^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Sigma}^{1/2} {\bm{\widetilde F}} \big) =& f_{\rm{unified}}\left( \Big\{f_i^2\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Lambda}_{\bm{\Sigma}}^{1/2} )\Big\}_{i=1}^{N_{\rm{Data}}}\right), \end{align} where the elements of the diagonal matrix $\bm{\Lambda}_{\bm{\Sigma}}$ are defined as \begin{align} \big[\bm{\Lambda}_{\bm{\Sigma}}\big]_{i,i}=\frac{N_{\rm{R}}x_i^2\psi_i/\sigma_i^2} {\sigma_{\rm{N}}^2\psi_i^{-1}+N_{\rm{R}}\sigma_{\rm{N}}^2x_i^2/ \sigma_i^2+P_{\rm{D}}}. \end{align} The equality in (\ref{objective_function_distribution}) exploits the fact that for any unitary matrix $\bm{U}$, $\bm{H}_{\rm{W}}$ and $\bm{H}_{\rm{W}}\bm{U}$ have the same distribution. Therefore, based on (\ref{objective_function_distribution}) the joint optimization of our linear TPC and training sequence is formulated as \begin{align} \textbf{P. 
26:} \ \ \max_{\{f_i^2\},\{x_i^2\},T_{\rm{T}}} \ \ & \frac{T-T_{\rm{T}}}{T} f_{\rm{unified}}\left( \Big\{f_i^2\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Lambda}_{\bm{\Sigma}}^{1/2} )\Big\}_{i=1}^{N_{\rm{Data}}}\right) \nonumber \\ \ {\rm{s.t.}} \ \ & \big[\bm{\Lambda}_{\bm{\Sigma}}\big]_{i,i}=\frac{N_{\rm{R}}x_i^2\psi_i/\sigma_i^2} {\sigma_{\rm{N}}^2\psi_i^{-1}+N_{\rm{R}}\sigma_{\rm{N}}^2x_i^2/ \sigma_i^2+P_{\rm{D}}},\sum_{i=1}^{N_{\rm{Data}}} f_i^2 \le P_{\rm{D}}, \nonumber \\ & \sum_{i=1}^{N_{\rm{Data}}} x_i^2 \le P_{\rm{T}}T_{\rm{T}}, \ (T-T_{\rm{T}})P_{\rm{D}}+P_{\rm{T}}T_{\rm{T}}\le E_{\rm{total}}. \end{align} In the case when the transmitter has estimated CSI, for a given transmit power $P_{\rm{D}}$, the optimal solution of $f_i^2$ can be expressed as a function of $P_{\rm{D}}$ and $\left\{\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Lambda}_{\bm{\Sigma}}^{1/2})\right\}$ , i.e., \begin{align} f_i^2=p_i\left(P_{\rm{D}},\left\{\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Lambda}_{\bm{\Sigma}}^{1/2})\right\}_{i=1}^{N_{\rm{Data}}} \right). \end{align} As a result, the optimization problem \textbf{P. 26} may also be shown to be equivalent to \begin{align} \textbf{P. 27:} \ \ \max_{\{x_i^2\},T_{\rm{T}}} \ \ & \frac{T-T_{\rm{T}}}{T} f_{\rm{unified}}\left( \left\{f_i^2\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Lambda}_{\bm{\Sigma}}^{1/2} )\right\}_{i=1}^{N_{\rm{Data}}}\right) \nonumber \\ \ {\rm{s.t.}} \ \ & \big[\bm{\Lambda}_{\bm{\Sigma}}\big]_{i,i}=\frac{N_{\rm{R}}x_i^2\psi_i/\sigma_i^2} {\sigma_{\rm{N}}^2\psi_i^{-1}+N_{\rm{R}}\sigma_{\rm{N}}^2x_i^2/ \sigma_i^2+P_{\rm{D}}} \nonumber \\ &f_i^2=p_i\left(P_{\rm{D}},\left\{\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Lambda}_{\bm{\Sigma}}^{1/2})\right\}_{i=1}^{N_{\rm{Data}}} \right) \nonumber \\ & \sum_{i=1}^{N_{\rm{Data}}} x_i^2 \le P_{\rm{T}}T_{\rm{T}}, \ \ (T-T_{\rm{T}})P_{\rm{D}}+P_{\rm{T}}T_{\rm{T}}\le E_{\rm{total}}. \end{align} It is worth highlighting that the statistical expectation operation involved in \textbf{P. 27} makes the optimization problem difficult to solve. To overcome this difficulty, a pair of algorithms are proposed. For the first one, the power allocation in the training optimization adopts the suboptimal solutions given by \textbf{P. 1} or \textbf{P. 2}. As a result, in \textbf{P. 27} there is only a single real scalar optimization variable $T_{\rm{T}}$ that can be optimized by using a simple one-dimensional search. The expectation operation in \textbf{P. 27} can be carried out numerically. On the other hand, the second one is based on the approximations that replace the eigenvalues $\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Lambda}_{\bm{\Sigma}}^{1/2})$ in \textbf{P. 27} by $\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}\mathbb{E}\{{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\}\bm{\Lambda}_{\bm{\Sigma}}^{1/2})$. \subsection{Specific Examples for Estimated CSI} \noindent \textbf{Effective MI Maximization:} In this subsection, we investigate the effective MI maximization in detail, where the MI term in the OF of \textbf{P. 27} equals \begin{align} f_{\rm{unified}}\!\left(\!\! 
\left\{f_i^2\lambda_i\!\left(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Lambda}_{\bm{\Sigma}}^{1/2}\right)\!\right\}_{i=1}^{N_{\rm{Data}}}\!\right) \! \!= \! \! \mathbb{E}\!\left\{\!\sum_{i=1}^{N_{\rm{Data}}} \log\!\!\left(\!1\!+\!f_i^2\lambda_i\!\left(\!\bm{\Lambda}_{\bm{\Sigma}}^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Lambda}_{\bm{\Sigma}}^{1/2}\!\right)\right)\! \right\}\!. \end{align}It is widely exploited that for the MI maximization the optimal $f_i^2$ obeys \cite{Palomar03,XingTSP201501} \begin{align} f_i^2=\left( \frac{1}{\mu_f}-\frac{1}{\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Lambda}_{\bm{\Sigma}}^{1/2} )} \right)^{+}. \end{align} The Lagrange multiplier $\mu_f$ is equivalent to \begin{align} \frac{1}{\mu_f}=\frac{P_{\rm{D}}\!+\!\sum_{i=1}^{{\widetilde N}_{\rm{Data}}}\!\frac{1}{\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2} {\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Lambda}_{\bm{\Sigma}}^{1/2} )}}{{\widetilde N}_{\rm{Data}}}, \end{align}where ${\widetilde N}_{\rm{Data}}$ is the number of eigen-channels allocated non-zero powers during the data transmission phase. Therefore, substituting the value of $\mu_f$ into the OF, the following result holds \begin{align}\label{Objective_MI_b} & f_{\rm{unified}}\left( \left\{f_i^2\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Lambda}_{\bm{\Sigma}}^{1/2} )\right\}_{i=1}^{N_{\rm{Data}}}\right) \nonumber \\ =& \mathbb{E}\!\!\left\{\!\sum_{i=1}^{{\widetilde N}_{\rm{Data}}} \log\left(\frac{P_{\rm{D}}\!+\!\sum_{i=1}^{{\widetilde N}_{\rm{Data}}}\!\frac{1}{\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2} {\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Lambda}_{\bm{\Sigma}}^{1/2} )}}{{\widetilde N}_{\rm{Data}}}\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Lambda}_{\bm{\Sigma}}^{1/2} )\right) \! \right\}. \end{align} Based on (\ref{Objective_MI_b}), the effective MI maximization problem is written in the following form \begin{align} \textbf{P. 28:} \ \max_{\{x_i^2\},T_{\rm{T}}} \ & \frac{T-T_{\rm{T}}}{T}\mathbb{E}\!\!\left\{\!\sum_{i=1}^{{\widetilde N}_{\rm{Data}}} \log\left(\frac{P_{\rm{D}}\!+\!\sum_{i=1}^{{\widetilde N}_{\rm{Data}}}\!\frac{1}{\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2} {\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Lambda}_{\bm{\Sigma}}^{1/2} )}}{{\widetilde N}_{\rm{Data}}}\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Lambda}_{\bm{\Sigma}}^{1/2} )\right) \! \right\}\nonumber \\ {\rm{s.t.}} \ \ & \big[\bm{\Lambda}_{\bm{\Sigma}}\big]_{i,i}=\frac{N_{\rm{R}}x_i^2\psi_i/\sigma_i^2} {\sigma_{\rm{N}}^2\psi_i^{-1}+N_{\rm{R}}\sigma_{\rm{N}}^2x_i^2/ \sigma_i^2+P_{\rm{D}}} \nonumber \\ & \sum_{i=1}^{{N}_{\rm{Data}}} x_i^2 \le P_{\rm{T}}T_{\rm{T}}, \ \ (T-T_{\rm{T}})P_{\rm{D}}+P_{\rm{T}}T_{\rm{T}}\le E_{\rm{total}}. \end{align} Because of the statistical expectation operation in \textbf{P. 28}, it is challenging to derive the optimal solutions of \textbf{P. 28} in closed forms. In the following two kinds of algorithms are proposed. In order to avoid complex optimizations involving statistical expectations, some suboptimal solutions are used for $\{x_i^2\}$. Specifically, since channel estimation is usually performed at high SNRs, for MI maximization, the resources are assumed to be allocated uniformly among $\{x_i^2\}$, i.e., $\{x_i^2=P_{\rm{T}}T_{\rm{T}}/{N_{\rm{Data}}}\}$. Therefore, \textbf{P. 
28} is simplified to the following problem: \begin{align} \textbf{P. 29:} \ \max_{T_{\rm{T}}} \ & \frac{T-T_{\rm{T}}}{T}\mathbb{E}\!\!\left\{\!\sum_{i=1}^{{\widetilde N}_{\rm{Data}}} \log\left(\frac{P_{\rm{D}}\!+\!\sum_{i=1}^{{\widetilde N}_{\rm{Data}}}\!\frac{1}{\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2} {\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Lambda}_{\bm{\Sigma}}^{1/2} )}}{{\widetilde N}_{\rm{Data}}}\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Lambda}_{\bm{\Sigma}}^{1/2} )\right) \! \right\}\nonumber \\ {\rm{s.t.}} \ \ & \big[\bm{\Lambda}_{\bm{\Sigma}}\big]_{i,i}=\frac{N_{\rm{R}}x_i^2\psi_i/\sigma_i^2} {\sigma_{\rm{N}}^2\psi_i^{-1}+N_{\rm{R}}\sigma_{\rm{N}}^2x_i^2/ \sigma_i^2+P_{\rm{D}}} \nonumber \\ & x_i^2 = P_{\rm{T}}T_{\rm{T}}/{N_{\rm{Data}}}, (T-T_{\rm{T}})P_{\rm{D}}+P_{\rm{T}}T_{\rm{T}}\le E_{\rm{total}}. \end{align} The second rationale is to replace the eigenvalues $\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Lambda}_{\bm{\Sigma}}^{1/2} )$ by their expectations, i.e., by $\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}{\mathbb{E}}\{{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\}\bm{\Lambda}_{\bm{\Sigma}}^{1/2} )$. As a result, \textbf{P. 28} can be approximated as follows: \begin{align} \textbf{P. 30:} \ \max_{\{x_i^2\},T_{\rm{T}}} \ & \frac{T-T_{\rm{T}}}{T}\sum_{i=1}^{{\widetilde N}_{\rm{Data}}} \log\left(\frac{P_{\rm{D}}\!+\!\sum_{i=1}^{{\widetilde N}_{\rm{Data}}}\!\frac{1}{\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}\mathbb{E}\{ {\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\}\bm{\Lambda}_{\bm{\Sigma}}^{1/2} )}}{{\widetilde N}_{\rm{Data}}}\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}\mathbb{E}\{{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\}\bm{\Lambda}_{\bm{\Sigma}}^{1/2} )\right) \nonumber \\ {\rm{s.t.}} \ \ & \big[\bm{\Lambda}_{\bm{\Sigma}}\big]_{i,i}=\frac{N_{\rm{R}}x_i^2\psi_i/\sigma_i^2} {\sigma_{\rm{N}}^2\psi_i^{-1}+N_{\rm{R}}\sigma_{\rm{N}}^2x_i^2/ \sigma_i^2+P_{\rm{D}}} \nonumber \\ & \sum_{i=1}^{{ N}_{\rm{Data}}} x_i^2 \le P_{\rm{T}}T_{\rm{T}}, \ \ (T-T_{\rm{T}})P_{\rm{D}}+P_{\rm{T}}T_{\rm{T}}\le E_{\rm{total}}. \end{align} It is worth noting that this approximation is accurate at high SNRs as shown in Section~\ref{Section_Simulation}, but the proof is omitted due to space limitation. The optimization problem \textbf{P. 30} can be efficiently solved via alternating optimization, where the solution of $\{x_i^2\}$ may be found via the MATLAB function ``\emph{fmincon}''. \noindent \textbf{Effective Weighted MSE Minimization:} In this subsection, the effective weighted MSE minimization is investigated. For the effective weighted MSE minimization, the MSE term of the OF of \textbf{P. 27} may be expressed as \begin{align} f_{\rm{unified}}\!\left( \!\! \left\{f_i^2\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Lambda}_{\bm{\Sigma}}^{1/2} ) \!\!\right\}_{i=1}^{{ N}_{\rm{Data}}}\right) \!\!=\! \! \left[\mathbb{E}\left\{\frac{\left(\sum_{i=1}^{{\widetilde N}_{\rm{Data}}}\frac{\sqrt{w_i}}{\sqrt{\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Lambda}_{\bm{\Sigma}}^{1/2} )}}\right)^2 }{P_{\rm{D}}+\sum_{i=1}^{{\widetilde N}_{\rm{Data}}}\frac{1}{\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Lambda}_{\bm{\Sigma}}^{1/2} )}}\right\}\!+\!\!\!\!\!\!\sum_{i>{\widetilde N}_{\rm{Data}}}\!\!\!\!\!\omega_i\!\right]^{-1}\!\!\!, \end{align} based on which \textbf{P. 
27} is further reformulated as follows: \begin{align} \textbf{P. 31:} \ \max_{\{x_i^2\},T_{\rm{T}}} \ & \frac{T-T_{\rm{T}}}{T}\left[\mathbb{E}\left\{\frac{\left(\sum_{i=1}^{{\widetilde N}_{\rm{Data}}}\frac{\sqrt{w_i}}{\sqrt{\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Lambda}_{\bm{\Sigma}}^{1/2} )}}\right)^2 }{P_{\rm{D}}+\sum_{i=1}^{{\widetilde N}_{\rm{Data}}}\frac{1}{\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Lambda}_{\bm{\Sigma}}^{1/2} )}}\right\}\!+\!\!\!\!\!\!\sum_{i>{\widetilde N}_{\rm{Data}}}\!\!\!\omega_i\right]^{-1} \nonumber \\ {\rm{s.t.}} \ \ & \big[\bm{\Lambda}_{\bm{\Sigma}}\big]_{i,i}=\frac{N_{\rm{R}}x_i^2\psi_i/\sigma_i^2} {\sigma_{\rm{N}}^2\psi_i^{-1}+N_{\rm{R}}\sigma_{\rm{N}}^2x_i^2/ \sigma_i^2+P_{\rm{D}}} \nonumber \\ & \sum_{i=1}^{{ N}_{\rm{Data}}} x_i^2 \le P_{\rm{T}}T_{\rm{T}}, \ \ (T-T_{\rm{T}})P_{\rm{D}}+P_{\rm{T}}T_{\rm{T}}\le E_{\rm{total}}. \end{align} It is worth noting that solving the optimization problem \textbf{P. 31} is still very challenging due to the statistical expectation operation. In the following, a pair of approximations are used. Firstly, similar to \textbf{P. 29}, in order to avoid complex mathematical operations, suboptimal power allocation schemes may be applied to $\{x_i^2\}$. At high SNR, the resources are supposed to be allocated proportionally among $\{x_i^2\}$ for sum MSE minimization. Since we are considering the average performance, the suboptimal uniform power allocation scheme is adopted for $\{x_i^2\}$ in the following for simplicity, i.e., $\{x_i^2=P_{\rm{T}}T_{\rm{T}}/{N_{\rm{Data}}}\}$. Thus, \textbf{P. 31} is simplified to \begin{align} \textbf{P. 32:} \ \min_{T_{\rm{T}}} \ & \frac{T}{T-T_{\rm{T}}}\mathbb{E}\left\{\frac{\left(\sum_{i=1}^{\widetilde N_{\rm{Data}}}\frac{\sqrt{w_i}}{\sqrt{\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Lambda}_{\bm{\Sigma}}^{1/2} )}}\right)^2 }{P_{\rm{D}}+\sum_{i=1}^{\widetilde N_{\rm{Data}}}\frac{1}{\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Lambda}_{\bm{\Sigma}}^{1/2} )}} \right\}\!+\!\frac{T}{T-T_{\rm{T}}}\!\!\!\!\sum_{i>{\widetilde N}_{\rm{Data}}}\!\!\!{\omega_i} \nonumber \\ {\rm{s.t.}} \ & \big[\bm{\Lambda}_{\bm{\Sigma}}\big]_{i,i}=\frac{N_{\rm{R}}x_i^2\psi_i/\sigma_i^2} {\sigma_{\rm{N}}^2\psi_i^{-1}+N_{\rm{R}}\sigma_{\rm{N}}^2x_i^2/ \sigma_i^2+P_{\rm{D}}}, \nonumber \\ &x_i^2 = P_{\rm{T}}T_{\rm{T}}/{N_{\rm{Data}}}, \ (T-T_{\rm{T}})P_{\rm{D}}+P_{\rm{T}}T_{\rm{T}}\le E_{\rm{total}}. \end{align} In \textbf{P. 32}, the single optimization variable $T_{\rm{T}}$ can be found by a one-dimensional search. In the second solution, the eigenvalues $\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\bm{\Lambda}_{\bm{\Sigma}}^{1/2} )$ are replaced by their expectations, i.e., by $\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}{\mathbb{E}}\{{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\}\bm{\Lambda}_{\bm{\Sigma}}^{1/2} )$. Thus, \textbf{P. 31} is relaxed to \begin{align} \textbf{P. 
33:} \ \min_{\{x_i^2\},T_{\rm{T}}} \ \ & \frac{T}{T-T_{\rm{T}}}\frac{\left(\sum_{i=1}^{{\widetilde N}_{\rm{Data}}}\frac{\sqrt{w_i}}{\sqrt{\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2} \mathbb{E}\{{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\}\bm{\Lambda}_{\bm{\Sigma}}^{1/2} )}}\right)^2 }{P_{\rm{D}}+\sum_{i=1}^{{\widetilde N}_{\rm{Data}}}\frac{1}{\lambda_i(\bm{\Lambda}_{\bm{\Sigma}}^{1/2}\mathbb{E}\{{\bm{H}}_{\rm{W}}^{\rm{H}}{\bm{H }}_{\rm{W}}\}\bm{\Lambda}_{\bm{\Sigma}}^{1/2} )}}\!+\!\frac{T}{T-T_{\rm{T}}}\!\!\!\!\sum_{i>{\widetilde N}_{\rm{Data}}}\!\!\!{\omega_i} \nonumber \\ {\rm{s.t.}} \ & \big[\bm{\Lambda}_{\bm{\Sigma}}\big]_{i,i}=\frac{N_{\rm{R}}x_i^2\psi_i/\sigma_i^2} {\sigma_{\rm{N}}^2\psi_i^{-1}+N_{\rm{R}}\sigma_{\rm{N}}^2x_i^2/ \sigma_i^2+P_{\rm{D}}} \nonumber \\ & \sum_{i=1}^{N_{\rm{Data}}} x_i^2 \le P_{\rm{T}}T_{\rm{T}}, \ (T-T_{\rm{T}})P_{\rm{D}}+P_{\rm{T}}T_{\rm{T}}\le E_{\rm{total}}. \end{align} The optimization problem \textbf{P. 33} can be solved efficiently by alternating between $\{x_i^2\}$ and $T_{\rm{T}}$, where the solution of $\{x_i^2\}$ may be found by the MATLAB function ``\emph{fmincon}''. \subsection{Complexity Analysis} For the proposed algorithms, the computational complexity mainly arises from the matrix decompositions, water-filling algorithms, numbers of iterations, one-dimensional search over the training interval $T_{\rm{T}}$ and the numerical computations of the statistical expectation. For an $N\times N$ matrix, the complexity of its matrix decomposition is on the order of $\mathcal{O}(N^3)$. Moreover, for an $N$-dimensional separate optimization problem, the complexity of the water-filling algorithm is as low as $\mathcal{O}(N)$. We will demonstrate in Section~\ref{Section_Simulation} that typically only one or two iterations may be required in Fig.~\ref{Fig_Convergence_MI_Maximization} and Fig.~\ref{Fig_Convergence_MSE_Minimization}. The one-dimensional search will increase the computational complexity at an order of $\mathcal{O}(T_{\rm{T}})$. For the following complexity analysis, $L_{\rm{S}}$ denotes the number of independent trials used for the numerical computations of the statistical expectation. Upon using statistical CSI at the transmitter, the total computational complexities of the proposed algorithms are $\mathcal{O}(N^3)+\mathcal{O}(NT_{\rm{T}}+L_{\rm{S}}T_{\rm{T}})$. On the other hand, in the case of estimated CSI at transmitter, the total computational complexities of the proposed algorithms using the eigen-value approximations are also $\mathcal{O}(N^3)+\mathcal{O}(NT_{\rm{T}}+L_{\rm{S}}T_{\rm{T}})$. However, in this case the total computational complexities of the proposed algorithms using uniform power allocation for training are a little bit different and the total complexity is $\mathcal{O}(N^3)+\mathcal{O}(L_{\rm{S}}T_{\rm{T}})$. \section{Simulation Results and Discussions} \label{Section_Simulation} \begin{figure}[t] \centering \vspace{-8mm} \begin{minipage}{0.49\textwidth} \centering \includegraphics[width=8cm]{MI_statistical_snr.eps} \vspace{-15mm} \caption{Joint optimization for effective MI maximization using statistical CSI at the transmitter. } \label{Fig_MI_s_Maximization} \end{minipage} \begin{minipage}{0.49\textwidth} \includegraphics[width=8cm]{MSE_statistical_snr.eps} \vspace{-15mm} \caption{Joint optimization for effective MSE Minimization using Statistical CSI at the transmitter. 
} \label{Fig_MSE_s_Minimization} \end{minipage} \vspace{-10mm} \end{figure} In this section, several numerical results are provided for the proposed joint optimization of our linear TPC and training sequence. First, the widely used exponential correlation model is adopted for $\bm{\Psi}$, i.e., $[\bm{\Psi}]_{i,j}=\theta^{|i-j|}$ \cite{XingTSP201501,Ding09}. In the following simulations, $\theta=0.9$ is chosen. For the case in which the receiver relies on the estimated CSI and the transmitter has only statistical CSI, the joint optimization used for effective MI maximization is investigated first. Fig.~\ref{Fig_MI_s_Maximization} plots the effective MI, i.e., the value of the OF of \textbf{P. 20}, versus the SNR for the antenna setting $N_{\rm{T}}=N_{\rm{R}}=N=8$ and $T=256$. In order to characterize the performance of the proposed algorithm, the direction search algorithm of \cite{Pastore2016} and the fixed point algorithm of \cite{Soysal2010} are also included in Fig.~\ref{Fig_MI_s_Maximization}. At low SNRs, the effective MI of the proposed algorithm only has modest advantages over the direction search algorithm of \cite{Pastore2016} and the fixed point algorithm of \cite{Soysal2010}. However, there is a significant performance gap between the proposed algorithm and the algorithm using uniform power allocation, which shows the superiority of the proposed optimization algorithm. As the SNR increases, the performance gap between the proposed algorithm and the algorithm employing uniform power allocation narrows, because the optimal power allocation for effective MI maximization tends to become uniform at high SNRs. Furthermore, the effective MI of the proposed algorithm is much higher than that of the direction search algorithm of \cite{Pastore2016} and that of the fixed point algorithm of \cite{Soysal2010} at high SNRs. In contrast to the MI, the MSE directly reflects the signal recovery accuracy at the receiver. If we directly used the traditional MSE as our performance metric, the trivial conclusion would emerge that all the resources should be allocated to channel estimation. To overcome this deficiency, the OF of the optimization problem \textbf{P. 21} is the effective MSE instead of the traditional MSE used for transceiver optimization. The effective MSE minimization aims at simultaneously minimizing the data estimation MSE and maximizing the data transmission time interval. In Fig.~\ref{Fig_MSE_s_Minimization}, we compare the effective MSEs of the direction search algorithm \cite{Pastore2016}, of the proposed algorithm, and of the proposed algorithm using uniform power allocation, all relying on only statistical CSI at the transmitter. At high SNRs, the effective MSE of the proposed algorithm is better than that of the direction search algorithm of \cite{Pastore2016}. Additionally, the effective MSE of the proposed algorithm is always lower than that of the proposed algorithm employing uniform power allocation. \begin{figure}[t] \centering \vspace{-8mm} \begin{minipage}{0.49\textwidth} \centering \includegraphics[width=8cm]{MI_estimated.eps} \vspace{-15mm} \caption{Joint optimization for effective MI maximization using estimated CSI at the transmitter. } \label{Fig_MI_e_Maximization} \end{minipage} \begin{minipage}{0.49\textwidth} \includegraphics[width=8cm]{MSE_estimated.eps} \vspace{-15mm} \caption{Joint optimization for effective MSE minimization using estimated CSI at the transmitter.
} \label{Fig_MSE_e_Minimization} \end{minipage} \vspace{-10mm} \end{figure} The other case, in which the transmitter and the receiver share the same estimated CSI, is also considered in the simulations. In this case, the optimization problems \textbf{P. 29} and \textbf{P. 30} are used for the effective MI maximization. In Fig.~\ref{Fig_MI_e_Maximization}, the effective MIs are shown for $T=256$ at SNR$=30$dB and for different antenna settings of $N=4,8,16$. Each point of the curves in Fig.~\ref{Fig_MI_e_Maximization} is an average over $10^4$ independent realizations used for calculating the statistical expectations. It can be concluded from the numerical results that there always exists an optimal operating point for the resource allocation between channel estimation and data transmission. Moreover, it can be seen that at the optimal operating point, the resources allocated to channel estimation are much lower than those allocated to data transmission. Furthermore, Fig.~\ref{Fig_MI_e_Maximization} shows that the suboptimal solution in \textbf{P. 29} using uniform power allocation for training has almost the same performance as the optimized power allocation employed for training in \textbf{P. 30}. Similar conclusions can be drawn for the effective MSE minimization characterized in Fig.~\ref{Fig_MSE_e_Minimization}. For the effective MSE minimization, the optimization problems \textbf{P. 32} and \textbf{P. 33} are considered. It can also be seen that the suboptimal solution in \textbf{P. 32} associated with uniform power allocation for training has almost the same performance as the optimized power allocation derived for training in \textbf{P. 33}. Last but not least, the convergence behaviors of the proposed algorithms are investigated. It is worth highlighting that for the joint optimization using estimated CSI at the transmitter, the TPC is derived as a function of the training sequence, which is then substituted into the original OF. As a result, there is no convergence issue to investigate. For the case of statistical CSI at the transmitter, the convergence behaviors of the proposed algorithms are shown in Fig.~\ref{Fig_Convergence_MI_Maximization} and Fig.~\ref{Fig_Convergence_MSE_Minimization}. It is observed that for both effective MI maximization and effective MSE minimization, the proposed algorithms exhibit good convergence properties for all simulation settings. \begin{figure}[t] \centering \vspace{-8mm} \begin{minipage}{0.49\textwidth} \centering \includegraphics[width=8cm]{MI_convergence.eps} \vspace{-15mm} \caption{The convergence behavior of the effective MI maximization using statistical CSI at the transmitter. } \label{Fig_Convergence_MI_Maximization} \end{minipage} \begin{minipage}{0.49\textwidth} \includegraphics[width=8cm]{MSE_convergence.eps} \vspace{-15mm} \caption{The convergence behavior of the effective MSE minimization using statistical CSI at the transmitter. } \label{Fig_Convergence_MSE_Minimization} \end{minipage} \vspace{-10mm} \end{figure} \section{Conclusions} \label{Section_Conclusions} The joint optimization of linear transceivers and training sequences was investigated for various performance metrics, including the effective MI, effective MSE, effective weighted MI, effective weighted MSE, effective additively Schur-convex and effective additively Schur-concave functions.
A new joint matrix-monotonic optimization framework having two matrix variables, namely the linear TPC matrix and the training matrix, was proposed. Based on this framework, the optimal structures of the linear transceivers and training sequences were derived for the cases in which the transmitter relies either on statistical CSI or on estimated CSI in the face of transmit-side spatial correlation. Exploiting these optimal structures, explicit optimization algorithms were proposed for the joint optimization considered. Finally, several numerical simulations were presented to corroborate our theoretical results.
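As a complementary illustration, a minimal Python sketch of the classical water-filling step that underlies the MI-oriented power allocations above, i.e., $f_i^2=\big(1/\mu_f-1/\lambda_i\big)^{+}$, is given below. It assumes that the effective eigenvalues $\lambda_i$ and the data power budget $P_{\rm{D}}$ are already available, and it is only an illustrative sketch rather than the exact implementation used in our experiments.
\begin{verbatim}
import numpy as np

def water_filling(lambdas, p_total):
    # f_i^2 = (1/mu - 1/lambda_i)^+ with sum_i f_i^2 = p_total
    lam = np.sort(np.asarray(lambdas, dtype=float))[::-1]
    for n_active in range(len(lam), 0, -1):
        inv_mu = (p_total + np.sum(1.0 / lam[:n_active])) / n_active
        f2 = inv_mu - 1.0 / lam[:n_active]
        if f2[-1] >= 0.0:  # weakest active eigen-channel keeps non-negative power
            powers = np.zeros(len(lam))
            powers[:n_active] = f2
            return powers
    return np.zeros(len(lam))

# illustrative example: three effective eigen-channels, data power budget P_D = 10
print(water_filling([2.0, 1.0, 0.05], 10.0))
\end{verbatim}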
\section{Introduction} Text-to-speech (TTS) aims to synthesize high-quality speech for any given text \cite{taylor2009text}. TTS is an important research direction in artificial intelligence (AI) and has received wide attention from academia and industry \cite{tan2021survey}. It has a wide range of applications, such as navigation, announcements, smart assistants, and other speech-enabled devices. With the development of deep learning technology, high-quality training data has become a necessary condition for training a reliable neural network model. Therefore, to build a robust TTS system, a high-quality speech dataset is required. For mainstream languages such as Chinese and English, there are many large-scale high-quality speech datasets, such as LJSpeech \cite{ito2017lj}, LibriTTS \cite{zen2019libritts}, AiShell \cite{shi2020aishell}, etc. However, for some low-resource languages such as Mongolian, such data is scarce. In order to address this, we developed an open-source speech dataset for the Mongolian language. We named our dataset MnTTS, and it is primarily geared toward building high-quality Mongolian TTS systems. The Mongolian language belongs to the Mongolic branch of the Altaic language family and is the most widely spoken of the Mongolic languages. Mongolian is mainly used in Mongolian-inhabited areas of China, Mongolia and the Siberian Federal District of the Russian Federation. At the same time, Mongolian is also the main national language in the Inner Mongolia Autonomous Region of China. Worldwide, it has about 6 million speakers~\cite{puthuval2017language}. Furthermore, there is a growing awareness of the importance of increasing the number of Mongolian speakers, reflected in many language rescue initiatives\footnote{\url{https://news.mn/en/791396/}} launched by the government. Therefore, the study of speech synthesis technology for Mongolian is of great significance for education, transportation, communication and other fields in ethnic minority areas. Note that the Mongolian used in Mongolia is mainly written in the Cyrillic script~\cite{cohen2005english}, owing to the influence of the former Soviet Union in the 1950s and 1960s, whereas the Mongolian used in China is mainly written in the traditional script~\cite{rohsenow2004fifty}. This paper focuses on the traditional script. Currently, there is no Mongolian speech dataset of sufficient quality for building TTS systems, especially for the recently proposed end-to-end (E2E) neural architectures \cite{wang2017tacotron,sotelo2017char2wav,arik2017deep,skerry2018towards,shen2018natural,ren2019fastspeech,ren2020fastspeech,2018Neural}, such as the Tacotron \cite{wang2017tacotron} and Tacotron2 \cite{shen2018natural} models. Armed with WaveNet-like vocoders, the quality of synthetic speech has approached that of human speech. To further speed up the inference process, non-autoregressive TTS models and vocoders, like FastSpeech \cite{ren2019fastspeech}, FastSpeech2 \cite{ren2020fastspeech}, MelGAN \cite{kumar2019melgan}, VocGAN \cite{yang2020vocgan}, HiFi-GAN \cite{kong2020hifi} etc., have been proposed and achieve excellent performance. This work aims to fill the gap for Mongolian by introducing the MnTTS dataset. To the best of our knowledge, it is the first open-source dataset developed for building Mongolian TTS systems. Our dataset contains around 8 hours of high-quality speech data read by a 22-year-old professional female Mongolian announcer.
The dataset was carefully annotated by native transcribers and covers politics, business, sports, entertainment, and other domains. The transcripts cover the full Mongolian alphabet and a rich set of word combinations. MnTTS is freely available for both academic and commercial use under the Creative Commons Attribution 4.0 International License\footnote{\label{licenses}\url{https://creativecommons.org/licenses/by/4.0/}}. By introducing the MnTTS database, we aim to promote the development of Mongolian intelligent information processing technology, which will play an important role in advancing artificial intelligence technology for ethnic minorities in China. We believe that the MnTTS database will be a valuable resource for the TTS research community, and that our experience will benefit other researchers planning to develop speech datasets for low-resource languages. We note that the primary application domain of MnTTS is speech synthesis. However, we believe that our data will also be useful for speech recognition, speech enhancement and other related fields. To demonstrate the reliability of MnTTS, we combine the FastSpeech2 \cite{ren2020fastspeech} model and the HiFi-GAN \cite{kong2020hifi} vocoder to build our baseline system, since FastSpeech2 is a state-of-the-art non-autoregressive acoustic model and HiFi-GAN is a state-of-the-art non-autoregressive vocoder. We evaluated the system using the subjective mean opinion score (MOS) and real time factor (RTF) measures. The experimental results showed that the TTS model built using our dataset achieves a MOS of 4.46 and an RTF of $3.30\times10^{-1}$ for the female speaker, which assures its usability for practical applications. In addition, we performed an analysis of the robustness issue: we found some unstable phenomena, such as word skipping and word missing, in the synthesized examples and conducted an in-depth analysis of the causes. Finally, the MnTTS dataset, training recipe, and pretrained models are publicly available~\ref{github}. The rest of the paper is organized as follows. Section II briefly reviews related works. Section III describes the MnTTS construction procedures and reports the dataset statistics. Section IV explains the TTS experimental setup and discusses the obtained results. Section V concludes the paper and highlights future research directions. \section{Related works} \subsection{TTS dataset construction} The recent proliferation of human-machine interaction applications, such as smart speakers, smart homes and smart car assistants, has attracted a great deal of attention to TTS research from both academia and industry~\cite{wang2017tacotron,sotelo2017char2wav,arik2017deep}. Consequently, many large-scale datasets, such as LJSpeech \cite{ito2017lj}, LibriTTS \cite{zen2019libritts}, AiShell \cite{shi2020aishell}, etc., have been collected and released freely \cite{ito2017lj,zen2019libritts,shi2020aishell,veaux2016superseded}. However, these datasets mostly focus on resource-rich languages, such as English and Mandarin. To build a TTS dataset for a low-resource language, some researchers proposed unsupervised \cite{ren2019almost}, semi-supervised \cite{chung2019semi}, and cross-lingual transfer learning \cite{tu2019end} based methods. Specifically, these methods first crawl some voice files automatically from the Internet, and then adopt speech recognition models, or extract some language knowledge from another similar language, to label the raw audio files.
Although the above methods provide some speech corpora to support TTS model training and achieve acceptable speech synthesis performance, the overall quality of the synthesized speech is usually insufficient for practical applications due to environmental and background noise. We note that the simplest and most straightforward alternative is to record and manually annotate the audio. Although such a process is notoriously laborious and time-consuming, it is a reliable way to obtain high-quality data and thus high-quality speech synthesis results, which is essential for promoting the development of TTS for low-resource languages. In this paper, we adopt the latter approach to prepare our MnTTS dataset, and we believe that it is worthwhile to open-source such a high-quality Mongolian speech dataset. \subsection{Mongolian TTS} The study of Mongolian speech synthesis has a long history. In recent years, with the development of deep learning technology, research on Mongolian speech synthesis has gained new momentum. In \cite{liu2020exploiting}, deep learning techniques were first introduced to Mongolian speech synthesis, and DNN-based acoustic models trained on 5 hours of training data were used instead of HMM acoustic models to improve the overall performance of Mongolian TTS. In \cite{8706263}, a Tacotron-based Mongolian speech synthesis system trained on about 17 hours of training data was implemented, and its overall performance achieved a significant improvement compared with the traditional methods. Similarly, Huang et al.~\cite{9675192} realized Mongolian emotional speech synthesis based on transfer learning and emotional embedding, but their dataset was not released either. The above-mentioned works provide a solid foundation for research on Mongolian TTS technology. However, the datasets involved are not publicly available. Therefore, it is necessary to build a high-quality open-source Mongolian TTS dataset, which is the focus of this work. \section{MnTTS dataset} The MnTTS project was conducted with the approval of the xx lab. The female speaker participated voluntarily and was informed of the data collection and use protocols through a consent form. \subsection{Text collection and narration} The first step in building this dataset was text collection. For the scope of text collection, we initially selected mainstream platforms such as news websites, social media platforms, and books. In terms of topics, we tried to ensure that the selected texts have wide coverage (e.g., politics, business, sports, entertainment, culture, etc.). We also used manual filtering to exclude inappropriate content, such as sensitive political issues or content related to privacy, violence and pornography.
Under this rule, we collected a total of 7,000 utterances as our final text scripts. \begin{table}[] \centering \caption{\label{tab:statics}The MnTTS dataset specifications} \begin{tabular}{p{2cm}<{\centering}p{2cm}<{\centering}p{2cm}<{\centering}} \toprule[1pt] Category & \multicolumn{2}{c}{Statistics} \\ \hline \multirow{4}{*}{Character} & Total & 410044 \\ & Mean & 66 \\ & Min & 2 \\ & Max & 189 \\ \hline \multirow{4}{*}{Phoneme} & Total & 310565 \\ & Mean & 50 \\ & Min & 1 \\ & Max & 146 \\ \hline \multirow{4}{*}{Word} & Total & 63866 \\ & Mean & 10 \\ & Min & 1 \\ & Max & 28 \\ \bottomrule[1pt] \end{tabular} \end{table} \subsection{Text preprocessing} Traditional Mongolian is an agglutinative language, and the written form of a Mongolian letter within a word is variable and may differ across contexts. As a result, there is a serious homophony phenomenon, which leads to many incorrectly encoded letters in text scripts. To avoid this problem as much as possible, we convert the Mongolian text to a Latin sequence representation. The text is processed in three steps: encoding correction, Latin conversion, and text regularization. \begin{itemize} \item \textbf{Encoding correction}: First, we manually convert incorrectly encoded characters to the correct form \cite{liu2018phonologically} to correct the Mongolian character encoding. \item \textbf{Latin conversion}: After that, the corrected Mongolian characters are converted to their Latin representation according to the alphabet-to-Latin mapping table \cite{liu2019building}. \item \textbf{Text regularization}: Finally, more than 140 regular expressions \cite{liu2019building} were designed to normalize special characters that appear frequently in Mongolian text, such as dates and Arabic numerals, converting the irregular Mongolian text into a standardized Mongolian Latin character sequence. \end{itemize} \subsection{Audio recording and audio-text alignment} In order to ensure the quality of the audio recording, we invited a 22-year-old professional Mongolian announcer who is a female native Mongolian speaker. The whole recording process was carried out in a standard recording studio of xx university, and we used \textit{Adobe Audition} software for voice recording. The announcer followed our text script and read it sentence by sentence. In addition, another volunteer supervised the recording process and requested re-recording whenever problems occurred, such as murmurs or unreasonable pauses in the recording. \begin{figure}[] \centering \centerline{ \quad \quad \includegraphics[width=\linewidth]{fig/fig2.png}} \caption{Word length statistics of the MnTTS dataset.} \label{fig:word} \vspace{-5mm} \end{figure} \begin{figure}[] \centering \centerline{ \quad \quad \includegraphics[width=0.95\linewidth]{fig/fig1.png}} \vspace{-2mm} \caption{Sentence duration statistics of the MnTTS dataset.} \label{fig:duration} \end{figure} \begin{figure*}[] \centering \centerline{ \quad \quad \includegraphics[width=\linewidth]{fig/fig10.png}} \caption{The overall architecture of our baseline system.} \label{fig:model} \end{figure*} After recording, we performed further checks on the speech data. Specifically, we invited three Mongolian transcribers to carefully align each sentence with the content of the actual audio.
Our textual content consists of Mongolian Latin sequences; we refer to each Latin word in a sequence as a word and each letter in a Latin word as a character. The transcripts also contain punctuation marks, such as the period (`.'), comma (`,'), hyphen (`-'), question mark (`?'), exclamation mark (`!'), and so on. We removed excess noise and incorrectly pronounced parts of the audio files, keeping about 0.3 seconds of silence at the beginning and end of each audio segment. In the end, we kept about 6,000 utterances and the corresponding text data. Finally, we collated about 8 hours of speech data, stored at a sampling rate of 44.1 kHz with 16-bit precision. \subsection{Dataset specifications} The statistical results of the MnTTS data are shown in Table \ref{tab:statics}. The total number of Mongolian characters in the whole dataset is 410,044; the average (Mean) number of characters per sentence is 66, the shortest (Min) sentence has 2 characters, and the longest (Max) sentence has 189 characters. For phoneme units, the dataset contains 310,565 phonemes in total, with a mean of 50 per sentence, a minimum of 1, and a maximum of 146. In addition, we counted the number of words in the data: there are 63,866 words in this dataset, the average number of words per sentence is 10, the shortest sentence has only one word, and the longest has 28 words. We also constructed a histogram of sentence lengths, shown in Figure \ref{fig:word}, from which we can see that most sentences are concentrated in the length range of 8-10 words and that the sentence lengths roughly follow a normal distribution. In terms of sentence duration, as shown in Figure \ref{fig:duration}, most sentences are concentrated in the duration range of 4-5 seconds; in addition, since the dataset contains a large number of Mongolian names, a portion of the data is concentrated in the 0-1 second interval. \section{TTS experiments} To validate the proposed MnTTS dataset, we built a Mongolian TTS system based on the FastSpeech2 \cite{ren2020fastspeech} model and the HiFi-GAN vocoder, and evaluated it using the subjective MOS and RTF measures in terms of naturalness and inference efficiency. Please visit our GitHub repository\ref{github} to listen to the speech samples. \subsection{Experimental setup} The overall architecture of our baseline is shown in Figure \ref{fig:model}. The FastSpeech2 model converts the input Mongolian script into Mel-spectrogram features, and the HiFi-GAN \cite{kong2020hifi} vocoder then reconstructs the waveform from the Mel-spectrogram features. We used the TensorFlowTTS toolkit\footnote{\label{tensorflowtts}\url{https://github.com/TensorSpeech/TensorFlowTTS}} to build the baseline system. FastSpeech2 is a state-of-the-art non-autoregressive speech synthesis model. It extracts duration, pitch, and energy from the speech waveforms, uses them directly as conditional inputs during training, and uses predicted values at inference time. This model is not only faster to train, but also alleviates the one-to-many mapping problem in TTS (i.e., multiple phonetic variants corresponding to the same text). Both the encoder and the decoder use a hidden size of 384 and 4 hidden layers.
The number of convolutional layers of the predictors in the variance adaptor is set to 2, the predictor dropout rate is 0.5, the initial learning rate is 0.001, and the hidden dropout rate is set to 0.2. HiFi-GAN is a vocoder that has been widely used in both academia and industry in recent years. It converts the spectrogram generated by the acoustic model into high-quality audio, using a generative adversarial network as the basic generative model. The generator of HiFi-GAN mainly has two parts: an upsampling structure composed of one-dimensional transposed convolutions, and a multi-receptive field fusion module that refines the samples produced by the upsampling. HiFi-GAN has two discriminators, namely a multi-scale and a multi-period discriminator, which evaluate the speech from two different perspectives. For the generator, the kernel size is 7 and the upsampling scales are [8,8,2,2]. For the multi-period discriminator, the list of period scales is [2,3,5,7,11], and the number of convolutional filters in each period discriminator is 8. For the MelGAN-style multi-scale discriminator, the pooling type for input downsampling is AveragePooling1D, the list of kernel sizes is [5,3], and the nonlinear activation function is LeakyReLU. Before baseline training, a Tacotron2 model trained for 100k steps is used to extract durations from attention alignments for the duration predictor of FastSpeech2. After that, we train FastSpeech2 for 200k steps. For the HiFi-GAN vocoder, we first train the generator for 100k steps, and then jointly train the generator and discriminator for 200k steps. All models were trained using Tesla V100 GPUs. More details on model specifications and training procedures are provided in our GitHub repository\ref{github}. \begin{table}[] \centering \caption{\label{tab:mos}Mean opinion score (MOS) results with 95\% confidence intervals.} \begin{tabular}{cc} \toprule[1pt] System & MOS Score \\ \hline Ground Truth & 4.72 ± 0.03 \\ FastSpeech2 + Griffin-Lim & 4.23 ± 0.03 \\ \hline \textbf{FastSpeech2 + HiFi-GAN} & \textbf{4.46 ± 0.06} \\ \bottomrule[1pt] \end{tabular} \end{table} \begin{figure}[] \centering \centerline{ \quad \quad \includegraphics[width=0.8\linewidth]{fig/fig11.png}} \caption{Inference time (seconds) vs. utterance number for our model.} \label{fig:SPEED} \end{figure} \subsection{Naturalness evaluation} To evaluate naturalness, we conducted a mean opinion score (MOS) test \cite{streijl2016mean}. As the evaluation set, we randomly selected 50 sentences of different lengths; these sentences were not used to train the model. The model-generated audio was randomly shuffled with the ground-truth speech and presented to listeners. In the test, 5 subjects were asked to wear headphones and sit in a quiet environment to rate the naturalness of the 250 generated utterances. All subjects were young Mongolian students with Mongolian as their native language. The recordings were rated using a 5-point Likert scale: 5 for excellent, 4 for good, 3 for fair, 2 for poor, and 1 for bad. For a full comparison of naturalness, we compared our baseline synthetic speech with the \textbf{Ground Truth} speech. In addition, to verify the performance of HiFi-GAN, we added a \textbf{FastSpeech2+Griffin-Lim} baseline model for further comparison. Note that the FastSpeech2+Griffin-Lim system adopts the Griffin-Lim \cite{perraudin2013fast} algorithm instead of the HiFi-GAN vocoder to reconstruct the waveform.
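For reference, a minimal Python sketch of how the MOS values and their 95\% confidence intervals reported below can be computed from the collected listener ratings is given here; the rating array is purely illustrative and does not correspond to our actual data.
\begin{verbatim}
import numpy as np

def mos_with_ci(ratings, z=1.96):
    # mean opinion score and 95% confidence-interval half-width
    # (normal approximation)
    r = np.asarray(ratings, dtype=float)
    return r.mean(), z * r.std(ddof=1) / np.sqrt(len(r))

# illustrative ratings: 5 listeners x 50 sentences on a 1-5 Likert scale
rng = np.random.default_rng(0)
ratings = rng.integers(3, 6, size=5 * 50)
mos, half_width = mos_with_ci(ratings)
print(f"MOS = {mos:.2f} +/- {half_width:.2f}")
\end{verbatim}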
The subjective evaluation results are given in Table \ref{tab:mos}. According to the results, the best performance was achieved by the ground truth, as expected. The worst performer was FastSpeech2 + Griffin-Lim. Importantly, the FastSpeech2 + HiFi-GAN model achieved a MOS above 4.4, which is not far from the ground truth. This result demonstrates the utility of our MnTTS dataset for TTS applications. \begin{table}[] \centering \caption{Error types found in the 50-sentence test set (total number of words is 500).} \begin{tabular}{p{5cm}<{\centering}p{3cm}<{\centering}p{3cm}<{\centering}} \toprule[1pt] \multicolumn{1}{c}{\multirow{1}{*}{Error types}} & \multicolumn{2}{c}{\multirow{1}{*}{Number}} \\ \hline Repeated words & \multicolumn{2}{c}{0} \\ Skipped words & \multicolumn{2}{c}{0} \\ Mispronounced words & \multicolumn{2}{c}{2} \\ Incomplete words & \multicolumn{2}{c}{2} \\ Long pauses & \multicolumn{2}{c}{0} \\ Nonverbal sounds & \multicolumn{2}{c}{1} \\ \hline \multicolumn{1}{c}{Total} & \multicolumn{2}{c}{5}\\ \bottomrule[1pt] \label{tab:error} \end{tabular} \end{table} \subsection{Efficiency evaluation} The same 50-sentence test set from the previous section was used for the efficiency evaluation. The real-time factor (RTF) metric was calculated by taking the total duration of the 50-sentence test set as the reference \cite{bulut2020low}. Specifically, the time taken to synthesize this test set was 62.14 seconds, and the total duration of the synthesized speech for the test set is 188.48 seconds. Therefore, the RTF is $62.14/188.48\approx3.30\times10^{-1}$. In addition, we synthesized different numbers of sentences (ranging from 10 to 50) and measured the time required. The results are shown in Fig. \ref{fig:SPEED}, from which we find that the inference latency barely increases with the number of sentences for our model. This indicates that our model is efficient and remains fast in batch synthesis, which meets practical requirements. \subsection{Robustness analysis} Although the sound quality of our synthesized speech is satisfactory, we also found some unstable synthesis phenomena, such as skipped and missing words, during the evaluation. We identified six types of errors in the synthesized speech of the test set: repeated words, skipped words, mispronounced words, incomplete words, long pauses, and nonverbal sounds. We invited 5 volunteers to label the error cases for all synthesized speech. Note that the total number of words in the test set is 500. We report the statistical results of the error cases in Table \ref{tab:error}. From the results, we can see that there are 2 words with mispronunciation, 2 words with incomplete pronunciation, and 1 word with nonverbal noise. After analysis, we identified two likely reasons. First, the pretrained Tacotron2 model extracts the durations used to provide a supervision signal for the duration predictor of the FastSpeech2 model, but a Tacotron2 model trained on 8 hours of data may not be robust enough, leading to errors in word duration. Second, HiFi-GAN is also trained on 8 hours of data, and the vocoder may be insufficiently trained, leading to noise in some words. In future work, we will address these two issues by increasing the amount of training data. \section{Conclusion} This paper introduced the first open-source Mongolian speech dataset for TTS applications.
The MnTTS dataset contains over 8 hours of speech data consisting of around 6,000 recordings. We released our dataset under the Creative Commons Attribution 4.0 International License, which permits both academic and commercial use. We shared our experience by describing the dataset construction and TTS evaluation procedures, which might benefit other researchers planning to collect speech data for other low-resource languages. To demonstrate the use of our dataset, we built a baseline model based on FastSpeech2 and the HiFi-GAN vocoder. The MOS and RTF evaluation results suggest that the baseline model trained on MnTTS is suitable for practical use. In future work, we plan to further extend our dataset by introducing new speakers and emotions. We also plan to explore the optimal hyper-parameter settings for the Mongolian TTS model, compare different TTS architectures, and conduct additional analysis. \bibliographystyle{IEEEtran}
\section{Introduction} With the increase in travel demand, congestion has become an escalating issue that impairs the efficiency of transportation systems. To mitigate this effect, a substantial amount of effort has been devoted to developing novel technologies and policies for traffic management over the last few decades. Central to traffic management is the understanding of which factors affect travel time and/or traffic flow, to what extent, and how to effectively predict travel time in real time. Moreover, the accuracy and reliability of the prediction usually plays an essential role in the deployment of those technologies and policies. Recently emerging sensing technologies bring in massive data from multiple data sources, which enable us to examine travel time more closely. This research proposes a data-driven approach to holistically understand and predict highway travel time using massive data of traffic speeds, traffic counts, incidents, weather and events, all of which are acquired in real time from multiple data sources collected over the years. The prediction model selects the most related features from a high-dimensional feature space to better interpret travel times that vary substantially both by time of day and from day to day. Because of the rising demand for reliable prediction of traffic states/travel times, a number of methods have been proposed in the last two decades. Among them, machine-learning based methods, coupled with basic traffic flow mechanisms, are gaining popularity and becoming the mainstream in the literature. Just to name a few representative studies, linear time series analysis has been widely recognized and used for traffic state forecasting, such as linear regression \cite{zhang2003short}, Auto-Regressive Integrated Moving Average (ARIMA) \cite{pace1998spatiotemporal,kamarianakis2005space,kamarianakis2003forecasting} and its extensions, including KARIMA \cite{van1996combining}, which uses a Kohonen self-organizing map; seasonal ARIMA \cite{williams1998urban}; and Vector-ARMA and STARIMA \cite{kamarianakis2003forecasting}. Other examples include Kalman filtering \cite{guo2014adaptive}, non-parametric regression models \cite{smith2002comparison,rahmani2015non}, and support vector machines \cite{cong2016traffic,wu2004travel} for predicting travel time and flow. \cite{mitrovic2015low} exploited compressed sensing to reduce the complexity of road networks, and support vector regression (SVR) was then used to predict travel speeds on links. A trajectory reconstruction model is used by \cite{ni2008trajectory} for travel time estimation. \cite{qi2014hidden} applied a hidden Markov model that incorporates traffic volume, lane occupancy, and traffic speed data. \cite{ramezani2012estimation} and \cite{yeon2008travel} also used Markov chains to predict travel time on arterial routes. A fuzzy logic model was adopted by \cite{vlahogianni2008temporal} to model the temporal evolution of traffic flow. A hybrid empirical mode decomposition and ARIMA (EMD-ARIMA) model is developed by \cite{wang2016novel} for short-term freeway traffic speed prediction. In recent years, studies that use deep neural networks for traffic estimation have started to emerge. For example, a restricted Boltzmann machine (RBM)-based RNN model is used to forecast highway congestion \cite{ma2015large}. A stacked auto-encoder deep architecture is used by \cite{lv2015traffic} to predict traffic flows on major highways of California.
In addition, a Time Delay Neural Network (TDNN) model synthesized by a Genetic Algorithm (GA) is proposed and used for short-term traffic flow prediction \cite{abdulhai2002short}. A stacked Restricted Boltzmann Machine (RBM) in combination with sigmoid regression is used for predicting short-term traffic flow \cite{siripanpornchana2016travel}. Last but not least, a detailed review of short-term travel time prediction can be found in \cite{oh2015short} and \cite{wang2016novel}. Despite tremendous research on travel time/flow prediction, many studies focus on exploring temporal correlation at a single location, or spatio-temporal correlation on a small-scale network (such as a corridor network). In fact, traffic states of two distant road segments can be strongly correlated temporally. However, only a few studies have taken such spatial-temporal correlations into consideration when building prediction models for large-scale networks. For example, \cite{kamarianakis2003forecasting} considered spatial correlations as a function of distance and degree of neighbors when applying the multivariate autoregressive moving-average (ARIMA) model to the forecasting of traffic speed. Furthermore, \cite{kamarianakis2012real} discussed extensions of the time series prediction model by considering correlations among neighbors and the utilization of LASSO for model selection. \cite{zou2014space} introduced a space–time diurnal (ST-D) method in which link-wise travel time correlation with a time lag is incorporated. \cite{cai2016spatiotemporal} proposed a k-nearest neighbors (k-NN) model to forecast travel time up to one hour in advance. This model uses redefined inter-segment distances by incorporating the grade of connectivity between road segments, and considers spatial-temporal correlations and state matrices to identify traffic states. \cite{min2011real} proposed a modified multivariate spatial-temporal autoregressive (MSTAR) model by leveraging the distance and average speed of road networks to reduce the number of parameters. In the literature, travel time/speed and traffic counts are the two most commonly predicted metrics, oftentimes based on features constructed from these same quantities. Other features relevant to traffic states, such as weather, road incidents and local events, are rarely explored, even though they may also exhibit correlation and causality relations with congestion on road segments. Previous studies have shown that adverse weather conditions have detrimental effects on traffic congestion \cite{goodwin2002weather,sridhar2006relationship}. Moreover, different weather features can have various levels of impact on traffic delays \cite{maze2006whether}. In this study, we incorporate a complete set of weather features in the prediction model: temperature, dew point, visibility, weather type (rain/snow/fog, etc.), wind speed, wind gusts, pressure, precipitation intensity and pavement condition (wet/dry). It has also been shown that travel time is sensitive to traffic incidents of various kinds \cite{cohen1999measurement,kwon2011decomposition}, including crashes, planned work zones and disabled vehicles. The actual impacts of incidents on the travel time of particular road segments depend on a number of features of the incidents, such as time, location, type, severity and the number of lanes closed. Thus, we also take all those incident features into consideration in our prediction model.
In addition, the exploration of spatio-temporal correlations of traffic states in the literature is limited to simple metrics, such as the distance between road segments, the degree of connections and the number of time lags. These metrics are usually determined exogenously and do not necessarily reflect the actual observations. In our approach, spatio-temporal features of the network and travel demand are explored more extensively from a variety of data sets, so that the prediction model can adapt to real-world traffic conditions in response to diverse roadway/demand disruptions. For example, in urban areas with a daily commuting pattern, time-dependent origin-destination (O-D) travel demand in the morning peak is adopted as a feature when predicting travel time in the afternoon peak, since morning-peak and afternoon-peak demand patterns are oftentimes correlated, as will be shown later. Based on the O-D demand, alternative routes of the target road segments of prediction interest can be derived and incorporated into the prediction model to explore their spatio-temporal correlations. The goal of this paper is two-fold: 1) analyze and interpret the spatio-temporal relation among highway congestion and various features such as weather, incidents, demand and travel speed in the context of dynamic networks, and 2) establish a reliable travel time prediction model for an arbitrary part of a large-scale network. The method incorporates features of both supply and demand, including roadway network, travel demand, traffic speed, incidents, weather and local events, all of which are collected over several years. Compared to existing methods in the literature, this paper makes the following contributions: \begin{itemize} \item We consider a comprehensive list of data sets in the context of large-scale networks to extract features and explore their spatio-temporal relations with travel time. Those data sets include physical roadway networks, travel demand approximated by traffic counts, traffic speed, incidents, weather and local events, all of which are in high spatio-temporal resolution and collected by time of day over the years. Existing studies usually focus on a single data set or a subset of them, on a small-scale network, and their spatial or temporal resolution is relatively coarse \cite{sridhar2006relationship,kwon2011decomposition,cohen1999measurement}. \item The proposed prediction model is able to provide reliable results 30 minutes in advance. This is more advantageous than most short-term data-driven traffic prediction methods that look 5-15 minutes ahead, such as ARIMA, Kalman filtering and decision trees \cite{chien2003dynamic,zhang2003short,kwon2000day,guin2006travel}, just to name a few. \item The two case studies are conducted for afternoon hours (2-6pm) on busy and unreliable corridors, when travel time varies most significantly, both from day to day and within the day. Many existing methods are applied to the entire day on mildly congested roads, which may partially alleviate the prediction challenge. In this sense, our model attempts to most effectively capture factors impacting traffic throughput and congestion evolution by analyzing the time-of-day period with the highest travel time variability. \item The performance of all prediction models, including a time-series model as a benchmark, is estimated through multi-fold cross validation in this paper, rather than by separating training and testing data sets in an ad-hoc manner.
Cross validation results in a more robust model selection process and more reliable estimators of model errors compared to the conventional train/test validation used in many studies. \item By exploring the spatio-temporal correlation among multi-source features, the travel time prediction model can be interpreted with findings and insights from real-world traffic operations. \end{itemize} The rest of this paper is organized as follows. The proposed method for data analytics and travel time prediction is introduced in Section 2. The method is then applied to two case studies: (1) a 6-mile highway corridor of I-270 Northbound near Washington, DC, and (2) a 2.3-mile highway segment of I-376 around downtown Pittsburgh. Results and findings are presented in Section 3. Conclusions and future work are discussed in Section 4. \section{Methodology} The proposed method has two main parts: data analytics and prediction model selection. The former aims to improve our understanding of the correlations among various features from multiple data sources and the possible interpretation of congestion. The following methods are adopted: clustering, correlation analysis and principal component analysis. The latter part picks out a subset of features that are the most critical and robust in predicting travel time by incorporating the results from the analytics, prior knowledge of the network characteristics, and the estimated recurrent travel demand. Finally, the best prediction model is selected out of several candidates, including LASSO (least absolute shrinkage and selection operator), stepwise regression, support vector regression and random forest. The overall procedure of the approach is shown in Fig.~\ref{data_proc}. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{Data_proc.png} \caption{Procedure of data analytics and model selection} \label{data_proc} \end{figure} \subsection{Multiple data sources related to travel time} \label{datasource} For short-term travel time prediction, the two most widely used data sources in the literature are speed/travel time and traffic counts, which are also known as direct indicators of real-time traffic states. To understand the spatio-temporal correlations of speeds and counts among road segments, this research incorporates features from all road segments in the region of interest when predicting travel time for each road segment, as opposed to only the road segment itself or a few segments in its vicinity, as in most existing studies. The intuition is that many road segments, though distant from the road segment of interest, can lie on routes parallel to the route containing that segment. More generally, the segment of interest may be impacted by the ripple effect of perturbations to distant segments. Thus, examining only the segment itself and adjacent segments may overlook critical spatio-temporal correlations among distant road segments. The speeds and counts of all road segments, measured for each time-of-day interval (such as 5 minutes), are used as features. Apart from speeds and counts, other data sources have also been shown to influence traffic congestion. Previous studies have shown that adverse weather conditions have detrimental effects on traffic congestion \cite{goodwin2002weather,sridhar2006relationship}. Moreover, different weather features may have various levels of impact on traffic delay \cite{maze2006whether}.
In this study, we incorporate the following weather features in the model: temperature, dew point, visibility, weather type (rain/snow/fog etc.), wind speed, wind gusts, pressure, precipitation intensity and pavement condition (wet/dry). Travel time is also sensitive to traffic incidents of various kinds \cite{cohen1999measurement,kwon2011decomposition}, including crashes, planned work zones and disabled vehicles. The actual impact of an incident on travel time depends on a number of factors, such as time of day, location, type, severity and the number of lanes closed. In this study, multiple incident features for each road segment of study are extracted to encapsulate the location of incidents relative to the segment, including several binary variables indicating upstream or downstream, whether or not the incident is on its alternative routes, and whether or not the incident is along its opposite direction. Furthermore, the severity of incidents is incorporated into the set of incident features. The severity is captured with the following features: number of lanes closed, number of motorists injured, and number of vehicles involved, all of which may be available from crash data. All those incident features are carefully examined in the correlation analysis and ultimately selected for the prediction model. Last but not least, local events, such as sports games and festivals, can alter the daily travel demand and may influence congestion on relevant road segments. The actual impacts of an event on traffic are related to its date/time, location and scale. Due to the sparse and heterogeneous nature of events, it may be infeasible to incorporate all those factors for each event without overfitting. In principle, events with potential correlations are manually selected. The event type, location and time are included in the set of features. For instance, one event feature can be a binary variable indicating whether a particular type of event takes place in the evening. \subsection{Network characteristics and recurrent demand level} The characteristics of a road network as well as the daily travel demand are considered when forming the initial set of features to be selected for relating traffic speed and counts. First, for any road segment of study, we look into those road segments that can potentially carry major recurrent traffic flow upstream or downstream, since their traffic flows are most likely to be correlated with the segment of study. These segments can be extracted and selected by examining possible routes of the time-dependent origin-destination (OD) travel demand. Second, correlations between segments on alternative routes for each OD pair are considered in our approach. The travel demand, and thus the congestion levels, on alternative routes are usually correlated, even if some segments are distant from each other. Correlations among such segments would otherwise not be learned when the degree of connections in a graph is used to capture correlations. Alternative routes are extracted from possible routes for each origin-destination pair. Third, the day-to-day variations of daily commuting traffic are considered in this study. Travel demand during the morning peak can be highly correlated with demand during the afternoon peak on the opposite direction of travel on the same day. 
When analyzing and predicting afternoon peak travel time, morning peak travel time on the other direction can be used as an indicator to approximate the demand level of daily commuting traffic. To sum up, for the road segment of study, speed/counts features extracted from the following segments are included in the initial speed/counts feature set prior to model selection and prediction: \begin{itemize} \item All upstream and downstream road segments that carry the same traffic flow along main routes with the segment of study, during the time period of prediction. \item All road segments on the alternative routes derived for all origin-destination pairs. \item When analyzing/predicting travel time of afternoon peaks, traffic counts on the opposite direction during morning peaks are included to approximate the daily commute demand level. \end{itemize} We further illustrate how we can approximate the daily demand level. Cumulative traffic counts, from early in the morning to immediately before morning congestion starts, may reveal the demand level. Therefore, the demand level can be approximated by traffic counts at locations surrounding the segment of study, observed over the same time-of-day period from day to day. The time period can start from as early as 4am, when commuting starts, and end at the latest time of day, across the entire year, before morning traffic breaks down (e.g., 6:30am). A traffic breakdown can be defined as the time when travel speed drops below a certain threshold, such as 40 miles/hour. Traffic counts during the congestion are limited by the flow capacity of the furthest downstream link, and thus cannot be directly used to estimate the daily demand level. \subsection{Data analytics} We apply several data analytics techniques to the multi-source traffic data to gain a better understanding of how features are related to congestion, and provide insights for building a reliable prediction model. \subsubsection{Clustering} \label{cluster_section} In general, daily traffic patterns usually exhibit day-of-week and/or seasonal effects. For instance, days within each of the categories winter or summer, Monday, Tuesday through Thursday, and Friday may exhibit similar patterns. Thus, we cluster days of the year into several categories, then conduct data analytics and travel time prediction for each cluster independently. We can first apply K-means clustering to separate days into several clusters. Then, depending on the distribution of days of week in each cluster, we determine whether or not it is suitable to aggregate data based on day of week. For example, if most Mondays fall into one cluster and rarely appear in another cluster, we can infer that Mondays exhibit a traffic pattern distinct from other days and should be analyzed separately. The objective of K-means clustering on observations $(x_1, x_2, \dots, x_n)$ is to find a partition into $k$ sets $S=\{S_1,\dots,S_k\}$ that satisfies \begin{equation} \arg \min_S \sum_{i=1}^{k}\sum_{x \in S_i} \| x-\mu_i\|^2 = \arg \min_S \sum_{i=1}^{k}|S_i|\,\text{Var}\, S_i \label{kmeans} \end{equation} where $\mu_i$ is the mean of all observations in set $S_i$. In another example, we would like days from the same month/season to be grouped together. In this case, we can apply hierarchical agglomerative clustering (HAC) \cite{zepeda2013hierarchical}, which aims to minimize the within-cluster sum of squared errors, with the additional constraint that observations in the same cluster must form a connected graph. 
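As an illustration, this clustering step could be sketched as follows. The synthetic input matrix and the scikit-learn calls are stand-ins for the actual day-level speed features and are not the exact pipeline used in the case studies.
\begin{verbatim}
# Illustrative sketch: cluster weekdays by their afternoon speed profiles.
# `day_features` stands in for an (n_days, n_features) matrix, e.g. speeds of
# selected TMC segments at 2:00PM, 4:00PM and 6:00PM for each weekday.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

rng = np.random.default_rng(0)
day_features = rng.normal(size=(260, 90))   # placeholder for real observations

# K-means: minimizes the within-cluster sum of squared errors defined above.
kmeans_labels = KMeans(n_clusters=2, n_init=10,
                       random_state=0).fit_predict(day_features)

# HAC with a chain connectivity so that days in one cluster form a contiguous
# range of dates; linking the last day back to the first allows a "winter"
# cluster that wraps around the turn of the year.
n_days = day_features.shape[0]
connectivity = np.zeros((n_days, n_days))
for i in range(n_days):
    connectivity[i, (i + 1) % n_days] = 1
    connectivity[(i + 1) % n_days, i] = 1
hac = AgglomerativeClustering(n_clusters=2, connectivity=connectivity,
                              linkage="ward")
hac_labels = hac.fit_predict(day_features)
\end{verbatim}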
With this contiguity constraint, all of the days within the same cluster form a continuous range of dates. In terms of the number of clusters, our method explores a range of options and selects one based on the goodness of fit as well as the size of the training set. If the training set is large enough, e.g., daily observations of more than three years are available, we can split the data into more clusters. For hierarchical agglomerative clustering, it is essential that each cluster contains a sufficient number of days to avoid over-fitting. \subsubsection{Correlation analysis} To identify the relationship among different traffic features, correlation analysis is conducted first. The Pearson product-moment correlation coefficient is defined by \begin{equation} \rho_{X,Y} = \text{corr}(X,Y) = \frac{\text{cov}(X,Y)}{\sigma_X \sigma_Y}=\frac{E[(X-\mu_X)(Y-\mu_Y)]}{\sigma_X \sigma_Y} \label{corr} \end{equation} By calculating the correlation matrix and conducting hypothesis tests on whether certain pairs of features are correlated, we analyze the data in the following ways: \begin{itemize} \item Calculate and evaluate the correlation among speed/count features of all road segments. We also explore the relationship of congestion among different road segments under various time lags. This helps us analyze how congestion propagates spatially and temporally. \item Test and analyze the correlation between incidents, weather features and the travel time on the segment of study. This helps us determine if these factors are correlated with congestion, and to what extent hazardous conditions, such as crashes and wet road surfaces, impact the segment of study. \item Correlation analysis results can also be used for feature selection. High correlation between features indicates redundancy in the feature set. In particular, if two road segments exhibiting highly correlated speeds/counts are adjacent to each other, we can either remove one of them or replace both with their average in the feature set. \item As we approximate the daily travel demand level using morning traffic counts, correlation analysis helps determine which road segments and time periods are the most critical in predicting afternoon-peak travel time. Apart from comparing the correlation coefficients and conducting hypothesis tests, we also plot the day-to-day morning traffic counts against afternoon-peak counts, in order to infer whether the morning counts are useful. \end{itemize} \subsubsection{Principal component analysis} The selected features can be further explored by principal component analysis (PCA). PCA decomposes the entire set of features into several uncorrelated components via an orthogonal transformation. By conducting a PCA, the most important sources of variation among all the features can be found. PCA also allows us to compress the high-dimensional data by aggregating features into several critical dimensions. In our method, a PCA can be conducted by first gathering all initial speed/counts features as well as the travel time of the segment of study, combining them with other features of incidents, weather and local events, and applying singular value decomposition \cite{abdi2010principal} to all features. We then sort all principal components by the amount of variance they explain and analyze the composition of the top few principal components to understand the main sources of variation in the feature space. 
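The PCA step described above could be sketched as follows; standardizing the features and reporting variance ratios are illustrative choices, and the input matrix is again a synthetic stand-in.
\begin{verbatim}
# Illustrative sketch: PCA on the combined feature matrix (speed/counts plus
# weather, incident and event indicators); `X` stands in for the real data.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(260, 40))               # (days, features) placeholder

X_std = StandardScaler().fit_transform(X)    # put features on a common scale
pca = PCA(n_components=5).fit(X_std)

print(pca.explained_variance_ratio_)         # variance explained by top PCs
# Largest-magnitude loadings of the first principal component.
print(np.argsort(-np.abs(pca.components_[0]))[:10])
\end{verbatim}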
In addition, by comparing the top principal components between different clusters of days, we can discover which features are key to distinguishing the clusters. \subsection{Prediction model} Based on the results of data analytics, a travel time prediction model can be established in the following steps. \subsubsection{Dimension reduction (feature selection) from all the speed/counts features} \label{221} The number of features available from the aforementioned data sources is excessive compared to the number of available data points. For instance, a segment of interest on I-270 northbound (to be shown later in the case study) has over 500 road segments of speed measurements in the regional network. When using five 5-min time lags, the prediction model has over 2,500 features from speed data alone, all of which may have some degree of correlation with the travel time on I-270. There are around 260 workdays within a year that have afternoon peaks no longer than 5 hours. Predicting travel time with all those features, let alone features on weather/events/incidents, can be computationally inefficient and, more importantly, runs the risk of over-fitting. Hence, it is essential to reduce the dimension of the feature space before applying a prediction model. The number of speed/counts features in the initial feature set depends on the complexity of the road network and the number of time lags considered. For a particular segment of interest, a significant portion of those features are uncorrelated or redundant. Various regression models can be used to screen out those redundant features, such as ridge regression and LASSO. For LASSO, by tuning the $\lambda$ value in the formula below, we trade off between the resulting number of features and the bias of the estimator, \begin{equation} \min_{\beta}\left\{\frac{1}{N}\|y-X\beta\|^2_2 + \lambda\|\beta\|_1\right\} \label{lassoequ} \end{equation} where $X, y, \beta$ are the features, the travel time of the segment of study, and the coefficients of the features, respectively. For ridge regression, we can set a threshold for coefficients, below which the corresponding features are removed. We can achieve a similar flexibility by adjusting this threshold value. Although there is no strict rule on the number of features to be retained in the final prediction model, the following factors can serve as metrics in determining a proper number: the size of the training data set, the complexity of the road network, and the expected minimum percentage of variation to be explained by the selected features (r-squared value). Notice that it is safe to keep slightly more features than necessary, since subsequent steps will further reduce the dimension of the entire feature space when weather/incidents/events are added. While regression-based methods can pick out features that are linearly correlated with the travel time on the segment of study, they may not reveal the actual, possibly non-linear, relationship. Moreover, since the data may be noisy and relatively sparse, it may not be advisable to rely solely on those data-driven methods. Thus, besides regression models, we also need to carefully review the selected features and make modifications if necessary. 
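This regression-based screening could be sketched as follows; using scikit-learn's cross-validated LASSO to pick $\lambda$ is an illustrative choice rather than the exact procedure of the case studies, and the data are synthetic stand-ins.
\begin{verbatim}
# Illustrative sketch: LASSO screening of lagged speed/count features,
# following the objective above. `X` and `y` stand in for the lagged features
# and the travel rate of the segment of study.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = rng.normal(size=(260, 2500))             # ~260 weekdays, lagged features
y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.1, size=260)

# LassoCV chooses lambda by cross validation; a larger lambda keeps fewer
# features at the cost of a larger bias.
lasso = LassoCV(cv=5, n_alphas=50, max_iter=5000).fit(X, y)
selected = np.flatnonzero(lasso.coef_)       # indices of retained features
print(len(selected), "features retained; in-sample R^2 =", lasso.score(X, y))
\end{verbatim}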
As an example of such a manual modification, if the regression selects no segment along the downstream/upstream of the segment of study, we should add some of them to the feature set manually, as they may have a non-linear relation with the target segment, or their relations may be masked by other highly correlated features chosen by the regression. After this, all selected features will be used to create a non-linear regression model, such as a random forest. \subsubsection{Model selection} After pre-selecting the speed/counts features, we add weather/incidents/events features to construct a comprehensive prediction model. Correlation analysis is first conducted to help select features that are highly correlated with the travel time of the target segment. Next, we apply a regression model to those selected features only. In this research, four models are adopted: LASSO linear regression, stepwise regression, random forest and support vector regression. Finally, we choose the one with the best prediction performance evaluated through cross validation as the final prediction model. \section{Case studies} To evaluate the performance of our method and explore in-depth insights from data analytics, we conduct two case studies. Details and results of the two case studies are presented in the next two subsections. \subsection{I-270 Northbound} The first case study is conducted on a 6-mile-long corridor of I-270 Northbound, between Montgomery Ave. and Quince Orchard Rd. in the D.C. metropolitan region. The stretch of highway of interest is marked in purple in the left figure of Fig. \ref{DC_map}. The time period of study is 2PM-6PM on weekdays. I-270 is a major corridor connecting the D.C. metropolitan area with municipalities northwest of DC, such as Frederick and Gaithersburg. This segment is frequently congested during afternoon peak hours, and has substantial travel time variations from day to day and within days. \subsubsection{Data sources and features} For the two case studies, travel time on road segments is measured as the travel rate, which is the travel time over the entire stretch of study divided by its total length, namely the reciprocal of the space mean speed. An initial feature set is constructed as follows: \begin{enumerate} \item Speed: TMC (Traffic Message Channel) based speed data from INRIX are used, with 369 TMC segments in total covering all major highways and arterials of the area of study. Historical data are available in 5-minute intervals. Those TMCs are shown in the right map of Fig \ref{DC_map} and listed below: \begin{itemize} \item Upstream/downstream of I-270, both Northbound and Southbound. (Orange on the map) \item Roadway network of the northern D.C. area. (Blue on the map) \item MD 335 Northbound, as an alternative route of I-270 Northbound for afternoon peaks. (Red on the map) \item I-495 North, a major eastbound and westbound highway of the northern D.C. area. (Green on the map) \end{itemize} We predict the travel time of the stretch of study 30 minutes in advance in real time. Six time lags are considered, from 60 minutes to 30 minutes in advance, in 5-min intervals. \begin{figure*}[!t] \centering {\includegraphics[width=2.5in]{DC_only_new}} \hfil {\includegraphics[width=2.5in]{DC_allroad_new}} \caption{Left: A 6-mile corridor of I-270 Northbound; Right: Speed data: 369 TMC segments} \label{DC_map} \end{figure*} \item Traffic counts: 5-min traffic flow counts from fixed-location sensors at multiple locations on I-270 Northbound and Southbound are used. 
We use data from four of those sensors with the best data quality, two from each direction of this corridor. Morning and afternoon demand levels are estimated using the approximation method discussed earlier, namely the demand level is approximated by cumulative counts in the early morning, before congestion has formed. In this case study, it is the aggregated counts on I-270 southbound, from 4AM to the earliest time across all days in 2017 when speed drops below 90\% of the free flow speed. Data from multiple sensors during this time period are summed up as the final value. The afternoon demand level is also calculated as the cumulative counts from 12PM to 3PM. \label{inci_feature} \item Incidents: we use incident data collected by the following Departments of Transportation: Washington D.C., Maryland, and Virginia. Each incident is classified into one of the following categories: crash, emergency road work, planned work zone and disabled vehicle. Each incident entry comes with a start and end time, either provided in the data or imputed. Note that we predict the travel time/rate 30 min ahead, so incidents reported after the time of prediction (i.e., within the 30-minute prediction horizon) cannot be used as features. Based on the severity and geographical information of each incident, the following binary features are created: \begin{itemize} \item An incident on the upstream of the segments of study; \item A severe incident on the downstream of the segments of study; \item A non-severe incident on the downstream of the segments of study; \item An incident on the opposite direction (I-270 S); \item An incident on the downstream of the segments of study that is at least 3 miles away; \item An incident on the alternative route (MD 335 N); \item An incident in northern D.C. (the far upstream of the segments of study). \end{itemize} In particular, severe incidents are defined as those with personal injuries reported. Upstream I-270 includes all road segments within 3 miles of the south end of the segments of study, while downstream I-270 includes those within 3 miles of the north end as well as the stretch of study itself. The reason for separating downstream and upstream is that downstream/on-site incidents usually reduce traffic speed on this stretch as a result of queues, while upstream incidents can reduce the incoming flow rate to the stretch, which may, in turn, result in an increase of traffic speed. Incidents on the opposite direction are defined on segments of I-270 Southbound within the same range of latitudes as the stretch of study (I-270 N). Finally, alternative routes are defined as those segments of MD 335 Northbound in the Rockville and Gaithersburg area. \item Weather: Hourly weather reports from Weather Underground\footnote{https://www.wunderground.com/} are used in this case study. The following weather features are incorporated in the initial feature set: temperature (degrees Fahrenheit), wind chill temperature, precipitation intensity (inch/hour), precipitation type (snow/rain/none), visibility (miles), wind speed (miles/hour), wind gust (miles/hour), pressure (millibar), and pavement condition type (wet or dry). Also, we add categorical features of wind, visibility and precipitation intensity to the feature set. Those features are binary indicators of whether the current condition is among the top 20\% most extreme cases of the entire year of 2014. 
\item Local events: we incorporate the schedules of the following events: MLB (Washington DC Nationals); NFL (Washington Redskins); NBA (Washington Wizards); NHL (Washington Capitals); DC cherry blossom festival. The event feature is binary and constant for the entire PM peak, indicating whether there is an ongoing/incoming event on that day. \end{enumerate} \subsubsection{Clustering} We select 30 TMC segments in total from the speed feature set based on the following criteria: 1) the set of selected TMCs should be representative of the network, so every major highway/arterial has at least one TMC selected; and 2) TMCs with a higher correlation with the travel time on the corridor of study are selected with higher priority. We use speed measurements at three time points, 2:00PM, 4:00PM and 6:00PM, for the 30 TMCs, giving 90 features in total for each weekday in 2013. We test both K-means clustering and hierarchical agglomerative clustering (HAC). From the results of K-means, we discover that days of week cannot be properly separated by clustering, as the composition of each cluster is a mixture of different days of week. This is probably due to the highly variable nature of congestion in this corridor. With HAC, we discover that it is feasible to separate the whole year into two seasons: (1) 2013-02-21 to 2013-11-04; (2) 2013-01-01 to 2013-02-20 and 2013-11-05 to 2013-12-31. The two clusters can be explained as the non-winter pattern and the winter pattern. As a result, we conduct data analytics and regression for each of the two clusters independently. \subsubsection{Correlation analysis} The correlation matrix of the top five TMC features with the lowest p-values and a subset of non-TMC features is visualized in Fig. \ref{Corre_map}; due to space limits, not all features are visualized in the figure. Hypothesis tests (significance level 0.05) are also conducted on whether selected features are correlated with the travel rate on the corridor of interest. Findings are listed below: \begin{itemize} \item Most TMC features are significant under the hypothesis tests, and the correlations between some TMC features and the travel rate (travel time) on target segments reach 0.7 in absolute value, much higher than all non-TMC features. \item The travel demand level in the morning is positively correlated with the targeted travel rate, implying that morning travel demand can reveal afternoon congestion to some degree and can be used to predict travel time in afternoon peaks. \item Incidents on upstream I-270 have a negative correlation with the targeted travel rate. In other words, when there is an incident upstream on I-270, downstream segments will experience less congestion than on a regular day. \item The presence of incidents on the alternative route (MD 335) is also negatively correlated with the travel rate on the target corridor. \item Both severe and non-severe incidents on downstream I-270 are positively correlated with the targeted travel rate. The correlation coefficient of severe incidents is approximately 4 times that of non-severe incidents. Overall, we see that the time, location and severity of incidents impact congestion in very different ways. \item Among all weather features, wind speed, wind gust, visibility, precipitation intensity, rain, snow and pavement condition are significant under their hypothesis tests. The test on visibility has the lowest p-value, 2.34e-09. This may reveal the causal effect of hazardous weather conditions on congestion. 
\item Speed features are positively correlated with each other, including the speed on the corridor of study (the inverse of the travel rate). \item Rain and snow weather conditions have relatively high correlations with pavement condition, precipitation intensity and visibility, as expected. \item Travel rate has a positive correlation with the hour of day. More specifically, the probability of congestion increases as time progresses from 2:00PM to 6:00PM. \end{itemize} \begin{figure} \centering \includegraphics[width=1.05\linewidth]{corr_winter.png} \caption{Correlation plot of selected features for the winter season} \label{Corre_map} \end{figure} \subsubsection{Principal component analysis} To find the sources of variation in the feature set, we conduct PCA. The PCA is conducted for the two seasons separately with the same set of features, including the five most correlated TMCs as well as features of counts, weather, incidents and events. The first two principal components (PC), which can be interpreted as the two most important dimensions of the data, are plotted in Fig \ref{PCA}. Each black dot in the plot is one data entry mapped to the orthogonal space of the two PCs. From the plots we see a clear distinction between the two seasons, which indicates the existence of seasonal effects and further justifies the necessity of clustering. In terms of the composition of the principal components, the first PC of each of the two clusters contains all five TMC-based speed features, and both account for around 35\% of the total variance. The second PC consists of morning and afternoon demand features, downstream incidents, and several weather features including precipitation type, visibility and pavement condition, accounting for another 10\% of the variance. The third PC is a mixture of incidents, weather and TMC-based speed, accounting for 7\% of the total variance. To conclude, TMC-based speed features are the most essential source of variance in the data, followed by the demand level approximated by counts, downstream incidents and weather conditions. \begin{figure*}[!t] \centering \includegraphics[width=5 in]{PCA} \caption{The first two principal components. Left: Non-winter cluster; Right: Winter cluster. Each black dot stands for a data entry. Red arrows are the loadings of all features} \label{PCA} \end{figure*} \subsubsection{Dimension reduction (feature selection) in the TMC-based speed data set} \label{select_DC} We utilize LASSO to select a subset of TMC-based speed features for predicting the travel rate/time. As discussed in Section \ref{221}, by adjusting $\lambda$ in Equation \ref{lassoequ}, we obtain selected features and prediction results with different degrees of freedom. The degrees of freedom and corresponding r-squared values from LASSO, based on the winter cluster, are shown in Table \ref{T1}. Here, we also test the influence of the prediction time lag on model performance, and calculate the r-squared values under the same degree of freedom with a 15min or 30min time lag, respectively. To predict travel time 30min ahead, we use speed features that are 30min, 35min, ..., 55min ahead. Likewise, to predict travel time 15min ahead, we use more speed features that are 15min, 20min, ..., 55min ahead. \begin{table}[t!] 
\centering \renewcommand{\arraystretch}{1.2} \begin{tabular}{|l|l|l|} \hline & \multicolumn{2}{l|}{R-squared values} \\ \hline Degree of freedom & 30 min lag & 15 min lag \\ \hline 0 & 0.0000 & 0.0000 \\ \hline 1 & 0.2165 & 0.2165 \\ \hline 2 & 0.3532 & 0.3543 \\ \hline 5 & 0.4621 & 0.5358 \\ \hline 7 & 0.5422 & 0.6138 \\ \hline 10 & 0.5971 & 0.5824 \\ \hline 13 & 0.6328 & 0.7480 \\ \hline 14 & 0.6561 & 0.6770 \\ \hline 16 & 0.6709 & 0.7577 \\ \hline 18 & 0.6808 & 0.7269 \\ \hline 21 & 0.6873 & 0.7642 \\ \hline 24 & 0.6920 & 0.7582 \\ \hline 30 & 0.6951 & 0.7727 \\ \hline 33 & 0.6973 & 0.7744 \\ \hline 45 & 0.7011 & 0.7801 \\ \hline 61 & 0.7050 & 0.7823 \\ \hline 104 & 0.7116 & 0.7893 \\ \hline 148 & 0.7186 & 0.7937 \\ \hline 190 & 0.7246 & 0.7979 \\ \hline 244 & 0.7301 & 0.8019 \\ \hline \end{tabular} \caption{Speed feature selection for the winter season using LASSO} \label{T1} \end{table} Generally, R-squared values increase as more speed features are selected. When predicting travel time 30min in advance, the marginal improvement in r-squared value starts to decline when the degree of freedom exceeds 18. Under the same degree of freedom, predicting travel time 15 min ahead is significantly more accurate than 30min ahead. Similar results are also observed for the non-winter cluster. To balance the model's reliability (namely, to avoid overfitting) against goodness of fit, we choose 18 speed features from 16 different TMC segments as a result of LASSO for the winter season. In Fig \ref{speed_DC}, the corridor of study is marked in blue, and the selected TMC segments are marked in red with time lags in minutes listed. For instance, 35 means the speed on this TMC segment 35min in advance is selected to predict the travel rate of the corridor of study. \begin{figure} \centering \includegraphics[width=4.5 in]{DC_with_time_new.png} \caption{TMC segments selected for the winter season to predict travel time on I-270 northbound. Time lags in minutes are listed for each selected TMC.} \label{speed_DC} \end{figure} The results are compelling. Those 18 speed features selected by LASSO can be categorized into the following groups: \begin{itemize} \item Segments on I-270 northbound, upstream and downstream of the corridor. \item Segments on the alternative route (MD 335 North). \item Segments on I-495 North that merge into I-270 northbound. \item One segment on East West Highway, selected with three different time lags, and one segment of I-495 North Eastbound. \item Segments on several interchanges upstream of I-270 northbound. \end{itemize} Overall, the first three groups of features are expected by our feature selection criteria described in Section \ref{221}. The first group is fairly close to the targeted corridor in terms of degree of connections. Their correlations with the corridor can be explained by the propagation and spill-back of congestion. For the two segments on MD 335 North in the second group, their correlations with the corridor originate from the overwhelming travel demand from MD 335 to I-270 North. For the third group, since I-495 North is also upstream of the corridor and serves one of the destinations with significant demand during the afternoon peak, the traffic states on I-495 North can reveal the travel demand on I-270 30min in advance. In addition, road segments in the fourth group are all eastbound. 
As their correlations with the northbound corridor are positive, we infer that the travel demand may peak at the same time for northbound and eastbound traffic during the afternoon peak. Last but not least, those segments on interchanges in the fifth group are all bottlenecks, as they are usually where congestion starts before it spills over to their upstream links. In other words, those segments on interchanges are more sensitive to incoming travel demand, and can serve as an early alert for the corridor of study. As a result, they are effective in predicting upcoming congestion. \subsubsection{Prediction model} \label{prediction} We test and compare the performance of four prediction models: LASSO linear regression, stepwise regression, random forest and support vector regression. Model performance is evaluated by a 10-fold cross validation on each season (cluster). We adopt a univariate autoregressive moving average (ARMA) model as the baseline. It utilizes the speed data of the corridor only, without considering spatio-temporal features of any kind. Prediction accuracy is measured by the Normalized Root Mean Squared Error (NRMSE): \begin{equation} NRMSE = \frac{\sqrt{\frac{1}{N_t}\sum_{t} \left(\hat{y}_t - y_t\right)^2}}{\frac{1}{N_t}\sum_{t} y_t} \label{RMSE} \end{equation} where $N_t$ is the total number of data points in a cluster, and $\hat{y}_t$ and $y_t$ are the predicted and observed travel rates, respectively. First, a random set of days is selected and used to find the best fit for the ARMA model based on AIC values. The remaining days are used for a 10-fold cross validation to compute the NRMSE. As a result, the best-fitting ARMA model in this case has autoregressive and moving-average orders of 3 and 3, respectively. When predicting travel time 5-min in advance, ARMA reaches 7.35\% in NRMSE. For 30-min ahead prediction, ARMA averages 23.9\% in NRMSE. This shows that predicting travel time 30-min in advance is much more challenging than 5-min ahead. The final selected features for prediction contain all 18 TMC features for each cluster, and all other features that are significant under their hypothesis tests in the correlation analysis. Some common features for the two clusters are incidents on the downstream segments and on the alternative routes, morning and afternoon travel demand levels, and essential weather features including visibility, precipitation type, precipitation intensity, wind speed, and pavement conditions. For stepwise regression, we use AIC values as the criterion. In LASSO, the norm-1 penalty $\lambda$ is set to maximize the percentage of deviance explained. The results of model fitting are shown in Table \ref{fitting}, in which CV training and CV test stand for the average training and testing errors in cross validation, respectively. Num.F stands for the average number of features used in each model, including the intercept (constant). Ave. CV test is the weighted average cross validation testing error of the two seasons, serving as an indicator of the final model performance. Results from the random forest are based on the setting of 20 trees for each cluster. Note that by adjusting the number of trees in the model, its performance changes accordingly. In this case study, we test multiple values for the number of trees, ranging from 5 to 80, and find that as the number of trees increases from 5 to 20, the testing error improves from 17.8\% to 16.6\%, and levels off when the number of trees goes beyond 21. 
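A compact sketch of this evaluation loop is given below; the random forest setting of 20 trees and the 10 folds mirror the description above, while the data themselves are synthetic stand-ins.
\begin{verbatim}
# Illustrative sketch: 10-fold cross validation of a random forest, scored by
# the NRMSE defined above. `X` and `y` stand in for the final feature set and
# the observed travel rate of one cluster.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 47))
y = 2.0 + np.abs(X[:, 0]) + rng.normal(scale=0.1, size=500)

def nrmse(y_true, y_pred):
    return np.sqrt(np.mean((y_pred - y_true) ** 2)) / np.mean(y_true)

errors = []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True,
                                 random_state=0).split(X):
    model = RandomForestRegressor(n_estimators=20, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    errors.append(nrmse(y[test_idx], model.predict(X[test_idx])))
print("mean CV test NRMSE:", np.mean(errors))
\end{verbatim}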
Comparing the results of all models, we can see that the OLS regression on multivariate speed features with clustering marginally improves on the benchmark model, ARMA(3,3). Furthermore, by incorporating non-TMC based features, LASSO outperforms the OLS regression while having much lower complexity. It is effective to incorporate non-TMC features in travel time prediction, and LASSO keeps the prediction more robust (less overfitting) than OLS, where a large number of features are used. In addition, the two linear regression models, LASSO and stepwise regression by AIC, share similar prediction performance with an average testing error of around 20.4\%. However, the number of finally selected features in the stepwise regression model is lower than in LASSO, since it uses 31 and 29 features for the two clusters, compared to 47 used by LASSO. Finally, random forest outperforms the other models considerably with an average error rate of 16.6\%. Compared to the benchmark ARMA model, our model effectively reduces the prediction error by a margin of 7.2\%. \begin{table*}[t!] \centering \renewcommand{\arraystretch}{1.2} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{Model} & \multicolumn{3}{l|}{Winter} & \multicolumn{3}{l|}{Non-winter} & \multirow{2}{*}{Ave. CV test} \\ \cline{2-7} & Num.F & CV train & CV test & Num.F & CV train & CV test & \\ \hline Baseline--ARMA & \multicolumn{6}{l|}{NA} & 0.238 \\ \hline OLS on all TMCs & 1231 & 0.162 & 0.230 & 1281 & 0.180 & 0.220 & 0.224 \\ \hline LASSO & 37 & 0.199 & 0.203 & 36 & 0.213 & 0.210 & 0.207 \\ \hline Stepwise AIC & 31 & 0.196 & 0.196 & 29 & 0.208 & 0.210 & 0.204 \\ \hline Random forest & 47 & 0.067 & 0.186 & 47 & 0.070 & 0.153 & 0.166 \\ \hline SVR & 47 & 0.160 & 0.182 & 47 & 0.169 & 0.182 & 0.182 \\ \hline \end{tabular} \caption{I-270 case study: Model performance evaluations. Cross validation (CV) errors of predicting travel time 30-min in advance; errors are measured in NRMSE.} \label{fitting} \end{table*} \subsection{I-376 Eastbound} The second case study is conducted on a road segment of I-376 Eastbound, between the Forbes Ave exit and the Squirrel Hill exit in the Pittsburgh metropolitan area. This 2.8-mile-long highway corridor is one of the main roadways connecting downtown Pittsburgh to the east region of the city. Due to heavy traffic load and limited roadway capacity, congestion is sensitive to demand and very frequent on this corridor during afternoon peaks. The time period of study is the same as in the I-270 case study, 2PM-6PM on weekdays during the year of 2014, while feature selection is based on all the data in 2013. The stretch is marked in red in the left map of Fig. \ref{Pit_map}. We predict its travel time 30 minutes in advance. \subsubsection{Data sources and feature engineering} The following datasets were collected and used in this case study: \begin{enumerate} \item Speed: TMC-based speed data from INRIX, with 259 TMC segments in total covering all major roads near this corridor, in the neighborhoods of Oakland and Southside. Historical data are available in 5-minute granularity. Those TMCs are listed below and shown in the right map of Fig \ref{Pit_map}: \begin{itemize} \item Upstream/downstream of the I-376 corridor of study, both Northbound and Southbound. (Blue on the map) \item Roadway network of the region, including Pittsburgh downtown, Oakland and Southside. 
(Red on the map) \item Three main arterial streets eastbound from Pittsburgh downtown: Forbes Ave, Penn Ave and Center Ave, as alternative routes for I-376 during afternoon peaks. (Purple on the map) \end{itemize} \begin{figure*}[t!] \centering \includegraphics[width=3.5 in]{TMC_pit.png} \includegraphics[width=3.5 in]{TMC_area_pit2.png} \caption{Left: I-376 Eastbound segment of study; Right: Speed dataset used in the case study: 259 TMC segments in total.} \label{Pit_map} \end{figure*} \item Incidents: incident data are obtained from the PennDOT Road Condition Reporting System (RCRS), where each incident entry is categorized into the following binary features based on its location: \begin{itemize} \item An incident on the upstream of the segments of study. \item An incident on the downstream of the segments of study, both severe and non-severe. \item An incident on the opposite direction (i.e., I-376 W). \item An incident on alternative routes (Penn Ave and Baum Blvd). \item An incident in Downtown Pittsburgh, far upstream of the segments of study. \end{itemize} The definition of incident features in this study follows a similar approach to the I-270 case study, namely all features are binary and calculated based on the timestamp and geographic information of all RCRS entries. Due to the limited number of incident records on I-376, severe and non-severe incidents are aggregated into one feature in the model. \item Weather: the set of weather features is identical to the I-270 case study, including temperature (degrees Fahrenheit), wind chill temperature, precipitation intensity (inch/hour), precipitation type (snow/rain/none), visibility (miles), wind speed (miles/hour), wind gust (miles/hour), pressure (millibar) and pavement condition type (wet or dry), as well as categorical features of wind, visibility and precipitation intensity. \item Local events: we incorporate the schedule of home games of the NFL (Pittsburgh Steelers) and NHL (Pittsburgh Penguins) to analyze their impacts. Similar to the I-270 case, the event feature is binary and constant for the entire PM peak, indicating whether there is an ongoing/incoming event on that day. \end{enumerate} \subsubsection{Clustering} Clustering is conducted on a set of 35 TMC segments covering all major highways of the network, selected in a similar way to the I-270 case study. Speed measurements at three time points, 2:00PM, 4:00PM and 6:00PM, are used to represent the traffic states of the network during afternoon peaks. As a result of HAC, all weekdays of 2013 are split into the following two clusters: (1) 2013-01-01 to 2013-05-29 and 2013-12-10 to 2013-12-31; (2) 2013-05-30 to 2013-12-09. Unlike the I-270 case study, the separation of the two clusters can be interpreted as winter/spring and summer/fall seasons, respectively. \subsubsection{Correlation analysis} We calculate the correlation matrix of the feature set for each cluster to explore the relationship among features, and conduct hypothesis tests to check whether certain features are linearly correlated. The correlation matrix for the winter/spring cluster is visualized in Fig. \ref{Corre_pit}. Similar to the I-270 case study, the correlations between the top TMC features and the travel rate (travel time) of the corridor of study are significantly higher than those of non-TMC features. Thus, those TMC speed data are the most important factors in explaining the variation of the travel rate. 
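The correlation screening used here, as in the I-270 case, could be sketched as follows; the 5\% significance level matches the text, while the feature dictionary and the use of scipy are illustrative.
\begin{verbatim}
# Illustrative sketch: Pearson correlation tests between candidate features
# and the travel rate of the corridor of study; the inputs are synthetic
# stand-ins for the assembled data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
travel_rate = rng.normal(loc=2.0, scale=0.3, size=260)
features = {
    "tmc_101_lag30": travel_rate + rng.normal(scale=0.2, size=260),
    "visibility": rng.normal(size=260),
}

significant = []
for name, values in features.items():
    r, p = pearsonr(values, travel_rate)
    if p < 0.05:                  # keep features significant at the 5% level
        significant.append((name, r))
print(sorted(significant, key=lambda t: -abs(t[1])))
\end{verbatim}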
Most non-TMC factors, such as incidents, visibility, dew point, pavement condition and the hour of day, are correlated with the travel rate, and they should be included in the feature set for the prediction model. The features that are not significantly correlated with the travel rate (travel time) of the targeted corridor are thus removed from the feature set. \begin{figure}[t!] \centering \includegraphics[width=5.5 in]{corr_pit.jpg} \caption{Correlation matrix of selected features for the winter/spring season.} \label{Corre_pit} \end{figure} \subsubsection{Principal component analysis} Similar to the I-270 case study, PCA is conducted on the set of selected features for the two clusters, including the five most correlated TMCs, as well as features of traffic demand level, weather, incidents and events. The first two principal components (PC) are plotted in Fig \ref{PCA_pit}. The first PCs for both clusters contain several weather-related features, including temperature, humidity, wind speed, visibility and pavement condition, accounting for around 19\% of the total variance for each cluster. The second PCs consist of most TMC-based speed features, namely 6 TMCs for the first cluster and 9 for the second cluster, accounting for about 10\% of the total variance. Comparing this result to the I-270 case study, it can be seen that the importance of non-TMC features differs from case to case, but the most correlated TMC-based speed features are always critical for prediction. \begin{figure*} \centering \includegraphics[width=5.5 in]{PCA_pit.jpg} \caption{Plot of principal components. Left: First half-year cluster; Right: Second half-year cluster. Each black dot stands for one data point mapped to the orthogonal space of the two PCs.} \label{PCA_pit} \end{figure*} \subsubsection{Dimension reduction (feature selection) in the TMC-based speed data set} We first use LASSO to select a subset of TMC-based speed features for predicting the travel time/rate. A total of 15 TMC features for the first cluster and 17 for the second cluster are selected following the method discussed in Section \ref{select_DC}. These selected TMCs are then combined with other non-TMC features to create the feature set. LASSO is further used to select the final features to be included in the prediction model. In Fig \ref{speed_PIt}, the I-376 corridor of study is marked in blue, and the 15 selected TMC-based speed features for the first cluster are marked in red, along with time lags in minutes. Those 15 speed features can be categorized as follows: \begin{itemize} \item A segment within the corridor of I-376 Eastbound. \item A segment on the ramp merging into I-376 Eastbound. \item Segments on the alternative routes: Penn Ave and Center Ave. \item Segments on the opposite direction (I-376 Westbound). \item Segments in the Pittsburgh downtown area. \item A segment in one of the adjacent neighborhoods, Southside. \end{itemize} The first two segments can be seen as direct indicators of the overall congestion level of the entire corridor; they are likely two of the bottlenecks. The selection of the third group of segments shows that highly correlated alternative routes can effectively help predict congestion on the targeted corridor, which again proves useful for prediction, similar to the other case study. The last two groups of segments imply that the corridor congestion may be related to those critical roadways in the neighborhoods that feed travel demand to it. 
Though this does not necessarily indicate a causal relation, those segments can send signals 40min ahead to alert for congestion on the corridor. To sum up, the features selected for the final models of the I-376 case study include: 15 and 17 TMC features for the two clusters; essential weather features including visibility, precipitation type, precipitation intensity, wind speed, and pavement conditions; local events; incidents on the upstream/downstream of I-376 E as well as incidents on I-376 W; and the hour of day. \begin{figure} \centering \includegraphics[width= 5 in]{TMC_selected_Pit.png} \caption{TMC segments selected for the winter/spring season. Time lags in minutes are listed for each selected TMC} \label{speed_PIt} \end{figure} \subsubsection{Prediction model} To find the best prediction model for this case, we train and evaluate the following candidate models: ARMA as a baseline model, OLS regression on speed, LASSO, stepwise regression, random forest and support vector regression. The steps to run those models are the same as in the I-270 case study, described in Section \ref{prediction}. For ARMA, the best fit in this case has two autoregressive terms and one moving average term. As the baseline, ARMA reaches an error rate of 14.22\% for 5-min ahead prediction and 38.4\% for 30-min ahead prediction. All model results are compared in Table \ref{fitting_pit}. The ranking of all candidate models is quite similar to the I-270 case. The two linear regression models, LASSO and AIC-based stepwise regression, share similar performance with an average testing error of 25.2\%, a significant improvement over ARMA but still far from satisfactory. Random forest again achieves the lowest average error, at 17\% in NRMSE, a 21\% improvement compared to the baseline ARMA. Combining the results of the two case studies, we conclude that adding spatio-temporal information from all segments in the network, weather, incidents and events can greatly improve real-time prediction. On the other hand, we notice that the accuracy of each model is quite consistent across the two case studies. LASSO, SVR and random forest are all good methods, showing considerable improvement over ARMA and naive models that use speed data only. \begin{table*}[t!] \centering \renewcommand{\arraystretch}{1.2} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{Model} & \multicolumn{3}{l|}{Cluster 1 (First half year)} & \multicolumn{3}{l|}{Cluster 2 (Second half year)} & \multirow{2}{*}{Ave. CV test} \\ \cline{2-7} & Num.F & CV train & CV test & Num.F & CV train & CV test & \\ \hline Baseline--ARMA & \multicolumn{6}{l|}{NA} & 0.384 \\ \hline OLS on all TMCs & 1063 & 0.219 & 0.270 & 1063 & 0.212 & 0.292 & 0.282 \\ \hline LASSO & 31 & 0.247 & 0.260 & 33 & 0.233 & 0.245 & 0.252 \\ \hline Stepwise AIC & 29 & 0.225 & 0.261 & 31 & 0.226 & 0.243 & 0.252 \\ \hline Random forest & 36 & 0.080 & 0.164 & 38 & 0.082 & 0.175 & 0.170 \\ \hline SVR & 36 & 0.162 & 0.193 & 38 & 0.190 & 0.208 & 0.200\\ \hline \end{tabular} \caption{I-376 case study: Model performance evaluations. Cross validation (CV) errors of predicting travel time 30-min in advance; errors are measured in NRMSE.} \label{fitting_pit} \end{table*} \section{Conclusions} We propose a data-driven method for analyzing highway congestion and predicting travel time based on spatio-temporal network characteristics and multiple data sources including travel speed, counts, incidents, weather and events, all in the context of dynamic networks. 
The proposed method can be used to analyze the spatio-temporal correlations among various features related to travel time, explore possible causal relations to congestion, and identify the most critical and reliable features for real-time travel time prediction. The proposed method is applied to two regional highway corridors, I-270 in the D.C. region and I-376 in the Pittsburgh region. The results validate the effectiveness of the data-driven approach in understanding the correlations between highway congestion and various spatio-temporal features. We are able to predict travel time on those corridors 30min in advance, and the prediction results are satisfactory. In particular, we find that: 1) the days of the year can be clustered into seasons, each of which shows different traffic patterns; 2) TMC-based speed features are the most critical components of travel time variability in the multi-source data set; they include road segments on the alternative routes to the corridor of study, downstream and upstream bottlenecks and major demand sources, all of which can be automatically selected by the data-driven approach; 3) other features that are useful in predicting travel time include the time/location of incidents, morning and afternoon travel demand levels, visibility, precipitation intensity, weather type (rain, snow), wind speed/gust, and pavement conditions; and 4) random forest shows the most promise of all candidate models, reaching an NRMSE of 16.6\% and 17.0\%, respectively, in afternoon peak hours for the entire year of 2014. \section*{Acknowledgements} This research is funded in part by the Traffic 21 Institute and Carnegie Mellon University’s Mobility21, a National University Transportation Center for Mobility sponsored by the US Department of Transportation. The data acquisition and pre-processing for the I-270 corridor are funded by the FHWA research project ``Data Guide for Travel Time Reliability''. The authors wish to thank John Halkias, Douglas Laird, James Sturrock and David Hale for their valuable comments. The contents of this report reflect the views of the authors only. The U.S. Government assumes no liability for the contents or use thereof. \bibliographystyle{unsrt}
\section{Introduction} Our main result is the following theorem. \begin{thm}\label{thm:main} Let $E/\mathbb{Q}$ be a number field of degree $n$. Denote by $D$ its discriminant, by $R$ the regulator of its ring of integers and by $h$ the class number. For every class group character $\chi\in \widehat{\mathrm{Cl}(E)}$ let $L(s,\chi)$ be the associated Hecke $L$-function. Fix a real number $1/2\leq s<1$. There are effectively computable constants $A,B>0$ that depend only on $s,n$ such that for every $1/2>\varepsilon>0$ \begin{equation*} h^{-1}\#\left\{\chi \in \widehat{\mathrm{Cl}(E)} \mid L(s,\chi)\neq 0 \right\} \geq |D|^{-(1-s+\varepsilon)/2} \left(A-B \frac{R}{|D|^{s/2}}\right) \varepsilon^n\;. \end{equation*} \end{thm} The most interesting point is of course $s=1/2$ as GRH would imply non-vanishing of $L(s,\chi)$ at $1/2<s<1$ for all $\chi$. Fr\"ohlich \cite{Frohlich} has demonstrated that the Dedekind zeta function actually vanishes at the central point for infinitely many number fields. Duke \cite{DukeLarge} has constructed for each $n$ an infinite family of degree $n$ totally real $S_n$ number fields such that $R\ll (\log |D|)^{(n-1)}$. There is a very rich literature about non-vanishing of $L$-functions at the central point for several families of $L$-functions. In this exposition we restrict our discussion to class group $L$-functions and closely related families. Blomer \cite{Blomer} has established a very strong result for the family of class group $L$-functions of imaginary quadratic fields. He is able to demonstrate non-vanishing for a large fraction of the class group characters, $\gg \varphi(|D|)/|D|$, whenever $|D|\gg 1$. Theorem \ref{thm:main} provides significantly weaker results for imaginary quadratic fields but it covers class group $L$-functions of any degree. In the conductor aspect, Balasubramanian and Murty \cite{BalasubramanianMurty} established that a positive proportion of Dirichlet $L$-functions of prime conductor $q\gg 1$ do not vanish at the central point. Soundararajan \cite{SoundNonVanishing} has established that a positive proportion of Dedekind zeta functions of real quadratic fields do not vanish at the central point. Methodologically, the work of Michel and Venkatesh \cite{MichelVenkatesh} about non-vanishing of twists of automorphic $\mathbf{GL}_2$ $L$-functions by quadratic class group characters is the closest to ours. We remark that predictions about the behavior of $L$-functions at the central point can often be deduced from random matrix theory heuristics \cite{KatzSarnak,SarnakShinTemplier,ShankarSodergrenTemplier}. Moreover, the non-vanishing phenomenon is related to deep questions in analytic number theory, such as the existence of Landau-Siegel zeros \cite{IwaniecSarnak} and the spectral gap for automorphic representations \cite{LuoRudnickSarnakI,LuoRudnickSarnakII}. Three aspects of Theorem \ref{thm:main} stand out. The first is that the result is valid for number fields of any degree. The second is that we allow relatively large regulators. In particular, whenever $R=o(|D|^{1/4})$ Theorem \ref{thm:main} provides new non-vanishing results at the central point $s=1/2$. Finally, the non-vanishing fraction depends only on the discriminant and the regulator, and does not depend on the shape of the unit lattice. Specifically, we do not need to assume that the number field $E$ has no non-trivial subfields with a small regulator. 
The latter assumption is needed in the course of the proof of \cite[Theorem 1.10]{ELMVPeriodic}, which is conceptually related to our method. It is also worth mentioning that the constants $A,B$ are completely effective, and do not depend on Siegel's bound, cf.\ \cite{Blomer} where the lower bound for $|D|$ is ineffective. \subsection{Subconvexity} Some improvements of the lower bound in Theorem \ref{thm:main} for $s=1/2$ are easily achievable. \begin{enumerate} \item Using the weak subconvexity bound of Soundararajan \cite{Sound} we can deduce a lower bound with a logarithmic improvement for all number fields \begin{equation*} h^{-1}\#\left\{\chi \in \widehat{\mathrm{Cl}(E)} \mid L(1/2,\chi)\neq 0 \right\} \gg_{n,\varepsilon} |D|^{-1/4}(\log |D|)^{1-\varepsilon}\left(A-B \frac{R}{|D|^{1/4}}\right)\;. \end{equation*} \item Whenever there is $\delta>0$ such that a subconvex bound in the discriminant aspect \begin{equation*} |L(1/2,\chi)| \ll_{n,\varepsilon} |D|^{(1/2-\delta+\varepsilon)/2} \end{equation*} is known, we can improve the lower bound to \begin{equation*} h^{-1}\#\left\{\chi \in \widehat{\mathrm{Cl}(E)} \mid L(1/2,\chi)\neq 0 \right\} \gg_{n,\varepsilon} |D|^{-(1/2-\delta+\varepsilon)/2} \left(A-B \frac{R}{|D|^{1/4}}\right)\;. \end{equation*} The Grand Lindel\"of Hypothesis would provide the optimal $\delta=1/2$. A non-trivial $\delta>0$ is known unconditionally for abelian fields $E$ using the Burgess bound \cite{BurgessII} and for cubic fields using the convexity breaking results of Duke, Friedlander and Iwaniec \cite{DFI} and Blomer, Harcos, Michel \cite{BHM}. \end{enumerate} \subsection{Method of Proof} To study the Hecke $L$-function in the critical strip we follow Hecke's original method \cite{Hecke}. That is, we represent the $L$-function as an integral of a spherical degenerate Eisenstein series $E(\bullet,s)\colon\lfaktor{\mathbf{PGL}_n(\mathbb{Z})}{\mathbf{PGL}_n(\mathbb{R})}\to\mathbb{C}$ along a collection of periodic torus orbits. This spherical Eisenstein series coincides with the Epstein zeta function of the associated quadratic form. The definition and properties of the Epstein zeta function are reviewed in \S\ref{sec:Epstein-cusp}. Our strategy is most closely related to the methods of Michel and Venkatesh \cite{MichelVenkatesh}, who study non-vanishing at the central point for twists of $\mathbf{GL}_2$ automorphic $L$-functions by quadratic class group characters. They provide two tools to establish non-vanishing in a family, either using effective equidistribution of a packet of Heegner points on the modular curve, or using the escape of mass of a portion of the packet that contains the trivial ideal class. Unfortunately, in higher rank we do not know unconditionally an effective equidistribution result for the analogous toral packets, nor do we know that a large enough portion of the mass escapes to infinity, even for small regulators. Instead, we observe that combining very weak versions of both statements \emph{together} is sufficient to establish the non-vanishing theorem. The equidistribution statement is weakened to the convexity bound of Hecke $L$-functions. It is supplemented with good control of the mass that the single orbit of the trivial ideal class element spends high in the cusp. In \S\ref{sec:periods} we construct a maximal torus $H<\mathbf{PGL}_n(\mathbb{R})$ from a fixed degree $n$ number field $E$ and an algebra isomorphism $\iota\colon E\otimes \mathbb{R}\to \mathbb{R}^{r_1}\times \mathbb{C}^{r_2}=\mathbb{R}^n$. 
Every fractional ideal $\Lambda \subset E$ gives rise to a periodic $H$-orbit, which we denote by $\tensor[^\iota]{\Lambda}{}H\subset \lfaktor{\mathbf{PGL}_n(\mathbb{Z})}{\mathbf{PGL}_n(\mathbb{R})}$, cf.\ \cite{ELMVPeriodic}. This periodic orbit depends only on the ideal class of $\Lambda$. We recall these classical definitions as well in \S\ref{sec:periods}.

Fix $1/2\leq s < 1$ and define the function $Z\colon \mathrm{Cl}(E)\to\mathbb{C}$ by
\begin{align*}
Z(\Lambda)&\coloneqq\int_{[H]} E^*(\tensor[^\iota]{\Lambda}{}h,ns) \,\mathrm{d}^\times h\;,\\
E^*(g,s)&\coloneqq \pi^{-s/2}\Gamma\left(\frac{s}{2}\right) E(g,s)\;,\\
E\left(g,s\right)&\coloneqq\frac{1}{2}|\det g|^{s/n} \sum_{0\neq \mathrm{v}\in \mathbb{Z}^n} \|\mathrm{v} g \|_2^{-s}\;.
\end{align*}
The function $E\left(g,s\right)$ is the Epstein zeta function associated to the lattice $\mathbb{Z}^n g$ and $E^*(g,s)$ is the completed Epstein zeta function. The integration is with respect to the $H$-periodic measure of volume $1$. Hecke's period formula, cf.\ Theorem \ref{thm:Hecke-period}, expresses this integral in terms of a completed partial Dedekind zeta function
\begin{equation*}
Z(\Lambda)=\frac{w}{2^{r_1} n R} \zeta^*_{\Lambda}(s)
\end{equation*}
whose definition we recall in Definition \ref{defi:partial-dedekind}. The Fourier coefficients of this function coincide with the completed $L$-functions of the class group characters,
\begin{equation*}
\hat{Z}(\chi)=\frac{w}{2^{r_1} n h R} L^*(s,\chi)
\end{equation*}
for any $\chi\in\widehat{\mathrm{Cl}(E)}$.

In Theorem \ref{thm:Epstein-cusp} we establish a good lower bound on the Epstein zeta function high in the cusp using an approximate functional equation. This lower bound and the fact that the lattice $\iota(\mathcal{O}_E)\subset \mathbb{R}^n$ contains the short vector $(1,\ldots,1)$ are used in the proof of the key statement of this manuscript -- Proposition \ref{prop:Z(O)-bound}. This proposition states that there are effectively computable constants $A_1,B_0>0$ such that
\begin{equation*}
Z(\mathcal{O}_E) \geq \frac{A_1|D|^{s/2}-B_0 R}{R}\;.
\end{equation*}
The proof of this result also uses a trick where the unit lattice is approximated by the lattice spanned by vectors realizing its successive minima. This allows us to remove the dependence on the shape of the unit lattice. V.\ Blomer later suggested to the author a briefer proof of Proposition \ref{prop:Z(O)-bound} by applying the approximate functional equation directly to the partial Dedekind zeta function. The proof presented here emphasizes the role of the crucial concept of escape of mass.

Without further ado we establish Theorem \ref{thm:main} assuming this result and using the following elementary lemma.

\begin{lem}\label{lem:non-vanishing}
Let $C$ be a finite abelian group. For every function $f\colon C\to\mathbb{C}$ define
\begin{equation*}
\operatorname{NV}(f)\coloneqq\left\{\chi \in \widehat{C} \mid \hat{f}(\chi)\neq 0 \right\}\;.
\end{equation*}
Then
\begin{equation*}
\#\operatorname{NV}(f) \geq \frac{\|f\|_\infty}{\|\hat{f}\|_\infty}\;.
\end{equation*}
\end{lem}
\begin{proof}
Fix $c\in C$ where $|f|$ attains its maximum. Then by Fourier inversion
\begin{equation*}
\|f\|_\infty=\left|\sum_{\chi\in\operatorname{NV}(f)} \hat{f}(\chi) \chi(c)\right|\leq \|\hat{f}\|_\infty \left|\operatorname{NV}(f)\right|\;.
\end{equation*}
\end{proof}

\begin{proof}[Proof of Theorem \ref{thm:main}]
We apply Lemma \ref{lem:non-vanishing} above to the function $Z\colon \mathrm{Cl}(E)\to\mathbb{C}$.
Proposition \ref{prop:Z(O)-bound} provides the necessary lower bound on $\|Z\|_\infty$. We need only an appropriate upper-bound on $|\hat{Z}(\chi)|$ for any class-group character $\chi$. Recall the convexity bound for Hecke $L$-functions of class group characters, cf.\ \cite{Rademacher}. For every $0<\varepsilon<1/2$ and $1/2\leq s < 1$ \begin{equation*} |L(s,\chi)|\ll_{s,n} |D|^{(1-s+\varepsilon)/2} \zeta(1+\varepsilon)^n\Longrightarrow |\hat{Z}(\chi)|\ll_{s,n} \frac{|D|^{(1+\varepsilon)/2}}{hR}\zeta(1+\varepsilon)^n \ll_n \frac{|D|^{(1+\varepsilon)/2}}{hR} \varepsilon^{-n} \end{equation*} Dividing the lower bound from Proposition \ref{prop:Z(O)-bound} by the convexity upper bound implies the claimed theorem. \end{proof} The convexity bound should be understood as an almost-equidistribution statement for periods of degenerate Eisenstein series over toral packets. Indeed, any subconvex improvement in the discriminant aspect over the convexity bound would imply the equidistribution of any degenerate pseudo-Eisenstein series. If $n$ is a prime then such a subconvexity bound can be bootstrapped using the method of Einsiedler, Lindenstrauss, Michel and Venkatesh \cite{ELMVCubic} to equidistribution of any compactly supported continuous function. \subsection*{Acknowledgments} I wish to deeply thank Peter Sarnak, Vesselin Dimitrov and Elon Lindenstrauss for very fruitful discussions about this topic. I am extremely grateful to Valentin Blomer for helpful and insightful comments regarding a previous version of this manuscript. I wish to thank the referees for carefully reading the manuscript, their suggestions have notably improved the presentation. This work has been supported by the National Science Foundation under Grant No. DMS-1946333. \section{The growth of the Epstein zeta function in the cusp}\label{sec:Epstein-cusp} \begin{defi} Define the degenerate spherical Eisenstein series with complex parameter $s$ and $g\in\mathbf{GL}_n(\mathbb{R})$ \begin{equation*} E\left(g,s\right)=\frac{1}{2}|\det g|^{s/n} \sum_{0\neq \mathrm{v}\in \mathbb{Z}^n} \|\mathrm{v} g \|_2^{-s}\;. \end{equation*} \end{defi} This function coincides with the Epstein zeta function\footnote{Our normalization for the Epstein zeta function is different from \cite{Terras} where $Z(g\cdot \tensor[^t]{g}{},\rho)=|\det g|^{-2\rho/n}E(g,2\rho)$.} of the quadratic form with Gram matrix $g\cdot \tensor[^t]{g}{}$. The series converges absolutely for $\Re s>n$ and can be analytically continued to a meromorphic function of $s\in \mathbb{C}$. The unique pole of $E\left(g,s\right)$ is at $s=n$. This pole is simple with residue\footnote{This can be deduced from the fact that the number of lattice points in a sphere of radius $R$ is asymptotic to the volume of the sphere as $R\to \infty$ for any unimodular lattice.} \begin{equation*} \Res_{s=n}E\left(g,s\right)= \frac{\pi^{n/2}}{\Gamma(n/2)}\;. \end{equation*} The constant on the right is half the surface area of the $(n-1)$-dimensional unit sphere. Notice that it does not depend on $g$. We have normalized $E(g,s)$ using the determinant to make it a well-defined function on $\lfaktor{\mathbf{PGL}_n(\mathbb{Z})}{\mathbf{PGL}_n(\mathbb{R})}$. \begin{defi} We will also make use of a completed version of $E(g,s)$ defined as \begin{equation*} E^*(g,s)=\pi^{-s/2}\Gamma\left(\frac{s}{2}\right) E(g,s)\;. 
\end{equation*} \end{defi} The functional equation \cite{Epstein}, cf.\ \cite[Proposition 10.2]{ELMVCubic}, is especially simple for the completed Eisenstein series \begin{equation*} E^*(g,n-s)=E^*(\tensor[^t]{g}{^{-1}},s) \end{equation*} It follows that $E^*(g,s)$ is holomorphic in $s$ except for two simple poles at $s=0,n$ with residues $-1,1$ respectively. Our first goal is to understand the behavior of this function high in the cusp. The following theorem due to Riemann for $n=1$ and Terras \cite{Terras} for $n>1$ is a variant of the approximate functional equation for the Epstein zeta function. We provide a proof using Mellin inversion. \begin{thm}\label{thm:approx-func-eq} For any $0,n\neq s\in\mathbb{C}$ and $g\in\mathbf{GL}_n(\mathbb{R})$ \begin{equation*} E^*(g,s)=-\frac{1}{s}-\frac{1}{n-s}+\frac{1}{2}\sum_{0\neq v \in \mathbb{Z}^n} f\left(s, \frac{\|vg\|_2}{|\det g|^{1/n}}\right) +\frac{1}{2}\sum_{0\neq v \in \mathbb{Z}^n} f\left(n-s, \frac{\|v\tensor[^t]{g}{^{-1}}\|_2}{|\det \tensor[^t]{g}{^{-1}}|^{-1/n}}\right) \end{equation*} where \begin{equation*} f(s,a)\coloneqq (\pi a^2)^{-s/2}\Gamma\left(\frac{s}{2},\pi a^2\right) =\int_1^\infty t^{s/2}\exp(-\pi t a^2) \,\mathrm{d}^\times t\;. \end{equation*} \end{thm} \begin{proof} The Mellin transform in the $a$ variable of $f(s,a)$ is exactly $\pi^{-\sigma/2}\Gamma\left(\frac{\sigma}{2}\right)(\sigma-s)^{-1}$. Because of the exponential decay of the Gamma function in the vertical direction we can use Mellin inversion to write \begin{equation*} \frac{1}{2}\sum_{0\neq v \in \mathbb{Z}^n} f\left(s, \frac{\|vg\|_2}{|\det g|^{1/n}}\right)=\frac{1}{2\pi i}\int_{\Re \sigma=n+\delta} \frac{E^*(g,\sigma)}{\sigma-s} \,\mathrm{d} \sigma \end{equation*} for any $1>\delta>0$. We shift the contour of integration to the line $\Re \sigma=-\delta$ collecting residues at the $\sigma=0,s,n$. To justify the contour shift we claim that $E^*(g,s)$ decays exponentially in the vertical direction uniformly in any interval $a\leq \Re s \leq b$. To the right of the critical strip this follows from the definition using lattice summation and the exponential decay in the vertical direction of the Gamma function. To the left of the critical strip this can be deduced using the functional equation and inside the critical strip using the Phragm\'en-Lindel\"of principle. The residue at $\sigma=s$ coincides with $E^*(g,s)$ and the other two residues produce the terms $-\frac{1}{s}$ and $-\frac{1}{n-s}$ in the claim. The proof is concluded by applying the functional equation and the change of variable $\sigma \mapsto n-\sigma$: \begin{equation*} -\frac{1}{2\pi i}\int_{\Re \sigma=-\delta} \frac{E^*(g,\sigma)}{\sigma-s} \,\mathrm{d} \sigma =-\frac{1}{2\pi i}\int_{\Re \sigma=-\delta} \frac{E^*(\tensor[^t]{g}{^{-1}},n-\sigma)}{\sigma-s} \,\mathrm{d} \sigma =\frac{1}{2\pi i}\int_{\Re \sigma=n+\delta} \frac{E^*(\tensor[^t]{g}{^{-1}},\sigma)}{\sigma-(n-s)} \,\mathrm{d} \sigma\;. \end{equation*} The latter integral is equal to the dual sum because of Mellin inversion. \end{proof} \begin{cor}\label{cor:Epstein-pos} For any real $s\neq 0,n$ the function $E^*(g,s)$ is real and satisfies \begin{equation*} E^*(g,s)\geq -\frac{1}{s}-\frac{1}{n-s}\;. \end{equation*} \end{cor} \begin{defi} For any $g\in\mathbf{GL}_n(\mathbb{R})$ set \begin{equation*} \lambda_1(g)\coloneqq |\det g|^{-1/n} \min_{0\neq \mathrm{v} \in \mathbb{Z}^n} \|\mathrm{v} g\|_2 \end{equation*} to be the length of the shortest non-trivial vector in the unimodular lattice homothetic to $\mathbb{Z}^n g$. 
\end{defi} Recall that Mahler's compactness criterion implies that $\lambda_1\colon \lfaktor{\mathbf{PGL}_n(\mathbb{Z})}{\mathbf{PGL}_n(\mathbb{R})}\to \mathbb{R}_{>0}$ is a proper continuous function. \begin{thm}\label{thm:Epstein-cusp} Denote by $V_{n-1}=\nicefrac{\pi^{(n-1)/2}}{\Gamma\left((n+1)/2\right)}$ the volume of the $(n-1)$-dimensional unit ball. For any real $0<s<n$ if \begin{equation*} \lambda_1(g)\leq \begin{cases} 2^{-1/(s-1)} & s>1\\ 1 & s =1\\ V_{n-1} 2^{-\frac{(n-s)(n-1)}{n-s-1}} & s<1 \end{cases} \end{equation*} then \begin{equation*} E^*(g,s)+\frac{1}{s}+\frac{1}{n-s}\gg \begin{cases} \lambda_1(g)^{-s} & s>1\\ -\lambda_1(g)^{-1}\log \lambda_1(g) & s=1\\ \left(\lambda_1(g) \frac{2^{n-1}}{V_{n-1}}\right)^{-(n-s)/(n-1)} & s<1 \end{cases} \end{equation*} The implied constant above is independent of all parameters. \end{thm} \begin{remark} The lower bound above is optimal up to a constant. This can be seen by applying iteratively the Fourier expansion of $E^*(g,s)$ due to Terras \cite{TerrasFourier} to compute the constant term of $E^*(g,s)$. Write the Iwasawa decomposition of $g=u\cdot a \cdot k$ where $a=\diag(y_1,\ldots,y_n)$ with positive entries, $u\in \mathbf{N}(\mathbb R)$ is lower triangular unipotent and $k\in\mathbf{O}_n(\mathbb R)$. The constant term of $E^*(g,s)$ is equal to\footnote{The original expansion in \cite{TerrasFourier} is in terms of the Iwasawa decomposition of $\tensor[^t]{g}{^{-1}}$. To pass to an expression in terms of the decomposition of $g$ we apply first the functional equation.} \begin{equation*} \sum_{k=0}^{n-1} \zeta^*(s-k) \prod_{i=1}^{n-k-1} \left(\frac{y_{i+1}}{y_i}\right)^{i (1-s/n)} \cdot \prod_{i=n-k}^{n-1} \left(\frac{y_{i+1}}{y_i}\right)^{(n-i) s/n}\;. \end{equation*} While $\lambda_1(g) \asymp_n \prod_{i=1}^{n-1} \left(\frac{y_{i+1}}{y_i}\right)^{-(n-i)/n}$. Combining these two expressions we deduce that at least the constant term is asymptotic to the lower bound in the theorem above. The difficulty in establishing the theorem using the Fourier expansion is that it is hard to analyze for $n>2$ the contribution of the non-constant terms in the Fourier expansion when $\diag(y_1,\ldots,y_n)$ is near the walls of the positive Weyl chamber. Instead we study the behavior in the critical strip using an approximate functional equation. \end{remark} \begin{proof} Assume first $s\geq 1$ then $\lambda_1(g)\leq 1$ by assumption. Because the sums over the lattices $\mathbb{Z}^n g$ and $\mathbb{Z}^n \tensor[^t]{g}{^{-1}}$ in Theorem \ref{thm:approx-func-eq} are positive, we can compute a lower bound by restricting the sum to a line going through a vector of minimal length in $\mathbb{Z}^n g$. This implies \begin{equation*} E^*(g,s)+\frac{1}{s}+\frac{1}{n-s}\geq \frac{1}{2} \sum_{0\neq b\in\mathbb{Z}} f(s,|b|\lambda_1(g))\;. \end{equation*} The integral representation of $f(s,a)$ implies that it is a monotonic decreasing function of $a$ for $a>0$. Hence the right hand side above can be bounded below by \begin{equation*} \frac{1}{\lambda_1(g)}\int_{\lambda_1(g)}^{\infty} f(s,a)\,\mathrm{d} a=\frac{\lambda_1(g)^{-s}}{2} \int_{\lambda_1(g)^2}^{\infty} t^{(s-1)/2}\operatorname{erfc}(\sqrt{\pi t})\,\mathrm{d}^\times t \geq \frac{\lambda_1(g)^{-s}}{2} \int_{\lambda_1(g)^2}^{1} t^{(s-1)/2}\operatorname{erfc}(\sqrt{\pi t})\,\mathrm{d}^\times t \;. \end{equation*} The equality above follows by applying Fubini to the integral representation of $f(s,a)$ and the change of variables $t\mapsto \lambda_1(g)^2t$. 
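Explicitly, by Fubini
\begin{equation*}
\int_{\lambda_1(g)}^{\infty} f(s,a)\,\mathrm{d} a
=\int_1^\infty t^{s/2}\left(\int_{\lambda_1(g)}^\infty \exp(-\pi t a^2)\,\mathrm{d} a\right)\mathrm{d}^\times t
=\frac{1}{2}\int_1^\infty t^{(s-1)/2}\operatorname{erfc}\left(\sqrt{\pi t}\,\lambda_1(g)\right)\mathrm{d}^\times t\;,
\end{equation*}
and the stated change of variables turns the right-hand side, after division by $\lambda_1(g)$, into the middle expression of the display above.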
We bound the latter integral using the monotonicity inequality $\operatorname{erfc}(x)\geq\operatorname{erfc}(\sqrt{\pi})$ for $0\leq x\leq \sqrt{\pi}$: \begin{equation}\label{eq:lambda_1_int_small} \frac{\lambda_1(g)^{-s}}{2} \int_{\lambda_1(g)^2}^{1} t^{(s-1)/2}\operatorname{erfc}(\sqrt{\pi t})\,\mathrm{d}^\times t \gg \frac{\lambda_1(g)^{-s}}{2} \int_{\lambda_1(g)^2}^{1} t^{(s-1)/2}\,\mathrm{d}^\times t= \begin{cases} \frac{\lambda_1(g)^{-s}-\lambda_1(g)^{-1}}{s-1} & s\neq 1\\ -\lambda_1(g)^{-1}\log \lambda_1(g) & s=1 \end{cases} \end{equation} This establishes the claim in case $s=1$. In case $s>1$ the assumption $\lambda_1(g)^{s-1}<1/2$ implies that $\lambda_1(g)^{-1}\leq1/2 \lambda_1(g)^{-s}$ and the claim follows again from \eqref{eq:lambda_1_int_small}. The lower bound for $s<1$ will follow from applying the $s>1$ case to the dual lattice which also contributes to $E^*(g,s)$ with $s$ replaced by $n-s$. We need only to establish \begin{equation}\label{eq:lambda1-dual-ineq} \lambda_1\left(\tensor[^t]{g}{^{-1}}\right)^{n-1}\leq \frac{2^{n-1}}{V_{n-1}} \lambda_1(g) \;. \end{equation} To prove inequality \eqref{eq:lambda1-dual-ineq} fix $v_1,\ldots,v_n$ a basis of the lattice $\Lambda\coloneqq\mathbb{Z}^n g$, where $v_1$ is a vector of minimal length. Denote by $\mathrm{v}_1^*,\ldots,\mathrm{v}_n^*$ the dual basis of $\Lambda^*\coloneqq\mathbb{Z}^n \tensor[^t]{g}{^{-1}}$. Then $\mathrm{v}_2^*,\ldots,\mathrm{v}_n^*$ span a lattice $\Lambda_1^*$ in the $n-1$-dimensional hyperplane $\mathrm{v}_1^\perp$ and \begin{equation*} |\det g|^{-1}=\operatorname{covol}\left(\Lambda^*\right)=\operatorname{covol}\left(\mathrm{v}_1^*\mathbb{Z}+\Lambda_1^*\right) =\left|\left\langle \mathrm{v}_1^*, \frac{\mathrm{v}_1}{\|\mathrm{v}_1\|}\right\rangle\right| \operatorname{covol}\left(\Lambda_1^*\right)= \|\mathrm{v}_1\|^{-1} \operatorname{covol}\left(\Lambda_1^*\right)\;. \end{equation*} Hence $\operatorname{covol}\left(\Lambda_1^*\right)= \lambda_1(g) |\det g|^{1/n-1}$ and Minkowski's first theorem implies that there is a vector $\mathrm{v}^*\in \Lambda_1^*\subset \Lambda^*$ satisfying $V_{n-1}\|\mathrm{v}_*\|_2^{n-1}\leq 2^{n-1} \lambda_1(g) |\det g|^{1/n-1}$. This implies \eqref{eq:lambda1-dual-ineq} and the second claimed inequality. \end{proof} \begin{cor}\label{cor:Epstein-cusp-uniform} Assume $1/n\leq s<1$. There are effectively computable constants $A_0,B_0>0$, depending only on $n$ and $s$, such that for all $g\in \mathbf{GL}_n(\mathbb{R})$ \begin{equation*} E^*(g,ns)\geq A_0 \lambda_1(g)^{-ns}-B_0\;. \end{equation*} In fact, \begin{align*} A_0&=\operatorname{erfc}(\sqrt{\pi}) \begin{cases} \frac{1}{2(ns-1)} & s>1/n\\ \log 2 &s=1/n \end{cases}, & B_0&=\frac{1}{n}\left(\frac{1}{s}+\frac{1}{1-s}\right)+A_0 \begin{cases} 2^{ns/(ns-1)} & s>1/n\\ 2 & s=1/n \end{cases} \end{align*} are admissible. \end{cor} \begin{proof} This follows immediately with $B_0=\frac{1}{n}\left(\frac{1}{s}+\frac{1}{1-s}\right)$ from Theorem \ref{thm:Epstein-cusp} above if $\lambda_1(g)< 2^{-1/(ns-1)}$ and $s>1/n$ or if $s=1/n$ and $\lambda_1(g)<1/2$. The specific value of $A_0$ is a direct consequence of the proof. Otherwise, assume first that $s>1/n$. If $\lambda_1(g)\geq 2^{-1/(ns-1)}$ then $\lambda_1(g)^{-ns}\leq 2^{ns/(ns-1)}$. Moreover, we know from Corollary \ref{cor:Epstein-pos} that $E^*(g,ns)\geq - \frac{1}{n}\left(\frac{1}{s}+\frac{1}{1-s}\right)$. Hence the claim holds for any $g$ with $B_0=\frac{1}{n}\left(\frac{1}{s}+\frac{1}{1-s}\right)+2^{ns/(ns-1)}A_0$. The argument for $s=1/n$ is analogous. 
\end{proof}

\section{Toral periods of the Epstein zeta function}\label{sec:periods}
We recall a formula originally due to Hecke that relates Hecke $L$-functions of a number field to toral periods of the Epstein zeta function. The proofs are straightforward using the unfolding method and can be extended to any Grossencharakter $L$-function in the ad\`elic setting, cf.\ \cite[Lemma 10.4]{ELMVCubic} and \cite{Wielonsky}.

Let $E/\mathbb{Q}$ be a degree $n$ number field with $r_1$ real places and $r_2$ inequivalent complex places. Denote by $D\coloneqq\disc(\mathcal{O}_E)$ the discriminant of its ring of integers and let $R\coloneqq \Reg(\mathcal{O}_E)$ be its regulator. Set $h\coloneqq\#\mathrm{Cl}(E)$. Let $E_\infty$ be the \'etale-algebra $E\otimes \mathbb{R}$ over $\mathbb{R}$. Fix once and for all a ring isomorphism
\begin{equation*}
\iota\colon E_\infty\to \mathbb{R}^{r_1}\times \mathbb{C}^{r_2}\;.
\end{equation*}
This map is unique up to post-composition with permutations of the real and complex places, respectively, and complex conjugation at each complex place. We henceforth identify the right hand side with $\mathbb{R}^n=\mathbb{R}^{r_1+2r_2}$ in the standard manner. For any $\mathbb{Z}$-lattice $\Lambda \subset E$ we denote by $\tensor[^\iota]{\Lambda}{}$ the element of $\lfaktor{\mathbf{PGL}_n(\mathbb{Z})}{\mathbf{PGL}_n(\mathbb{R})}$ corresponding to the lattice $\iota(\Lambda)$. Specifically, the matrix whose rows are any $\mathbb{Z}$-basis of the lattice $\iota(\Lambda)$ is a representative of the coset $\tensor[^\iota]{\Lambda}{}$.

\begin{defi}
We denote by $\left(\mathbb{R}^\times\right)^\Delta$ the diagonal embedding of $\mathbb{R}^\times$ in $E_\infty^\times$. Set
\begin{equation*}
H\coloneqq \lfaktor{\left(\mathbb{R}^\times\right)^\Delta}{E_\infty^\times}=\lfaktor{\left(\mathbb{R}^\times\right)^\Delta}{\left(\mathbb{R}^\times\right)^{r_1}\times {\left(\mathbb{C}^\times\right)^{r_2}}}\,.
\end{equation*}
We identify $H$ with a maximal torus subgroup in $\mathbf{PGL}_n(\mathbb{R})$ using the map $\iota$. The Haar measure on $H$ is normalized to be consistent with the standard Haar measures on $E_\infty^\times$ and $\mathbb{R}^\times$.
\end{defi}

Dirichlet's unit theorem implies that $\mathcal{O}_E^\times\slash \mathbb{Z}^\times$ is a lattice in $H$ of covolume
\begin{equation*}
\frac{2^{r_1} \pi^{r_2} n R}{w}\;,
\end{equation*}
where $w$ is the number of roots of unity in $E$.

\begin{defi}
Define $[H]=\lfaktor{\mathcal{O}_E^\times}{H}$ and normalize the Haar measure on $[H]$ so it has volume $1$. If $\,\mathrm{d} ^\times h$ is the Haar measure on $H$, then the measure on $[H]$ descends from the Haar measure
\begin{equation*}
\frac{w\,\mathrm{d}^\times h}{2^{r_1}\pi^{r_2}n R} \;.
\end{equation*}
If $\Lambda\subset E$ is a fractional $\mathcal{O}_E$-ideal then the stabilizer in $H$ of $\tensor[^\iota]{\Lambda}{}$ is the lattice $\mathcal{O}_E^\times\slash \mathbb{Z}^\times$. Hence $\tensor[^\iota]{\Lambda}{}H$ is a periodic $H$-orbit isomorphic to $[H]$. This orbit depends only on the ideal class of $\Lambda$. The ideal classes of $\mathcal{O}_E$ give rise to a finite collection of periodic $H$-orbits, cf.\ \cite{ELMVPeriodic}. This collection is called a \emph{packet} of periodic $H$-orbits.
\end{defi}

\begin{defi}\label{defi:partial-dedekind}
Let $\Lambda\subset E$ be a fractional $\mathcal{O}_E$-ideal.
The partial Dedekind zeta-function of $\Lambda$ is defined by the Dirichlet series
\begin{equation*}
\zeta_{\Lambda}(s)\coloneqq\Nr(\Lambda)^s \sum_{0\neq \mathrm{v}\in \Lambda\slash\mathcal{O}_E^\times} \left|\Nr \mathrm{v} \right|^{-s}
\end{equation*}
that converges for $\Re s>1$. The zeta function depends only on the class of $\Lambda$ modulo the principal ideals. For every class group character $\chi\colon \mathrm{Cl}(E)\to\mathbb{C}^\times$ the class group $L$-function satisfies
\begin{equation*}
L(s,\chi)=\sum_{[\Lambda]\in \mathrm{Cl}(E)} \zeta_{\Lambda}(s) \bar{\chi}(\Lambda)\;.
\end{equation*}
We also define the completed partial zeta function as
\begin{equation*}
\zeta_{\Lambda}^*(s)\coloneqq \left(\pi^{-s/2}\Gamma\left(\frac{s}{2}\right)\right)^{r_1} \big((2\pi)^{-s}\Gamma\left(s\right)\big)^{r_2} |D|^{s/2} \zeta_{\Lambda}(s)\;.
\end{equation*}
The completed $L$-function $L^*(s,\chi)$ of a class group character $\chi$ is defined similarly. These satisfy a functional equation due to Hecke. The functional equation for these $L$-functions is a direct consequence of the functional equation for the completed Epstein zeta function and Theorem \ref{thm:Hecke-period} below.
\end{defi}

\begin{thm}[Hecke]\label{thm:Hecke-period}
Let $\Lambda\subset E$ be a fractional $\mathcal{O}_E$-ideal. Then $\tensor[^\iota]{\Lambda}{}H$ is a periodic $H$-orbit and for any $s\neq 0,1$
\begin{equation*}
\int_{[H]} E^*(\tensor[^\iota]{\Lambda}{}h,ns) \,\mathrm{d}^\times h =\frac{w}{2^{r_1} n R} \zeta^*_{\Lambda}(s)\;.
\end{equation*}
\end{thm}

We reproduce the proof for completeness' sake using the following important lemma, also due to Hecke. The crux of the proof is that the ring of $E_\infty^\times$-invariant polynomials on $E_\infty$ is generated by the norm function.

\begin{lem}[Hecke's Trick]\label{lem:Hecke-trick}
Equip $E_\infty$ with a Euclidean inner product by summing the standard inner products on each copy of $\mathbb{R}$ and $\mathbb{C}$. Then for all $\mathrm{v}\in E_\infty^\times$
\begin{equation*}
\int_{H} \left\|\mathrm{v}h\right\|_2^{-ns} \left|\Nr h\right|^{s}\,\mathrm{d}^\times h=\frac{\pi^{r_2}\Gamma\left(s/2\right)^{r_1}\Gamma\left(s\right)^{r_2}}{\Gamma\left(ns/2\right)} |\Nr \mathrm{v}|^{-s}\;.
\end{equation*}
\end{lem}
\begin{proof}
Using the change of variables $h\mapsto \mathrm{v}h$ we see that
\begin{equation*}
\int_{H} \left\|\mathrm{v}h\right\|_2^{-ns} \left|\Nr h\right|^s \,\mathrm{d}^\times h=|\Nr \mathrm{v}|^{-s} \int_{H} \left\| h\right\|_2^{-ns} \left|\Nr h\right|^s \,\mathrm{d}^\times h=|\Nr \mathrm{v}|^{-s} I(s)\;.
\end{equation*}
To evaluate the integral on the right-hand side, $I(s)$, we calculate the integral $\int_{E_\infty^\times}e^{-\|y\|_2^2} \left|\Nr y\right|^s \,\mathrm{d}^\times y$ in two different ways. On one hand we use the compatibility of Haar measures on quotients and the change of variables $t\mapsto t \|y\|_2$
\begin{align*}
\int_{E_\infty^\times}e^{-\|y\|_2^2} \left|\Nr y\right|^{s} \,\mathrm{d}^\times y&=\int_{H={\left(\mathbb{R}^\times\right)^\Delta}\backslash{E_\infty^\times}} \int_{\mathbb{R}^\times} e^{-(|t|\|y\|_2)^2} |t|^{ns} \left|\Nr y\right|^{s}\,\mathrm{d}^\times t \,\mathrm{d}^\times \left(\mathbb{R}^\times y\right)\\
&=\int_{H} \|y\|_2^{-ns} \left|\Nr y\right|^{s} \int_{\mathbb{R}^\times} e^{-(|t|\|y\|_2)^2} (|t|\|y\|_2)^{ns} \,\mathrm{d}^\times t \,\mathrm{d}^\times \left(\mathbb{R}^\times y\right)=I(s)\cdot \int_{0}^{\infty} e^{-t^2}t^{ns}\frac{2\,\mathrm{d} t}{t}\\
&=I(s)\Gamma(ns/2)\;.
\end{align*}
On the other hand, using polar coordinates for each complex coordinate, the integral over $E_\infty^\times$ decomposes as a product
\begin{equation*}
\int_{E_\infty^\times}e^{-\|y\|_2^2} \left|\Nr y\right|^{s} \,\mathrm{d}^\times y= \prod_{i=1}^{r_1} \int_{\mathbb{R}^\times} e^{-t_i^2}|t_i|^s \,\mathrm{d}^\times t_i \cdot \prod_{i=r_1+1}^{r_1+r_2} 2 \pi \int_{\mathbb{R}_{>0}} e^{-r_i^2}r_i^{2s} \,\mathrm{d}^\times r_i=\Gamma(s/2)^{r_1} \cdot \pi^{r_2}\Gamma(s)^{r_2}\;.
\end{equation*}
The proof concludes by comparing the two expressions for $\int_{E_\infty^\times}e^{-\|y\|_2^2} \left|\Nr y\right|^{s} \,\mathrm{d}^\times y$.
\end{proof}

\begin{proof}[Proof of Theorem \ref{thm:Hecke-period}]
We consider $h\in E_\infty^\times$ as an element of $\mathbf{GL}_n(\mathbb{R})$ using the map $\iota$, then $|\det h|=\left|\Nr h\right|$. Rewrite the period of the Epstein zeta-function using the standard unwinding transformation
\begin{align*}
\int_{[H]} E(\tensor[^\iota]{\Lambda}{}h,ns)\,\mathrm{d}^\times h = \frac{1}{2}&\operatorname{covol}(\Lambda)^s \sum_{0\neq \mathrm{v}\in \Lambda\slash \mathcal{O}_E^\times} \sum_{u\in \mathcal{O}_E^\times} \int_{[H]} \|\mathrm{v}uh\|_2^{-sn} \left|\Nr h\right|^s\,\mathrm{d}^\times h\\
=&\operatorname{covol}(\Lambda)^s \sum_{0\neq \mathrm{v}\in \Lambda\slash \mathcal{O}_E^\times} \int_H \|\mathrm{v}h\|_2^{-sn}\left|\Nr h\right|^s \frac{w\,\mathrm{d}^\times h}{2^{r_1}\pi^{r_2}n R} \;.
\end{align*}
The factor $1/2$ is absorbed in the difference between $\mathcal{O}_E^\times$ and the group $\mathcal{O}_E^\times \slash \mathbb{Z}^\times$. The proof concludes by applying Lemma \ref{lem:Hecke-trick} and using the formula $\operatorname{covol}(\Lambda)=2^{-r_2}\sqrt{|D|} \Nr(\Lambda)$.
\end{proof}

\section{The top period}\label{sec:highest}
We continue to fix a degree $n$ number field $E/\mathbb{Q}$ and carry over all the notation from the previous sections. Henceforth we fix $1/2\leq s<1$ and define the function $Z\colon \mathrm{Cl}(E)\to\mathbb{C}$ as in the introduction,
\begin{equation*}
Z(\Lambda)\coloneqq \frac{w}{2^{r_1} n R} \zeta^*_{\Lambda}(s)=\int_{[H]} E^*(\tensor[^\iota]{\Lambda}{}h,ns) \,\mathrm{d}^\times h\;.
\end{equation*}
Notice that the Fourier coefficients of the function $Z$ satisfy
\begin{equation*}
\hat{Z}(\chi)=\frac{1}{h}\sum_{[\Lambda]\in \mathrm{Cl}(E)} Z(\Lambda)\bar{\chi}(\Lambda)= \frac{w}{2^{r_1} n h R} L^*(s,\chi)\;.
\end{equation*}
In this section we prove the following lower bound on the value of $Z$ at the identity class. This is a key part of our argument.

\begin{prop}\label{prop:Z(O)-bound}
Let $1/2\leq s<1$. There are effectively computable constants $A_1,B_0>0$ depending only on $s$ and $n$ such that
\begin{equation*}
Z(\mathcal{O}_E) \geq \frac{A_1|D|^{s/2}-B_0 R}{R}\;.
\end{equation*}
In particular, $Z(\mathcal{O}_E)$ is positive if $|D|^{s/2}/R\gg_{s,n} 1$.
\end{prop}

We observe that the lower bound depends only on the regulator and not on the shape of the unit lattice. This is possible because the exponential map converts a linear combination of trace-less vectors in the logarithmic space to a product of units in $E_\infty$. Hence the average length in $E_\infty$, over a fundamental domain of units in the logarithmic space, almost decomposes as a product of averages. This reduces the proposition to a question of bounding a product of lengths of vectors forming a basis for the unit lattice.
To control the latter we approximate the unit lattice by the sub-lattice spanned by vectors realizing the successive minima and use Minkowski's second theorem.

\begin{proof}
Consider as usual the logarithmic group homomorphism $\log_E\colon E_\infty^\times \to \mathbb{R}^{r_1+r_2}$,
\begin{equation*}
\log_E(u_1,\ldots,u_{r_1+r_2})\coloneqq\left(\log |u_1|_{\mathbb{R}},\ldots,\log|u_{r_1}|_{\mathbb{R}}, 2\log|u_{r_1+1}|_{\mathbb{C}},\ldots,2\log|u_{r_1+r_2}|_{\mathbb{C}} \right)\;.
\end{equation*}
We will need the right-inverse
\begin{equation*}
\exp_E(x_1,\ldots,x_{r_1+r_2})\coloneqq (\exp(x_1),\ldots,\exp(x_{r_1}),\exp(x_{r_1+1}/2),\ldots,\exp(x_{r_1+r_2}/2))\;.
\end{equation*}
The kernel of $\log_E$ is the group of elements whose coordinate-wise absolute values are all $1$. It is a compact subgroup that acts on $E_\infty$ by orthogonal transformations. In particular, $E^*(g,s)$ is invariant under right multiplication by $\ker (\log_E)$.

The map $\log_E$ furnishes a homomorphism from $H$ onto the trace $0$ subspace $\mathbb{R}^{r_1+r_2}_0$. Dirichlet's unit theorem states that the image of $\mathcal{O}_E^\times$ is a lattice in $\mathbb{R}^{r_1+r_2}_0$ of covolume $R$, where the covolume is computed with respect to the usual inner product on $\mathbb{R}^{r_1+r_2}$. Hence we can compute the integral of the spherical Epstein zeta function over the periodic $H$-orbit $\tensor[^\iota]{\mathcal{O}}{_E}H$ using a normalized Lebesgue measure on $\mathbb{R}^{r_1+r_2}_0$:
\begin{equation*}
\int_{[H]} E^*(\tensor[^\iota]{\mathcal{O}}{_E}h,ns)\,\mathrm{d}^\times h= \frac{1}{R}\int_{\mathcal{F}} E^*(\tensor[^\iota]{\mathcal{O}}{_E}\exp_E(x_1,\ldots,x_{r_1+r_2}),ns) \,\mathrm{d}^0(x_1,\ldots, x_{r_1+r_2})\;,
\end{equation*}
where $\mathcal{F}$ is any fundamental domain for $\log_E(\mathcal{O}_E^\times)$ in $\mathbb{R}^{r_1+r_2}_0$ and $\,\mathrm{d}^0(x_1,\ldots, x_{r_1+r_2})$ is the standard Lebesgue measure on the trace-less subspace $\mathbb{R}^{r_1+r_2}_0$. Corollary \ref{cor:Epstein-cusp-uniform} and the formula above imply that
\begin{equation}\label{eq:Z(OE)-cusp}
Z(\mathcal{O}_E) \geq A_0 \frac{1}{R} \int_{\mathcal{F}} \lambda_1(\tensor[^\iota]{\mathcal{O}}{_E}\exp_E(x))^{-ns} \,\mathrm{d}^0 x-B_0 =A_0 I_\lambda(s)-B_0\;.
\end{equation}
Our aim now is to provide a proper lower bound for the normalized integral $I_\lambda(s)$.

Denote by $\|x\|_{\infty}=\max(|x_1|,\ldots,|x_{r_1}|,|x_{r_1+1}|/2,\ldots,|x_{r_1+r_2}|/2)$ the supremum norm on $\mathbb{R}^{r_1+r_2}$. This restricts to a norm on $\mathbb{R}^{r_1+r_2}_0$. Denote by $\tilde{V}_{r_1,r_2}$ the Lebesgue measure of the unit ball of the latter norm. Let $\theta_1,\ldots,\theta_{r_1+r_2-1}\in\log_E(\mathcal{O}_E^\times)$ be vectors realizing the successive minima of the lattice $\log_E(\mathcal{O}_E^\times)$ with respect to the $\|\bullet\|_\infty$ norm. By Minkowski's second theorem
\begin{equation}\label{eq:Minkowski-Theta}
\|\theta_1\|_\infty \cdots \|\theta_{r_1+r_2-1}\|_\infty \cdot \tilde{V}_{r_1,r_2}\leq 2^{r_1+r_2-1} R\;.
\end{equation}
Denote by $\Theta\subset \log_E(\mathcal{O}_E^\times) \subset \mathbb{R}^{r_1+r_2}_0$ the lattice spanned by $\theta_1,\ldots,\theta_{r_1+r_2-1}$. Define
\begin{equation*}
\mathcal{F}_\Theta\coloneqq \left\{\sum_{j=1}^{r_1+r_2-1} \varepsilon_j \theta_j \mid 0\leq \varepsilon_j < 1 \right\}\;.
\end{equation*}
It is a fundamental domain for $\Theta$. This domain can be covered by exactly $\left[ \log_E(\mathcal{O}_E^\times) \colon \Theta\right]$ fundamental domains of $\log_E(\mathcal{O}_E^\times)$.
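Note that for a unit $u\in\mathcal{O}_E^\times$ one has $\exp_E(x+\log_E(u))=uk\exp_E(x)$ for some $k\in\ker(\log_E)$; multiplication by $u$ preserves the lattice $\iota(\mathcal{O}_E)$ and $k$ acts by an orthogonal transformation, so the integrand $\lambda_1(\tensor[^\iota]{\mathcal{O}}{_E}\exp_E(x))^{-ns}$ is invariant under translating $x$ by $\log_E(\mathcal{O}_E^\times)$. Combined with
\begin{equation*}
\operatorname{covol}(\Theta)=\left[\log_E(\mathcal{O}_E^\times)\colon\Theta\right]R\;,
\end{equation*}
this justifies the first equality in the computation below.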
We can now evaluate \eqref{eq:Z(OE)-cusp} over each of these domains to deduce
\begin{align*}
I_\lambda(s) &= \frac{1}{\operatorname{covol}(\Theta)} \int_{\mathcal{F}_\Theta} \lambda_1(\tensor[^\iota]{\mathcal{O}}{_E}\exp_E(x))^{-ns} \,\mathrm{d}^0x\\
&\gg_{s, n}\frac{|D|^{s/2}}{\operatorname{covol}(\Theta)} \int_{\mathcal{F}_\Theta} \|\exp_E(x)\|_2^{-ns} \,\mathrm{d}^0x\;,
\end{align*}
where in the last inequality we have used the fact that the covolume of $\mathcal{O}_E$ is $2^{-r_2}\sqrt{|D|}$ and that it contains the short vector $(1,\ldots,1)$.

Define the norm $\|y\|_{E_\infty}=\max(|y_1|_{\mathbb{R}},\ldots,|y_{r_1}|_{\mathbb{R}},|y_{r_1+1}|_{\mathbb{C}},\ldots,|y_{r_1+r_2}|_{\mathbb{C}})$. We apply the inequality $\|\bullet\|_2\leq \sqrt{r_1+r_2} \|\bullet \|_{E_\infty}$ in $E_\infty$ and then rewrite the last integral using the basis $\theta_1,\ldots,\theta_{r_1+r_2-1}$:
\begin{align}
\nonumber \frac{1}{\operatorname{covol}(\Theta)}\int_{\mathcal{F}_\Theta} &\|\exp_E(x)\|_2^{-ns} \,\mathrm{d}^0x \gg_{n,s} \frac{1}{\operatorname{covol}(\Theta)} \int_{\mathcal{F}_\Theta} \|\exp_E(x)\|_{E_\infty}^{-ns} \,\mathrm{d}^0 x \geq \frac{1}{\operatorname{covol}(\Theta)}\int_{\mathcal{F}_\Theta} \exp(-ns\|x\|_\infty) \,\mathrm{d}^0 x\\
\nonumber &\geq\int_0^1 \cdots \int_0^1 \exp(-ns\sum_{j=1}^{r_1+r_2-1}\varepsilon_j\|\theta_j\|_\infty ) \,\mathrm{d} \varepsilon_1\cdots \,\mathrm{d} \varepsilon_{r_1+r_2-1} = \prod_{j=1}^{r_1+r_2-1} \int_0^1 \exp(-n s \varepsilon_j\|\theta_j\|_\infty) \,\mathrm{d} \varepsilon_j\\
\label{eq:Ftheta-integral} &= \prod_{j=1}^{r_1+r_2-1} \frac{1-\exp(-n s \|\theta_j\|_\infty)}{n s \|\theta_j\|_\infty}\;,
\end{align}
where in the second line we have used the triangle inequality for the norm $\|\bullet\|_\infty$ on $\mathbb{R}^{r_1+r_2}_0$.

We bound the denominator using Minkowski's second theorem \eqref{eq:Minkowski-Theta}. To bound the numerator we use the inequality $\|\log_E(y)\|_\infty\gg_n 1$ for every $y\in\mathcal{O}_E^\times\setminus \mu_E$, where $\mu_E<E^\times$ is the group of roots of unity. This inequality with an effective constant follows from the Northcott property, cf.\ \cite[\S1.6.15]{BombieriGubler}. The best possible bound (up to a multiplicative constant) follows from the recent breakthrough of V.\ Dimitrov \cite{Dimitrov} resolving the Schinzel-Zassenhaus conjecture:
\begin{equation*}
\|\log_E(y)\|_\infty\geq \frac{\log 2}{4 n}\;.
\end{equation*}
Worse bounds follow from the result of Dobrowolski \cite{Dobrowolski} towards the Lehmer conjecture. See also Blanksby-Montgomery \cite{BlanksbyMontgomery} and Stewart \cite{Stewart}. The claim finally follows by applying these bounds to the numerator and denominator in \eqref{eq:Ftheta-integral} and substituting into \eqref{eq:Z(OE)-cusp}.
\end{proof}

\bibliographystyle{alpha}
\section{Introduction and Rationale}
\label{sec:intro}

The vision of the Internet of Things~(IoT) opens a new era of technology penetration into human lives, which touches upon a wide range of use cases: from Smart Home to Smart City and from Smart Grid to Factory Automation~\cite{extra1}. \textcolor{black}{The number of IoT devices that can collect, store, combine, and analyze the massive amounts of data around them, thus producing valuable knowledge and taking relevant actions, is growing uncontrollably in an attempt to offer decisive societal benefits while handling both routine and critical tasks across multiple~verticals~\cite{cisco2017global}.}

As it simplifies the lives of people, the IoT also brings unprecedented security and privacy risks, since close to any object around us becomes interconnected with others to collect and process sensitive information~\cite{lin2017survey}. The conventional \emph{massive} IoT involves numerous low-cost devices (e.g., sensors, actuators, and smart meters) with limited computational capabilities and stringent power constraints; hence, the traditional security and privacy solutions had to be reconsidered and adjusted to the specifics of massive IoT. \textcolor{black}{Over recent decades, security and privacy in IoT remained a major research topic subject to heated discussions, e.g., in the areas of lightweight cryptography~\cite{shamir2017summary}, secure connection and trust establishment~\cite{guo2017survey}, and privacy-preserving data processing~\cite{zhou2017security}.} While there are multiple open problems yet to be resolved, the current progress in this field promises to provide the demanded levels of security to these massive IoT~deployments.

Meanwhile, in contrast to the massive and low-cost IoT solutions, an emerging trend in today's IoT is a rapid proliferation of high-end IoT equipment that features more capable connected devices. These include sophisticated wearables (including augmented, virtual, and mixed reality systems), smart vehicles, and consumer drones (see Fig.~\ref{fig:human}), which may collectively be named \emph{Advanced IoT~(A-IoT)}. These relatively high-cost devices have more abundant performance, memory, and battery resources to execute full-scale security and privacy protocols; thus, the establishment of secure machine-to-machine connections may not be a challenging problem for the~A-IoT.

\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\columnwidth]{figures/concept.pdf}
\caption{Human-centric Advanced IoT (A-IoT) applications in a Smart City.}
\label{fig:human}
\end{figure}

At the same time, a number of specific security and privacy concerns emerge in connection with such systems. Unauthorized access to these powerful devices may lead to severe risks that range from theft of this high-cost equipment (drones or cars) up to putting human lives in danger by, e.g., manipulating the information projected onto augmented reality glasses or maneuvering smart vehicles uncontrollably~\cite{joy2018internet}. \textcolor{black}{Therefore, reliable assessment of the \textit{fact of ownership} for the A-IoT devices in both personal and collective use becomes one of the critical challenges faced today, which is very different from massive~IoT.}

In this work, we first systematically review the unprecedented research challenges related to determining the human ownership of the A-IoT systems.
We then classify the specific features of the A-IoT that can be employed to securely verify the fact of ownership of the A-IoT devices and map them onto the challenges by illustrating how the features can complement each other while covering the potential issues. We also discuss the concept of multi-factor human authentication with the A-IoT system, where multiple heterogeneous factors are intelligently combined to achieve higher levels of security while not compromising the usability of the A-IoT services. We finally enumerate the important practical matters to be resolved on the way towards successful implementation of the introduced concept.

\section{Challenges of Determining Ownership in A-IoT}
\label{sec:limitations}

\textcolor{black}{As unauthorized access to A-IoT systems brings severe security threats, the challenge of reliable access control becomes one of the most crucial research problems for securing A-IoT solutions.} An access control procedure can generally be decomposed into user authentication and authorization. \textcolor{black}{The second stage is relatively less complicated and can be implemented by conventional discretionary, mandatory, or role-based access control methods.} However, the first stage introduces a number of A-IoT-specific research questions that we carefully review in this section.

\subsection{Multi-Modality of Human-Computer Interaction}

Today, most of the conventional ICT systems are equipped with advanced input devices, such as keyboards and touchscreens, as well as output devices, most commonly, LCD screens used for human-computer interaction~(HCI). \textcolor{black}{Since textual input remains the dominating form of HCI, these systems have historically been adopted for authentication purposes: memorable textual or numerical passwords, the possibility to display a hint or advanced visual instructions, etc.} In contrast, the very nature of authentication does not imply text-based commands or responses. Very few of the emerging IoT devices are controlled by a keyboard; hence, the authentication methods based on textual passwords will need to evolve accordingly for them to continue being usable on the mass IoT market.

\subsection{Robustness to Environment and User Behavior}

The authentication process of today is typically applied in dedicated, comfortable, and stationary environments. Many such actions occur indoors, where neither weather conditions nor other unpredictable factors can impact the authentication decisions. Even when this process happens outdoors, the input devices used to enter the security credentials acquire additional protection to resist the environmental changes up to a certain~extent.

However, the A-IoT systems in Smart Cities are mobile by design. Their interactions with a user are spontaneous and occur in uncontrolled and unpredictable environments. Moreover, even under regular weather/environment conditions, the initial state as the user begins interacting with the A-IoT system may be notably different. \textcolor{black}{For example, the user opening a vehicle may be wearing gloves during winter time, such that a fingerprint scanner installed on the door handle may not be available.} Therefore, authentication of A-IoT devices must be made robust to both dynamic environmental conditions and flexible user behavior.
\subsection{High Levels of Reliance and Trust}

\textcolor{black}{Broad penetration of ICT systems on the consumer market and their role in daily human life have always been associated with a level of trust that people grant to these systems.} High trust is impossible to achieve without appropriate authentication and authorization procedures~\cite{pascal}. At the same time, the A-IoT systems are more elaborate than the ICT platforms of today. They are often granted direct access to sensitive personal information; hence, the data they collect and handle should not be made available to potential third parties. On the other hand, large vehicles, drones, and industrial robots represent more capable platforms, sometimes termed \emph{sources of increased danger}. This term recognizes that they may become hazardous where human health and even lives are concerned. Therefore, A-IoT systems have to feature more secure and reliable authentication procedures, so that they are capable of distinguishing their valid user from an unauthorized adversary.

\subsection{Constrained Response Times and Usability}

Regardless of their stringent security demands, the response times of A-IoT authentication are also crucial for its successful adoption. \textcolor{black}{Previously, the authentication process was a dedicated phase of the HCI, thus making users prepare for it both physically and mentally: recall the secret phrase, bring the token key, etc.} With further development and penetration of the A-IoT systems, they become more ubiquitous and omnipresent. In future Smart Cities, users will be interacting with various A-IoT devices numerous times a day; hence, they cannot afford to spend several seconds authenticating with each of those or to tolerate second-long delays in acquiring access.

In response to these demands, A-IoT authentication must evolve to become capable of operating within stringent time intervals, preferably in an inconspicuous form, i.e., transparent to the user. For multi-functional A-IoT systems, this may even bring the need to temporarily provide access to certain basic functionality sooner, while more rigorous authentication is performed in the background. This is because the users are unlikely to require sensitive actions from the very first moments of their interaction with the target A-IoT platform.

From the above, it follows that designing adequate A-IoT authentication mechanisms is challenging. However, the more advanced capabilities and functions of A-IoT devices can be beneficial when coining novel authentication schemes, and we review these in the next section.

\section{Enablers for Improved A-IoT Authentication}
\label{sec:enablers}

\textcolor{black}{Reliable human user authentication by the A-IoT system is a complex task due to various challenges as discussed previously.} Fortunately, modern A-IoT platforms feature a number of dedicated input devices as well as rich sensing, communication, and computation capabilities, which altogether can be employed during the authentication stage. \textcolor{black}{Various user authentication methods become suitable for new A-IoT systems utilizing this diverse functionality.} In this section, we discuss these authentication methods and their applicability in the A-IoT systems. For convenience, we sort them by following their mass adoption: from well-known to emerging, see Table~\ref{tab:comp}.

\begin{table}[!h]
\caption{Authentication factors suitable for A-IoT.
Type:~K~--~knowledge;~O~--~ownership; BI -- biometric; BE~--~behavior. Action:~A~--~active;~P~--~passive. Duration:~S~--~short~($<1$~sec);~M~--~medium ($1-15$ sec); L~--~Long~($>15$~sec).}\label{tab:comp} \centering \def1.2{1.2} \begin{tabular}{m{4cm} ccc} \hline\hline \textbf{Factor} & \textbf{Type} & \textbf{Action} & \textbf{Duration} \\ \hline \hline \textbf{PIN code} & K & A & S \\\hline \textbf{Password} & K & A & M \\\hline \textbf{Token} & O & P & S \\\hline \textbf{Voice} & BI/BE & A/P & S/M \\\hline \textbf{Facial} & BI & A/P & S/M \\\hline \textbf{Ocular-based} & BI & A & S/M \\\hline \textbf{Fingerprint} & BI & A/P & S \\\hline \textbf{Hand geometry} & BI & A/P & S \\\hline \textbf{Geographical location} & BE & P & L \\\hline \textbf{Vein recognition} & BI & A/P & S \\\hline \textbf{Thermal image} & BI/BE & P & S/M \\\hline \textbf{Behavior patterns} & BE & P & L \\\hline \textbf{Weight} & BI & P & S \\\hline \textbf{Electrocardiographic~(ECG) recognition} & BI/BE & P & S-L \\\hline \hline \end{tabular} \end{table} \subsection{Review of Possible Enablers} \label{sec:review} \subsubsection{Hardware tokens} \textcolor{black}{The automotive cluster has its legacy security mechanisms, primarily centered around the use of hardware tokens that represent the \textit{ownership} factor.} Recently, such tokens have been complemented by increasingly popular software-based replacements installed on smartphones\footnote{F.~Lardinois, ``BMW~wants~to~turn~your smartphone~into~your~car~key,'' \url{https://techcrunch.com/2018/02/26/bmw-wants-to-turn-your-smartphone-into-your-car-key/} [Accessed~November~2018]}. By leveraging this concept, the A-IoT systems can make a step forward and utilize the tokens placed not only in the smartphones but also on wearable devices. \subsubsection{Memorable passwords/PINs} \textcolor{black}{Utilization of conventional PINs is currently acceptable worldwide owing to the widespread adoption of ATMs and early-mobile phone era.} A combination of button presses to unlock a feature (e.g., engine start) or to access a restricted area in addition to the key are typical solutions. Finally, knowledge-based approaches are used widely to access a web-service. The A-IoT systems may intelligently utilize similar solutions as well, where password inputs can effectively become replaced by the use of touchscreen (where applicable) or, e.g., audio forms of input. \subsubsection{Fingerprint/palm/eye scanner} While core technology principles for fingerprint and palm recognition have been known for already a while, the recent achievements in the respective miniaturization made them accessible by a wide range of consumer products, namely, smartphones. Installation of biometric scanning devices within a conventional input interface (e.g., Home button in Apple iPhones) or behind a touchscreen is not a science fiction anymore\footnote{V.~Savov, ``I~tried~the~first~phone~with an~in-display~fingerprint~sensor,'' \url{https://www.theverge.com/circuitbreaker/2018/1/9/16867536/vivo-fingerprint-reader-integrated-display-biometric-ces-2018} [Accessed~November~2018]}. \textcolor{black}{Hence, the authentication process can become transparent for the user, thus improving the overall system usability.} \subsubsection{Facial recognition} The methods of facial recognition by built-in video cameras originally started with landmark picture analysis, which appeared to be vulnerable to trivial attacks of, e.g., presenting a photo instead of the real face. 
Over the last two decades, these tools have significantly developed towards three-dimensional face and expression recognition that is much more resilient to such attacks. \textcolor{black}{The security levels can be enhanced further by prompting the user to move the head in a specific manner so that a particular pattern to follow is not known in advance~\cite{corneanu2016survey}.} Solving this task from another angle, a drone can fly around the user to construct a 3D map of face/body without making the user move. \subsubsection{Voice recognition} All of the considered A-IoT devices are typically equipped with a microphone that enables voice recognition. The recently announced implementations are capable of distinguishing millions of different voices after capturing only a short phrase. \textcolor{black}{These solutions are however more vulnerable to a `spoofing attack' than facial recognition. The attack itself could be described as Eve intercepting or capturing digital or analog Alice's voice signal and ``spoofing'' own message into the authentication sensor in the real time (using artificial, synthesized voice) or replicating it later on.} While it is technically possible for an adversary to construct a phrase based on the recorded pronunciation of syllables and sounds, the A-IoT systems are likely to have sufficient computational power for timely recognition of the corresponding attacks. \subsubsection{Data from wearables} \textcolor{black}{The A-IoT devices may also employ their advanced communication capabilities. Particularly, if the authenticating user holds wearable devices, they could act as providers of the authentication factors~\cite{ometov2018multi}, such as gesture analysis, Electrocardiographic~(ECG) Recognition, Geographical Location analysis, etc.} Being connected to the A-IoT system via a short-range radio, wearables can present the security credentials of their user, such as heart rate or electrocardiogram. The utilization of this method requires support from appropriate security protocols, so that the platform may trust the data collected by the user-controlled equipment on the one hand, and the users can be certain that their sensitive personal information is not disclosed, on the~other. \subsubsection{Behavioral patterns} The A-IoT system can utilize one or several input interfaces to record and analyze the individual features of user behavior: response time to typical requests, typing rhythm, micro- or macro-scale mobility, etc. Here, the choice of particular factors to monitor highly depends on the form-factor of the A-IoT device: for wearable electronics these could be accelerometer fingerprinting, for drones they are the control operations, while for smart vehicles there are plenty of options that range from brake pressure and position of hands on the wheel to musical and radio preferences. \subsection{Mapping Enablers onto Challenges} While each of the A-IoT-specific authentication methods can bring its additional benefits, none of them alone is capable of efficiently resolving all of the discussed A-IoT challenges. To this end, Table~\ref{tab:mapping} offers a mapping of the authentication methods onto the challenges introduced in Section~\ref{sec:limitations}. 
\begin{table*}
\caption{Comparing A-IoT authentication methods}
\label{tab:mapping}
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{m{2.5cm}|c|c|c|c|c}
\hline\hline
\textbf{Authentication method} & \textbf{Non-text input} & \textbf{Short contact time} & \textbf{Stringent usability} & \textbf{Environmental robustness} & \textbf{High security level}\\ \hline \hline
Hardware tokens & + & + & - & + & - \\ \hline
Password/PIN & - & + & - & + & -\\ \hline
Fingerprint/Palm scanner & + & + & +/- & - & +\\ \hline
Facial recognition & + & - & + & - & +\\ \hline
Voice recognition & + & - & +/- & + & +/-\\ \hline
Data from wearables & + & + & - & - & +\\ \hline
Behavior patterns & + & - & + & - & +\\
\hline\hline
\end{tabular}
\end{table*}

Notably, knowledge-based methods have their most severe limitations with usability and security requirements~\cite{katsini2016security}, since the user is expected to create, remember, and regularly update the secret passwords for all A-IoT devices. In this case, it is very likely that the same password will be selected for multiple systems, which degrades the levels of security. In contrast, hardware tokens scale better to multiple A-IoT systems. However, the security levels may still be insufficient as the token(s) can easily be stolen.

\textcolor{black}{Biometrics allow the user to be authenticated without an additional device or secret knowledge, but fingerprint, ocular scanning, or voice recognition may require further effort from the user (e.g., remove gloves or glasses, say a particular phrase, etc.) and remain not fully robust to the environmental conditions.} Finally, the risk of losing a biometric template has to be considered. Then, authentication with wearable data has a significant advantage over the conventional voice/face recognition, since the user is not required to perform any explicit action. \textcolor{black}{Meanwhile, this method has drawbacks similar to those of the tokens, since the user has to carry the necessary devices continuously, always turned on and charged.}

\textcolor{black}{The methods of behavior recognition allow for mitigating most of the constraints by observing the user behavior over a certain period.} However, the amounts of time necessary for such monitoring are at least an order of magnitude higher than those for other methods, which may become a severe usability concern in delay-sensitive A-IoT applications. \textcolor{black}{Furthermore, behavior recognition is a complex task from the algorithm design perspective, as there should be a constructive differentiation between a valid deviation in the monitored factor by the actual user and invalid patterns by adversaries.}

As can be concluded from our analysis and Table~\ref{tab:mapping}, none of the presented methods alone is sufficient to effectively authenticate the user over a broad range of possible scenarios related to the A-IoT systems. In the following section, we propose a novel approach to construct reliable authentication solutions for A-IoT devices by intelligently combining multiple potentially unreliable methods, which follows the multi-factor authentication~(MFA) paradigm.
\begin{figure} \centering \includegraphics[width=1\columnwidth]{figures/aggreg.pdf} \caption{Heterogeneous MFA for A-IoT (by example of smart vehicles).} \label{fig:scenario} \end{figure} \section{Use of Multi-Factor Authentication for A-IoT} \label{sec:mfa} Since no single authentication method is likely to be suitable to resolve all of A-IoT challenges, the use of MFA is a natural approach to construct compound solutions (see Fig.~\ref{fig:scenario}). \textcolor{black}{At the same time, designing adequate MFA mechanisms is a complex matter, which calls for careful selection, harmonization, and combination of various individual methods, such that the resulting solution could outperform its component elements concerning both security and usability, as confirmed by experiments in, for example,~\cite{benaliouche2014comparative}.} \textcolor{black}{Below, we summarize the four fundamental design principles to be considered when building A-IoT-ready MFA solutions.} \subsection{Means to Compare} \textcolor{black}{Before combining several heterogeneous authentication methods, one needs to harmonize across them, such that knowledge-based methods could be integrated with, e.g., biometric and ownership schemes within a single-stop A-IoT authentication mechanism. Importantly, the output of the overwhelming majority of individual authentication solutions is binary: either acceptance or rejection, i.e., $\{0;1\}$. In rare cases, a continuous variable that characterizes the ``likelihood'' ($[0;1]$) could be retrieved from certain biometric systems.} However, most vendors do not provide with access to those values but rather convert the likelihood factor into a binary decision~internally. \textcolor{black}{In addition to the output data format, alternative methods can be characterized by their accuracy, which is typically estimated with two probabilities: (i)~false acceptance rate~(FAR), the probability that an unauthorized user is accepted; and (ii)~false rejection rate~(FRR), the probability that a valid user is rejected.} These reflect two major qualities of an authentication system: security (FAR) and usability (FRR). We here advocate their generalization to knowledge and ownership~methods. For instance, in password-based protection, FAR may correspond to the probability of guessing the secret, while FRR may characterize the possibility of making an accidental mistake during input. In turn, FAR and FRR may also reflect the chances for a token to be stolen or lost for ownership factors. Therefore, we conclude that all of the discussed authentication methods can be well-represented in a unified output format and supplemented with their suitable FAR/FRR values. \subsection{Means to Combine} \textcolor{black}{The use of several individual authentication methods does not offer immediate advantages, since it still remains unclear how to combine them efficiently. At the first glance, one may come up with either of the two extreme strategies: ``A~user should successfully pass ALL the checks to receive access'' (\emph{\underline{All}}) and ``A~user should successfully pass ANY of the checks to receive access'' (\emph{\underline{Any}}).} Below, we present a typical example that numerically illustrates the inherent weaknesses of these extreme strategies as well as emphasizes the importance of a certain level of intelligence when deriving the resulting decision from a number of individual outcomes by the component methods. 
\textcolor{black}{We assume a number of factors, each characterized by its own FAR and FRR values.} \textcolor{black}{For simplicity, we require that all the FARs are equal to~$0.03\%$, whereas all the FRRs are equal to~$2\%$. The resultant FAR/FRR values are then derived using the Law of Total~Probability.} As can be observed in Fig.~\ref{fig:far}, the \emph{\underline{All}} approach has the lowest FAR, thus yielding the best security level. However, its FRR is higher than that of the other approaches, reaching over $12\%$ when $7$ independent factors are combined. Hence, the usability of the \emph{\underline{All}} approach remains low, which makes it non-applicable in the A-IoT context. Further, we notice the opposite trend for the \emph{\underline{Any}} approach, which increases the FAR value at the expense of a much better FRR. Therefore, the \emph{\underline{Any}} solution is not applicable either. Consequently, none of the trivial MFA combinations are directly usable in the challenging A-IoT scenarios. In contrast, a more intelligent \emph{\underline{Balanced}} approach -- ``A~user should successfully pass most of the checks to receive access''~-- constitutes a viable compromise between security and usability, by decreasing both FAR and FRR indicators. The quantitative gains depend heavily on the input parameters and reach $10^4$ vs. $10^8$ when $7$ factors are combined. This example also highlights the importance of the threshold value selection, since incorrect combining may often result in rapid system performance degradation~\cite{ometov2018multi}. The same holds true for any other values of FAR/FRR, even though they may actually vary for different~factors; an illustrative numerical sketch of these combining rules is given further below. \begin{figure} \centering \includegraphics[width=0.95\columnwidth]{figures/security.pdf} \caption{Comparing alternative factor combining approaches.} \label{fig:far} \end{figure} \vspace{-0.3cm} \subsection{Means to Evaluate} \textcolor{black}{Given that A-IoT scenarios are highly heterogeneous, the results delivered by the individual devices should not lead to blind acceptance/rejection decisions.} Instead, additional data must be considered when comparing the output of the authentication function against a threshold value. \begin{figure*}[!ht] \centering \includegraphics[width=1.65\columnwidth]{figures/phases.pdf} \caption{Considered phases of time-separated MFA for A-IoT.} \label{fig:potencial_util} \end{figure*} \subsubsection{Binary decision} The first and foremost sub-factor to be considered is the binary decision delivered by each individual device. \subsubsection{Vendor-specific metrics} The second sub-factor is the level of accuracy, which is directly related to the FAR/FRR parameters. For example, cameras from different vendors may deliver different probabilities during a facial recognition event for the same user. \subsubsection{Level of trust} Many factors may impact user and device trust. Here, trust in the ``owned'' devices (e.g., built-in cameras) should be valued higher than that in external equipment. Further, historically familiar devices may have higher trust levels than unfamiliar equipment; see the paradigm of Social~IoT~\cite{atzori2014smart}. A significant benefit may be made available by utilizing social networks, since the devices owned by a friend or a colleague may also be considered more trustworthy. The set of selected sub-factors can significantly affect the operation of the authentication solution.
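To make the factor-combining strategies from the previous subsection more concrete, the following minimal sketch (in Python) computes the resulting FAR/FRR of $n$ independent factors under the \emph{All}, \emph{Any}, and a majority-based \emph{Balanced} rule. The independence assumption, the per-factor rates, and the majority threshold are illustrative assumptions rather than a prescription for any particular A-IoT deployment.
\begin{verbatim}
from math import comb

def combined_rates(n, far=0.0003, frr=0.02, threshold=None):
    """Combined FAR/FRR of n independent factors under three rules.

    'all'      -- the user must pass every check
    'any'      -- the user must pass at least one check
    'balanced' -- the user must pass at least `threshold` checks
                  (simple majority by default)
    The default per-factor rates (FAR = 0.03%, FRR = 2%) mirror the
    numerical example in the text and are illustrative only.
    """
    if threshold is None:
        threshold = n // 2 + 1  # simple majority

    def at_least(k, p):
        # P[Binomial(n, p) >= k]
        return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k, n + 1))

    return {
        "all": (far**n, 1 - (1 - frr)**n),
        "any": (1 - (1 - far)**n, frr**n),
        "balanced": (at_least(threshold, far),
                     1 - at_least(threshold, 1 - frr)),
    }

# Seven factors, as in the example above
for rule, (f_acc, f_rej) in combined_rates(7).items():
    print(f"{rule:>8}: FAR = {f_acc:.2e}, FRR = {f_rej:.2e}")
\end{verbatim}
With these assumed rates and seven factors, the \emph{All} rule indeed pushes the FRR above $12\%$, the \emph{Any} rule inflates the FAR, and the majority rule keeps both indicators low, in line with the trends shown in Fig.~\ref{fig:far}.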
However, the three sub-factors discussed above are relatively stable -- the overall changes in the A-IoT system from these perspectives are not abrupt and thus could be determined in advance. Conversely, the authentication system designer should be provided with a higher level of flexibility for a given application. This could be achieved by adding another dimension -- a specific factor weight per application (or even per user). Accordingly, the general authentication function is to be considered as $\sum_{i}\delta_i\mu_i\tau_i\varphi_i>T,$ where $i$ is the factor index, $\delta_i$ is a binary decision, $\mu_i$ is the accuracy level provided by the vendor, $\tau_i$ is the trust level in the selected source, $\varphi_i$ is the factor weight, and $T$ is the system-wide threshold set by the designer. \textcolor{black}{Hence, the system may be adjusted per device, while the ultimate decision can be made flexible based on statistical analysis and machine learning techniques.} A minimal numerical sketch of this decision rule is provided at the end of this section. Finally, the use of different factors consumes different amounts of time; see Table~\ref{tab:comp}. \vspace{-0.4cm} \subsection{Means to Evolve} Conventional ICT systems typically exploit a single-stage authentication procedure, such that the user is either granted or denied access as its result. In contrast, the more stringent time constraints of A-IoT authentication dictate the need to complement the main authentication phase with additional checks that happen before and/or after it. Here, the considered MFA solution may benefit from a range of sensing devices widely deployed in Smart Cities as well as exploit the very nature of the human interaction with the A-IoT system. Therefore, the overall authentication process can be divided into several phases and, consequently, the level of trust in the user evolves over time. \subsubsection{Pre-authentication phase} This phase is the most dynamic and unpredictable as a person `approaches' the target vehicle. Here, the surrounding environment plays a crucial role by providing additional information. The only option during this phase is to utilize passive authentication strategies, i.e., `observe' the user biometrics/behavior that could be delivered by user-worn wearables, user-carried devices, and other vehicles/infrastructure in proximity. \subsubsection{Active authentication phase} The most conventional phase relies upon active interaction. Hence, the user provides relevant input to the system directly. The most suitable authentication methods are knowledge- and biometrics-based. \subsubsection{Continuous (post) authentication phase} Another key part of the envisioned A-IoT authentication process is continuous monitoring of the fact that the user remains legitimate to operate the system even after the previous phases are completed successfully~\cite{8291131}. Monitoring and analyzing the subject by the smart vehicle, infrastructure, and other cars becomes the preferred option. Consider a case where the driver has provided all of the tokens and passed all of the biometric tests, but then suffers a seizure during a highway trip. In this case, the vehicle may automatically take over control, connect to the neighboring cars, and safely pull over to the roadside. As an example, recent works confirm that it is necessary to monitor the driver for just under 2.5 minutes in order to validate the behavior with 95\% accuracy~\cite{burton2016driver}.
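Before moving on, we give a minimal sketch (in Python) of the weighted decision rule $\sum_{i}\delta_i\mu_i\tau_i\varphi_i>T$ introduced above. All factor names, numerical values, and the threshold below are hypothetical and serve only to illustrate how the rule could be evaluated in a smart-vehicle setting.
\begin{verbatim}
def grant_access(factors, threshold):
    """Evaluate the weighted MFA rule: sum(delta * mu * tau * phi) > T."""
    score = sum(f["delta"] * f["mu"] * f["tau"] * f["phi"] for f in factors)
    return score, score > threshold

# Hypothetical factors observed by a smart vehicle (all values illustrative):
#   delta -- binary decision of the individual method (0/1)
#   mu    -- vendor-reported accuracy level
#   tau   -- trust level assigned to the data source
#   phi   -- application-specific weight
factors = [
    {"name": "key fob",          "delta": 1, "mu": 0.99, "tau": 0.9, "phi": 1.0},
    {"name": "face recognition", "delta": 1, "mu": 0.95, "tau": 0.8, "phi": 1.5},
    {"name": "wearable gait",    "delta": 0, "mu": 0.90, "tau": 0.6, "phi": 0.5},
    {"name": "street camera",    "delta": 1, "mu": 0.85, "tau": 0.4, "phi": 0.5},
]

score, accepted = grant_access(factors, threshold=2.0)
print(score, accepted)  # 2.201 True
\end{verbatim}
In practice, the weights $\varphi_i$ and the threshold $T$ would be tuned per application, and possibly per user, as noted above.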
\vspace{-0.3cm} \section{Ecosystem of MFA-Powered A-IoT} \label{sec:challenges} The previous section summarized the underlying design principles of the MFA solutions in A-IoT, at large. However, even if these principles are followed, the further development and mass adoption of MFA-powered A-IoT systems raise a number of open questions. This section brings to the community's attention the most significant questions to be answered in this context. \subsubsection{How to weigh factors?} While the MFA concept offers sufficient flexibility to adapt the authentication system to a wide range of possible scenarios, the choice of particular numerical weights and threshold values requires an extensive study, which needs to carefully balance the FAR and FRR values of the resulting system depending on the target use case. The system should also be made reconfigurable, such that its internal parameters are updated appropriately whenever an A-IoT device is, e.g., sold to another person with different~attributes. \subsubsection{How to adapt decisions?} \textcolor{black}{Another critical challenge is dynamic adaptation of the system to the set of factors available during the authentication process.} For instance, recognition based on a video camera may be unavailable at nighttime or in bad weather. Hence, the decision function should dynamically adjust the weights of the factors that are available during the authentication process based on contextual data. This task is much more challenging as compared to conventional single- and two-factor authentication with only a few static factors~involved. \subsubsection{How to earn user trust?} The next question is related to making a legitimate user trust the system's operation. For example, suppose a video surveillance camera at the parking lot near the user's home contributed 20\% to the overall authentication score, while the threshold was configured such that access was granted. Then, the user moves the car to another address and cannot open it anymore without the additional weight from the infrastructure, since there is no external camera nearby to participate in the authentication process. \textcolor{black}{Hence, it is crucial that the decision-making process be at least partially transparent to the user.} \subsubsection{How to receive assistance?} The A-IoT framework involves not only in-built authentication factors but also data from proximate sources. \textcolor{black}{Therefore, the question remains of how secure and trustworthy such assistance from the neighboring devices can be.} The illustrative example considered above receives additional data from the wearable devices owned by the human user, the camera mounted on a lamp post, a surveillance drone patrolling the street, etc. Hence, designing secure and reliable methods to deliver the sensitive authentication data from these dissimilar Smart City devices to the target A-IoT system -- while not compromising the user privacy for third-party entities -- is an open problem. \subsubsection{How to delegate A-IoT devices?} \textcolor{black}{Users tend to share their devices both privately~(family) and publicly~(car rental).} \textcolor{black}{However, secure collective delegation of use is not straightforward for the A-IoT systems.} \textcolor{black}{Conventional lending of a physical token may not be a sufficient option anymore, since it does not necessarily verify the right to operate the A-IoT device.} From the A-IoT platform perspective, most of the authentication factors would then be related to its temporary user rather than to the owner.
\vspace{-0.3cm} \section{Conclusion and Standardization Aspects} Reliable and secure human authentication by various smart devices is one of the key drivers in the Advanced IoT era. \textcolor{black}{From the standardization perspective, there are a number of regional specifications and recommendations related to multi-factor authentication. However, most of them are still in their early development phases. For example, the Payment Card Industry Security Standards Council provides recommendations for MFA system implementation\footnote{``PCI Security Standards Council: Guidance for Multi-Factor Authentication,'' \url{https://www.pcisecuritystandards.org/pdfs/Multi-Factor-Authentication-Guidance-v1.pdf} [Accessed November 2018]} and also partially addresses the requirements related to the utilization of MFA for card payments in PCI~DSS~v3.2. The National Institute of Standards and Technology~(NIST) provides MFA-related guidelines\footnote{``NIST Special Publications Portal,'' \url{https://www.nist.gov/publications} [Accessed November 2018]} in Special Publications 800-63B and 800-63C, with a detailed overview of the technical requirements for federal agencies implementing digital identity in the US. Overall, these documents support the discussion provided in this work. However, so far there is no unified standard for MFA system developers to follow.} In this article, we reviewed the existing research challenges and possible enablers for user authentication within the A-IoT ecosystem. We introduced the concept of multi-factor authentication for A-IoT as an attractive alternative to existing single-factor solutions with limited potential. \textcolor{black}{The fundamental design principles of MFA for the A-IoT were highlighted, providing useful insights to facilitate its future applications.} Finally, key open questions related to the development, practical implementation, and adoption of MFA for diverse A-IoT systems were discussed together with potential use cases, thus laying the foundation for further research in this emerging~area. \vspace{-5mm} \bibliographystyle{ieeetr}
\section{introduction} The double Burnside ring of a finite group $G$, denoted $B(G,G)$, is an important and interesting invariant in the representation theory of finite groups. It is a central object in the study of biset functors, which has in turn answered important questions in the representation theory of finite groups. In particular, biset functors were used in determining the unit group of the standard Burnside ring for $p$-groups (see \cite{BoucUnits}) and in determining the Dade group of a finite group (see Chapter $12$ of \cite{BoucBook}). It has also been the subject of study in connection to group fusion and algebraic topology (see \cite{ragnarsson2013saturated}). Unlike the usual Burnside ring, the double Burnside ring is non-commutative and does not seem to have a convenient, so-called \emph{ghost ring} (see Theorem~\ref{ghostRing}) that one can embed it into. This has been the subject of much research (see \cite{boltje2012ghost}, \cite{boltje2013ghost}, and \cite{masterson2018table}). The trade-off is that $B(G,G)$ carries much more data about the group $G$ than its standard Burnside ring, though it is more difficult to work with. The goal of the article at hand is to begin a structural theory of orthogonal units of the double Burnside ring. That is, units $u\in B(G,G)^\times$ such that $uu^\circ=u^\circ u=\mathrm{Id}_G$, where $(-)^\circ$ is the natural duality operator on $B(G,G)$ (see Proposition~\ref{dual}) and $\mathrm{Id}_G$ is the identity element of $B(G,G)$. These units form a group, denoted by $B_\circ(G,G)$. We offer three main results towards this goal: First, we introduce inflation maps that embed the units of double Burnside rings of quotient groups of $G$ into $B(G,G)^\times$. If $N$ is a normal subgroup of $G$, we denote these group homomorphisms by $\mathrm{dBInf}^G_{G/N}:B(G/N,G/N)^\times\to B(G,G)^\times$ (Proposition~\ref{dBInf}). We also define isomorphism maps and show that these behave well with the inflation maps, in the biset sense. These maps restrict to maps between orthogonal unit groups. Second, our first main theorem, {\bf Theorem~\ref{mainThm1}}, establishes the existence of a naturally occurring elementary abelian $2$-subgroup of orthogonal units of $B(G,G)$. These subgroups have a basis (in the sense of $\mathbb{F}_2$-vector spaces) parametrized by the normal subgroups of $G$. Lastly, our second main theorem (stated below) begins the classification of these orthogonal unit groups for cyclic $p$-groups $C_{p^n}$, where $p$ is a prime. We complete this classification for odd primes, leaving only the cases where $p=2$ and $n>1$. \begin{theorem}\label{main2} Let $G$ be a cyclic $p$-group with $p$ a prime: \begin{enumerate}[label=\roman*)] \item If $G$ is trivial, then $B_\circ(G,G)\cong C_2$. \item If $G=C_2$, then $B_\circ(G,G)\cong C_2\times D_8$. \item If $p$ is odd and $|G|=p^n$, then \[B_\circ(G,G)\cong \left\{\begin{array}{lll} C_2^{n+2}\times\prod_{i=1}^n\mathrm{Out}(C_{p^i}) &\mathrm{if}& p=3\\\\ C_2^{n+1}\times\prod_{i=1}^n\mathrm{Out}(C_{p^i}) &\mathrm{if}& p>3 \end{array}\right.\] \end{enumerate} \end{theorem} \section{preliminaries} In this section, we pool together definitions, results, and notation common to the subject. We give direct references where applicable, and brief proofs for a few of the results. However, we remark that all the results in this section can be found in \cite{BoucBook}, \cite{boltje2013ghost}, or \cite{boltje2015orthogonal}.
\subsection{The Burnside ring of a finite group} Given a finite group $G$, the \emph{Burnside ring} of $G$, which we denote by $B(G)$, is defined to be the Grothendieck ring of isomorphism classes of finite (left) $G$-sets, with respect to the operations disjoint union and cartesian product. Recall that the Burnside ring is free as a $\mathbb{Z}$-module with a basis given by the isomorphism classes of transitive $G$-sets. Further, the isomorphism classes of transitive $G$-sets can be parameterized by the subgroups of $G$, up to conjugation, by considering the corresponding coset space. So if we let $\mathcal{S}_G$ be a set of representatives of the conjugacy classes of subgroups of $G$, then we get an explicit basis of $B(G)$ by considering $\displaystyle{\{[G/H]\}_{H\in \mathcal{S}_G}}$, where $[G/H]$ indicates the class of the $G$-set $G/H$ in $B(G)$. \subsection{The ghost ring of $B(G)$} If $S$ is a subgroup of $G$ and $X$ is a $G$-set, we can consider the set of elements in $X$ that are fixed by $S$, which we denote by $X^S$. We will use the notation $|X^S|$ to denote the cardinality of this set. For any $g\in G$, we have $|X^{\,^gS}|=|X^S|$. It follows that the assignment $[X]\mapsto |X^S|$ induces a ring homomorphism from $B(G)$ into $\mathbb{Z}$. We abusively denote the image of this map by $|a^S|$ for $a\in B(G)$. One of the most fundamental results about $B(G)$, due to Burnside, is that we can use these maps to embed $B(G)$ into a direct product of $\mathbb{Z}$. This is called the \emph{ghost ring} of $B(G)$. \begin{theorem}[Burnside]\label{burnsideTheorem}\label{ghostRing} Let $G$ be a finite group and $\mathcal{S}_G$ a set of representatives of conjugacy classes of subgroups of $G$. Then the map \[B(G)\to \prod_{S\in \mathcal{S}_G}\mathbb{Z}\] \[a\mapsto (|a^S|)_{S\in \mathcal{S}_G}\] is an injective ring homomorphism with finite cokernel. \end{theorem} An immediate corollary to the above fact is that the unit group of $B(G)$ is an elementary abelian $2$-group. However, determining $B(G)^\times$ in general is still very open, even for the case of solvable groups. Results and progress on this problem can be found in \cite{BoucUnits}, \cite{YoshidaUnits}, and \cite{Barsotti}. \subsection{Bisets} Given finite groups $G$ and $H$, a set $X$ equipped with a left $G$-action and a right $H$-action, such that $(g\cdot x)\cdot h=g\cdot (x\cdot h)$ for all $g\in G$, $x\in X$, and $h\in H$, is called a $(G,H)$-\emph{biset}. If we consider the Grothendieck group of finite $(G,H)$-bisets, with respect to disjoint union, this forms a $\mathbb{Z}$-module, which we denote by $B(G,H)$. Note that $B(G,H)$ is canonically isomorphic, as a $\mathbb{Z}$-module, to $B(G\times H)$, since there is a one-to-one correspondence between $(G,H)$-bisets and left $G\times H$-sets, given by identifying the $(G,H)$-biset $X$ with the left $G\times H$-set $X$, together with the action $(g,h)\cdot x=gxh^{-1}$, for all $(g,h)\in G\times H$ and $x\in X$. Thus $B(G,H)$ has a basis given by $\displaystyle{\{[G\times H/L]\}_{L\in \mathcal{S}_{G\times H}}}$. We recount some fundamental information about bisets. Recall that a \emph{section} of $G$ is a pair of subgroups $(A,B)$ of $G$, with $B\trianglelefteqslant A$. Goursat's Lemma gives us an important way to enumerate the standard basis of $B(G,H)$ in terms of sections of $G$ and sections of $H$. Given a subgroup $L\leqslant G\times H$, we define the \emph{first and second projections} of $L$ by $P_{1}(L):=\{g\in G\,|\,\exists\, h\in H,\ (g,h)\in L\}$ and $P_{2}(L):=\{h\in H\,|\,\exists\, g\in G,\ (g,h)\in L\}$.
We also define the \emph{first and second kernels} of $L$ by $K_{1}(L):=\{g\in G\,|\,(g,1)\in L\}$ and $K_{2}(L):=\{h\in H\,|\,(1,h)\in L\}$. We then have that $K_1(L)\trianglelefteqslant P_1(L)\leqslant G$ and $K_2(L)\trianglelefteqslant P_2(L)\leqslant H$. In other words, the pairs $(P_1(L),K_1(L))$ and $(P_2(L),K_2(L))$ are sections of $G$ and $H$ respectively. Moreover, there is a canonical isomorphism $\varphi:P_2(L)/K_2(L)\to P_1(L)/K_1(L)$ such that if $(g,h)\in L$, then $\varphi(hK_2(L))=gK_1(L)$. \begin{lemma}[Goursat's Lemma, \cite{BoucBook}, Lemma $2.3.25$]\label{GoursatLemma} If $G$ and $H$ are groups and $L$ is a subgroup of $G\times H$, then there is a unique isomorphism $\varphi:P_2(L)/K_2(L)\to P_1(L)/K_1(L)$ such that if $(g,h)\in L$, then $\varphi(hK_2(L))=gK_1(L)$. Conversely, for any sections $(A,B)$ and $(C,D)$ of $G$ and $H$, respectively, such that there is an isomorphism $\varphi:C/D\to A/B$, there is a subgroup \[L_{(A,B),\varphi,(C,D)}=L:=\{(g,h)\in A\times C\,|\, \varphi(hD)=gB\},\] where $P_1(L)=A,K_1(L)=B, P_2(L)=C$, and $K_2(L)=D$. \end{lemma} \begin{rmk}\label{encodeRmk} If $G$ and $H$ are finite groups and $L\leqslant G\times H$, we frequently identify $L$ with the quintuple $(A,B,\varphi, C,D)$, where $P_1(L)=A$, $P_2(L)=C$, $K_1(L)=B$, $K_2(L)=D$, and $\varphi$ is the isomorphism $P_2(L)/K_2(L)\overset{\sim}{\to} P_1(L)/K_1(L)$ described by Goursat's Lemma. In this case, we say $L$ is \emph{encoded} as the quintuple $(A,B,\varphi, C,D)$. We abusively write this as $L=(A,B,\varphi, C,D)$. We also will write $(A,B,\varphi, C,D)$ as $(A,B;C,D)_\varphi$. In the case where $A=B$ and $C=D$, we will just use the quadruple $(A,B;C,D)$, since there is no choice of $\varphi$. We also remark that in Propositions~\ref{baseCase} and ~\ref{inductiveCase}, we consider $G$ being a cyclic group of order $p^n$, where $p$ is a prime number. Since the subgroups of $G$ are in one-to-one correspondence with the nonnegative integers $0,\cdots, n$, we use the notation $(i,j;k,l)_\varphi$, where $0\leq i,j,k,l\leq n$ and $i-j=k-l\geq0$, to encode subgroups of $G\times G$. \end{rmk} Given finite groups $G$, $H$, and $K$, a $(G,H)$-biset $X$, and an $(H,K)$-biset $Y$, we can set $X\times_HY$ to be the set of $H$-orbits of the $H$-set $X\times Y$ with the action $h\cdot(x,y)=(xh^{-1},hy)$, for any $h\in H$ and $(x,y)\in X\times Y$. We then let $X\times_HY$ take on the natural action from $G\times K$. Elements of $X\times_HY$ are denoted by $(x,_Hy)$. The operation $\times_H$, which we call the \emph{tensor product} of bisets, induces a bilinear map from $B(G,H)\times B(H,K)\to B(G,K)$, which we denote with $\circ_H$, such that $[X]\circ_H[Y]=[X\times_HY]$. \begin{prop}[\cite{BoucBook}, 2.3.14(1)] Let $G, H, K,$ and $L$ be groups. If $U$ is a $(G,H)$-biset, $V$ is an $(H,K)$-biset, and $W$ is a $(K, L)$-biset, then there is a canonical isomorphism of $(G,L)$-bisets \[(U\times_HV)\times_KW\cong U\times_H(V\times_KW)\] given by $((u,_Hv),_Kw)\mapsto (u,_H(v,_Kw))$, for all $(u,v,w)\in U\times V\times W$. \end{prop} \begin{rmk} The above proposition implies that the bilinear maps $\circ_H:B(G,H)\times B(H,K)\to B(G,K)$ and $\circ_K:B(H,K)\times B(K,L)\to B(H,L)$ interact associatively. That is \[((a\circ_Hb)\circ_Kc)=(a\circ_H(b\circ_Kc))\in B(G,L),\] for all $a\in B(G,H)$, $b\in B(H,K)$, and $c\in B(K,L)$. Because of this, we will frequently write $\circ$ without the subscript if the context is clear.
\end{rmk} There is another related operation we need to consider, this time between subgroups of $G\times H$ and $H\times K$. Given a subgroup $L \leqslant G\times H$ and a subgroup $M\leqslant H\times K$, we can define a subgroup of $G\times K$ by \[L*M:=\{(g,k)\in G\times K\,|\, \exists\, h\in H, \mathrm{ \,such\,\, that\, } (g,h)\in L \mathrm{\, and\, } (h,k)\in M\}.\] The following proposition gives us a formula for computing the product $\circ_H$ of basis elements of $B(G,H)$ and $B(H,K)$ in terms of the basis of $B(G,K)$. \begin{prop}[\cite{BoucBook}, $2.3.24$]\label{MackeyFormula} For $L\leqslant G\times H$ and $M\leqslant H\times K$, we have \[[G\times H/L]\circ_H[H\times K/M]=\sum_{h\in [P_2(L)\backslash H/P_1(M)]}[G\times K/(L*\,^{(h,1)}M)]\in B(G,K)\] \end{prop} \subsection{Opposite bisets} The following definition is central to our topic. \begin{defn}(\cite{BoucBook}, $2.3.6$) If $G$ and $H$ are finite groups and $X$ is a $(G,H)$-biset, then there is a unique $(H,G)$-biset called the \emph{opposite biset} of $X$, which is equal to the set $X$, equipped with the action \[h\cdot x\cdot g = g^{-1}xh^{-1} \in X\] for all $h \in H$, $g\in G$, and $x\in X$. We denote this $(H,G)$-biset by $X^\circ$. \end{defn} If $L\leqslant G\times H$, we also consider the \emph{opposite subgroup of $L$}, defined by \[L^\circ:=\{(h,g)\in H\times G\,|\, (g,h)\in L\}\leqslant H\times G.\] \begin{lemma}\label{oppositeProp} Let $G, H$, and $K$ be finite groups. \begin{enumerate}[label=(\roman*)] \item If $L$ is a subgroup of $G\times H$, then \[(G\times H/L)^\circ\cong H\times G/L^\circ\] as $(H,G)$-bisets. \item If $L\leqslant G\times H$ and $M\leqslant H\times K$ then \[(L^\circ)^\circ=L\] and \[(L*M)^\circ=M^\circ*L^\circ.\] \item If $X$ is a $(G,H)$-biset and $Y$ is an $(H,K)$-biset, then \[(X\times_HY)^\circ\cong Y^\circ\times_HX^\circ\] as $(K,G)$-bisets. \item There is a group isomorphism $(-)^\circ:B(G,H)\to B(H,G)$ induced by sending \[[X]\mapsto[X^\circ]\] for any $(G,H)$-biset $X$. \end{enumerate} \end{lemma} \begin{proof} Part $(ii)$ follows easily from definitions. Part $(iv)$ follows from part $(i)$. To prove $(i)$, it is sufficient to check that the map $(g,h)L\to (h^{-1},g^{-1})L^\circ$ is an isomorphism of $(H,G)$-bisets. Similarly, for $(iii)$, it just needs to be verified that $(x,_Hy)\mapsto (y,_H,x)$ is a well-defined isomorphism of $(K,G)$-bisets. The verifications are straightforward. \end{proof} \subsection{Double Burnside rings} If $G$ is a finite group, then $B(G,G)$ has a ring structure given by the multiplication $[X]\circ_G[Y]=[X\times_GY]$ for all $(G,G)$-bisets $X$ and $Y$. With this multiplication $B(G,G)$ is called the \emph{double Burnside ring} of $G$. The identity element of $B(G,G)$, which we denote by $\mathrm{Id}_G$, is equal to the class $[G]$, where $G$ is the set of elements of $G$ with the usual left and right multiplication as its $(G,G)$-action. It should be recognized that although $B(G,G)$ and $B(G\times G)$ are isomorphic \emph{as groups}, their ring structures are quite different. For example, Burnside rings are commutative rings, yet $B(G,G)$ is commutative only when $G$ is trivial. Thus, we have $B(G,G)\cong B(G\times G)$ as rings if and only if $B(G,G)\cong B(G\times G)\cong B(G)\cong \mathbb{Z}$. \begin{prop}\label{dual} Let $G$ be a finite group. Taking opposite bisets induces an anti-involution on $B(G,G)$. 
In other words, for any $a,b\in B(G,G)$ we have \[(a^\circ)^\circ=a\] and \[(a\circ_Gb)^\circ=b^\circ\circ_Ga^\circ.\] \end{prop} \begin{proof} The first equality follows from Lemma~\ref{oppositeProp}, using parts $(i), (ii),$ and $(iv)$. The second equality follows from Lemma~\ref{oppositeProp}$(iii)$. \end{proof} \subsection{Elementary bisets} In this subsection, we assume $G$ is a finite group and $N$ is a normal subgroup of $G$. For the topic at hand we only consider three of the five elementary biset types. The interested reader is encouraged to check out Bouc's treatise on the subject in Chapters $1$ and $2$ of \cite{BoucBook}, where the following definition and propositions come from. \begin{defn} The $(G, G/N)$-biset $G/N$ with natural action will be called \emph{inflation} from $G/N$ to $G$ and denoted by $\mathrm{Inf}_{G/N}^G$. Dually, we define \emph{deflation} from $G$ to $G/N$ to be the set $G/N$ with natural $(G/N,G)$-action and denote this by $\mathrm{Def}_{G/N}^G$. If $f:G\to H$ is an isomorphism, then the set $H$ with the $(H,G)$-action $h\cdot x \cdot g=hxf(g)$, for all $h,x\in H$ and all $g\in G$, will be denoted $\mathrm{Iso}(f)$. \end{defn} Note that the biset $\mathrm{Inf}_{G/N}^G$ gives rise to a functor from the category of finite $G/N$-sets to the category of $G$-sets. Similarly, $\mathrm{Def}_{G/N}^G$ gives rise to a functor from the category of $G$-sets to the category of $G/N$-sets. This justifies the slightly awkward ``from/to'' language in their definition. We abusively denote the images of these bisets in $B(G,G/N)$ and $B(G/N,G)$ as $\mathrm{Inf}_{G/N}^G$ and $\mathrm{Def}_{G/N}^G$, respectively. Similarly, for an isomorphism $f:G\to H$ we do not notationally distinguish the biset $\mathrm{Iso}(f)$ from its image in $B(H,G)$. \begin{prop}[\cite{BoucBook}, 1.1.3, 2.b.]\label{IsoInfDef} Let $G$ be a finite group, $N\trianglelefteqslant G$, and let $\varphi:G\to H$ be a group isomorphism. Then \[\mathrm{Iso}(\varphi')\circ\mathrm{Def}^G_{G/N}=\mathrm{Def}_{H/\varphi(N)}^H\circ\mathrm{Iso}(\varphi)\] \[\mathrm{Iso}(\varphi)\circ\mathrm{Inf}_{G/N}^G=\mathrm{Inf}^H_{H/\varphi(N)}\circ\mathrm{Iso}(\varphi),\] where $\varphi':G/N\to H/\varphi(N)$ is the group isomorphism induced by $\varphi$. \end{prop} (Again, all the statements in the proposition below can be found in Chapters $1$ and $2$ of \cite{BoucBook}; however, the last statement of part $(iv)$ comes from $6.2.3$ in \cite{BoucBook}.) \begin{prop}\label{elementaryBisetProp}\hfill \begin{enumerate}[label=(\roman*)] \item Identifying $G/1$ with $G$, we have $\mathrm{Inf}_{G/1}^G=\mathrm{Def}_{G/1}^G=\mathrm{Id}_G\in B(G,G)$. \item We have $(\mathrm{Inf}_{G/N}^G)^\circ=\mathrm{Def}_{G/N}^G\in B(G/N,G)$ and $(\mathrm{Def}_{G/N}^G)^\circ=\mathrm{Inf}_{G/N}^G\in B(G,G/N)$. \item If $M$ is a normal subgroup of $G$ containing $N$, then \[\mathrm{Inf}_{G/N}^G\circ\mathrm{Inf}_{G/M}^{G/N}=\mathrm{Inf}_{G/M}^G\in B(G,G/M)\] and \[\mathrm{Def}_{G/N}^G\circ\mathrm{Def}_{G/M}^{G/N}=\mathrm{Def}_{G/M}^G\in B(G/M,G).\] (Note we are identifying $G/M$ canonically with the quotient $(G/N)/(M/N)$.) \item There is an isomorphism of $(G/N,G/N)$-bisets \[\mathrm{Def}_{G/N}^G\times_G\mathrm{Inf}_{G/N}^G\cong G/N.\] Thus $\mathrm{Def}_{G/N}^G\circ \mathrm{Inf}_{G/N}^G=\mathrm{Id}_{G/N}\in B(G/N,G/N)$. Moreover, $\mathrm{Inf}_{G/N}^G\circ\mathrm{Def}_{G/N}^G$ is an idempotent in $B(G,G)$. \end{enumerate} \end{prop} \begin{proof} Statements (i) and (ii) are clear.
For (iii), notice that $(G/N)\times_{G/N}(G/M)$ and $G/M$ are isomorphic as $(G,G/M)$-bisets via the map $(g_1N,_{G/N}g_2M)\mapsto(g_1g_2M)$. Thus $\mathrm{Inf}_{G/N}^G\circ_{G/N}\mathrm{Inf}_{G/M}^{G/N}=\mathrm{Inf}_{G/M}^G\in B(G,G/M)$. Taking opposites gives us $\mathrm{Def}_{G/N}^G\circ\mathrm{Def}_{G/M}^{G/N}=\mathrm{Def}_{G/M}^G\in B(G/M,G)$. For (iv), we use the isomorphism $(g_1N,_{G}g_2N)\mapsto g_1g_2N$ of $(G,G)$-bisets $\mathrm{Def}_{G/N}^G\times_G\mathrm{Inf}_{G/N}^G\cong G/N$. The last statement follows from the calculation \[\mathrm{Inf}_{G/N}^G\circ\mathrm{Def}_{G/N}^G\circ\mathrm{Inf}_{G/N}^G\circ\mathrm{Def}_{G/N}^G=\mathrm{Inf}_{G/N}^G\circ\mathrm{Id}_{G/N}\circ\mathrm{Def}_{G/N}^G=\mathrm{Inf}_{G/N}^G\circ\mathrm{Def}_{G/N}^G.\] \end{proof} \begin{nota}[\cite{BoucBook}, $6.2.3$] Given a normal subgroup $N$ of $G$, we use the notation \[j_N^G:=\mathrm{Inf}_{G/N}^G\circ\mathrm{Def}_{G/N}^G\] to denote the idempotent from Proposition~\ref{elementaryBisetProp}(iv) associated with $N$. \end{nota} One thing to note is that $j_N^G=[G\times G/L]$, where $P_i(L)=G$ and $K_i(L)=N$, with the identity isomorphism $P_1(L)/K_1(L)\cong P_2(L)/K_2(L)$. This tells us that the set $\{j_N^G\}_{N\trianglelefteqslant G}$ is linearly independent in $B(G,G)$. The idempotents $j_N^G$ can be used to define a set of useful idempotents from $B(G,G)$. In the definition below, the function $\mu_{\trianglelefteqslant G}$ denotes the M{\"o}bius function of the poset of normal subgroups of $G$. \begin{defn}[\cite{BoucBook} $6.2.4$]\label{fidempotentDefinition} Let $G$ be a finite group and $N\trianglelefteqslant G$. Let $f^G_N$ denote the element in $B(G,G)$ defined by \[f^G_N:=\sum_{N\leqslant M\trianglelefteqslant G}\mu_{\trianglelefteqslant G}(N,M)j^G_M.\] \end{defn} \begin{prop}[\cite{BoucBook} $6.2.5$ and $6.2.7$]\label{fidempotents} Let $G$ be a finite group. The elements $f_M^G\in B(G,G)$, for $M\trianglelefteqslant G$, are orthogonal idempotents, and for any $N\trianglelefteqslant G$, we have \[j^G_N=\sum_{N\leqslant M\trianglelefteqslant G}f^G_M.\] In particular, if $N=1$, we have \[j_{\{1\}}^G=\mathrm{Id}_G=\sum_{M\trianglelefteqslant G}f^G_M.\] \end{prop} \subsection{The $*$ multiplication.} In the final section, we will need a robust way of computing multiplication in the double Burnside ring. It will be worthwhile to digest the $*$ multiplication a bit more. We consider the general setting where $G, H$, and $K$ are finite groups. The following is a classic lemma due to Zassenhaus. \begin{lemma}[Butterfly Lemma] Let $(A,B)$ and $(C,D)$ be two sections of $G$. Then there exists a canonical isomorphism \[\beta(A',B';C',D'):C'/D'\to A'/B'\] where $B\leqslant B'\trianglelefteqslant A'\leqslant A$ and $D\leqslant D'\trianglelefteqslant C'\leqslant C$ are defined as \[ A'=(A\cap C)B,\, B'=(A\cap D)B,\, C'=(C\cap A)D,\,\, \mathrm{and}\,\,\, D'=(C\cap B)D.\] The isomorphism $\beta(A',B';C',D')$ is uniquely determined by the property that it takes $xD'$ to $xB'$ for all $x\in C\cap A$. \end{lemma} Recall that Goursat's Lemma (Lemma~\ref{GoursatLemma}) allows us to describe any subgroup of $G\times H$ as a quintuple $(A,B,\varphi,C,D)$ where the pairs $(A,B)$ and $(C,D)$ are sections of $G$ and $H$, respectively, and $\varphi:C/D\to A/B$ is a uniquely determined isomorphism. For subgroups $L\leqslant G\times H$ and $M\leqslant H\times K$, the following lemma describes explicitly the product $L*M$ in these terms. Both the lemma and the subsequent diagram that illustrates it can be found in \cite{boltje2013ghost}.
\begin{lemma}[\cite{boltje2013ghost}, 2.7] \label{explicitStarComputation} Let $L=(P_1,K_1, \varphi, P_2,K_2)\leqslant G\times H$ and $M=(P_3,K_3,\psi, P_4,K_4)\leqslant H\times K$. Then \[L*M=(P_1',K_1',\overline{\varphi}\circ\beta(P_2',K_2';P_3',K_3')\circ\overline{\psi},P_4',K_4')\leqslant G\times K\] where \begin{itemize} \item $K_2\leqslant K_2'\trianglelefteqslant P_2'\leqslant P_2$ and $K_3\leqslant K_3'\trianglelefteqslant P_3'\leqslant P_3$ are determined by the Butterfly Lemma applied to the sections $(P_2,K_2)$ and $(P_3,K_3)$ of $H$; \item $K_1\leqslant K_1'\trianglelefteqslant P_1'\leqslant P_1$ and $K_4\leqslant K_4'\trianglelefteqslant P_4'\leqslant P_4$ are determined by \[P_1'/K_1=\varphi(P_2'/K_2),\quad K_1'/K_1=\varphi(K_2'/K_2)\] \[P_4'/K_4=\psi^{-1}(P_3'/K_3),\quad K_4'/K_4=\psi^{-1}(K_3'/K_3);\] \item the isomorphisms $\overline{\varphi}:P_2'/K_2'\to P_1'/K_1'$ and $\overline{\psi}:P_4'/K_4'\to P_3'/K_3'$ are induced by the isomorphism $\varphi$ and $\psi$, respectively. \end{itemize} \end{lemma} \begin{center} \begin{tikzcd} & & & H\arrow[ddd, dash] & & & & H\arrow[dddd, dash] & & & \\ & & & & & & & & & & K\arrow[ddd, dash] \\ G\arrow[d, dash] & & & & & & & & & & \\ P_1\arrow[rrr, dash]\arrow[dd, dash] & & |[alias=H_4]| & P_2\arrow[dd, dash] & & & & & & & \\ & & & & & & & P_3\arrow[rrr,dash]\arrow[d, dash] & |[alias=C_4]| & & P_4\arrow[d, dash] \\ P_1'\arrow[rrr,dash, dashed]\arrow[d, dash] & |[alias=I_4]| & & P_2'\arrow[rrrr, dash, dashed]\arrow[d, dash] & & |[alias=E_4]| & & P_3'\arrow[d, dash]\arrow[rrr,dash, dashed] & & |[alias=A_4]| & P'_4\arrow[d, dash] \\ K_1'\arrow[rrr,dash, dashed]\arrow[d, dash] & |[alias=J_4]|\arrow[from=J_4, to=I_4, leftrightarrow, "\overline{\varphi}"'] & & K_2'\arrow[rrrr, dash, dashed]\arrow[d, dash] & & |[alias=F_4]|\arrow[from=F_4, to=E_4, leftrightarrow, "\beta"'] & & K_3'\arrow[dd, dash]\arrow[rrr,dash, dashed] & & |[alias=B_4]|\arrow[from=B_4, to=A_4, leftrightarrow, "\overline{\psi}"'] & K'_4\arrow[dd, dash] \\ K_1\arrow[rrr,dash]\arrow[d, dash] & & |[alias=G_4]|\arrow[from=G_4, to=H_4, leftrightarrow, "\varphi"' near end] & K_2\arrow[dd, dash] & & & & & & & \\ 1 & & & & & & & K_3\arrow[rrr,dash]\arrow[d, dash] & |[alias=D_4]|\arrow[from=D_4, to=C_4, leftrightarrow, "\psi"' near start] & & K_4\arrow[dd, dash] \\ & & & 1 & & & & 1 & & & \\ & & & & & & & & & & 1 \end{tikzcd} \end{center} \subsection{The bifree double Burnside ring} There are a few notable consequences of Proposition~\ref{MackeyFormula} and Lemma~\ref{explicitStarComputation}. Given a finite group $G$ and $M\leqslant G\times G$, if $|P_1(M)/K_1(M)|=|P_2(M)/K_2(M)|<|G|$, then Lemma~\ref{explicitStarComputation} implies $|P_1(M*L)/K_1(M*L)|=|P_2(M*L)/K_2(M*L)|<|G|$ and $|P_1(L*M)/K_1(L*M)|=|P_2(L*M)/K_2(L*M)|<|G|$ for all subgroups $L\leqslant G\times G$. Thus, Proposition~\ref{MackeyFormula} implies that the elements of $B(G,G)$ spanned by the basis elements $[G\times G/M]$ where $|P_1(M)/K_1(M)|=|P_2(M)/K_2(M)|<|G|$ form and ideal of $B(G,G)$. \begin{prop}[\cite{BoucBook}, $4.3.2$]\label{outGIdeal} Let $G$ be a finite group and let $I_G$ denote the subgroup of $B(G,G)$ spanned by elements $[G\times G/M]$, where $M\leqslant G\times G$ and $|P_1(M)/K_1(M)|=|P_2(M)/K_2(M)|<|G|$. 
Then $I_G$ is an ideal of $B(G,G)$ and there is a surjective ring homomorphism \[\rho :B(G,G)\to \mathbb{Z}\mathrm{Out}(G),\] with $I_G=\ker(\rho)$, that sends $[G\times G/M]\mapsto 0$ if $|P_1(M)/K_1(M)|=|P_2(M)/K_2(M)|<|G|$ and $[G\times G/M]\mapsto \overline{\varphi}$ if $P_1(M)=P_2(M)=G$ and $K_1(M)=K_2(M)=1$, where $\varphi$ is the uniquely determined automorphism of $G$ indicated by the Goursat Lemma and $\overline{\varphi}$ is its image in $\mathrm{Out}(G).$ \end{prop} \begin{rmk}\label{bifreeBGG} The map in Proposition~\ref{outGIdeal} is a retraction to the ring homomorphism \[\eta:\mathbb{Z}\mathrm{Out}(G)\to B(G,G), \] \[\overline{\varphi}\mapsto [G\times G/M]\] where $M\leqslant G\times G$ is defined by the quintuple $(G,1,\varphi,G,1)$. Indeed, the map is well-defined, since the basis elements of $B(G,G)$ are conjugation invariant. Moreover, if $L=(G,1,\psi,G,1)$, then Proposition~\ref{MackeyFormula} implies \[[G\times G/M]\cdot_G[G\times G/L]=[G\times G/(M*L)]\] and Lemma~\ref{explicitStarComputation} implies that $M*L=(G,1,\varphi\circ\psi,G,1)$. \end{rmk} If $M\leqslant G\times G$ such that $K_1(M)=K_2(M)=1$, then we say $M$ is \emph{bifree}. Through Goursat's Lemma, bifree subgroups can be identified with notation $\Delta(A,\varphi,B)$ or $\Delta_\varphi(A,B)$, where $A=P_1(M)\leqslant G$, $B=P_2(M)\leqslant G$ and $\varphi:A\to B$ is an isomorphism. If $A=B$, then we write $\Delta_\varphi(A,B)=\Delta_\varphi(A)$. In the case where $\varphi$ is the identity, we simply write $\Delta(A)$. Note that $\Delta(A,\varphi,B)=(A,\{1\};B,\{1\})_\varphi$ in the notation of Remark~\ref{encodeRmk}. It is straightforward to check, using Lemma~\ref{explicitStarComputation}, that for $M,L\leqslant G\times G$, if $K_i(M)=K_i(L)=1$ for $i=1,2$, then $K_i(M*L)=1$ for $i=1,2$. It follows by Proposition~\ref{MackeyFormula} that the subset $B^\Delta(G,G)\subset B(G,G)$ spanned by the elements $[G\times G/M]$, where $M$ is bifree, is a subring. We call $B^\Delta(G,G)$ \emph{the bifree double Burnside ring.} It is clear that for any subgroup $M=(A,B,\varphi,C,D)\leqslant G\times G$, we have $M^\circ=(C,D,\varphi^{-1},A,B)$. Thus if $M$ is bifree, so is $M^\circ$ and Proposition~\ref{oppositeProp} implies that taking opposite bisets induces a group automorphism on $B^\Delta(G,G)$. We end this section with a well-known embedding of $B(G)$ into $B^\Delta(G,G)$. The proof can be found in [\cite{BoucBook}, 2.5.5-2.5.8]. \begin{prop}\label{burnsideRingEmbedding} Let $G$ be a finite group. The map \[\iota:B(G)\to B^\Delta(G,G)\] \[[G/L]\mapsto [G\times G/\Delta(L)]\] is an injective ring homomorphism. \end{prop} \section{An inflation map between units} We begin this section with an observation that we have a natural embedding of double Burnside rings of quotient groups, in the sense that there exists an injective \emph{rng} morphism. Recall, a \emph{rng} is a set with the same properties of a ring, without the assumption of an identity. If $A$ and $B$ are rngs, then a \emph{rng morphism} is a map that is both additive and multiplicative. We denote the category of rngs by $\mathrm{{\bf Rng}}$. \begin{lemma}\label{DBRrng} Let $G$ be a finite group and $N\trianglelefteqslant G$. Then there is an injective rng morphism \[B(G/N,G/N)\to B(G,G)\] \[a\mapsto \mathrm{Inf}^G_{G/N} \circ a\circ \mathrm{Def}^{G}_{G/N}.\] \end{lemma} \begin{proof} The additivity follows from the distributivity of the tensor product of bisets. 
Recall from Proposition~\ref{elementaryBisetProp} that $\mathrm{Def}^{G}_{G/N}\circ\mathrm{Inf}^{G}_{G/N}=\mathrm{Id}_{G/N}$, thus \[\mathrm{Inf}^G_{G/N}\circ ab\circ\mathrm{Def}^{G}_{G/N}\] \[=\mathrm{Inf}^G_{G/N}\circ a\circ\mathrm{Def}^{G}_{G/N}\circ\mathrm{Inf}^G_{G/N}\circ b\circ\mathrm{Def}^{G}_{G/N},\] for all $a,b\in B(G/N,G/N)$, so the map is multiplicative. The injectivity of this map follows from the fact that there is a group homomorphism, defined by \[B(G,G)\to B(G/N,G/N)\] \[x\mapsto \mathrm{Def}^{G}_{G/N}\circ x\circ \mathrm{Inf}^G_{G/N},\] which is its left inverse. Indeed, we have \[\mathrm{Def}^{G}_{G/N}\circ\mathrm{Inf}^G_{G/N}\circ a\circ\mathrm{Def}^{G}_{G/N}\circ\mathrm{Inf}^G_{G/N}=a\in B(G/N,G/N).\] \end{proof} Let $\mathrm{{\bf Rng}_1}$ be the full subcategory of $\mathrm{{\bf Rng}}$ whose objects are rings (with unity). Below is a generalization of the familiar functor which sends a ring to its group of units. \begin{lemma}\label{rngFunctor} There is a functor \[(-)^\times:\mathrm{{\bf Rng}_1}\to \mathrm{{\bf Grp}}\] defined such that, for any $A\in \mathrm{Ob(\mathrm{{\bf Rng}_1})}$, we have \[A\mapsto A^\times\] and for any morphism $f:A\to B$ in $\mathrm{{\bf Rng}_1}$, we have \[f^\times:A^\times\to B^\times\] \[u\mapsto1_B+f(u-1_A).\] Moreover, this functor takes monomorphisms to monomorphisms. \end{lemma} \begin{proof} The last statement is clear. It suffices to check that if $u$ is a unit in $A$, then $1_B+f(u-1_A)$ is a unit in $B$, that $f^\times$ is a group homomorphism, and that composition is well-defined. All are straightforward but we check composition: If $f:A\to B$ and $g:B\to C$ are morphisms from $\mathrm{{\bf Rng}_1}$ and $u\in A^\times$, then \[g^\times\circ f^\times (u)=g^\times (1_B+f(u-1_A))=1_C+g(1_B+f(u-1_A)-1_B)\] \[=1_C+g(f(u-1_A))=1_C+g\circ f(u-1_A)=(g\circ f)^\times(u).\] \end{proof} Using this functor and Lemma~\ref{DBRrng}, we get the following structural maps on unit groups of double Burnside rings. \begin{propd}\label{dBInf} Let $G$ be a finite group and $N\trianglelefteqslant G$. Then there is an injective group homomorphism \[\mathrm{dBInf}_{G/N}^G:B(G/N,G/N)^\times\to B(G,G)^\times\] defined by \[u\mapsto 1_G+\mathrm{Inf}_{G/N}^G\circ(u-1_{G/N})\circ\mathrm{Def}_{G/N}^G\] for all $u\in B(G/N,G/N)^\times$. Moreover, we have \begin{enumerate}[label=\roman*)] \item $\mathrm{dBInf}_{G/1}^G$ is the identity map if we identify $G/1$ with $G$, and \item if $M$ is a normal subgroup of $G$ containing $N$, then \[\mathrm{dBInf}_{G/N}^G\circ \mathrm{dBInf}_{G/M}^{G/N}=\mathrm{dBInf}_{G/M}^G.\] Note we are identifying $G/M$ canonically with the quotient $(G/N)/(M/N)$. \end{enumerate} \end{propd} \begin{proof} The existence of $\mathrm{dBInf}_{G/N}^G$ follows from Lemmas~\ref{DBRrng} and~\ref{rngFunctor}. The last two properties follow from Proposition~\ref{elementaryBisetProp}. \end{proof} The next proposition says that if $N\trianglelefteqslant G$, then the image of the embedding $B(G/N,G/N)\hookrightarrow B(G,G)$ from Lemma~\ref{DBRrng} can be seen as the span of basis elements of $B(G,G)$, $[G\times G/L]$ with $L\leqslant G\times G$, which have $N\times N\leqslant L$. \begin{lemma}\label{kernelArgument} Let $G$ be a finite group and $N$ a normal subgroup of $G$. Suppose $L\leqslant G\times G$ is a subgroup encoded by Goursat's Lemma as $(P_1,K_1,\varphi, P_2, K_2)$, with $N\leqslant K_1$ and $N\leqslant K_2$.
Define $L'$ to be the subgroup of $G/N\times G/N$ encoded by Goursat's Lemma as $(P_1',K_1',\overline{\varphi},P_2', K_2')$, where $P_1',P_2', K_1',K_2'$ and $\overline{\varphi}$ are defined respectively by $P_1,P_2, K_1,K_2$ and $\varphi$, through the natural surjection $G\to G/N$. Then \[G\times G/L\cong \mathrm{Inf}_{G/N}^G\times_{G/N} (G/N\times G/N)/L'\times_{G/N} \mathrm{Def}_{G/N}^G\] as $(G,G)$-bisets, via the mapping \[(g_1,g_2)L\mapsto (N\,,_{G/N}\, (g_1N,g_2N)L'\,,_{G/N}\, N).\] \end{lemma} \begin{proof} This amounts to straightforward verification that the explicit map is an isomorphism of bisets. \end{proof} We immediately get the following corollary. \begin{cor} If $G$ is a finite group and $N$ is a nontrivial normal subgroup of $G$, then \[\mathrm{im}(\mathrm{dBInf}_{G/N}^G)\cap B^\Delta(G,G)=\{\mathrm{Id}_G\}.\] \end{cor} \begin{nota} If $f:G\to H$ is an isomorphism of groups, then the map \[B(G,G)\to B(H,H)\] \[a\mapsto \mathrm{Iso}(f)\circ a\circ\mathrm{Iso}(f^{-1})\] is clearly an isomorphism of rings. Moreover, we denote the restriction of this map to units by \[\mathrm{dBIso}(f):B(G,G)^\times\to B(H,H)^\times.\] \end{nota} \begin{prop} Let $G$ be a finite group and $N$ a normal subgroup of $G$. Suppose $\varphi:G\to H$ is an isomorphism of groups. Then \[\mathrm{dBIso}(\varphi)\circ\mathrm{dBInf}_{G/N}^G=\mathrm{dBInf}_{H/\varphi(N)}^H\circ\mathrm{dBIso}(\varphi'),\] where $\varphi':G/N\to H/\varphi(N)$ is the isomorphism induced by $\varphi$. \end{prop} \begin{proof} This follows from Proposition~\ref{IsoInfDef}. \end{proof} \section{Orthogonal units} \begin{defn} Let $G$ be a finite group. A unit $u\in B(G,G)^\times$ is called \emph{orthogonal} if we have \[uu^\circ=u^\circ u=\mathrm{Id}_G.\] The set of orthogonal units of $B(G,G)$ is denoted by $B_\circ(G,G)$. \end{defn} \begin{rmk} Given a finite group $G$, the set $B_\circ(G,G)$ of orthogonal units is a subgroup of $B(G,G)^\times$. Indeed, if $u,v\in B_\circ(G,G)$, then by Proposition~\ref{dual} we have \[(uv)^\circ=v^\circ u^\circ,\] thus \[(uv)(v^\circ u^\circ)=(v^\circ u^\circ)(uv)=\mathrm{Id}_G.\] So we call $B_\circ(G,G)$ the \emph{group of orthogonal units} of $B(G,G)$. \end{rmk} \begin{prop}\label{orthogonalRestriction} Let $G$ be a finite group and $N\trianglelefteqslant G$. The map $\mathrm{dBInf}^G_{G/N}$ restricts to a group homomorphism \[\mathrm{dBInf}^G_{G/N}:B_\circ(G/N,G/N)\to B_\circ(G,G).\] \end{prop} \begin{proof} We check that the image of the proposed restriction lands in $B_\circ(G,G)$. Let $u\in B_\circ(G/N,G/N)$. Then \[(\mathrm{dBInf}^G_{G/N}(u))^\circ=[\mathrm{Id}_G+\mathrm{Inf}^G_{G/N}\circ(u-\mathrm{Id}_{G/N})\circ\mathrm{Def}_{G/N}^G]^\circ\] \[=(\mathrm{Id}_G+\mathrm{Inf}^G_{G/N}\circ u\circ\mathrm{Def}_{G/N}^G-j_N^G)^\circ=\mathrm{Id}_G+\mathrm{Inf}^G_{G/N}\circ u^\circ\circ\mathrm{Def}_{G/N}^G-j_N^G\] \[=\mathrm{dBInf}^G_{G/N}(u^\circ)=(\mathrm{dBInf}^G_{G/N}(u))^{-1},\] since $u^\circ=u^{-1}$ and $\mathrm{dBInf}^G_{G/N}$ is a group homomorphism. \end{proof} From here on, we will assume the function $\mathrm{dBInf}_{G/N}^G$ is the one from Proposition~\ref{orthogonalRestriction}. Elements from the subset $B^\Delta_{\circ}(G,G):=B_\circ(G,G)\cap B^\Delta(G,G)$ are called \emph{bifree orthogonal units}. Since $M\leqslant G\times G$ is bifree if and only if $M^\circ$ is bifree, it follows that $B^\Delta_{\circ}(G,G)$ is a subgroup of $B_\circ(G,G)$. Boltje and Perepelitsky studied and characterized these groups for nilpotent $G$.
\begin{theorem}[\cite{boltje2015orthogonal}, 1.1(e)]\label{bifreeNilpotent} Let $G$ be a nilpotent group. Then \[B^\Delta_{\circ}(G,G)\cong B(G)^\times\rtimes \mathrm{Out}(G)\] with respect to the natural action of $\mathrm{Out}(G)$ on $B(G)^\times$. \end{theorem} We will need the following result for a detail in Theorem~\ref{mainThm1}. Recall the definition of the idempotents $f_N^G\in B(G,G)$, for $N\trianglelefteqslant G$, from Definition~\ref{fidempotentDefinition}. \begin{lemma}\label{uniqunessLemma} Let $G$ be a finite group. Let $\mathcal{N}$ and $\mathcal{M}$ be two sets of normal subgroups of $G$. Then \[\sum_{N\in \mathcal{N}}f_N^G=\sum_{M\in \mathcal{M}}f_M^G\] if and only if $\mathcal{N}=\mathcal{M}$. \end{lemma} \begin{proof} The ``if'' direction is trivial. The ``only if'' direction follows from the fact that the set $\{f_N^G\}_{N\trianglelefteqslant G}$ is linearly independent in $B(G,G)$, which follows from their definition. \end{proof} There is another naturally occurring subgroup of $B_\circ(G,G)$. Trivially, we know that $\mathrm{Id}_G=[G]=[G\times G/\Delta(G)]\in B(G,G)$ is in $B_\circ(G,G)$. Moreover, we also have that $-\mathrm{Id}_G\in B_\circ(G,G)$. Thus, there is a subgroup obtained by inflating the negative identities, as we run over all normal subgroups of $G$. \begin{theorem}\label{mainThm1} Let $G$ be a finite group. Let $n$ be the number of normal subgroups of $G$ and set \[H_{dB}:=\langle \mathrm{dBInf}_{G/N}^G(-\mathrm{Id}_{G/N})\,|\, N\trianglelefteqslant G\rangle.\] Then $H_{dB}$ is an elementary abelian $2$-subgroup of $B_{\circ}(G,G)$, with order $2^n$. Moreover, we have $B^\Delta_{\circ}(G,G)\cap H_{dB}=\{\pm \mathrm{Id}_G\}$. \end{theorem} \begin{proof} We prove this in a slightly indirect fashion. Notice that \[(\mathrm{Id}_G-2f^G_N)^2=\mathrm{Id}_G-4f^G_N+4f^G_Nf^G_N=\mathrm{Id}_G-4f^G_N+4f^G_N=\mathrm{Id}_G,\] since $f_N^G$ is an idempotent. Further, we have that \[(\mathrm{Id}_G-2f^G_N)^\circ=\mathrm{Id}_G^\circ-2(f^G_N)^\circ=\mathrm{Id}_G-2f^G_N,\] so $\mathrm{Id}_G-2f^G_N\in B_\circ(G,G)$. If we set \[H'=\langle \mathrm{Id}_G-2f^G_N\,|\, N\trianglelefteqslant G\rangle\] then we proceed by proving that $H'$ has all the properties expected of $H_{dB}$ and then showing that $H_{dB}=H'$. We first prove that $H'$ is an elementary abelian $2$-group. We have already seen that every generator of $H'$ has order $2$, so it suffices to see that it is abelian. If $N$ and $M$ are distinct normal subgroups of $G$, then \[(\mathrm{Id}_G-2f^G_N)(\mathrm{Id}_G-2f^G_M)=\mathrm{Id}_G-2f_N^G-2f_M^G+4f^G_Nf^G_M\] \[=\mathrm{Id}_G-2(f_N^G+f_M^G)=\mathrm{Id}_G-2f_N^G-2f_M^G+4f^G_Mf^G_N=(\mathrm{Id}_G-2f^G_M)(\mathrm{Id}_G-2f^G_N),\] where the second and third equalities come from the fact that $f^G_Nf^G_M=f^G_Mf^G_N=0$, which follows from Proposition~\ref{fidempotents} since $N\neq M$. Moreover, together with Lemma~\ref{uniqunessLemma}, this calculation is easily extended to show that every element of $H'$ can be written uniquely as \[\mathrm{Id}_G-2\left(\sum_{N\in \mathcal{N}}f_N^G\right)\] where $\mathcal{N}$ is any set of normal subgroups of $G$. Hence $|H'|=2^n$.
By the definition of $\mathrm{dBInf}_{G/N}^G$, we have \[\mathrm{dBInf}_{G/N}^G(-\mathrm{Id}_{G/N})=\mathrm{Id}_{G}-\mathrm{Inf}_{G/N}^G\circ(\mathrm{Id}_{G/N}+\mathrm{Id}_{G/N})\circ\mathrm{Def}_{G/N}^G\] \[=\mathrm{Id}_{G}-2\,\mathrm{Inf}_{G/N}^G\circ\mathrm{Def}_{G/N}^G=\mathrm{Id}_{G}-2j^G_N.\] Thus by Proposition~\ref{fidempotents}, we have \[\prod_{N\leqslant M\trianglelefteqslant G}(\mathrm{Id}_G-2f_{M}^G)=\mathrm{Id}_G-2\left(\sum_{N\leqslant M\trianglelefteqslant G}f_M^G\right)=\mathrm{Id}_{G}-2j^G_N=\mathrm{dBInf}_{G/N}^G(-\mathrm{Id}_{G/N}).\] This proves that $H_{dB}\subseteq H'$. Moreover, this calculation shows that working inductively (by descending order, starting with $N=G$), we can replace the generators of $H'$ with the generators of $H_{dB}$, thus $H_{dB}=H'$. The last statement comes from noticing that $\displaystyle{\mathrm{Id}_G-2\left(\sum_{N\in \mathcal{N}}f_N^G\right)\in B^\Delta(G,G)}$ if and only if $\sum_{N\in \mathcal{N}}f_N^G=0$ or $\mathrm{Id}_{G}$, since each $f^G_{N}=\sum_{N\leqslant M\trianglelefteqslant G}\mu_{\trianglelefteqslant G}(N,M)j^G_M$ and $j_{N}^G\in B^\Delta(G,G)$ if and only if $N=1$. Thus $B^\Delta_{\circ}(G,G)\cap H_{dB}=\{\pm \mathrm{Id}_G\}$. \end{proof} \begin{cor}\label{proper} Let $G$ be a finite group. Then $B^\Delta_\circ(G,G)=B_\circ(G,G)$ if and only if $G$ is trivial. \end{cor} \begin{proof} In the case where $G$ is trivial, it is easy to see that $B(G,G)\cong B(G)$ as rings and that $B(G,G)^\times=B_\circ(G,G)=B^\Delta_\circ(G,G)=\{\pm\mathrm{Id}_G\}$. Otherwise, Theorem~\ref{mainThm1} shows that $H_{dB}\subset B_{\circ}(G,G)$ has at least $4$ elements but $B^\Delta_{\circ}(G,G)\cap H_{dB}=\{\pm \mathrm{Id}_G\}$. The result follows. \end{proof} \begin{rmk} The genesis of this paper began when the author's advisor, Robert Boltje, asked the author to investigate orthogonal units of the double Burnside ring that are not bifree. There is a connection to modular representation theory in considering what is called the \emph{trivial source ring}. If $F$ is an algebraically closed field of characteristic $p>0$ and $G$ and $H$ are finite groups with blocks $A$ and $B$ of $FG$ and $FH$, respectively, we denote by $T(A,B)$ the Grothendieck group, with respect to direct sum, of $(A,B)$-bimodules that are direct summands of finitely generated permutation $F(G\times H)$-modules. If $G=H$ and $A=B$, this is a ring with respect to the tensor product over $A$ (or over $FG$). Taking $F$-duals gives rise to an additive group isomorphism from $T(A,A)$ to itself, $\gamma\mapsto \gamma^\circ$, with the property $(\gamma\beta)^\circ=\beta^\circ\gamma^\circ$. In \cite{boltje2015orthogonal}, it is proposed to consider the group of auto-equivalences of the subgroup $T^\Delta(A,A)\subset T(A,A)$ with respect to taking duals, that is, elements $\gamma\in T^\Delta(A,A)$ such that $\gamma\gamma^\circ=\gamma^\circ\gamma=\mathrm{Id}$, where $T^\Delta(A,A)$ is the subgroup of $T(A,A)$ spanned by those standard basis elements of $T(A,A)$ which have vertex coming from a subgroup of the form $\Delta_\varphi(X,Y)$ of $G\times G$. This group is denoted by $T^\Delta_\circ(A,A)$. However, it makes sense to also consider the group $T_\circ(A,A)$, i.e., all elements $\gamma\in T(A,A)$ such that $\gamma\gamma^\circ=\gamma^\circ\gamma=\mathrm{Id}$. In the case that $G$ is a $p$-group, we have $A=FG$ and there is a canonical, duality-preserving isomorphism $T(A,A)\cong B(G,G)$ that restricts to an isomorphism $T^\Delta(A,A)\cong B^\Delta(G,G)$.
In particular, Corollary~\ref{proper} can be used to show that in general $T^\Delta_\circ(A,A)$ is a proper subgroup of $T_\circ(A,A)$. More information on $T(A,B)$ and $T^\Delta_\circ(A,A)$ can be found in \cite{perepelitsky2014p}. \end{rmk} \begin{lemma}[\cite{boltje2015orthogonal}, 3.2(c)]\label{thing} Let $G$ be a finite group. For each $\gamma\in B_\circ^\Delta(G,G)$, there is a unique $\overline{\varphi}\in \mathrm{Out}(G)$ and a unique $\epsilon\in \{\pm1\}$ such that $\rho(\gamma)=\epsilon\overline{\varphi}$. Moreover, the resulting map \[B_\circ^\Delta(G,G)\to \mathrm{Out}(G)\] \[\gamma\mapsto \overline{\varphi}\] is a surjective group homomorphism. \end{lemma} \begin{rmk}\label{variation} We make a slight variation on the above map to fit better with our purposes. We identify $\mathrm{Out}(G)$ with its image in $(\mathbb{Z}\mathrm{Out}(G))^\times$ and consider the subgroup $\langle-\mathrm{Id}_{\mathrm{Out}(G)},\mathrm{Out}(G)\rangle \leqslant (\mathbb{Z}\mathrm{Out}(G))^\times$. The above lemma tells us that restricting the map $\rho$ gives us a surjective group homomorphism \[\rho^\times: B_\circ^\Delta(G,G)\to\langle-\mathrm{Id}_{\mathrm{Out}(G)},\mathrm{Out}(G)\rangle.\] \end{rmk} The first part of the next lemma shows that we can extend the map from Remark~\ref{variation} to all of $B_\circ(G,G)$. All parts, except for the last, are likely known to experts. \begin{lemma}\label{workhorse} Let $G$ be a finite group. \begin{enumerate}[label=(\roman*)] \item For each $\gamma\in B_\circ(G,G)$, there is a unique $\overline{\varphi}\in\mathrm{Out}(G)$ and a unique $\epsilon\in\{\pm1\}$ such that $\rho(\gamma)=\epsilon\overline{\varphi}$. In particular, the map $\rho:B(G,G)\to \mathbb{Z}\mathrm{Out}(G)$ restricts to a surjective group homomorphism \[\rho^\times:B_\circ(G,G)\to \langle-\mathrm{Id}_{\mathrm{Out}(G)}, \mathrm{Out}(G)\rangle\] \[u\mapsto \epsilon\overline{\varphi},\] where we are identifying the group $\mathrm{Out}(G)$ with its image in $(\mathbb{Z}\mathrm{Out}(G))^\times$. \item The map $\eta:\mathbb{Z}\mathrm{Out}(G)\to B(G,G)$ restricts to an injective group homomorphism \[\eta:\langle -\mathrm{Id}_{\mathrm{Out}(G)},\mathrm{Out}(G)\rangle\to B_\circ(G,G),\] such that $\rho^\times\circ\eta=\mathrm{Id}$. In particular, $B_\circ(G,G)=\mathrm{im}(\eta)\ltimes\ker(\rho^\times)$. \item There is a one-to-one correspondence between $u\in \ker(\rho^\times)$ and elements $a\in I_G$ such that \[aa^\circ=a^\circ a=a+a^\circ.\] \item If $N$ is a nontrivial normal subgroup of $G$, then $\mathrm{im}(\mathrm{dBInf}_{G/N}^G)\leqslant \ker(\rho^\times)$. \end{enumerate} \end{lemma} \begin{proof} To prove $(i)$ let $u\in B_\circ(G,G)$ and write $u=u_\Delta+u_I$ where \[u_\Delta=\sum_{\overline{\varphi}\in \mathrm{Out}(G)}c_{\varphi}[G\times G/\Delta_\varphi(G)]\] with $c_{\varphi}\in \mathbb{Z}$ for all $\overline{\varphi}\in \mathrm{Out}(G)$, and $u_I\in I_G$. Then we have $u^\circ=u_\Delta^\circ+u_I^\circ$, with $u_I^\circ\in I_G$ and \[u_\Delta^\circ=\sum_{\overline{\varphi}\in \mathrm{Out}(G)}c_{\varphi}[G\times G/\Delta_{\varphi^{-1}}(G)].\] We prove that $u_\Delta\in B_\circ^\Delta(G,G)$ and the result will follow from Lemma~\ref{thing} since $\rho(u_I)=0$. We have that $u_\Delta\in B^\Delta(G,G)$ so it suffices to see that $u_\Delta u_\Delta^\circ=u_\Delta^\circ u_\Delta=\mathrm{Id}_G$.
Indeed, \[\mathrm{Id}_G=uu^\circ=(u_\Delta+u_I)(u_\Delta^\circ+u_I^\circ)=u_\Delta u_\Delta^\circ+u_\Delta u_I^\circ+u_Iu_\Delta ^\circ+u_Iu_I^\circ.\] Yet, $u_\Delta u_I^\circ+u_Iu_\Delta ^\circ+u_Iu_I^\circ\in I_G$, and if we write $u_\Delta u_\Delta^\circ$ in terms of the standard basis elements of $B(G,G)$, none of the summands will be in $I_G$; hence $u_\Delta u_I^\circ+u_Iu_\Delta ^\circ+u_Iu_I^\circ=0$ and so $u_\Delta u_\Delta^\circ=\mathrm{Id}_G$. Similarly, we get that $u_\Delta^\circ u_\Delta=\mathrm{Id}_G$. Part $(ii)$ is clear from the definition of $\eta$. Part $(iii)$ follows by writing $u=\mathrm{Id}_G-a$ and noticing that $u\in \ker(\rho^\times)$ if and only if $\rho(a)=0$ and \[\mathrm{Id}_G=uu^\circ=\mathrm{Id}_G-a-a^\circ+aa^\circ\quad\text{and}\quad\mathrm{Id}_G=u^\circ u=\mathrm{Id}_G-a-a^\circ+a^\circ a,\] if and only if $a\in I_G$ and \[aa^\circ=a^\circ a=a+a^\circ.\] Part $(iv)$ follows from part $(iii)$ and Lemma~\ref{kernelArgument}. \end{proof} \begin{rmk} There is another way to naturally produce units in $B_\circ(G,G)$, namely via the embedding $\iota:B(G)\to B(G,G)$ (see Proposition~\ref{burnsideRingEmbedding}). In fact, if we restrict this map to units we get a map \[\iota:B(G)^\times\to B^\Delta_\circ(G,G).\] Moreover, if we look at the subgroup of $B(G)^\times$ consisting of elements $x\in B(G)^\times$ such that $|x^G|=1$ (see Theorem~\ref{burnsideTheorem}), this can be identified with $\overline{B}(G)^\times:=B(G)^\times/\{\pm 1\}$, and $\iota$ induces an injective group homomorphism \[\iota':\overline{B}(G)^\times\to B^\Delta_\circ(G,G)\cap \ker(\rho^\times).\] That this map is surjective for nilpotent groups follows from Lemma~\ref{workhorse} and Theorem~\ref{bifreeNilpotent}. However, it is shown in \cite{boltje2015orthogonal} ($4.1$, $4.3$) that $\iota'$ is not surjective in general. Furthermore, if $N$ is a nontrivial normal subgroup of $G$, we also have $\mathrm{im}(\iota')\cap\mathrm{im}(\mathrm{dBInf}_{G/N}^G)=\{\mathrm{Id}_G\}$. This follows since $\mathrm{im}(\mathrm{dBInf}_{G/N}^G)\cap B^\Delta(G,G)=\{\mathrm{Id}_G\}$, which is a consequence of Lemma~\ref{kernelArgument}. \end{rmk} \section{Cyclic $p$-groups} In this final section, we use the inflation maps between units of double Burnside rings to prove Theorem~\ref{main2}. If $G$ is a finite group and $N$ is a normal subgroup of $G$, we will assume $\mathrm{dBInf}_{G/N}^G$ is the map from $B_\circ(G/N,G/N)$ to $B_\circ(G,G)$ established in Proposition~\ref{orthogonalRestriction}. Since we are working with double Burnside rings of cyclic groups, it is useful to consider double Burnside rings for general abelian groups. In particular, we study a useful isomorphism that simplifies calculations in the double Burnside ring in this case. Before we do so, suppose $G$ is an abelian group and let $\mathcal{S}_{G\times G}$ denote the set of subgroups of $G\times G$. Define the map \[\gamma:\mathcal{S}_{G\times G}\times \mathcal{S}_{G\times G}\to \mathbb{Z}\] \[\gamma(L,M)=\frac{|G|}{|P_2(L)P_1(M)|}.\] Notice that since $G$ is abelian, Proposition~\ref{MackeyFormula} tells us the product \[[G\times G/L]\circ_G[G\times G/M]=\sum_{h\in P_2(L)\backslash G/P_1(M)}[G\times G/(L*\,^{(h,1)}M)]\] \[=\sum_{h\in G/(P_2(L)P_1(M))}[G\times G/(L*M)]=\gamma(L,M)[G\times G/(L*M)].\] It follows from the associativity of $\circ_G$ that $\gamma$ satisfies the $2$-cocycle relation. \begin{defn} Let $G$ be a finite abelian group and let $\mathcal{S}_{G\times G}$ be the set of subgroups of $G\times G$.
We define $\mathbb{Z}_\gamma\mathcal{S}_{G\times G}$ to be the $\mathbb{Z}$-algebra with basis given by the elements of $\mathcal{S}_{G\times G}$ and multiplication defined by extending \[L*'M:=\gamma(L,M)L*M,\] for $L,M\in \mathcal{S}_{G\times G}$, linearly to all elements of $\mathbb{Z}_\gamma\mathcal{S}_{G\times G}$. \end{defn} \begin{prop}\label{abelianIsomorphism} Let $G$ be an abelian group. We have an isomorphism of algebras $B(G,G)\cong\mathbb{Z}_\gamma\mathcal{S}_{G\times G}$ given by the map \[[G\times G/M]\mapsto M.\] Moreover, the duality operator on $B(G,G)$ corresponds with the duality operator on $\mathbb{Z}_\gamma\mathcal{S}_{G\times G}$ induced by taking opposite bisets. \end{prop} \begin{proof} Since $G$ is abelian, this is a one-to-one correspondence between bases. The verification that multiplication is preserved follows from Proposition~\ref{MackeyFormula}. The last statement follows from Proposition~\ref{oppositeProp}. \end{proof} In the following proofs, we abuse notation and identify $B(G,G)$ with $\mathbb{Z}_\gamma\mathcal{S}_{G\times G}$, since $G$ will always be abelian. We also suppress the operator $*'$ from the notation when no confusion can arise. \begin{lemma}\label{trivialAction} Suppose $G$ is a finite cyclic group. Then \[B_\circ(G,G)=\mathrm{im}(\eta)\times\ker(\rho^\times)\] where $\eta$ and $\rho^\times$ are the maps from Lemma~\ref{workhorse}. \end{lemma} \begin{proof} This amounts to showing that conjugation by an element of $\mathrm{im}(\eta)$ is trivial on elements of $\ker(\rho^\times)$. We can actually say more: in fact, we show that $\mathrm{im}(\eta)\subset Z(B(G,G))$. Every element of $\mathrm{im}(\eta)$ is of the form $\epsilon\Delta_\varphi(G)$, where $\epsilon\in \{\pm 1\}$ and $\varphi$ is an automorphism of $G$. Write $G=\langle x\rangle$; then $\varphi(x)=x^k$, where $k$ is relatively prime to the order of $G$. Suppose $L=(P_1,K_1,\psi,P_2,K_2)\in \mathcal{S}_{G\times G}$. By Proposition~\ref{abelianIsomorphism} and Lemma~\ref{explicitStarComputation} we have \[\Delta_\varphi(G)*'L=(G,1,\varphi,G,1)*'(P_1,K_1,\psi,P_2,K_2)=(P_1,K_1,\overline{\varphi}\circ\psi,P_2,K_2)\] where $\overline{\varphi}:P_1/K_1\to P_1/K_1$ is the map induced by raising elements to the $k$-th power. However, $\overline{\varphi}\circ\psi=\psi\circ\overline{\varphi}'$, where $\overline{\varphi}':P_2/K_2\to P_2/K_2$ is likewise induced by raising elements to the $k$-th power. Thus \[\Delta_\varphi(G)*'L=(P_1,K_1,\overline{\varphi}\circ\psi,P_2,K_2)=(P_1,K_1,\psi\circ\overline{\varphi}',P_2,K_2)=L*'\Delta_\varphi(G),\] where the last equality, again, comes from Lemma~\ref{explicitStarComputation}. The result follows. \end{proof} We now specialize to the case where $p$ is a prime and $G$ is a cyclic $p$-group. Theorem~\ref{main2} will be a consequence of the next two propositions and is proved by induction. The first proposition encompasses the base case, with the next proposition essentially being the inductive step when $p$ is odd. We refer the reader to Remark~\ref{encodeRmk} for a recap of the notation used in the following propositions. \begin{prop}\label{baseCase} Let $G=C_p$ where $p$ is a prime. \begin{enumerate}[label=(\roman*)] \item If $p=2$, then $B_\circ(G,G)\cong C_2\times D_8$. \item If $p=3$, then $B_\circ(G,G)\cong C_{p-1}\times C_2^3$. \item If $p>3$, then $B_\circ(G,G)\cong C_{p-1}\times C_2^2$. \end{enumerate} \end{prop} \begin{proof} Referring to Lemma~\ref{workhorse}, we have that $\mathrm{im}(\eta)\cong C_{p-1}\times C_2$, and Lemma~\ref{trivialAction} shows that $\mathrm{im}(\eta)$ is in the center of $B(G,G)$.
What is left is to determine $\ker(\rho^\times)$. Suppose $u\in \ker(\rho^\times)$. By Lemma~\ref{workhorse} $(iii)$, we can write $u=\mathrm{Id}_G-\alpha$, where $\alpha\in I_G$ and $\alpha\alpha^\circ=\alpha^\circ \alpha=\alpha+\alpha^\circ$. Since $G$ has exactly two subgroups, it follows by Goursat's Lemma that $I_G$ is spanned by exactly four elements, namely $w=(1,1,\mathrm{Id},1,1), x=(1,1,\mathrm{Id},G,G), y=(G,G,\mathrm{Id},1,1),$ and $z=(G,G,\mathrm{Id},G,G)$. Notice that $w^\circ=w$, $z^\circ=z$, $x^\circ=y$, and $y^\circ=x$. Given integers $a_1,a_2,a_3,a_4\in \mathbb{Z}$, we can write \[\alpha=a_1w+a_2x+a_3y+a_4z\] and \[\alpha^\circ=a_1w+a_3x+a_2y+a_4z.\] It is straightforward to verify, using Proposition~\ref{abelianIsomorphism} and Lemma~\ref{explicitStarComputation}, that \[ww=pw, \quad wx=px, \quad wy=w, \quad wz=x,\] \[xw=w, \quad xx=x, \quad xy=w, \quad xz=x,\] \[yw=py, \quad yx=pz, \quad yy=y, \quad yz=z,\] \[zw=y, \quad zx=z, \quad zy=y, \quad zz=z.\] If we write \[\alpha\alpha^\circ=c_1w+c_2x+c_3y+c_4z,\] with \[c_1=pa_1^2+2a_1a_2+a_2^2=(p-1)a_1^2+(a_1+a_2)^2\] \[c_2=pa_1a_3+a_1a_4+a_2a_3+a_2a_4\] \[c_3=pa_1a_2+a_2a_3+a_1a_4+a_3a_4\] \[c_4=pa_3^2+2a_3a_4+a_4^2=(p-1)a_3^2+(a_3+a_4)^2,\] and note that $\alpha+\alpha^\circ=2a_1w+(a_2+a_3)x+(a_2+a_3)y+2a_4z$, we have \[c_1=(p-1)a_1^2+(a_1+a_2)^2=2a_1\] \[c_2=pa_1a_3+a_1a_4+a_2a_3+a_2a_4=a_2+a_3\] \[c_3=pa_1a_2+a_2a_3+a_1a_4+a_3a_4=a_2+a_3\] \[c_4=(p-1)a_3^2+(a_3+a_4)^2=2a_4.\] Dually, we have \[\alpha^\circ \alpha=d_1w+d_2x+d_3y+d_4z=\alpha+\alpha^\circ,\] which implies \[d_1=(p-1)a_1^2+(a_1+a_3)^2=2a_1\] \[d_2=pa_1a_2+a_1a_4+a_2a_3+a_3a_4=a_2+a_3\] \[d_3=pa_1a_3+a_2a_3+a_1a_4+a_2a_4=a_2+a_3\] \[d_4=(p-1)a_2^2+(a_2+a_4)^2=2a_4.\] Thus, our search boils down to finding quadruples $(a_1,a_2,a_3,a_4)$ of integers that satisfy the above quadratic equations. Notice that $c_1, d_1\geq0$; this implies that $a_1\geq 0$. Moreover, from $c_1$ and $d_1$ we also have that \[(a_1+a_2)^2=(a_1+a_3)^2=(2-(p-1)a_1)a_1\geq 0.\] If $a_1=0$, then $a_2=a_3=0$. Looking at $c_4$ and $d_4$, this leaves $a_4=0$ or $a_4=2$. Note that both $(0,0,0,0)$ and $(0,0,0,2)$ satisfy our system of equations. If $a_1\neq0$, then $2-(p-1)a_1\geq0$, which implies $\frac{2}{p-1}\geq a_1$. Since $p$ is a prime, this forces $p=2$ or $p=3$. In particular, in the case $p>3$ we must have $a_1=0$, so $\ker(\rho^\times)$ has exactly $2$ elements; thus it is isomorphic to $C_2$, and this proves $(iii)$. We split the rest of the proof up into the two remaining cases.\\ {\bf Case $p=3$:} We continue with the assumption that $a_1>0$. The inequality $\frac{2}{p-1}\geq a_1$ forces $a_1=1$. The coefficients $c_1$ and $d_1$ then imply that \[(1+a_2)^2=(1+a_3)^2=0\implies a_2=a_3=-1.\] Looking at the coefficients $c_4$ and $d_4$, we can conclude that $a_4$ must satisfy the quadratic equation \[2+(a_4-1)^2=2a_4.\] Thus $a_4=1$ or $a_4=3$. One checks that the quadruples $(1,-1,-1,1)$ and $(1,-1,-1,3)$ both satisfy the equations given by the coefficients $c_2$ and $c_3$ (note that the equations given by $d_2$ and $d_3$ are the same as those given by $c_2$ and $c_3$), so $|\ker(\rho^\times)|=4$. That it is isomorphic to $C_2^2$ comes from the fact that every element is self-dual and thus has order $2$. This proves $(ii)$.\\ {\bf Case $p=2$:} We again assume that $a_1>0$ and call upon the inequality $\frac{2}{p-1}\geq a_1$. There are two cases: either $a_1=1$ or $a_1=2$. If $a_1=2$, then $c_1$ and $d_1$ imply that $(2+a_2)^2=(2+a_3)^2=0$, which forces $a_2=a_3=-2$.
Using the coefficients $c_4$ and $d_4$, this means that $a_4$ must satisfy the quadratic $4+(a_4-2)^2=2a_4$, which implies $a_4=2$ or $a_4=4$. Note that both the quadruples $(2,-2,-2,2)$ and $(2,-2,-2,4)$ satisfy the equations given by the coefficients $c_2$ and $c_3$. If $a_1=1$, then $(1+a_2)^2=(1+a_3)^2=1\implies a_2\in\{0,-2\}$ and $a_3\in\{0,-2\}$. If $a_2=a_3=0$, then any of the equations provided by $c_2,c_3,d_2,d_3$ implies that $a_4=0$. Clearly $(1,0,0,0)$ satisfies $c_4$ and $d_4$. If $a_2=-2$ (respectively $a_3=-2$), then $d_4$ (respectively $c_4$) implies $a_4=2$ or $a_4=4$. This narrows the remaining candidate quadruples down to $(1,-2,0,2), (1,-2,0,4),$ $(1,0,-2,2),$ $(1,0,-2,4), (1,-2,-2,2),$ and $(1,-2,-2,4)$. Notice that the only ones among these that satisfy the equations given by the coefficients $c_2$ and $c_3$ are $(1,-2,0,2)$, $(1,0,-2,2)$, and $(1,-2,-2,4)$. Thus, $\ker(\rho^\times)$ is a group of order $8$, parametrized by the quadruples \[(0,0,0,0),(0,0,0,2),(2,-2,-2,2), (2,-2,-2,4),\] \[(1,0,0,0),(1,-2,0,2),(1,0,-2,2),(1,-2,-2,4).\] Notice that exactly $6$ elements are self-dual. This implies that $\ker(\rho^\times)$ has $5$ elements of order $2$ and $2$ elements of order $4$. Thus $\ker(\rho^\times)\cong D_8$, proving $(i)$. \end{proof} We note that the above proposition gives an outline of how to find orthogonal units of double Burnside rings for a general finite group $G$, boiling the process down to solving a system of (several) quadratic equations; a brute-force check of the system above is sketched at the end of this section. However, as can already be seen, this process is rather tedious and not particularly insightful. For cyclic groups of order $p$, it showcases that $p=2$ and $p=3$ are exceptional cases. Yet, in the inductive step, we will see that $p=3$ falls in line with the rest of the odd cases. However, the case $p=2$ remains exceptional. We leave the case $p=2$ open to further research for now.\\ \begin{prop}\label{inductiveCase} Let $p$ be an odd prime. Let $G=C_{p^n}$ with $n>1$. Then \[B_\circ(G,G)=\mathrm{im}(\eta)\times \mathrm{im}(\mathrm{dBInf}_{C_{p^n}/C_p}^{C_{p^n}}).\] \end{prop} \begin{proof} Our strategy starts off similarly to how we approached Proposition~\ref{baseCase}. Using Lemma~\ref{workhorse} $(ii)$ and Lemma~\ref{trivialAction}, we want to show that $\ker(\rho^\times)=\mathrm{im}(\mathrm{dBInf}_{C_{p^n}/C_p}^{C_{p^n}})$. To accomplish this, let $u=\mathrm{Id}_G-\alpha\in \ker(\rho^\times)$ with $\alpha\in I_G$ and $\alpha\alpha^\circ=\alpha^\circ \alpha=\alpha+\alpha^\circ$. Moreover, if $\mathcal{S}_{G\times G}$ is the set of subgroups of $G\times G$, we can write \[\alpha=\sum_{X\in \mathcal{S}_{G\times G}}a_XX\] with $a_X\in \mathbb{Z}$. By Goursat's Lemma, each $X$ can be encoded as a quintuple $(C_{p^a},C_{p^b},\varphi, C_{p^c},C_{p^d})$ where $0\leq a,c\leq n$, $a-b=c-d\geq 0$ and $\varphi$ is an isomorphism $C_{p^c}/C_{p^d}\cong C_{p^a}/C_{p^b}$. We abbreviate this by $(a,b;c,d)_\varphi$. Our goal is to show that $a_X=0$ for every $X\in \mathcal{S}_{G\times G}$ with $b=0$ or $d=0$. By Lemma~\ref{kernelArgument}, this will imply that there is some $\alpha'\in B(C_{p^n}/C_p,C_{p^n}/C_p)$ such that $\alpha=\mathrm{Inf}_{C_{p^n}/C_p}^{C_{p^n}}\circ \alpha'\circ \mathrm{Def}_{C_{p^n}/C_p}^{C_{p^n}}$, where $\alpha'\alpha'^\circ=\alpha'^\circ \alpha'=\alpha'+\alpha'^\circ$. In other words, $u\in \mathrm{im}(\mathrm{dBInf}_{C_{p^n}/C_p}^{C_{p^n}})$.
The result then follows from Lemma~\ref{workhorse} $(iv)$.\\ We begin by writing \[\alpha\alpha^\circ=\sum_{X\in \mathcal{S}_{G\times G}}c_XX.\] We will work inductively, first by considering the coefficient $c_{\Delta(C_{p^{n-1}})}$. Recall that $\Delta(C_{p^{i}})=(i,0;i,0)\in \mathcal{S}_{G\times G}$. Notice that for any $X,Y\in \mathcal{S}_{G\times G}$ with $X,Y\in I_G$ and $X*Y=\Delta(C_{p^{n-1}})$, Lemma~\ref{explicitStarComputation} implies that $X$ is encoded as $(n-1,0;c,d)_\varphi$ and $Y$ is encoded as $(c,d;n-1,0)_{\varphi^{-1}}$, where $c-d=n-1$. In other words, $X^\circ=Y$. This implies that the coefficient $c_{\Delta(C_{p^{n-1}})}$ of $\alpha\alpha^\circ$ is equal to \[pa_{\Delta(C_{p^{n-1}})}^2+p\sum_{Y}a_Y^2+\sum_{Z}a_Z^2\] where $Y$ runs over all the elements of $\mathcal{S}_{G\times G}$ encoded as $(n-1,0;n-1,0)_\varphi$ with $\varphi$ nontrivial, and $Z$ runs over the elements of $\mathcal{S}_{G\times G}$ encoded as $(n-1,0;n,1)_{\psi}$. Since $\alpha \alpha^\circ=\alpha+\alpha^\circ$, we have $c_{\Delta(C_{p^{n-1}})}=2a_{\Delta(C_{p^{n-1}})}$. Thus \[pa_{\Delta(C_{p^{n-1}})}^2+p\sum_{Y}a_Y^2+\sum_{Z}a_Z^2=2a_{\Delta(C_{p^{n-1}})}\geq 0,\] which implies that $a_{\Delta(C_{p^{n-1}})}\geq 0$. However, this further implies that $\frac{2}{p}\geq a_{\Delta(C_{p^{n-1}})}$. Since $p$ is an odd prime, this forces $a_{\Delta(C_{p^{n-1}})}=0$, which in turn forces $a_X=0$ as $X$ runs over all elements of $\mathcal{S}_{G\times G}$ that can be encoded as $(n-1,0;c,d)_\varphi$. Dually, since $\alpha\alpha^\circ=\alpha^\circ \alpha$, we also get that $a_X=0$ as $X$ runs over elements of $\mathcal{S}_{G\times G}$ that can be encoded as $(c,d;n-1,0)_\varphi$. Now we assume that for $X$ encoded as $(b,0;c,d)_\varphi$ or $X$ encoded as $(c,d;b,0)_\psi$, for $1<b\leq n-1$, we have $a_X=0$. Consider now the coefficient $c_{\Delta(C_{p^{b-1}})}$. As before, Lemma~\ref{explicitStarComputation} implies that if $X*Y=(b-1,0;b-1,0)=\Delta(C_{p^{b-1}})$ with $X$ encoded as $(b-1,0;c,d)_\varphi$ and $Y$ as $(c,d;b-1,0)_{\psi}$, then $X^\circ=Y$. Thus, if we compute the coefficient $c_{\Delta(C_{p^{b-1}})}$ using the product $\alpha\alpha^\circ$, we have \[p^{n-b+1}a_{\Delta(C_{p^{b-1}})}^2+p^{n-b+1}\sum_{Z_{n-b+1}}a_{Z_{n-b+1}}^2+\cdots+p\sum_{Z_{1}}a_{Z_{1}}^2+\sum_{Z_{0}}a_{Z_{0}}^2=2a_{\Delta(C_{p^{b-1}})}\geq0,\] where $Z_{n-b+1}$ runs over elements of $\mathcal{S}_{G\times G}$ which can be encoded by $(b-1,0;b-1,0)_\varphi$ with $\varphi$ nontrivial, and $Z_{i}$ runs through all elements of $\mathcal{S}_{G\times G}$ which can be encoded as $(b-1,0;c,d)_\psi$, with $c=i$, for $i=0,\dots, n-b$. As in the base case, we must have $\frac{2}{p^{n-b+1}}\geq a_{\Delta(C_{p^{b-1}})}\geq 0$. This forces $a_{\Delta(C_{p^{b-1}})}=0$ and thus $a_{Z_i}=0$ as we run over all possible $Z_i$ and $i=0,\dots,n-b+1$. Considering $\alpha\alpha^\circ=\alpha^\circ \alpha$ and proceeding inductively, we have that $a_X=0$ as $X$ runs through elements of $\mathcal{S}_{G\times G}$ that can be encoded as $(i,0;c,d)_\varphi$ or $(c,d;i,0)_\psi$ for $i=1,\dots, n$. The final step is to consider the coefficients $a_X$ where $X$ is encoded as $(0,0;c,c)$ or $(d,d;0,0)$ for some $0\leq c,d\leq n$ (note that we leave off the isomorphism, since it is trivial in this case). We show these coefficients are all $0$ as well. To do this, we compute the coefficient $c_{\Delta(\{1\})}$. However, there is a catch!
We have proven so far that $a_X=0$ if $P_i(X)\neq K_i(X)=\{1\}$ for $i=1$ or $i=2$. So, if we consider elements $X,Y\in \mathcal{S}_{G\times G}$ encoded as $(0,0;c,c)$ or $(d,d;0,0)$, then $X*Y=(0,0;0,0)=\Delta(\{1\})$, as long as $X$ is encoded as $(0,0;c,c)$ and $Y$ is encoded as $(d,d;0,0)$ for any $0\leq c,d\leq n$. Moreover, if $X$ is encoded as $(0,0;c,c)$, then $X^\circ$ is encoded as $(c,c;0,0)$. We abbreviate the coefficient of $(0,0;c,c)$ in $\alpha$ by $a_c$, for all $0\leq c\leq n$. Hence, computing the coefficient $c_{\Delta(\{1\})}$ in $\alpha\alpha^\circ$ gives us \[p^na_0^2+p^{n-1}a_1^2+\cdots+a_n^2+2\sum_{(i,j)}p^{n-i}a_ia_j\] as $(i,j)$ runs over pairs $0\leq j<i\leq n$. Thus \[p^na_0^2+p^{n-1}a_1^2+\cdots+a_n^2+2\sum_{(i,j)}p^{n-i}a_ia_j\] \[=(a_0+\cdots+a_n)^2+(p^n-1)a_0^2+(p^{n-1}-1)a_1^2+\cdots+(p-1)a_{n-1}^2\] \[+2\sum_{(l,k)}(p^{n-l}-1)a_la_k\] \[=(a_0+\cdots+a_n)^2+(p-1)(a_0+\cdots+a_{n-1})^2\] \[+(p^n-p)a_0^2+(p^{n-1}-p)a_1^2+\cdots+(p^2-p)a_{n-2}^2\] \[+2\sum_{(r,s)}(p^{n-r}-p)a_ra_s,\] as $(l,k)$ runs over pairs $0\leq k<l<n$ and $(r,s)$ runs over pairs $0\leq s<r<n-1$. Continuing in this fashion, we get \[c_{\Delta(\{1\})}=\left(\sum_{k=0}^na_k\right)^2+\sum_{i=0}^{n-1}(p^{n-i}-p^{n-1-i})\left(\sum_{j=0}^ia_j\right)^2.\] However, we still have that $c_{\Delta(\{1\})}=2a_{\Delta(\{1\})}$. This implies that $a_{\Delta(\{1\})}\geq0$. Moreover, subtracting $(p^n-p^{n-1})a_{\Delta(\{1\})}^2$ from both sides of the equation \[\left(\sum_{k=0}^na_k\right)^2+\sum_{i=0}^{n-1}(p^{n-i}-p^{n-1-i})\left(\sum_{j=0}^ia_j\right)^2=2a_{\Delta(\{1\})}\] still leaves the left-hand side nonnegative. Thus $2a_{\Delta(\{1\})}-(p^n-p^{n-1})a_{\Delta(\{1\})}^2\geq0$, which implies $\frac{2}{p^{n}-p^{n-1}}\geq a_{\Delta(\{1\})}\geq0$. Since $p$ is odd and $n>1$, this forces $a_{\Delta(\{1\})}=0$. Thus we have $\sum_{j=0}^ia_j=0$ for all $i=0,\dots, n$, which implies $a_0=a_1=\cdots=a_n=0$. Finally, if we repeat the symmetric argument for the product $\alpha^\circ \alpha$, we get that all the coefficients $a_X=0$ for $X\in\mathcal{S}_{G\times G}$ encoded as $(0,0;c,c)$ or $(d,d;0,0)$ for any $0\leq c,d\leq n$. This proves that $\ker(\rho^\times)=\mathrm{im}(\mathrm{dBInf}^{C_{p^n}}_{C_{p^n}/C_{p}})$. \end{proof} \begin{proof}[Proof of Theorem~\ref{main2}] Assume $G$ is cyclic of order $p^n$, where $p$ is a prime. If $G$ is trivial, then $B(G,G)\cong B(G)\cong \mathbb{Z}$, and $B(G,G)^\times=B_\circ(G,G)=\{\pm1\}$. If $n=1$, we are done by Proposition~\ref{baseCase}. Assume now that $p$ is an odd prime. For $n=k+1$, with $k\geq 1$, Proposition~\ref{inductiveCase} tells us that \[B_\circ(G,G)=\mathrm{im}(\eta)\times\mathrm{im}(\mathrm{dBInf}_{C_{p^n}/C_p}^{C_{p^n}}).\] By Lemma~\ref{workhorse} $(ii)$, $\mathrm{im}(\eta)\cong C_2\times\mathrm{Out}(G)$, and by induction we have \[\mathrm{im}(\mathrm{dBInf}_{C_{p^n}/C_p}^{C_{p^n}})\cong B_\circ(C_{p^{n-1}},C_{p^{n-1}})\cong \left\{\begin{array}{lll} C_2^{n+1}\times\prod_{i=1}^{n-1}\mathrm{Out}(C_{p^i}) &\mathrm{if}& p=3\\\\ C_2^{n}\times\prod_{i=1}^{n-1}\mathrm{Out}(C_{p^i}) &\mathrm{if}& p>3 \end{array}\right..\] This finishes the proof. \end{proof}
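As a computational aside, the quadratic system appearing in the proof of Proposition~\ref{baseCase} is small enough to be checked by machine. The following minimal Python sketch is not part of the original argument; the search bound of $6$ is an assumption, justified by the inequalities derived in that proof, and the constraints mirror the coefficients $c_1,\dots,c_4,d_1,\dots,d_4$ above. It enumerates the integer quadruples $(a_1,a_2,a_3,a_4)$ parametrizing $\ker(\rho^\times)$:
\begin{verbatim}
from itertools import product

def kernel_quadruples(p, bound=6):
    # Brute-force integer solutions of the system from the proof of
    # Proposition baseCase (G = C_p), within |a_i| <= bound.
    solutions = []
    for a1, a2, a3, a4 in product(range(-bound, bound + 1), repeat=4):
        constraints = [
            (p - 1)*a1**2 + (a1 + a2)**2 == 2*a1,        # c_1
            p*a1*a3 + a1*a4 + a2*a3 + a2*a4 == a2 + a3,  # c_2
            p*a1*a2 + a2*a3 + a1*a4 + a3*a4 == a2 + a3,  # c_3
            (p - 1)*a3**2 + (a3 + a4)**2 == 2*a4,        # c_4
            (p - 1)*a1**2 + (a1 + a3)**2 == 2*a1,        # d_1
            p*a1*a2 + a1*a4 + a2*a3 + a3*a4 == a2 + a3,  # d_2
            p*a1*a3 + a2*a3 + a1*a4 + a2*a4 == a2 + a3,  # d_3
            (p - 1)*a2**2 + (a2 + a4)**2 == 2*a4,        # d_4
        ]
        if all(constraints):
            solutions.append((a1, a2, a3, a4))
    return solutions

# Expected number of solutions: 8 for p = 2, 4 for p = 3, 2 for p > 3.
for p in (2, 3, 5):
    sols = kernel_quadruples(p)
    print(p, len(sols), sols)
\end{verbatim}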
\section{Introduction} Object reconstruction from three-dimensional point clouds, a process known as photogrammetry, has become widely available through theoretical breakthroughs and the release of software packages \citep[e.g.\ Visual SFM,][]{wu2011visualsfm}. In particular, 3D reconstruction has already found numerous applications in paleontology \citep{falkingham2012acquisition,falkingham2014historical}, architecture and archaeology \citep{kersten2012potential}, and forestry \citep{Gatziolis_at_al_2015}. As generating 3D point clouds from a collection of pictures becomes a streamlined process thanks to ready-to-use software \citep{wu2013towards}, the identification of geometric structures from discrete point clouds emerges as a challenging problem. Historically, the oldest methods to tackle this problem were surface-smoothness approaches that rely on local parametric approximations of the point cloud, often assuming that it is free of noise \citep{berger2014state}. A wide array of general methods to reconstruct meshes from point clouds exists, including Ball pivoting \citep{bernardini1999ball}, Marching cubes \citep{lorensen1987marching}, Poisson surface reconstruction \citep{kazhdan2006poisson}, and the Alpha-hull approach \citep{edelsbrunner1983shape}. These methods are being successfully applied in some domains, such as paleontology, where the body volume of extinct species is estimated from convex hulls of the fossil bone structures \citep{sellers2012minimum}. However, general-purpose mesh reconstruction techniques fail to deliver adequate approximations when parts of the scene are missing, or when the noise in a 3D point cloud is high. Their shortcomings are due to the fact that they make only minimal assumptions on the underlying shapes of the objects. In this work, we investigate another approach: enforcing geometrical assumptions about the scene to obtain suitable approximations of the mesh. The core idea of this model-driven approach is to parameterize primitive shapes including spheres, cylinders, cones, and tori \citep{schnabel2007efficient}. This approach can be seen as the 3D analog to the 2D ``deformable template'' approach employed in object detection \citep{mesejo2016survey}. Several reasons make this problem challenging from the optimization point of view. First, the search space has high dimensionality: an individual primitive shape, such as a cylinder, requires four or more parameters to encode its geometrical features\footnote{The simplest closed shape in space is a sphere and requires three parameters to encode the spatial coordinates of its center, and one parameter for its radius. More complex shapes, such as cylinders, require more parameters: two additional parameters for the orientation, and another one for the axial length.}, and each scene is composed of many primitive shapes. For example, the branch structure of a tree or a pipe-run network comprises dozens to hundreds of cylinders, resulting in a set of solutions with more than one thousand parameters. Moreover, 3D point clouds often present many artifacts, regardless of the reconstruction software used \citep{comparison_paper}: they are noisy, and only visible aspects of the scene can be captured in the 3D point cloud. Furthermore, except for a few man-made objects, most primitive shapes will only approximate the real surface: for example, a tree branch is only cylindrical as a first approximation. These properties of 3D data leave the search landscape filled with local maxima (i.e.
shapes with imperfect fits to the point cloud but for which any small -- or ``local'' -- modifications of their features, such as a small change in orientation or size, would result in worse fits). Finally, the cost function that evaluates the goodness-of-fit of a given shape to the scene has to be computed over the discrete set of 3D points, leading to a high computational burden every time a shape has to be assessed. To date, most shape-fitting algorithms have focused on reducing the complexity of the search space by resampling 3D points into small clusters and performing a local optimization of a single shape on each cluster \citep{schnabel2007efficient}. Limiting the number of points considered thus enables a very quick optimization, and successive iterations lead to a finer approximation. However, the faithfulness of the end result depends heavily on the clustering heuristics, and sub-optimal segmentation heuristics will result in sub-optimal recovery of the objects' real structure. Typically, clustering heuristics are based on the similarity of point locations and their normals \citep{schnabel2007efficient}. They usually perform well for man-made objects with regular geometries (for example where all cylinders have similar radii and well-separated axis orientations), but become imprecise when point clouds contain noise and/or are incomplete. Global alignment procedures have been developed to overcome this limitation \citep[such as GLOBFIT,][]{li2011globfit}, yet they still assume some global regularity in the scene, and are not robust to high levels of noise \citep{qiu2014pipe,berger2014state}. More closely related to the approach that we develop here, cylinder-specific clustering procedures have also been investigated in previous works. In particular, it was observed that normals of points representing cylinders form circles when projected on a Gaussian sphere, a property that can be exploited to facilitate clustering \citep{liu2013cylinder,qiu2014pipe}. While elegant, this heuristic performs well only when applied to straight cylinders oriented in distinctly separate directions, such as an industrial piping system, and it requires an ad-hoc procedure to model joints \citep{qiu2014pipe}. Another type of algorithm utilizes an iterative approach where cylinders are fitted in succession \citep{pfeifer2004automatic}. It has been successfully applied to modeling standing trees with cylindrical or closely related shapes \citep{raumonen2013fast,markku2015analysis}, although as with most greedy optimization processes it is highly dependent on the starting condition and is thus prone to convergence to local maxima. Here we propose an evolutionary algorithm that fits a collection of shapes to a 3D point cloud. In contrast to other approaches, this algorithm seeks to optimize simultaneously a population of cylinders approximating the scene as a whole -- without resorting to iterative cluster resampling \citep{schnabel2007efficient} or a one-at-a-time shape optimization procedure \citep{pfeifer2004automatic}. This is made possible by relying on the evolutionary optimization paradigm, where a population of best-fitting cylinders is considered at every step. This algorithm avoids artifacts due to early segmentation, and enables convergence even in highly noisy or partial object representations. Technically, our evolutionary framework capitalizes on two desirable optimization properties: elitism (best solutions are kept across generations) and diversity (solutions span the entire search space).
We also designed a collection of mutation operators that can be used to generate interesting variations of 3D shapes and thus explore their search space efficiently. To validate their robustness and practical relevance for 3D reconstruction, we adopt a framework derived from game theory through the use of Shapley values \citep{shapleyValue}. This enables us to quantitatively test the contributions of individual mutation operators to the overall reconstruction success. The performance of the algorithm is evaluated in a series of synthetic test cases made to exemplify typical problems with 3D point clouds (namely noise, partial object occlusion, and object geometry imperfectly matching the primitive shape). Finally, we present real-life experiments demonstrating successful mesh reconstruction in the context of vegetation modeling and of an industrial pipe-run network. \section{Methods} \subsection{Algorithm overview} The goal of our algorithm is to obtain a set of shapes with comprehensive coverage of the 3D scene. The framework of evolutionary computation is suitable for this goal, as it allows a set (or \textit{population}) of solutions to be optimized without any constraint on the fitness function \citep{holland1975adaptation}. The optimization strategy that we developed follows the canonical outline of evolutionary algorithms: \begin{itemize} \item random initialization of the starting population \item until the termination condition is fulfilled, do: \begin{enumerate} \item select a subset of the population \item generate offspring by applying mutation and cross-over operators \item score the new population fitness \item replace the old population with a new one, according to the fitness of individuals \end{enumerate} \end{itemize} We customize this general scheme to the specific case of optimizing a population of shapes spanning the entire 3D scene, with a focus on adapting steps 2 and 3 above. Although our method is compatible with any parameterized shape, we chose to work with cylinders. Thus the mutation operators designed in Section \ref{sec:mut_ope} are tailored to these shapes. We also develop in Section \ref{sec:fitness} a fitness function with a built-in diversity mechanism that promotes the population's convergence toward spatially segregated geometrical shapes. \subsection{Fitness function} \label{sec:fitness} The goal of the fitness function is to evaluate how closely each cylinder matches the point cloud. When fitting a single shape, the mean distance to the point cloud is the metric of choice, coupled with a non-linear optimization framework \citep{lukacs1998faithful,shi2016genetic}. However, this metric is not robust to missing data (i.e. where parts of the object are not represented in the point cloud). Also, it does not scale well to large and complex scenes where many points do not belong to cylinders. Here we adopt another approach, in which we consider not the average distance to points of the scene, but the proportion of each primitive shape that is covered by points. To this end we discretize the primitive's surface at regular intervals into small patches of fixed size, and we count the fraction of patches having points in their immediate neighborhood (Fig. \ref{fig:patch}). With a patch size $\tau$, the number of patches along the cylinder's axis of length $l$ is $i_\mathrm{max}=l/\tau$ (Fig. \ref{fig:patch}A), and along each circular cross-section this number is $j_\mathrm{max}=\frac{2\pi r}{\tau}$ (Fig. \ref{fig:patch}B).
Formally, the binary function describing the occupancy of patch $(i,j)$ is given by: \begin{equation} \label{eq:ine_cond} \mathrm{filled} (i,j) = 1 \Leftrightarrow \exists P:\; \left\{ \begin{array}{l} \frac{2\pi r j}{j_\mathrm{max}} - \tau < \gamma < \frac{2\pi r j}{j_\mathrm{max}} + \tau\\ r - \tau < y < r + \tau\\ l \frac{i}{i_\mathrm{max}} - \tau < z < l \frac{i}{i_\mathrm{max}} + \tau \end{array} \right., \end{equation} where $P$ runs over the points of the cloud and $(\gamma, y, z)$ denote its coordinates in the cylinder's reference frame ($\gamma$: azimuthal arc length, $y$: radial distance to the axis, $z$: position along the axis). The potential fitness $f$ of a cylinder is then the proportion of filled patches: \begin{equation} \label{eq:filled} f = \frac{\sum_{i=1}^{i_\mathrm{max}} \sum_{j=1}^{j_\mathrm{max}} \mathrm{filled}(i,j)}{i_\mathrm{max} \cdot j_\mathrm{max}} \end{equation} \begin{figure} \centering \includegraphics[width=0.99\textwidth]{images/patch_cylinder_illustration2.pdf} \caption{Illustration of the patch-based fitness. The potential fitness of a solution is computed as the proportion of patches extruded from the cylinder surface that contain points. Each patch is defined as a square of size $\tau\times\tau$ on the cylinder surface (panel A), which is then extruded along the radial axis by $\tau/2$ toward the inside and outside of the cylinder (panel B). Panel C shows one patch with an orthographic projection of the cylinder.} \label{fig:patch} \end{figure} The following terms are employed to discuss our algorithm: \textit{similar cylinders}, \textit{ideal cylinders} (relative to a point), and \textit{theoretical fitness}. Many cylinders can have the same potential fitness due to the discrete approximation of patches. We call these cylinders \textit{similar}, as their fitness $f$ has the same value. Because similar cylinders are identical in terms of point overlap, we can arbitrarily pick one of them and discard all the others. For each point, we call the \textit{ideal} cylinder(s) the cylinder(s) with the highest potential fitness among all cylinders that include this point. Finally, the \textit{theoretical fitness} $F$ is the potential fitness defined in Eq. \ref{eq:filled} but computed without the points already assigned to ideal cylinders that have a strictly higher theoretical fitness. In other words, the theoretical fitness of a cylinder is the fitness computed using only points that are not encompassed by another better-fitting cylinder. The theoretical fitness $F$ is thus equal to or lower than the potential fitness $f$. The theoretical fitness cannot be used in the optimization procedure, as its computation relies on knowing all the best-fitting cylinders, which is precisely the goal of the optimization procedure. However, it is possible to compute an estimate of the theoretical fitness by considering only the set of ideal cylinders in the current population. We call this estimator the \textit{realized} fitness and denote it $\hat{F}$. This quantity is used to rank solutions, and it is therefore maximized through the elitist selection of the evolutionary process. Conversely, as the population of shapes achieves increasingly good fits with the point cloud, the realized fitness becomes a better approximation of the theoretical fitness. This dual process eventually results in a collection of distinct shapes that cover the point cloud. The computation of the realized fitness is performed with the procedure described below: \RestyleAlgo{boxruled} \begin{algorithm}[ht] \Begin{ \tcc{initialization} compute the patch occupancy for each point of each solution (Eq.
\ref{eq:ine_cond})\; un-mark all solutions (a solution is marked when its realized fitness is computed)\; realized fitness list $\leftarrow \emptyset$\; \tcc{iterative computation of the realized fitness} \While{there are un-marked solutions}{ \For{every un-marked solution $S_i$}{ compute the potential fitness $f_i$, without the points already assigned to a marked solution (Eq. \ref{eq:filled}) } identify $S_\mathrm{max}$ the solution with the highest potential fitness and mark it\; For $S_\mathrm{max}$, the potential fitness $f_\mathrm{max}$ is the realized fitness $\hat{F}_\mathrm{max}$: save it in the realized fitness list\; } } \caption{Computation of realized fitness \label{alg}} \end{algorithm} This approach maintains diversity in a way that is conceptually similar to the clearing strategy \citep{petrowski1996clearing,petrowski1997new}, as only the best solutions are kept in each neighborhood. Its computation is efficient: while the establishment of the patch occupancy for a given cylinder is a computationally intensive task (it requires calculating the geometrical inequalities of Eq.~\ref{eq:ine_cond} after having expressed the points in the cylinder's reference frame), it is independent of the other cylinders. Thus, the resource-demanding patch occupancy calculation needs to be performed only once for solutions kept across generations. \subsection{Adaptive population size} \label{sec:pop} The number of shapes required to cover a point cloud is hard to estimate \textit{a priori}. We can assume that any point of the scene will be in the neighborhood of a cylinder at some point of the optimization process; however, in practice not all of these cylinders should be retained (some points of the scene might be noise, or might belong to a non-cylindrical geometrical object). Hence we introduce the acceptance threshold $\alpha \in [0,1]$, which the user specifies as the minimal fractional coverage of each cylinder. This coverage depends on the density of the point cloud and on the object's representation completeness, and is investigated with synthetic examples (Fig. \ref{fig:noise} and \ref{fig:completeness}). To ensure a complete exploration of the search space, we further enable dynamic growth and shrinkage of the population. This is achieved by indexing the population size on the number $n$ of solutions with coverage greater than $\alpha$. Prior to the offspring generation step, the new size of the population is computed as $\max(\ceil{kn}, p)$, with $k>1$ and $p$ an arbitrary minimum population size. We established empirically that $p=50$ and $k=2$ are suitable settings, and we use these values in all the reconstructions presented in this paper. \subsection{Mutation and Crossover operators} \label{sec:mut_ope} The choice of mutation operators suitable for exploring a 3D landscape depends heavily on the shape parameterization. Of the many different ways to parameterize shapes, we chose a generic option applicable to most geometric primitives. First, we encode the shape position in space with the coordinates of its center ($x$, $y$, $z$). We then encode the shape direction using spherical coordinates consisting of two angles: the elevation $\theta \in [-\pi, \pi]$ and the azimuth $\phi \in [0, 2\pi]$. The shape length along its main axis (according to $\theta$ and $\phi$) is encoded by a positive number, $l$, and its radius is encoded by another positive number, $r$. These 7 parameters fully characterize a cylinder in 3D space.
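To make the cylinder encoding and the patch-based score concrete, the sketch below computes the potential fitness of Eq.~\ref{eq:filled} for one candidate cylinder. It is written in Python/NumPy purely for illustration (the released implementation uses R and C++); the axis convention derived from $(\theta,\phi)$ and the use of non-overlapping bins of size $\tau$ (instead of the overlapping windows of Eq.~\ref{eq:ine_cond}) are simplifying assumptions.
\begin{verbatim}
import numpy as np

def cylinder_axes(theta, phi):
    # Unit axis direction from elevation theta and azimuth phi, plus two
    # orthonormal vectors spanning the cross-sectional plane.
    a = np.array([np.cos(theta) * np.cos(phi),
                  np.cos(theta) * np.sin(phi),
                  np.sin(theta)])
    helper = np.array([1.0, 0.0, 0.0]) if abs(a[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(a, helper)
    u /= np.linalg.norm(u)
    v = np.cross(a, u)
    return a, u, v

def potential_fitness(points, cylinder, tau):
    # cylinder = (x, y, z, theta, phi, l, r): centre, orientation, length, radius.
    cx, cy, cz, theta, phi, l, r = cylinder
    a, u, v = cylinder_axes(theta, phi)
    q = points - np.array([cx, cy, cz])        # points relative to the centre
    z_ax = q @ a + l / 2.0                     # axial coordinate, in [0, l] on the surface
    radial = np.sqrt((q @ u) ** 2 + (q @ v) ** 2)
    gamma = (np.arctan2(q @ v, q @ u) % (2 * np.pi)) * r   # azimuthal arc length
    i_max = max(1, int(np.ceil(l / tau)))
    j_max = max(1, int(np.ceil(2 * np.pi * r / tau)))
    # Keep only points close to the cylinder wall and within its length.
    near = (np.abs(radial - r) < tau / 2) & (z_ax >= 0) & (z_ax <= l)
    i_idx = np.minimum((z_ax[near] / tau).astype(int), i_max - 1)
    j_idx = np.minimum((gamma[near] / tau).astype(int), j_max - 1)
    filled = np.zeros((i_max, j_max), dtype=bool)
    filled[i_idx, j_idx] = True
    return filled.sum() / float(i_max * j_max)
\end{verbatim}
Scoring a candidate thus requires a single pass over the point cloud; as noted above, this per-cylinder patch occupancy is what Algorithm~\ref{alg} reuses for solutions kept across generations.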
Our approach can easily be extended to more complex shapes, such as cones, where the additional length parameters are treated similarly to $l$ and $r$ \citep{markku2015analysis}. We designed four geometric transformation operators to enable spatial coherence during the exploration of 3D shapes: \begin{itemize} \item \textbf{Translation}: mutate all spatial coordinates with $ \left\{ \begin{array}{l} x\leftarrow P_m(x)\\ y\leftarrow P_m(y)\\ z\leftarrow P_m(z) \end{array} \right. $ \item \textbf{Rotation}: mutate the direction with $ \left\{ \begin{array}{l} \phi\leftarrow P_m(\phi)\\ \theta\leftarrow P_m(\theta) \end{array} \right. $ \item \textbf{Elongation}: mutate the length with $l\leftarrow P_m(l)$ \item \textbf{Dilation}: mutate the radius with $r\leftarrow P_m(r)$ \end{itemize} Here, the bounded polynomial mutation operator $P_m$ introduced by \cite{deb1999niched} is used to update the values of the real-coded parameters. During the mutation step, each operator has a probability of $1/m$ to be selected and used, with $m$ the number of operators. Several operators can thus be combined in one mutation step to enable complex shape modifications. While the above are in theory sufficient to fully explore the search space, in practice the cylinder fitting can get stuck in local fitness optima. Concretely, these local optima are cylinders that overlap imperfectly with the point cloud, but for which small changes in orientation or size would result in a lower fitness score. We identified three typical local-optimum situations (illustrated in Fig. \ref{fig:ope_mutation}) and designed mutation operators to overcome them. These operators rely on the location of best contact between the cylinder and the point cloud; finding it has a low computational footprint, given that the patch occupancy has already been computed for the fitness (Eq. \ref{eq:ine_cond}). Given the vector $\{c_x, c_y, c_z\}$ from the cylinder center to the point of best contact, the additional operators are: \begin{itemize} \item \textbf{Targeted Dilation} (Fig. \ref{fig:ope_mutation}A) leaves the point of best contact intact but increases/decreases the radius of the cylinder: $ \left\{ \begin{array}{l} r'\leftarrow P_m(r)\\ x\leftarrow x + (r-r') c_x\\ y\leftarrow y + (r-r') c_y\\ z\leftarrow z + (r-r') c_z\\ r\leftarrow r' \end{array} \right. $ \item \textbf{Targeted Flip} (Fig. \ref{fig:ope_mutation}B) performs a symmetrical translation with respect to the point of best contact: $ \left\{ \begin{array}{l} x\leftarrow x + 2 r c_x\\ y\leftarrow y + 2 r c_y\\ z\leftarrow z + 2 r c_z \end{array} \right. $ \item \textbf{Targeted Translation} (Fig. \ref{fig:ope_mutation}C) translates the cylinder along its axis to match the point of best contact: $ \left\{ \begin{array}{l} x\leftarrow c_x \frac{r}{2} \sin(\phi)\cos(-\pi/2+\theta)\\ y\leftarrow c_y \frac{r}{2} \cos(\phi)\cos(-\pi/2+\theta)\\ z\leftarrow c_z \frac{r}{2} \sin(-\pi/2+\theta) \end{array} \right. $ \end{itemize} \begin{figure}[t] \centering \begin{minipage}{0.32\textwidth} \sidesubfloat[position=bottom]{% \includegraphics[width=0.79\textwidth]{images/ope_mutation_dil.pdf}} \end{minipage} \begin{minipage}{0.32\textwidth} \sidesubfloat[position=bottom]{% \includegraphics[width=0.79\textwidth]{images/ope_mutation_flip.pdf}} \end{minipage} \begin{minipage}{0.32\textwidth} \sidesubfloat[position=bottom]{% \includegraphics[width=0.79\textwidth]{images/ope_mutation_trans.pdf}} \end{minipage} \caption{Illustration of the targeted mutation operators.
The points to be approximated by primitive shapes are shown in black. The red circles correspond to the original position of cylinders, prior to mutation. The green circles illustrate possible outcomes of the three operators (A: Targeted Dilation, B: Targeted Flip, and C: Targeted Translation).} \label{fig:ope_mutation} \end{figure} We also adapted the crossover operators to 3D shape optimization. Genes for the crossover operation are selected using a multi-point design, where crossing points match the fundamental blocks of the 3D phenotype \citep{de1992formal}. Four such fundamental blocks can be identified with our encoding of cylinder shapes: 1) the triplet of spatial coordinates {X,Y,Z}; 2) the pair of orientation angles {$\phi$,$\theta$}; 3) the axial length $l$; and 4) the radius $r$. Within these blocks, updated values are obtained using the Simulated Binary Crossover operator \citep{deb1994simulated}. \subsection{Quantifying the relevance of mutation operators} Given the empirical nature of the design of the mutation operators, we sought to assess quantitatively their relevance to the overall optimization performance. For this we used Shapley values, which are metrics originally developed in the field of game theory to score the contribution of each player (here, mutation operator) to coalitions \citep{shapleyValue,aumann1989game}. This is done by considering all possible teams of players and seeing how changing the team composition alters the outcome (score) of the game. Formally, given the fitness function $\hat{F}$ and the set of mutation operators $M$, the Shapley value $\zeta_i$ of the mutation operator $i$ is defined as: $$ \zeta_i(\hat{F})= \mkern-6mu\sum_{S \subseteq M \setminus \{i\}} \mkern-14mu\frac{|S|!\; (|M|-|S|-1)!}{|M|!}\left(\hat{F}(S\cup\{i\})-\hat{F}(S)\right), $$ where $\hat{F}(S)$ denotes the fitness achieved when only the operators in $S$ are enabled. \subsection{Real world 3D reconstructions} We tested the algorithm on actual point clouds obtained by processing images acquired around different targeted objects. We selected two different cases in which objects consisted of cylinders: (a) a vegetation reconstruction featuring coarse woody debris, where the recovery of vegetation dimensions is relevant for estimating biomass and volume \citep{Gatziolis_at_al_2015}, and (b) an industrial reconstruction, where the recovery of pipe runs is useful for mapping their network and possibly identifying space for further extensions \citep{qiu2014pipe,pang2015automatic}. For the vegetation example, 118 photos were taken with the low-resolution, integrated camera of a smartphone, and the point cloud was subsequently reconstructed with Visual SFM \citep{wu2011visualsfm}. The pipe-run example was imaged with a professional-grade camera in a video with 4K resolution at 30 Hz frame rate. A total of 254 high-quality frames were extracted from the video and processed with \cite{manual2014professional} using the ``Ultra High'' quality settings. \subsection{Code availability} The code performing the evolutionary optimization and the analysis is written in R \citep{R}, with geometrical routines computing the fitness in C++ with Rcpp \citep{Rcpp,RcppArmadillo} and Armadillo \citep{sanderson2016armadillo,sanderson2018user} for speed. The full code is available at: \footnotesize\texttt{https://github.com/jealie/3D\_cylinder\_evolution}\normalsize.
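As an illustration of the Shapley attribution defined above, the following Python sketch computes $\zeta_i$ for each operator from a user-supplied scoring routine. The \texttt{score} argument is a hypothetical stand-in for a benchmarking run (e.g.\ the realized fitness $\hat{F}$ reached after a fixed budget with only the operators in $S$ enabled); this sketch is for illustration only and is not part of the released R implementation.
\begin{verbatim}
from itertools import combinations
from math import factorial

def shapley_values(operators, score):
    # score(S) -> performance obtained with the subset S of operators enabled.
    n = len(operators)
    values = {}
    for op in operators:
        others = [o for o in operators if o != op]
        total = 0.0
        for k in range(n):                      # subset sizes 0 .. n-1
            for subset in combinations(others, k):
                s = frozenset(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (score(s | {op}) - score(s))
        values[op] = total
    return values

# Hypothetical usage, with `benchmark` running the optimizer with only the
# listed operators enabled and returning the final fitness:
# ops = ["translation", "rotation", "elongation", "dilation",
#        "targeted_dilation", "targeted_flip", "targeted_translation"]
# print(shapley_values(ops, benchmark))
\end{verbatim}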
\section{Results} \subsection{Mutation operators analysis} \begin{figure*}[htpb] \centering \begin{minipage}{0.45\textwidth} \sidesubfloat[position=bottom]{% \begin{minipage}{0.7\textwidth} \includegraphics[width=0.99\textwidth]{images/single_trace1.png}\\ \includegraphics[width=0.99\textwidth]{images/single_trace3.png} \end{minipage} \begin{minipage}{0.24\textwidth} \includegraphics[width=0.99\textwidth]{images/colorbar.pdf} \end{minipage} \hfill } \end{minipage} \begin{minipage}{0.5\textwidth}\sidesubfloat[position=bottom]{ \includegraphics[width=0.99\textwidth]{images/operators_one_by_one2.pdf} \hfill }\end{minipage} \caption{Illustration of the relative importance of mutation operators in the optimization process. In all the panels of this figure, the population size is limited to a singleton to focus on mutation operators (excluding the adaptive population size mechanisms). (A) Two examples show the evolution of the single solution when initialized in the upper-right corner of the point cloud. The chronological order of mutations is shown with colors. Black points indicate the target cylinder (the goal of the optimization). As the iterative process advances, which is reflected by a progression from red to blue hues, shapes with the proper orientation, placement and radius are found. (B) Goodness-of-fit when the mutation operators are successively enabled. The three operators with the highest Shapley value, Translation, Rotation and Dilation, suffice for a perfect fit (yellow trace). Enabling the Targeted mutation operators further decreases convergence time. Thick lines indicate the median performance, and shaded areas show the bootstrapped 95\% confidence interval. }\vspace{0.9cm} \label{fig:comp_ope} \end{figure*} We investigate the performance of 3D shape mutation operators using a simplified version of the evolutionary algorithm. In this version, the population is reduced to a singleton (i.e. a single solution), and thus the cross-over is omitted. We further designed a simplified task where the point cloud to approximate is regularly sampled on the surface of a single cylinder. Fig. \ref{fig:comp_ope}A shows two typical optimization runs, where the algorithm explores the search space and eventually matches the target cylinder. In this setting, the basic set of operators (translation, rotation, elongation and dilation) was sufficient to reach an optimal fit (Fig. \ref{fig:comp_ope}B). The extended set of targeted operators further sped up convergence, reducing the number of iterations by up to 50\% (Fig. \ref{fig:comp_ope}B). The mutation operators explore the search space using different strategies, and their usefulness depends on the cylinder's position relative to the point cloud. To investigate this spatial dependency in our analysis, we performed multiple optimization runs with different starting locations while enabling/disabling operators selectively (Fig. \ref{fig:comp_ope}C). The relative importance of mutation operators showed a strong spatial dependency. Not surprisingly, Translation and Rotation were the most important operators overall. Their relative importance changed as a function of the initial distance to the target cylinder, with Rotation being the most important operator when initialized inside the cylinder and Translation being the most important outside (Fig. \ref{fig:comp_ope}C). Dilation and Targeted Dilation were equally useful, especially when the search was initialized in the neighborhood of the cylinder.
Targeted Translation and Targeted Flip were slightly less beneficial to the overall search quality (although enabling them led to quicker convergence, cf. Fig. \ref{fig:comp_ope}B). The Elongation operator was not relevant in this task, as the reference points formed a very long cylinder. The quantitative analysis revealed it to be neutral, or even detrimental, to the overall performance. This was expected in this particular setting, where a singleton population is considered; the operator nevertheless remains essential, as it is the only one that can change cylinder axial lengths. Overall, this analysis demonstrates that the degrees of freedom granted by the chosen set of mutation operators are sufficient. It also demonstrates that the Targeted operators can improve convergence speed. \subsection{Synthetic case studies} We conducted a series of synthetic experiments with point clouds engineered to showcase common problems with 3D reconstructions: noise (Fig. \ref{fig:noise}), occluded components (Fig. \ref{fig:completeness}), and objects whose geometry departs slightly from the primitive shape used (Fig. \ref{fig:dupin}). The first two synthetic experiments involve the fitting of a single cylinder, whereas the third synthetic experiment demonstrates the fitting of numerous cylinders. The experiment in Fig. \ref{fig:noise} assesses the algorithm's robustness to noise. Noise is a recurring issue in 3D reconstruction \citep{berger2014state} and is manifested at various levels with all photogrammetry software \citep{comparison_paper}. This experiment features a single cylinder onto which a uniform point jitter was applied, with an amplitude reaching up to 40\% of the cylinder's radius. It demonstrates that the algorithm is robust to noise, as it achieves near-perfect convergence even in the high-noise scenario (Fig. \ref{fig:noise}A,B). It also illustrates how the fitness score is linked to the point cloud quality: the perfectly matching fit obtained with 30\% jitter achieves a fitness score of 0.5, whereas the score without noise is 1 (Fig. \ref{fig:noise}C). \begin{figure*} \vspace{2cm} \centering \begin{minipage}{0.39\textwidth} \sidesubfloat[position=bottom]{\includegraphics[width=0.49\textwidth]{images/xp_noise_cylinder0.png} \includegraphics[width=0.49\textwidth]{images/xp_noise_cylinder40.png}} \sidesubfloat[position=bottom]{\includegraphics[width=0.99\textwidth]{images/xp_noise_circles.pdf}} \end{minipage} \begin{minipage}{0.59\textwidth}\sidesubfloat[position=bottom]{ \includegraphics[width=0.99\textwidth]{images/xp_noise_graph.pdf}} \end{minipage} \caption{Performance of the cylinder search with added noise. (A) 3D views of the synthetic data and fitted cylinder when no noise (left) and substantial noise (right, 40\% jitter) is added. (B) Cross-sectional views with varying jittering intensity. Dashed circles and 'x' marks indicate the ideal solutions and their center. The colored circles and '+' marks represent the best fits. Near-perfect fits are achieved for noise levels below 30\%. (C) Fitness scores of the best solution across the optimization process.
The lower fitness values obtained with high jitter (panel C) reflect the poor overlap between optimal solutions and reference points; the approximation is nevertheless nearly optimal, as the excellent spatial correspondence in panel B indicates.} \label{fig:noise} \end{figure*} \begin{figure*} \vspace{2cm} \centering \begin{minipage}{0.39\textwidth} \sidesubfloat[position=bottom]{\includegraphics[width=0.49\textwidth]{images/xp_completion_cylinder70.png} \includegraphics[width=0.49\textwidth]{images/xp_completion_cylinder10.png}} \sidesubfloat[position=bottom]{\includegraphics[width=0.99\textwidth]{images/xp_completion_circles.pdf}} \end{minipage} \begin{minipage}{0.59\textwidth}\sidesubfloat[position=bottom]{ \includegraphics[width=0.99\textwidth]{images/xp_completion_graph.pdf}} \end{minipage} \caption{Performance of the cylinder search with partially represented cylinders. (A) 3D views of synthetic data and fitted cylinder with most (70\%) and a small fraction (10\%) of the surface present. (B) Cross-sectional views with completeness ranging from 10\% to 90\%. The optimized cylinders (colored circles) fail to recover the correct shape when the point cloud is too partial (10-20\% complete) but succeed when at least 30\% of the surface is represented. (C) Best solution fit across the optimization process. The fitness decreases with point cloud alterations, indicating poor overlap between the cylinder shape and the points (as in the added noise experiment of Fig. \ref{fig:noise}). However, the spatial match remains high when 30\% or more of the cylinder's surface is represented in the original point cloud.} \label{fig:completeness} \end{figure*} \begin{figure*} \centering \sidesubfloat[position=bottom]{\includegraphics[width=0.49\textwidth]{images/dupin_mesh.pdf}} \begin{minipage}[b]{0.49\textwidth} \sidesubfloat[position=bottom]{% \begin{minipage}{0.49\textwidth}% \includegraphics[width=0.99\textwidth]{images/cyclide_pp1.png}% \end{minipage} \begin{minipage}{0.49\textwidth}% \begin{minipage}{0.49\textwidth}% \includegraphics[width=0.99\textwidth]{images/cyclide_inset_1.pdf}% \\% \includegraphics[width=0.99\textwidth]{images/cyclide_inset_2.pdf}% \end{minipage}% \begin{minipage}{0.49\textwidth}% \includegraphics[width=0.99\textwidth]{images/cyclide_inset_3.pdf}% \\% \includegraphics[width=0.99\textwidth]{images/cyclide_inset_4.pdf}% \end{minipage}% \end{minipage}% }% \end{minipage} \begin{minipage}{0.99\textwidth}\sidesubfloat[position=bottom]{\includegraphics[width=0.49\textwidth]{images/cyclide_more2.png} \includegraphics[width=0.49\textwidth]{images/barplot_cyclide.pdf}} \end{minipage} \caption{Approximation of a ring cyclide as a collection of cylinders. (A) The ring cyclide displayed at two different viewing angles. (B) Population of cylinders with fitness scores higher than the threshold $\alpha = 50\%$. They span most of the inner ring wall and display radial orientations consistent with the cyclide geometry. The cross-sections of the 4 best-fitting cylinders are shown as insets. (C) Selected, above-threshold solutions in purple (as in panel B) along with sub-optimal solutions from the population whose fitness scores are color-coded from blue to red. When grouped, the solutions cover the whole surface of the cyclide.} \label{fig:dupin} \end{figure*} Fig. \ref{fig:completeness} exemplifies how the shape-recovery capability is impacted by point cloud completeness.
Various levels of completeness are simulated by keeping an arc spanning only a fraction of the full circle, from 90\% down to 10\% (Fig. \ref{fig:completeness}A, B). This mimics the partial point clouds obtained when all photographs are taken from one side of the object of interest. This experiment revealed that cylinders with at least 30\% of their surface present in the point cloud can be fully recovered (Fig. \ref{fig:completeness}B), making the developed approach suitable for reconstructions of partially occluded objects. In the synthetic trials, approximations with 20\% and 10\% completeness converged to local maxima (Fig. \ref{fig:completeness}A, B). It must be noted that these are very hard cases: with less than 20\% of the surface represented, the point cloud amounts to little more than a slightly curved surface, resulting in low fitness scores (Fig. \ref{fig:completeness}C). In addition, convergence was slower compared to the noise experiment, revealing that the search space of incomplete cylinders is harder to explore. The final example shown in Fig. \ref{fig:dupin} addresses cases where the primitive shape differs from the modeled object. The point cloud was obtained by regularly sampling a ring cyclide (a special case of Dupin cyclide with an ellipto-hyperbolic parameterization, shown in Fig. \ref{fig:dupin}A). This object presents substantial challenges, as the radial cross-sections are not perfectly circular. In addition, this experiment requires optimizing a population of cylinders with different radii and orientations. Results of the optimization show successful matching (Fig. \ref{fig:dupin}B). Fig. \ref{fig:dupin}C proves the feasibility of extracting more matching shapes and hints at opportunities to improve the completeness of the derived shapes in a post-optimization phase. \subsection{Real case studies} We tested our shape optimization algorithm in industrial (Fig. \ref{fig:ohsu}) and vegetation settings (Fig. \ref{fig:barcelona}). The examples were selected to highlight the challenges previously discussed with the synthetic data. In particular, pipes from the industrial pipe-run network could be imaged only from one side (Fig. \ref{fig:ohsu}A), resulting in a partial reconstruction similar to the second synthetic example (Fig. \ref{fig:completeness}). Furthermore, the pipes are curved and thus an imperfect match to the chosen primitive shape (as in the Dupin cyclide example of Fig. \ref{fig:dupin}). Despite these challenges, the 3D structure was successfully approximated by the algorithm (Fig. \ref{fig:ohsu}B-D). Additional challenges arise from the low-density point clouds and the substantial amount of noise visible in the case of tree reconstruction (Fig. \ref{fig:barcelona}). These result from the low, VGA resolution of the camera used. In addition, about one-fourth of the photos were over-exposed, leading to large deformations of the corresponding side of the trunk. Despite these challenges, the algorithm succeeded in fitting cylinders to the trunk of the central tree (Fig. \ref{fig:barcelona}, panels D1 to D3). Unlike other approaches that require pre-processing of the point cloud to remove planes and other non-cylindrical objects \citep[e.g.][]{qiu2014pipe}, we did not perform any modification or cleaning of the initial point cloud obtained by photogrammetry. The vast majority of the points were thus not identified by the algorithm as belonging to a cylinder (Fig. \ref{fig:barcelona}B). Only a single false-positive cylinder was obtained (Fig.
\ref{fig:barcelona}, panel D4). An important feature of our algorithm is its ability to adjust \textit{a posteriori} the shape acceptance threshold to remove false detections, as is the case here, or to include more cylinders (as in Fig. \ref{fig:dupin}C).
\begin{figure} \centering \begin{minipage}{0.99\textwidth}\sidesubfloat[position=bottom]{\includegraphics[width=0.99\textwidth]{images/all_c.jpg}}\end{minipage} \begin{minipage}{0.94\textwidth}\sidesubfloat[position=bottom]{\includegraphics[width=0.99\textwidth]{images/OHSU_cylinders_pp2c.png}}\end{minipage} \begin{minipage}{0.99\textwidth}\sidesubfloat[position=bottom]{\begin{minipage}{0.49\textwidth}\includegraphics[width=0.99\textwidth]{images/OHSU_cylinders_cyl1.pdf} \includegraphics[width=0.99\textwidth]{images/OHSU_cylinders_cyl2.pdf} \end{minipage}\begin{minipage}{0.49\textwidth}\includegraphics[width=0.99\textwidth]{images/OHSU_cylinders_cyl3.pdf} \includegraphics[width=0.99\textwidth]{images/OHSU_cylinders_cyl4.pdf} \end{minipage}} \end{minipage} \hfill \caption{Example of cylinder reconstruction at a pipe-run network. (A) Photographs used for the reconstruction, acquired from an overlooking location and thus limiting the representation to only the upper half of the pipes. (B) 3D point cloud and optimal set of cylinders assigned during the approximation. (C) Detail of fitted cylinders demonstrating adequate recovery of cylindrical shapes even from an incomplete point cloud.} \label{fig:ohsu} \end{figure}
\begin{figure*} \centering \begin{minipage}[b]{0.49\textwidth}\sidesubfloat[position=bottom]{\includegraphics[width=0.9\textwidth]{images/barcelona_wide.jpg}\hfill }\end{minipage} \begin{minipage}[t]{0.49\textwidth}\sidesubfloat[position=bottom]{\includegraphics[width=0.9\textwidth,trim={0 0 0 3cm},clip]{images/barcelona_pp1c.png}\hfill }\end{minipage} \begin{minipage}{0.49\textwidth}\sidesubfloat[position=bottom]{\includegraphics[width=0.99\textwidth]{images/barcelona_pp2c.png}\hfill }\end{minipage} \begin{minipage}{0.49\textwidth}\sidesubfloat[position=bottom]{\begin{minipage}{0.49\textwidth}\includegraphics[width=0.99\textwidth]{images/barcelona_cyl1.pdf} \includegraphics[width=0.99\textwidth]{images/barcelona_cyl2.pdf} \end{minipage}\begin{minipage}{0.49\textwidth}\includegraphics[width=0.99\textwidth]{images/barcelona_cyl3.pdf} \includegraphics[width=0.99\textwidth]{images/barcelona_cyl4.pdf} \end{minipage}} \end{minipage} \caption{ Example of tree stem reconstruction on a test plot with challenging conditions. Photographs acquired along a circle around a targeted tree with a low-resolution (640x480 pixels) camera yielded a sparse and noisy point cloud. Panels (B) and (C) show that the entire tree trunk is successfully approximated by a collection of cylinders using a fully automated application of our evolutionary algorithm, without any human intervention or pre-treatment. Cross-sections of the framed area are shown in panel D. Panels D1, D2, and particularly D3 demonstrate that the trunk diameter is captured despite the numerous imperfections of the point cloud. A false positive is found on the ground, close to the tree (captured as frame 4). The cross-section in D4 shows that part of the cylinder does overlap with the points. } \label{fig:barcelona} \end{figure*}
\section{Discussion} Point clouds obtained from photographs via photogrammetry are imperfect models of reality: they contain noise and deformations and suffer from completeness issues. The objects represented rarely have regular shapes.
Our evolutionary optimization method largely resolves these limitations, is able to approximate and recover the underlying object geometry from point clouds, and can capitalize on \textit{a priori} knowledge of the geometric structure of scene objects. To the best of our knowledge, this is the first time a genetic strategy has been deployed to evolve the geometric properties of shapes. We extended the classical evolutionary paradigm on real-valued encodings by designing a set of spatial mutation operators, and we analyzed their contribution to the overall optimization performance. We have also identified a number of limitations associated with shape optimization in 3D, which we investigated through a series of synthetic experiments. Finally, we demonstrated the application of our approach to a set of real-world examples. Our optimization procedure is based on evolutionary algorithms and tackles the challenging problem of shape approximation from an incomplete 3D scene. Compared to general purpose mesh reconstruction procedures (such as the ball-pivoting algorithm or Poisson surface reconstruction, cf. graphical abstract), our shape-based method supports recovering the 3D structure of even partially-occluded objects. Because it does not require an iterative segmentation of the scene to isolate potential cylinder locations \citep[such as][]{schnabel2007efficient}, our method avoids artifacts leading to false positives (identification of non-existent cylinders) and false negatives (failure to identify existing cylinders). Unlike other approaches that rely on heuristics requiring complete cylinders \citep{qiu2014pipe}, our method is able to effectively recover cylindrical shapes from partially occluded objects (Fig. \ref{fig:completeness}). This study introduces a novel evolutionary algorithm able to fit a collection of shapes to a set of 3D points. Although we demonstrated the optimization of cylindrical shapes, our method can actually be applied to any shape. In particular, our method could be used to recover a 3D scene as a collection of composite shapes, pursuing essentially a goal similar to the primitive fitting approach of \cite{schnabel2007efficient}. One could expect from this potential extension of our work the same trade-offs observed with cylinders: namely, superior model accuracy at the expense of higher computational cost. 3D shapes require rigorous parameterization and depend on a set of customized mutation operators capable of efficiently exploring the search space. We proposed a set of such operators and we demonstrated how to rank them based on concepts originating from game theory (relative importance derived from Shapley values). In the interest of manipulating several types of primitive shapes at once, it is desirable to design a mutation operator that transforms a primitive shape of one type into the closest geometrical shape of another type (for example, a sphere into a cuboid). Further investigation on this topic is required. Keeping the search global comes at a computational price. Whereas typical shape recovery with RANSAC-based algorithms takes seconds to minutes on a standard computer, the genetic approach described here takes minutes to hours, and up to one day for large, high-density point clouds. This is on par with the computational cost required to obtain the point clouds from pictures using photogrammetry software. In addition, improvements targeting optimization speed-ups are possible.
Lowering the point density with an intelligent thinning operation can lead to substantial performance gains. Iteratively launching optimization runs on increasingly dense point clouds could further carry over the optimal parameters identified in prior runs and improve performance. Such a sequential fitting approach could be improved by adding a transferability objective relying on a surrogate model \citep[here, the less densely sampled point cloud; see also][]{pinville2011promote,koos2013transferability}. Alternatively, coupling the efficient but locally-based RANSAC search with our time-consuming but global genetic search is a promising idea. A simple hybrid scheme could involve segmenting the point cloud using the best-fitting cylinders of the genetic search, followed by a local RANSAC search for cylinders. \section*{Acknowledgments} I am thankful to Nikolay Strigul and Demetrios Gatziolis for acquiring financial support (joint-venture agreement between the USDA Forest Service Pacific Northwest Research Station and Washington State University) and for proofreading. DG formulated the overarching research goal of using Structure-from-Motion technology to assess understory vegetation dimensions. \bibliographystyle{apalike}
\section{Introduction} Tunnels \& Trolls~\cite{StAndre:1975} is one of the oldest published tabletop roleplaying games~\cite[footnote~814]{Peterson:2012}. Roleplaying games have become the subject of scholarly studies\footnote{In for example International journal of roleplaying~\url{http://journalofroleplaying.org/} and Analog game studies~\url{http://analoggamestudies.org/}.}, as the influence and study of video games have increased. Many tabletop roleplaying games use dice to create or represent uncertainty~\cite{Dormans:2006,Torner:2014}. Some of the schemes for rolling dice are mathematically nontrivial; we investigate one such scheme in the present article. In tabletop roleplaying games, most players take the role of fictional characters. On defining roleplaying games, see Arjoranta~\cite{Arjoranta:2011}; for a description, see Dormans~\cite{Dormans:2006}. In Tunnels \& Trolls, the player characters are described numerically by characteristics, which define for example how strong, lucky and intelligent the character is. In the 5.5th and later editions of Tunnels \& Trolls~\cite{StAndre:2005} the characteristics are determined by rolling dice with a process explained below. The starting characteristics are random, so we can hope to determine their expected value. This value is useful for a player of the game, since it allows quickly judging the worth of a new player character -- are its scores above or below average? A designer or game master might also consider various alternative methods of determining characteristics, and information about their average is helpful in the process. The expected values and distributions of various methods of rolling dice are often discussed by game designers and players of tabletop roleplaying games. Each characteristic is independently determined with the following process. \begin{enumerate} \item Roll three dice (ordinary six-sided dice). \item If the result is triples -- each of the dice shows the same value -- roll three additional dice. \item Continue until the newly rolled dice are not a triple. \item Sum the results of all the dice rolled thus far. \end{enumerate} \begin{example} Suppose we roll $(3, 3, 1)$. This is not a triple, so the value of the characteristic is $3+3+1 = 7$. \end{example} \begin{example} Suppose we roll $(5, 5, 5)$. This is a triple, so we roll again and get $(1,1,1)$. This is also a triple, so we roll yet again and get $(1, 2, 2)$. This is no longer a triple, so we sum everything rolled thus far: $5+5+5+1+1+1+1+2+2 = 23$. \end{example} The same rolling process can be used with an arbitrary number of dice, which may have an arbitrary number of sides. Indeed, the saving roll system in Tunnels \& Trolls uses the same process with two six-sided dice; there, doubles add and roll over. It turns out that the expected value of a characteristic is $54/5 = 10.8$, and the expected value of the dice roll in a saving roll is $42/5 = 8.4$. These values can be found as corollary~\ref{cor:expect_tt}. Several roleplaying games use various exotic dice (for example Pathfinder~\cite{Bulmahn:2009} and Dungeon Crawl Classics~\cite{Curtis:Goodman:Stroh:Zimmerman:2012}), which motivates us to ask how the number of sides of the dice influences the expectation. It is interesting to ask how significant it is to roll all the dice again when they match and add this to the previous result.
One way of answering this is checking how much the expected value of the result changes when matches are added and rolled over, when compared to the situation where only one set of dice is rolled and matching dice do not have any special meaning. It turns out that the difference in expectations vanishes in the limit of increasing number of dice or increasing number of sides in the dice, except when we roll precisely two dice, and let the number of sides they have increase to infinity. For the precise result, see theorem~\ref{thm:expect_diff}. \section{Formalization and calculation} We assume we are rolling $n \in \N$ dice, each of which has $s \in \Z_+$ sides. Let both $n$ and $s$ be fixed. The cases $s = 1$ or $n \in \joukko{0,1}$ are trivial, so by default we suppose $n,s \geq 2$. See remark~\ref{remark:trivial} for a precise formulation of the triviality. Each $s$-sided die is represented by a random variable $Z^l_j$ where the index $j \in \Z_+$ indicates which set of dice rolls is in question and $l \in \joukko{1,2,\ldots,n}$ indicates which of the dice in a given set of dice rolls is in question; this is made precise in equation~\eqref{eq:x} and the text below it. Each such random variable is a mapping $Z^l_j \colon \Omega \to \N$ such that for every $ m \in \joukko{1,\ldots,s}$ we have $\Prob \joukko{\omega \in \Omega; Z^l_j(\omega) = m} = 1/s$, and the probability of every other event is zero. We assume the die rolls, i.e.\ variables $Z^l_j$, are mutually independent. Let $M_0 = \Omega$ and $M_j$ be the event of the $j$th set of rolls being a match; that is, for $j \ge 1$, \begin{equation} M_j = \joukko{\omega \in \Omega; Z_j^1 = Z_j^2 = \cdots = Z_j^n}. \end{equation} Define the random variables $X_j$ as follows: $X_0 = 0$ and for $j \ge 1$ \begin{equation} \label{eq:x} X_j = \begin{cases} X_{j-1} + \sum_{l=1}^n Z_j^l \text{ when } \omega \in \bigcap_{k = 0}^{j-1} M_{k} \\ X_{j-1} \text{ otherwise.} \end{cases} \end{equation} The interpretation of these random variables in terms of dice rolls is that $X_1$ is the sum of the first set of dice rolls, $X_2$ allows for the possibility of matches and the corresponding additional dice rolls, $X_3$ allows for two sets of matches, and so on. Hence, we define \begin{equation} X^{n,s} = \lim_{j \to \infty} X_j, \end{equation} which is the final outcome of the entire dice rolling scheme. Note that the fixed constants $n$ and $s$ are implicit in the $X_j$ variables. We sometimes write them explicitly as superscripts. \begin{remark}\label{remark:trivial} We have $X^{0,s} = 0$ and $X^{1,s} = X^{n,1} = \infty$, where the free variables satisfy $n,s\geq 1$. \end{remark} \begin{remark} Characteristics in Tunnels \& Trolls are rolled with $X^{3,6}$ and saving rolls with $X^{2,6}$. In saving rolls there is an additional rule of automatic failure when the initial roll gives the result of three. If one wanted to assign it a numerical value of $0$ (an arbitrary choice which would not necessarily lead to failure in game), then the relevant expectation would have to be decreased by \begin{equation} \Prob\sulut{X^{2,6}=3}\cdot 3 = \frac{2}{36} \cdot 3 = 1/6. \end{equation} This type of adjustment is quite easy to do, but generalizing it to all of the situations covered in the paper is far from obvious, so we ignore it from now on.
We stress that the choice of zero here is both arbitrary and unsatisfactory; in theory, no finite numerical value leads to guaranteed failure, though large negative numbers such as $-1000$ would almost certainly guarantee failure in all practical game situations. \end{remark} We are interested in the expected value $\E \sulut{X^{n,s}}$, and the relation between it and $\E\sulut{X^{n,s}_1}$. That is, we want to know how much the expected value increases when matches allow rolling an additional set of dice. The following lemma follows from the linearity of expectation and the definition of the random variables: \begin{lemma}[Expectation without rerolls] For $n \in \N$ and $s \in \Z_+$ we have \begin{equation} \E \sulut{X^{n,s}_1} = n\sulut{s+1}/2. \end{equation} \end{lemma} \begin{corollary}[Expectations of dice rolls in Tunnels \& Trolls without rerolls] \begin{align} \E\sulut{X^{3,6}_1} = 21/2 \text{ and } \E\sulut{X^{2,6}_1} = 7. \end{align} \end{corollary} By monotone convergence, $\E (X^{n,s}) = \lim_{j \to \infty} \E(X_j) = \E(X_N)$, where \begin{equation} N(\omega) = \sup\joukko{j \in \N; \omega \in \bigcap_{k = 0}^{j-1} M_{k}} \end{equation} is the first time the dice do not match. Note that $N$ is finite almost surely. We now calculate the probability mass function~$f$ of $N$. The probability of rolling a match is $p = \sulut{\frac{1}{s}}^{n-1} = s^{1-n}$, so \begin{equation} \begin{split} f(j) &= \begin{cases} p^{j-1} \sulut{1-p} &\text{ for } j \geq 1 \\ 0 &\text{ for } j = 0. \end{cases} \end{split} \end{equation} On the other hand, \begin{equation} \E(X^{n,s}) = \E \sulut{\E\left[ X^{n,s} | N \right]} = \sum_{j=1}^\infty f(j) \E \left[ X_j | N = j \right]. \end{equation} To calculate this explicitly we need to know the value of the conditional expectation $\E \left[ X_j | N = j \right]$. For all $k < j$ we have $\omega \in M_k$, so $Z_k^1 = \ldots = Z_k^n$. Further, the set of random variables $\joukko{N}\cup \joukko{Z_l^1; l \in \N}$ is independent. Thus, \begin{equation} \begin{split} \E \left[ X_j | N = j \right] &= \E\left[\sum_{l=1}^n Z_j^l + \sum_{k=1}^{j-1} \sum_{l=1}^n Z_k^l \; \Big| N = j\right] \\ &= \E\left[\sum_{l=1}^n Z_j^l\; \Big| N = j\right] + \sum_{k=1}^{j-1}\E\left[ n Z_k^1 \; \Big| N = j\right]. \end{split} \end{equation} We calculate the first and the second part of the expectation separately, starting from the second sum: \begin{equation} \begin{split} \sum_{k=1}^{j-1}\E\left[ n Z_k^1 \; \Big| N = j\right] &= n\sum_{k=1}^{j-1}\E\sulut{ Z_k^1 } \\ &= n\sulut{j-1}\sulut{s+1}/2. \end{split} \end{equation} For the first sum we have \begin{equation} \E\left[\sum_{l=1}^n Z_j^l\; \Big| N = j\right] = \E\left[\sum_{l=1}^n Z_j^l\; \Big| \omega \in \bigcap_{k=0}^{j-1} M_k \setminus M_j \right].
\end{equation} By the independence of all the $Z$ variables, and in particular the independence of $Z_j$ from $M_k$ when $k < j$ and the independence of $M_k$ from $M_j$, we get \begin{equation} \begin{split} \E\left[\sum_{l=1}^n Z_j^l\; \Big| \omega \in \bigcap_{k=0}^{j-1} M_k \setminus M_j \right] &= \E\left[\sum_{l=1}^n Z_j^l\; \Big| \omega \in \Omega \setminus M_j \right] \\ &= \E\sulut{\sum_{\omega \in \Omega \setminus M_j} \sum_{l=1}^n Z_j^l(\omega) } / \Prob\sulut{\Omega \setminus M_j} \\ &= \E\sulut{\sum_{\omega \in \Omega} \sum_{l=1}^n Z_j^l(\omega) - \sum_{\omega \in M_j} \sum_{l=1}^n Z_j^l(\omega)} / \sulut{1-p} \\ &= \sulut{\E\sulut{ \sum_{l=1}^n Z_j^l} - \E\sulut{ \sum_{l=1}^n Z_j^l \I_{M_j}}} / \sulut{1-p} \\ &= \sulut{n\sulut{s+1}/2 - n\E\sulut{ Z_j^1} \Prob\sulut{M_j}} / \sulut{1-p} \\ &=n\sulut{s+1}/2. \end{split} \end{equation} Thus we have \begin{equation} \E \left[ X_j | N = j \right] = j n(s+1)/2, \end{equation} whence \begin{equation} \E(X^{n,s}) = \sum_{j=1}^\infty p^{j-1} \sulut{1-p} j n(s+1)/2. \end{equation} This is a series of the form $c\sum_{j=1}^\infty j a^{j-1}$, which has the value \begin{equation} c\sum_{j=1}^\infty j a^{j-1} = c\sulut{1-a}^{-2}. \end{equation} So, we have \begin{equation} \begin{split} \E\sulut{X^{n,s}} &= \E\sulut{X_1^{n,s}} / (1-s^{1-n}). \end{split} \end{equation} This proves the following theorem: \begin{theorem}[Expectation of the random sum] Suppose $n \geq 2$ and $s \geq 2$. Then \begin{equation} \E\sulut{X^{n,s}} = (1-s^{1-n})^{-1} n(s+1)/2. \end{equation} \end{theorem} \begin{corollary}[Expectations related to Tunnels \& Trolls] \label{cor:expect_tt} \begin{align} \E\sulut{X^{3,6}} = 54/5 \text{ and } \E\sulut{X^{2,6}} = 42/5. \end{align} \end{corollary} The identity in the theorem is consistent with the trivial identities $\E\sulut{X^{0,s}} = 0$, $\E\sulut{X^{1,s}} = \infty$ and $\E\sulut{X^{n,1}} = \infty$, and furthermore, as $n \to \infty$, we have \begin{equation} \begin{split} \E\sulut{X^{n,s}}-\E\sulut{X^{n,s}_1} &= \sulut{\frac{1}{1-s^{1-n}}-1}\frac{n(s+1)}{2} \\ &= \sulut{\frac{ns^{1-n}}{1-s^{1-n}}}\frac{s+1}{2} \to 0. \end{split} \end{equation} On the other hand, as $s \to \infty$, we have \begin{equation} \begin{split} \E\sulut{X^{n,s}}-\E\sulut{X^{n,s}_1} &= \sulut{\frac{s^{2-n}+s^{1-n}}{1-s^{1-n}}}\frac{n}{2} \to \begin{cases} 0 \text{ when } n \geq 3 \\ 1 \text{ when } n = 2. \end{cases} \end{split} \end{equation} Thus, we get the following theorem: \begin{theorem}[Limits of expectations] \label{thm:expect_diff} \begin{equation} \begin{split} \text{For } s\in \Z_+: \E\sulut{X^{0,s}}-\E\sulut{X^{0,s}_1} &= 0. \\ \text{For } s\in \Z_+: \E\sulut{X^{1,s}}-\E\sulut{X^{1,s}_1} &= \infty. \\ \text{For } n\in \Z_+: \E\sulut{X^{n,1}}-\E\sulut{X^{n,1}_1} &= \infty. \\ \text{For } s \ge 2:\lim_{n \to \infty} \sulut{\E\sulut{X^{n,s}}-\E\sulut{X^{n,s}_1}} &= 0. \\ \lim_{s \to \infty} \sulut{\E\sulut{X^{2,s}}-\E\sulut{X^{2,s}_1}} &= 1. \\ \text{For } n \ge 3 : \lim_{s \to \infty} \sulut{\E\sulut{X^{n,s}}-\E\sulut{X^{n,s}_1}} &= 0. \end{split} \end{equation} \end{theorem} \section{Conclusion} The effect of rolling again and adding dice is quite small, unless one is rolling two fairly small dice. The probability of very high results does grow from zero to a small but positive number, which might be relevant even in the absence of a large change in the expected value. \bibliographystyle{plain}
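The closed form obtained above is easy to check numerically. The following short Python sketch (included here only as an illustration; it is not part of the derivation) simulates the rolling scheme, drawing realisations of $X^{n,s}$ and comparing their Monte Carlo average with $\E\sulut{X^{n,s}} = (1-s^{1-n})^{-1}\, n(s+1)/2$ for the two Tunnels \& Trolls cases of corollary~\ref{cor:expect_tt}.
\begin{verbatim}
import random

def roll_sum(n, s, rng):
    # One realisation of X^{n,s}: roll n s-sided dice; on a match
    # (all dice equal) add the sum and roll again, otherwise stop.
    total = 0
    while True:
        dice = [rng.randint(1, s) for _ in range(n)]
        total += sum(dice)
        if len(set(dice)) > 1:
            return total

def exact_expectation(n, s):
    # E(X^{n,s}) = n(s+1)/2 / (1 - s^(1-n)), valid for n, s >= 2.
    return n * (s + 1) / 2 / (1 - s ** (1 - n))

rng = random.Random(2023)
for n, s in [(3, 6), (2, 6)]:
    trials = 200000
    average = sum(roll_sum(n, s, rng) for _ in range(trials)) / trials
    print(n, s, exact_expectation(n, s), average)
\end{verbatim}
The simulated averages should fall close to $54/5 = 10.8$ and $42/5 = 8.4$, respectively.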
\section{Introduction}\label{intro} \noindent Recently (\cite{ChasAs18} and \cite{Arxiv2}), we proposed new perspectives on relative finite elements accuracy based on a mixed geometrical-probabilistic interpretation of the error estimate derived from the Bramble-Hilbert lemma. \sa This led us to derive two laws of probability that estimate the relative accuracy, considered as a random variable, between two finite elements $P_{k_1}$ and $P_{k_2}$ ($k_1 < k_2$).\sa By doing so, we obtained new insights which showed, among other things, which of $P_{k_1}$ or $P_{k_2}$ is the most likely to be accurate, depending on the value of the mesh size $h$, which is no longer assumed to go to zero, as in the usual point of view.\sa These results have been obtained by considering a second-order elliptic variational problem set in the Sobolev space $H^1(\Omega)$. However, many partial differential equations are well posed in a more general class of Sobolev spaces, namely, $W^{m,p}(\Omega), (m,p)\in\N^{*2}$.\sa A possible application for studying the case $p \neq 2$ is the Laplace equation set in an open bounded domain $\Omega \subset \R^n$ with a given right-hand side $f \in L^{p}(\Omega), (p\ne 2)$. Indeed, in that case, the solution to the associated variational formulation, $u$, belongs to $W^{1,p}(\Omega)$ for $p\ne2$ if the domain $\Omega$ is regular enough: this problem is discussed in \cite{Brezis} (see the notes of the chapter on Sobolev spaces, where a reference to \cite{AgDn59} is quoted). Other examples may be found for instance in \cite{Gris92}, \cite{Haubxx}, \cite{KuMi12}, or in \cite{Lion69} for non-linear problems.\sa Here, we consider a functional framework defined with the help of $W^{m,p}$ Sobolev spaces, particularly when $p\ne 2$, and extend our previous work \cite{Arxiv2} limited to the case of the $H^1$ Hilbert space.\sa The paper is organized as follows. We recall in Section \ref{Second_Order_Elliptic} the mathematical problem we consider as well as the basic definitions of the functional tools we will need throughout the paper. In Section \ref{Pk_Properties}, we introduce $P_k(K)$, the space of polynomial functions defined on a given $n$-simplex $K$, of degree less than or equal to $k$. We then obtain several estimates to upper-bound the basis functions of $P_k(K)$ and their partial derivatives. We provide in Section \ref{explicit_estimate} results that make explicit the dependence of the constant involved in the \emph{a priori} $W^{m,p}-$error estimates with respect to the degree $k$ of the considered $P_k$ Lagrange finite element. Section \ref{FEM_accuracy} presents applications to the analysis of the relative finite elements accuracy in $W^{m,p}$. In particular, extending to $W^{m,p}$ spaces the two generalized probabilistic laws introduced in \cite{ChasAs18}, we prove, relying on the theory of distributions, and under some {\em ad hoc} assumptions that are fulfilled in many cases, that an asymptotic relation exists between these two laws. Concluding remarks follow. \section{The abstract problem}\label{Second_Order_Elliptic} \noindent In this section, we introduce the abstract framework we will use to derive error estimates in the $W^{m,p}$ Sobolev spaces, particularly in the non-standard cases $p \neq 2$, corresponding to non-Hilbert spaces. As a consequence, we need a well-posedness result based on a stability (or inf-sup) condition extended to non-Hilbert spaces.
For the error analysis, we will also need an extension of C\'ea's Lemma to Banach spaces, devoted to the approximation of the abstract problem using a Galerkin method.\sa \noindent In order to provide sufficient background for a reader who is not familiar with these methods to understand the approach as a whole, we recall here some fundamental results. To this end, we basically follow the presentation and the terminology proposed in the book by A. Ern and J. L. Guermond \cite{Ern_Guermond}. The book of Brenner et al. \cite{BrSc08}, which goes back to a paper by Rannacher and Scott \cite{RaSc82}, can also provide helpful references. A well-informed reader may skip to subsection \ref{example}. \subsection{Preliminary results}\label{basicBanach} \noindent Let $W$ and $V$ be two Banach spaces equipped with their norms $\|.\|_W$ and $\|.\|_V$, respectively. In addition, $V$ is assumed to be reflexive. Let $u \in W$ be the solution to the variational formulation \begin{equation}\label{VP} \left\{ \begin{array}{l} \mbox{Find } u \in W \mbox{ solution to:} \\ [0.1cm] a(u,v) = l(v), \quad\forall v \in V, \end{array} \right. \end{equation} where $l$ is a continuous linear form on $V$, and $a$ is a continuous bilinear form on $W \times V$, i.e. $$ \forall (u,v)\in W\times V,\, |a(u,v)|\leq\|a\|_{W,V}\|u\|_W\|v\|_V, $$ with $\D\|a\|_{W,V}\equiv\inf\left\{C\in\R^{*}_{+},\forall (u,v)\in W\times V: |a(u,v)|\leq C\|u\|_W\|v\|_V\right\}$.\label{Norme_a} Assuming that \begin{description} \item[\textbf{(BNB1)}] \hspace{4cm}$\D \exists \alpha > 0, \hs \inf_{w\in W}\sup_{v\in V}\frac{a(w,v)}{\|w\|_{W}\|v\|_{V}} \geq \alpha $,\vspace{0.2cm} \item[\textbf{(BNB2)}] \hspace{3cm}$\forall v\in V, (\forall w\in W, a(w,v)=0)\Longrightarrow (v=0)$\,, \end{description} one can prove that problem (\ref{VP}) has one and only one solution in $W$ (see \cite{Ern_Guermond}, Theorem 2.6), where \textbf{(BNB1)-(BNB2)} refer to the Banach-Necas-Babuska conditions. \sa Now, let us introduce the approximation $u_{h}$ of $u$, solution to the approximate variational formulation \begin{equation}\label{VP_h} \left\{ \begin{array}{l} \mbox{Find } u_{h} \in W_h \mbox{ solution to:} \\ a(u_{h},v_{h}) = l(v_{h}),\quad \forall v_{h} \in V_h, \end{array} \right. \end{equation} where $W_h \subset W$ and $V_h \subset V$ are two finite-dimensional subspaces of $W$ and $V$. As noted in \cite{Ern_Guermond} (Remark 2.23, p.92), neither condition \textbf{(BNB1)} nor condition \textbf{(BNB2)} implies its discrete counterpart. The well-posedness of (\ref{VP_h}) is thus equivalent to the two following discrete conditions: \begin{description} \item[(BNB1$_h$)] \hspace{4cm}$\D \exists \alpha_h > 0, \hs \inf_{w_h\in W_h}\sup_{v_h\in V_h}\frac{a(w_h,v_h)}{\|w_h\|_{W_h}\|v_h\|_{V_h}} \geq \alpha_h$,\vspace{0.2cm} \item[(BNB2$_h$)] \hspace{3cm}$\forall v_h\in V_h, (\forall w_h\in W_h, a(w_h,v_h)=0)\Longrightarrow (v_h=0)$. \end{description} \vs From now on, we assume hypotheses \textbf{(BNB1)-(BNB2)} and \textbf{(BNB1$_h$)-(BNB2$_h$)} which guarantee the well-posedness of (\ref{VP}) and (\ref{VP_h}).\sa The last key ingredient we need for the error estimates is the following generalized C\'ea's Lemma \cite{Ern_Guermond} valid in Banach spaces: \begin{lemma}\label{Lemme_Cea} \textbf{(C\'ea).} Assume that $V_h\subset V$, $W_h\subset W$ and dim$(W_h)$ = dim$(V_h)$. Let $u$ solve the problem (\ref{VP}) and $u_h$ the problem (\ref{VP_h}).
Then, the following error estimate holds: \begin{equation}\label{Cea_Banach} \D\|u-u_h\|_{W} \leq \left(1+\frac{\|a\|_{W,V}}{\alpha_h}\right)\inf_{w_h\in W_h}\|u-w_h\|_{W}. \end{equation} \end{lemma} \noindent In the rest of this paper, we will consider the variational formulation (\ref{VP}) and its approximation (\ref{VP_h}) in the case where the Banach space $W$ and the reflexive Banach space $V$ are chosen as \begin{equation}\label{V_and_W} W\equiv W^{m,p}(\Omega) \mbox{ and } V\equiv W^{m',p'}(\Omega)\,. \end{equation} Above, $m$ and $m'$ are two nonzero integers, $p$ and $p'$ two real positive numbers satisfying $p\ne 2$ and $p'>1$ with \begin{equation}\label{Conjugated} \D \frac{1}{p}+\frac{1}{p'}=1. \end{equation} As usual, for any integer $m$ and $1 < p < +\infty$, $W^{m,p}(\Omega)$ denotes the Sobolev space of (class of) real-valued functions which, together with all their partial distributional derivatives of order less or equal to $m$, belong to $L^p(\Omega)$: $$ \D W^{m,p}(\Omega) = \left\{\!\!\frac{}{}u \in L^p(\Omega)\,/\,\forall\, \alpha, |\alpha|\leq m, \partial^{\alpha}u\in L^p(\Omega)\right\}, $$ $\alpha=(\alpha_1, \alpha_2, \ldots, \alpha_n) \in \N^{n}$ being a multi-index whose length $|\alpha|$ is given by $|\alpha|=\alpha_1+\dots+\alpha_n$, and $\partial^{\alpha}u$ the partial derivative of order $|\alpha|$ defined by: $$ \D \partial^{\alpha}u \equiv \frac{\partial^{|\alpha|}u}{\partial x_{1}^{\alpha_1}\dots\partial x_{n}^{\alpha_n}}. $$ The norm $\|.\|_{m,p,\Omega}$ and the semi-norms $|.|_{l,p,\Omega}$ are respectively defined by: $$ \D \forall u \in\,W^{m,p}(\Omega): \|u\|_{m,p,\Omega} = \left(\sum_{|\alpha|\leq m}\|\partial^{\alpha}u\|^{p}_{L^p}\right)^{1/p}, \hs\hs|u|_{l,p,\Omega} = \left(\sum_{|\alpha|= l}\|\partial^{\alpha}u\|^{p}_{L^p}\right)^{1/p}, 0 \leq l \leq m, $$ where $\|.\|_{L^p}$ denotes the standard norm in $L^p(\Omega)$. \sa \subsection{A simple example}\label{example} \noindent We illustrate below, through an elementary example, the choice of the spaces $W$ and $V$ defined by (\ref{V_and_W}).\sa Let $f$ be a given function that belongs to $L^{p}(]0,1[), (p\ne 2),$ and $u\in W^{2,p}(]0,1[)$ solution to: $$ \left\{ \begin{array}{l} -u''(x) + u(x) = f(x), x \in ]0,1[, \\[0.1cm] u(0)=u(1)=0. \end{array} \right. $$ The corresponding variational formulation is given by: \begin{equation}\label{VP_0} \left\{ \begin{array}{l} \mbox{Find } u \in W^{1,p}_{0}(]0,1[), \mbox{ solution to:} \\ [0.1cm] \D \int_{0}^{1}\left[ u'(x)v'(x)+u(x)v(x)\right] dx = \int_{0}^{1}f(x)v(x) \,dx , \forall v \in W^{1,p'}_{0}(]0,1[), \end{array} \right. \end{equation} where $p$ and $p'$ satisfy (\ref{Conjugated}), and $W^{1,p}_{0}(]0,1[)$ denotes the space of functions $w$ of $W^{1,p}(]0,1[)$ such that $w(0)=w(1)=0$. \begin{remark}$\frac{}{}$ \begin{itemize} \item First of all, we notice that all the integrals in (\ref{VP_0}) are bounded due to H\"older's inequality. \item Second, taking for example $p=3/2$ and $p'=3$, the corresponding spaces $W$ and $V$ introduced above are equal to $W=W^{1,3/2}_{0}(]0,1[)$ and $V=W^{1,3}_{0}(]0,1[)$, which are respectively a Banach space and a reflexive Banach space, as required.
\end{itemize} \end{remark} \noindent In the rest of the paper, we shall assume that $\Omega$ is an open subset in $\R^n$, exactly covered by a mesh ${\mathcal T}_h$ composed of $N_K$ $n$-simplexes $K_{\mu}, (1\leq \mu \leq N_K),$ which respect classical rules of regular discretization (see for example \cite{ChaskaPDE} for the bidimensional case, or \cite{RaTho82} in $\R^n$). Moreover, we denote by $P_k(K_{\mu})$ the space of polynomial functions defined on a given $n$-simplex $K_{\mu}$ of degree less than or equal to $k$, ($k \geq$ 1). \sa Henceforth, we assume that the approximate spaces $W_h$ and $V_h$, satisfying dim$(W_h)$ = dim$(V_h)$, are included in the space of functions defined on $\Omega$, composed of polynomials belonging to $P_k(K_{\mu}), (1 \leq \mu \leq N_K)$. As a consequence, $W_h \subset W^{m,p}(\Omega)$ and $V_h \subset W^{m',p'}(\Omega)$.\sa In the following section, we derive appropriate estimates related to the canonical basis of $P_k(K_\mu)$. This will in turn enable us to make explicit the dependence on $k$ of the constant involved in the \emph{a priori} error estimates in $W^{m,p}(\Omega)$. \section{Properties of Lagrange finite element $P_k$}\label{Pk_Properties} \noindent In this section we follow the definitions and properties of the $P_k$ finite element in $\R^n$ described by P. A. Raviart and J. M. Thomas in \cite{RaTho82}. \sa Let us consider an $n$-simplex $K \subset \R^n$ which belongs to a regular mesh ${\mathcal T}_h$. Since a complete polynomial of order $k$ which belongs to $P_k(K)$ contains \begin{equation}\label{Dim_Pk} \D N \equiv \left( \begin{array}{c} n+k \\ n \end{array} \right) = \frac{(n+k)!}{n!\,k!} \end{equation} terms, each $n$-simplex element of the mesh ${\mathcal T}_h$ must be associated with $N$ independent specifiable parameters, or degrees of freedom, to ensure the unisolvence of the finite element \cite{RaTho82}. \sa Then, it is convenient to carry out the analysis of $n$-simplexes in terms of the so-called $n$-simplex barycentric coordinates $\lambda_1,\dots,\lambda_{n+1}$ which satisfy $\D \sum_{i=1}^{n+1}\lambda_i=1$.\sa A regularly spaced set of points $M_{i_1,\dots,i_{n+1}}$ can be defined in an $n$-simplex $K$ by the barycentric coordinate values $\D M_{i_1,\dots,i_{n+1}} = \left(\frac{i_1}{k},\dots,\frac{i_{n+1}}{k}\right), \hs 0 \leq i_1,\dots,i_{n+1} \leq k$ satisfying \begin{equation}\label{sum_ij_egal_k} i_1 + \dots +i_{n+1}=k. \end{equation} One can check that the number of points defined in this way is equal to $N$, the dimension of $P_k(K)$ given by (\ref{Dim_Pk}). \sa Therefore, we introduce the canonical basis of functions $p_{i_1,\dots,i_{n+1}}$ of the variables $(\lambda_1, \dots, \lambda_{n+1})$ which belong to $P_k(K)$, defined by: \vspace{-0.3cm} \begin{equation}\label{shape_function} \D p_{i_1, \dots, i_{n+1}}(\lambda_1, \dots, \lambda_{n+1}) \equiv \prod_{j=1}^{n+1}P_{i_j}(\lambda_j), \end{equation} where the auxiliary polynomials $P_{i_j}(\lambda_j)$ are given by: \begin{equation}\label{P_ij} \D \D P_{i_j}(\lambda_j) \equiv \left | \begin{array}{ll} \hs \D\prod_{c_j=1}^{i_j}\left(\frac{k \lambda_j - c_j +1}{c_j}\right), & \mbox{ if } \hs i_j \geq 1, \vspace{0.1cm} \\ \hs 1, & \mbox{ if } \hs i_j = 0. \end{array} \right. \end{equation} $P_{i_j}$ is clearly a polynomial of order $i_j$ in $\lambda_j$, and therefore, due to condition (\ref{sum_ij_egal_k}), $p_{i_1, \dots, i_{n+1}}$ given by (\ref{shape_function}) is a polynomial of order $k$.
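To illustrate definitions (\ref{shape_function})-(\ref{P_ij}), the following short Python sketch (a minimal illustration written for this presentation; it is not used in the sequel) builds the points $M_{i_1,\dots,i_{n+1}}$ and the associated basis functions for a triangle ($n=2$) with $k=3$, and checks numerically that each basis function equals one at its own point and vanishes at all the other points, which is the characteristic property recalled below.
\begin{verbatim}
from itertools import product
from math import isclose

def P_aux(i_j, lam, k):
    # Auxiliary polynomial P_{i_j}(lambda_j): the product of
    # (k*lambda_j - c + 1)/c for c = 1, ..., i_j, and 1 if i_j = 0.
    val = 1.0
    for c in range(1, i_j + 1):
        val *= (k * lam - c + 1) / c
    return val

def p_basis(index, lambdas, k):
    # Canonical basis function p_{i_1,...,i_{n+1}}: the product of
    # the auxiliary polynomials over the n+1 barycentric coordinates.
    val = 1.0
    for i_j, lam in zip(index, lambdas):
        val *= P_aux(i_j, lam, k)
    return val

n, k = 2, 3
indices = [idx for idx in product(range(k + 1), repeat=n + 1) if sum(idx) == k]
nodes = [tuple(i / k for i in idx) for idx in indices]  # barycentric coordinates
print("N =", len(indices))                              # (n+k)!/(n! k!) = 10
ok = all(isclose(p_basis(idx, node, k), 1.0 if a == b else 0.0, abs_tol=1e-12)
         for a, idx in enumerate(indices)
         for b, node in enumerate(nodes))
print("p_i(M_j) = delta_ij:", ok)
\end{verbatim}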
\sa In the sequel, we will also use a single-index numbering in place of the multi-index one. It will be the case for the $N$ points $M_{i_1,\dots,i_{n+1}}$ simply denoted $(M_i)_{i=1,N}$, as well as for the $N$ canonical functions $p_{i_1,\dots,i_{n+1}}$ denoted $(p_i)_{i=1,N}$, and so on. \sa Let us also remark that each polynomial $p_i$ defined by (\ref{shape_function})-(\ref{P_ij}) is characteristic of the corresponding point $M_i$. That is to say that we have the following property (see \cite{RaTho82}): $$ \forall\, i,j=1\mbox{ to } N: p_i(M_j)=\delta_{ij}. $$ Therefore, for a given set of $N$ values $\varphi_i \equiv \varphi_{{i_1, \dots, i_{n+1}}}$ known at the $N$ points $M_i \equiv M_{i_1, \dots, i_{n+1}}$, the polynomial $Q$ in $P_k(K)$ given by: \begin{eqnarray*} \D\forall M \in K: Q(M) & = & Q(\lambda_1, \dots, \lambda_{n+1}), \nonumber\\[0.1cm] & = & \hspace{-0.7cm}\sum_{i_1 + \dots + i_{n+1}=k} \hspace{-0.5cm}\varphi_{i_1, \dots, i_{n+1}}\,p_{i_1, \dots, i_{n+1}} \! (\lambda_1, \dots, \lambda_{n+1}) \, = \, \sum_{i=1}^{N} \varphi_i p_i(\lambda_1, \dots, \lambda_{n+1}), \end{eqnarray*} is the unique one in $P_k(K)$ such that $Q(M_i)= \varphi_i$. \sa \noindent The following lemma gives the first point-to-point estimates for the polynomials $p_i$ defined by (\ref{shape_function})-(\ref{P_ij}). \begin{lemma} Let $(p_i)_{i=1,N}$ be the canonical basis functions of the space of polynomials $P_k(K)$ which are defined by (\ref{shape_function})-(\ref{P_ij}). \sa Then: \begin{equation}\label{deriv_part_ordre_m_pi_vs_lambda_l} \D|p_i(\lambda_1, \dots, \lambda_{n+1})|\leq k^{n+1}, \hs \forall\, r \in \N^*: \left|\frac{\partial^{\,r} p_i}{\partial\lambda_{q_{1}}\dots\partial\lambda_{q_{r}}}(\lambda_1, \dots, \lambda_{n+1})\right| \leq k^{r(n+2)}, \end{equation} where $(q_{1}, q_{2}, \ldots, q_{r}) \in \N^r$. \end{lemma} \begin{prooff} $\frac{}{}$ \sa This lemma generalizes lemma 5.2 of \cite{Arxiv2} in the case where $p$ is not necessarily equal to $2$. Hence, we will only provide a sketch of the proof, and refer the interested reader to this reference for details. \sa \noindent The logical sequence of the proof can be summarized as follows:\sa \noindent $\blacktriangleright$ First, examine the upper bound of the basis functions $p_i, (i= 1,\dots, N)$. \sa This requires introducing the integer $n_i$ $(0 \leq n_i\!\leq n+1)$ corresponding to the number of polynomials $P_{i_j}(\lambda_j)$ such that: \begin{eqnarray*} \forall j=1,\dots,n_i, \,(n_i \geq 1),\,& : & P_{i_j}(\lambda_j)=P_{1}(\lambda_j)=k\lambda_j, (i_j=1), \\[0.1cm] \forall j=n_i+1,\dots,n+1, \,(n_i \leq n)\, & : & P_{i_j}(\lambda_j)=\frac{k\lambda_j(k\lambda_j - 1)\dots(k\lambda_j - i_j+1)}{i_j!}, (i_j>1). \end{eqnarray*} Using the fact that the structure of $p_i$ depends on the value of $n_i$, ($n_i=0$, $1 \leq n_i \leq n$ or $n_i=n+1$), one obtains in all cases that $$ \D |p_i(\lambda_1,\dots,\lambda_{n+1})| \leq k^{n+1}. $$ \noindent $\blacktriangleright$ Consider next $r=1$, which corresponds to upper-bounding the partial derivative $\D\frac{\partial p_i}{\partial\lambda_q}$, for a given pair of non-zero integers $(i,q)$. \sa Once again, depending on the value of $n_i$, we obtain different estimates, the most restrictive being $$ \D\left|\frac{\partial p_i}{\partial\lambda_q}\right| \leq k^2 \, k^{n_i} \leq k^{n+2}. $$ \noindent $\blacktriangleright$ Finally, handle the partial derivative of $p_i$ of order $r$ with respect to $\lambda_{q_1},\dots,\lambda_{q_r}$.
\sa To this end, we basically use the upper bound and remark that any first-order partial derivative of $p_i$ with respect to a given $\lambda_q$ will bring a term in $k^{n+2}$; this leads to: $$ \D\forall r \in \N^*: \left|\frac{\partial^{r} p_i}{\partial\lambda_{q_1}\dots\partial\lambda_{q_r}}\right| \leq \left(k^{n+2}\right)^{r}=k^{r(n+2)}, $$ which corresponds to the second inequality of (\ref{deriv_part_ordre_m_pi_vs_lambda_l}). \end{prooff} We can now prove the following theorem in order to obtain the estimate for the canonical basis $(p_i)_{i=1,N}$ with respect to the semi-norms $|.|_{l,p,K}$. \begin{theorem}\label{Estimation_pi} Let $\rho$ be the diameter of the largest ball that can be inscribed in $K$. Let $(p_i)_{i=1,N}$ be the canonical basis of $P_k(K)$ defined in (\ref{shape_function}), $(k,l,n)$ three integers and $p$ a positive real number such that: \begin{equation}\label{parameter_condition_0} \D k+1 > l + \frac{n}{p},\, (0 < p < +\infty). \end{equation} Then, there exists two positive constants $C_0$ and $C_l$ independent of $k$ such that \begin{equation}\label{Norm_0_2_and_Norm_1_2} \D\forall p\in\R^*\!\!: |p_i|_{0,p,K} \leq C_0\,k^{n+1} \,\mbox{ and }\,\, \forall l\in\N^{*}\!\!:|p_i|_{l,p,K} \leq C_l\,\frac{k^{l(n+2)}}{\rho^{\,l}}\,. \end{equation} \end{theorem} \begin{prooff} Let us consider the canonical basis of polynomials $(p_i)_{i=1,N}$ of $P_k(K)$ defined by (\ref{shape_function}) and (\ref{P_ij}). \sa Then, due to remark 2.2 in R. Arcangeli and J. L. Gout \cite{Arcangeli_Gout}, for each polynomial $p_i$, we have for all $l\geq 0$ for which (\ref{parameter_condition_0}) holds : \begin{equation}\label{Norm_Grad_L2_pi_0} \D |p_i|_{l,p,K} \leq \frac{1}{\rho^{\,l}}\left\{\int_{K}\left[\sum_{|\alpha|=l}\frac{l!}{\alpha !}\left|\partial^\alpha p_i(x)\right|\right]^{p}\!\!\!dx\right\}^{\frac{1}{p}} = \frac{1}{\rho^{\,l}}\left\{\int_{K}\left[\sum_{|\alpha|=l}\frac{l!}{\alpha !}\left|\frac{\partial^{|\alpha|} p_i(x)}{\partial x_{1}^{\alpha_1}\dots\partial x_{n}^{\alpha_n}}\right|\right]^{p}\!\!\!dx\right\}^{\frac{1}{p}}\!\!, \end{equation} where $\alpha != \alpha_1 !\dots\alpha_n !$ and $\rho$ is the supremum of the diameters of the inscribed spheres within the $n$-simplex $K$.\sa So, when $l=0$, (\ref{Norm_Grad_L2_pi_0}) together with the first inequality of (\ref{deriv_part_ordre_m_pi_vs_lambda_l}) directly leads to: \begin{equation}\label{Norm_Grad_L2_pi_0_l=0} \D |p_i|_{0,p,K} \leq \left\{\int_{K}\left|p_i(x)\right|^{p}dx\right\}^{\frac{1}{p}} \leq \mbox{ mes}(K)^{1/p}\,k^{n+1}, \end{equation} which corresponds to the first part of (\ref{Norm_0_2_and_Norm_1_2}) with $C_0=\mbox{ mes}(K)^{1/p}$.\sa Let us now consider the case where $l\geq 1$. Here, each first-order partial derivative $\D \frac{\partial p_i}{\partial x_j}$ can be written as \begin{equation}\label{D_pi_D_x_j} \D \frac{\partial p_i}{\partial x_j} = \sum_{q=1}^{n+1}\frac{\partial p_i}{\partial \lambda_q}\frac{\partial \lambda_q}{\partial x_j}, \end{equation} where $\D \frac{\partial \lambda_q}{\partial x_j}$ is a constant denoted $\Lambda^{q}_j$ which does not depend on $k$, since $\lambda_q$ is a polynomial of degree one, and we rewrite (\ref{D_pi_D_x_j}) as: \begin{equation} \D \frac{\partial p_i}{\partial x_j} = \sum_{q=1}^{n+1}\Lambda^{q}_j\frac{\partial p_i}{\partial \lambda_q}. 
\end{equation} Therefore, in the same way, the second-order partial derivatives are given by: $$ \D \frac{\partial^2 p_i}{\partial x_j\partial x_k} = \sum_{q_1=1}^{n+1}\sum_{q_2=1}^{n+1}\Lambda^{q_1}_{j}\Lambda^{q_2}_{k}\frac{\partial^2 p_i}{\partial\lambda_{q_1}\partial\lambda_{q_2}}, $$ and more generally for any non zero multi-index $\alpha=(\alpha_1,\dots,\alpha_n)$ whose length is denoted $|\alpha|$, we get: \begin{eqnarray*} \D \hspace{-0.7cm}\frac{\partial^{|\alpha|} p_i}{\partial x_{1}^{\alpha_1}\dots\partial x_{n}^{\alpha_n}} \hspace{0.3cm} & \, = \, & \,\dots \nonumber \\[0.2cm] \D \hspace{-0.6cm}\left(\sum_{q^{1}_1=1}^{n+1}\hspace{-0.2cm}\dots\hspace{-0.2cm}\sum_{q^{1}_{\alpha_1}=1}^{n+1}\hspace{-0.1cm}\right)\hspace{-0.1cm}\dots\hspace{-0.1cm} \left(\sum_{q^{n}_1=1}^{n+1}\hspace{-0.2cm}\dots\hspace{-0.2cm}\sum_{q^{n}_{\alpha_n}=1}^{n+1}\hspace{-0.1cm}\right) & & \hspace{-1.3cm}\left(\Lambda^{q^{1}_1}_{1}\hspace{-0.1cm}\dots\hspace{-0.1cm}\Lambda^{q^{1}_{\alpha_1}}_{1} \hspace{-0.2cm}\dots\hspace{-0.1cm}\Lambda^{q^{n}_{1}}_{n}\hspace{-0.2cm}\dots\hspace{-0.1cm}\Lambda^{q^{n}_{\alpha_n}}_{n}\right) \hspace{-0.1cm}\frac{\partial^{|\alpha|}p_i}{\left(\partial\lambda_{q^{1}_1}\dots\partial\lambda_{q^{1}_{\alpha_1}}\right) \dots\left(\partial\lambda_{q^{n}_1}\dots\partial\lambda_{q^{n}_{\alpha_n}}\right)}. \end{eqnarray*} Now, by using the second estimate of (\ref{deriv_part_ordre_m_pi_vs_lambda_l}) where we set $r=|\alpha|$, this gives the following estimate: \begin{equation}\label{derive_pi_majo_1} \D \forall \alpha\in\N^n,\,|\alpha|>0: \left|\frac{\partial^{|\alpha|} p_i(x)}{\partial x_{1}^{\alpha_1}\dots\partial x_{n}^{\alpha_n}}\right| \leq \left[(n+1)\Lambda\right]^{|\alpha|}k^{|\alpha|(n+2)}, \end{equation} where we set $\D\Lambda\equiv\max_{(j,q)}\Lambda^{q}_{j}$, $(j,q)\in \{1,\dots,n\}\times\{1,\dots n+1\}$ .\sa Finally, from (\ref{derive_pi_majo_1}) we can derive: \begin{equation}\label{derive_pi_majo_3} \D \forall l\in\N^*: \sum_{|\alpha|=l}\frac{l!}{\alpha !}\left|\frac{\partial^{|\alpha|} p_i(x)}{\partial x_{1}^{\alpha_1}\dots\partial x_{n}^{\alpha_n}}\right| \leq \left[(n+1)\Lambda\right]^{l}l!\,n^l k^{l(n+2)}, \end{equation} since $n^l$ corresponds to the number of partial derivatives of order $l$ in $\R^n$ for the polynomials $p_i$. \sa Therefore, one can estimate the $|.|_{l,p,K}-$norm for each polynomial $p_i, (1 \leq i\leq N),$ due to (\ref{Norm_Grad_L2_pi_0}) and (\ref{derive_pi_majo_3}), and finally obtain: \begin{equation}\label{Norm_L2_pi_00_1} \D \forall l\in\N^*: |p_i|_{l,p,K} \leq \left[\frac{\left[n(n+1)\Lambda\right]^{l} l!\,\mbox{ mes}(K)^{1/p}}{\rho^{\,l}}\right]k^{l(n+2)}, \end{equation} which corresponds to the second part of (\ref{Norm_0_2_and_Norm_1_2}), with $C_l=\left[n(n+1)\Lambda\right]^{l} l!\,\mbox{ mes}(K)^{1/p}$. \end{prooff} \section{Explicit $k-$dependence in \emph{a priori} $P_k$ finite element error estimates}\label{explicit_estimate} \vspace{-0.25cm} \noindent We are now in a position to derive a $k$-explicit dependence of the constant involved in a $W^{m,p}$ \emph{a priori} error estimate for $P_k$ Lagrange finite elements.\\[0.2cm] This is the purpose of the following theorem: \begin{theorem}\label{Thm_error_estimate} Let the hypothesis of C\'ea's Lemma \ref{Lemme_Cea} hold with $W$and $V$ defined by (\ref{V_and_W}). 
Let $(k,m,n)$ be three integers and $p$ a positive real number satisfying \begin{eqnarray} \D \mbox{if }\hspace{0.2cm} \frac{n}{p} < 1 & \mbox{then} & m \leq k, \label{cond_parametre_1}\\[0.2cm] \D \mbox{if }\hspace{0.2cm} \frac{n}{p} \geq 1 & \mbox{then} & m \leq k-1 \,\mbox{ and }\, k+1-\frac{n}{p}>0. \label{cond_parametre_2} \end{eqnarray} Suppose that the approximation $u_h \in W_h$ is a continuous piecewise function composed of polynomials which belong to $P_k(K_{\mu}), (1 \leq \mu \leq N_K)$. \sa If the exact solution $u$ to problem (\ref{VP}) belongs to $W^{k+1,p}(\Omega)$, the approximation $u_h$, solution to problem (\ref{VP_h}), converges to $u$ in $W^{m,p}(\Omega)$ when $h$ goes to zero, and we have \vspace{0.1cm} \begin{equation}\label{estimation_error} \|u_h-u\|_{m,p,\Omega} \hs \leq \hs \mathscr{C}_k\,h^{k+1-m} \, |u|_{k+1,p,\Omega}\,, \end{equation} \vspace{0.1cm} where $\mathscr{C}_k$ is a positive constant independent of $h$ defined by: \begin{equation}\label{C_k_Estimation} \D \mathscr{C}_k \, = \, C \frac{(k+n)^{n}\,k^{\,m(n+2)}}{(k-m)!\left(\!\!k+\!1-m-\D\frac{n}{p}\right)}\,. \end{equation} Above, $C$ is a positive constant which does not depend on $k$. \end{theorem} \begin{prooff} The proof of this theorem is based on the paper of R. Arcangeli and J.L. Gout \cite{Arcangeli_Gout}, itself an extension of the paper by P.G. Ciarlet and P.A. Raviart \cite{Ciarlet_Raviart}. \sa To this end, let us first recall the conditions of theorem 2.1 of R. Arcangeli and J.L. Gout. \sa Let $\Omega$ be an open, bounded and non-empty convex subset of $\R^n$ and $\Gamma$ its boundary. Let us denote by $P_k$ the space of polynomial functions of degree less than or equal to $k$. We assume that $\Sigma=\{a_i\}_{i=1,N}$ is a $P-$unisolvent set of points which belong to $\bar{\Omega}$, where $P$ denotes a finite-dimensional space of dimension $N$ composed of functions defined on $\bar{\Omega}$ such that $P_k \subset P \subset C^{k}(\bar{\Omega})$.\sa Then, for all $u\in W^{k+1,p}(\Omega)$ and for every integer $l \geq 0$ such that \begin{equation}\label{parameter_condition} \D k+1 > l + \frac{n}{p}, \end{equation} we have: \begin{eqnarray} \D |u -\Pi_h u|_{l,p,\Omega} & \, \leq \, & \frac{1}{(k-l)!}\frac{1}{\left(k+1-l-\D\frac{n}{p}\right)}|u|_{k+1,p,\Omega}\,h^{k+1-l} \nonumber \\[0.2cm] & \, + \, & \frac{1}{(\mbox{mes }\Omega)^{1/p}}\frac{1}{k!} \frac{1}{\left(k+1-\D\frac{n}{p}\right)}\left(\sum_{i=1}^{N}|p_i|_{l,p,\Omega} \right)|u|_{k+1,p,\Omega}\,h^{k+1}, \label{Estimation_Arc_Gout_000} \end{eqnarray} where $\Pi_h$ is the classical Lagrange interpolation which consists in interpolating the set of points $\Sigma$ in $\R^n$ by a polynomial function of a given degree $k$, and $(p_i)_{i=1,N}$ are the unique functions such that $$p_{i}(M_j)=\delta_{ij}, \forall\, M_j \in \Sigma, \forall \, 1\leq i, j\leq N,$$ where $\delta_{ij}$ denotes the classical Kronecker symbol. \sa First of all, let us remark that, since we are interested in getting an \emph{a priori} error estimate in $W^{m,p}(\Omega)$ for the exact solution $u$ to the variational formulation defined in (\ref{VP}), we will need to write estimates (\ref{Estimation_Arc_Gout_000}) for all values of $l$ between 0 and $m$. It means that condition (\ref{parameter_condition}) also needs to be satisfied from $l=0$ to $m$, which implies that the following inequality must hold true: $$ \D \frac{n}{p} < k+1-m.
$$ Now, to guarantee this condition, according to the ratio $\D\frac{n}{p}$, we get two conditions, conditions (\ref{cond_parametre_1}) and (\ref{cond_parametre_2}). \sa Particularly, for the usual case where $p=2$ and $n=2$, condition (\ref{cond_parametre_2}) implies that when considering finite element $P_1$, estimate (\ref{estimation_error}) will only be written for $m=0$ which corresponds to the $L^2$-norm. However, the finite element $P_1$ would also be considered with the $W^{m,p}$-norm by adapting our theorem with another result from R. Arcangeli and J.L. Gout, (see remark 2.3 and theorem 1.1 in \cite{Arcangeli_Gout}). \sa Thus, for our objectives, we write (\ref{Estimation_Arc_Gout_000}) for the following conditions: \begin{itemize} \item $\Omega = K_{\mu}$, $(1\leq \mu \leq N_k)$, a $n$-simplex which belongs to a regular mesh ${\mathcal T}_h$. \item $u$ is the exact solution in $W^{m,p}(\Omega) \cap W^{k+1,p}(\Omega)$, to the variational formulation (\ref{VP}). \item The set of points $\Sigma$ in $\R^n$ correspond to the $P_k$ finite element nodes of $K_{\mu}$. \item The global interpolate function $\Pi_h u$ is replaced by the local interpolate one $\Pi_{K_{\mu}}u$ on the $n$-simplex $K_{\mu}$. \end{itemize} Then, due to (\ref{Norm_L2_pi_00_1}), estimate (\ref{Estimation_Arc_Gout_000}) becomes, $\forall l=1,\dots,m$: \begin{eqnarray} \D |u -\Pi_{K_{\mu}} u|_{l,p,K_\mu} & \, \leq \, & \frac{1}{(k-l)!}\frac{1}{\left(k+1-l-\D\frac{n}{p}\right)}|u|_{k+1,p,K_\mu}\,h_{K_{\mu}}^{k+1-l} \nonumber \\[0.3cm] & \, + \, & \frac{1}{\rho_{K_\mu}^{l}}\left(\frac{\left[n(n+1)\Lambda\right]^{l}l!}{k!\left(k+1-\D\frac{n}{p}\right)}\frac{(k+n)!}{n!\,k!}\,k^{l(n+2)}\right) |u|_{k+1,p,K_\mu}\,h_{K_{\mu}}^{k+1}, \label{Estimation_Arc_Gout} \end{eqnarray} where we used (\ref{Dim_Pk}). \sa So, (\ref{Estimation_Arc_Gout}) becomes: \begin{eqnarray} \hspace{-0.6cm}\D |u -\Pi_{K_{\mu}} u|_{l,p,K_\mu}\hspace{-0.3cm} & \leq & \hspace{-0.3cm}\left[\frac{1+\D\left(\frac{\left[n(n+1)\sigma \Lambda\right]^{l}l!\, (k+1)\hspace{-0.1cm}\dots\hspace{-0.1cm}(k+n)\,k^{l(n+2)}}{n!}\right)}{(k-l)!\left(k+1-l-\D\frac{n}{p}\right)}\right] |u|_{k+1,p,K_\mu} h_{K_{\mu}}^{k+1-l}, \nonumber \\[0.3cm] \hspace{-0.6cm}\hspace{-0.3cm}& \leq & \hspace{-0.3cm}\D\left(\!\!\frac{}{}1+\D\frac{\left[n(n+1)\sigma\right]^{m}\!\Lambda^{\!*}\,m!}{n!}\right) \frac{(k+n)^n\, k^{m(n+2)}}{(k-m)!\,\left(\!\!k+\!1-m-\D\frac{n}{p}\right)}\,\, |u|_{k+1,p,K_\mu} h_{K_{\mu}}^{k+1-l},\label{Estimation_Arc_Gout_m_1} \end{eqnarray} where $\D\Lambda^{\!*}\equiv\max_{0\leq l \leq m}\Lambda^l$, and $\sigma$ a given number such that $\sigma\geq 1$ and $\D\frac{h_{K_{\mu}}}{\rho_{K_{\mu}}}\leq \sigma$, $\forall\,K_{\mu} \in {\mathcal T}_h$ which we assumed to be a regular mesh.\sa For simplicity, we rewrite (\ref{Estimation_Arc_Gout_m_1}) as follows: \begin{equation}\label{Estimation_Arc_Gout_m_2} \D |u -\Pi_{K_{\mu}} u|_{l,p,K_\mu} \, \leq \, \D C_1(\sigma,\Lambda^{\!*},m,n)\, \frac{(k+n)^n\, k^{m(n+2)}}{(k-m)!\,\left(\!\!k+\!1-m-\D\frac{n}{p}\right)}\, |u|_{k+1,p,K_\mu}\,h_{K_{\mu}}^{k+1-l}, \end{equation} where we introduced constant $C_1(\sigma,\Lambda^{\!*},m,n)$ defined by: \begin{equation}\label{xi} \D C_1(\sigma,\Lambda^{\!*},m,n) \, \equiv \, 1+\D\frac{\left[n(n+1)\sigma\right]^{m}\!\Lambda^{\!*}\,m!}{n!}\,. 
\end{equation} Now, when $l=0$, due to (\ref{Norm_Grad_L2_pi_0_l=0}), estimate (\ref{Estimation_Arc_Gout_000}) becomes: \begin{eqnarray*} \D |u -\Pi_{K_{\mu}} u|_{0,p,K_{\mu}} & \, \leq \, & \frac{1}{k!}\frac{1}{\left(k+1-\D\frac{n}{p}\right)}|u|_{k+1,p,K_{\mu}}\,h^{k+1} \nonumber \\[0.2cm] & \, + \, & \frac{1}{k!} \frac{1}{\left(k+1-\D\frac{n}{p}\right)}\frac{(k+n)!}{n!\,k!}\,k^{n+1}|u|_{k+1,p,K_{\mu}}\,h^{k+1}, \end{eqnarray*} which leads to: \begin{equation}\label{base_caracteristique} \D |u -\Pi_{K_{\mu}}|_{0,p,K_{\mu}} \leq C_2(n) \frac{(k+n)^nk^{m(n+2)}}{(k-m)!\left(k+1-m-\D\frac{n}{p}\right)}|u|_{k+1,p,K_{\mu}}\,h^{k+1}, \end{equation} for all $k \geq 1$ and $m \geq 1$, and where we introduced constant $C_2(n)$ defined by: \begin{equation}\label{C2_n} \D C_2(n) = 1+\frac{1}{n!} \end{equation} Therefore, by the help of (\ref{Estimation_Arc_Gout_m_2})-(\ref{xi}) and (\ref{base_caracteristique})-(\ref{C2_n}), we get the following $W^{m,p}$ local interpolation error estimate: \begin{eqnarray} \hspace{-0.8cm}\D \|u -\Pi_{K_{\mu}} u\|^p_{m,p,K_{\mu}} \hspace{-0.2cm} & = & \hspace{-0.1cm}\sum_{l=0}^{m}|u -\Pi_{K_{\mu}}|^{p}_{l,p,K_{\mu}}, \nonumber \\ \hspace{-0.4cm} & \leq & \hspace{-0.1cm} \sum_{l=0}^{m}C^p(\sigma,\Lambda^*,m,n)\hspace{-0.15cm}\left[\frac{(k+n)^n\, k^{m(n+2)}}{(k-m)!\,\left(\!\!k+\!1-m-\D\frac{n}{p}\right)}\!\right]^p \hspace{-0.3cm}|u|^{p}_{k+1,p,K_\mu} h_{K_{\mu}}^{p(k+1-l)}, \label{Bid_0} \end{eqnarray} where constant $C(\sigma,\Lambda^*,m,n)$ is defined by : $\D C(\sigma,\Lambda^*,m,n) = \max\left(\frac{}{}\!\!C_1(\sigma,\Lambda^{\!*},m,n), C_2(n)\right)$. Then, (\ref{Bid_0}) leads to: \begin{equation}\label{Lambda_0000} \D \|u -\Pi_{K_{\mu}} u\|^p_{m,p,K_{\mu}} \leq \hspace{-0.1cm} C^p(\sigma,\Lambda^*,m,n,p,h)\hspace{-0.15cm}\left[\frac{(k+n)^n\, k^{m(n+2)}}{(k-m)!\,\left(\!\!k+\!1-m-\D\frac{n}{p}\right)}\!\right]^p \hspace{-0.3cm} |u|^{p}_{k+1,p,K_\mu} h^{p(k+1-m)}, \end{equation} where $\D h \equiv \max_{K_{\mu} \in {\mathcal T}_h} h_{K_{\mu}}$ and $C(\sigma,\Lambda^*,m,n,p,h)\equiv \xi(m,p,h).C(\sigma,\Lambda^*,m,n)$ with $\xi(m,p,h)$ defined as follows: \begin{equation}\label{varphi} \D \xi(m,p,h) \equiv \left | \begin{array}{ll} \D\hs \left[\frac{1-h^{p(m+1)}}{1-h^p}\right]^{\frac{1}{p}} & \mbox{ if } \hs h\neq 1, \medskip \\ \hs (m+1)^{\frac{1}{p}} & \mbox{ if } \,h=1. \end{array} \right. \end{equation} Since the mesh ${\mathcal T}_h$ is regular, by the help of (\ref{Lambda_0000}), we get for the whole domain $\Omega$ the following global interpolation error estimate: \begin{eqnarray} \D \|u -\Pi_h u\|_{m,p,\Omega} & = & \hspace{-0.2cm} \left(\sum_{K_{\mu}\in {\mathcal T}_h}\|u -\Pi_{K_{\mu}}u\|_{m,p,K_{\mu}}^p\!\!\right)^{\!\!1\!/p} \nonumber \\ & \leq & \hspace{-0.2cm} C(\sigma,\Lambda^*,m,n,p,h)\left[\frac{(k+n)^n k^{m(n+2)}}{(k-m)!\,\left(\!\!k+\!1-m-\D\frac{n}{p}\right)}\!\right]\left(\!\sum_{K_{\mu}\in {\mathcal T}_h}|u|^p_{k+1,p,K_{\mu}}\!\!\right)^{\!\!1\!/p}\!\!\!\!\!\!h^{k+1-m},\nonumber\\[0.2cm] & \leq & \hspace{-0.2cm} C(\sigma,\Lambda^*,m,n,p,h)\left[\frac{(k+n)^n k^{m(n+2)}}{(k-m)!\,\left(\!\!k+\!1-m-\D\frac{n}{p}\right)}\!\right]|u|_{k+1,p,\Omega}\,h^{k+1-m}. \label{Error_estimate_for_any_k} \end{eqnarray} Then, estimate (\ref{C_k_Estimation}) is proved, provided that one takes into account estimate (\ref{Cea_Banach}) of C\'ea's Lemma \ref{Lemme_Cea}. 
Indeed, consider the $W^{m,p}-$norm to measure the difference between the exact solution $u$ to the variational problem (\ref{VP}) and its approximation $u_h$ solution to (\ref{VP_h}), we have: \begin{equation}\label{Erreur_approx_Erreur_global_interpol} \|u-u_h\|_{m,p,\Omega} \hs \leq \hs \left(1+ \frac{\|a\|_{W^{m,p},W^{m'\!,p\,'}}}{\alpha_h}\right)\,\|u-\Pi_h u\|_{m,p,\Omega}, \end{equation} where we choose in (\ref{Cea_Banach}) $w_h\in W_h$ equal to $\Pi_h u$, $\|a\|_{W^{m,p},W^{m'\!,p\,'}}$ as defined in (\textbf{BNB}), and $\alpha_h$ being the constant of the discrete inf-sup condition, see (\textbf{BNB1$_h$}). \sa Then, replacing expression (\ref{Error_estimate_for_any_k}) in inequality (\ref{Erreur_approx_Erreur_global_interpol}) leads to: \begin{equation}\label{Estim_Preuve} \|u-u_h\|_{m,p,\Omega} \hs \leq \hs \left(1+ \frac{\|a\|_{W^{m,p},W^{m'\!,p\,'}}}{\alpha_h}\right)\,C(\sigma,\Lambda^*,m,n,p,h)\left[\frac{(k+n)^n k^{m(n+2)}}{(k-m)!\,\left(\!\!k+\!1-m-\D\frac{n}{p}\right)}\!\right]|u|_{k+1,p,\Omega}\,h^{k+1-m}. \end{equation} Now, since $\xi(m,p,h)$ introduced in (\ref{varphi}) is bounded as $h\leq diam\,({\bar{\Omega}})$, $C(\sigma,\Lambda^*,m,n,p,h)$ is uniformly bounded with $h$. Hence, there exists $C(\sigma,\Lambda^*,m,n,p)$ independent of $h$ such that: $$ C(\sigma,\Lambda^*,m,n,p,h) \leq C(\sigma,\Lambda^*,m,n,p). $$ Consequently, by defining constant $\mathscr{C}_k$ by \begin{equation}\label{C_k_Value} \D \mathscr{C}_k \equiv \left(1+ \frac{\|a\|_{W^{m,p},W^{m'\!,p\,'}}}{\alpha_h}\right)C(\sigma,\Lambda^*,m,n,p)\,\frac{(k+n)^n k^{m(n+2)}}{(k-m)!\,\left(\!\!k+\!1-m-\D\frac{n}{p}\right)}, \end{equation} we obtain the error estimate (\ref{estimation_error})-(\ref{C_k_Estimation}), with $C= \left(1+ \D\frac{\|a\|_{W^{m,p},W^{m'\!,p\,'}}}{\alpha_h}\right)C(\sigma,\Lambda^*,m,n,p)$. \end{prooff} \section{Application to relative finite elements accuracy}\label{FEM_accuracy} \noindent In this section, we apply inequality (\ref{estimation_error}) of Theorem \ref{Thm_error_estimate} to evaluate the relative accuracy between two finite elements. Hereafter, we will replace notation $u_h$ with $u_h^{(k)}$, in order to highlight the degree $k$ of the polynomials involved in $P_{k}(K_{\mu})$.\sa In \cite{ChasAs18}, regarding a problem set in the usual Sobolev space $H^1(\Omega)$, we introduced a probabilistic framework which enables one to compare the relative accuracy of two finite elements of different degrees in a non standard way. 
Indeed, we claimed that quantitative uncertainties exist in the approximate solution $u^{(k)}_h$, due for instance to the quantitative uncertainties that are commonly produced in the mesh generation.\sa For this reason, we have considered the approximation error as a random variable, and we aimed at evaluating the probability of the difference between two $H^{1}-$approximation errors of $u- u_h^{(k_1)}$ and $u- u_h^{(k_2)}$ corresponding to finite elements $P_{k_1}$ and $P_{k_2}, (k_1<k_2)$.\sa Here, in the same way, one can only infer that the value of the approximation error $\|u^{(k)}_h-u\|_{m,p,\Omega}$ belongs to the interval $[0,\mathscr{C}_k |u|_{k+1,p,\Omega}\,h^{k+1-m}]$, using error estimates (\ref{estimation_error})-(\ref{C_k_Estimation}).\sa As a consequence, for fixed values of $k, m$ and $p$, we define the following random variable $X_{m,p}^{(k)}$ by: \begin{eqnarray*} X_{m,p}^{(k)} : & {\bf\Omega} & \hspace{0.1cm}\rightarrow \hspace{0.2cm}[0,\mathscr{C}_k |u|_{k+1,p,\Omega}\,h^{k+1-m}] \noindent \\ & \boldsymbol{\omega}\equiv u^{(k)}_h & \hspace{0.1cm} \mapsto \hspace{0.2cm}\D X_{m,p}^{(k)}(\boldsymbol{\omega}) = X_{m,p}^{(k)}(u^{(k)}_h) = \|u^{(k)}_h-u\|_{m,p,\Omega}, \end{eqnarray*} where the probability space ${\bf\Omega}$ contains all the possible results for a given random trial, namely, all possible grids that the involved meshing tool can generate for a given value of $h$. Equivalently, ${\bf\Omega}$ consists of all the possible corresponding approximations $u^{(k)}_h$. Below, for simplicity, we will set: $X_{m,p}^{(k)}(u^{(k)}_h)\equiv X_{m,p}^{(k)}(h)$. \sa Now, regarding the absence of information concerning the more likely or less likely values of norm $\|u^{(k)}_h-u\|_{m,p,\Omega}$ within the interval $[0, \mathscr{C}_k |u|_{k+1,p,\Omega}\,h^{k+1-m}]$, we assume that the random variable $X_{m,p}^{(k)}(h)$ has a uniform distribution on the interval $[0, \mathscr{C}_k |u|_{k+1,p,\Omega}\,h^{k+1-m}]$ in the following sense: $$ \forall (\alpha,\beta), 0 \leq \alpha < \beta \leq \mathscr{C}_k |u|_{k+1,p,\Omega}\,h^{k+1-m}: Prob\left\{X_{m,p}^{(k)}(h) \in [\alpha,\beta]\right\}=\frac{\beta-\alpha}{\mathscr{C}_k |u|_{k+1,p,\Omega}\,h^{k+1-m}}. $$ The above equation means that if one slides interval $[\alpha,\beta]$ anywhere in $[0, \mathscr{C}_k |u|_{k+1,p,\Omega}\,h^{k+1-m}]$, the probability of the event $\D\left\{X_{m,p}^{(k)}(h) \in [\alpha,\beta]\right\}$ does not depend on the localization of $[\alpha,\beta]$ in $[0, \mathscr{C}_k |u|_{k+1,p,\Omega}\,h^{k+1-m}]$, but only on its length; this reflects the property of uniformity for $X_{m,p}^{(k)}$. \sa Hence, it is straightforward to extend the theorem proved in \cite{ChasAs18} for the $H^{1}$ case to the $W^{m,p}$ context. This yields the following result, which estimates the probability of event $\D\left\{X_{m,p}^{(k_2)}(h) \leq X_{m,p}^{(k_1)}(h)\right\}$. \sa Let $C_{k_{i}}$ be equal to $C_{k_i} \equiv \mathscr{C}_{k_i} |u|_{k_{i}+1,p,\Omega}$, for $i=1,2$, and let $h^*_{m,p}$ be defined as: \begin{equation}\label{h*} \D h^*_{m,p} \equiv \left( \frac{C_{k_{1}}}{C_{k_{2}}} \right)^{\frac{1}{k_2-k_1}}. 
\end{equation} As in \cite{ChasAs18}, by changing the $H^1-$norm to the $W^{m,p}$ one, we can derive that: \begin{equation}\label{Nonlinear_Prob} \D Prob\left\{ X_{m,p}^{(k_2)}(h) \leq X_{m,p}^{(k_1)}(h)\right\} = \left | \begin{array}{ll} \D \hs 1 - \frac{1}{2}\!\left(\!\frac{\!\!h}{h^*_{m,p}}\!\right)^{\!\!k_2-k_1} & \mbox{ if } \hs 0 < h \leq h^*_{m,p}, \medskip \\ \D \hs \frac{1}{2}\!\left(\!\frac{h^*_{m,p}}{\!\!h}\!\right)^{\!\!k_2-k_1} & \mbox{ if } \hs h \geq h^*_{m,p}. \end{array} \right. \end{equation} \noindent Then, using (\ref{C_k_Value}), one can rewrite $h^*_{m,p}$ defined in (\ref{h*}) as follows: \begin{equation}\label{explicit_h*} \D h^*_{m,p} = \left[ \frac{\left(1+ \D\frac{\|a\|_{W^{m,p},W^{m'\!,p\,'}}}{\alpha_{h,k_1}}\right)}{\left(1+ \D\frac{\|a\|_{W^{m,p},W^{m'\!,p\,'}}}{\alpha_{h,k_2}}\right)} \left(\frac{k_1+n}{k_2+n}\right)^{n}\left(\frac{k_1}{k_2}\right)^{m(n+2)}\frac{(k_2-m)!}{(k_1-m)!}\,\frac{\left(k_2+1-m-\D\frac{n}{p}\right)}{\left(k_1+1-m-\D\frac{n}{p}\right)}\,\frac{|u|_{k_{1}+1,p,\Omega} }{|u|_{k_{2}+1,p,\Omega}} \right]^{\frac{1}{k_2-k_1}}\,, \end{equation} where $\alpha_{h,k_1}$ and $\alpha_{h,k_2}$ denotes the $\alpha_h$ appearing in generalized C\'ea's Lemma \ref{Lemme_Cea}, associated to finite elements $P_{k_1}$ and $P_{k_2}$, respectively. \begin{remark} Notice that, as proposed in \cite{Arxiv2}, one can derive another law of probability to evaluate the most accurate finite element between $P_{k_{1}}$ and $P_{k_{2}}$. More precisely, for $h<h^*_{m,p}$, assuming the independence of events $A\equiv\D\left\{X_{m,p}^{(k_2)}(h) \leq X_{m,p}^{(k_1)}(h)\right\}$ and $B\equiv \left\{X_{m,p}^{(k_1)}(h) \in [C_{k_2} h^{k_2},C_{k_1} h^{k_1}]\right\}$, one can obtain the following law of probability: \begin{equation}\label{Heaviside_Prob} \D Prob\left\{ X_{m,p}^{(k_2)}(h) \leq X_{m,p}^{(k_1)}(h)\right\} = \left | \begin{array}{ll} \hs 1 & \mbox{ if } \hs 0 < h < h^*_{m,p}, \medskip \\ \hs 0 & \mbox{ if } \hs h> h^*_{m,p}\,. \end{array} \right. \end{equation} \end{remark} The probability distribution (\ref{Heaviside_Prob}) is obtained by replacing the uniform distribution assumption in (\ref{Nonlinear_Prob}) by the independence of events $A$ and $B$. However, with no prior information about the independence of these events, the more "natural" probabilistic law is (\ref{Nonlinear_Prob}).\sa Therefore, in what follows, we take a fixed value for $k_1$ (that we will denote $k$ in the sequel), and we study the asymptotic behavior of the accuracy between $P_k$ and $P_{k+q}$, when $q$ goes to $+\infty$: this will give us the asymptotic relation between the two probabilistic laws (\ref{Nonlinear_Prob}) and (\ref{Heaviside_Prob}). \sa To this end, it is convenient to introduce notation $\D\left(\mathcal{P}_{q}(h)\right)_{q \in \N^\star}$ corresponding to the sequence of functions defined by (\ref{Nonlinear_Prob}), namely: \begin{equation}\label{P(h)} \D\forall q\in\N^*:\mathcal{P}_{q}(h) \equiv Prob\left\{ X_{m,p}^{(k+q)}(h) \leq X_{m,p}^{(k)}(h)\right\} = \left | \begin{array}{ll} \D \hs 1 - \frac{1}{2}\!\left(\!\frac{\!\!h}{h^{*}_q}\!\right)^{\!q} & \mbox{ if } \hs 0 < h \leq h^{*}_q, \medskip \\ \D \hs \frac{1}{2}\!\left(\!\frac{h^{*}_q}{\!\!h}\!\right)^{\!q} & \mbox{ if } \hs h \geq h^{*}_q\,. \end{array} \right. 
\end{equation} Above, we denote by $h^{*}_q$ the $h^*_{m,p}$ expressed as a function of $q$ for given values of $k,m$ and $p$, that is: \begin{equation}\label{h*q} \hspace{-1cm}\D h^*_q = \left[ \frac{\left(1+ \D\frac{\|a\|_{W^{m,p},W^{m'\!,p\,'}}}{\alpha_{h,k}}\right)}{\left(1+ \D\frac{\|a\|_{W^{m,p},W^{m'\!,p\,'}}}{\alpha_{h,k+q}}\right)} \left(\frac{k+n}{k+q+n}\right)^{n}\left(\frac{k}{k+q}\right)^{m(n+2)}\frac{(k+q-m)!}{(k-m)!}\,\frac{\left(k+q+1-m-\D\frac{n}{p}\right)}{\left(k+1-m-\D\frac{n}{p}\right)} \,\frac{|u|_{k+1, p,\Omega} }{|u|_{k+q+1, p,\Omega}}\right]^{\frac{1}{q}}\hspace{-0.1cm}. \end{equation} To obtain the asymptotic behavior of sequence $\D\left(\mathcal{P}_{q}(h)\right)_{q \in \N^\star}$, we first have to compute the limit of sequence $\D\left(h^*_q\right)_{q\in\N}$. \sa It is the purpose of the following lemma: \begin{lemma}\label{Conv_asympt_h*q} Let $u\in W^{r,p}(\Omega), (\forall\, r\in \N),$ be the solution to problem (\ref{VP}) and $(h^*_{q})_{q\in\N^\star}$ the sequence defined by (\ref{h*q}). We assume that sequence $(\alpha_{h,k+q})_{q\in\N^\star}$ satisfies: \begin{equation}\label{alpha_asympt} \D\forall k\in\N, \lim_{q\rightarrow +\infty}\alpha_{h,k+q} = \alpha_{h,k}^* \in\R^*. \end{equation} Let $k, m$ and $p$ be fixed such that (\ref{cond_parametre_1}) or (\ref{cond_parametre_2}) holds. \sa If \begin{equation}\label{Cond_ Ratio_Semi_Norm} \D \lim_{q\rightarrow +\infty}\frac{|u|_{k+q+2,p,\Omega}}{|u|_{k+q+1,p,\Omega}} = l, (l\in\R^{*}_{+}), \end{equation} then, \begin{equation}\label{lim_hq*} \lim_{q\rightarrow +\infty}h^*_q = +\infty. \end{equation} \end{lemma} \noindent \begin{prooff} From (\ref{h*q}), we readily get: \begin{equation}\label{C_k_0} \hspace{-1cm}\D\left(h^*_q\right)^q = \frac{\left(1+ \D\frac{\|a\|_{W^{m,p},W^{m'\!,p\,'}}}{\alpha_{h,k}}\right)}{\left(1+ \D\frac{\|a\|_{W^{m,p},W^{m'\!,p\,'}}}{\alpha_{h,k+q}}\right)} \frac{(k+n)^n k^{m(n+2)}}{(k-m)!\,\left(k+1-m-\D\frac{n}{p}\right)} \, \frac{(k+q-m)!\left(k+q+1-m-\D\frac{n}{p}\right)}{\D\left(k+q+n\right)^n \left(k+q\right)^{m(n+2)}}\,.\frac{|u|_{k+1,p,\Omega}}{|u|_{k+q+1,p,\Omega}}. \end{equation} Let us first remark that condition (\ref{alpha_asympt}) implies that the following ratio, based on the constant involved in (\ref{Cea_Banach}) of Lemma \ref{Lemme_Cea}, is uniformly bounded and stays strictly positive for any value of $q$. In particular, we have: \begin{equation}\label{rapp_alpha} \D \lim_{q\rightarrow +\infty}\frac{\left(1+ \D\frac{\|a\|_{W^{m,p},W^{m'\!,p\,'}}}{\alpha_{h,k}}\right)}{\left(1+ \D\frac{\|a\|_{W^{m,p},W^{m'\!,p\,'}}}{\alpha_{h,k+q}}\right)} = \beta_{h,k}^* \in\R^{*}. 
\end{equation} Then, using Stirling's formula when $q$ goes to $+\infty$, we first remark that \begin{eqnarray} \frac{(k\!+\!q\!-\!m)!\left(k\!+\!q\!+\!1\!-\!m\!-\!\D\frac{n}{p}\right)}{\D\left(k+q\right)^{m(n+2)}\left(k+q+n\right)^n} \hspace{-0.3cm}& \underset{q \rightarrow +\infty}{\sim} & \hspace{-0.3cm}\frac{\sqrt{2\pi (k+q-m)}\D\left(\frac{k+q-m}{e}\right)^{\!\!(k+q-m)}\!\!\!\left(k\!+\!q\!+\!1\!-\!m\!-\!\D\frac{n}{p}\right)}{\left(k+q\right)^{m(n+2)}\left(\!\!\frac{}{}k+q+n\right)^n}, \nonumber \\[0.2cm] \hspace{-0.3cm}& \underset{q \rightarrow +\infty}{\sim} & \hspace{-0.3cm}\frac{\sqrt{2 \pi}(k+q-m)^{(k+q-m+\frac{1}{2})}}{e^{k+q-m}}\frac{1}{\left(\!\!\frac{}{}k+q\right)^{n+m(n+2)-1}}, \nonumber \\[0.2cm] \hspace{-0.3cm}& \underset{q \rightarrow +\infty}{\sim} & \hspace{-0.3cm}\sqrt{2 \pi}\,\frac{(k+q)^{k+q-3m-n(m+1)+\frac{3}{2}}}{e^{k+q}}, \label{h*q_Asympt} \end{eqnarray} where, according to Euler's formula \cite{Euler}, we have used the following equivalence $$ \D (k+q-m)^{(k+q-m+\frac{1}{2})} \underset{q \rightarrow +\infty}{\sim} e^{-m}(k+q)^{(k+q-m+\frac{1}{2})}. $$ Then, substituting (\ref{h*q_Asympt}) in (\ref{C_k_0}) allows us to determine equivalent of $h^{*}_q$ when $q \rightarrow +\infty$. Using (\ref{rapp_alpha}), one obtains \begin{equation}\label{Truc} \D\left(h^*_q\right)^q \underset{q \rightarrow +\infty}{\sim} \Theta \,\frac{\left(1+ \D\frac{\|a\|_{W^{m,p},W^{m'\!,p\,'}}}{\alpha_{h,k}}\right)}{\left(1+ \D\frac{\|a\|_{W^{m,p},W^{m'\!,p\,'}}}{\alpha_{h,k+q}}\right)} e^{-(k+q)}(k+q)^{k+q-3m-n(m+1)+\frac{3}{2}}\,.\frac{|u|_{k+1,p,\Omega}}{|u|_{k+q+1,p,\Omega}}, \end{equation} where $\Theta$ denotes a constant independent of $q$ defined by $$ \Theta \equiv \sqrt{2 \pi}\frac{(k+n)^n k^{m(n+2)}}{(k-m)!\,\left(k+1-m-\D\frac{n}{p}\right)}. $$ We now introduce two sequences $(v_q)_{q\in\N}$ and $(w_q)_{q\in\N}$, as follows: $$ \forall\, q\in\N: v_q\equiv \ln\,|u|_{k+q+1,p,\Omega}, \hs w_q \equiv q\,. $$ Then, owing to condition (\ref{Cond_ Ratio_Semi_Norm}), if sequence $r_q$ defined by the ratio $$ \D r_q \equiv \frac{v_{q+1}-v_q}{w_{q+1}-w_q} = \ln\left(\frac{|u|_{k+q+2,p,\Omega}}{|u|_{k+q+1,p,\Omega}}\right) $$ has a limit $L\equiv \ln\, l \in\R$, when $q$ goes to $+\infty$, then $\D\lim_{q\rightarrow +\infty}r_q = L$.\sa As a consequence, due to the Stolz-Cesaro theorem, (see \cite{OviFur}, p.263-266), the ratio $\D\frac{v_q}{w_q}$ also converges to the same limit $L$ when $q$ goes to $+\infty$: $$ \D \lim_{q\rightarrow +\infty}\D\frac{v_q}{w_q} = \lim_{q\rightarrow +\infty}\D\frac{\ln\,|u|_{k+q+1,p,\Omega}}{q} = L, $$ and \begin{equation}\label{exp(-L)} \D \lim_{q\rightarrow +\infty}\D\left(\frac{|u|_{k+1,p,\Omega}}{|u|_{k+q+1,p,\Omega}}\right)^{\frac{1}{q}} = \lim_{q\rightarrow +\infty}\D\left(\frac{1}{|u|_{k+q+1,p, \Omega}}\right)^{\frac{1}{q}} =e^{-L}=\frac{1}{l}. \end{equation} As a result, from (\ref{rapp_alpha}), (\ref{Truc}) and (\ref{exp(-L)}), one can conclude that $\D h^*_q \underset{+\infty}{\sim} \frac{1}{e\,l}\,q$, which proves (\ref{lim_hq*}). \end{prooff} \begin{remark}$\frac{}{}$ Let us comment on the assumptions of this lemma. \begin{enumerate} \item The hypothesis on the ratio of norms in Eq. (\ref{Cond_ Ratio_Semi_Norm}) might appear very {\em ad-hoc}. Nevertheless, one can easily check, based on several examples, that it is satisfied. 
Take for instance $u$, solution to a standard Laplace problem solved in a regular domain $\Omega \in \R^2$ (for example a square), with a given regular Dirichlet boundary condition on the boundary $\partial \Omega$ and a regular enough right-hand side, (for details see \cite{Arxiv2}). \item As a matter of fact, inequality (\ref{rapp_alpha}) is fulfilled in most cases. For instance, assuming the bilinear form $a$ is coercive, the functional framework is necessarily Hilbertian (see Remark 2.3 in \cite{Ern_Guermond}). Since we have considered that $W\equiv W^{m,p}(\Omega) \mbox{ and } V\equiv W^{m',p'}(\Omega)$, with $\frac{1}{p}+\frac{1}{p'}=1$, then $p=2$ and $W=V=H^{m}(\Omega)$.\sa So, if we denote by $\alpha$ the coercivity constant and by $\|a\|$ the continuity constant, inequality (\ref{Cea_Banach}) of C\'ea's Lemma \ref{Lemme_Cea} can be expressed with constant $\frac{\|a\|}{\alpha}$ instead of $\left(1+\frac{\|a\|_{W,V}}{\alpha_h}\right)$ and the ratio in the limit (\ref{rapp_alpha}) equals 1 and $\beta_{h,k}^*$ too. \item In terms of linear algebra, i.e. considering the matrix $\A$ associated with the bilinear form $a$, it can be shown that $\alpha_{h,k}$ (or $\alpha_{h,k+q}$) of (\ref{rapp_alpha}) is related to the smallest eigenvalue of the square matrix $\A^t \A$, i.e. the smallest singular value of $\A$, (see \cite{Ern_Guermond}, Remark 2.23, (iii)). Hence, inequality (\ref{rapp_alpha}) could be checked if one is able to get information about the singular value decomposition of $\A$. \end{enumerate} \end{remark} \noindent We now consider the convergence of sequence $\D\left(\mathcal{P}_{q}(h)\right)_{q \in \N^\star}$ as $q\rightarrow +\infty$. As we will see, due to the definition (\ref{P(h)}) of sequence $\D\left(\mathcal{P}_{q}(h)\right)_{q \in \N^\star}$, pointwise convergence presents a discontinuity at point $h=h^*_q$. Indeed, when $q$ goes to $+\infty$, thanks to lemma \ref{Conv_asympt_h*q}, $h^*_q$ also goes to $+\infty$, and this discontinuity is therefore at $+\infty$. \sa Thus, to handle this singular behavior, we introduce the weak convergence of the sequence $\D\left(\mathcal{P}_{q}(h)\right)_{q \in \N^\star}$, {\it i.e.} convergence on the sense of distributions. \sa For the sake of exhaustivity, we briefly recall here some basic notions about distribution theory \cite{Schwartz}, that allows us in passing to introduce the notations we will use. A well-informed reader may skip these few lines.\sa We denote by $\mathcal{D}(\R)$ the space of functions $\,C^{\infty}(\R)$ with a compact support in $\R$, and by $\mathcal{D'}(\R)$ the space of distributions defined on $\R$. As we will carry out our analysis for all x $\in \R$, we extend the sequence of functions $\D\left(\mathcal{P}_{q}(h)\right)_{q \in \N^\star}$ on $]-\infty, 0[$ by setting: $\forall h \leq 0:\mathcal{P}_{q}(h)=0$.\sa Therefore, the sequence of extended functions $\D\left(\frac{}{}\!\!\mathcal{P}_{q}(h)\right)_{q\in\N^\star}$ belongs to the space $L^{1}_{loc}(\R)$.\footnote{the space of functions locally integrable for any compact $K$ of $\R$.} Hence, $\forall\, q\in\N^*$, each function $\frac{}{}\!\!\mathcal{P}_{q}(h)$ can be associated to its regular distribution $T_{\mathcal{P}_{q}}$ defined by: \begin{equation}\label{P(h)_Distrib} \D\forall \varphi \in {\cal D(\R)} : \hs <T_{\mathcal{P}_{q}}, \varphi > \hs \equiv \hs \int_{\R}\mathcal{P}_{q}(h)\varphi(h)dh. 
\end{equation} For what follows, we will also need the Heaviside distribution $T_H$ defined by: $$ \D\forall \varphi \in {\cal D(\R)} : \hs <T_H, \varphi > \hs \equiv \hs \int_{\R}H(h)\varphi(h)dh \hs = \hs \int_{0}^{+\infty}\!\!\varphi(h)dh, $$ where $H(h)=1$ if $h > 0$, and zero otherwise. We are now in a position to state the convergence result of the sequence of distributions $\left(T_{\mathcal{P}_{q}}\right)_{q\in\N^*}$ in $D'(\R)$. \begin{theorem}\label{Conv_Simple} With the same assumptions on $u$ as in Lemma \ref{Conv_asympt_h*q}, let $\D\left(T_{\mathcal{P}_{q}}\right)_{q\in\N^*}$ be the sequence of distributions defined by (\ref{P(h)_Distrib}) and (\ref{P(h)})-(\ref{h*q}). Then, $\D\left(T_{\mathcal{P}_{q}}\right)_{q\in\N^*}$ converges with respect to the weak-* topology on $D'(\R)$ to the Heaviside distribution $T_H$. \end{theorem} \noindent \begin{prooff}$\frac{}{}$ By definition \cite{Schwartz} of the weak convergence in $D'(\R)$, we have to evaluate the limit of the numerical sequence $\D\left(<T_{\mathcal{P}_{q}}, \varphi >\right)_{q\in\N^*}$ when $q$ goes to $+\infty$.\sa Hence, due to (\ref{P(h)_Distrib}) and (\ref{P(h)}), we have, $\D\forall \varphi \in {\cal D(\R)}:$ \begin{eqnarray}\label{limit_distrib} \hspace{-0.6cm}\D <T_{\mathcal{P}_{q}}, \varphi > \hspace{-0.1cm}& \equiv & \hspace{-0.1cm}\int_{\R}\mathcal{P}_{q}(h)\varphi(h)dh = \int_{0}^{h^{*}_q}\left[1 - \frac{1}{2}\!\left(\!\frac{\!\!h}{h^{*}_q}\!\right)^{\!q}\right]\varphi(h)dh + \int_{h^{*}_q}^{+\infty}\frac{1}{2}\!\left(\!\frac{h^{*}_q}{\!\!h}\!\right)^{\!q}\varphi(h)dh \nonumber \label{}\\[0.2cm] \hspace{-0.6cm} & = & \hspace{-0.1cm}\int_{-\infty}^{+\infty}\left[1 - \frac{1}{2}\!\left(\!\frac{\!\!h}{h^{*}_q}\!\right)^{\!q}\right]1\hspace{-0.15cm}1_{[0,h^*_q]}(h)\varphi(h)dh + \int_{-\infty}^{+\infty}\frac{1}{2}\!\left(\!\frac{h^{*}_q}{\!\!h}\!\right)^{\!q}1\hspace{-0.15cm}1_{[h^*_q,+\infty[}(h)\varphi(h)dh,\label{limit_distrib} \end{eqnarray} where $1\hspace{-0.15cm}1_{[a,b]}$ denotes the indicator function of interval $[a,b], \forall (a,b)\in \R^2$.\sa Therefore, to compute the limit of $<T_{\mathcal{P}_{q}}, \varphi >$ when $q$ goes to $+\infty$, we will check the hypothesis of the dominated convergence theorem \cite{Brezis} for the integrals involved in (\ref{limit_distrib}).\sa For the first one, introduce the sequence of functions $\D\left(\psi_{q}\right)_{q\in\N^*}$ defined on $\R$ by: $$ \D\forall \, q \in \N^*: \psi_q(h)= \left[1 - \frac{1}{2}\!\left(\!\frac{\!\!h}{h^{*}_q}\!\right)^{\!q}\right]1\hspace{-0.15cm}1_{[0,h^*_q]}(h)\varphi(h). $$ Then, sequence $\D\left(\psi_{q}\right)_{q\in\N^*}$ exhibits the following properties: \begin{itemize} \item It converges pointwise on $\R$ to function $H\varphi$, thanks to the following properties: \begin{eqnarray*} \D \forall h \in\R & : &\lim_{q\rightarrow +\infty}1\hspace{-0.15cm}1_{[0,h^*_q]}(h)=1\hspace{-0.15cm}1_{[0,+\infty[}(h), \\[0.2cm] \D\forall h,\, 0 < h < h^*_q & : & \lim_{q\rightarrow +\infty}\left(\frac{h}{h^*_q}\right)^q = \lim_{q\rightarrow +\infty}\exp\left[\D q\ln\left(\!\frac{h}{h^*_q}\right)\right] = 0^+, \\[0.2cm] \mbox{For } \D h=h^*_q & : & \psi_q(h^*_q) = \frac{1}{2}\,1\hspace{-0.15cm}1_{[0,h^*_q]}(h^*_q)\varphi(h^*_q)=\frac{1}{2}\,\varphi(h^*_q) \underset{q \to +\infty}{\longrightarrow} 0, \end{eqnarray*} as $h^*_q$ goes to $+\infty$ when $q$ goes to $+\infty$, $\varphi$ being a function with compact support. 
\item The sequence of functions $\D\left(\psi_{q}\right)_{q\in\N^*}$ is uniformly dominated for all $q\in\N^*$ by an integrable function: $$ \forall\, q\in \N^*: |\psi_q(h)|\leq |\varphi(h)|, $$ and $|\varphi|\in L^1(\R)$ since $\varphi\in {\cal D(\R)}$. \end{itemize} The dominated convergence theorem enables us to conclude that $$ \D \lim_{q\rightarrow +\infty}\int_{-\infty}^{+\infty}\psi_q(h)dh \,= \int_{-\infty}^{+\infty}\lim_{q\rightarrow +\infty}\psi_q(h)dh \,= \int_{-\infty}^{+\infty}(H\varphi)(h)dh. $$ With the same arguments, one gets for the second integral of (\ref{limit_distrib}) $$ \D \lim_{q\rightarrow +\infty} \int_{-\infty}^{+\infty}\frac{1}{2}\!\left(\!\frac{h^{*}_q}{\!\!h}\!\right)^{\!q}1\hspace{-0.15cm}1_{[h^*_q,+\infty[}(h)\varphi(h)dh = 0, $$ so that $$ \D \lim_{q\rightarrow +\infty} <T_{\mathcal{P}_{q}}, \varphi > \,\,= \int_{-\infty}^{+\infty}(H\varphi)(h)dh \,\, = \,\, <T_H, \varphi >, \forall \varphi \in {\cal D(\R)}. $$ This ends the proof. \end{prooff} \noindent In this setting, it is worth giving an interpretation of the results proved in this section. Basically, our results mean that when the distance between the values of $k_1$ and $k_2, (k_1 < k_2),$ increases, the finite elements $P_{k_2}$ will be \emph{surely more accurate} than the finite elements $P_{k_1}$, for every value of the mesh size $h$ in $]0,+\infty[$, and not only when $h$ goes to zero, as usually considered for accuracy comparison.\sa Apart from the asymptotic case where $k_2-k_1$ goes to infinity, the probabilistic law (\ref{Nonlinear_Prob}) gives new insights into the relative accuracy between $P_{k_1}$ and $P_{k_2}$ finite elements. Indeed, for $h>h^*_{m,p}$, we obtained that $Prob\left\{ X_{m,p}^{(k_2)}(h) \leq X_{m,p}^{(k_1)}(h)\right\}\leq 0.5$. This shows that there are cases where the $P_{k_2}$ finite elements are \emph{probably} overqualified. As a consequence, a significant reduction of implementation time and execution cost could be obtained without loss of accuracy. Such a phenomenon has already been observed by using data-mining techniques coupled with other probabilistic models (see \cite{AsCh11, AsCh13, AsCh14}, \cite{AsCh16} and \cite{AsCh17}). \section{Conclusion}\label{Conclusion} \noindent In this paper, we derived an explicit $k-$dependence in $W^{m,p}$ \emph{a priori} error estimates, which we then applied to the probabilistic relative accuracy of Lagrange finite elements. After having recalled some fundamental results on Banach spaces, especially the extension of C\'ea's classical Lemma to non-Hilbert spaces, we derived general upper bounds on the basis functions and their partial derivatives for the polynomial space $P_k(K)$.\sa Hence, we extended previous work \cite{ChasAs18}, \cite{Arxiv2} to the case of the Banach spaces $W^{m,p}$. This enabled us to evaluate the relative accuracy between two Lagrange finite elements $P_{k_1}$ and $P_{k_2}, (k_1 < k_2)$, when the error is measured in the $W^{m,p}(\Omega)$ norm. We also analyzed the asymptotic behavior of the relative accuracy between the finite elements $P_{k_1}$ and $P_{k_1+q}$, for a fixed $k_1$, when $q$ goes to $+\infty$. We proved that, under some {\em ad hoc} assumptions that are fulfilled in most cases, the probabilistic law (\ref{Nonlinear_Prob}) converges to the Heaviside distribution $T_H$ in the weak-* topology on $D'(\R)$.
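\sa To give a concrete feeling for this convergence, the following short numerical sketch (a purely illustrative fragment, not used anywhere in the proofs) evaluates the law (\ref{P(h)}) for increasing values of $q$. The linear growth $h^*_q\simeq c\,q$ is the one suggested by the proof of Lemma \ref{Conv_asympt_h*q}, but the constant $c=0.4$ and the sample values of $h$ below are hypothetical choices made only for this illustration:
\begin{verbatim}
# Illustrative sketch: pointwise behavior of the probabilistic law P_q(h),
# defined piecewise by 1 - (1/2)(h/h*_q)^q for h <= h*_q and by
# (1/2)(h*_q/h)^q for h >= h*_q. The growth h_star = 0.4*q is a
# hypothetical choice made only for this illustration.
def P_q(h, h_star, q):
    if h <= h_star:
        return 1.0 - 0.5 * (h / h_star) ** q
    return 0.5 * (h_star / h) ** q

for q in (1, 5, 20, 100):
    h_star = 0.4 * q
    row = [(h, round(P_q(h, h_star, q), 4)) for h in (0.5, 1.0, 2.0, 5.0)]
    print(q, row)
\end{verbatim}
For each fixed $h$, the printed probabilities approach $1$ as $q$ grows, which is the pointwise counterpart of the convergence of $T_{\mathcal{P}_{q}}$ towards the Heaviside distribution $T_H$ stated in Theorem \ref{Conv_Simple}.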
\sa Lastly, note that these perspectives are not necessarily restricted to finite element methods, but can be extended to other approximation methods: given a class of numerical schemes and their corresponding error estimates, one can order them not only by their asymptotic rates of convergence, but also by evaluating which scheme is most probably accurate. \sa \textbf{\underline{Homage}:} The authors warmly dedicate this research to the memory of Professor Andr\'e Avez and Professor G\'erard Tronel, who largely promoted the passion for research and teaching in mathematics.
\section{Introduction} \label{sec:introduction} Let $(M,g)$ be an $n$\hyp{}dimensional complete Riemannian manifold. We say that $(M,g)$ supports an $L^p$\textsl{\hyp{}Calder\'on\hyp{}Zygmund inequality} for some $p\in(1,\infty)$, if there exists a constant $C>0$ such that \begin{equation} \label{e: CZp} \tag{CZ(p)} \|\Hess \varphi \|_{L^p}\le C(\|\varphi\|_{L^p}+\|\Delta\varphi\|_{L^p}), \qquad \forall\,\varphi\in C^\infty_c(M). \end{equation} Here, $\Hess\varphi = \nabla^2\varphi$ denotes the second order covariant derivative tensor and $\Delta$ is the (negatively defined) Laplace-Beltrami operator of $(M,g)$; both tensorial and $L^p$ norms are computed with respect to the Riemannian metric $g$, with the common abuse of notation $\|\Hess \varphi \|_{L^p}=\| \,|\Hess \varphi|\, \|_{L^p}$. Calder\'on\hyp{}Zygmund inequalities were first established by a work of A. Calder\'on and A. Zygmund, \cite{CZ1952}, in the Euclidean space $\R^n$, where in fact one has the stronger \begin{equation*} \|\Hess \varphi \|_{L^p}\le C(p, n)\|\Delta\varphi\|_{L^p}, \qquad \forall\,\varphi\in C^\infty_c(\R^n); \end{equation*} see also \cite[Theorem 9.9]{GT1998}. This inequality is a fundamental tool, for instance, as an \textit{a priori} estimate in the regularity theory of elliptic PDEs. In the Riemannian setting, $CZ(p)$ is known to hold for compact manifolds $(M,g)$. Here, the constant $C$ clearly depends on the Riemannian metric. The case of complete non\hyp{}compact manifolds is much less understood. In this context a systematic study of Calder\'on\hyp{}Zygmund inequalities was recently initiated by B. G\"uneysu and S. Pigola in \cite{G2016,GP2015}. Since then, geometric analysts have shown an increasing interest towards the topic, both concerning the (non\hyp{})existence of $CZ(p)$ on a given manifold, and the interaction of Calder\'on\hyp{}Zygmund theory with other related issues \cite{GP2018,DPNZ2019,GPM2019,IRV2019,L2020,V2020}; see in particular the very nice recent survey \cite{P2020}. When the $C^{1,\alpha}$-harmonic radius of $(M,g)$ is positive, a computation in a harmonic coordinate system together with a covering argument allows to reduce the Riemannian problem to the Euclidean setting. Using this strategy, the inequality $CZ(p)$ was proved to be true in the whole range $p\in (1,\infty)$ for manifolds of bounded Ricci curvature (both from above and below) and positive injectivity radius; see \cite[Theorem C]{GP2015}. Note that the limit cases $CZ(1)$ and $CZ(\infty)$ are disregarded as they fail to be true even in the Euclidean space, \cite{dLM1962,O1962}. On the other hand, manifolds which do not support $CZ(p)$ have been recently constructed in \cite{GP2015,L2020,V2020}, hence, the validity of an $L^p$\hyp{}Calder\'on\hyp{}Zygmund inequality in the Riemannian setting needs some geometric assumptions. It is worth mentioning that in the cited counterexamples the Ricci curvature of the manifold at hand is always unbounded from below. Unsurprisingly, the $L^2$ case is a peculiar one. Indeed, one can use the Bochner formula to obtain a much stronger result. \begin{theorem}[Theorem B in \cite{GP2015}]\label{th: CZ(2)} Let $(M, g)$ be a complete Riemannian manifold, if $\Ric \geq -K^2$ then $CZ(2)$ holds on $M$ with a constant depending only on $K$. \end{theorem} It is worthwhile to observe that the Theorem above is sharp in the following sense which, to the best of our knowledge, has not been yet observed so far. 
\begin{mytheorem} \label{th: sharp} For each $m\ge 2$ and $p\in (1,\infty)$, and for each increasing function $\lambda: [0,+\infty)\to \R$ such that $\lambda(t) \to +\infty$ as $t\to \infty$, there exists a complete Riemannian manifold $(M,g)$ satisfying $\min\Sect (x) \ge -\lambda(r(x))$ for $r(x)$ large enough, and which does not support an $L^p$-Calder\'on-Zygmund inequality $CZ(p)$. Here $r(x)$ is the Riemannian distance from some fixed reference point $o\in M$ and the $\min$ is over all the sectional curvatures at the point $x$. \end{mytheorem} In particular, it is not possible to obtain $CZ(2)$ under negative decreasing curvature bounds (for instance $\Ric (x) \ge -Cr^\alpha(x)$ for some $\alpha>0$), as it is the case for the closely related problem of the density of smooth compactly supported functions in the Sobolev space $W^{2,p}(M)$ (see \cite{IRV2019}). Under this milder condition, however, a disturbed $CZ(p)$ holds, \cite[Section 6.2]{IRV-preprint}. According to Theorem \ref{th: CZ(2)} and Theorem \ref{th: sharp}, the following question naturally arises. \begin{question}[Conjectured for $\Ric\ge 0$ in \cite{G2016}, p. 177] \label{q: batu} Suppose that $(M,g)$ is geodesically complete and has lower bounded Ricci curvature. Does $CZ(p)$ hold on $(M,g)$ for all $p\in(1,\infty)$? \end{question} Strong evidence for a negative answer comes from a deep and recent result by G. De Philippis and J. N\'u\~nez\hyp{}Zimbron who proved the impossibility to have a Calder\'on\hyp{}Zygmund theory on compact manifolds with constants depending only on a lower bound on the sectional curvature, at least when $p>n$. Namely, when $p>n$ one can find a sequence of compact, non\hyp{}negatively curved Riemannian manifolds $\{(M_j,g_j)\}_{j=1}^\infty$ for which the best constant in $CZ(p)$ is at least $j$; see \cite[Corollary 1.3]{DPNZ2019}. The main result of this short note gives a concrete and final answer to \Cref{q: batu}, even under the stricter assumption of positive sectional curvature. \begin{mytheorem} \label{t: main} For every $n\ge 2$ and $p>n$, there exists a complete, non\hyp{}compact $n$\hyp{}dimensional Riemannian manifold $(M, g)$ with $\Sect(M) > 0$ such that $CZ(p)$ fails. \end{mytheorem} With respect to the argument in \cite{DPNZ2019}, our main contribution consists in proving the existence of a fixed Riemannian manifold on which $CZ(p)$ can not hold, whatever constant $C$ one takes (as explained above, such a result is clearly impossible in the compact setting). To prove \cite[Corollary 1.3]{DPNZ2019}, the authors considered a sequence of smooth non\hyp{}negatively curved $n$\hyp{}dimensional compact manifolds Gromov\hyp{}Hausdorff approaching a compact $RCD(0,n)$ space $X$ with a dense set of singular points. A bound on the constant $C$ in $CZ(p)$ along the sequence, combined with a Morrey inequality, would imply that all functions on $X$ with Laplacian in $L^{p>n}$ are $C^1$. On the other hand, De Philippis and N\'u\~nez\hyp{}Zimbron proved in \cite[Theorem 1.1 ]{DPNZ2019} that the gradient of a harmonic function (or more generally of any function whose Laplacian is in $L^{p>n}$) vanishes at singular points of an $RCD$ space. By a density argument, this would imply that all harmonic functions on $X$ are constant which is impossible. To achieve our result, we localize this procedure. The key observation is the fact that the argument is indeed local and can be repeated on infinitely many singular perturbations scattered over a non\hyp{}compact manifold. 
Namely, we begin with a complete non\hyp{}compact manifold $(M,g)$ with $\Sect(M) > 0$. In the interior of infinitely many separated sets $\{\mathfrak D_j\}_{j\in\mathbb{N}}$ of $M$ we take sequences of local perturbations $g_{j,k}$ of the original metric $g$ such that all the $g_{j,k}$ have $\Sect > 0$ and $g_{j,k}$ Gromov\hyp{}Hausdorff converges to an Alexandrov metric $d_{j,\infty}$ on $M$ of non\hyp{}negative curvature (hence $RCD(0,n)$). In particular, the metric $d_{j, \infty}$ is singular on a dense subset of $\mathfrak{D}_j$. Next, we observe that a neighborhood of each $\mathfrak D_j$ can be seen as a piece of a compact space whose metric is smooth outside $\mathfrak D_j$, so that De Philippis and N\'u\~nez\hyp{}Zimbron's strategy can be applied locally to the sequence $g_{j,k}$. Accordingly, we find a (large enough) $k$ and a function $v_j$ compactly supported in a small neighborhood of $\mathfrak{D}_j$ such that the following estimate holds with respect to the metric $g_{j,k}$ \begin{equation*} \Vert \Hess v_j \Vert_{L^p} > j \left(\Vert \Delta v_j \Vert_{L^p} + \Vert v_j \Vert_{L^p}\right). \end{equation*} Gluing together all the local deformations of the metric, we thus obtain a smooth manifold on which no constant $C$ makes $CZ(p)$ true. It is worthwhile to note that, to the best of our knowledge, the problem of extending to $1<p\le n$ De Philippis and N\'u\~nez\hyp{}Zimbron's result (and thus our extension) is completely open, except for the case $p=2$ alluded to above. Notably, it is not known whether a lower bound on the sectional curvature suffices to have the validity of $CZ(p)$ for any $p\in (1,n]$. The main obstruction to reproduce the strategy detailed above is the lack of a Morrey embedding when $p\leq n$. Accordingly, the gradient of a harmonic functions on the singular space could be non\hyp{}continuous. We wonder whether an $L^q$ control on the gradient of harmonic functions, for $q$ large enough, could suffice, thus permitting to lower the threshold $n$ for $p$ in Theorem \ref{t: main}. \vspace{\baselineskip} \Cref{t: main} confirms the strong indications carried by \cite{DPNZ2019} and sheds light on the conditions necessary to the validity of $CZ(p)$ on complete non\hyp{}compact Riemannian manifolds. On the other hand, our result answers as a byproduct two other related questions. Beyond the importance that Calder\'on\hyp{}Zygmund inequalities have in themselves, their validity has consequences on other topics in the field. For instance, $CZ(p)$ is related to a class of functional inequalities known as $L^p$\textsl{\hyp{}gradient estimates}, i.e. \begin{equation}\label{eq: Lp-grad} \|\nabla \varphi \|_{L^p}\le C(\|\varphi\|_{L^p}+\|\Delta\varphi\|_{L^p}) \end{equation} for all $\varphi\in C^\infty_c(M)$. These gradient estimates are known to hold on any complete Riemannian manifold (actually in a stronger multiplicative form) for $p\in(1,2]$, \cite{CD2003}. To the best of our knowledge, it is still unknown if \eqref{eq: Lp-grad} holds as well for $p>2$ without further assumption. Nonetheless, a Riemannian manifold supports an $L^p$\hyp{}gradient estimate whenever $CZ(p)$ holds on $M$, \cite[Corollary 3.11]{GP2015}. This naturally leads to the following question, raised by B. Devyver. \begin{question}[see Section 8.1 in \cite{P2020}]\label{q: baptiste} Are $L^p$\hyp{}gradient estimates and $L^p$\hyp{}Calder\'on\hyp{}Zygmund inequalities equivalent? 
\end{question} Since $L^p$\hyp{}gradient estimates are known to hold when the Ricci curvature is bounded from below, \cite{CTT2018}, \Cref{t: main} gives a negative answer also to \Cref{q: baptiste}. \begin{mycorollary} For any $n \geq 2$ and $p> n$, there exists a complete Riemannian manifold $(M,g)$ supporting the $L^p$\hyp{}gradient estimate \eqref{eq: Lp-grad} on which $CZ(p)$ does not hold. \end{mycorollary} Another important feature of $CZ(p)$ is its interaction with Sobolev spaces. Unlike the Euclidean setting, on a Riemannian manifold there exist several non-necessarily equivalent definitions of $k$-th order $L^p$ Sobolev space; see for instance the introduction in \cite{V2020} for a brief survey. The role of $CZ(p)$ in the density problem of compactly supported functions in the Sobolev space is by now well understood; see \cite[Remark 2.1]{V2020}. Here, we consider the spaces \[ W^{2,p}(M)=\{f\in L^p:\ \nabla f\in L^p,\ \Hess\, f\in L^p\}, \] and \[ H^{2,p}(M)=\{f\in L^p:\ \Delta f\in L^p\}, \] endowed with their canonical norms. Here, the gradient, the Hessian and the Laplacian are interpreted in the sense of distributions. Note that the space $H^{2,p}$ can be interpreted as the maximal self-adjoint realization of $\Delta:C^\infty_c\to C^\infty_c$ in $L^p$. By definition, $W^{2,p}(M)\subset H^{2,p}(M)$. Moreover, if $CZ(p)$ and \eqref{eq: Lp-grad} hold on $M$ one has \begin{equation*} \|\nabla \varphi \|_{L^p}+\|\Hess \varphi \|_{L^p}\le C(\|\varphi\|_{L^p}+\|\Delta\varphi\|_{L^p}),\quad\forall\,\varphi\in C_c^{\infty}(M). \end{equation*} Thanks to a density result due to O. Milatovic (see \cite[Appendix A]{GPM2019}), the latter estimate holds for any $\varphi\in H^{2,p}$. Thus, $H^{2,p}=W^{2,p}$ whenever $CZ(p)$ and \eqref{eq: Lp-grad} hold on $M$, for instance, when the geometry of $M$ is bounded. Conversely, examples proving that $H^{2,p}\neq W^{2,p}$ on wildly unbounded geometries are known; \cite{D1981,V2020}. In this direction, as a corollary of the proof of \Cref{t: main}, we get the following interesting observation. \begin{mycorollary} \label{cor:counterexample to sobolev inclusion} For every $n\geq 2$ and $p>n$ there exists a complete, non\hyp{}compact $n$\hyp{}dimensional Riemannian manifold with $\Sect(M) > 0$ such that $W^{2, p}(M) \subsetneq H^{2, p}(M)$. \end{mycorollary} The paper is organized as follows. In \Cref{sec:singular space} we construct the sequence of local deformations of the initial smooth metric, each Gromov\hyp{}Hausdorff converging to an Alexandrov space of positive curvature with a locally dense cluster of singular points. In \Cref{sec: Poisson} we specify to our setting a proposition by S. Pigola (which collects a series of previous results due to S. Honda) about the convergence of functions defined on a Gromov\hyp{}Hausdorff converging sequence of manifolds. Finally, in \Cref{sec: proof} we conclude the proofs of \Cref{t: main} and \Cref{cor:counterexample to sobolev inclusion}. \section{The singular space and its smooth approximations} \label{sec:singular space} It is well known from previous literature that for every $n\ge 2$ one can always construct a compact, convex set $C \subset \R^{n+1}$ whose boundary $X = \partial C$ is an Alexandrov space with $\Curv (X) \geq 0$ and a dense set of singular points. The first example of such spaces is due to Y. Otsu and T. Shioya in dimension 2, \cite[Example (2)]{OS1994}, although the result holds in arbitrary dimension. 
Observe that the space $X$ can be GH approximated with a sequence of smooth manifolds $X_k$ of non\hyp{}negative sectional curvature; see the proof of \cite[Theorem 1]{AKP2008}. In the following, we would like to localize this construction inside a compact set of a complete, non\hyp{}compact manifold. Indeed, we prove that a smooth and strictly convex function can always be perturbed on a compact set by introducing a dense sequence of singular points. Our construction leaves the function unaltered outside the compact set, preserves smoothness outside the singular set and convexity at a global scale. Furthermore, we prove that such singular perturbation can be locally and uniformly approximated with smooth convex functions in a neighborhood of the singular set. Once again, the difficulty here is to leave the functions unaltered outside the compact set. \begin{lemma} \label{lem:singular deformation} Let $f: \R^n \to \R$ be a smooth, convex function. For every $x \in \R^n$, $r >0$ there exists a convex function $f_\infty: \R^n \to \R$ such that \begin{enumerate}[label=(\roman*)] \item $f_\infty$ is smooth and equal to $f$ outside $B_r(x)$; \item the graph of $f_\infty$ restricted to $B_r(x)$ has a dense set of singularities. \end{enumerate} Furthermore, there exists a sequence of smooth, strictly convex functions $f_\infty^k:\R^n \to \R$ converging uniformly to $f_\infty$ and equal to $f$ outside $B_r(x)$. \begin{proof} Take $\lbrace y_k\rbrace_{k = 1}^\infty$ any dense set contained in $S \coloneqq B_r(x)$. We want to perturb $f$ in $S$ to obtain a new convex function whose graph has singularities in correspondence with $y_k$. To do so, we consider $g : B_1(0) \to \R$ such that \begin{equation*} \begin{cases} g(x) = |x| + |x|^2 - 1 & x \in B_{1/2}(0)\\ g \in C^\infty(B_1(0) \setminus \lbrace 0\rbrace) \\ \supp g \subset B_1(0)\\ g \leq 0. \end{cases} \end{equation*} Then, for $\varepsilon> 0$ and $y \in \R^n$ we define $g_{\varepsilon, y }: B_\varepsilon(y) \to \R$ as \begin{equation*} g_{\varepsilon, y }(x) \coloneqq g\left(\frac{x-y}{\varepsilon} \right), \end{equation*} so that $g_{\varepsilon, y }$ is smooth outside $\lbrace y\rbrace$, non-positive and strictly convex on $B_{\varepsilon/2}(y)$. Let $\varepsilon_1 < 1 - |y_1|$, define \begin{equation*} f_1(x) \coloneqq f(x) + \eta_1 g_{\varepsilon_1, y_1}(x), \end{equation*} with $\eta_1>0$ small enough so that $f_1$ is strictly convex. Observe that $f_1$ is smooth outside $\lbrace y_1 \rbrace$ and (its graph) has a singular point on $y_1$. Recursively, we let $\varepsilon_k < \min \left\lbrace 1-|y_k|, \text{dist} (y_k, y_1), \ldots \text{dist} (y_k, y_{k-1}) \right\rbrace$ and we define \begin{equation} \label{eq: f_k} f_k(x) \coloneqq f_{k-1}(x) + \eta_k g_{\varepsilon_k, y_k}(x). \end{equation} By construction $f_k$ is smooth outside $\lbrace y_1, \ldots, y_k \rbrace$, where its graph is singular, and strictly convex, provided that $\eta_k$ is small enough. Furthermore, if $\eta_k$ are such that $\sum_k \eta_k$ converges, $f_k$ converges uniformly to some $f_\infty$, which is convex, singular on $\lbrace y_k\rbrace_{k = 1}^\infty$ and is smooth elsewhere. Moreover it is equal to $f$ outside $S$. Observe also that $\lbrace (y_k, f_\infty (y_k)) \rbrace_{k = 1}^\infty$ is dense in $\Graph(f_\infty|_S)$ since $f_\infty$ is locally Lipschitz. It remains to show that $f_\infty$ can be smoothly approximated with strictly convex functions. By a diagonalization procedure, it is enough to uniformly approximate each $f_k$. 
For $0 < \delta < \min\lbrace \varepsilon_1, \ldots, \varepsilon_k \rbrace$, let $\phi_{\delta, k} = \phi_\delta: \R^n \to \R$ be a smooth convex function such that \begin{equation*} \phi_\delta(x) = f_k(x) \text{ on } \R^n \setminus \bigcup_{i=1}^k B_\delta(y_k). \end{equation*} The existence of $\phi_\delta$ is ensured by \cite[Theorem 2.1]{G2002}. Clearly $\phi_\delta$ converges pointwise to $f_k$ as $\delta \to 0$. Since the functions are all strictly convex the convergence is actually uniform. This concludes the proof. \end{proof} \end{lemma} \begin{remark} Observe that the epigraph of $f_\infty$ is a convex set in $\R^{n+1}$ whose boundary, endowed with the intrinsic distance, is an Alexandrov space of non\hyp{}negative curvature (see \cite{B1976}). Its singularities are contained (and dense) in the compact set $\Graph(f_\infty|_S)$. Similarly, the graphs of $f_\infty^k$ are smooth hypersurfaces of positive sectional curvature, isometrically immersed in $\R^{n+1}$. Since $f_\infty^k \to f_\infty$ uniformly, their graphs converge with respect to the Hausdorff metric. In the case of convex sets of $\R^n$, it is well known that this implies Gromov\hyp{}Hausdorff convergence; see \cite[Theorem 10.2.6]{BBI2001} observing that the proof applies in any dimension. Observe also that the convergence is measured if we endow these spaces with the usual $n$\hyp{}dimensional Hausdorff measure $\mathcal{H}^n$. On an isometrically immersed manifold, this is in fact the Riemannian volume. \end{remark} \section{Convergence of solutions of the Poisson equation}\label{sec: Poisson} The next step in our proof is a convergence result for the solutions of the Poisson equation on limit spaces. In what follows we mimic, up to minor modifications necessary to our purposes, \cite[Proposition B.1]{P2020}, where Pigola collects and develops a series of previous results due to Honda, \cite{H2015} and \cite{H2018}. Let us consider the following space \begin{equation*} \mathcal{M}(n, D) = \lbrace (M, g) \text{ cpt.} : \dim M = n, \diam (M) \leq D, \Sect \geq 0 \rbrace, \end{equation*} and denote with $\overline{\mathcal{M}(n, D)}$ its closure with respect to the measured Gromov\hyp{}Hausdorff topology. Note that elements of $\overline{\mathcal{M}(n, D)}$ are in particular Alexandrov spaces with $\Curv \geq 0$ and $\diam \leq D$. Note that, by volume comparison and bounds on the diameter, there exists $V > 0$ depending on $n, D$ such that $\vol X \leq V$ for all $X \in \mathcal{M}(n, D)$. \begin{remark} The following Proposition actually holds in the more general setting of Ricci limit spaces. To avoid unnecessary complication in notations, we restrict ourselves to the case of Alexandrov spaces which are a special case of the former. \end{remark} In what follows, all convergences are intended in the sense of Honda, see \cite[Section 3]{H2015}. \begin{proposition} \label{prop:convergence poisson} Let $(M_k, g_k) \in \mathcal{M}(n, D)$ be a sequence of smooth manifolds converging in the mGH topology to an Alexandrov space $(X_\infty, d_\infty,\mu_\infty) \in \overline{\mathcal{M}(n, D)}$ of dimension $n$ and let $x_\infty \in X_\infty$. There exist functions $u_k \in C^2(M_k)$, $g_k \in \Lip(M_k)$ and $u_\infty \in W^{1, 2}(X_\infty) \cap L^p(X_\infty)$, $g_\infty \in L^p(X_\infty)$ for all $1<p<+\infty$, such that $u_k, u_\infty$ are non-constant and $\Delta_{M_k} u_k = g_k$, $\Delta_{X_\infty} u_\infty = g_\infty$. 
Furthermore \begin{enumerate}[label=\normalfont (\alph*)] \item $g_\infty \geq \sfrac{1}{2}$ on a neighborhood of $x_\infty$; \item $g_k \to g_\infty$ in the strong $L^p$ sense; \item $u_k \to u_\infty$ in the strong $W^{1, 2}$ sense; \item $\Vert u_k \Vert_{W^{1, p}} \leq L$ for some $L = L(p, n, D, K) > 0$; \item $u_k \to u_\infty$ in the strong $L^p$ sense; \item $\nabla^{M_k} u_k \to \nabla^X u_\infty$ in the weak $L^p$ sense. \end{enumerate} These functions satisfy (a) through (f) for all $1<p<+\infty$. \begin{proof} Since $M_k$ is bounded, separable and $M_k$ converges to $X_\infty$ with respect to the mGH topology, there exists a sequence of points $x_k \in M_k$ such that the mGH convergence $(M_k, g_k, x_k) \to (X_\infty, \mu_\infty, x_\infty)$ is pointed. Next, using volume comparison and the convergence $\vol (M_k)\to \mathcal H^n(X)$ as $k\to\infty$, one can show the existence of a uniform $R> 0$ such that for $k \gg 1$, \begin{equation*} \vol B_R^{M_k}(x_k) \leq \frac{1}{2} \vol M_k. \end{equation*} Let $f_k : M_k \to [0, 1]$ be Lipschitz functions compactly supported in $B_R^{M_k}(x_k)$ satisfying \begin{equation*} i) \, f_k = 1\ \text{ on } B_{R/2}^{M_k}(x_k), \qquad ii) \, \Vert \nabla f_k \Vert_{L^\infty} \leq \frac{2}{R}. \end{equation*} Define \begin{equation*} g_k \coloneqq f_k - \fint_{M_k} f_k \in \Lip(M_k), \end{equation*} and note that \begin{equation*} 0 \leq \fint_{M_k} f_k \leq \frac{\vol B_R^{M_k}(x_k)}{\vol M_k} \leq \frac{1}{2}. \end{equation*} Clearly $\fint_{M_k} g_k = 0$ and $\Vert g_k\Vert_{L^\infty} \leq 1$. Moreover, $g_k \geq 1/2$ on $B_{R/2}^{M_k}(x_k)$ so that $g_k \not\equiv 0$. Since $\Vert g_k\Vert_{L^\infty} \leq 1$ and the volumes are uniformly bounded, $\Vert g_k \Vert_{L^p} \leq V^{1/p}$ for all $p > 1$. Using \cite[Proposition 3.19]{H2015} we conclude that $g_k$ converges weakly to some $g_\infty \in L^p(X_\infty)$ (\cite[Definition 3.4]{H2015}). Condition $ii)$ ensures that the sequence $g_k$ is asymptotically uniformly continuous in the sense of \cite[Definition 3.2]{H2015}. Hence, $g_k$ converges to $g_\infty$ strongly and in the sense of \cite[Definition 3.1]{H2015}, see \cite[Remark 3.8]{H2015}. This ensures that $g_\infty\not\equiv 0$ in a neighborhood of $x_\infty$ and, more importantly, allows us to use \cite[Proposition 3.32]{H2015} which proves strong $L^p$ convergence of $g_k$ to $g_\infty$. It is worthwhile to notice that $g_k$ converges $L^p$ strongly to $g_\infty$ for every $1<p<+\infty$, in particular, for $p = 2$. Next, we denote with $u_k \in C^2(M_k)$ the unique (non-constant) solution of the Poisson equation \begin{equation*} \Delta_{M_k} u_k = g_k \quad \text{on }M_k, \end{equation*} satisfying \begin{equation*} \fint_{M_k} u_k = 0. \end{equation*} Since $g_k$ converges to $g_\infty$ in a strong (and thus weak) $L^2$ sense, \cite[Theorem 1.1]{H2018} ensures $W^{1, 2}$ convergence of $u_k$ to the unique (non-constant) solution $u_\infty \in W^{1, 2}(X_\infty)$ of the Poisson equation \begin{equation*} \Delta_{X_\infty} u_\infty = g_\infty \quad \text{on }X_\infty, \end{equation*} satisfying \begin{equation*} \fint_{X_\infty} u_\infty = 0. \end{equation*} Finally, we claim that $\lbrace u_k \rbrace$ is bounded in $W^{1, p}$. By \cite[Theorem 4.9]{H2015} this implies $L^p$ strong convergence of $u_k$ to $u_\infty$ and $L^p$ weak convergence of $\nabla^{M_k} u_k$ to $\nabla^X u_\infty$ up to a subsequence and thus concludes the proof of \Cref{prop:convergence poisson}. 
To prove the claim we observe that since $u_k \to u_\infty$ in a strong $W^{1,2}$ sense, we have $L^2$ boundedness of $u_k$. Applying the estimates in \cite[Corollary 4.2]{ZZ2019} we obtain $L^\infty$ bounds for $ u_k$ and $\nabla u_k$, hence, the desired $L^p$ estimates using the uniform bound on volumes. \end{proof} \end{proposition} \section{Proofs of the results} \label{sec: proof} In \Cref{sec:singular space} we established a method to locally perturb a smooth and strictly convex function by introducing a set of singular points, which is dense inside a given compact. In the following we consider a sequence of infinitely many singular perturbations scattered over a non\hyp{}compact manifold, each of these perturbations is GH approximated with smooth Riemannian manifolds. For each perturbation, we prove that it is impossible to have the validity of a local (hence of a global) Calder\'on\hyp{}Zygmund inequality whose constant is uniformly bounded across the approximating sequence of manifolds. To do so, we show that each singular set together with its corresponding approximation can be seen as a piece of a compact space whose metric is smooth outside the singular part. This observation is a technical device which allows the application of already available results. In particular, we can employ \Cref{prop:convergence poisson} to localize the strategy of De Philippis and N\'u\~nez-Zimbr\'on in a neighborhood of each singular set. Once we have proven that the constants of the local Calder\'on\hyp{}Zygmund inequalities cannot be chosen uniformly, we select on the $j$-th perturbation in the approximating sequence a manifold with $CZ(p)$ constant greater that $j$. \begin{lemma} \label{lem:main lemma} Let $n \geq 2$ and $p> n$. There exists a sequence of smooth and strictly convex functions $f_j : \R^n \to \R$, $j \geq 1$ and a monotone increasing sequence of radii $r_j > 0$ such that \begin{enumerate}[label=(\roman*)] \item $f_j(x) = f_{j-1}(x)$ for $x \in \mathfrak{B}_{j-1}$; \item $f_j(x) = |x|^2$ for $x \in \R^n \setminus \overline{\mathfrak{B}_j}$; \end{enumerate} where $ \mathfrak{B}_j = B_{r_j}(0)$ and $\mathfrak{B}_0 = \emptyset$. Furthermore, if we consider $N_j = \Graph(f_j)$ as a Riemannian manifold isometrically immersed in $\R^{n+1}$, there exists some $v_j \in C^2(N_j)$ compactly supported in $\Graph (f_j|_{\mathfrak{B}_j \setminus \overline{\mathfrak{B}_{j-1}}}$) which satisfies \begin{equation} \label{eq:contraddiction to CZ(p)} \Vert \Hess v_j \Vert_{L^p} > j \left(\Vert \Delta v_j \Vert_{L^p} + \Vert v_j \Vert_{L^p}\right), \end{equation} where $L^p = L^p(M_j)$. \begin{proof} We begin with a remark on notation: given a subset $A \subset \R^n$ and some function $k : \R^n \to \R$, we denote with $k(A) = \Graph(k|_A) \subset \R^{n+1}$. This abuse of notation is repeatedly used throughout the proof. To simplify the exposition, the proof proceeds inductively on $j\ge 1$. Set $f_0(x)=|x|^2$. Suppose one has $f_{j-1}$ and wants to build $f_j$. Let $S_j$ be a Euclidean ball contained in $\R^n \setminus \overline{\mathfrak{B}_{j-1}}$. By \Cref{lem:singular deformation} there exists a convex function $h_j$ with a dense set of singular points in $S_j$ and equal to $f_{j-1}$ outside $S_j$. Furthermore, $h_j$ can be approximated with smooth and strictly convex functions $h_{j,k} : \R^n \to \R$ equal to $f_{j-1}$ outside $S_j$. Note that $h_j(S_j)$ corresponds to the $\mathfrak{D_j}$ of the Introduction. Next, let $r_j > 0$ be such that $S_j \subset \mathfrak{B}_j$. 
For later use we observe that one can always consider a larger ball $T_j$ such that $S_j \subset T_j$ and $T_j \subset \mathfrak{B}_j \setminus \overline{\mathfrak{B}_{j-1}}$. We want to extend $h_j(\mathfrak{B}_j)$ to a closed (i.e. compact without boundary) Alexandrov space $X_j$ with $\Curv(X_j) \geq 0$. Moreover, we would like the extension to be smooth outside $h_j(\mathfrak{B}_{j-1})$. To this purpose, let $A_j$ be the upper hemisphere of boundary $h_j(\partial \mathfrak{B}_j)$ in $\R^{n+1}$, so that $\widetilde{X_j} \coloneqq h_j(\mathfrak{B}_j) \cup A_j$ is a convex hypersurface in $\R^{n+1}$. To obtain $X_j$, one simply needs to smooth $\widetilde{X_j}$ in a neighborhood of $h_j(\partial \mathfrak{B}_j)$. For instance, one can use \cite[Theorem 2.1]{G2002}, observing that in this neighborhood, $\widetilde{X_j}$ is obtained by rotation of a piecewise smooth curve. We consider on $X_j$ the metric induced by $\R^{n+1}$. By the same strategy, we extend $h_{j,k}(\mathfrak{B}_j)$ to a compact and smooth Riemannian manifold $M_{j,k}$ with $\Sect(M_{j,k}) > 0$, isometrically immersed in $\R^{n+1}$. Note that, for all $k$, $ M_{j,k}=X_j$ outside of $S_j$. Moreover $M_{j,k}$ converges to $X_j$ in a (measured) Gromov\hyp{}Hausdorff sense as $k \to \infty$. Then, choosing a point $x_{j,\infty} \in S_j \subset X_j$, we apply \Cref{prop:convergence poisson} to deduce the existence of $u_{j,k} \in C^2(M_{j,k})$, $g_{j,k} \in \Lip(M_{j,k})$ and $u_{j,\infty} \in W^{1, 2}(X_j) \cap L^p(X_j)$, $g_{j,\infty} \in L^p(X_j)$ such $\Delta_{M_{j,k}} u_{j,k} = g_{j,k}$ and $\Delta_{X_j} u_{j,\infty} = g_{j,\infty}$. In particular \begin{enumerate}[label=(\alph*)] \item $\Delta u_{j,k} \to \Delta u_{j,\infty}$ strongly in $L^p$, hence, $\Vert \Delta u_{j,k}\Vert_{L^p} \leq C_1$; \item $\Vert u_{j,k}\Vert_{W^{1,p}} \leq C_1$; \item $g_{j,\infty} \geq 1/2$ in a neighborhood of $x_{j,\infty}$. In particular, in this neighborhood $u_{j,\infty}$ can not be constant. \end{enumerate} Here $C_1$ depends on $n, p$ and the upper bound $\diam M_{j,k} \leq D_j$ and the norms are intended over $L^p=L^p(M_{j,k})$ and $W^{1,p} = W^{1,p}(M_{j,k})$. A key element in our proof is the possibility to localize the sequence $u_{j,k}$ without altering its essential properties. This can be done via smooth cutoff functions $\chi_{j,k} \in C^\infty(M_{j,k})$ equal to $1$ on $h_{j,k}(S_j)$ and identically $0$ outside of $h_{j,k}(T_j)$. Moreover, since the manifolds $M_{j,k}$ are all isometric outside $h_{j,k}(S_j)$, we can choose the functions $\chi_j = \chi_{j,k}$ so that they are equal (independently of $k$) outside $h_{j,k}(S_j)$. Let $v_{j,k} \coloneqq \chi_j\, u_{j,k} \in C^2(M_{j,k})$ and observe that $v_{j,k}$ preserves the $L^p$ bounds of $u_{j,k}$, indeed: \begin{equation} \label{eq:v_k} \Vert v_{j,k} \Vert_{L^p} \leq \Vert u_{j,k} \Vert_{L^p}\leq C_2, \end{equation} \begin{equation} \label{eq:delta v_k} \Vert \Delta v_{j,k} \Vert_{L^p} \leq \Vert \Delta u_{j,k} \Vert_{L^p} + \Vert u_{j,k} \Delta \chi_j \Vert_{L^p} + 2 \Vert |\nabla u_{j,k}|\,|\nabla \chi_j| \Vert_{L^p} \leq C_2, \end{equation} where $C_2$ depends on $C_1$ as well as on the choice of $\chi_j$. Next, we need some function theoretic considerations. First, we observe that compactness of $M_{j,k}$ implies the validity of an $L^p$\hyp{}Calder\'on\hyp{}Zygmund inequality \begin{equation} \label{eq:CZ(p)} \Vert \Hess \varphi \Vert_{L^p} \leq E_{j,k} \left(\Vert \Delta \varphi \Vert_{L^p} + \Vert \varphi\Vert_{L^p}\right), \quad \forall\,\varphi \in C^2(M_{j,k}). 
\end{equation} Second, if $p > n$, we have the validity on the sequence $M_{j,k}$ of a uniform Morrey\hyp{}Sobolev inequality \begin{equation} \label{eq:morrey embedding} \left\vert \varphi(x) - \varphi(y) \right\vert \leq C_3 \Vert \nabla \varphi \Vert_{L^p} d_{j,k}(x, y)^{1 - \frac{n}{p}}, \quad \forall\,\varphi \in C^1(M_{j,k}), \end{equation} where $d_{j,k}$ is the Riemannian distance on $M_{j,k}$, and the constant $C_3$ depends on $n, p$ and the uniform upper bound on $\diam M_{j,k}$. See \cite[Theorem 9.2.14]{HK2015} for reference, observing that the lower bound on the Ricci curvature ensures the validity of a $p$-Poincaré inequality; see \cite[Theorem 5.6.5]{S2002}. Applying \eqref{eq:morrey embedding} to $|\nabla \varphi|$ and combining with the Calder\'on\hyp{}Zygmund inequality \eqref{eq:CZ(p)} implies the following estimate \begin{equation} \label{eq: morrey + CZ(p)} \left\vert |\nabla \varphi |(x) - |\nabla \varphi|(y) \right\vert \leq C_3 E_{j,k} \left(\Vert \Delta \varphi \Vert_{L^p} + \Vert \varphi \Vert_{L^p}\right) d_{j,k}(x, y)^{1 -\frac{n}{p}}, \end{equation} for all $\varphi \in C^2(M_{j,k})$ and all $x, y \in M_{j,k}$. Applying \eqref{eq: morrey + CZ(p)} to $v_{j,k}$ and using estimates \eqref{eq:v_k} and \eqref{eq:delta v_k} we obtain \begin{equation} \label{eq:asympt unif cont v_k} \left\vert |\nabla v_{j,k} |(x) - |\nabla v_{j,k}|(y) \right\vert \leq C E_{j,k} d_{j,k}(x, y)^{1-\frac{n}{p}} \quad x, y \in M_{j,k}, \end{equation} where $C$ depends on $C_1, C_2$ and $C_3$, i.e., $C = C(n, p, \chi_j, D_j)$. Suppose by contradiction that $E_{j,k}$ is bounded from above uniformly in $k$. By \eqref{eq:asympt unif cont v_k} we deduce that $|\nabla v_{j,k}|$ is uniformly asymptotic continuous in the sense of Honda, hence, from \cite[Proposition 3.3]{H2015} we conclude that $|\nabla v_{j,k}|$ converges pointwise to $|\nabla v_{j,\infty}| \in C^0(X)$. However, since $X$ is an $n$\hyp{}dimensional Alexandrov space with $\Sect \geq 0$, it is a $RCD(0, n)$ space. Moreover, $\Delta v_{j,\infty} \in L^{p>n}$. From \cite[Theorem 1.1]{DPNZ2019} we then conclude that $|\nabla v_{j,\infty} |(x) = |\nabla u_{j,\infty}|(x) = 0$ whenever $x$ is a singular point. Note here that singular points of Alexandrov spaces are \textit{sharp} in the sense of De Philippis and N\'u\~nez-Zimbr\'on and have finite Bishop-Gromov density. By density we conclude that, $v_{j,\infty}$ must be constant in a neighborhood of $x_{j,\infty}$ thus contradicting (c). In particular there exists some $\bar{k}$, which may depend on $j$, such that \begin{equation} \label{eq:CZ(p) v_k} \Vert \Hess v_{j,\bar{k}} \Vert_{L^p} > j \left(\Vert \Delta v_{j,\bar{k}} \Vert_{L^p} + \Vert v_{j,\bar{k}}\Vert_{L^p}\right) \end{equation} on $M_{j,\bar{k}}$. Finally, we set $f_j = h_{j,\bar{k}}$, since $v_{j,\bar{k}}$ is compactly supported in $h_{j,\bar{k}}(T_j)$, it defines a function $v_j = v_{j,\bar{k}}$ on $N_j$ which satisfies \eqref{eq:contraddiction to CZ(p)}. \end{proof} \end{lemma} Note that, while \Cref{prop:convergence poisson} is independent of $p$, the previous result depends on the initial choice of $p>n$. This has to be attributed to the fact that the constants $C_1, C_2, C_3$ and $C$ are all dependent on $p$. To obtain a contradiction to $CZ(p)$ for $p>n$, we then simply need to glue the manifolds of \Cref{lem:main lemma} together. \begin{myproof}[of \Cref{t: main}] For $p>n$, let $f_j$ be as in \Cref{lem:main lemma}, and let $f$ be its point-wise limit. Note that the convergence is actually uniform on compact sets. 
The function $f$ is smooth and strictly convex, thus, $M = \Graph(f)$ is a smooth, non\hyp{}compact Riemannian manifold isometrically immersed in $\R^{n+1}$ satisfying $\Sect(M) > 0$. Since $f$ is defined on the whole space $\R^n$, $M$ is also a complete manifold. Observe that the sequence $v_j$ as in \Cref{lem:main lemma} induces functions in $C^2(M)$ whose supports are compact and disjoint, and which satisfy \eqref{eq:contraddiction to CZ(p)} on $L^p(M)$. This sequence clearly contradicts the validity of a global Calder\'on\hyp{}Zygmund inequality on $M$. \end{myproof} Note that in the above we have not exploited to the fullest the fact that the functions $v_j$ have disjoint supports. In fact, not only one has a sequence $v_j$ on which \eqref{eq:contraddiction to CZ(p)} holds, but one can actually define a function $F \in C^2(M)$ such that $\Vert F \Vert_{L^p} + \Vert \Delta F \Vert_{L^p} < + \infty$ but $\Vert \Hess F \Vert_{L^p} = + \infty$, which is a stronger condition. This allows to prove \Cref{cor:counterexample to sobolev inclusion}. \begin{myproof}[of \Cref{cor:counterexample to sobolev inclusion}] Fix $p >n $, let $(M,g)$ and $v_j \in C^2(M)$ be as in the proof of \Cref{t: main}. Define \begin{equation*} F \coloneqq \sum_{j=1}^\infty \frac{1}{j^2}\frac{v_j}{\Vert \Delta v_j \Vert_{L^p} + \Vert v_j \Vert_{L^p}}, \end{equation*} and observe that the sum converges since it is locally finite. Note that \begin{equation*} \Vert \Delta F \Vert_{L^p} + \Vert F \Vert_{L^p}=\sum_{j=1}^\infty \frac{1}{j^2}, \end{equation*} so that $F \in H^{2, p}(M)$. By \eqref{eq:contraddiction to CZ(p)}, on the other hand, we have \begin{equation*} \Vert \Hess F \Vert_{L^p} \geq \sum_{j=1}^\infty \frac{1}{j}, \end{equation*} hence, $F \not\in W^{2, p}(M)$. \end{myproof} We conclude this paper with a proof of Theorem \ref{th: sharp} which follows quite directly from the constructions of the counterexamples in \cite{GP2015,L2020}. These counterexamples rely on the construction of manifolds whose sectional curvature are increasingly oscillating on a sequence of compact annuli going to infinity. However, by distancing the (disjoint) annuli far enough we are able to provide a controlled lower bound on sectional curvatures. \begin{myproof}[of \Cref{th: sharp}] The counterexamples to \eqref{e: CZp} in \cite{GP2015,L2020} are constructed on a model manifold $(M, g)$, i.e. $M=[0,+\infty)\times \mathbb S^{n-1}$ endowed with a warped metric $g = dt^2 +\sigma^2(t) g_{\mathbb S^{n-1}}$. By carefully choosing the warping function $\sigma$, the authors proved the existence of a sequence of smooth functions $\{u_k\}_{k=1}^\infty$ and a sequence of intervals $\{[a_k,b_k]\}_{k=1}^\infty$ such that \begin{itemize} \item $a_{k+1}>b_k$; \item $u_k$ is compactly supported in the annulus $[a_k,b_k] \times \mathbb S^{n-1}$; \item the sequence of functions $u_k$ contradicts \eqref{e: CZp} for any possible constant, i.e. \[ \frac{\Vert \Hess u_k \Vert_{L^p}}{\Vert \Delta u_k \Vert_{L^p} + \Vert u_k \Vert_{L^p}}\to \infty,\qquad\text{as }k\to\infty; \] \item there exists two sequences of intervals $\{[c_k,d_k]\}_{k=1}^\infty$ and $\{[e_k,f_k]\}_{k=1}^\infty$ with $b_k<c_k<d_k<e_k<f_k<a_{k+1}$ such that $\sigma$ is linear and increasing on $[c_k,d_k]$ and is linear and decreasing on $[e_k,f_k]$, namely \[\sigma|_{[c_k,d_k]}(t)= \alpha_k t + \beta_k,\quad\text{and}\quad \sigma|_{[e_k,f_k]}(t)= \gamma_k t + \delta_k \] for some constants $\alpha_k>0$, $\gamma_k <0$ and $\beta_k,\delta_k\in\R$. 
\end{itemize} Note that, in order to satisfy this latter condition, our $\{u_k\}_{k=1}^\infty$ could be a subsequence of the sequence $\{u_k\}_{k=1}^\infty$ produced in \cite{L2020} Now, for $k\ge 2$, let $0<\kappa_k<\infty$ be such that \[ \forall\, x\in [e_{k-1},d_k]\times \mathbb S^{n-1},\quad \min\Sect(x)\ge -\kappa_k. \] Up to an increase of $\kappa_{k+1}$, we can assume that $\kappa_k\le \kappa_{k+1}$. For $k\ge 2$, let $T_k$ be such that $\lambda (T_k)> \kappa_k$. For later purpose, since $\lambda$ is increasing we can assume without loss of generality that $T_{k+1}>T_{k}+d_{k-1}-e_{k-2}$ and that \begin{equation} \label{condition T} \alpha_{k-1}(T_{k+1}+e_{k-2}-T_k)+\beta_{k-1}>\sigma(e_{k-1}). \end{equation} We define now a new warping function $\tilde\sigma(t):[0,+\infty)\to [0,+\infty)$ and a corresponding model metric $\tilde g=dt^2 +\tilde \sigma^2(t) g_{\mathbb S^{n-1}}$ on $M$ as follows. We define $\tilde \sigma(t)$ only for $t\ge T_3$, since the choice of $\tilde \sigma$ on $[0,T_3)$ does not affect the conclusion of the theorem. For $t\in [T_k,T_k+d_{k-1}-e_{k-2}]$ define \[\tilde\sigma(t)=\sigma(t+e_{k-2}-T_k),\] so that \[ \Sect_{\tilde g}\ge -\kappa_{k-1} \] on $[T_k,T_k+d_{k-1}-e_{k-2}]\times \mathbb S^{n-1}$. In particular \[ \Sect_{\tilde g}(t,\Theta)\ge -\kappa_{k}> - \lambda (T_k)\ge -\lambda (t) \] for any $(t,\Theta)\in ([T_k,T_k+d_{k-1}-e_{k-2}]\cup [T_{k+1},T_{k+1}+d_{k}-e_{k-1}])\times \mathbb S^{n-1}$. It remains to prescribe $\tilde \sigma$ on the intervals $(T_k+d_{k-1}-e_{k-2},T_{k+1})$ for $k\ge 3$. Note that on $[T_k+c_{k-1}-e_{k-2},T_k+d_{k-1}-e_{k-2}]$ we have $\tilde\sigma(t)=\alpha_{k-1}(t+e_{k-2}-T_k)+\beta_{k-1}$. Similarly, on $[T_{k+1},T_{k+1}+f_{k-1}-e_{k-1}]$, we have $\tilde\sigma(t)=\gamma_{k-1}(t+e_{k-1}-T_{k+1})+\delta_{k-1}$. Because of assumption \eqref{condition T}, we can find a $S_k\in (T_k+d_{k-1}-e_{k-2},T_{k+1})$ such that \[\hat \sigma (t)=\begin{cases}\alpha_{k-1}(t+e_{k-2}-T_k)+\beta_{k-1}&\text{on }[T_k+c_{k-1}-e_{k-2},S_k]\\\gamma_{k-1}(t+e_{k-1}-T_{k+1})+\delta_{k-1}&\text{on } [S_k,T_{k+1}+f_{k-1}-e_{k-1}] \end{cases}\] is a well-defined concave continuous piece-wise linear function which coincides with $\tilde\sigma$ outside $(T_k+d_{k-1}-e_{k-2},T_{k+1})$. Let $\epsilon_k>0$ be a small constant to be fixed later, and define $\tilde\sigma$ on $(T_k+d_{k-1}-e_{k-2},T_{k+1})$ to be a concave smooth approximation of $\hat\sigma$ equal to $\hat\sigma$ outside $[S_k-\epsilon_k,S_k+\epsilon_k]$ (this can be produced for instance applying \cite[Theorem 2.1]{G2002}). A standard computation show that the sectional curvature of $(M,\tilde g)$ are given by \[ \Sect_{rad} (t,\Theta) = -\frac{\tilde \sigma '' (t)}{\tilde\sigma (t)},\qquad \Sect_{tg} (t,\Theta) = \frac{1-(\tilde \sigma ' (t))^2}{\tilde\sigma (t)^2},\] for tangent planes respectively containing the radial direction, or orthogonal to it. Since $\tilde \sigma$ is concave for $t \in (T_k+d_{k-1}-e_{k-2},T_{k+1})$ then \[\Sect_{rad} (t,\Theta)\ge 0 \ge -\lambda (t).\] If $\alpha_{k-1}\leq 1$ and $\gamma_{k-1}\geq -1$ then $\Sect_{tg}(t,\Theta) \geq 0 \geq -\lambda(t)$ in a trivial way. Otherwise, \[ \Sect_{tg}(t,\Theta)>\Sect_{tg}(T_k+d_{k-1}-e_{k-2},\Theta)\ge -\kappa_k>-\lambda (t) \] for $t \in (T_k+d_{k-1}-e_{k-2},S_k-\epsilon_k)$ and \[\Sect_{tg}(t,\Theta)>\Sect_{tg}(T_{k+1},\Theta)\ge -\kappa_k>-\lambda (t)\] for $t \in (S_k+\epsilon_k, T_{k+1})$. 
Finally, for $t\in[S_k-\epsilon_k,S_k+\epsilon_k]$, by concavity \[ 1-(\tilde\sigma'(t))^2\ge \min\{1-(\tilde\sigma'(S_k-\epsilon_k))^2;1-(\tilde\sigma'(S_k+\epsilon_k))^2\},\] while $\tilde\sigma(t)$ is arbitrarily close to $\tilde\sigma(S_k-\epsilon_k)$ and to $\tilde\sigma(S_k+\epsilon_k)$ for $\epsilon_k$ small enough. Accordingly, we can choose $\epsilon_k$ small enough so that $\Sect_{tg}(t,\Theta)>-\lambda(t)$ also for $t \in [S_k-\epsilon_k,S_k+\epsilon_k]$. Hence, we have proved that for all $t\ge T_3$, the sectional curvatures of $(M,\tilde g)$ at $(t,\Theta)$ are bounded from below by $-\lambda(t)$. Observe that $([T_k,T_k+d_{k-1}-e_{k-2}]\times\mathbb S^{n-1},\tilde g)$ is isometric to $([e_{k-2},d_{k-1}]\times\mathbb S^{n-1},g)$. Then we conclude by defining $w_k(t,\Theta)=u_{k-1}(t+e_{k-2}-T_{k},\Theta)$, so that the $w_k$ are smooth, compactly supported in $[T_k+a_{k-1}-e_{k-2},T_k+b_{k-1}-e_{k-2}]\times\mathbb S^{n-1}$ and verify \[ \frac{\Vert \Hess w_k \Vert_{L^p}}{\Vert \Delta w_k \Vert_{L^p} + \Vert w_k \Vert_{L^p}}\to \infty,\qquad\text{as }k\to\infty. \] \end{myproof}
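As a quick consistency check of the curvature formulas used in the gluing argument (a remark, not part of the argument itself), note that on any interval where the warping function is affine, say $\tilde\sigma(t)=\alpha t+\beta>0$, the formulas above give
\[
\Sect_{rad}(t,\Theta) = -\frac{\tilde\sigma''(t)}{\tilde\sigma(t)} = 0, \qquad \Sect_{tg}(t,\Theta) = \frac{1-(\tilde\sigma'(t))^2}{\tilde\sigma(t)^2} = \frac{1-\alpha^2}{(\alpha t+\beta)^2},
\]
so that the tangential curvatures are nonnegative precisely when $|\alpha|\le 1$, which is the trivial case singled out in the proof above.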
\section{Introduction} In music composition, production and engineering, audio effects play an essential role in altering the sound towards the desired final result. For instruments like the electric guitar, the processing signal chain can often be viewed as part of the artist’s creative expression \cite{case2010recording}. Entire musical genres and styles are frequently defined and identified by the type of audio effects adopted \cite{blair2015southern, williams2012tubby}; and renowned musicians commonly rely on specific combinations of guitars, amplifiers and effects to achieve a unique sound \cite{prown2003gear}. Through the decades, artists, engineers and producers have defined a palette of sounds that have become a reference for guitar players. In the effort to recreate a specific sound or atmosphere, professionals and amateurs go to great lengths to identify the exact gear that was used in a certain recording. When describing a desired result, people often rely on naming a reference style, artist or song, rather than talking in terms of sound features or effect parameters. Although the design, reconstruction and emulation of audio effects is well-studied \cite{zolzer2011dafx, pakarinen2009review}; it is less so for their recognition and their parameter estimation. Therefore, in our work, we set out to develop a deep learning model capable of recognising which specific guitar pedal effect was used in a recording as well as estimating its parameters. Being the single most important effect for electric guitar, and the one that usually triggers most discussions, this work focused on nonlinear effects, i.e. overdrive, distortion and fuzz. The rest of this paper is organised as follows: in section \ref{sec:background} we look at the state of the art and work related to guitar effects recognition, section \ref{sec:dataset} introduces the dataset we assembled for this work, while the networks architecture is described in section \ref{sec:architecture}; sections \ref{sec:experiments}, \ref{sec:results} and \ref{sec:conclusion} outline experiments, results and conclusions. \section{Background} \label{sec:background} The recognition of musical instrument sounds has been of interest to the information retrieval community for long time \cite{herrera2003automatic}; with applications in musical sounds databases, intelligent music search and recommendation systems. The estimation of instruments' parameters, as well as the classification of playing gestures or styles has also been extensively studied - and applied in contexts like automatic music transcription \cite{benetos2013automatic, kehling2014automatic}, music education \cite{dittmar2012music} or musicology \cite{abesser2016score}. Many papers focused on guitar parameters. String, fret and plucking position estimation have been extensively studied, including works on classical and acoustic \cite{barbancho2009pitch} and electric \cite{abesser2012automatic} guitar. Most works are based on features extraction from recorded sounds \cite{barbancho2009pitch, abesser2012automatic, dittmar2013real, kehling2014automatic, geib2017automatic}, but there are also examples of estimation based on guitar-string physical models \cite{hjerrild2019estimation, hjerrild2019physical} for high-tempo and real-time applications. Plucking and pickup position estimation has also been studied, with Mohamad et al exploring solutions based on spectral features (comparing recordings with string models) and autocorrelation \cite{mohamad2017pickup2}. 
The study was also extended to the case of nonlinear audio effects in the signal chain \cite{mohamad2017pickup}. Classification of playing styles and techniques has also received substantial attention. In \cite{abesser2010feature}, the authors compare the performance of several classifiers (\gls{svm}, Gaussian mixture models, nearest neighbours) on 5 plucking styles (e.g. finger, pick, slap) from bass guitar recordings. In \cite{schuller2015parameter} Schuller et al extend the work to include expression styles (e.g. bend, slide, vibrato). Su et al, in \cite{su2014sparse}, apply sparse dictionary learning to classify guitar playing techniques (e.g. vibrato, hammer-on, pull-off). The same classification problem is solved in \cite{wang2020spectral} using a deep belief network. Other examples of work on guitar-related classification problems focused on playing mode (e.g. bass, solo melodic improvisation, chordal playing) \cite{foulon2013automatic}, chords fingering \cite{barbancho2011automatic} and guitar model \cite{dosenbach2008identification, johnson2015guitar, profeta2019feature, profeta2019comparison}. However, there is only a small corpus of research on guitar effects recognition \cite{stein2010automatic, stein2010automatic2, eichas2015feature, schmitt2017recognising}. In \cite{stein2010automatic}, Stein worked with guitar and bass recordings on recognition of 11 different effects: feedback and slapback delay, reverb, chorus, flanger, phaser, tremolo, vibrato, distortion and overdrive. In a subsequent work \cite{stein2010automatic2}, the author extended his method - based on spectral, cepstral and harmonic features and \gls{svm}s - to cascaded effects. In \cite{schmitt2017recognising}, using the same dataset, the authors aimed to understand which are the most relevant features for the classification task adopting a ”bag-of-audio-words” approach; while in \cite{eichas2015feature}, the authors - making use of specific input test signals, and extracting features from the output - worked on classification of 10 analogue effect units into 5 categories. In \cite{eichas2018gray}, the guitar amplifier modelling process includes emulation of nonlinear blocks and estimation of parameters. In these last examples, the approach is limited to the case in which the unit to classify/model is accessible, which defeats the purpose of estimation from recordings. The closest study to our work is \cite{jurgens2020recognizing}, and it is the only one that estimates effects parameters from guitar recordings. Similarly to the previous studies, the authors used SVMs to classify the same effects listed above with a reduced features set; and extended the work by training shallow neural networks on parameter estimations for 3 effects (distortion, tremolo, delay). The main limitation of this study is the necessity of separate features selection and network training for each effect. In all these cases, the authors worked on generic audio effects or categories and, to the best of our knowledge, there is no previous research focusing on classification and parameter estimation of specific plugins or effect units from guitar recordings. \section{Dataset} \label{sec:dataset} We assembled a novel dataset of processed electric guitar samples using unprocessed recordings from the IDMT-SMT-Audio-Effects dataset \cite{stein2010automatic}. 
The dataset \footnote{\href{https://www.idmt.fraunhofer.de/en/business_units/m2d/smt/audio_effects.html}{https://www.idmt.fraunhofer.de/en/business\_units/m2d/smt/audio\_effects.html}} includes monophonic (624 single notes) and polyphonic (420 intervals and chords) recordings (wav - 44.1kHz, 16bit, mono) from 2 different electric guitars, each with two pick-up settings and up to 3 plucking styles. The monophonic recordings cover the common pitch range of a 6-string electric guitar, and the polyphonic samples were obtained mixing single notes recordings to generate two-notes intervals and 3- or 4-notes chords. All samples are 2 seconds long. The monophonic recordings required removal of background noise before the note onset, which we obtained using a python script together with \textit{Librosa}'s \cite{mcfee2015librosa} onset detection function. To assemble our dataset we selected 13 overdrive, distortion and fuzz plug-ins (see Table \ref{table:plugins}) designed to emulate some of the most iconic and widely used analogue guitar effect pedals. All the plugins have 2 or 3 controls and, regardless of the specific name adopted by the designer, the controls can be identified by their processing function: Level, Gain, Tone/Equalisation. For training and testing purposes, 4 sub-datasets were generated, which will be referred to as Mono Discrete, Poly Discrete, Mono Continuous and Poly Continuous. The first two subsets (Mono Discrete, Poly Discrete) use a discrete set of combinations selected as the most common and representative settings a person might use: Gain = [0.0, 0.1, 0.2, 0.5, 0.8, 1.0], Tone/Eq = [0.0, 0.2, 0.5, 0.8, 1.0]. Also, since the Level control has no effect on the output timbre it was set to 1.0 for every combination. A summary of the controls and settings is shown in Table \ref{table:settings}. Most plugins do not include Gain values below 0.2 - this is because for such values there is no audible change between input and output or even a level attenuation (with no output for Gain = 0.0). Every monophonic and polyphonic sample was processed with all the combinations, generating a total of ${\sim}$200000 processed samples (${\sim}$120000 monophonic, ${\sim}$80000 polyphonic), for a total of about 110 hours. For the second two subsets (Mono Continuous, Poly Continuous), both unprocessed samples as well as settings' values were drawn from a uniform distribution. We generated 10000 random samples for each plugin, obtaining a total of 260000 samples (130000 monophonic, 130000 polyphonic), equivalent to about 140 hours. Settings values were limited to fall between the extremes shown in Table \ref{table:settings}. Generating these four subsets we aimed at gaining a deeper understanding about the generalisation capabilities of our models. The samples' were processed in MATLAB - making use of its VST plugin host features - and both unprocessed inputs and processed outputs were normalised to -6dBFS. \input{tables/table_plugins.tex} \input{tables/table_settings.tex} \section{Architecture} \label{sec:architecture} Our neural network architecture (Table \ref{table:architecture}) is based on a combination of 2 convolutional and 3 fully connected layers, with batch normalisation layers at each hidden level. Except for the output layers' size and activation functions, the same configuration was used to train networks on both effects classification and settings estimation. 
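Although the exact layer sizes are reported in Table \ref{table:architecture}, a minimal PyTorch sketch of this shared backbone may help clarify the structure. The channel counts, kernel sizes, hidden widths and output nonlinearities below are placeholder assumptions for illustration, not the values used in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn

class FxBackbone(nn.Module):
    # 2 convolutional + 3 fully connected layers, with batch normalisation
    # at each hidden level; layer sizes are illustrative placeholders.
    def __init__(self, n_frames=87, n_mels=128, n_out=13):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16),
            nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32),
            nn.ReLU(), nn.MaxPool2d(2),
        )
        flat = 32 * (n_frames // 4) * (n_mels // 4)
        self.fc = nn.Sequential(
            nn.Linear(flat, 256), nn.BatchNorm1d(256), nn.ReLU(),
            nn.Linear(256, 64), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Linear(64, n_out),  # 13 classes (FxNet) or 3 settings (SetNet)
        )

    def forward(self, x):
        # x: (batch, 1, n_frames, n_mels) mel power-spectrogram
        return self.fc(self.conv(x).flatten(1))
\end{verbatim}
The output layer is the only part that changes between tasks: for classification it feeds a softmax (implicitly, through the cross-entropy loss), while for settings estimation a small regression output is used instead.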
Four different networks - and training paradigms - were implemented:
\begin{itemize}
\item effects classification network (FxNet)
\item settings estimation network (SetNet)
\item multitask classification and estimation network (MultiNet) - where the 2 convolutional layers are shared.
\item settings estimation conditional network (SetNetCond) - where an extra embedding layer is added to condition the estimation on the effect class.
\end{itemize}
The loss functions adopted for the classification and estimation problems were, respectively, the cross-entropy loss and the mean square error (MSE). To evaluate the settings estimation networks, we also defined an accuracy metric for which a prediction is considered correct when the absolute error for every parameter is less than 0.1. For effects that do not have a Tone/Eq control, we represented the absence using a value of -1.0 for the prediction. The threshold of 0.1 was chosen to simplify the comparison of the networks' performance in different training settings and on different datasets. The value is based on the authors' experience and informal listening tests during the dataset creation phase. However, parameters' sensitivity varies between effects and controls, and differences in absolute value often do not relate linearly with perceptual differences. To overcome these limitations, we also rely on \gls{mae} and \gls{rmse} to evaluate our models. As input features to all our networks we used mel power-spectrograms, extracted from audio in 23 ms Hann windows with 50\% overlap. In total, 128 mel-bands were used in the 0–22,050 Hz range. For a given 2 s audio input, the feature extraction produced a T x 128 output (T = 87). These features are motivated by human auditory perception, and are a common choice in acoustic scene classification \cite{abesser2020review}. Due to the good performance of our models, we did not deem it important for this study to test other features, though this could be worth exploring in the future.
\input{tables/table_architecture.tex}
\section{Experiments} \label{sec:experiments}
Some preliminary experiments were conducted to obtain baseline performances for the settings estimation problem and to compare the results when using a multitask approach or a conditional network. The literature shows how multitask learning can be effective at solving related tasks \cite{ruder2017overview} and more efficient than training several networks. In the multitask paradigm, the network was trained to classify a sample and estimate its settings at the same time. In the conditional paradigm the networks were trained in "series", with the classification network (FxNet) used to condition the settings estimation network (SetNetCond). The experiments were conducted on the Mono Discrete dataset. All networks were trained for 50 epochs, which resulted in the following test accuracy:
\begin{itemize}
\item SetNet = 40.3\%
\item MultiNet = 40.88\% (87.0\% classification accuracy, 44.6\% estimation accuracy)
\item FxNet + SetNetCond = 57.3\% (89.7\% classification accuracy, 60.7\% estimation accuracy)
\end{itemize}
The results show no appreciable difference in effect classification accuracy between the multitask and the conditional paradigms, but do show an impact on the settings estimation accuracy, with the conditional network performing better. Further experiments were therefore centred on the analysis of the classification network (FxNet) and the conditional estimation network (SetNetCond) when trained/tested on the 4 different sub-datasets.
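To make the conditional ("series") paradigm concrete, the conditioning step can be sketched as follows. The embedding size, feature dimension and output nonlinearity are assumptions for illustration, not the exact configuration of SetNetCond.
\begin{verbatim}
import torch
import torch.nn as nn

class ConditionalSettingsHead(nn.Module):
    # Settings estimation conditioned on the effect class via an embedding.
    def __init__(self, n_classes=13, n_settings=3, emb_dim=16, feat_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(n_classes, emb_dim)
        self.head = nn.Sequential(
            nn.Linear(feat_dim + emb_dim, 128), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Linear(128, n_settings), nn.Sigmoid(),  # settings in [0, 1]
        )

    def forward(self, features, fx_class):
        # features: (batch, feat_dim) from the convolutional front end
        # fx_class:  (batch,) integer effect labels, either ground truth
        #            or FxNet's argmax prediction
        z = torch.cat([features, self.embedding(fx_class)], dim=1)
        return self.head(z)
\end{verbatim}
At test time the conditioning class can be taken from FxNet's prediction; in the settings-estimation results reported below, the ground-truth class is used.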
In the following section we illustrate the results of training the networks for 100 epochs with an early stop when the validation accuracy sees no improvement for 15 epochs. For weights update we opted for the Adam optimiser \cite{kingma2014adam} with a fixed learning rate of 0.001. \section{Results} \label{sec:results} \subsection{Effects Classification} Table \ref{table:fxnet_acc} shows the accuracy results for the effect classification problem. Note that the test accuracy is higher for networks trained on continuous datasets, respectively 90.9\% for the Mono Continuous dataset and 91.4\% for the Poly Continuous dataset. At the same time, networks trained on discrete datasets performed better when tested on continuous ones than the opposite condition (networks trained on continuous and tested on discrete datasets): 83.1\% vs 81.3\% for monophonic samples and 89.4\% vs 84.1\% for polyphonic ones. \input{tables/table_fxnet_acc.tex} By analysing the confusion matrices, we can gain a better understanding about the networks' performance as well as the challenges behind the classification. Figure \ref{fig:fig1} shows the details for networks trained on discrete datasets. About 10\% of the errors in both datasets are due to the misclassification between 808 and TS9. The plugins are emulations of two overdrive effects from the same manufacturer (Ibanez TS808 and TS9). The two effects are supposed to have similar gain and frequency response, but, upon studying the circuits schematics, we noticed how there is actually no difference between the two. Therefore - assuming the plugins are faithful models of the analogue circuits - it is plausible that our network would confuse samples from either effects. An explanation for the misclassification imbalance between monophonic and polyphonic samples might be related to the training procedure. We used batches of 100 samples, randomly selected across the whole dataset and without control for batch to dataset ratio over the 13 classes. Assuming identical plugins, the network will tend to classify samples from either as belonging to the class that was seen first or more during training. Similar observations are valid for errors in classifying OD1 and SD1. Again, the plugins are emulations of effect pedals from the same manufacturer, and in their original analogue version use very similar designs. The two share the same clipping circuit; although, while the SD1 includes a tone control that combines a treble boosting first order shelving filter with a first order lowpass, the OD1 has no tone control and a fixed first order lowpass. Analysing the classification errors for the Mono Discrete dataset, we noticed how, of the 345 times the OD1 is classified as SD1: 31\% of the times is when the Gain control is set to 0.2, and another 31\% when Gain = 0.5. Ignoring the specific note being played, this result might be consequence of low gain settings, where the spectral differences might be too small. On the other hand, the SD1 is confused for the OD1 88 times, and all cases are from samples where the Tone is set to 0, 0.2 or 0.5. For low Tone values, the spectral differences might be unnoticeable, most of the high frequency harmonics might be filtered. In this case we did not observe correlation with the Gain control values. 
\begin{figure*} \hspace{-0.4cm} \subfloat{\includegraphics[width=.515\linewidth]{figures/fxnet_cm_MD_MD.eps}} \subfloat{\includegraphics[width=.515\linewidth]{figures/fxnet_cm_PD_PD.eps}} \caption{confusion matrices for discrete datasets} \label{fig:fig1} \end{figure*} Another interesting example, more noticeable for the Mono Discrete dataset, is the misclassification of the DPL as RAT. In this case, we are referring to effects from different manufacturers which do share some circuit design choices. Although the DPL does not have a tone control, the two circuits use similar clipping stages and the same maximum gain. But, they differ in the filtering after the clipping section and in the type of clipping diodes: germanium in the DPL, silicon in the RAT; with the first type determining a softer clipping. Looking at the results we noticed how 72\% of the times the DPL is classified as RAT, the Gain control was set to 1.0. On the other hand, 60\% of the times the RAT was classified as DPL, the Gain was set to 0.5 and the Tone (a first order lowpass) to 0 (no high frequency attenuation). This might be related to the clipping diodes, a low gain with silicon diodes might be comparable to the softer clipping of germanium diodes. It is also relevant to observe that 808, TS9, MGS and SD1, despite being based on the same circuit, with similar clipping sections and tone controls, are almost never confused. To analyse the performance of our classifiers trained on continuous datasets and tested on discrete ones, we refer to Figure \ref{fig:fig2}. It can be noticed how the errors are similar to the previous cases, although, a major impact on the performance is due to the misclassification of BD2. In this case, we could not identify correlations between the different circuit designs and/or the controls values. \begin{figure*} \hspace{-0.4cm} \subfloat{\includegraphics[width=.515\linewidth]{figures/fxnet_cm_MC_MD.eps}} \subfloat{\includegraphics[width=.515\linewidth]{figures/fxnet_cm_PC_PD.eps}} \caption{confusion matrices for test on discrete datasets} \label{fig:fig2} \end{figure*} \subsection{Settings Estimation} In this section we present the results for the settings estimation problem using the conditional network (SetNetCond) conditioned on the effect class ground truth. Table \ref{table:setnet_acc} shows the accuracy results, where all settings are estimated with an error below 0.1. Networks trained and tested on polyphonic datasets are the best performing, probably due to richer information content of the spectra with respect to monophonic samples. For both monophonic and polyphonic samples, the networks trained and tested on continuous settings reach higher accuracy than their counterparts trained and tested on discrete settings. This is somewhat surprising since the estimation of discrete values was expected to be a simpler problem to solve. Further insights are offered by Table \ref{table:setnet_mae_rmse}, where we show mean absolute error (MAE) and root mean square error (RMSE) for the different training and testing configurations. In the majority of cases (12 out of 16) the MAE is below 0.05 and for all cases except one, the RMSE is below 0.1. We obtained the lowest errors when training and testing on polyphonic samples. The highest errors result from training on monophonic continuous and discrete samples and testing respectively on discrete and continuous ones. 
The table also includes the average errors for the Gain and Tone/Eq controls; the two are comparable, which shows that they present similar difficulty to the estimation. Figure \ref{fig:fig3} shows the box-plots for the best and worst cases highlighted in Table \ref{table:setnet_mae_rmse}. Moreover, we wanted to analyse the generalisation capabilities of the networks trained on discrete settings, as well as the performance of networks trained on continuous settings on discrete ones. Figure \ref{fig:fig4} shows the scatter plots for the network trained on the Mono Discrete dataset when tested on the Mono Continuous dataset. For the Gain estimation we notice a bias towards the discrete values seen during training; also, the network manages to interpolate and estimate continuous values, but seems to do it satisfactorily only for Gain values above 0.5. The Tone estimation seems to be less affected, and the output approximates fairly well the input uniform distribution. An explanation for this difference might reside in the fact that for the Tone control we chose a balanced set of discrete values ([0.0, 0.2, 0.5, 0.8, 1.0]). For the Gain control this was not possible. As explained in section \ref{sec:dataset}, most distortion effects do not produce any perceivable timbral difference for low gain values and some introduce attenuation. The scatter plots for the network trained on the Mono Continuous and tested on the Mono Discrete datasets (Figure \ref{fig:fig5}) show some interesting behaviour. Tested on a discrete dataset, the network fails to maintain the performance at the same levels as on the continuous one. In particular, we notice a higher variance, especially for Gain = [0.2, 0.5] and for most of the Tone values. Also, a skew in the estimations' distributions is visible. To analyse these in more details, Figure \ref{fig:fig6} and \ref{fig:fig7} show the mean error and skew as a function of Gain and Tone for those networks trained on discrete datasets and tested on continuous ones and vice-versa. Except for the case of Gain values below 0.1 in Figure \ref{fig:fig7}a (train on Poly Discrete and test on Poly Continuous), all mean errors for tests on opposite datasets are below 0.1. With the same exception, the mean errors for training on discrete datasets and test on continuous ones are lower than 0.05 (Figures \ref{fig:fig6}a and \ref{fig:fig7}a). The skew for training on discrete and test on continuous datasets (Figures \ref{fig:fig6}b and \ref{fig:fig7}b) is in many cases lower than the inverse conditions (Figures \ref{fig:fig6}b, \ref{fig:fig6}d and \ref{fig:fig7}b, \ref{fig:fig7}d). To conclude, even if the networks perform better on continuous datasets, there seems to be an argument for considering discrete values. A dataset which uses discrete values for the independent variables is easier to design, control, analyse and to extend or reduce. In our specific application, an estimation error within 0.1 of the target value or some bias is acceptable. Also, the use of balanced values or a form of regularisation in the cost function could help the interpolation in case of estimation on continuous values or unseen data. 
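For reference, the accuracy criterion and the error measures used in this section can be written compactly as below. This is a minimal sketch; array shapes and variable names are assumptions, with a missing Tone/Eq control encoded as -1.0 as described earlier.
\begin{verbatim}
import numpy as np

def settings_metrics(pred, target, threshold=0.1):
    # pred, target: (N, n_settings) arrays of normalised control values;
    # a missing Tone/Eq control is encoded as -1.0 in both arrays.
    err = np.abs(pred - target)
    accuracy = np.mean(np.all(err < threshold, axis=1))  # all params within 0.1
    mae = np.mean(err)
    rmse = np.sqrt(np.mean(err ** 2))
    return accuracy, mae, rmse
\end{verbatim}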
\input{tables/table_setnet_acc.tex} \input{tables/table_setnet_mae_rmse.tex} \begin{figure*} \hspace{-.3cm} \subfloat{\includegraphics[width=.255\linewidth]{figures/setnetcond_bp_PD_PD.eps}} \subfloat{\includegraphics[width=.255\linewidth]{figures/setnetcond_bp_PC_PC.eps}} \subfloat{\includegraphics[width=.255\linewidth]{figures/setnetcond_bp_MD_MC.eps}} \subfloat{\includegraphics[width=.255\linewidth]{figures/setnetcond_bp_MC_MD.eps}} \caption{settings estimation errors} \label{fig:fig3} \end{figure*} \begin{figure*}[b!] \centering \subfloat{\includegraphics[width=.4\linewidth]{figures/setnetcond_sp_gain_MD_MC.png}} \subfloat{\includegraphics[width=.4\linewidth]{figures/setnetcond_sp_tone_MD_MC.png}} \caption{scatter plots for settings estimation on continuous datasets} \label{fig:fig4} \centering \subfloat{\includegraphics[width=.4\linewidth]{figures/setnetcond_sp_gain_MC_MD.png}} \subfloat{\includegraphics[width=.4\linewidth]{figures/setnetcond_sp_tone_MC_MD.png}} \caption{scatter plots for settings estimation on discrete datasets} \label{fig:fig5} \end{figure*} \begin{figure*} \subfloat[]{\includegraphics[width=.26\linewidth]{figures/setnetcond_me_MD_MC.eps}} \subfloat[]{\includegraphics[width=.245\linewidth]{figures/setnetcond_sk_MD_MC.eps}} \subfloat[]{\includegraphics[width=.26\linewidth]{figures/setnetcond_me_MC_MD.eps}} \subfloat[]{\includegraphics[width=.245\linewidth]{figures/setnetcond_sk_MC_MD.eps}} \caption{mean error and skew for networks trained/tested on monophonic samples} \label{fig:fig6} \subfloat{\includegraphics[width=.26\linewidth]{figures/setnetcond_me_PD_PC.eps}} \subfloat{\includegraphics[width=.245\linewidth]{figures/setnetcond_sk_PD_PC.eps}} \subfloat{\includegraphics[width=.26\linewidth]{figures/setnetcond_me_PC_PD.eps}} \subfloat{\includegraphics[width=.245\linewidth]{figures/setnetcond_sk_PC_PD.eps}} \caption{mean error and skew for networks trained/tested on polyphonic samples} \label{fig:fig7} \end{figure*} \section{Conclusions and Future Work} \label{sec:conclusion} In this paper we introduced a CNN-based model for the classification of guitar effects and estimation of their parameters. Using electric guitar recordings of single notes, 2-notes intervals and 3- or 4- notes chords; we were able to classify with high accuracy samples processed with 13 overdrive, distortion and fuzz plugins. The plugins we used are emulations of some of the most famous and commonly used analogue guitar effect pedals. We were also able to estimate the parameters settings with high precision. For this work, we generated a dataset of processed monophonic and polyphonic samples; the plugins' settings were either combinations of discrete values commonly used by musicians or continuous values randomly extracted from a uniform distribution. This allowed us to gain further understanding about the performance on datasets generated using discrete or continuous independent variables. We described the benefits of discrete datasets and postulated about the possibility of obtaining equally high performances. To the best of our knowledge, our work is the first attempt at both classification and parameter estimation of specific guitar effect units. Being able to extrapolate information about the specific signal chain used in a recording could benefit music production; artists, engineers and producers often rely on "reference" sounds from other recordings. 
This knowledge could also be applied to automatic music transcription \cite{benetos2013automatic}, music education \cite{dittmar2012music} or musicology \cite{abesser2016score}. Musicians, genres and styles are identified by and associated with specific sounds and effects; this could be applied to intelligent music search and recommendation systems. Future work might branch out in different directions: our model could be compared with human performance; the architecture and datasets could be extended for higher accuracy and better generalisation on unseen data; and the research could be extended to other nonlinear units, different categories of effects, or tested on other guitars or instruments. \section{Links} \label{sec:repos} Code: \begin{itemize} \item \href{https://github.com/mcomunita/gfx\_classifier}{https://github.com/mcomunita/gfx\_classifier} \item \href{https://github.com/mcomunita/gfx_classifier\_models\_and_results}{https://github.com/mcomunita/gfx\_classifier\_models\_and\_results} \end{itemize} Dataset: \begin{itemize} \item Mono Cont. - \href{https://doi.org/10.5281/zenodo.4296040}{https://doi.org/10.5281/zenodo.4296040} \item Mono Disc. - \href{https://doi.org/10.5281/zenodo.4298000}{https://doi.org/10.5281/zenodo.4298000} \item Poly Cont. - \href{https://doi.org/10.5281/zenodo.4298017}{https://doi.org/10.5281/zenodo.4298017} \item Poly Disc. - \href{https://doi.org/10.5281/zenodo.4298025}{https://doi.org/10.5281/zenodo.4298025} \end{itemize} \section{Acknowledgement} \label{sec:acknowledgement} This study has been funded by UKRI and EPSRC as part of the “UKRI Centre for Doctoral Training in Artificial Intelligence and Music”, under grant EP/S022694/1. \bibliographystyle{unsrt}
\section{Introduction} Over the past century, the notion of symmetry has become an indispensable feature of theoretical physics. It no longer merely facilitates the simplification of a difficult calculation, nor lurks behind the towering conservation laws of Newton's time, but rather unveils fundamental features of the universe around us, describes how its basic constituents interact, and places deep constraints on the kinds of theories that are even possible. In light of this, one natural approach to contemplating nature would be to take symmetries {seriously}. In other words, one may aim to reformulate as much of our understanding of nature in the language most natural for describing symmetries. More specifically, what we will be concerned with here is the notion of a \textit{relativity symmetry}. Using the archetypal example handed down to us by Einstein, we hope to illustrate that not only does a relativity symmetry relate frames of reference in which the laws of physics look the same, but also captures the structure of physical spacetime\footnote{ Throughout this note we will reserve ``spacetime'' specifically for the notion of physical spacetime underlying Einsteinian special relativity, while using ``space'' to cover the corresponding notion in general. Spacetime in the Newtonian setting, or space-time used instead whenever admissible, refers to the sum of the mathematically independent Newtonian space and time. } itself as well as much of the theory of particle dynamics on it. Moreover, as we will detail below, formulating our theory in these terms provides one with a natural language in which approximations to the theory in various limits can be described. The term relativity symmetry, though much like introduced into physics by Einstein, is in fact a valid notion for Newtonian mechanics too. It just has a different relativity symmetry, the Galilean symmetry. Newtonian mechanics can be called the theory of Galilean relativity. The symmetry for the Einstein theory is usually taken as the Poincar\'e symmetry. Note that we neglect the consideration of all discrete symmetries like the parity transform in this article. We talk about the relativity symmetries without such symmetries included. In order to parse the details of this fascinating tale, we begin with an examination of exactly how one can naturally pass from the (classical) relativity symmetry group/algebra to its corresponding geometric counterparts such as the model of the spacetime and the phase space for a particle in Section II, for the case of the Poincar\'e symmetry $ISO(1,3)$. The full dynamical theory, for spin zero case, under the Hamiltonian formulation is to be presented from the symmetry perspective in Section III. From there, in Section IV, we provide a brief introduction to the language of approximating a symmetry: contractions of Lie algebras/groups, and their representations. This is augmented by a continuation of the exploration of special relativity, and in particular the way in which the Newtonian limit is to be understood within this context, before giving some concluding remarks in the last section. Finally, we have added an appendix (Appendix~A) discussing what is, in essence, the reverse procedure of what is described in this paper: symmetry deformations. This is another fascinating facet of the overall story, which provides one with a reasonable procedure for determining what sort of theories a given theory may be an approximation of. 
We could have discovered Einstein's Theory of Special Relativity from the deformation of Galilean relativity if we had the idea \cite{D,Min}. Historically, however, careful studies of symmetries and their role in physics barely started in the time of Einstein, and only began to pick up momentum with the development of quantum mechanics in the 1930s. The basic material treated here is, in our opinion, important for a good appreciation of the physical theories, yet perhaps not as well-known as it should be. The key parts of the presentation have, apparently, not been otherwise explicitly available in the literature, not to say the full story in one place. Hence, we make this effort to present it, aiming at making it accessible even to students with limited background. For the latter purpose, we include an extra appendix ({Appendix}~B) giving a physicist's sketch of necessary group theory background, to make the article more self-contained. We see the presentation as useful to physicists in a couple of ways. Firstly, for the existing theories, it gives a coherent and systematic way to organize all aspects of the theories within one framework, highlighting their mutual relationship. That can improve our understanding of all aspects of the theoretical structure. Second, a particular way to look at a theory, even if not in any sense superior to the other ways, may provide a specific channel to go to theories beyond. The authors' attention on the subject matter is closely connected to our recent studies essentially on the exact parallel constructions for the theories of quantum mechanics, including retrieving of the `nonrelativistic' from the `relativistic' as well as classical from quantum \cite{070,087}. The Lorentz covariant theory of quantum mechanics resulted is new, with a quantum notion of Minkowski metric. A better understanding of that actually gives also a notion of Newtonian mass and a new insight into the Einstein on-shell mass condition, to be reported in a forthcoming article \cite{094}. The symmetry for the `nonrelativistic' quantum mechanics is essentially a $U(1)$ central extension \cite{Gil,gq} of the Galilean symmetry, or the one with the Newtonian time translation taken out -- we called that $H_{\!\ssc R}(3)$ as the Heisenberg-Weyl symmetry with three noncommuting $X$-$P$ pairs supplemented by the $SO(3)$ \cite{070,094}. For the `relativistic' case, we found it necessary to go beyond the Poincar\'e symmetry to the larger $H_{\!\ssc R}(1,3)$ \cite{087,094}. The last reference also addresses nonzero spin and composite systems. Details of those are beyond the scope of the present article. All that illustrate well the value of looking at the well known theories from a somewhat different point of view seriously, as done here. \section{From Relativity to Physical Spacetime and the Particle Phase Space} A conventional path to the formulation of a physical theory is to start with a certain collection of assumptions about the geometry of the physical spacetime objects in this theory occupy. That is to say, the theory starts with taking a mathematical model for the intuitive notions of the physical space and time. After all, dynamics means the study of motion, which is basically the change of position with respect to time. 
In Newtonian mechanics, Newton himself followed the basic definitions in his \textit{Principia} with his Scholium arguing for Euclidean space coupled with absolute time as the foundation of the description of the physical world; the study of special relativity may be introduced via Minkowski spacetime; general relativity typically assumes the universe is a (torsion-free) Lorentzian manifold, and the list goes on. It is then \textit{from} this `foundation' that one infers the symmetries present in the model. Note that in Newton's time, Euclidean geometry is really the only geometry known. What we hope to convince the reader of in this section is that the opposite path can be just as fruitful, if not more so. In particular, we will \textit{start} with the relevant (relativity) symmetry, given by a Lie group (and its associated Lie algebra), and couple it to the representation that naturally captures the underlying geometry. Once the basic definitions are in place, we use special relativity as an illustrative example of this procedure. The approach will be extended to present the full theory of particle dynamics in the next section. Note that the model for the physical space or spacetime is closely connected to the theory of particle dynamics on it. First of all, Newton introduced the notion of particle as point-mass to serve as the ideal physical object which has a completely unambiguous position in his model of the physical space. Conversely, in a theory of particle dynamics, there is no other physical notion of the physical space itself rather than the collection of possible positions for a free particle (or the center of mass for a closed system of particles which, however, have to be defined based on the full particle theory, for example the three Newton's Laws). It is and has to be the configuration space for the free particle. \subsection{The Coset Space Representation} In his seminal paper \textit{Raum und Zeit} \cite{Min}, Hermann Minkowski famously said, \begin{quote}The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality. \end{quote} In this statement, Minkowski reveals something of tremendous importance: the idea of Lorentz symmetry as the right transformations sending inertial frames to inertial frames directly alter the model geometry of the physical space and time, or spacetime, from the Newtonian theory. The model for the physical spacetime itself depends on the explicit form of the \textit{Principle of Relativity} being postulated, {\em i.e.} the relativity symmetry of the theory. In this subsection, we will take this realization to heart and explore precisely how one goes about recovering the model for the physical spacetime naturally associated with a given relativity structure for the classical theories. Consider a Lie group $G$, with associated Lie algebra $\mathfrak{g}$, which we take as capturing the finite and infinitesimal transformations, respectively, that we can perform on a given physical system without changing the form of the physical laws. In other words, those transformations which take a given (inertial) frame of reference into another equally valid frame. $G$ is then the relativity symmetry, or the symmetry group of the spacetime model of the theory of particle dynamics. 
The use of the word ``transformation'' above already hints at the need for a representation-theoretic perspective of what, exactly, the relativity symmetry encodes. Indeed, as it stands the mathematical group $G$ is merely an abstract collection of symbols obeying certain rules -- a representation capturing the group structure is required to illuminate what these rules really mean in terms of \textit{physical transformations}, which are mathematically transformations on a vector space. The best examples of the latter are our Minkowski spacetime and the Newtonian space-time. The first, perhaps prosaic, step in this direction is simply to use the group multiplication, thought of as a (left) action of $G$ on itself: \[g'\cdot g \mapsto g'g.\] In other words, we can try to imagine that what we mean by a location/position in the ``physical spacetime'' is nothing more than an element $g\in G$, and that a transformation is then simply furnished directly by the group operation. We have at hand the Poincar\'e symmetry, denoted by $ISO(1,3)$, consisting of the rotations and translations conventionally defined as isometries of the Minkowski spacetime. However, to conform completely to the perspective of taking the symmetry group as the starting point, we are going to simply see the group as the Lie group obtained from the corresponding Lie algebra $\mathfrak{iso}(1,3)$, presented in Eq.~(\ref{PS}) below, as an abstract mathematical object. We can take each element of the pure translations as a point in the Minkowski spacetime, which is equivalent to saying that each point is to be identified as where you get to after a particular spacetime translation from the origin. Note that while the rotations take any point other than the origin to a different point, they do not move the origin. From the abstract mathematical point of view, what we have described here is called a coset space. The Minkowski spacetime is a coset space of the Poincar\'e symmetry. From here we consider the coset space $M:=G/H$, defined mathematically as the quotient of the group $G$ by a closed subgroup $H<G$. A coset $gH$ containing the element $g$ is the collection of all group elements of the form $gh$, where $h$ is any element in $H$. Note that \[ g'H = g(g^{-1}g' H) =gH \qquad \mbox{for} \quad g^{-1}g' \in H \;. \] Observe that the above action descends to an action of the full group $G$ on $M$ in an obvious way as \[g'\cdot (gH)=(g'g)H \;.\] It is more convenient to use the Lie algebra notation. We write a group element in terms of \[g=\exp( a^i X_i)\;,\] where the $X_i$ are the generators and the $a^i$ real parameters (note that, as is typical, we are using the Einstein summation convention). $X=a^i X_i$, as a linear combination of the generators taken as basis elements, is an element of the Lie algebra $\mathfrak{g}$. Each coset can then be conveniently identified with an element \[ \exp(s^j Y_j) \] where the $Y_j$ are the generators among the $X_i$ set which serve as a basis for the vector subspace $\mathfrak{p}$ of $\mathfrak{g}$ complementary to the subalgebra $\mathfrak{h}$ for $H$, {\em i.e.} $\mathfrak{g}=\mathfrak{h} +\mathfrak{p}$ as a vector space. The real numbers $s^j$ can be seen as coordinates for each coset as a point in the coset space (the space of the cosets), and the group action as symmetry transformations on the coset space, or equivalently as reference frame transformations. Let us look at such a transformation in the infinitesimal limit.
We are going to need a specific form of the {Baker-Campbell-Hausdorff} (BCH) series for the case of products between a coset representative $\exp(Y)$ and an infinitesimal element $\exp(\bar{X})$. In particular, the result \bea \label{bch2} \exp(\bar{X})\exp(Y) = \exp\!\left( Y - [Y,\bar{X}] \right) \exp(\bar{X})\;, \eea can be easily checked to hold, to first order in the infinitesimal $\bar{X}$, for an arbitrary $Y$; no similarly simple expression can be found for two operators/matrices with a generic commutation relation when neither is infinitesimal. \subsection{From the Poincar\'{e} Algebra to Minkowski Space} The protagonists of our story are the Poincar\'{e} group and algebra $ISO(1,3)$ and $\mathfrak{iso}(1,3)$. These describe the finite and infinitesimal transformations, respectively, that turn one (relativistic) inertial frame into another, i.e. the symmetry which puts the ``relativity'' in Einstein's special relativity. Recall that the Lie algebra $\mathfrak{iso}(1,3)$ possesses ten generators, which are split up into the six generators of rotations among the spacetime directions, $J_{\mu\nu}$ (where $0\leq\mu<\nu\leq 3$), and the four generators of translations along the four directions\footnote{ The conventional description of $\mathfrak{iso}(1,3)$ uses instead the ``momentum'' $P_\mu$ as generators, which are related to the ``energy'' generators used here by $E_\mu=cP_\mu$. As we will see in the following sections, $E_\mu$ are the more natural choice from the perspective of symmetry contractions.} $E_\mu$, and which satisfy the following commutation relations\footnote{ In the mathematicians' notation, the commutator is really the Lie product defining the real Lie algebra, for which the set of generators is a basis, written more naturally without all the $i\hbar$ factors. The physicists' version amounts to rescaling all the generators by the $i\hbar$ factor: the mathematically unreasonable $i$ makes the generators correspond (in a unitary representation) to physical observables, and $\hbar$ gives them the proper (SI) units. Strictly speaking, we should be thinking of $-\frac{i}{\hbar} E_\mu $ and $-\frac{i}{\hbar} J_{\mu\nu}$ as our basis vectors, {\em i.e.} the true generators of the real Lie algebra, whose elements are real linear combinations of them with parameters carrying the proper physical dimensions.}: \begin{align} &[J_{\mu\nu},J_{\lambda\rho}]= - i\hbar(\eta_{\nu\lambda}J_{\mu\rho}-\eta_{\mu\lambda}J_{\nu\rho} +\eta_{\mu\rho}J_{\nu\lambda}-\eta_{\nu\rho}J_{\mu\lambda})\;, \nonumber \\ [J_{\mu\nu},E_{\rho}]= - i\hbar(\eta_{\nu\rho}E_{\mu}-\eta_{\mu\rho}E_{\nu})\;, \qquad [E_{\mu},E_{\nu}]=0 \;, \label{PS} \end{align} with $J_{\mu\nu}$ for $\mu>\nu$ to be interpreted as $-J_{\nu\mu}$, and we use $\eta_{\mu\nu}=\{-1,1,1,1\}$ as the Minkowski metric. For easy reference, we take a notation convention which is essentially the same as that of the popular textbook by Tung \cite{T}, besides using $E_\mu$ and an explicit $\hbar$. It is intuitively clear (and easy to check) that the subset $\mathfrak{so}(1,3)$ generated by the $J_{\mu\nu}$ generators forms a subalgebra of $\mathfrak{iso}(1,3)$ -- the subalgebra of spacetime rotations called Lorentz transformations. Thus, if we are interested in the coset representation introduced in the previous section, the candidate for our Minkowski spacetime should be the coset space $\mathfrak{M}:=ISO(1,3)/SO(1,3)$.
We write a generic element $X\in\mathfrak{iso}(1,3)$ and $Y\in\mathfrak{iso}(1,3)- \mathfrak{so}(1,3)$ (as the complementary space $\mathfrak{p}$) as \[ X= -\frac{i}{\hbar} \left( \frac{1}{2}\omega^{\mu\nu}J_{\mu\nu}+b^{\mu}E_{\mu} \right) \qquad\textnormal{and} \qquad Y= -\frac{i}{\hbar}t^{\rho}E_{\rho} \;, \] respectively. Note that we have put in a factor of $\frac{1}{2}$ in the sum $\om^{\mu\nu}J_{\mu\nu}$, with $\om^{\mu\nu}=-\om^{\nu\mu}$, to lift the $\mu<\nu$ condition for convenience. Distinct elements in the form $Y$ are in one-to-one correspondence with the distinct cosets. Next, as we saw in the preceding discussion, we will pass from this to an action on the corresponding coset space $\mathfrak{M}$ (which, as we will see below, is isomorphic to Minkowski space, $\mathds{R}^{1,3}$). Consider an infinitesimal transformation given in the group notation as $g'=\exp(\bar{X}_{\!\ssc H} + \bar{Y}) = 1 +\bar{X}_{\!\ssc H} + \bar{Y} =\exp(\bar{Y})\exp(\bar{X}_{\!\ssc H})$, with $\bar{X}_{\!\ssc H} = -\frac{i}{2\hbar}\bar\om^{\mu\nu}J_{\mu\nu}$ and $\bar{Y} = -\frac{i}{\hbar}\bar{t}^{\mu} E_{\mu}$. We first check that \begin{align*} [\bar{X}_{\!\ssc H},Y] &= -\frac{1}{2\hbar^2}\bar\omega^{\mu\nu}t^{\rho}[J_{\mu\nu},E_{\rho}]\\ &= \frac{i}{2\hbar} t^{\rho}(\bar\omega^{\mu\nu} \eta_{\nu\rho}E_{\mu}+\bar\omega^{\nu\mu}\eta_{\mu\rho}E_{\nu}) \\ &= \frac{i}{\hbar} \bar\omega^{\mu}_{\,\rho}t^{\rho}E_{\mu}\;, \end{align*} and $[Y, [\bar{X}_{\!\ssc H},Y]] =0$. Applying our BCH formula (\ref{bch2}) for the case, we have \begin{align*} \exp(\bar{X}_{\!\ssc H})\exp(Y) &= \exp(Y-[Y,\bar{X}_{\!\ssc H}]) \exp(\bar{X}_{\!\ssc H}) \\ &= \exp( [\bar{X}_{\!\ssc H},Y]) \exp(Y) \exp(\bar{X}_{\!\ssc H}) \end{align*} as exact in the infinitesimal parameters in $\bar{X}_{\!\ssc H}$. Thus, the multiplication $g'\cdot(gSO(1,3))$ yields \begin{align*} & \exp\!\left( -\frac{i}{\hbar}\bar{t}^{\mu} E_{\mu} \!\right) \exp\!\left( -\frac{i}{2\hbar}\bar\om^{\mu\nu}J_{\mu\nu} \!\right) \exp\!\left( -\frac{i}{\hbar}t^{\rho}E_{\rho} \!\right) SO(1,3) \\ & =\exp\!\left( -\frac{i}{\hbar} \textcolor{red}{ \bar{t}^{\mu} E_{\mu} } \!\right) \exp\!\left( \frac{i}{\hbar} \textcolor{red}{ \bar\omega^{\mu}_{\,\rho}t^{\rho}E_{\mu} } \!\right) \exp\!\left( -\frac{i}{\hbar} \textcolor{blue}{t^{\rho}E_{\rho} }\!\right) \textcolor{green}{ \exp\!\left( -\frac{i}{2\hbar}\bar\om^{\mu\nu}J_{\mu\nu} \!\right) } SO(1,3) \\ &= \exp\!\left( \!-\frac{i}{\hbar}\bigg(\textcolor{blue} {\equalto{t^{\mu}E_{\mu}}{\textnormal{original } t^{\mu} \textnormal{ part}}} +\textcolor{red}{\equalto{( \bar{t}^{\mu} -\bar\omega^{\mu}_{\,\rho}t^{\rho} ) E_{\mu}}{\textnormal{infinitesimal change}}} \bigg)\!\!\right) \textcolor{green}{SO(1,3)} \;, \end{align*} which is the resulted coset of \[ \exp\!\left( -\frac{i}{\hbar} (t^{\mu} +dt^{\mu}) E_{\mu} \!\right) {SO(1,3)} \] where the infinitesimal change in coordinate $t^{\mu}$ is given by $dt^{\mu}= -\bar\omega^{\mu}_{\;\nu}t^{\nu} +\bar{t}^{\mu}$. 
The equation for $dt^{\mu}$ just obtained can be seen as giving a representation of $\mathfrak{iso}(1,3)$ on $\mathfrak{M}$ by identifying the coset represented by $Y$ with the column vector $(t^{\mu},1)^{\ssc T}$ and $\bar{X}= \bar{X}_{\!\ssc H}+\bar{Y}$ with the matrix:
\[
\bar{X}= -\frac{i}{\hbar}\Big(\frac{1}{2}\,\bar\omega^{\mu\nu} J_{\mu\nu} +\bar{t}^{\mu} E_{\mu}\Big)
\xrightarrow{\textnormal{represented by}}
\left( \begin{array}{cc} -\bar\omega^{\mu}_{\;\nu} & \bar{t}^{\mu} \\ 0 & 0 \\ \end{array} \right)
\]
so that
\bea\label{t-trans}
\left( \begin{array}{c} dt^{\mu} \\ 0 \\ \end{array} \right)
= \left( \begin{array}{cc} -\bar\omega^{\mu}_{\;\nu} & \bar{t}^{\mu} \\ 0 & 0 \\ \end{array} \right)
\left( \begin{array}{c} t^{\nu} \\ 1 \\ \end{array} \right)
=\left( \begin{array}{c} -\bar\omega^{\mu}_{\;\nu}t^{\nu} +\bar{t}^{\mu} \\ 0 \\ \end{array} \right)\;.
\eea
We have derived above the representation of the Lie algebra $\mathfrak{iso}(1,3)$ for the infinitesimal transformations of the coset space $\mathfrak{M}$, which can obviously be seen as a vector space with $t^\mu$ as the four-vector. The elements of $\mathfrak{iso}(1,3)$ associated with the infinitesimal transformations with $\bar{t}^{\mu}=0$, {\em i.e.} elements of the Lorentz subalgebra $\mathfrak{so}(1,3)$, indeed exponentiate into an $SO(1,3)$ Lorentz transformation on $t^\mu$ as
\begin{align*}
\left( \begin{array}{cc} -\omega^{\mu}_{\;\nu} & 0 \\ 0 & 0 \end{array} \right)
&\xrightarrow{\quad\exp\quad}
\left( \begin{array}{cc} \Lambda^{\mu}_{\;\nu} & 0 \\ 0 & 1 \end{array} \right) \\
&\xrightarrow[\textnormal{the action}]{\textnormal{leads to}}
\left( \begin{array}{cc} \Lambda^{\mu}_{\;\nu} & 0 \\ 0 & 1 \end{array} \right)
\left( \begin{array}{c} t^{\nu} \\ 1 \end{array} \right)
=\left( \begin{array}{c} \Lambda^{\mu}_{\;\nu} t^{\nu}\\ 1 \end{array} \right)\;.
\end{align*}
Similarly, the infinitesimal translations exponentiate into the finite translations
\[
\exp \!\left( \begin{array}{cc} 0 & b^{\mu} \\ 0 & 0 \\ \end{array} \right)
= \left( \begin{array}{cc} \delta^{\mu}_{\nu} & B^{\mu} \\ 0 & 1 \end{array} \right) \;,
\]
with $B^{\mu}=b^{\mu}$, since the matrix in the exponent is nilpotent and the exponential series terminates after the linear term. In fact, the Poincar\'e symmetry is typically given in physics textbooks as the transformations
\[
x^\mu \rightarrow \Lambda^{\mu}_{\;\nu} x^\nu + A^{\mu} \;,
\]
from which one can obtain the same infinitesimal transformations, with $d (\Lambda^{\mu}_{\;\nu}) = (-\omega^{\mu}_{\;\nu})$ and $\frac{1}{c} dA^{\mu}$ similarly associated with $b^{\mu}$, switching from $x^\mu$ to our $t^\mu= \frac{1}{c} x^\mu$. That is actually defining a symmetry group through a representation of its generic element. Putting that in the matrix form, we have
\begin{align*}
\left( \begin{array}{c} \Lambda^{\mu}_{\;\nu} t^{\nu} + B^\mu\\ 1 \end{array} \right)
&= \left[ \left( \begin{array}{cc} \delta^{\mu}_{\rho} & B^{\mu} \\ 0 & 1 \end{array} \right)
\left( \begin{array}{cc} \Lambda^{\rho}_{\;\nu} & 0 \\ 0 & 1 \end{array} \right) \right]
\left( \begin{array}{c} t^{\nu} \\ 1 \end{array} \right) \\
&= \exp \!\left( \begin{array}{cc} 0 & b^{\mu} \\ 0 & 0 \\ \end{array} \right) \exp \!
\left( \begin{array}{cc} -\omega^{\rho}_{\;\nu} & 0 \\ 0 & 0 \end{array} \right) \left( \begin{array}{c} t^{\nu} \\ 1 \end{array} \right) \;, \end{align*} from which we can see the infinitesimal limit of the transformation matrix being \[ \left[ I + \left( \begin{array}{cc} 0 & b^{\mu} \\ 0 & 0 \\ \end{array} \right) \right] \left[ I + \left( \begin{array}{cc} -\omega^{\mu}_{\;\nu} & 0 \\ 0 & 0 \end{array} \right) \right] = I + \left( \begin{array}{cc} -\omega^{\mu}_{\;\nu} & b^{\mu} \\ 0 & 0 \end{array} \right) \;. \] In fact, we can think of each point $(t^\mu,1)^{\ssc T}$ in $\mathfrak{M}$ as being defined by the action of the above matrices on the coordinate origin $(0,1)^{\ssc T}$ by taking $B^\mu=t^\mu$. Indeed \bea \left(\begin{array}{c} t^{\mu} \\ 1 \end{array}\right) \equiv \left(\begin{array}{cc} \Lambda^{\mu}_{\,\nu} & t^{\mu} \\ 0 & 1 \end{array}\right) \left(\begin{array}{c} 0 \\ 1 \end{array}\right) = \left(\begin{array}{c} t^\mu\\ 1 \end{array}\right) \; ; \eea hence the $t^{\mu}$-space is essentially isomorphic to the collection of matrices of the form \[ \left(\begin{array}{cc} \Lambda^{\mu}_{\,\nu} & t^{\mu} \\ 0 & 1 \end{array}\right)\;. \] Then, each of the translational elements can be taken as the standard representative for the coset \[ \left(\begin{array}{cc} \delta^\mu_\nu & t^{\mu} \\ 0 & 1 \end{array}\right) SO(1,3) \;. \] The latter, therefore, describes a full coset and the vector space of all such cosets is isomorphic to that of the collection of all $e^{\left(-\frac{i}{\hbar} t^\mu E_\mu\right)} SO(1,3)$ from the abstract mathematical description we start with. When the Minkowski spacetime is taken as the starting point, it is a homogeneous space in the physical sense that every point in it is really much the same as another. Each can be taken as the origin on which we can put in a coordinate system fixing a frame of reference. The symmetry of it as a geometric space is caught in the mathematical definition of a homogeneous space as a space with a transitive group of symmetry, meaning every two points in it can be connected through the action of a group element. For a particular point like the origin, there is a subgroup of the symmetry that does not move it, which is called the little group. It is a mathematical theorem that the homogeneous space is isomorphic to the coset space of the symmetry group ``divided by" the little group. Our result of the Minkowski spacetime as $ISO(1,3)/SO(1,3)$, whether in terms of the $t^\mu$ or the $x^\mu$ coordinates, is just a case example. Indeed, using $t^\mu$ as the coset space coordinates is really no different from using $P_\mu$ as generators and $x^\mu$. This is because we can write Lorentz transformations as \bea x^{\prime\ssc 0} &=& \gamma ( x^{\ssc 0} + \beta_i x^i) \; \nonumber \\ x^{\prime i} &=& \gamma ( x^{i} + \beta^i x^{\ssc 0}) \;, \eea or equivalently as \bea t^{\prime\ssc 0} &=& \gamma ( t^{\ssc 0} + \beta_i t^i) \; \nonumber \\ t^{\prime i} &=& \gamma ( t^{i} + \beta^i t^{\ssc 0}) \;, \eea with $\beta_i=\tfrac{v_i}{c}$, $\beta^i=\tfrac{v^i}{c}$, and $\gamma=\frac{1}{\sqrt{1-\beta_i\beta^i}}$. Both of the above are equivalent to \bea t^{\prime} &=& \gamma ( t +\frac{\beta_i}{c} x^i) = \gamma ( t +\frac{ v_i}{c^2} x^i)\; \nonumber \\ x^{\prime i} &=& \gamma ( x^{i} +\beta^i ct) = \gamma ( x^{i} + v^i t)\;, \eea where $t\equiv t^{\ssc 0}$. In other words, $t^\mu$ and $x^\mu$ describe the same spacetime ``position'' four-vector, they are simply expressed in time and space units, respectively. 
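To make the matrix picture above fully concrete, the following numerical sketch (\texttt{numpy}/\texttt{scipy} again; the particular parameter values are arbitrary choices of ours) exponentiates one such generator and confirms that the upper-left $4\times4$ block is a genuine Lorentz transformation, so that the $5\times5$ matrices indeed act on the points $(t^{\mu},1)^{\ssc T}$ as Poincar\'e transformations.
\begin{verbatim}
# Exponentiating a generator of the form [[-omega, tbar], [0, 0]] (cf. Eq. (t-trans)):
# the 4x4 block of the result preserves the Minkowski metric, and the last column
# implements a translation, i.e. (t, 1) -> (Lambda t + translation, 1).
import numpy as np
from scipy.linalg import expm

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

chi, b0 = 0.3, 2.0                    # a boost parameter and a time translation
omega_mixed = np.zeros((4, 4))        # omega^mu_nu for a boost along x
omega_mixed[0, 1] = omega_mixed[1, 0] = -chi

X = np.zeros((5, 5))
X[:4, :4] = -omega_mixed              # the -omega^mu_nu block
X[0, 4] = b0                          # the tbar^mu column

g = expm(X)
Lam = g[:4, :4]
print(np.allclose(Lam.T @ eta @ Lam, eta))   # True: Lambda is in SO(1,3)

point = np.array([1.0, 0.5, 0.0, 0.0, 1.0])  # a coset point (t^mu, 1)
print(g @ point)                             # transformed point, last entry still 1
\end{verbatim}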
Einsteinian relativity says space and time are coordinates of a single spacetime, hence they are naturally to be expressed in the same units. It does not say that the spatial units are preferable, or in some sense more natural, than the time units! Straight to the spirit of special relativity, we should rather use the same unit to measure $t^\mu$ and $x^\mu$ in which $c=1$. With the different units, although textbooks typically use $x^\mu$, what we show below is that we should indeed start with $t^\mu$ as coordinates for Minkowski spacetime, as we have done above, if we want to directly and naturally recover $t$ and $x^i$ as coordinates of the representation space of Newtonian physics in the Newtonian limit, {\em i.e.} under the symmetry contraction described in the following section. In physical terms, $J_{\mu\nu}$ has the units of $\hbar$, while the algebra element $-\frac{i}{\hbar}(\omega^{\mu\nu} J_{\mu\nu}+ b^\mu E_\mu)$ has no units (for we do not want to exponentiate something that has units). Hence, $\omega^{\mu\nu}$ must also have no units, and $b^\mu E_\mu$ has the units of $\hbar$, giving $b^\mu$ the unit of time. Similarly, $a^\mu$, and $x^\mu$, as well as $A^\mu$, have the units of $\hbar$ divided by that of $P_\mu$. All quantities now have the right units, and $c$ of course has the units of $\frac{x^\mu}{t^\mu}$, {\em i.e.} distance over time. \subsection{The Phase Space for Particle Dynamics as a Coset Space} After the Minkowski spacetime $\mathfrak{M}$ described above, we come to another important coset space of the Poincar\'e symmetry, one that serves as the phase space for a single particle. Besides the spacetime coordinates, we also need the momentum or equivalently the velocity coordinates. However, the only parameters in the description of the group elements that correspond to velocity are those for the components of the three-vector $\zb^i=\om^{i\ssc 0}$. The candidate coset space is $ISO(1,3)/SO(3)$ which is seven-dimensional. An otherwise candidate is $ISO(1,3)/T_{\!\ssc H} \times SO(3)$ where $T_{\!\ssc H}$ denotes the one-parameter group of (`time') translations generated by $H=E_{0}$, which corresponds to the physical energy. That space loses the time coordinate $t^{\ssc 0}$ which cannot be desirable. There is a further option of extending $ISO(1,3)/SO(3)$. Let us first look carefully at the latter coset space. Instead of deriving that coset space `representation' from the first principle as for the Minkowski spacetime above, however, we construct it differently. The coset space here is not a vector space, hence the group action on it is not a representation. Without the linear structure, the group transformations cannot be written in terms of matrices acting on vectors representing the states each as a point in the space. Moreover, obtaining the resultant coset of a generic group transformation on a coset following the approach above is a lot more nontrivial. A vector space description of a phase space as a simple extension of the coset space can be constructed from physics consideration. Newtonian mechanics as the nonrelativistic limit to special relativity has of course a six-dimensional vector space as the phase space, each point in which is described by two three-vectors, the position vector $x^i$ and the momentum vector $p^i$. The two parts are in fact independent coset space representations of the corresponding relativity symmetry -- the Galilean relativity. Or the full phase space can be taken as a single coset space. 
Going to special relativity, the three-vectors are to be promoted to Minkowski four-vectors. A four-vector is an element in the four dimensional irreducible representation of the $SO(1,3)$ symmetry, while a three-vector belongs to the three dimensional irreducible representation of the $SO(3)$ group as a subgroup of $SO(1,3)$. Promoting $x^i$ to $x^\mu$ we get the Minkowski spacetime $\mathfrak{M}$ depicted with $t^\mu=\frac{x^\mu}{c}$ as the $ISO(1,3)/SO(1,3)$ coset space. Things for the momentum four-vector $p^\mu$ are somewhat different. It is a constrained vector with magnitude square $p_\mu p^\mu$ fixed by the particle rest mass $m$ as $-(mc)^2$, so long as the theory of special relativity is concerned. The actual admissible momenta only corresponds to points on the hyperboloid $p_\mu p^\mu=-(mc)^2$, which is a three-dimensional curved space. This suggests using the eight-dimensional vector space of $(x^\mu,p^\mu)$, or equivalently $(t^\mu, u^{\mu})$ with $u^{\mu}= \frac{p^\mu}{mc}$, the velocity four-vector in $c=1$ unit, for a Lorentz covariant formulation. The dimensionless `momentum' $u^{\mu}$ is used for the conjugate variables mostly to match better to the group coset language. The value of $-(mc)^2$ though is a Casimir invariant of the Poincar\'e symmetry which is a parameter for characterizing a generic irreducible representation of the symmetry \cite{T}. So, it makes good sense to use the momentum variables, though it really makes no difference when only a single particle is considered. The momentum or rather velocity hyperboloid $u_\mu u^\mu=-1$, recall $u^\mu=(\zc, \zc\zb^i)^{\!\ssc T}$, is indeed a homogeneous space of $SO(1,3)$ corresponding to the coset space $SO(1,3)/SO(3)$. $SO(3)$ which keeps the point $u^{\mu}=(1,0,0,0)^{\!\ssc T}$ fixed is the little group. A simple way to see that is to identify each point in the hyperboloid by the Lorentz boost that uniquely takes the reference point $u^{\mu}=(1,0,0,0)^{\!\ssc T}$ to it, hence equivalently by the coset represented by the boosts. Matching with the group notation as we have above, each coset is an $\exp(-\frac{i}{\hbar} \om^{{\ssc 0}i} J_{{\ssc 0}i}) SO(3)$. In fact, the coordinate for the coset $ \om^{{\ssc 0}i}=-\om^{i{\ssc 0}}$ can be identified with $-\zb^i$, for example from $t'^{\ssc 0}=\zc (t^{\ssc 0} + \zb_i t^i)$ giving $dt^{\ssc 0}=\bar\zb_i t^i =-\bar\om^{\ssc 0}_i t^i$. Putting together the `phase space' as a product of the configuration space and the momentum space, we have \[ ISO(1,3)/SO(1,3) \times SO(1,3)/SO(3) \;, \] which is mathematically exactly $ISO(1,3)/SO(3)$. We cannnot use it as the actual phase space in the Hamiltonian formulation of the particle dynamics, which has to have coordinates in conjugate pairs. Note that no parameter in the full Poincar\'e group can correspond to $u^{\ssc 0}$ and $\zb^i$ cannot be part of a four-vector. But there is no harm using the redundant coordinates $u^\mu$ to describe points in the velocity hyperboloid. That is mathematically a natural embedding of the velocity hyperboloid into the Minkowski four-vector velocity space $\mathfrak{M}_v$. Let us write down the explicit infinitesimal action of $SO(1,3)$ on $SO(1,3)/SO(3)$. Note that the translations generated by $E_\mu$ in the Poincar\'e group do not act on the velocity four-vector $u^{\mu}$. The action hence can be seen as the full action of the Poincar\'e group. Obviously, we have simply $d u^{\mu} = -\bar\om^\mu_{\,\nu} u^{\nu}$. 
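Before rewriting $du^{\mu}$ in terms of the coset coordinates $\zb^i$, here is a small numerical check of the hyperboloid picture (same \texttt{numpy}/\texttt{scipy} assumptions, with an arbitrary boost velocity of our choosing): a pure boost applied to the reference point $u^{\mu}=(1,0,0,0)^{\!\ssc T}$ stays on $u_{\mu}u^{\mu}=-1$ and reproduces the components $(\zc,\zc\zb^i)$.
\begin{verbatim}
# The velocity hyperboloid as a boost orbit: boost the reference point
# u = (1, 0, 0, 0) and check u.eta.u = -1 with components (gamma, gamma*beta).
import numpy as np
from scipy.linalg import expm

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
beta = np.array([0.6, 0.0, 0.0])             # arbitrary boost velocity (units of c)
b = np.linalg.norm(beta)
chi, n = np.arctanh(b), beta / b             # rapidity and direction

M = np.zeros((4, 4))                         # boost generator -omega^mu_nu
M[0, 1:] = M[1:, 0] = chi * n

u = expm(M) @ np.array([1.0, 0.0, 0.0, 0.0])
gamma = 1.0 / np.sqrt(1.0 - b * b)

print(np.isclose(u @ eta @ u, -1.0))                             # on the hyperboloid
print(np.allclose(u, np.concatenate(([gamma], gamma * beta))))   # u = (gamma, gamma*beta)
\end{verbatim}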
Rewriting that by taking out a $\zc=u^{\ssc 0}$ factor, we have \bea\label{db} d \zb^i + \zb^i\frac{d\zc}{\zc} = - \bar\om^i_j \zb^j + \bar\zb^i \;, \eea and $\frac{d\zc}{\zc}={\bar\zb_k \zb^k}$. The latter as the extra term in the $d \zb^i$ expression shows the complication of the description in terms of the coset coordinates $\zb^i$ or $\om^{{\ssc 0}i}$ versus the simple picture in terms of $u^{\mu}$. \section{Special Relativity as a Theory of Hamiltonian Dynamics} The Hamiltonian formulation of a dynamical theory is a powerful one which is also particularly good for a symmetry theoretical formalism. Here, we consider a coset space of the relativity symmetry group as the particle phase space, one bearing the geometric structure of a so-called symplectic space. The structure can be seen as given by the existence of a Poisson bracket as a antisymmetry bilinear structure on the algebra of differentiable functions $F$ on the space to be given under local coordinates $z^n$ as \[ \{ F(z^n), F'(z^n) \} = \Omega^{mn} \frac{\partial F}{\partial z^m} \frac{\partial F'}{\partial z^n} \;, \qquad \Omega^{mn} = -\Omega^{nm} \;, \;\; \det\Omega =1 \;. \] In terms of canonical coordinates, for example the position and momentum of a single (`nonrelativistic') Newtonian particle, we have \[ \{ F(x^i, p^i), F'(x^i, p^i) \} = \delta^{ij} \left( \frac{\partial F}{\partial x^i} \frac{\partial F'}{\partial p^j} - \frac{\partial F}{\partial p^j} \frac{\partial F'}{\partial x^i} \right)\;. \] General Hamiltonian equation of motion for any observable $F(z^n)$ is given by \bea \frac{d}{dt} F(z^n) = \{ F(z^n), {{H}}_t (z^n) \} \;, \eea where ${{H}}_t (z^n)$ is the physical Hamiltonian as the energy function on the phase space, which for case of $F$ being $x^i$ or $p^i$ reduces to \[ \frac{d}{dt} x^i = \frac{\partial {H}_t}{\partial p^i} \;, \qquad \frac{d}{dt} p^i = -\frac{\partial {H}_t}{\partial x^i} \;. \] Note that the configuration/position variables $x^i$ and momentum variables $p^i$ are to be considered the basic independent variables while the Newtonian particle momentum being mass times velocity is to be retrieved from the equations of motion for the standard case with the $p^i$ dependent part of ${{H}}_t$ being $p_ip^i/2m$. \subsection{Dynamics as Symmetry Transformations} The key lesson here is to appreciate that the phase space (symplectic) geometric structure guarantees that for any generic Hamiltonian function ${\mathcal{H}}_s$, points on the phase space having the same value for the function lie on a curve of the Hamiltonian flow characterized by the monotonically increasing real parameter $s$ on which any observables $F(z^n)$ satisfy the equation \bea\label{hs} \frac{d}{ds} F(z^n) = \{ F(z^n), {\mathcal{H}}_s (z^n) \} \;. \eea The equation of motion for the usual case is simply the case for ${\mathcal{H}}_t$, {\em i.e.} time evolution. Such a physical Hamiltonian can have more than one choice, so long as the evolution parameter is essentially a measure of time. The class of Hamiltonian flows each generated by a Hamiltonian function having a vanishing Poisson bracket with the physical Hamiltonian function are then the symmetries of the corresponding physical system and the Hamiltonian functions the related conserved quantities. In fact, a Hamiltonian flow is the one-parameter group of symmetry transformations with ${\mathcal{H}}_s$ the generator function. 
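To keep the formalism concrete, here is a small symbolic sketch (Python/\texttt{sympy}; the free-particle Hamiltonian and the angular-momentum example are our own illustrative choices) of the canonical Poisson bracket, Hamilton's equations, and a generator function whose flow is a symmetry of the dynamics.
\begin{verbatim}
# Canonical Poisson bracket in (x^i, p^i) and Hamilton's equations for a free
# Newtonian particle; L3 Poisson-commutes with H, so its flow (a rotation about
# the z-axis) is a symmetry, with L3 the conserved generator function.
import sympy as sp

x = sp.Matrix(sp.symbols('x1 x2 x3'))
p = sp.Matrix(sp.symbols('p1 p2 p3'))
m = sp.symbols('m', positive=True)

def pb(F, G):
    """Canonical Poisson bracket {F, G}."""
    return sum(sp.diff(F, x[i]) * sp.diff(G, p[i])
               - sp.diff(F, p[i]) * sp.diff(G, x[i]) for i in range(3))

H = (p.T * p)[0] / (2 * m)

print([pb(x[i], H) for i in range(3)])   # dx^i/dt = {x^i, H} = p^i/m
print([pb(p[i], H) for i in range(3)])   # dp^i/dt = {p^i, H} = 0

L3 = x[0] * p[1] - x[1] * p[0]
print(sp.simplify(pb(L3, H)))            # 0: L3 generates a symmetry flow
\end{verbatim}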
We have the Hamiltonian vector field \bea\label{Xs} X_s = - \{ {\mathcal{H}}_s (z^n), \cdot \} \eea as a differential operator being the generator and the collection of such $X_s$ being a representation of the basis vectors of the symmetry Lie algebra. Hence, we have \bea\label{dXs} \frac{dF}{ds} = X_s (F) \;. \eea The structure works at least for any theory of particle dynamics with any background relativity symmetry including for examples Newtonian and Einsteinian ones of our focus here as well as quantum mechanics. Mathematics for the latter case is quite a bit more involved and in many ways more natural and beautiful from the symmetry point of view. Interested readers are referred to Ref.\cite{070,087,094}. \subsection{Particle Dynamics of Special Relativity} For the phase space formulation of particle dynamics of special relativity, we can have a picture of the particle phase space as the coset space $\mathfrak{P}:=ISO(1,3)/ T_{\!\ssc H}\times SO(3)$ with canonical coordinates $(t^k,u^k)$. The standard Hamilton's equations in our canonical coordinates are \bea && \frac{dt_i}{dt}=\frac{\partial {\mathcal{H}_t} (t^k,u^{k})}{\partial u^{i}}\;, \qquad \frac{du_{i}}{dt}=-\frac{\partial {\mathcal{H}_t}(t^k,u^{k})}{\partial t^i}\;, \eea where Hamiltonian function ${\mathcal{H}_t}(t^k,u^{k})=\sqrt{1+u_{k}u^{k}}$, which is basically energy per unit mass in the dimensionless velocity unit ($mc^2 {\mathcal{H}_t}= c \sqrt{m^2c^2+p_{k}p^{k}}=c \, p^{\ssc 0}$). The equations are only special cases of Eq.(\ref{hs}). Note that the first equation really gives $\frac{dt_i}{dt}= \frac{u_i}{\sqrt{1+u_{k}u^{k}}}= \zb_i$ as ${\sqrt{1+u_{k}u^{k}}}=\zc$, and the second $\frac{du_{i}}{dt}=0$. For the extended phase space $\mathfrak{P}_e:= \mathfrak{M}\times \mathfrak{M}_v$, with canonical coordinates $(t^\mu,u ^\mu)$, we have \bea && \frac{dt_\mu}{d\zeta}=\frac{\partial \tilde{\mathcal{H}}_\zeta (t^\nu,u ^\nu)}{\partial u^\mu}\;, \qquad \frac{du_\mu}{d\zeta}=-\frac{\partial \tilde{\mathcal{H}}_\zeta (t^\nu,u ^\nu)}{\partial t^\mu}\;, \eea with the extended Hamiltonian $\tilde{\mathcal{H}}_\zeta(t^\nu,u ^\nu)={\mathcal{H}_t} -u^{\ssc 0}$ giving, besides the same results as from ${\mathcal{H}_t}$ above, $\frac{du_{\ssc 0}}{d\zeta}=0$ for consistency and $\frac{dt_{\ssc 0}}{d\zeta}=-1$, hence $\zeta$ as essentially the coordinate time $t^{\ssc 0} \equiv t$, and the same dynamics \cite{J}. Alternatively, we can have a covariant description with the proper time evolution \bea && \frac{dt_\mu}{d\tau}=\frac{\partial \tilde{\mathcal{H}}_\tau (t^\nu,u ^\nu)}{\partial u^\mu} = u_\mu\;, \qquad \frac{du_\mu}{d\tau}=-\frac{\partial \tilde{\mathcal{H}}_\tau (t^\nu,u ^\nu)}{\partial t^\mu} =0 \;, \eea where $\tilde{\mathcal{H}}_\tau (t^\nu,u ^\nu)= \frac{u_\nu u^\nu}{2}$. All formulations have equations of the form (\ref{hs}). In fact, they can be seen all as special cases of the single general equation from the symmetry of the symplectic manifold coordinated by $(t^\mu,u ^\mu)$. We only write free particle dynamics here. The reason being special relativity actually does not admit motion under a nontrivial $x^\mu$ or $t^\mu$ dependent potential without upsetting $u_\mu u^\mu=-1$. Motion under gauge field, like electromagnetic field, modifies the nature of the conjugate momentum and the story is somewhat different. \subsection{Hamiltonian Flows Generated by Elements of the Poincar\'e Symmetry} We first look at the $(t^\mu, u^\mu)$ phase space picture. 
With the canonical coordinates, we have from Eq.(\ref{Xs}) \bea && d t_\mu = \bar{s} \frac{\partial \tilde{\mathcal{H}}_{s}}{\partial u^\mu} \;, \qquad d u_\mu = -\bar{s} \frac{\partial \tilde{\mathcal{H}}_{s}}{\partial t^\mu} \;, \eea where $\bar{s}=ds$ is the infinitesimal parameter in line with the notation of our coset descriptions above. We can see that the canonical transformations given by the equations for the generators of the Poincar\'e symmetry exactly agree with our coset picture above. For $\tilde{\mathcal{H}}_{\omega^{\mu\nu}} = t_\mu u_\nu - t_\nu u_\mu$, we have $dt^\rho = -\delta^\rho_\mu \bar\omega^{\mu}_{\,\nu} t^\nu + \delta^\rho_\nu \bar\omega_{\mu}^{\,\nu} t^\mu$ and $du^\rho = - \delta^\rho_\mu \bar\omega^{\mu}_{\,\nu} u^\nu + \delta^\rho_\nu \bar\omega_{\mu}^{\,\nu} u^\mu$, while for $\tilde{\mathcal{H}}_{b^{\mu}} = u_\mu $, we have $dt^\rho = \delta^{\rho}_\mu\bar{b}^\mu$, $du^\rho =0$ --- note that here we are talking about specific $\tilde{\mathcal{H}}_s$ functions with specific infinitesimal parameters $\bar{s}$ on specific phase space variables and there is no summation over any of the indices involved in the expressions. The ten Hamiltonian functions $\tilde{\mathcal{H}}_{\omega^{\mu\nu}}$ and $\tilde{\mathcal{H}}_{b^{\mu}}$ combined together gives a full realization of the action of the Poincar\'e symmetry as transformations on the covariant phase space of $(t^\mu, u^\mu)$. One can check that with the Poisson bracket as the Lie bracket, they span a Lie algebra: \bea && \{ \tilde{\mathcal{H}}_{\omega^{\mu\nu}}, \tilde{\mathcal{H}}_{\omega^{\lambda\rho}} \}_{\!\ssc 4} = -( \eta_{\nu\lambda} \tilde{\mathcal{H}}_{\omega^{\mu\rho}} - \eta_{\mu\lambda} \tilde{\mathcal{H}}_{\omega^{\nu\rho}} + \eta_{\mu\rho} \tilde{\mathcal{H}}_{\omega^{\nu\lambda}} - \eta_{\nu\rho} \tilde{\mathcal{H}}_{\omega^{\mu\lambda}} ) \;, \sea \{ \tilde{\mathcal{H}}_{\omega^{\mu\nu}},\tilde{\mathcal{H}}_{b^{\rho}}\}_{\!\ssc 4} = -( \eta_{\nu\rho} \tilde{\mathcal{H}}_{b^{\mu}} - \eta_{\mu\rho} \tilde{\mathcal{H}}_{b^{\nu}} )\;, \qquad \{\tilde{\mathcal{H}}_{b^{\mu}},\tilde{\mathcal{H}}_{b^{\nu}}\}_{\!\ssc 4} =0 \;, \eea where we have explicitly \[ \{\tilde{\mathcal{H}}_{\ssc 1},\tilde{\mathcal{H}}_{\ssc 2}\}_{\!\ssc 4} = \eta^{\mu\nu} \left( \frac{\partial \tilde{\mathcal{H}}_{\ssc 1}}{\partial t^\mu} \frac{\partial \tilde{\mathcal{H}}_{\ssc 2}}{\partial u^\nu} - \frac{\partial \tilde{\mathcal{H}}_{\ssc 1}}{\partial u^\nu} \frac{\partial \tilde{\mathcal{H}}_{\ssc 2}}{\partial t^\mu} \right) \;. \] Matching $\tilde{\mathcal{H}}_{\omega^{\mu\nu}}$ to $i\hbar J_{\mu\nu}$ and $\tilde{\mathcal{H}}_{b^{\mu}}$ to $i\hbar E_{\mu}$ we can see that the Lie algebra is that of the Poincar\'e symmetry given by Eq.(\ref{PS}). In fact, it is a representation of the symmetry on the space of functions of the phase space variables. If the phase space $\mathfrak{P}$ is taken, however, we can have only as Hamiltonian functions ${\mathcal{H}}_{\omega^{ij}}$ and ${\mathcal{H}}_{b^{i}}$, with identical expressions to $\tilde{\mathcal{H}}_{\omega^{ij}}$ and $\tilde{\mathcal{H}}_{b^{i}}$, illustrating only the $ISO(3)$ symmetry of translations and rotations, with the Lie product \[ \{ {\mathcal{H}}_{\ssc 1},{\mathcal{H}}_{\ssc 2}\}_{\!\ssc 3} = \eta^{ij} \left( \frac{\partial {\mathcal{H}}_{\ssc 1}}{\partial t^i} \frac{\partial {\mathcal{H}}_{\ssc 2}}{\partial u^j} - \frac{\partial {\mathcal{H}}_{\ssc 1}}{\partial u^j} \frac{\partial {\mathcal{H}}_{\ssc 2}}{\partial t^i} \right) \;. 
\] The time translation symmetry can be added with ${\mathcal{H}}_{t}$ given above, which has the right vanishing Lie product as $\{ {\mathcal{H}}_{\omega^{ij}},{\mathcal{H}}_t \}_{\!\ssc 3}=0$ and $\{ {\mathcal{H}}_{b^{i}},{\mathcal{H}}_t \}_{\!\ssc 3}=0$. Not being able to have the boosts as Hamiltonian transformations is one of the short-coming of not using the covariant phase space. \section{Contractions as Approximations of Physical Theories} With an understanding of how the principle of relativity informs our notion of physical spacetime and the theory of particle dynamics behind us, we can move on to the important connection this language provides us between different theories from the relativity symmetry perspective. Broadly speaking this can be put as: it is commonplace to find phrases like ``Newtonian physics arises from special relativity when $c\to\infty$'' and we will place such comments on a firm mathematical foundation within the relativity theoretical symmetry setting. \subsection{A Crash Course on Symmetry Contractions} Imagine we are standing on a perfectly spherical, uninhabited planetary body\footnote{ This example, and indeed the entire examination of contractions found here, is strongly influenced by the wonderful discussion found in \cite{Gil}.}. The transformation that arise as symmetries of said body are nothing more than the $SO(3)$ group elements as rotations about the center. Now consider what we can say if this body began to rapidly expand without limit. It is intuitively clear that as the radius of the sphere becomes larger and larger, making the surface of the sphere more and more flat, the symmetries of this body should be approaching, in some sense, those of the Euclidean plane, {\em i.e.} $ISO(2)$. It might not, however, be immediately clear how exactly this is encoded in the structure of the Lie algebras. How might one achieve such a description? It is ultimately this question, applied to a general Lie algebra $\mathfrak{g}$, that we are concerned with in this section. The notion of a \textit{contraction} is precisely the answer we are looking for. In particular, we will focus on the simplest form of contractions: the so-called \textit{In\"{o}n\"{u}-Wigner contractions} \cite{IW}. The setup is as follows: consider a Lie algebra $\mathfrak{g}$ with a decomposition $\mathfrak{g}=\mathfrak{h}+\mathfrak{p}$, where $\mathfrak{h}$ is an $n$-dimensional subalgebra and $\mathfrak{p}$ the complementary $m$-dimensional vector subspace. In terms of our example above, the idea is that we collect the portion of the symmetries that do not change in the limit (which are the rotations around the vertical axis through where we stand on the planet, for the example at hand) and call their Lie algebra $\mathfrak{h}$. The rest, or the span of the independent generators is $\mathfrak{p}$. Then we can form a one-parameter sequence of base changes, corresponding directly to the change in scale of the physical system, of the form \[\left( \begin{array}{c} \mathfrak{h} \\ \mathfrak{p}' \end{array} \right)=\left( \begin{array}{cc} I_n & 0 \\ 0 & \frac{1}{R} I_m \end{array} \right)\left( \begin{array}{c} \mathfrak{h} \\ \mathfrak{p} \end{array} \right)\] for any nonzero value of $R$ here taken conveniently as positive. For any finite $R$, the Lie algebra hence our symmetry is not changed. In the $R \to \infty$ limit, however,we obtain the contracted algebra $\mathfrak{g}'=\mathfrak{h}\oplus\mathfrak{p}'$. 
Note that, although the change of basis matrix is singular in the limit, the commutation relations still make sense:
\begin{align*}
[\mathfrak{h},\mathfrak{h}] = [\mathfrak{h},\mathfrak{h}] & \subseteq \mathfrak{h} \qquad \qquad \qquad\qquad \xrightarrow{\quad R \to \infty \quad} \quad \mathfrak{h} \;, \\
[\mathfrak{h},\mathfrak{p}'] = \frac{1}{R} [\mathfrak{h},\mathfrak{p}] & \subseteq \frac{1}{R} (\mathfrak{h}+\mathfrak{p}) =\frac{1}{R} \mathfrak{h}+\mathfrak{p}' \quad \xrightarrow{\quad R \to \infty\quad} \quad \mathfrak{p}'\;, \\
[\mathfrak{p}',\mathfrak{p}'] = \frac{1}{R^2} [\mathfrak{p},\mathfrak{p}] & \subseteq \frac{1}{R^2} (\mathfrak{h}+\mathfrak{p}) = \frac{1}{R^2} \mathfrak{h}+\frac{1}{R} \mathfrak{p}' \xrightarrow{\quad R \to \infty \quad} \quad 0 \;.
\end{align*}
Though the vector space is the same, the Lie products, or commutators, change. $\mathfrak{p}$ is in general not even a subalgebra of $\mathfrak{g}$. $\mathfrak{p}'$ is however an Abelian subalgebra of $\mathfrak{g}'$, and an invariant one. Take the explicit example we had above. The Lie algebra $\mathfrak{so}(3)$ for the group $SO(3)$ is given by the commutation relations
\[
[J_x,J_y]= i\hbar J_z, \quad[J_y,J_z]=i\hbar J_x, \quad \textnormal{and} \quad [J_z,J_x]=i\hbar J_y \;.
\]
Under the rescaling $P_x=\tfrac{1}{R}J_x$ and $P_y=\tfrac{1}{R}J_y$, with $J_z$ as the generator of $\mathfrak{h}$ left unchanged (taking a coordinate system in which the point where we stand lies on the positive $z$-axis), the commutators become
\begin{align*}
&[P_x,P_y]=\frac{1}{R^2}[J_x,J_y] =\frac{i\hbar}{R^2}{J}_z \to 0 \;, \\
&[{J}_z,P_x]=\frac{1}{R}[J_z,J_x]= i\hbar P_y \;, \\
&[{J}_z,P_y]=\frac{1}{R}[J_z,J_y]=- i\hbar P_x \;,
\end{align*}
in the limit as $R\rightarrow\infty$. Therefore, we recover precisely the commutation relations of the Lie algebra\footnote{ Our notation is such that it matches nicely the Poincar\'e symmetry notation used above, with the identification of $J_x$, $J_y$, $J_z$, $P_x$ and $P_y$ as $J_{\!\ssc 23}$, $J_{\!\ssc 31}$, $J_{\!\ssc 12}$, $E_{\!\ssc 1}$ and $E_{\!\ssc 2}$, respectively. } $\mathfrak{iso}(2)$. From the physical, geometric perspective, we see that what is really happening in the limit is that the ratio between the characteristic distance scale we have chosen, like the length of our footstep or the distance we can travel, and the radius is going to zero. The radius $R$ is effectively infinity to us if we can only manage to explore a distance tiny in comparison. The planet is as good as flat to us then, though it is only an approximation.

\subsection{The Poincar\'e to Galilean Symmetry Contraction}
Our starting point for describing the transition from Einsteinian relativity to Galilean relativity is the following natural choice of a contraction of the Poincar\'e algebra to the Galilean algebra. Moreover, we will see that this takes Minkowski spacetime, viewed as a coset space of $ISO(1,3)$, to ordinary Newtonian space-time, viewed as a coset space of $G(3)$. Actually, it goes all the way to taking the full dynamical theory, as given by the symplectic geometry of the phase space as a representation space, from that of special relativity to the Newtonian one. The contraction is performed via the new generators $K_i= \frac{1}{c} J_{i{\ssc 0}}$ and $P_i = \frac{1}{c} E_i$, keeping $J_{ij}$ unchanged, while $E_{\!\ssc 0}$ is renamed $-H$.
Then we have \bea && [J_{ij}, J_{hk}] = - i \hbar ( \delta_{jh} J_{ik} - \delta_{ih} J_{jk} + \delta_{ik} J_{jh}- \delta_{jk} J_{ih})\;, \nonumber \\ && [J_{jk}, H] =0 \;, \eea which is the subalgebra that is not rescaled ($\eta_{ij}=\delta_{ij}$). As for the other commutators, we have \bea && [J_{ij}, K_k]= \frac{1}{c} [ J_{ij}, J_{k{\ssc 0}} ] = - i \hbar ( \delta_{jk} K_i -\delta_{ik} K_j ) \;, \nonumber \\ && [K_i, K_j] =\frac{1}{c^2} [J_{i{\ssc 0}}, J_{j{\ssc 0}}] = -i \hbar\frac{1}{c^2} J_{ij} \;, \nonumber \\ && [J_{ij}, P_k ] = \frac{1}{c} [J_{ij}, E_k ] = -i \hbar (\delta_{jk} P_i -\delta_{ik} P_j) \;, \nonumber \\ && [J_{ij}, H] = 0 \;, \nonumber \\ && [K_i, P_j] = \frac{1}{c^2} [ J_{i\ssc 0}, E_{j} ] = -i \hbar\frac{1}{c^2} \eta_{ij} H \;, \nonumber \\ && [K_i, H] = -\frac{1}{c} [J_{i{\ssc 0}}, E_{\ssc 0}] = -i \hbar P_i \;, \nonumber \\ && [H, P_i] = -\frac{1}{c}[E_{\ssc 0}, E_i]=0 \;, \nonumber \\ && [P_i, P_j]= \frac{1}{c^2}[E_i, E_j]=0 \;. \eea When we take the $c\to\infty$ limit, we have $[K_i, K_j] =0$ and $[K_i, P_j] =0$. That is, we recover the Galilean symmetry algebra. Note that we need the $\tfrac{1}{c}$ factor in $K_i= \tfrac{1}{c} J_{i{\ssc 0}}$ in order to get $[K_i, K_j] =0$, hence, Lorentz boosts becoming commutating Galilean boosts. Moreover, this will give $[K_i, P_j]=0$ as well if we simply take $P_i=E_i$. However, this will also yield $[K_i, H] =0$ in the contraction limit which cannot be the Galilean symmetry. By taking $P_i = \frac{1}{c} E_i$ though, one can see that this saves $[K_i, H] =-i \hbar P_i $, as needed. This is actually precisely the reason we wanted to start with $E^\mu$, instead of $P^\mu$! Indeed, the momentum $P_i$ are not the generators of the Poincar\'e algebra we started with before the introduction of the nontrivial factor of $c$. The mathematical formulation of the contraction above can also be understood from a geometric picture. It is about an approximation when the relevant velocities of particle motion have magnitudes small relative to the speed of light $c$, {\em i.e.} $\beta^i <\!\!< 1$. The velocity space for particle motion under special relativity is the three-dimensional hyperboloid of `radius' $c$ -- the four-velocity $cu^\mu$ is a timelike vector of magnitude $c$. When we are only looking at a small region around zero motion of $u^\mu=(1,0,0,0)^{\!\ssc T}$, the velocity space seems to be flat, like the {\em Euclidean} space of Newtonian three-velocity $v^i$, and the boosts as commuting velocity translations. \subsection{Retrieving Newtonian Space-Time from Minkowski Spacetime} Now we can parse the changes in the Minkowski spacetime coordinates $t^\mu$, as a representation, under the contraction. First of all, we have to write our algebra elements in terms of these new generators, in order to paint a coherent picture. We have \bea -\frac{i}{\hbar}\left(\frac{1}{2}\omega^{\mu\nu} J_{\mu\nu}+ b^\mu E_\mu\right) &=&-\frac{i}{\hbar}\left(\frac{1}{2}\omega^{ij} J_{ij}+ b^{\ssc 0} E_{\ssc 0} +\omega^{{\ssc 0}i} J_{{\ssc 0}i}+ b^{i} E_{i} \right) \nonumber \\ &=&-\frac{i}{\hbar}\left(\frac{1}{2}\omega^{ij} J_{ij}+ b^{\ssc 0} E_{\ssc 0} +c\,\omega^{i{\ssc 0}} K_i+ c\,b^{i} P_{i} \right) \nonumber \\ &=&-\frac{i}{\hbar}\left(\frac{1}{2}\omega^{ij} J_{ij}+ b^{\ssc 0} E_{\ssc 0} +v^i K_i+ a^{i} P_{i} \right), \eea where $v^i= c \,\omega^{i{\ssc 0}}$ and $a^i =c \,b^i$ are the new parameters for the boosts and spatial translations (i.e. the $x^i$ translations). 
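As a check on the rescaled brackets just listed, the computation can be automated. The following sketch (Python/\texttt{sympy}; the dictionary bookkeeping for algebra elements is our own device, not part of the formalism) encodes the structure constants of Eq.~(\ref{PS}) and reproduces the explicit $1/c^{2}$ factors in $[K_i,K_j]$ and $[K_i,P_j]$, together with the surviving $[K_i,H]=-i\hbar P_i$, so that the Galilean relations indeed follow in the $c\to\infty$ limit.
\begin{verbatim}
# Contraction check: with K_i = J_{i0}/c, P_i = E_i/c, H = -E_0, the brackets
# [K,K] and [K,P] carry explicit 1/c^2 factors (vanishing as c -> infinity),
# while [K_i, H] = -i hbar P_i survives.  Elements are dicts {basis label: coeff}.
import sympy as sp

hbar, c = sp.symbols('hbar c', positive=True)
eta = sp.diag(-1, 1, 1, 1)

def J(m, n):
    if m == n:
        return {}
    return {('J', m, n): 1} if m < n else {('J', n, m): -1}

def E(m):
    return {('E', m): 1}

def add(*terms, scale=1):
    out = {}
    for coeff, el in terms:
        for k, v in el.items():
            out[k] = sp.simplify(out.get(k, 0) + scale * coeff * v)
    return {k: v for k, v in out.items() if v != 0}

def bracket(a, b):
    out = {}
    for ka, va in a.items():
        for kb, vb in b.items():
            if ka[0] == 'E' and kb[0] == 'E':
                continue
            if ka[0] == 'J' and kb[0] == 'E':          # [J_{mn}, E_r]
                m, n, r = ka[1], ka[2], kb[1]
                term = add((eta[n, r], E(m)), (-eta[m, r], E(n)), scale=-sp.I * hbar)
            elif ka[0] == 'E' and kb[0] == 'J':        # [E_r, J_{mn}] = -[J_{mn}, E_r]
                m, n, r = kb[1], kb[2], ka[1]
                term = add((eta[n, r], E(m)), (-eta[m, r], E(n)), scale=sp.I * hbar)
            else:                                      # [J_{mn}, J_{lr}]
                m, n, l, r = ka[1], ka[2], kb[1], kb[2]
                term = add((eta[n, l], J(m, r)), (-eta[m, l], J(n, r)),
                           (eta[m, r], J(n, l)), (-eta[n, r], J(m, l)),
                           scale=-sp.I * hbar)
            out = add((1, out), (va * vb, term))
    return out

K = lambda i: {k: v / c for k, v in J(i, 0).items()}
P = lambda i: {k: v / c for k, v in E(i).items()}
H = {k: -v for k, v in E(0).items()}

print(bracket(K(1), K(2)))   # {('J', 1, 2): -I*hbar/c**2}            -> 0
print(bracket(K(1), P(1)))   # {('E', 0): I*hbar/c**2} = -(I*hbar/c^2) H -> 0
print(bracket(K(1), H))      # {('E', 1): -I*hbar/c} = -I*hbar P_1
\end{verbatim}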
In terms of these new parameters, the representation for the algebra is given by
\bea
\left(\begin{array}{c} dt \\ dx^i =c \, dt^i\\ 0 \end{array}\right)
&\equiv&
\left(\begin{array}{ccc} 0 & -\frac{1}{c} \bar\omega^{\ssc 0}_{\,j} & \bar{b} \\ -c\,\bar\omega^i_{\,\ssc 0} & -\bar\omega^i_{\,j} & \bar{a}^i =c\,\bar{b}^i \\ 0 & 0 & 0 \end{array}\right)
\left(\begin{array}{c} t \\ x^i= c \, t^i\\ 1 \end{array}\right)
= \left(\begin{array}{c} -\frac{1}{c} \bar\omega^{\ssc 0}_{\,j} x^j+ \bar{b} \\ -c\,\bar\omega^i_{\,\ssc 0} t - \bar\omega^i_{\,j} x^j + \bar{a}^i \\ 0 \end{array}\right)
\nonumber \\
&=&
\left(\begin{array}{ccc} 0 & \frac{1}{c^2} \bar{v}_j & \bar{b} \\ \bar{v}^i & -\bar\omega^i_{\,j} & \bar{a}^i \\ 0 & 0 & 0 \end{array}\right)
\left(\begin{array}{c} t \\ x^i \\ 1 \end{array}\right)
= \left(\begin{array}{c} \frac{1}{c^2} \bar{v}_j x^j+ \bar{b} \\ \bar{v}^i t - \bar\omega^i_{\,j} x^j + \bar{a}^i \\ 0 \end{array}\right) \;,
\eea
where we have used
\[
c\,\bar\omega^i_{\,\ssc 0} = -c \, \bar\omega^{i{\ssc 0} } = - c\, \bar\zb^{i} = - \bar{v}^i \;, \qquad
{\bar\omega^{\ssc 0}_{\,j}} = {\bar\omega^{{\ssc 0}j}} =-\frac{1}{c} \bar{v}_j \;.
\]
Lastly, we take the limit $c\to\infty$ and get
\bea
\left(\begin{array}{c} dt \\ dx^i \\ 0 \end{array}\right)
=\left(\begin{array}{ccc} 0 & 0 & \bar{b} \\ \bar{v}^i & -\bar\omega^i_{\,j} & \bar{a}^i \\ 0 & 0 & 0 \end{array}\right)
\left(\begin{array}{c} t \\ x^i \\ 1 \end{array}\right)
= \left(\begin{array}{c} \bar{b} \\ \bar{v}^i t - \bar\omega^i_{\,j} x^j + \bar{a}^i \\ 0 \end{array}\right) \;.
\eea
The group of finite transformations can be written in the form
\bea
\left(\begin{array}{c} t' \\ x'^i \\ 1 \end{array}\right)
=\left(\begin{array}{ccc} 1 & 0 & B \\ V^i & R^i_{\,j} & A^i \\ 0 & 0 & 1 \end{array}\right)
\left(\begin{array}{c} t \\ x^i \\ 1 \end{array}\right)
= \left(\begin{array}{c} t+B \\ V^i t + R^i_{\,j} x^j + A^i \\ 1 \end{array}\right) \;.
\eea
Newtonian space-time, with its transformations under a generic element of the Galilean group, has been retrieved. Now we can see that the Newtonian space-time `points' can be described by the coset
\[
\left(\begin{array}{ccc} 1 & 0 & t \\ V^i & R^i_{\,j} & x^i \\ 0 & 0 & 1 \end{array}\right)
=\left(\begin{array}{ccc} 1 & 0 & t \\ 0 & \delta^i_k & x^i \\ 0 & 0 & 1 \end{array}\right)
\left(\begin{array}{ccc} 1 & 0 & 0 \\ V^k & R^k_{\,j} & 0 \\ 0 & 0 & 1 \end{array}\right)\;,
\]
as
\[
\left(\begin{array}{ccc} 1 & 0 & t \\ V^i & R^i_{\,j} & x^i \\ 0 & 0 & 1 \end{array}\right)
\left(\begin{array}{c} 0 \\ 0 \\ 1 \end{array}\right)
= \left(\begin{array}{c} t \\ x^i \\ 1 \end{array}\right)\;.
\]
Indeed, the matrix expressed as that product of two is exactly a particular pure translation element $\exp\!\big(-\frac{i}{\hbar}(tH +x^iP_i)\big)$, in the form of the first matrix, multiplied by an arbitrary element containing only the rotations and the Galilean boosts, the latter acting as translations on the space of Newtonian velocities; it is hence an element of the coset $\exp\!\big(-\frac{i}{\hbar}(tH +x^iP_i)\big)\, ISO_v(3)$. The Newtonian space-time as a coset space is given by $G(3)/ISO_v(3)$, and the $ISO_v(3)$ subgroup is exactly the result of the contraction from $SO(1,3)$, {\em i.e.} we have
\[
ISO(1,3)/SO(1,3) \longrightarrow G(3)/ISO_v(3) \;.
\]
The infinitesimal action of the $G(3)$ group on the coset, here obtained from the contraction, may also be derived directly from first principles. The simpler commutation relations actually make the calculation easier. In Einsteinian relativity, spacetime should be described by coordinates carrying the same units.
The natural units are given by the $c=1$ units, which identifies each spatial distance unit with a time unit, and vice versa. If one insists on using different units for the time and space parts, $c$ has then the unit of distance over time and can be written as any value in different units, like $\sim 3\times 10^{8}\, ms^{-1}$, or $\sim 3\times 10^{28}\, A \, yr^{-1}$, or $\sim 3 \times 10^{-7}\, km \,ps^{-1}$, or $\sim 10^{-26} \, Mpc \, ps^{-1}$. The exact choice of units is arbitrary. The structure of the physical theory is independent of that. Hence, any finite value of $c$ describes the same symmetry represented by spacetime coordinates in different units. The $c\to\infty$ limit is different. Infinity is infinity in any units, and the algebra becomes the contracted one, which is to say that the relativity symmetry becomes Galilean. The latter is practical as an approximation for physics at velocity much less than $c$. Pictured in the Minkowski spacetime, such lines of motion hardly deviate from the time axis, giving the idea of the Newtonian absolute time. The relativity symmetry contraction picture gives a coherent description of all aspects of that approximate theory, including the dynamics to which we will turn below. \subsection{Hamiltonian Transformations and Particle Dynamics at the Newtonian Limit} Turning to the phase space pictures, we have already $dt^\mu = -\bar\om^\mu_\nu t^\nu +\bar{t}^\mu$ giving at the contraction limit $dt=\bar{t}$ and $dx^i= \bar{v}^i t - \om^i_j x^j +\bar{x}^i$. Similarly, we can see that \bea && du^{\ssc 0}= -\bar\om^{\ssc 0}_i u^i = \bar\zb_i u^i \quad\Longrightarrow \quad d\zc = \frac{\bar{v}_i v^i \zc}{c^2} \to 0 \;, \sea du^i = -\bar\om^i_\nu u^\nu \quad\Longrightarrow\quad dv^i = - v^i\frac{d\zc}{\zc} -\bar\om^i_j v^j + \bar{v}^i \to -\bar\om^i_j v^j + \bar{v}^i \;, \eea where we have used Eq.(\ref{db}). The phase space $\mathfrak{P}$ at the contraction limit should be described with coordinates $(x^i,v^i)$. The coset space of $ISO(1,3)/SO(3)$ or $\mathfrak{P}_e$ with $(t,x^i,v^i)$ as $\zc \to 1$ can no longer have $t$ as a meaningful coordinate. We have then only one sensible phase space as the Newtonian one here with $(x^i,v^i)$. To look at the Hamiltonian symmetry flows or the dynamics at the contraction limit, the notation of the Hamiltonian vector field is convenient. On $\mathfrak{P}$ with $\{\cdot,\cdot \}_{\ssc 3}$ we have \bea X_s^{\ssc (3)} &=& - \{ {\mathcal{H}}_s(t^i,u^i),\cdot\}_{\!\ssc 3} = - \eta^{ij} \left( \frac{\partial {\mathcal{H}}_s(t^i,u^i)}{\partial t^i} \frac{\partial}{\partial u^j} - \frac{\partial {\mathcal{H}}_s(t^i,u^i)}{\partial u^j} \frac{\partial }{\partial t^i} \right) \sea = - \delta^{ij} \left( \frac{\partial {c^2\mathcal{H}}_s(x^i,v^i)}{\partial x^i} \frac{\partial}{\partial (\zc v^j)} - \frac{\partial {c^2\mathcal{H}}_s(x^i,v^i)}{\partial (\zc v^j)} \frac{\partial }{\partial x^i} \right)\;. \eea For ${\mathcal{H}}_t=\sqrt{1+u_ku^k}$ in particular, we have $c^2 {\mathcal{H}}_t=c^2 + \frac{1}{2}\zc^2 v_kv^k + \dots$ where the terms not shown contain negative powers of $c^2$ and vanish at the $c \to \infty$ contraction limit. Multiply by the mass $m$ and take the expression to the contraction limit, the first term is diverging, but is really the constant rest mass contribution to energy, while the finite second term is the kinetic energy $m H_t=\frac{1}{2}m v_kv^k$. 
Anyway, the $c^2$ term being constant does not contribute to $X_t^{\ssc (3)}$, which then reduces to \bea X_t^{\ssc (3)} \quad \to \quad X_t = - \delta^{ij} \left( \frac{\partial H_t}{\partial x^i} \frac{\partial}{\partial v^j} - \frac{\partial H_t}{\partial v^j} \frac{\partial }{\partial x^i} \right)\;. \eea The Hamilton's equations of motion are more directly giving $\frac{dx_i}{dt}=v_i$ and $\frac{dv_i}{dt}=0$. We have retrieved free particle dynamics of the Newtonian theory, though with the mass $m$ dropped from the description. The case with a nontrivial potential energy $V$ can obviously be given by taking $H_t=\frac{1}{2} v_kv^k + \frac{V}{m}$. The fact that the case cannot be retrieved from the contraction limit of special relativity is a limitation of the latter which cannot describe potential interaction other than those from gauge fields \cite{noi}. On the Lorentz covariant phase space, we have \bea X_s^{\ssc (4)} &=& - \{ \tilde{\mathcal{H}}_s(t^\mu,u^\mu),\cdot\}_{\!\ssc 4} = - \{ \tilde{\mathcal{H}}_s,\cdot\}_{\!\ssc 3} - \eta^{\ssc 00} \left( \frac{\partial \tilde{\mathcal{H}}_s}{\partial t^{\ssc 0}} \frac{\partial}{\partial u^{\ssc 0}} - \frac{\partial \tilde{\mathcal{H}}_s}{\partial u^{\ssc 0}} \frac{\partial }{\partial t^{\ssc 0}} \right)\;. \eea This, together with the above, shows that for $c \to \infty$ \bea X_\zeta^{\ssc (4)} = X_t^{\ssc (3)} + \frac{\partial }{\partial t^{\ssc 0}} \quad \to \quad X_t + \frac{\partial }{\partial t} \;, \eea giving the same dynamics. Similarly, we have $X_\tau^{\ssc (4)}$ giving the same limit, as $c^2 \tilde{\mathcal{H}}_\tau = H_t + \frac{c^2\zc^2}{2} \to H_t + \frac{c^2}{2}$. The exact limits of the $X_s^{\ssc (4)}$ are generally vector fields on the space of $(t,x^i,v^i)$ though. The space can be seen as an extension of the Newtonian phase space with the time coordinate, and a Poisson bracket defined independent of the latter. The true Hamiltonian vector field as a vector field on the phase space should have the $t$ part dropped from consideration, like $X_t + \frac{\partial }{\partial t}$ projected onto $X_t$. Further extending the analysis to the Hamiltonian functions ${\mathcal{H}}_{\om^{ij}}=\tilde{\mathcal{H}}_{\om^{ij}}$, ${\mathcal{H}}_{b^{i}}=\tilde{\mathcal{H}}_{b^{i}}$, ${\mathcal{H}}_t$, $\tilde{\mathcal{H}}_{\om^{\ssc i0}}$, and $\tilde{\mathcal{H}}_{b^{\ssc 0}}$, one can retrieve \bea && \{ H_{\omega^{ij}}, H_{\omega^{hk}} \} = -( \delta_{jh} H_{\omega^{ik}} - \delta_{ih} H_{\omega^{jk}} + \delta_{ik} H_{\omega^{jh}} - \delta_{jk} H_{\omega^{ih}} ) \;, \sea \{ H_{\omega^{ij}},H_{v^{k}}\} = -( \delta_{jk} H_{v^{i}} - \delta_{ik} H_{v^{j}} )\;, \qquad \{H_{v^{i}},H_{v^{j}}\} =0 \;, \sea \{ H_{\omega^{ik}},H_{a^{k}}\} = -( \delta_{jk} H_{a^{i}} - \delta_{ik} H_{a^{j}} )\;, \qquad \{H_{a^{i}},H_{a^{j}}\} =0 \;, \sea \{H_{v^{i}},H_{a^{j}}\} = -\delta_{ij} \;, \qquad \{H_{v^{i}},H_{t}\} = - H_{a^{i}} \;, \qquad \{H_{t},H_{a^{i}}\} =0 \;, \sea \{ H_{\omega^{ij}}, H_{t} \} =0 \;, \eea with $H_{\omega^{ij}} = x_i v_j -x_j v_i$, $H_{v^{i}} = -x_i$, and $H_{a^{i}}= v_i$. We have already looked at $H_t$ from $c^2 {\mathcal{H}}_t$. $\tilde{\mathcal{H}}_{b^{\ssc 0}}= u_{\ssc 0}$ can of course be rewritten as $ - {\mathcal{H}}_t$, which is in line with the Galilean generator $H$ as $-E_{\ssc 0}$. Note that the set of Hamiltonian vector fields as differential operators serve as a representation of the Lie algebra generators, as infinitesimal transformations on the phase space, with their commutators as the Lie product/brackets. 
While the corresponding Hamiltonian functions are usually talked about as generating functions, or even generators, for the Hamiltonian flows on the phase space, they are really elements of the observable algebra, as the corresponding representation of the universal enveloping algebra, or some extension of the group algebra, with the simple functional product. The Lie product/bracket is there represented again as the commutator, which vanishes between all Hamiltonian functions. The Poisson bracket, as an alternative Lie bracket on the latter, realizes rather the Lie bracket of the representation of the $U(1)$ central extension of the Galilean symmetry \cite{Gil,gq}. The latter is essentially the relativity symmetry behind quantum mechanics. In fact, the `mismatch' between the two parts, namely $\{H_{v^{i}},H_{a^{j}}\} = -\delta_{ij}$ versus $[X_{v^{i}},X_{a^{j}}]=[K_i,P_j]=0$, can be better understood from the symmetry contraction of the quantum theory, hence can be seen to have a quantum origin \cite{070}. $H_{\omega^{ij}}$ and $H_{a^{i}}= v_i$ are from the limit of $c^2 \tilde{\mathcal{H}}_{\om^{ij}}$ and $c \tilde{\mathcal{H}}_{b^{i}}$, respectively. We can see from the above Hamiltonian vector field analysis that using $(x^i,v^i)$ instead of $(t^i,u^i)$ as canonical coordinates implies a $c^2$ factor for the matching Hamiltonian function. The extra $\frac{1}{c}$ factor in $H_{a^{i}}$ comes from the symmetry contraction with $P_i = \frac{1}{c} E_i$. A somewhat more complicated case is that of the Hamiltonian generator for the Galilean boosts, $H_{v^{i}}$. We are supposed to again take $c \tilde{\mathcal{H}}_{\om^{0i}}$ to the $c \to \infty$ limit, which gives $v_i t- x_i$ and the Hamiltonian vector field as $\frac{\partial }{\partial v^i} - t \frac{\partial }{\partial x^i}$. Projecting that vector field on the space of $(t,x^i,v^i)$ to a Hamiltonian vector field on the phase space gives only $\frac{\partial }{\partial v^i}$, which corresponds to our Hamiltonian function $H_{v^{i}}=-x_i$. If $v_i t- x_i$ is naively taken, all expressions would be the same except $\{H_{v^{i}},H_{v^{j}}\}$ which would then be $-2t \delta_{ij}$.

\section{Concluding Remarks}
As we have seen above, quite a lot of information about our description of physical systems is actually encoded in the underlying relativity symmetry algebra. What we hope to emphasize here is that this is really a great, if not the only correct, perspective from which one can classify a physical paradigm, as well as its possible extensions and approximations. It is also important to note that this story is not unique to special relativity and the Newtonian limit. There is, indeed, an additional question motivating this note, namely, how the symmetry perspective can be used to better understand quantum mechanics and its classical limit. The relativity symmetry contraction picture can be seen as a way to understand the classical phase space as an approximation to the quantum phase space \cite{070}, and even suggests a notion of a quantum model for the physical space \cite{066}. In a broader scope, relativity symmetry deformations have been much pursued as probes of possible dynamical theories at more fundamental levels \cite{dsr,dsr2,BL,dsr3,dsr31}.
Concerning the classical theories with some coset spaces serving essentially as the phase spaces for dynamical theories under the corresponding symmetries, it should be mentioned that the so-called coadjoint orbits of Lie groups are essentially the only nontrivial mathematical candidates for symplectic geometries. The full structures of all such symplectic geometries, and hence dynamical theories, can be derived \cite{gq,GS,ims}, though the detailed mathematics is not so easily accessible to many students. Coset spaces are also the natural candidates for homogeneous geometric spaces.

\begin{appendices}
\section{Deformations as Probes of More Fundamental Physics}
We would expect the symmetry of a physical system to be robust under small perturbations, as otherwise our limited precision in measurements would imply that we can \textit{never} actually correctly determine or identify the symmetry of a given system. Indeed, the fact that a minute perturbation -- too small to be detected by our best measuring apparatuses -- could yield a different symmetry Lie algebra than the actual one means that we are epistemologically blind to the underlying physics. As such, it makes sense to focus our attention on algebras that \textit{are} significantly robust under small perturbations. For a Lie algebra, a perturbation can be taken as a (small) modification of the structural constants. For example, taking the Lorentz symmetry of $SO(1,3)$ with generators in the standard physical units, the commutators/Lie brackets among the infinitesimal Lorentz boost generators are proportional to $1/c^2$, as can be seen in the main text. Actually, for any finite value of $c$, the symmetry is the same mathematical group/algebra. Again, at $1/c^2=0$, {\em i.e.} the speed of light being infinite, it is a different symmetry, the $ISO(3)$ of rotations and Galilean boosts. If we had not measured the finite speed of light, we would only be able to have an experimental lower bound for it. Confirming $1/c^2=0$ requires infinite precision, which can never be available. It makes sense then to prefer the Lorentz symmetry and see the Galilean one as probably only an approximation at physical velocities small compared to the yet undetermined large speed of light. That is more or less the argument Minkowski had \cite{D,Min} that one could have discovered special relativity from mathematical thinking alone. Here it is only about the idea that the zero structural constant in the Galilean symmetry makes it unstable under perturbations, or deformations. The physical identification of the constant as $1/c^2$ is not even necessary. It is straightforward to check that $SO(1,3)$ and $SO(4)$ are the only possible deformations of $ISO(3)$, within the Lie group/algebra setting, and they themselves are stable against deformations. One can argue that we have a similar situation with the commutator between a pair of position and momentum operators as generators of the Heisenberg-Weyl symmetry behind quantum mechanics; actually, its zero commutator limit can essentially be identified with that between the $K_i$ and $P_i$ generators of the Galilean symmetry, with $K_i=mX_i$ and $m$ being the particle mass. Then, one can also further contemplate deforming the zero commutators among the position and momentum operators, all the way until reaching a stable Lie algebra, one for which no further deformation to a different Lie algebra is mathematically possible \cite{dsr2}.
Within the Lie group/algebra symmetry framework, the scheme may suggest a bottom-up approach to constructing some plausible, more fundamental theories.

\section{A Physicist's Sketch of the Necessary Group Theory Background}
A group is the abstract mathematical description of a system of symmetries, or symmetry transformations. Lie groups describe continuous symmetry transformations, like the collection of rotations through any possible real value of the angle. A symmetry group of a geometric space can be seen as a set of transformations that do not change the space. Note that apparently different groups of transformations on different spaces may be mathematically the same group. For example, the group of rotations, around the origin, on a plane is mathematically identified with the group of translations along the circle as a one-dimensional (curved) space. From the abstract mathematical point of view, each transformation is an abstract element of the group as a collection, which is given with a set of conditions, the group axioms, to be satisfied by the not necessarily commutative product defined between the elements. The latter is automatic for a Lie group defined as below. When a group is described as transformations on a vector space, that is called a representation of the group. Physicists often start formulating a group from one of its representations, for example as symmetry transformations on a model of the physical space, or the phase space, of a particle. For continuous symmetries, we would like to think about their infinitesimal counterparts. The mathematical description, or abstraction, is given by a (real) Lie algebra. It has a set of generators $X_i$ giving a generic element as a linear combination $a^i X_i$ (summed over $i$) for any set of real numbers $a^i$. Or we talk about the $a^i$ as real parameters, and each distinct set of values for them specifies an element. The Lie algebra is further defined by having a Lie product, $[\cdot,\cdot]$, between its elements given with
\[
[X_j, X_k] = c_{jk}^i X_i
\]
which is antisymmetric ($[X,Y]=-[Y,X]$) and satisfies the Jacobi identity
\[
[[X,Y],Z] + [[Y,Z],X] + [[Z,X],Y] =0 \;.
\]
The real numbers $c_{jk}^i$, not all independent, are called the structural constants of the Lie algebra. A Lie algebra is hence an abstract vector space, with no notion of vector magnitude or inner product, and the set of generators a basis of it. A generic element of the Lie group $G$ with an associated Lie algebra $\mathfrak{g}$ can be written as $\exp(a^i X_i)$, the formal power series, and an infinitesimal transformation as $\exp(\bar{a}^i X_i) =1+\bar{a}^i X_i$ for infinitesimals $\bar{a}^i$ ($1$ being the identity transformation, {\em i.e.} no transformation). Note that
\[
X_k = \frac{d}{da^k} \exp(a^i X_i)_{|_{a^i=0}} \;.
\]
A word of caution: the Lie product is a commutator with respect to the formal product $XY$, {\em i.e.} $[X,Y]=XY - YX$; the product $XY$ itself is, however, not an element of $\mathfrak{g}$ or $G$. It is, strictly speaking, not otherwise defined mathematically, except within the context of the corresponding universal enveloping algebra or a representation setting as a matrix/operator product. A subgroup is a part of the group which forms a group in itself. A Lie subgroup $H$ of a Lie group is associated with a Lie subalgebra $\mathfrak{h}$ of $\mathfrak{g}$. In general, a subgroup can be used to divide the group into a, possibly infinite, number of distinct cosets.
For the case of a Lie group $G$, which can in itself obviously be seen as geometric space, the collection of cosets of a Lie subgroup $H$ also can be seen as a geometric space, the coset space denoted by $G/H$, with each coset taken as an abstract point. A good picture of that has been presented in the main text. Note that coset spaces are not necessarily flat, {\em i.e.} may not be vector spaces, but always homogeneous, {\em i.e.} look the same from every single point inside. \end{appendices} \acknowledgments The authors wish to thank the students from their \textit{Group Theory and Symmetry} course at the National Central University, in which a preliminary version of the above material was first prepared as a supplementary lecture note. The work is partially supported by research grant number 109-2119-M-008-016 of the MOST of Taiwan.
\chapter{Introduction} \begin{quote} After all a man cant only do what he has to do, with what he has to do it with, with what he has learned, to the best of his judgment. And I reckon a hog is still a hog, no matter what it looks like. So here goes, \\ William Faulkner, \underline{Old Man} \end{quote} \par Man's desire to understand the behavior of nature at its most fundamental level has existed for several millennia. As technology has advanced, ever finer distance scales have been revealed. Accompanying these revelations have been revolutions in what has been considered fundamental, as displayed in Figure \ref{fig:c1f1}. \begin{figure}[t!] \centering \includegraphics[scale=0.75]{c1f1t} \caption{Illustrative sketch of the approximate number of elementary particles versus time.} \label{fig:c1f1} \end{figure} Revolutions may be characterized by the observation of a plethora of states considered to be fundamental, which are ultimately explained as being composed of a newer set of constituents. The theory currently in vogue is referred to as the $SU(3)_c \times SU(2) \times SU(1)$ Standard Model of the Strong and Electroweak interactions. It describes the interactions of twelve fundamental fermions (matter fields) via the exchange of gauge bosons belonging to three distinct forces (strong, electromagnetic, weak). It is an anomaly free, fully renormalizable theory, with an impressive record of performance. Yet, in its minimal form it has a total of 18 parameters; three gauge couplings, two parameters for the Higgs sector, nine masses for the fundamental fermions (three are expected to be massless), and three angles plus a phase which describe quark mixing in the weak interaction. The centerpiece of the standard model is the Higgs boson, which breaks the electroweak symmetry and provides the mass of the weak intermediate vector bosons and the fundamental fermions. The Higgs remains unobserved experimentally, and many of the parameters are not predicted by the theory. These must be determined experimentally (the Higgs mass is among them). We briefly recount the salient features of the fundamental fermions, and the three forces that govern their dynamics. Detailed expositions on the standard model can be found in several textbooks \cite{cpp,ql}. \section{Matter Fields} Symmetry pervades our classification of the fundamental fermions. The twelve are broken into two groups of 6, being the leptons and the quarks. Of the six in each family there are two types of particles which are distinguished by electric charge (-1,0 for leptons, $+ {2\over 3}, -{1 \over 3}$ for quarks). It is then convenient to group the $ -1 \choose 0$, $ + {2\over 3} \choose -{1 \over 3} $ pairs into generations. For the left-handed states each generation forms a natural isodoublet. This hierarchy is summarized in Table~\ref{t:1p1}. \begin{table}[t!] 
\centering \caption{Quark and Lepton Doublets} \begin{tabular}{|c|c|c|} \hline Generation & Quarks & Leptons\\ 1 & $ {u \choose d }_L$ & $ {\ifmmode{\mathchar"117}\else{$\mathchar"117$}\fi_e \choose e^- }_L$ \\ 2 & $ {c \choose s}_L$ & $ {\ifmmode{\mathchar"117}\else{$\mathchar"117$}\fi_{\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi} \choose \ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi^- }_L$ \\ 3 & $ {t \choose b}_L$ & $ {\ifmmode{\mathchar"117}\else{$\mathchar"117$}\fi_{\ifmmode{\mathchar"11C}\else{$\mathchar"11C$}\fi} \choose \ifmmode{\mathchar"11C}\else{$\mathchar"11C$}\fi^- }_L$ \\ \hline \end{tabular} \label{t:1p1} \end{table} The electrically neutral leptons are called neutrinos. Since neutrinos are thought to be massless, right-handed states are only allowed for anti-neutrinos. Thus the charged leptons form right-handed singlets, as do both types of electrically charged quarks. Despite the topological similarity, two rather profound differences exist between the quarks and leptons: \begin{enumerate} \item The first great distinction between the two families is that quarks carry color charge and are thus allowed to participate in the strong interaction. Since there are three types of color charge, there are in principle three times as many quarks as leptons. \item The second is that each lepton generation possesses a distinctive identity, such that the total numbers of leptons $(L_i = 1)$ and anti-leptons $(L_i = -1)$ from each doublet $(i = 1, 2, 3)$ are separately conserved for all known interactions. Quarks have a much weaker identity. The first generation forms an isospin doublet, while each of the remaining four flavors carries its own quantum number. The quark flavor quantum numbers are only preserved in the strong interaction. \end{enumerate} We shall now examine the ramifications of these differences. \section{Gauge Fields} \begin{table}[ht!] \centering \caption{Properties of Gauge Bosons} \begin{tabular}{|c|c|c|c|c|c|} \hline Force & Symmetry & Quanta & Mass & Coupling & Charge \\ \hline Strong & $SU(3)$ & 8 gluons (g)& No & $\alpha_s(Q^2)$ & color\\ \hline Electroweak & $U(1)$ & Photon ( \ifmmode{\ifmmode{\mathchar"10D}\else{$\mathchar"10D$}\fi}\else{$\ifmmode{\mathchar"10D}\else{$\mathchar"10D$}\fi$}\fi )& No & $e$ & No \\ \hline Electroweak & $SU(2)$ & \ifmmode{{\particletwo Z}}\else{{\particletwo Z}}\fi\ & 92.6 GeV &$ g = $ & No\\ Electroweak & $SU(2)$ & 2 \ifmmode{{\particleone W}^\pm}\else{{\particleone W}$^\pm$}\fi\ & 81.8 GeV & ${e / \sin\Theta_W}$ & electric \\ \hline \end{tabular} \label{t:1p2} \end{table} Identical fermions are prevented from occupying the same spacetime point; integer-spin particles (bosons) suffer no such restriction and are thus naturally associated with fields. Particles are subjected to a given force by exchanging the field quanta of that particular force. Table~\ref{t:1p2} presents the properties of the gauge bosons, and \begin{figure}[htp!] \centering \includegraphics[scale=0.65]{c1f2t} \caption{Feynman diagrams for the electric, strong, weak neutral, and weak charged interactions, and associated parameters.} \label{fig:c1f2} \end{figure} Figure~\ref{fig:c1f2} collects the Feynman rules for these interactions. The coupling constants for these forces are very different, obeying the approximate relationship $$ \rm Strong:EM:Weak \hskip .2in\ 1:{1\over 137}:10^{-5}$$ \par The dominant theme in the Standard Model is local gauge (or phase) invariance. The Lagrangian for any interaction is the particle physics analog of DNA.
It contains all the information and possible reactions for a given interaction. All Lagrangians are required to be invariant under the transformation $ e^{-i\alpha(x)}$. It is precisely the terms which arise in the Lagrangian to preserve local gauge invariance which ensure renormalizability and give each force its distinctive character. \subsection{Strong Interaction} Gluons themselves possess color charge, and thus not only transmit the strong force between colored objects but are also capable of interacting with themselves. The strong interaction is non-Abelian, and part of this is manifested in the fact that the coupling of the strong force is not constant, but changes with energy or 1/distance. To lowest order the coupling constant is: $$ \alpha_s(Q^2) = {12\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi \over (33 - 2n_f)\ln(Q^2/ \ifmmode{\mathchar"7003}\else{$\mathchar"7003$}\fi^2)} $$ where $n_f$ is the number of quark flavors and $\ifmmode{\mathchar"7003}\else{$\mathchar"7003$}\fi$ is the QCD scale parameter. This causes an effect referred to as asymptotic freedom. When quarks are probed at very short distances they appear almost free; however, as quarks are separated the force becomes so great that it is energetically favorable to pop another q\=q pair from the vacuum. Colored objects (i.e., quarks) have never been directly observed. They always combine in two's (q\=q) and three's (qqq) to produce colorless final states. These hadrons are very complex structures, consisting of the valence quarks, the gluons which bind them, and virtual (q\=q) pairs (or sea quarks). Deep inelastic scattering experiments, which measure the momentum fractions of composite objects, found that about 50\% of a proton's momentum is carried by neutral particles, substantiating the role of gluons as the hadronic ``glue.'' \begin{figure}[htp!] \centering \includegraphics[scale=0.65]{c1f3t} \caption{Momentum fraction of proton components versus $Q^2$ of the probe.} \label{fig:c1f3} \end{figure} Figure~\ref{fig:c1f3} shows a calculation of the momentum fraction of the components of a proton versus $Q^2$ of the probe, done by Eichten \cite{eicht}. The theory of color interactions is known as quantum chromodynamics (QCD). \subsection{Electroweak Interaction} In a master stroke, the weak and electromagnetic forces were unified in the same fashion as the electric and magnetic forces were unified by Maxwell. This is the so-called Weinberg-Salam $SU(2)_L \times U(1)_Y$ theory, which merited a Nobel prize. Here the four electroweak vector bosons are split, where the three weak vector bosons (two charged and one neutral) acquire a mass of order 100 GeV, and the photon remains massless. Because the weak bosons are massive, the range over which the weak force can be transmitted is greatly reduced. \par The exchange of the charged \ifmmode{{\particleone W}^\pm}\else{{\particleone W}$^\pm$}\fi\ bosons governs the transmutations of the quarks and leptons. The most general transformation consists of a quark undergoing a flavor- and charge-changing transition by emitting a virtual \ifmmode{{\particletwo W}}\else{{\particletwo W}}\fi\ boson. The \ifmmode{{\particletwo W}}\else{{\particletwo W}}\fi\ then fragments into either a particle-antiparticle pair from a lepton doublet or a quark pair. The pivotal factor is that no constraint such as lepton number exists for weak transitions of quarks. Thus a $ + {2 \over 3}$ quark could transform into any lighter $ - {1 \over 3}$ quark.
Similarly, the \ifmmode{{\particletwo W}}\else{{\particletwo W}}\fi\ may also fragment into quarks which do not belong to the same doublet. The quark ``mass'' eigenstates are not the eigenstates of the weak interaction. The weak eigenstates are obtained by the rotation of the $ - {1 \over 3}$ quarks $$ \left ( \matrix{ d' \cr s' \cr b' \cr } \right ) = \left( \matrix{ V_{ud} & V_{us} & V_{ub} \cr V_{cd} & V_{cs} & V_{cb} \cr V_{td} & V_{ts} & V_{tb} \cr} \right) \left ( \matrix{ d \cr s \cr b \cr } \right ) $$ $V$ is a $ 3 \times 3$ unitary matrix often referred to as the K-M matrix, after Kobayashi and Maskawa \cite{kmm} who first generalized the quark mixing matrix to three generations. It contains three angles and a phase for three generations of quarks. The parameterization of Chau and Keung \cite{chau} is shown in Figure~\ref{fig:c1f4}. \begin{figure}[htp!] \centering \includegraphics[scale=0.60]{c1f4t} \caption{Quark mixing matrix in the parameterization of Chau and Keung. The three angles are x, y, and z. $\rm s_x = \sin(x)$ and $\rm c_x = \cos(x)$.} \label{fig:c1f4} \end{figure} It has the approximate value $$ \left( \matrix{ 1 & s & s^3 \cr -s & 1 & s^2 \cr s^3 &-s^2 &1\cr} \right) \hskip 0.5in s \sim 0.23 $$ Off-diagonal transitions occur much less frequently, and are historically referred to as Cabibbo suppressed. This fortunate feature of weak decays lends a great deal of richness to the decays of heavy quarks. Flavor-changing neutral currents, however, are excluded by the unitarity of the K-M matrix. \section{The Role of Charm} The prediction of charm and the four quark mixing (Cabibbo) matrix were important progenitors to the three-generation picture of the standard model. Initially, three quarks $u,d,s$ formed an approximate flavor symmetry, and were used to classify the known states of the day. At that time the first two lepton generations were known, and the lepton-quark asymmetry was both technically and aesthetically displeasing to theorists. It became apparent that certain kaon decays, such as \ifmmode{{\particleone K}^0_L}\else{{\particleone K}$^0_L$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\ \ifmmode{\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi^+}\else{$\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi^-}\else{$\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi^-$}\fi\ were notoriously absent. Unitarity of the quark mixing matrix was first prognosticated by Glashow, Iliopoulos, and Maiani \cite{gim} who completed the Cabibbo mixing matrix, casting it in the form: $$ \pmatrix{ d' \cr s' \cr} = \pmatrix{ \cos\theta_c & \sin\theta_c\cr -\sin\theta_c & \cos\theta_c } \pmatrix{ d \cr s \cr}$$ This provided a clean mechanism to remove \ifmmode{{\particleone K}^0_L}\else{{\particleone K}$^0_L$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\ \ifmmode{\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi^+}\else{$\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi^-}\else{$\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi^-$}\fi\ (Figure~\ref{fig:c1f5}). \begin{figure}[htp!]
\centering \includegraphics[scale=0.65]{c1f5t} \caption{Cancellation of the decay \ifmmode{{\particleone K}^0_L}\else{{\particleone K}$^0_L$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\ \ifmmode{\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi^+}\else{$\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi^-}\else{$\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi^-$}\fi\ via the GIM mechanism.} \label{fig:c1f5} \end{figure} This prediction was confirmed some four years later by the observation of a c\=c bound state by groups at SLAC and Brookhaven. This represented the first major step in the proliferation of the heavy quarks, which now total two, with a third expected. The observation of the charmed quark was an event of such magnitude that the experimentalists who made the discovery were awarded a Nobel prize. This sparked off a flurry of activity in the study of charmed particles, which still continues briskly today. \chapter{Theory of the Production and Decay of Charmed Mesons} Here we review the progress made in understanding how charmed quarks hadronize and eventually decay. These are two entirely different mechanisms, and this is reflected in the paths taken in our theoretical understanding of these processes. Quark hadronization is theoretically intractable, and relies on extensive computer modeling and phenomenology. The theory of the decays of charmed mesons enjoys a (limited) quantitative stature; however, the predictive power of these theories has been poor. In both instances, theoretical advancement has relied heavily on experimental analysis. \section{Fragmentation Fundamentals} Experimentally, we have not observed free quarks. Fragmentation theory aims to describe the process through which an uncombined quark evolves into the hadrons which we observe in our detectors. This section aims to address the theoretical progress that has been made in understanding heavy quark fragmentation. It will emphasize the physics of charm quark fragmentation to charmed mesons in $e^+e^-$ collisions. \par One of the differences among the flavors is the quark mass. While this is in general an ill-defined concept since quarks cannot be observed in isolation, qualitative trends still exist. The estimated constituent (quark + surrounding gluons) quark masses obey the approximate proportions: $$m_u:m_d:m_s:m_c:m_b \rightarrow 1:1:1.4:5.1:14.9$$ The first three flavors are referred to as `light' quarks or q, and the rest as `heavy' quarks or Q. The most important effect of mass occurs during quark anti-quark production in the color field. When examined from a ``thermodynamic'' standpoint \cite{peterson}, quark pair production is governed by the expression $$ \rm rate \propto exp\left({- 2\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi c^2 \over kT} \right)$$ where $\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi$ is the quark mass and $T$ is the universal temperature, corresponding to approximately 160 MeV. This leads to the hadron production ratios \cite{hadi} $$ \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi$}\fi : \ifmmode{{\particletwo K}}\else{{\particletwo K}}\fi : \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi : \ifmmode{{\particletwo B}}\else{{\particletwo B}}\fi\ \simeq {1: 0.04 :10^{-6}:10^{-15} }$$ This suggests that heavy Q\=Q pairs are almost never produced in the hadronization process.
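The steepness of this thermal suppression is easy to see numerically. The following is a minimal Python sketch, not part of the original discussion; the constituent masses of roughly 0.35, 0.5, 1.6 and 4.7 GeV and the temperature $kT \simeq 0.16$ GeV are illustrative assumptions, and the output should be read as orders of magnitude only.
\begin{verbatim}
# Rough sketch of the "thermodynamic" pair-production suppression,
# rate ~ exp(-2 m c^2 / kT).  Masses and kT are illustrative guesses.
import math

kT = 0.160                                  # temperature scale in GeV
masses = {"u/d": 0.35, "s": 0.50, "c": 1.60, "b": 4.70}

rates = {q: math.exp(-2.0 * m / kT) for q, m in masses.items()}
light = rates["u/d"]
for q, r in rates.items():
    print("%4s  rate relative to u/d: %.2e" % (q, r / light))
\end{verbatim}
With these inputs the strange, charm and bottom rates come out at roughly $10^{-1}$, $10^{-7}$ and $10^{-24}$ of the light-quark rate; the precise numbers depend strongly on the assumed masses, but the qualitative conclusion that heavy pairs essentially never appear in the color field is unchanged.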
Our experience with the electromagnetic production of quark pairs points toward more of a threshold effect, where quark electric charge and not mass is the dominant factor in the fragmentation of a virtual photon. Gluons hadronize independently of quark flavor, and a sufficiently hard gluon spectrum could be a prominent source of heavy quark production. Charmed particle cross sections in p\=p collisions have been much larger than originally anticipated. The inclusive charmed particle cross section \cite{hadi} $\sigma_{incl}$ ranges from $\simeq 5 \times 10^{-1}$ mb at $s = 2 \times 10^2\ \rm GeV^2$ to $\simeq 1$ mb at $s = 1 \times 10^4\ \rm GeV^2$. This is to be compared with a pion cross section that remains essentially constant at $10^2$ mb. While this effect is not fully understood, QCD flavor-annihilation and flavor-excitation processes are the most likely explanation. At CESR energies, it is highly unlikely that radiative gluons are capable of producing c\=c pairs, and it is physically reasonable to assume that charmed quarks are only produced electromagnetically. \par The variation in mass will also introduce kinematical differences between light and heavy quark fragmentation. Attaching a light quark to a moving heavy quark will not decelerate it very much. Thus hadrons formed from a primary heavy quark are expected to be `hard', retaining a large fraction of the original quark's momentum. This is in contrast to light quark fragmentation, which is `soft.' A particularly convenient way to observe hadronization is the reaction $${e^+}{e^-} \rightarrow \ifmmode{\mathchar"10D}\else{$\mathchar"10D$}\fi^\ast,Z^0 \rightarrow f\bar f$$ where $f$ is a fermion. Electromagnetically, q\=q pairs are produced relative to muon pairs by $3e^2_q$ where $e_q$ is the quark charge. Thus heavy quarks are produced, above threshold, as often as light quarks with the same charge. If heavy quarks are not produced during hadronization, a reconstructed heavy hadron can be directly compared to the primary Q that may have caused it. One exception to this, of course, would be heavy quarks produced in the cascade decays of even heavier quarks. The kinematics of these decays are well known and the contribution to the fragmentation data of these hadrons can be sorted from those of primary Q\=Q pairs. Heavy quark production is not limited to $e^+e^-$ collisions. Important early results in charm fragmentation were gleaned from $\ifmmode{\mathchar"117}\else{$\mathchar"117$}\fi N \rightarrow \rm charm$ reactions and leptoproduction \cite{chirin}, but these will not be discussed here. \par Fragmentation of quarks into hadrons is not well understood. The principal reason for this is that the process is very violent, and takes place where $\alpha_s(Q^2)$ is large. We are therefore unable to apply perturbative QCD to do any calculations. We have circumvented this by proposing a probability function $D^H_q(z)$, which describes the probability that a hadron H will be found in a jet formed by the fragmentation of q with a fraction z of the original quark's energy-momentum. These are the so-called fragmentation functions. A number of different forms for these fragmentation functions have been put forth, based on a variety of ideas. This section will attempt to illuminate the various approaches that have been used to describe fragmentation.
Some of the fundamental tools needed to understand fragmentation will also be described. There are three scenarios which have been put forward to describe fragmentation. These are Independent Fragmentation (IF), String Fragmentation (SF), and, more recently, Cluster Fragmentation (CF). IF was proposed by Field and Feynman \cite{fefe} as a first-guess model of jet formation. Its central theme is that all partons fragment independently of each other. The SF model, based on an idealization of the color field, was due initially to Artru and Mennessier \cite{artu}. Their work was highly embellished by the Lund \cite{ander} group and provides the foundation for many of the Monte Carlos currently employed in high energy physics. A promising development, CF is based on leading log QCD. Fragmentation takes place as a QCD shower generated as an off-shell parton comes on shell. \subsection{Independent Fragmentation} IF is a relatively uncomplicated idea. It is described by the diagram shown in Figure~\ref{fig:c2f1}. \begin{figure}[htp!] \centering \includegraphics[scale=0.65]{c2f1t} \caption{Independent fragmentation.} \label{fig:c2f1} \end{figure} A $q_1\bar{q_1}$ pair is produced in the color field of uncombined quark $q_0$ moving through spacetime. The initial quark combines with the appropriate anti-quark to form a meson ($q_0\bar{q_1}$) while the remaining quark $q_1$ is left uncombined to continue the fragmentation process. A serious problem with this concept is that there is eventually one quark left over, which creates difficulties in terms of flavor and four-momentum conservation. This is dealt with in Monte Carlo at the end of event generation by globally imposing energy and momentum conservation throughout the entire event. Despite its counter-intuitive nature, IF was unsophisticated and readily available, and so became popular. A number of its deficiencies, particularly the way it handles gluons, have made it less popular today in light of the better models available. \subsection{String Fragmentation} Because the field quanta of QCD, the gluons, carry color charge, the field lines of the color field collapse into a flux tube or `string' between quarks. String models are defined in 1$+$1 dimensions, \textit{i.e.} 1 time and 1 space, or 1 energy and 1 momentum. The primary parameter is the string tension constant $\kappa$, which has the approximate value $\kappa \approx 0.2\ {\rm GeV}^2 \approx 1\ \left( {\rm GeV} \over {\rm fermi} \right)$. The key equations of SF are $$ \left( dp \over dt \right) = \pm \kappa $$ $$ \left(dE \over dx \right) = \kappa$$ where the $+$ and $-$ refer to the string pulling the parton to the right and left, respectively. These two equations can be integrated and combined with the Lorentz invariant expression $E^2 = p^2 + m^2$ to yield $$ \left(x-x_0\right)^2 -\left(t-t_0\right)^2 = \left({\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi \over \kappa} \right)^2 \eqno(1)$$ Thus, massive quarks move along hyperbolas in spacetime with light cone asymptotes. `Massless' quarks would move along the light cone. There are several important questions that need to be addressed: what determines how and when the string breaks, and is there any relation between the color field and the mass of the hadron formed? The first question is a pivotal issue with SF, and we will see later how different approaches to this question lead to slightly different fragmentation functions. \subsection{Cluster Fragmentation} A third and relatively new approach to fragmentation is called Cluster Fragmentation.
It is based on leading log QCD and the Altarelli-Parisi formalism. Each branching is considered to be an independent event, and the parton comes more on shell at each branch. The classical branching probabilities are given by $$P_{q\rightarrow qg} = {4\over 3} {\left(1 +x^2 \over 1-x\right)} $$ $$P_{g\rightarrow gg} = 3\left[ {1-x \over x} + {x \over 1-x} + x\left(1-x\right) \right] $$ $$P_{g\rightarrow q\bar q} = \left[x^2 + (1-x)^2\right]$$ where $P_{a \rightarrow bc}$ represents the probability that $b$ will retain a fraction $x$ of $a$'s momentum. Work has been done in this area by Field and Wolfram \cite{fiewo}, by Gottschalk \cite{gotts}, and, most popular, by Webber \cite{webbe}. The shower is terminated at the point at which the parton reaches the appropriate quark mass or cutoff $Q_o$ for gluons (to prevent infrared divergences). At this point all the gluons are split into q\=q pairs and color flow is regulated to produce colorless clusters which decay through phase space. The Webber model has the advantage of requiring only a few parameters: $\ifmmode{\mathchar"7003}\else{$\mathchar"7003$}\fi_{QCD}, M_{max}, m_u,m_d,m_s,Q_o $. Additional features of the Webber model include successively reducing the opening angles of parton emission to account for leading interference effects, and decaying clusters exceeding some $M_{max}$ as a string. Initially, c and b quarks decayed weakly before the clusters were formed. Now c quarks are kept in the shower and form charmed clusters which decay into $D^{\ast}$ and a $\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi$ or K. While this is not directly suited to predicting charm yields, charm fragmentation is generated directly via the cluster decay algorithm. While Cluster Fragmentation is still in the developmental phase, it has shown itself able to compete with SF and IF. The purpose of this cursory treatment of CF is to demonstrate that we may be finally approaching the point where fragmentation is able to proceed naturally out of our Monte Carlo models without the imposition of phenomenological kludges. \subsection{Reciprocity} The reciprocity relation is often invoked as a check on fragmentation functions. It is essentially a boundary condition that proposes $$ D^H_q(z)_{z \rightarrow 1} \Rightarrow f^q_H(x)_{z \equiv {1\over x} \rightarrow 1}$$ What this means is that the fragmentation function of a hadron containing virtually all of the original quark's momentum should be equal to the structure function of a hadron where that quark possesses a momentum fraction that approaches one. This can be understood in that as a quark obtains a large momentum fraction, the remaining quarks in the hadron must compensate for this by giving up their momentum. As $x \rightarrow 1$, the other valence quarks must be almost at rest. If this reaction is reversed in time, one obtains fragmentation. If reciprocity holds, the appropriate question is how the structure functions behave as $x \rightarrow 1$? Structure functions are on a somewhat better theoretical footing than fragmentation functions, but the answer is still uncertain. The `standard' textbook formula \cite{ql}[p. 200] is $$ f^q_H(x)_{x \rightarrow 1} = \left(1 - x\right)^{2n_s-1} \eqno(2)$$ where $n_s$ is the number of spectator valence quarks. This relation predicts $(1-x)$ for mesons and $(1-x)^3$ for baryons.
An alternate \cite{gplp} form is $$ f^q_H(x)_{x \rightarrow 1} = \left(1 - x\right)^{2n_s-1+2\Delta S} \eqno(3)$$ The $\Delta S$ term is the absolute difference between the initial hadron spin and the quark spin $({1\over 2})$. This conflicts with the above expression in the prediction for mesons, yielding $(1-x)^2$. As we shall see, almost all the fragmentation functions show a $(1-z)$ behavior as $z \rightarrow 1$, agreeing with the first prediction. \subsection{Odds and Ends} Finally, we present three items that will be needed in order to proceed coherently to the next section. These are the meaning of z, the light-cone variables, and some elementary relations of fragmentation functions. Fragmentation functions are parametrized in terms of `z', but what exactly is z? It is meant to describe the fraction of energy-momentum or momentum of the primary quark retained by the hadron in the fragmentation process. Two definitions that appear in the literature are $$ z^+ \equiv {\left(E + p_{\parallel}\right)_{had}\over \left(E + p\right)_{quark} } $$ and $$ z_p = \left({p_{had}\over p_{quark}}\right)$$ The first definition is Lorentz invariant for boosts along the quark direction, hence is more desirable theoretically. This is true because $L(E \pm p) = f^{\pm}(\beta)(E \pm p)$, where $f^{\pm}(\beta)$ is a constant that is a function of the boost parameter $\beta$. These constants trivially cancel in numerator and denominator. We adopt the convention that z will refer to the first definition. Unfortunately, the kinematical variables E and p of the quark are unavailable to the experimentalist. Fragmentation functions are measured in terms of x, where x also has two definitions $$x_E \equiv \left( {E_{had}\over E_{beam} } \right) $$ $$x_p \equiv \left( {p_{had}\over \sqrt{E^2_{beam} - {m}^2_{had} }}\right)$$ Both $x_E$ and $x_p$ have been used by experimentalists, although $x_p$ has the advantage of ranging from 0 to 1 for all experiments. The variables x and z are not equivalent. In general $z\geq x$ because perturbative QCD gluon radiation and initial state photon radiation tend to make $E_{quark} \leq E_{beam}$. The light cone formalism is a handy tool when working with fragmentation. $$ x^\pm \equiv x^0 \pm x^3 $$ $$ p^\pm \equiv p^0 \pm p^3 $$ $p_{\perp}$ is absorbed in the mass term. As mentioned earlier these combinations transform trivially under Lorentz transformations, making frame transformations straightforward. They have the added convenience that $$ z \equiv {\left(E + p_{\parallel}\right)_{had}\over \left(E + p\right)_{quark} } \equiv \left(p^+_{had}\over p^+_{quark} \right) $$ when $p_\perp$ is neglected. Thus, the light cone variables lend themselves to a more natural expression of z. One is still faced with a dilemma as to what the denominator should be. It has been suggested \cite{galh} that $$ p^+_{quark} \simeq E_{max} + p_{max} = E_{beam} + \sqrt{E^2_{beam} - {\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi}^2}$$ However, it is not obvious that this is a physically reasonable assumption. Measurements of fragmentation functions often appear as plots of $s\left( {d\sigma \over dz}\right)$ vs. $x_E$ or $x_p$.
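As a concrete, and entirely illustrative, example of the distinction between these variables, the short Python sketch below takes a D$^*$-like hadron of mass 2.01 GeV with an assumed momentum of 3.5 GeV at a beam energy of 5.29 GeV, and evaluates $x_E$, $x_p$ and the light-cone $z^+$ under the crude assumption for $p^+_{quark}$ quoted above with a charm mass of 1.6 GeV; none of these numbers come from data.
\begin{verbatim}
# Illustrative fragmentation kinematics (all energies/momenta in GeV).
import math

E_beam = 5.29            # assumed beam energy
m_had  = 2.010           # D* mass
m_Q    = 1.6             # assumed charm quark mass
p_had  = 3.5             # made-up hadron momentum for illustration

E_had = math.sqrt(p_had**2 + m_had**2)

x_E = E_had / E_beam
x_p = p_had / math.sqrt(E_beam**2 - m_had**2)

# crude light-cone z+, with p+_quark ~ E_beam + sqrt(E_beam^2 - m_Q^2)
p_plus_quark = E_beam + math.sqrt(E_beam**2 - m_Q**2)
z_plus = (E_had + p_had) / p_plus_quark

print("x_E = %.3f, x_p = %.3f, z+ = %.3f" % (x_E, x_p, z_plus))
\end{verbatim}
For this particular point the three variables differ at the several-percent level, which is the sort of spread the definitions above imply.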
The master equation for fragmentation in $e^+e^-$ collisions is \cite{qp} $$ \left( {1\over \sigma_{had}} \right) {d\sigma \left(e^+e^- \rightarrow HX\right) \over dz} ={ \ifmmode{\mathchar"7006}\else{$\mathchar"7006$}\fi_i e_i^2 \left[D^H_i(z) + D^H_{\bar i}(z)\right] \over \ifmmode{\mathchar"7006}\else{$\mathchar"7006$}\fi_i e^2_i }$$ where $i$ sums over the quark flavors participating in the reaction. The fragmentation functions of both $i$ and $\bar i$ are summed over if the hadron H could be produced in the jets of $q_i$ and $\bar q_i$. Applying $\sigma_{had} = \sigma_{\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi \ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi} \left( 3\ifmmode{\mathchar"7006}\else{$\mathchar"7006$}\fi_i e^2_i \right)$ and $ \sigma_{\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi \ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi} = \left( {4\over 3}{ {\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi \alpha^2}\over s}\right)$, this can be recast into $$ s \left( {d\sigma \over dz} \right) = 4 \ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi \alpha^2 \ifmmode{\mathchar"7006}\else{$\mathchar"7006$}\fi_i e^2_i \left[D^H_i(z) + D^H_{\bar i}(z) \right] $$ which explicitly shows the proportionality of $s\left( {d\sigma \over dz} \right)$ to the fragmentation functions. For the production of heavy mesons $H_Q$ = $(Q\bar q)$ the form is slightly different. $H_Q$ can only be found in the debris of Q since heavy quarks are not produced in the color field. The first equation becomes $$ \left( {1\over \sigma_{had}} \right) {d\sigma \left(e^+e^- \rightarrow H_Q X\right) \over dz} ={ 3e_Q^2 D^{H_Q}_Q(z) \over \ifmmode{\mathchar"7006}\else{$\mathchar"7006$}\fi_i e^2_i }$$ \section{Fragmentation Functions} FF arise out of our basic inability to calculate hadronization from perturbative QCD. None of them represent a great tribute to theoretical physics, and actually only one has received considerable attention from experimentalists. We analyze five different functions: two based on IF, two on SF, and one derived from the reciprocity relation. An important fact to bear in mind is that these functions describe the initial fragmentation of a primary heavy quark Q into a heavy hadron $H_Q$ $(Q\bar q)$ containing Q. The gross qualitative features of heavy quark fragmentation were predicted by several theorists in the late 70's. Bjorken \cite{bjork}, on the basis of simple kinematics, arrived at $$ <z_p> \sim 1 - \left( {1\ {\rm GeV} \over {\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi} } \right) $$ where $\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi$ is the quark mass and 1 GeV is a constant chosen for `didactic convenience.' His argument was that ordinary hadrons were produced with $\ifmmode{\mathchar"10D}\else{$\mathchar"10D$}\fi$ less than or of the order of that of the primary Q. Since $p = \ifmmode{\mathchar"10D}\else{$\mathchar"10D$}\fi m v$, the heavier objects would take a larger fraction of the available momentum. \subsection{Functions Based on Reciprocity} One of the first charm fragmentation functions was that of Kartvelishvili \cite{kart}. Their efforts were essentially directed toward computing the structure function of a heavy charmed meson using the Kuti-Weisskopf \cite{kuti} model, and connecting that to the fragmentation function via reciprocity.
They arrived at the structure function: $$ f^c_{H_c}(z) = { {\Gamma(2 + \ifmmode{\mathchar"10D}\else{$\mathchar"10D$}\fi -\alpha_c -\alpha_q)z^{-\alpha_c} (1-z)^{\ifmmode{\mathchar"10D}\else{$\mathchar"10D$}\fi - \alpha_q} } \over {\Gamma(1-\alpha_c) \Gamma(1 + \ifmmode{\mathchar"10D}\else{$\mathchar"10D$}\fi - \alpha_q) } } $$ where $\ifmmode{\mathchar"10D}\else{$\mathchar"10D$}\fi$ is a constant that equals $3\over 2$ and $\alpha_c$ is the intercept of the Regge trajectory for the c quark. $\alpha_q$ is the intercept of the light quark Regge trajectory, set to $1\over 2$ based on the $\ifmmode{\mathchar"11A}\else{$\mathchar"11A$}\fi$, $\ifmmode{\mathchar"121}\else{$\mathchar"121$}\fi$, $A_2$, and $f$. This expression can be used to calculate the average momentum carried by the valence quarks $$<z_c> = { {1- \alpha_c}\over {2 + \ifmmode{\mathchar"10D}\else{$\mathchar"10D$}\fi - \alpha_c -\alpha_q} }$$ Their rationale was that since the structure function peaks at high z, the structure function can be set equal to the fragmentation function at all z. As can be easily seen from the expression for the structure function, the fragmentation function becomes $$D{^H_Q}(z)=Nz^{-\alpha_c} (1 - z)$$ The parameter $\alpha_c$ was unknown; the initial guess was $-3$. In a later and often referenced paper \cite{kart2} they chose $13.44 z^{2.2}(1-z)$ without explanation. It certainly follows, though, from the logic of their earlier work. Claiming that reciprocity is valid at all z (or at least from 0.6 to 1) is certainly dubious and must raise doubts as to the credibility of the Kartvelishvili function. The Kartvelishvili function is based on the structure functions of charmed mesons. Differences between charmed meson and baryon structure functions would have to be accounted for before it could properly be applied to charmed baryons. \subsection{Functions Based on Independent Fragmentation} The most celebrated heavy quark FF is that of Peterson {\it et al.} \cite{peter2}. Its success is based on the simplicity of its form and its flexibility. The function is based on quantum mechanics and IF. Basically, a q\=q pair is produced in the color field of a heavy quark Q. Q then combines with the appropriate anti-quark, and one quark is left uncombined to continue the fragmentation process. The rate goes as ${\Delta E}^{-2}$, the difference in energy between the initial heavy quark state and the final state hadron + q, $$\Delta E = \sqrt{ m_H^2 + z^2p^2} + \sqrt{m_q^2 + (1-z)^2p^2} - \sqrt{m_Q^2 +p^2} $$ Approximating $m_H = m_Q$ and after a binomial expansion we obtain $$ \Delta E \propto 1- {1\over z} - {\epsilon_Q \over 1-z} $$ Squaring, and tacking on a factor of $1 \over z$ for longitudinal phase space produces the well-known result $$D{^H_Q}(z) = {N \left(z\left[1-{1 \over z}-{\epsilon_Q\over(1-z)}\right] ^2\right)^{-1}}$$ The parameter $\epsilon_Q$ is proportional to $m_{q\perp}^2/m_{Q\perp}^2$, and the transverse mass is defined as ${m_\perp}^2 \equiv {m^2 + {p_\perp}^2}$. As defined, z is the fraction of the momentum of the heavy quark Q retained by the hadron. The binomial expansion is a delicate issue, especially in the region $z \rightarrow 0$. It is mentioned in the paper that the expansion is strictly valid in the $p \rightarrow \infty$ limit. The function peaks at $$ z_{peak} \simeq 1 - 2\epsilon_Q $$ with width $\simeq \epsilon_Q$.
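A minimal numerical sketch (not part of the original derivation) of the Peterson form is given below; $\epsilon_c = 0.15$ is an illustrative value rather than a fitted one, and the function is normalized numerically so the quoted peak position can be checked against $1 - 2\epsilon_Q$.
\begin{verbatim}
# Peterson et al. fragmentation function, normalized numerically.
# eps_c = 0.15 is an illustrative choice, not a fitted value.
import numpy as np

def peterson(z, eps):
    return 1.0 / (z * (1.0 - 1.0 / z - eps / (1.0 - z)) ** 2)

eps_c = 0.15
z = np.linspace(1e-3, 1.0 - 1e-3, 100000)
f = peterson(z, eps_c)
f /= np.trapz(f, z)                      # normalize to unit integral

print("peak at z = %.3f (compare 1 - 2*eps = %.3f)"
      % (z[np.argmax(f)], 1.0 - 2.0 * eps_c))
print("<z> = %.3f" % np.trapz(z * f, z))
\end{verbatim}
With this $\epsilon_c$ the numerical peak sits slightly below $1 - 2\epsilon_Q$, as one would expect for a value of $\epsilon_Q$ that is not very small.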
The function was not initially derived in terms of the light cone variables, but it is pointed out that light cone variables may be more appropriate at finite energies, and the above expression may be carried over provided it is cut off below ${p^+}_{min}$. It is reasonable to assume that a di-quark pair could have been popped and ${{m_q}_\perp}^2 \rightarrow {{m_d}_\perp}^2 $ in the formula for $\epsilon$. Both the Peterson and Andersson functions contain a term that depends on $m_\perp$. Often the $p_\perp$ is neglected and $m_\perp \rightarrow m$. A consistent interpretation needs to be arrived at. The Peterson function goes as $(1-z)^2$ as $z \rightarrow 1$. This is in disagreement with reciprocity using Eq. $2$. This potentially disquieting fact motivated some recent work by Collins and Spiller \cite{collin} who set out to produce a fragmentation function that did not `violate' reciprocity and dimensional counting rules. They calculate the cross section $\sigma\left({e^+}{e^-} \rightarrow HX \right)$ which depends on the fragmentation function $D^H_Q(z)$ . They use Independent Fragmentation and compute the vertex $Q\rightarrow Hq$. This depends on the transverse momentum distributions of H and q with respect to the Q direction, and the longitudinal momentum distributions. The longitudinal momentum distribution, they claim, happens to be equal to the function which describes the momentum distributions of the two valence quarks in a meson. The final result with the approximations $ m_H = m_Q \gg <k_T^2 >$ $$D{^H_Q}(z{^+}) \simeq N\left({1-z{^+}\over z{^+}}+ {2-z{^+}\over 1-z{^+}} {\varepsilon_Q}\right)(1+z{^+}^2){\left(1-{1 \over z{^+}}-{{\varepsilon_Q} \over(1-z{^+})}\right)}^{-2}$$ We have elected to use $z^+$ to indicate functions explicitly derived in terms of light cone variables. The parameter $\varepsilon_Q$ is defined $\equiv \left( <k{^2_T}> \over m{^2_Q}\right)$. $<{k_T}^2> = (0.45 \ GeV)^2$ `represents the size of the hadron in momentum space'. A Peterson like term is apparent, and the two functions are noticeably similar. This function, in contrast to the Peterson function, exhibits a $(1-z^+)$ behavior as $z^+ \rightarrow 1$. It is uncertain how this function would differ for \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ and \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ mesons, since it is unclear if $k_T$ is a constant or depends on $u,\ d\ {\rm or}\ s$ quarks. This function would not be directly suitable for baryons. Like the Kartvelishvili function, it is partially derived from the structure function of mesons. They have also proposed a kernel for $\rm vector\Rightarrow pseudoscalar$ decays, to allow calculation of inclusive fragmentation functions. This is an important problem when attempting to determine \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ or \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ fragmentation, since the feed down from \ifmmode{{\particleone D}^*}\else{{\particleone D}$^*$}\fi\ must be taken into account. 
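Before turning to that kernel, the Collins--Spiller form itself can be sketched numerically in the same spirit as the Peterson example above. In the following Python fragment the charm mass of 1.6 GeV is an assumption, and $\varepsilon_Q$ is built from the quoted $<k_T^2> = (0.45\ {\rm GeV})^2$.
\begin{verbatim}
# Collins-Spiller fragmentation function in the light-cone variable z+,
# normalized numerically.  m_c = 1.6 GeV is an assumed value.
import numpy as np

def collins_spiller(z, eps):
    return (((1.0 - z) / z + (2.0 - z) / (1.0 - z) * eps)
            * (1.0 + z ** 2)
            / (1.0 - 1.0 / z - eps / (1.0 - z)) ** 2)

eps = 0.45 ** 2 / 1.6 ** 2               # <k_T^2> / m_Q^2
z = np.linspace(1e-3, 1.0 - 1e-3, 100000)
f = collins_spiller(z, eps)
f /= np.trapz(f, z)

print("eps = %.3f, peak at z+ = %.3f, <z+> = %.3f"
      % (eps, z[np.argmax(f)], np.trapz(z * f, z)))
\end{verbatim}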
Collins and Spiller calculate the fragmentation function for the decay $ H^{\ast}_Q \rightarrow H_Q \ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi$, z being the fraction of momentum retained by $H_Q$ $$ P_Q(z) = {N\over z(1-z)\left( {m^2_{H^\ast} \over <k_T^2>} - { {m^2_H + <k_T^2> } \over z} - { {m^2_{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi} + <k_T^2>} \over 1-z} \right)^2 }$$ The inclusive fragmentation is then given by $$ D^{H_Q}_Q(z) = \delta D^{H_Q}_Q(z) + \left(1-\delta \right) \int_z^1 { dy \over y} D^{H^{\ast}}_Q(y) P_Q({z\over y}) $$ $0 \leq \delta \leq 1$ is the fraction of direct pseudoscalar production. The first term represents direct pseudoscalar production and the second the feed-down from vector mesons. \subsection{Functions Based on String Fragmentation} The function of Andersson \cite{ander2} \textit{et al.}\ is based on the symmetric Lund String Fragmentation model. The symmetric Lund function assures that starting at either the q or \=q end and iterating will produce the same result. The left-right symmetry is a non-trivial property of an iteratively generated cascade. The Andersson function is derived by solving a set of coupled equations that demand this property. The resulting form is $$D{^H_Q}(z{^+})={N\over z{^+}}(1-z{^+})^a \exp({-b \cdot {{m_H}_\perp}^2\over z{^+}} )$$ where $a$ and $b$ are constants related to the production of q\=q vertices in spacetime. The constant $b$ is flavor independent, while $a$ may depend on flavor. The above expression assumes all $a$'s are equal. The quantity ${m_H}_\perp$ is the transverse mass of the hadron produced. It is anticipated that $ 0\leq a \leq 2$ and $b \geq 0$, with the current values set at $a \sim 1$ and $b \sim \left(1 \over 2.25\right)$. The mean is predicted to be $$ <z^+> \simeq 1 - { {(a +1)} \over {bM^2} } $$ $M$ is the mass of the meson produced. The Andersson expression was derived using massless quarks moving along light cones. It is expected to carry over to an initial heavy Q\=Q pair, which moves along hyperbolas with light cone asymptotes. A definite difficulty with this function is that it differentiates light and heavy quark fragmentation only by the mass of the hadron formed. Although it is derived in the full glory of the string model, it has to be recognized that symmetry was the main impetus for this function. An event generated using SF has the following structure (Figure~\ref{fig:c2f2}): \begin{figure}[htp!] \centering \includegraphics[scale=0.55]{c2f2t} \caption{Models of string fragmentation. The top diagram (a) shows a model where the energy stored in the quark flux tube becomes so great that additional quark pairs are produced. The bottom (b) diagram shows a light cone view.} \label{fig:c2f2} \end{figure} q\=q vertices appear in spacetime in the light cone of the initial $q_0 \bar q_0$ pair. When quark world lines cross, a hadron is formed. The hadron mass is proportional to the area of the color field between the quarks (hatched region). We know from Eq. $1$ that massive objects must move along hyperbolas in spacetime. In order to produce a hadron of mass $m$, the secondary q\=q pair must lie on this hyperbola. The heart of the Lund model is that when a string breaks to produce a hadron of mass $m$, vertices of the secondary q\=q will be uniformly distributed along the production hyperbola. In this scheme, quarks fragment into hadrons with a discrete mass spectrum. An alternative point of view was taken by Bowler \cite{bowl} who chose to break the string in a more classical way.
The probability $dP$ of a break occurring at spacetime coordinates $(x + dx,t + dt)$ is $$ dP = \Pi dx dt $$ $\Pi$ is a constant per unit length of string and unit time. The function is derived utilizing the basic equations of the string model. The primary Q\=Q pair is massive and the secondary quark pairs are massless. Two sets of coordinates are considered in the fragmentation process: $(x_1,t_1)$, the point where the secondary q\=q pair is produced, and $(x_m,t_m)$, the coordinates where the world lines cross and the hadron is formed. The derivation is straightforward; applying $$ dP = \Pi dA \exp(-\Pi A) $$ the probability of creating the secondary pair within $dA$, where $A$ is the area of the color field in the absolute past of the point $(x_m,t_m)$, gives $$D{^H_Q}(z)=\left(B\over z\right)\exp(-Bm{^2_Q} ( {m{^2_H}\over m{^2_Q}z} - 1 - \ln( {m{^2_H}\over m{^2_Q}z})))$$ $B$ equals $\left(\Pi / 2k^2\right)$, $\Pi$ being the constant probability that the string will break and $k$ the string constant. The function has recently been modified \cite{galh} to include the light cone formalism and to be made more suitable for CESR energies. Bowler also later suggested the addition of the term $ (1-z)^{\beta}$ to account for the fact that the string is not `straight' due to soft gluon emission. The modified Bowler function is: $$D{^H_Q}(z{^+})=(1-z{^+})^{\beta}\left(B\over z{^+}\right)\exp(-Bm{^2_Q} ( {m{^2_H}\over m{^2_Q}z{^+}} - 1 - \ln( {m{^2_H}\over m{^2_Q}z{^+}})))$$ It is expected that $\beta$ will be close to 1. One criticism of this approach is that the Bowler function is singular as $m_H \rightarrow 0 $ and would highly favor zero-mass mesons unless a low-mass cutoff is imposed. \subsection{Comparison of the Functions} \begin{table}[ht!] \centering \caption{Properties of Fragmentation Functions} \begin{tabular}{|c|c|c|c|c|c|} \hline Function & Model & Reciprocity & Variable & Baryons & $ z\rightarrow 1$ \\ \hline Kartvelishvili & - & Yes & CT & No & $(1-z)$ \\ \hline Peterson & IF & No & CT,LC & Possible & $(1-z)^2$ \\ \hline Collins & IF & Yes & LC & No & $(1-z)$ \\ \hline Andersson & SF & No & LC & ? & $ \sim (1-z)$ \\ \hline Modified Bowler& SF & No & LC & ? & $\sim (1-z)$\\ \hline \multicolumn{3}{|c|}{ LC = Light Cone} & \multicolumn{3}{c|}{ CT = Cartesian} \\ \hline \end{tabular} \label{t:2p1} \end{table} Table~\ref{t:2p1} presents the collected features of the various fragmentation functions. The two functions derived from IF are similar, as are the two based on SF. There are differences between the two models. The most distinct difference is the exponential term in the SF functions. This causes the fragmentation function to go to 0 prematurely. SF also appears to be flatter, but this feature strongly depends on $\epsilon$ for the IF functions. We show the Andersson and Bowler functions in Figure~\ref{fig:c2f3} and the Collins and Peterson forms in Figure~\ref{fig:c2f4}. \begin{figure}[htp!] \centering \includegraphics[scale=0.60]{c2f3t} \caption{String inspired functions. Andersson fragmentation function (solid line), and Modified Bowler function (dash-dotted line).} \label{fig:c2f3} \end{figure} \begin{figure}[htp!] \centering \includegraphics[scale=0.55]{c2f4t} \caption{Independent fragmentation inspired functions. Collins function (dash-dotted line), and the Peterson function (solid line).} \label{fig:c2f4} \end{figure} Another difference is the $z \rightarrow 1$ behavior, where reciprocity can be used as a guide. All the functions, save the Peterson, lean toward a $(1-z)$ behavior; a rough numerical sketch of the two string-inspired forms is given below.
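The following fragment evaluates the two string-inspired forms in the same way as the IF examples above; all parameter values ($a=1$, $b=1/2.25\ {\rm GeV}^{-2}$, $B=0.5$, $\beta=1$, $m_{H\perp}\simeq m_{D^*}$, $m_Q=1.6$ GeV) are illustrative assumptions, chosen only to show the qualitative shapes behind Figure~\ref{fig:c2f3}.
\begin{verbatim}
# Lund/Andersson and modified Bowler fragmentation functions,
# normalized numerically.  All parameter values are illustrative.
import numpy as np

m_H, m_Q = 2.010, 1.6        # assumed D* and charm masses (GeV)
a, b     = 1.0, 1.0 / 2.25   # Lund parameters
B, beta  = 0.5, 1.0          # Bowler parameters

def andersson(z):
    # p_perp neglected, so m_Hperp is taken as m_H
    return (1.0 / z) * (1.0 - z) ** a * np.exp(-b * m_H ** 2 / z)

def bowler_mod(z):
    r = m_H ** 2 / (m_Q ** 2 * z)
    return ((1.0 - z) ** beta * (B / z)
            * np.exp(-B * m_Q ** 2 * (r - 1.0 - np.log(r))))

z = np.linspace(1e-3, 1.0 - 1e-3, 100000)
for name, func in [("Andersson", andersson), ("mod. Bowler", bowler_mod)]:
    f = func(z)
    f = f / np.trapz(f, z)
    print("%-12s peak z = %.3f, <z> = %.3f"
          % (name, z[np.argmax(f)], np.trapz(z * f, z)))
\end{verbatim}
Both forms cut off sharply at small $z$ through the exponential term, which is the premature fall-off noted above.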
The Peterson function has a $(1-z)^2$ behavior as $z \rightarrow 1$. From an experimental perspective, the low z region is difficult to measure because of poor efficiencies of low-momentum tracks. For the high z end, a large amount of statistics would be necessary to accurately determine the power dependence of the curves. So experimental clarification of these issues will only come with great effort. A detailed comparison of these five functions to CLEO's \ifmmode{{\particleone D}^{*+}}\else{{\particleone D}$^{*+}$}\fi\ fragmentation distribution can be found elsewhere \cite{frag}. One of the more serious shortcomings of IF stems from the so-called string effect \begin{figure}[htp!] \centering \includegraphics[scale=0.7]{c2f5t} \caption{A qqg event in SF (a) and IF (b) models.} \label{fig:c2f5} \end{figure} (Figure~\ref{fig:c2f5}). In IF models all the partons fragment independently, while in SF models a color ``web'' stretches from the quarks to the gluons. This acts to increase the density of hadrons in the vicinity of the gluon direction. This has been observed experimentally \cite{bart}. While the exact cause of this effect in SF Monte Carlos is unclear, it cannot be accommodated in IF schemes. This, along with other undesirable features, has all but terminated interest in IF as a viable \ifmmode{e^+e^-}\else{$e^+e^-$}\fi\ fragmentation model. \section{Theoretical Approaches to Charm Decay} Here we briefly recount the evolution of our understanding of the weak decay of charmed mesons. The weak force has supplied the physics community with a great deal of beauty and bedazzlement, with the charm sector being no exception. This field also attests to the vital interplay of experimental and theoretical physics, as theoretical predictions for charm decays have been consistently defied by experimental observations. Here we trace our understanding of charm decays from the failings of the simple spectator model to more advanced approaches. We also detail outstanding conflicts with experimental measurements. Voluminous amounts of material have previously been written on this subject. The reader may find of particular use the experimental summaries of Hitlin \cite{Hitlin} and Schindler \cite{rafe}, and the theoretical treatments of R{\"u}ckl \cite{ruck}, Shifman \cite{shif} and the recent review by Bigi \cite{bigi}, to supplement the rather quick distillation presented here. \subsection{The Trivial Spectator Model} The lowest level of understanding of charm decays utilizes the method of quark diagrams. The first-order processes are collected in Figure~\ref{fig:c2f6}. They may be separated topologically into two distinct classes: the ``spectator class'' (a - b), and the nonspectator or ``annihilation class'' (c - d). The primary difference is in the location of the \ifmmode{{\particletwo W}}\else{{\particletwo W}}\fi\ vertices. In spectator processes the \ifmmode{{\particletwo W}}\else{{\particletwo W}}\fi\ vertex occurs only on the c quark world line; it then hadronizes either independently (spectator) or in conjunction with the light quark (color suppressed). In annihilation processes the \ifmmode{{\particletwo W}}\else{{\particletwo W}}\fi\ vertex touches both quark lines of the initial meson. The trivial spectator model makes the following predictions: \begin{enumerate} \item The dominant process in the weak decay of charmed mesons is the spectator diagram. This approximately corresponds to the beta decay of the charmed quark.
The rate for charm decay $(\Gamma_0)$ can be obtained (to first order) by scaling the muon lifetime $ \ifmmode{\mathchar"11C}\else{$\mathchar"11C$}\fi_{\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi} = {192\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^3 \over G^2_F m^5_\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi} \approx 2.2 \times 10^{-6} \rm sec $ by a factor of $ {1 \over 5} \left ( { m_{\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi} \over m_{c} } \right)^5$, giving $\approx 7 \times 10^{-13} \rm\ sec$, to account for the charmed quark mass and the additional number of decay channels available to the \ifmmode{{\particletwo W}}\else{{\particletwo W}}\fi. This form suffers from an exceptional (fifth power) dependence on the charmed quark mass, which is an ill-defined quantity. It is generally approximated to be in the range of $1.5-1.6$ GeV. \item The color-suppressed spectator diagram occurs at a rate reduced by a factor of $\approx$ 10. This is because the \ifmmode{{\particletwo W}}\else{{\particletwo W}}\fi\ has to hadronize a quark pair with the right color combination to make color-singlet hadrons. \item The exchange diagram, which contributes to \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ decay at the Cabibbo-allowed level, is expected to be strongly suppressed by a small wave function overlap $ | \ifmmode{\mathchar"120}\else{$\mathchar"120$}\fi(0) |^2 \propto {f^2_D \over M_c^2} $ where $f_D \sim 0.15 $ GeV. Similarly, the \ifmmode{{\particletwo W}}\else{{\particletwo W}}\fi\ annihilation is expected to suffer helicity suppression (as in pion decay) by a factor $ {m^2_q \over M_c^2}$. \item Taking these factors into account, the \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ and \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ are expected to have the same lifetimes and semi-leptonic branching ratios. \end{enumerate} \begin{figure}[htp!] \centering \includegraphics[scale=0.6]{c2f6t} \caption{Lowest order decay diagrams for charmed mesons. a) the spectator diagram, b) color-matched or color-suppressed spectator diagram, c) \ifmmode{{\particletwo W}}\else{{\particletwo W}}\fi\ exchange diagram, d) \ifmmode{{\particletwo W}}\else{{\particletwo W}}\fi\ annihilation.} \label{fig:c2f6} \end{figure} \subsection{Experimental Controversies} \begin{table}[ht!] \centering \caption{Lifetimes of Charmed Mesons} \begin{tabular}{|c|c|c|c|} \hline Group & \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ ($\times 10^{-13} \rm sec$) & \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ ($\times 10^{-13} \rm sec$) & \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ ($\times 10^{-13} \rm sec$) \\ \hline CLEO & 11.4 $\pm$\ 1.6 $\pm$\ 0.7 & 5.0 $\pm$\ 0.7 $\pm$\ 0.4 & 4.7 $\pm$\ 2.2 $\pm$\ 0.5 \\ \hline World Average & $10.45^{+0.31}_{-0.29}$ & 4.27 $\pm$\ 0.10 & $4.31^{+0.36}_{- 0.32}$ \\ \hline \end{tabular} \label{t:2p2} \end{table} After the experimental picture of charm decays began to unfold, several failings of the trivial spectator model were revealed. \begin{enumerate} \item The \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ and \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ mesons were found to have very different decay rates. This has been determined by measuring the lifetimes of the \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ and \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi, as collected in Table~\ref{t:2p2}.
We have presented the measurements of CLEO \cite{csor} and the current world average \cite{Hitlin}. Empirically, we measure $ { \ifmmode{\mathchar"11C}\else{$\mathchar"11C$}\fi(\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi) \over \ifmmode{\mathchar"11C}\else{$\mathchar"11C$}\fi(\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi) } = 2.45 \pm\ 0.09$. This difference has also been established by a large discrepancy in the semileptonic branching ratios of the \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ and \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi. A recent measurement by the MARK III \cite{balt} has determined $ \rm { B(\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\ \ifmmode{e^+}\else{$e^+$}\fi X) \over B(\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\ \ifmmode{e^+}\else{$e^+$}\fi X)} =2.3^{+0.5+0.1}_{-0.4-0.1}$, which is in good agreement with the measured lifetime difference. \item Several color-suppressed \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ decays have been observed to occur with a healthy rate. Among them \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\ $\overline{\rm K}{}^0$\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^0}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^0$}\fi\ has been found to occur at a rate of 0.45 $\pm$\ 0.9 times that of \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi. CLEO \cite{haas} and ARGUS \cite{halb1} have also observed a color-suppressed decay of the \ifmmode{{\particletwo B}}\else{{\particletwo B}}\fi\ meson, B( B \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\ \ifmmode{\ifmmode{\mathchar"120}\else{$\mathchar"120$}\fi}\else{$\ifmmode{\mathchar"120}\else{$\mathchar"120$}\fi$}\fi X) $\sim$ 1.2 \%. \item Evidence for annihilation processes appears to exist. The first was the observation of \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\ \ifmmode{\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi}\else{$\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi$}\fi$\overline{\rm K}{}^0$. This mode was first observed by ARGUS \cite{halb2} and later by CLEO and MARK III. These measurements have confirmed a branching fraction of $\simeq 1\%$. Other clean annihilation-class signatures have been sought in \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ decay.
The E-691 experiment \cite{anjos} has placed a stringent limit on the \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\ \ifmmode{\ifmmode{\mathchar"11A}\else{$\mathchar"11A$}\fi^0}\else{$\ifmmode{\mathchar"11A}\else{$\mathchar"11A$}\fi^0$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ mode, finding $ \rm { B(\ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\ \ifmmode{\ifmmode{\mathchar"11A}\else{$\mathchar"11A$}\fi^0}\else{$\ifmmode{\mathchar"11A}\else{$\mathchar"11A$}\fi^0$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi) \over B(\ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\ \ifmmode{\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi}\else{$\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi)} < 0.08$ at the 90\% CL. This same group has, however, observed the annihilation decay candidate $ \rm { B(\ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\ \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi)_{nonres} \over B(\ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\ \ifmmode{\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi}\else{$\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi)} = 0.29 \pm\ 0.07 \pm\ 0.05$. \end{enumerate} \subsection{Patching up the Spectator Model using QCD} In light of these difficulties, reexamination of the spectator model was in order. The charged weak current has as its hadronic component: $$ J_{-}^\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi = \left(J_{+}^\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi \right)^t = \left(\bar u , \bar c , \bar t \right) \ifmmode{\mathchar"10D}\else{$\mathchar"10D$}\fi^\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi \left ( 1 - \ifmmode{\mathchar"10D}\else{$\mathchar"10D$}\fi^5 \right) \left( \matrix{ V_{ud} & V_{us} & V_{ub} \cr V_{cd} & V_{cs} & V_{cb} \cr V_{td} & V_{ts} & V_{tb} \cr} \right) \left ( \matrix{ d \cr s \cr b \cr } \right ) $$ where the $V_{xy}$ are the familiar K-M matrix elements. The current-current approximation $( q^2 \ll M^2_W )$ of the weak Hamiltonian leads to the ``bare'' Hamiltonian for hadronic charm-changing interactions of $$ H^{(0)}_W(\Delta c= -1) = { G_F \over \sqrt{2}} V_{cs}V_{ud}^{\ast} \big [ (\bar s c)_L (\bar du)_L \big ]$$ where the notation $(\bar a b)_L$ implies the canonical V-A structure $ \bar a \ifmmode{\mathchar"10D}\else{$\mathchar"10D$}\fi^{\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi}(1-\ifmmode{\mathchar"10D}\else{$\mathchar"10D$}\fi_5)b $. \begin{figure}[h!]
\centering \includegraphics[scale=0.55]{c2f7t} \caption{One loop gluon corrections at a four quark vertex.} \label{fig:c2f7} \end{figure} \pagebreak Since a \ifmmode{{\particletwo W}}\else{{\particletwo W}}\fi\ vertex can be thought of as a four fermion interaction, R{\"u}ckl \cite{ruck} first studied the effects of one-loop gluon exchange at a \ifmmode{{\particletwo W}}\else{{\particletwo W}}\fi\ vertex. The diagrams for these processes are shown in Figure~\ref{fig:c2f7}. The diagrams of a) are absorbed in the renormalization of $G_F$, while those of b) introduce new effects. We note that none of the diagrams in b) are possible in semileptonic decay. R{\"u}ckl predicted the following ramifications of the hard gluon exchanges: \begin{enumerate} \item Weak couplings would be renormalized. \item Distortions to the color structure of the interaction from octet currents. \item The possibility of new chiral structures, (V+A) for example. \item The possibility of new Lorentz structures (scalar or tensor). \end{enumerate} \par The last two effects should be small, but the first two are not. He calculated the first order correction to the Hamiltonian to be $$ H^{(1)} = { G_F \over \sqrt{2}}V_{cs}V_{ud}^{\ast} {3\alpha_s \over 8 \ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi} \log \left( {M^2_W \over \ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi^2} \right) (\bar s \lambda_a c)_L(\bar d \lambda^a u )_L $$ The effective weak Hamiltonian then becomes $$ H_{eff}(\Delta c = -1) = { G_F \over \sqrt{2}} V_{cs}V_{ud}^{\ast}\big [ c_+ O_+ + c_- O_- \big ] $$ where $$ O_{\pm} = {1 \over 2} \big [ (\bar s c)_L (\bar ud)_L + (\bar s d)_L (\bar u c)_L \big ]$$ and to lowest order $$ c_{\pm} = 1 \mp\ { \alpha_s \over 2 \ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi} \log \left( {M^2_W \over \ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi^2} \right)$$ where the $c_{\pm}$ coefficients obey the relation $ c_- = { 1 \over c_+^{2}}$ and take the approximate values $c_+ \approx 0.7$ and $c_- \approx 2.0$ at $q^2$ = 1.5 GeV. A plot of $c_{\pm}$ versus mass scale for leading log (LL) and next to leading log (NLL) is displayed in Figure~\ref{fig:c2f8}. \begin{figure}[t!] \centering \includegraphics[scale=0.40]{c2f8t} \caption{Variations of the coefficients $ c_+$ and $c_-$ versus mass scale.} \label{fig:c2f8} \end{figure} Making one final transformation allows for a convenient representation of the Hamiltonian. We define $$ c_1 = { c_+ + c_- \over 2} $$ $$ c_2 = { c_+ - c_- \over 2} $$ The Hamiltonian then becomes $$ H_{eff}(\Delta c = -1) = { G_F \over \sqrt{2}} V_{cs}V_{ud}^{\ast}\big [ c_1 (\bar s c)_L (\bar ud)_L + c_2(\bar s d)_L (\bar u c)_L \big ] $$ The resulting $c_1$ and $c_2$ processes are displayed in Figure~\ref{fig:c2f9}. \begin{figure}[t!] \centering \includegraphics[scale=0.7]{c2f9t} \caption{The $c_1$ (a) and $c_2$ (b) processes.} \label{fig:c2f9} \end{figure} \par Several important relationships can be derived based on the $c_{\pm}$ coefficients. The non-leptonic width becomes enhanced by a factor $$ \Gamma_{NL} =(2c^2_+ + c^2_-)\Gamma_0$$ where we would naively expect three. The semileptonic branching ratio is also modified: $$ B(c \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ eX) = { 1 \over (2c^2_+ + c^2_- + 2) } \approx 15 \% $$ Both of these are clearly pushing things in the correct direction. An induced neutral current ($c_2$ process) has also arisen which has precisely the form of the color suppressed spectator diagram, and may neatly account for such processes.
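\par As a rough numerical check of these relations (using only the approximate values $c_+ \approx 0.7$ and $c_- \approx 2.0$ quoted above), $$ 2c^2_+ + c^2_- \approx 2(0.7)^2 + (2.0)^2 \approx 5.0, \qquad B(c \rightarrow eX) \approx {1 \over 5.0 + 2} \approx 14\%, $$ i.e. a non-leptonic enhancement factor of roughly five rather than the naive three, and a semileptonic branching ratio consistent with the $\approx 15\%$ figure above.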
In spite of these successes, the spectator model still does not predict the large difference in the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ and \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ lifetimes which has been confirmed experimentally, nor the annihilation type processes. To this end we must either consider the addition of nonspectator processes or resort to the alternate approach of final state interactions. \subsection{ Final State Interactions} We examine final state interactions on the quark level and the hadron level. \par Several \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ decay modes contain identical quarks in the final state. An example is the decay mode \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ depicted in Figure~\ref{fig:c2f10}. \begin{figure}[t!] \centering \includegraphics[scale=0.85]{c2f10t} \caption{ The decay mode \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi. Both color clustered terms contain a $\bar d$ quark.} \label{fig:c2f10} \end{figure} Since there are two identical fermions in the final state, the Pauli principle demands that the wave function be suitably altered. This induces an additional term to the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ rate $$ \Gamma^{pauli} = (2c^2_+ - c^2_-){ 12 \ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^2 \over M_D^2}f_D^2 \Gamma_0$$ Since this term is linked to $f_D^2$ which depends on the wave function overlap, uncertainty is introduced into the magnitude of this effect. General considerations predict that the size of the interference term compared to the spectator term will be about 20 \%. We note that for $c_- >> c_+$ the overall term is negative and acts to increase the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ lifetime, as expected. \par It is also possible that in the case of two body decays, the two final state hadrons may re-scatter into a different final state (post hadronization). This has been proposed \cite{dono} as a mechanism by which ordinary spectator decays may produce more exotic nonspectator final states. Hadron re-scattering is also required by unitarity. \subsection{Summary} An easy lesson to be learned from the previous discussion is that simple quark diagrams are meaningless without fully considering the role of gluons. Their presence substantially alters the weak Hamiltonian, and may well act to catalyze the nonspectator diagrams (by removing helicity suppression in \ifmmode{{\particletwo W}}\else{{\particletwo W}}\fi\ exchange, for example). While the spectator model with QCD improvements is a more reliable theory, it is still incomplete. Both the nonspectator processes along with pre- and post- hadronization final state interactions seem to have established a foothold in the picture of charm decay. 
Some of the more modern theoretical systems have been successful at predicting the coarse features of relative \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ decays, although these have been limited to the two body case (fortunately a large number of \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ decays are two body or quasi-two body). Stumbling blocks persist, however. Implementation of nonspectator processes will require some effort. The Bauer-Stech-Wirbel \cite{bsw} group has typically neglected annihilation processes, and thus has a difficult time explaining \ifmmode{\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi}\else{$\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi$}\fi\ifmmode{\overline{{\particleone K}}\tiny^0. The 1/N expansion method of Buras-Gerard-Ruckl \cite{buras} includes such processes, however they are often of higher order and are then dropped. It is also unclear whether final state hadron scattering can ever be integrated into calculations in a systematic way. Although the $\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi \leftrightarrow \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi$ lifetime difference has been known for almost a decade, we still await a quantified theoretical answer. \chapter{Apparatus} In order to perform high energy physics experiments, one needs to generate and record high energy interactions. Two general methods exist for producing interactions: accelerating a particle and firing it into a target (fixed target), or accelerating two particles (usually particle-antiparticle) and colliding them. \ifmmode{e^+e^-}\else{$e^+e^-$}\fi\ colliders are useful as both particles completely annihilate into a state of pure energy. The nonresonant cross section for production of a fermion-antifermion pair in an \ifmmode{e^+e^-}\else{$e^+e^-$}\fi\ collision is approximately: $$ \sum_{nf} \ {4 \ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi\ \alpha^2 e^2_f \over 3 s }$$ where $nf$ is the number of species of the particular fermion, $\alpha$ the fine structure constant, and $s$ the center-of-mass energy squared $\rm (GeV^2)$. In addition to \ifmmode{e^+e^-}\else{$e^+e^-$}\fi\ annihilation, \ifmmode{e^+e^-}\else{$e^+e^-$}\fi\ scattering (Bhabha) and two-\ifmmode{\ifmmode{\mathchar"10D}\else{$\mathchar"10D$}\fi}\else{$\ifmmode{\mathchar"10D}\else{$\mathchar"10D$}\fi$}\fi\ processes also occur. Of greater importance to the experimentalist are the several resonant enhancements to the \ifmmode{e^+e^-}\else{$e^+e^-$}\fi\ cross section. These are shown in Figure~\ref{fig:c3f1}. \begin{figure}[htp!] \centering \includegraphics[scale=0.6]{c3f1t} \caption{Electron-positron cross section as a function of center of mass energy. The cross section has been measured up to 50 GeV. Above this energy only the Z resonance is established.} \label{fig:c3f1} \end{figure} \par Reactions from proton colliders are extremely rich and dense, since the protons themselves are composite objects. Because the interaction rates for proton colliders are substantially larger, and the physics more difficult to disentangle, they have assumed the role of discovering phenomena, while \ifmmode{e^+e^-}\else{$e^+e^-$}\fi\ machines excel at refined measurements. \section{CESR} The data for this research project was gathered at the Cornell Electron Storage Ring (CESR), located at Wilson Synchrotron Laboratory on the campus of Cornell University in Ithaca, New York.
It is capable of producing \ifmmode{e^+e^-}\else{$e^+e^-$}\fi\ collisions at a center-of-mass energy from 7 to 14 GeV (here, and throughout this thesis $\rm c=1$ unless explicitly stated otherwise). Electrons and positrons are produced in a linear accelerator and injected into a synchrotron where they are accelerated to near collision energy. Finally the counter-rotating beams are transferred into a storage ring 770 m in circumference where they are made to collide in two interaction regions at multiples of a fundamental frequency of 390 KhZ. The layout of this facility is shown in Figure~\ref{fig:c3f2}. \begin{figure}[p!] \centering \includegraphics[scale=0.6]{c3f2t} \caption{The Cornell electron storage ring and associated components.} \label{fig:c3f2} \end{figure} Individual data runs last an average of 1-2 hours. The energy resolution, which is largely a function of the bending radius of the machine is of order 3 MeV. \par Storage rings maximize the opportunity for obtaining collisions from a generated group of electrons and positrons. This is offset by synchrotron radiation, the energy lost by the beams as they are accelerated so as to travel in a (near) circular orbit. The power lost per electron per turn is: $$ P_{\ifmmode{\mathchar"10D}\else{$\mathchar"10D$}\fi} = {2\over 3} \ {{c r_e E^4} \over { (mc^2)^3 \ifmmode{\mathchar"11A}\else{$\mathchar"11A$}\fi^2 } }$$ $r_e $ is the classical electron radius, E the beam energy, m the mass of the particles in the beam, $\ifmmode{\mathchar"11A}\else{$\mathchar"11A$}\fi$ the radius of curvature. At CESR energies this amounts to $\approx 1 \rm \ MeV$. To maintain a constant beam energy, this energy must be constantly replenished by boosts through R.F. cavities. The R.F. power costs makes a large contribution to the operating expenses of a storage ring. The cost of storage ring is predicted by the Richter relation: $$ \$ = \alpha R + \beta {E^4 \over R} $$ which the first term accounts for magnets, tunnels, etc., and the second for R.F. expenditures. This predicts both the size and the cost of a storage ring increase as the energy squared. \par The critical performance parameter in any collider facility is the luminosity (L). The number of events observed $\rm N_{obs}$ is given by: $$\rm N_{obs} = \sigma B \epsilon \cdot \int L $$ The integrated luminosity enters linearly in the rate, along with the cross section, branching ratio and reconstruction efficiency. Since the only variables available to the experimenter are the luminosity and efficiency (which is largely determined by detector design) it is desirable to accumulate as much luminosity as possible. The instantaneous luminosity, for a storage ring such as CESR, is governed by: $$ L = {{f N s\xi_v } \over {2 r_e \beta^{\ast}}} $$ where $f$ is the frequency of the collisions, N the number of particles per beam, $\xi_v$ the vertical tune shift, and $\beta^{\ast}$ is a parameter which represents how tightly the two beams can be focused at the interaction point. During this data taking the CESR group began a program to increase the luminosity by simultaneously circulating several bunches of electrons and positrons. This project was successful, although the strain on the R.F. system caused, at first, the machine to operate less reliably. The average luminosity during this running period was roughly $\rm 3 \times 10^{31} \ cm^{-2}sec^{-1}$. 
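\par To put these numbers in perspective, the short calculation below (a minimal sketch, not part of the CLEO analysis chain) evaluates the nonresonant point cross section quoted earlier at the \ifmmode{\Upsilon{\particleone (4S)}\ center-of-mass energy and converts the average luminosity just quoted into an integrated luminosity and a rough event yield, assuming continuous running; the 10.58 GeV energy and unit fermion charge are illustrative inputs only.
\begin{verbatim}
import math

ALPHA = 1.0 / 137.036          # fine structure constant
GEV2_TO_CM2 = 0.3894e-27       # conversion: 1 GeV^-2 = 0.3894 mb = 0.3894e-27 cm^2

def point_cross_section_cm2(sqrt_s_gev, e_f=1.0):
    """Nonresonant e+e- -> f fbar cross section, 4*pi*alpha^2*e_f^2/(3*s),
    for a single fermion species of charge e_f (no color factor included)."""
    s = sqrt_s_gev ** 2
    return 4.0 * math.pi * ALPHA ** 2 * e_f ** 2 / (3.0 * s) * GEV2_TO_CM2

sigma = point_cross_section_cm2(10.58)     # at the Upsilon(4S) energy, ~0.78 nb
lumi_per_day = 3.0e31 * 86400.0            # average luminosity quoted above, for one day
print(sigma / 1.0e-33, "nb")               # point cross section in nb
print(lumi_per_day / 1.0e36, "pb^-1/day")  # ~2.6 pb^-1 of integrated luminosity per day
print(sigma * lumi_per_day, "events/day")  # ~2000 mu-pair-like events per day
\end{verbatim}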
\subsection{Data Sample} The data sample was acquired from August 1984 through February 1986 with the CLEO detector, operating in what shall be referred to as the 85VD configuration. Table~\ref{t:3p1} \begin{table}[ht!] \centering \caption{85VD Data Sample} \begin{tabular}{|c|c|c|c|} \hline Resonance region & Machine timing & \# bunches & $\int \rm L$ $\rm (pb^{-1})$ \\ \hline \ifmmode{\Upsilon{\particleone (1S)} & 3 & 3 & 17.0\\ \hline \ifmmode{\Upsilon{\particleone (4S)} & 7 & 7 & 10.5\\ \hline \ifmmode{\Upsilon{\particleone (4S)} & 7 & 3 & 63.3\\ \hline \ifmmode{\Upsilon{\particleone (1S)} & 7 & 3 & 3.4\\ \hline \ifmmode{\Upsilon{\particleone (4S)} & 3 & 3 & 43.8\\ \hline \ifmmode{\Upsilon{\particleone (3S)} & 3 & 3 & 33.3 \\ \hline \end{tabular} \label{t:3p1} \end{table} collects the operating conditions for collection of the data sample. The various states of timing and bunches reflect the multibunch upgrade program. The luminosity reflects, to first order, the runs which are considered good; the data analysis utilizes a subset of the above luminosity. The major impetus for CLEO physics running is to study the decays of \ifmmode{{\particletwo B}}\else{{\particletwo B}}\fi\ mesons which are produced on the \ifmmode{\Upsilon{\particleone (4S)}\ resonance. To efficiently separate processes connected with the \ifmmode{\Upsilon{\particleone (4S)}\ from those of the surrounding continuum, the \ifmmode{\Upsilon{\particleone (4S)}\ running time is divided between on resonance and a center-of-mass energy approximately 60 MeV below resonance in a $2:1$ ratio. Also, running on the \ifmmode{\Upsilon{\particleone (1S)}\ is partially motivated by the need to study lepton faking backgrounds. \section{The CLEO Detector} The results of the \ifmmode{e^+e^-}\else{$e^+e^-$}\fi\ collisions were recorded by the CLEO \cite{cleonim} detector, a large magnetic spectrometer with excellent charged particle tracking capabilities. The detector had been operational since 1979; in the summer of 1984 the first stages of a major detector upgrade \cite{cleo2} were implemented. This consisted of the installation of a precision vertex detector (VD) and instrumentation of the central drift chamber (DR1.5) to simultaneously perform drift time and specific ionization (dE/dx) measurements in all 17 layers. The CLEO detector so configured shall be referred to as the 85VD configuration. The CLEO detector, not unlike a high fidelity stereo system, consists of several distinct detector elements, each of which performs a specific function in the event reconstruction. The single most important property that determines the detector response is whether or not the particle possesses electric charge. The methods of detecting and analyzing charged and neutral particles differ such that detectors of CLEO's generation were often polarized to one extreme. In brief, CLEO consists of an inner detector dedicated to charged particle tracking and an outer detector dedicated to the identification of both charged and neutral particles. The natural boundary between the inner and outer detectors is a 1.0 Tesla superconducting solenoid magnet. The inner detector consists of the two drift chamber devices mentioned above, as well as shower counters (ES) on either end of the drift chamber face.
The outer detector contains a dedicated dE/dx system (DX) for charged particle identification, a time of flight scintillator detector (TF), also for particle identification, and an electromagnetic shower detector (RS) for \ifmmode{\ifmmode{\mathchar"10D}\else{$\mathchar"10D$}\fi}\else{$\ifmmode{\mathchar"10D}\else{$\mathchar"10D$}\fi$}\fi's, \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^0}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^0$}\fi's and electron identification. These three devices are arranged in eight identical octant sections around the coil. Surrounding the entire detector is a steel-drift chamber sandwich used to identify muons (MU). Several other detector elements exist from previous detector configurations; they do not affect this analysis and will not be discussed here. A beam's eye view of the CLEO detector is displayed in Figure~\ref{fig:c3f3}. \begin{figure}[htp!] \centering \includegraphics[scale=0.65]{c3f3t} \caption{Beam's eye view of the CLEO detector.} \label{fig:c3f3} \end{figure} The octant structure is readily apparent, and a side view is provided in Figure~\ref{fig:c3f4}. \begin{figure}[htp!] \centering \includegraphics[scale=0.65]{c3f4t} \caption{Side view of the CLEO detector.} \label{fig:c3f4} \end{figure} We now turn to a more involved discussion of the detector function. \subsection{Tracking Fundamentals} The inner detector is globally cylindrical in geometry, consisting of fine wires strung parallel to the beam direction, on circles of constantly increasing radius. Drift chamber detectors are gaseous detectors which detect the passage of charged particles through ionization. The ionized electrons drift toward sense wires held at high electrostatic potential. The extent to which the sense wires are isolated and the way the geometry of the electric field is defined determines the intrinsic performance characteristics of the chamber. As the electrons are accelerated toward the sense wire, they begin to acquire enough energy to liberate other electrons. This develops into a process referred to as avalanche multiplication, in which the electron gain is of order $10^4$. The gas chosen is typically argon, which has high specific ionization, good gas gain, and undergoes roughly 30 ionizing collisions/cm at STP. Since argon is in the same chemical family as neon, the high voltage conditions could result in breakdown or a self-sustaining discharge. This is obviated by the addition of an organic vapor. Noble gases can only be excited by the emission or absorption of photons, while organics possess a myriad of rotational and vibrational states. This leads to a substantial amount of energy being dissipated in radiation-less transitions. Organics also tend to increase the drift velocity of the gas, thereby decreasing diffusion effects. CLEO operates its wire chambers in a 50-50 argon-ethane mixture, which has a mean drift velocity of $\rm 50 \ \ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi m/nsec$. The central operating principle of a drift chamber is drift velocity saturation. As the electric field is increased, the drift velocity plateaus or saturates. Ensuring a constant drift velocity across the cell allows for a theoretically simple (in practice the drift-time relationship often represents the most difficult aspect of drift chamber calibration) method for reconstructing the trajectory through the cell by measuring the time difference between entry into the cell and a hit on the wire.
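\par As a small illustration of this principle (a schematic sketch only, ignoring the calibration of the drift-time relationship discussed above), the position of a track within a cell follows directly from the measured drift time and the saturated drift velocity:
\begin{verbatim}
DRIFT_VELOCITY_UM_PER_NS = 50.0      # the saturated drift velocity quoted above

def drift_distance_um(t_hit_ns, t_entry_ns):
    """Distance of the track from the sense wire, assuming a constant
    (saturated) drift velocity across the cell."""
    return DRIFT_VELOCITY_UM_PER_NS * (t_hit_ns - t_entry_ns)

# e.g. a 1 ns error on the drift time corresponds to roughly 50 microns in position
\end{verbatim}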
The accuracy with which trajectories can be reconstructed is in the range of hundreds of microns, and requires timing precision on the nanosecond level. \par Immersing the drift chamber in a solenoidal magnetic field (B) allows for the measurement of two vital quantities for a charged track, the sign of the electric charge and the momentum. The path of a charged particle moving a a magnetic field is a helix. The CLEO reference coordinate system is depicted in Figure~\ref{fig:c3f5}. \begin{figure}[htp!] \centering \includegraphics[scale=0.55]{c3f5t} \caption{CLEO coordinate system.} \label{fig:c3f5} \end{figure} CLEO uses a track coordinate system that consists of three $r-\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi$ parameters; CUDR is one half times the reciprocal of the radius of curvature and is signed, FIDR is measured from the distance of closest approach to the beam line, DADR is the signed impact parameter, and two additional parameters CTDR, the cotangent of the angle between the track and the beam line (polar angle), and Z0DR, the z coordinate of the distance of closest approach. The transverse momentum $ p_{\perp}$ can then be calculated by: $$ p_{\perp} \rm = 0.015 \bigg| {B \over CUDR} \bigg | $$ where the dimensions are GeV, Kilogauss, and meters. Once the transverse momentum is determined the total momentum is trivially extracted from the measurement of the polar angle. Drift chamber performance is gauged by the spatial resolution in the $r-\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi$ plane $\sigma_{r\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi}$ and the momentum resolution ${\delta p}\over p $. The two are linked by the relation: $$ \left({{\delta p_{\perp}}\over p_{\perp} } \right)_{res} = {p_{\perp} \sigma_{r\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi} \over (0.03){\rm L^2B}} \sqrt{ \rm 750 \over N+5}$$ for a drift chamber with N equally spaced measurements over a lever arm L in an axial magnetic field B (the units are GeV, meters, and kilogauss). The other dominant term comes from multiple scattering: $$ \left({{\delta p_{\perp}}\over p_{\perp} } \right)_{ms} = \rm { 0.5 \over LB} \sqrt{1.45 {L \over X_0} }$$ here $X_0$ is the average radiation length of the detector in meters. The resolution dominated term is prominent for high momentum tracks whilst the multiple scattering limits the resolution at low momentum. The CLEO VD+DR1.5 tracking system achieves a momentum resolution of $ \left({ {\delta p}\over p } \right)^2 = ( 0.007p)^2 + (0.006)^2$ ($p$ in GeV/c). \subsection{Vertex Detector} The CLEO vertex detector is a high precision drift chamber which forms the innermost detector element. 800 cells are divided among 10 axial layers, which range in radius from 8 to 16 cm (see Figure~\ref{fig:c3f6}). \begin{figure}[htp!] \centering \includegraphics[scale=0.6]{c3f6t} \caption{Vertex detector cell arrangement.} \label{fig:c3f6} \end{figure} The inner 5 layers have 64 cells per layer, and the outer 5 layers each have 96 cells. The cells are hexagonal in shape, and expand in size so as to subtend a constant angle in $\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi$. The $\sigma_{r\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi}$ resolution for the VD \cite{sob} on average is 100 microns. The active z length is 70 cm. Two methods are provided for z measurements. A resistive sense wire is read out at both ends allowing a measurement by charge division. The resolution for this method is in the realm of 9 mm. 
More precise information is obtained from the conducting cathode strips on the inner and outer support tubes. The strips are segmented into 8 $\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi$ regions which are further divided into 64 (96) sections on the inner and outer tubes. The cathode strips z measurements have a resolution of 750 microns. The device derives its name from the design purpose to extrapolate tracks back to their production vertex. The instrument is separated from the interaction point by a silver coated beryllium beam pipe and a carbon inner support tube. The interceding material amounts to 0.1 \% of a radiation length. The extrapolation error is given by: $$ \sigma_{ext} = \sqrt{{ {100}^2 + {115 \over \sin\Theta (p \beta)^2}}} $$ $\Theta$ is the polar angle and $\beta = v/c$. $p$ is in GeV and the result has the units of microns. \par A plethora of benefits accompanied the installation of the VD. The tracking momentum resolution was greatly improved, and the momentum range for low momentum track reconstruction was extended. Beam wall and beam gas events were efficiently removed, and the capabilities of the experimental trigger were extended. \subsection{Drift Chamber} In contrast to the VD, the design of the drift chamber relies on the repetition of a single cell geometry. The unit cell consists of a 20 $\rm \ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi m$ diameter gold plated tungsten sense wire responsible for a radial region of 11.3 mm. The cell boundaries are formed by three 115 micron diameter silver plated beryllium copper field wires located on either side of the sense wire, and arranged in a straight line perpendicular to the radius of the cylinder over a region of 10 mm centered about the sense wire. The inner and outer boundaries of the device are kept at ground, which coupled with the open rectangular cell geometry leads to certain peculiarities in the function of the detector which shall be addressed. The arrangement of cells is rigidly described by the prescription $\rm cells/layer = 24 \times (N+4)$ which are located on nearly concentric cylinders of radius $\rm R_{N} = 42.49 \times (4+N)$ (mm) and N ranges from 1 to 17. The lever arm of the drift chamber ranges from 212.5 to 892.5 mm. A schematic illustration of the drift chamber can be found in Figure~\ref{fig:c3f7}. \begin{figure}[htp!] \centering \includegraphics[scale=0.6]{c3f7t} \caption{The CLEO central drift chamber.} \label{fig:c3f7} \end{figure} Also distinct from the VD is the method of z measurement. Starting with the second cylinder, the 8 even numbered layers are strung in a ``stereo'' mode at alternating $ + -$ angles of $2.9^{\circ}$ to the beam axis. The odd numbered layers are all axial, strung parallel to the beam line. High voltage is distributed through the field wires, with the sense wires kept at ground for ease of electronic readout. During August 1984, the readout electronics for the drift chamber was replaced, with the new system allowing both timing and pulse height measurements. With the addition of the vertex detector, the drift chamber underwent its most through calibration, revealing several biases. It is this mode of operation which shall be recounted here. The chamber was operated at 2050V, corresponding to an approximate electric field strength of 3630 v/cm and gas gain $\approx 2 \times 10^4${.} Performance tests \cite{avery1} of the new electronics were done by T. 
Copi\'e and the author using a 9 layer, 220 channel $\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi$ slice mock-up of the drift chamber using cosmic ray muons. Plots of the timing and pulse height performance are shown in Figure~\ref{fig:c3f8} \begin{figure}[htp!] \centering \includegraphics[scale=0.55]{c3f8t} \caption{Measured test lab spatial resolution $\sigma_{r\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi}$ verses chamber voltage.} \label{fig:c3f8} \end{figure} and \begin{figure}[htp!] \centering \includegraphics[scale=0.55]{c3f9t} \caption{Test lab pulse height resolution $\rm \sigma \over peak$ versus chamber voltage.} \label{fig:c3f9} \end{figure} and Figure~\ref{fig:c3f9}. Because of the open cell geometry, a symbiotic relationship exists among the gains of the cylinders. Shown in Table~\ref{t:3p2} \begin{table}[ht!] \centering \caption{Drift Chamber Cylinder Gain Interdependence} \begin{tabular}{|c|c|c|} \hline Layer 16 $\langle PH \rangle$ & Layer 16 Voltage (V) & layer 17 Voltage (V)\\ \hline 252 & 2050& 2150\\ \hline 148 & 2050 & 0\\ \hline 186 & 2100 & 0\\ \hline \end{tabular} \label{t:3p2} \end{table} are the effects on the average measured pulse height in layer 16 subject to variations in the outermost layer 17. Layer 15 was held at 2050 V. Since the inner and outermost cylinders have one high voltage layer and ground as their two radial nearest neighbors, one can deduce from the above table that they will also suffer from reduced gain. To compensate for this problem, these two cylinders were operated at 100V higher than the nominal operating voltage of the chamber. A more deleterious effect from the outer ground planes was that the field would ``leak" out of the cell creating asymmetries in the drift-time relationship on the right and left sides of the cell. The measured versus fit drift distance for the right and left sides of a wire are shown for an outer layer (Figure~\ref{fig:c3f10}) and an inner layer (Figure \ref{fig:c3f11}). Note the asymmetry in the outer layer. \begin{figure}[p!] \centering \includegraphics[scale=0.65]{c3f10t} \caption{Measured vs fit drift distance for the left (cross) and right (diamonds) side of the cell in layer 17. The units are in meters.} \label{fig:c3f10} \end{figure} \begin{figure}[p!] \centering \includegraphics[scale=0.65]{c3f11t} \caption{ Drift time relations for left and right sides of a normal layer.} \label{fig:c3f11} \end{figure} An analogous effect was observed in the vertex detector, which has a similar grounding arrangement. This problem was partially calibrated away by having separate drift-time relationships with right-left asymmetries for the inner and outer layers. A plot of the $r-\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi$ residuals from hadronic events is shown in Figure~\ref{fig:c3f12}. \begin{figure}[htp!] \centering \includegraphics[scale=0.6]{c3f12t} \caption{Measured - fit drift distance from all drift chamber layers, from hadronic type events. } \label{fig:c3f12} \end{figure} Fitting the central peak to a Gaussian yields $\sigma_{r \ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi} = 160 \pm 10$ microns. Since the primary interaction of charged particles and a drift chamber detector is through ionization, it becomes possible to use the measured pulse heights to perform particle identification based on energy loss (dE/dx). 
The amount of energy lost for a relativistic charged particle traversing a medium is predicted from the Bethe-Bloch equation: $$\rm {dE \over dx} = { A \over \sin\Theta} \cdot {1 \over \beta^2} \cdot \Bigl [ \ln(2m_e \beta^2 \ifmmode{\mathchar"10D}\else{$\mathchar"10D$}\fi^2 / I_0) -\beta^2 \Bigr]$$ where A is a term related to the medium traversed, $1 / \sin\Theta$ the path length, $\rm I_0$ the ionization potential, $\beta, \ifmmode{\mathchar"10D}\else{$\mathchar"10D$}\fi$ the canonical relativistic variables. To engineer such measurements a precarious balance must be struck between preserving the spatial resolution which is improved with increasing gas gain, and the dE/dx measurements which deteriorate with increasing gas gain. Timing circuits are equipped with discriminator circuits which trigger once a pulse amplitude greater than a specified size is registered. It is desirable to have the circuit respond as quickly as possible to a given pulse in order to obtain the best resolution. The path of least resistance is to make the gas gain as large as possible, providing hefty pulses from the chamber which are easy to trigger on. This approach was the initial operational mode of the CLEO drift chamber, where only timing information was collected. To also make ionization measurements, the gas gain must be lowered. Since the incoming pulses are smaller the system must be capable of operating at a reduced threshold. The major obstacle is that lowering the sensitivity means that noise hits may penetrate the system, and then real information can be washed out by the background. Noise immunity was enhanced by mounting preamplifiers, 24 channels to a preamplifier card, directly onto the drift chamber face. The entire drift chamber was read out at one end face. The boosted signals were transmitted via a symmetric differential transmission system on a 7.5 m flat twisted pair cable, to receivers on 48 channel data cards, located in standard CLEO readout crates outside the detector. Each data card channel contained timing and pulse height circuits which analyzed the same incoming pulse. The operating sensitivity was 300 nanoamps, and the resulting dE/dx resolution was in the range of 10-14\%. Part of the difficulty in performing dE/dx measurements was that the open rectangular cell geometry was not ideally suited for charge collection. The system was engineered and implemented by John Dobbins and Don Hartill. The calibration and analysis of the dE/dx information was primarily done by Thierry Copie and Tom Ferguson, while the timing performance system was the responsibility of Paul Avery and the author. Although the dE/dx resolution was somewhat inferior to that of the outer detector, simply being able to have information associated with every track (especially at low momentum) dramatically improved the particle identification performance of the detector. \subsection{ End Cap Shower Counters} Beyond the drift chamber face on both sides of the detector are the end cap shower counters. This detector is formed from aluminum proportional tubes and lead, with 21 layers of the device arranged in 3 groups oriented at $120^{\circ}$ to each other. The energy resolution is $\sigma_E/E = 0.39/\sqrt{E}$. The primary function of the device is to provide a measurement of the luminosity from Bhabha scattering.
\subsection{Superconducting Coil} Since in the initial CLEO conceptual design the particle identification detectors were located beyond the coil, it was necessary to produce the field using as little material as possible; hence the decision was made to use a superconducting coil. The coil is 3.1 m in length and 2 m in diameter. The winding is made of Nb-45\% Ti in a copper matrix. The standard running conditions mandate a 1 Tesla field which requires a current of 1500 amps. Measurements of the field have determined uniformity to within 2\%. The net material, including the cryostat, is 0.7 radiation lengths. This material ranges out pions with momenta less than 150 MeV, kaons with momenta less than 400 MeV, and protons with momenta less than 600 MeV. \subsection{ dE/dx} Immediately outside the coil is the dedicated dE/dx system. Each octant contains 124 modules, each of which contains 117 wires oriented perpendicular to the beam line. The large number of wires was desirable to obtain a high statistics measurement of the average pulse height, which CLEO calculates from the lowest 50\% of the measurements to avoid the long tail associated with the Landau distribution. The detector was operated on a gas mixture of 91\% argon and 9\% methane at a regulated pressure of 45 psia. The resolution, as measured by the standard deviation of the distribution divided by the peak, is 5.8\% for hadrons. Despite this fine performance, the inability to distinguish multiple tracks and the loss of low momentum tracks where hadrons are most easily separated seriously undermined the usefulness of this device. \subsection{Time of Flight} Occupying the next radial position in the octants is the time of flight system. Each octant contains twelve $2.03 \times 0.312 \times 0.0023$ m scintillation counters. Each scintillator is read out on one end by an Amperex XP-2020 photomultiplier tube. The TF geometry per octant is shown in Figure~\ref{fig:c3f13}. \begin{figure}[htp!] \centering \includegraphics[scale=0.6]{c3f13t} \caption{ Time of flight detector.} \label{fig:c3f13} \end{figure} The counters operate using two discriminators sensitive to pulse height from the photomultiplier tube. The time for a given hit is then determined by extrapolating to zero pulse height and correcting for travel time in the scintillator medium, as well as measured pulse height. The rms resolution of the detector is 350 psec. TF hits are matched to drift chamber tracks, and the flight time coupled with the track's momentum is used to determine the mass of the track. In addition the TF plays a vital role in the experimental trigger. \subsection{Octant Shower Counters} The last device in the octant is the octant shower counter usually referred to as the RS. It is a twelve radiation length thick lead proportional tube sandwich, operated on a gas mixture of 91\% argon and 9\% methane. It covers a solid angle of $\Omega/4\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi=0.47$ and the device performance can be characterized by $\sigma_E/E = 0.17/\sqrt{E}$ with E in GeV. Neutral particles generate characteristic electromagnetic showers and can thus be identified. Charged particles also are detected, though the response is much weaker, allowing electrons to be distinguished from other charged particles. The instrument is also used to calibrate the luminosity using large angle Bhabha events, and serves in the experimental trigger.
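\par Referring back to the time of flight system, the mass determination mentioned there can be illustrated with a short calculation (a schematic sketch only; the 1.2 m flight path assumed below is a representative number rather than a value taken from this chapter, while the 350 psec resolution is the figure quoted above):
\begin{verbatim}
import math

C = 0.2998                              # speed of light in m/ns
MASS = {"pi": 0.1396, "K": 0.4937}      # GeV

def flight_time_ns(p_gev, mass_gev, path_m):
    """Time of flight for a track of momentum p over a straight path."""
    beta = p_gev / math.hypot(p_gev, mass_gev)
    return path_m / (beta * C)

path_m, sigma_t_ns = 1.2, 0.350         # assumed path length; quoted rms resolution
for p in (0.5, 1.0, 1.5):
    dt = flight_time_ns(p, MASS["K"], path_m) - flight_time_ns(p, MASS["pi"], path_m)
    print(p, "GeV:", round(dt / sigma_t_ns, 1), "sigma pi/K separation")
# roughly 4 sigma at 0.5 GeV, falling to about 1 sigma by 1 GeV
\end{verbatim}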
\subsection{Muon Detectors} The muon system is the outermost element and surrounds the CLEO detector in a box-like geometry. Beyond a steel hadron absorber of 6 to 12 hadron interaction lengths are two orthogonal sets of drift chamber planes with cell widths of 10 cm, operated in a 50-50 argon ethane gas mixture. The solid angle coverage is $\Omega/4\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi=0.72$. The hadronic faking for tracks in the 1 GeV range is 1\% while the muon detection efficiency is $\sim$ 30 \%. \subsection{Luminosity Monitors} CLEO uses 3 devices to measure the luminosity: the two shower counters described above, and a set of two scintillation-lead shower counter units (LM) to detect small angle Bhabha scattering. Each unit contains 4 sections, and the units are mounted at small angles to the beam trajectory. The device is triggered by hits in any of the sections diametrically opposite from the beam spot. This device is useful for measuring relative luminosity, since the orientation of the two units cannot be measured reliably enough to match the rapidly varying Bhabha cross section at those angles. The absolute measurement of the luminosity comes from the two shower counters, and is determined with a systematic error of 2\%. \subsection{Experimental Control} Information from the individual channels must be sampled, organized, and stored in a coherent fashion for each accepted collision. In the CLEO system each associated detector circuit occurs redundantly in groups of 24-60 on discrete data cards housed in water cooled ``crates''. The crate supplies power for the electronics, and contains a ``controller'' which directs the readout of the digitizing electronics. Data from the channels is converted to a voltage and stored on buffered capacitors which are digitized by the 12 bit ADC of the controller. All crates are connected to a 16 bit data bus known as the y-bus, which is interfaced to the crate via the controller. The y-bus was driven by a VAX-11/750 computer during this run period. The 750 collects the data and writes it to 6250 bpi magnetic tape. An 8 bit data line (x-bus), also driven by the 750, is used to control and monitor detector functions such as high voltage and calibration pulsing. In all, nearly 80 crates comprise the data acquisition system. Virtually all of the readout electronics and crate controller system were designed by members of the CLEO collaboration. A block diagram of the control system is shown in Figure~\ref{fig:c3f14}. \begin{figure}[p!] \centering \includegraphics[scale=0.7]{c3f14t} \caption{CLEO control system.} \label{fig:c3f14} \end{figure} \section{Data Selection} In the vast majority of beam crossings no interesting interaction takes place. A fast trigger must be able to sample the initial response of the detector and decide to pursue the event before the next collision takes place. Accepted events are recorded merely as addresses and values; these must be converted into a format suitable for physics analysis. This ``arduous'' process is collectively referred to as compress. Detectors must be calibrated and highly complex algorithms sift through the data to reconstruct the interactions of the decay products with the detector. At this stage the data is analyzable; however, since the trigger requirements are fairly loose, some of the events are still uninteresting. Selective event classification schemes are activated to filter the data for the types of events desired for particular analysis needs. These facets of the analysis shall be examined here.
\subsection{Experimental Trigger} The experimental trigger allows for reduction of the 1-3 MHz crossing rate to a raw data recording rate of 1-2 Hz, with an average live time fraction of 0.9. The detector elements which play a pivotal role in the trigger system are the VD, DR1.5, TF and the RS. To a minor extent MU and ES are also involved. Channels from the various detector elements are ganged together to provide a coarse overview of the detector response. Because of accelerator development work that transpired concurrently with the data acquisition program, two trigger modes were used in this data set. They are equivalent from a physics standpoint, differing primarily in the speed with which the decisions had to be made. In 7 (3) bunch mode the crossings occurred every 360 (854) nsec. In the tracking chambers, four wires are OR-ed together to form a fast-or bit. The patterns of these fast-or bits are correlated in a track segment processor, which selects acceptable topologies. In the vertex detector, the inner and outer five layers are each segmented into blocks, where a block consists of the fast-or bits of five vertically adjacent layers. The five layers in a block are or-ed, the block being on if four of the fast-or bits are on. A physical combination of an inner and outer block turns on a VD bit. In the drift chamber blocks are formed from 3 vertically adjacent axial layers, containing all the fast-or bits in an approximate $24^{\circ}$ phi slice for each layer. A DR bit is set if 2 of the 3 layers in a block are hit. The nine axial DR layers contribute to a total of 3 possible drift chamber bits. The VD and DR bits are correlated in a loose road in $\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi$ to form track segments. A ``medium track'' consists of a VD bit and the first two DR bits, and a long track is defined by all four bits being on (a schematic version of this bit counting is sketched below). \par Two signals are sent from each time of flight octant, which are the or's of groups of six scintillators which share the same z side of the octant. The TF bit pattern is analyzed for acceptable patterns, such as TFNADJ (two TF hits in nonadjacent octants) or BBTF (TF hits in diametrically opposite octant segments). Trigger information is also examined from the shower detector. Pulse heights are accumulated with an analog or of all 24 channels on a data card; these are likewise summed on the crate level and then on the octant level (two crates). The octant energy threshold (OCT) was approximately 1 GeV. Due to the analog summation, the input to the trigger discriminator was susceptible to noise. In particular an odious glitch in the baseline of the discriminators occurred after analog reset. This partially contributed to the loss of 2 (1) crossings in 7 (3) bunch mode to allow for settling time. Analog reset was fired every 19 (16) crossings during this running period. This effect coupled with other engineering instabilities prompted a sweeping renovation of the energy trigger in the fall of 1986. This upgrade was developed by John Dobbins with assistance from the author. Preamplifiers (gain 25) were placed at the octant sum junction making the transmitted signals less susceptible to noise. The boosted signals were differentiated with a time constant of 560 nsec at the discriminator input to smooth out the $\rm \overline{AR}$ glitch. The improved system has proved to be significantly more robust than its predecessor. Similar information from the end cap shower detectors (ES) was supplied to the trigger.
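\par Returning to the tracking-trigger bits described earlier in this subsection, the counting logic can be sketched as follows (a schematic illustration only; the azimuthal roads and the physical matching of inner and outer blocks are ignored):
\begin{verbatim}
def block_on(fast_or_bits, threshold):
    """A trigger block fires when at least `threshold` of its layer fast-or bits are set."""
    return sum(1 for b in fast_or_bits if b) >= threshold

def vd_bit(inner_layers, outer_layers):
    # VD bit: 4 of the 5 fast-or bits on in an inner block and in an outer block
    return block_on(inner_layers, 4) and block_on(outer_layers, 4)

def dr_bit(block_layers):
    # DR bit: hits in 2 of the 3 axial layers of the block
    return block_on(block_layers, 2)

def track_segment(vd, dr):
    """Classify a segment from the VD bit and the three DR bits (inner to outer)."""
    if vd and dr[0] and dr[1]:
        return "long" if dr[2] else "medium"
    return None
\end{verbatim}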
\par The trigger logic analyzes the input from the detectors in a two-tiered approach. The level 1 trigger is enabled by a hit in the TF, OCT or ES lines. The singles rate with beam in the machine is about 5 kHz for the ES and OCT lines. Level 1 initiates a search through all of the possible level 2 topologies, which are much more restrictive. The CLEO trigger logic \begin{table}[ht!] \centering \caption{CLEO Trigger Logic} \begin{tabular}{|c|c|c|c|c|c|} \hline Level & option1 & option2 & option3 & LOGIC & prescale \\ \hline 1 & TF & ES & OCT & OR & N/A \\ \hline 2 & 1M2L & TFNADJ & & AND & N/A \\ \hline 2 & 2M & BBTF & & AND & N/A \\ \hline 2 & 1M1L & 1 + TF & 1 + OCT & AND & N/A \\ \hline 2 & OCTOPP & & & AND & N/A \\ \hline 2 & 2ES & & & AND & 1/64 \\ \hline 2 & MU2 & 1 + TF & & AND & 1/64 (16/64) \\ \hline \end{tabular} \label{t:3p3} \end{table} implemented during this data taking is organized in Table~\ref{t:3p3}. The level 2 lines can be physically grouped into a hadronic element which fires the tracking devices and some of the outer detectors (to avoid beam-wall, beam gas triggers) and a two track trigger for QED processes. Because of higher rates, some lines are prescaled. Only 1 of 64 2ES triggers is accepted, while the MU2 1 + TF line is accepted 1/64th (16/64) of the time in 7 (3) bunch mode. Each level 2 search takes roughly 2.5 $\ifmmode{\mathchar"116}\else{$\mathchar"116$}\fi$sec, and a successful $\rm L1\cdot L2$ trigger initiates a digitization and data readout sequence lasting 20 msec. \subsection{Compress} The process of turning the raw data into analyzed events is a complex, iterative feedback process. A diagram representing the compress system used to analyze this data set is shown in Figure~\ref{fig:c3f15}. \begin{figure}[p!] \centering \includegraphics[scale=0.7]{c3f15t} \caption{CLEO compress data flow for hadronic type events. } \label{fig:c3f15} \end{figure} Data runs are taken in two modes: physics running and calibration. As seen from the diagram, the full calibration of the detector uses both calibration and physics data. Calibration runs for the drift chamber were taken every 2 to 3 days, and consisted of pulsing the chamber to determine the pedestals and gains for the timing channels and the pedestals for the pulse height channels. As the physics data was gathered, it was subjected to a ``first pass'' track finding algorithm (SOLO \cite{solo}) to find seed tracks. Those events which were judged interesting by the track finder were then segregated into QED and hadronic event candidates. For the drift chamber, information from hadronic events at this stage was used to calibrate out global changes (such as machine timing) and systematic shifts in the electronics or performance of the drift chamber (non standard operating conditions). After completion of this stage all the devices have been fully tuned, and the final analysis algorithms are run. A second, more rigorous, track finding algorithm (DUET \cite{duet}) uses the seed tracks as initial conditions and systematically searches the data for additional tracks. At this stage information from the vertex detector is applied, and the intervening material between the vertex detector and drift chamber is compensated for by allowing a ``kink'' in the track. \par Those fully analyzed events which are classified as hadronic are compacted into a format called ROAR \cite{avery2}.
This format strips the event down to the minimal amount of information necessary for analysis, and computes and stores a variety of invariant mass combinations for quick analysis. The \ifmmode{\Upsilon{\particleone (4S)}\ data sample of 113.7 $\rm pb^{-1}$ produced 113 2400-foot, 6250 bpi tapes of hadronic events. After being ROARed, the set fit onto five such tapes and could be analyzed in 1-2 VAX 8600 cpu hours. \subsection{Hadronic Event Selection} All events used in this analysis have been classified as hadronic in nature. The primary selection criteria evolve from charged particle tracking in the sequence single tracks, event vertex, and event shape. To eliminate beam gas, beam wall, cosmic ray showers, and radiative Bhabhas, the energy response of the octant shower detectors is also employed. \par To begin, the charged multiplicity of the event is examined. A charged track is considered good if no more than one of the following tests is failed: \begin{enumerate} \item The z value at the distance of closest approach to the beam line (DOCA) is less than 50 mm from the origin. \item The track's DOCA with respect to the nominal beam spot in the r-$\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi$ plane is less than 5 mm. \item The average residual from the 5 parameter helical fit must be less than 700 microns. \item The track is defined using at least 8 drift chamber layers. \end{enumerate} (A schematic version of this ``good track'' test is sketched at the end of this chapter.) The next level of selection is a crude classification of a hadronic event: \begin{enumerate} \item Tracking. \begin{enumerate} \item At least 3 good charged tracks, which may come from the primary or a secondary vertex, excluding those consistent with originating from a converted photon. \item At least one of the tracks must originate from the primary vertex with a DOCA less than 5 mm. \end{enumerate} \item Vertex. \begin{enumerate} \item The x and y position of the vertex must be within 20 mm of the nominal x-y beam position. \item The z position of the vertex must be within 50 mm of the origin. \end{enumerate} \item Energy. \begin{enumerate} \item The sum of the energies of the charged and neutral tracks in the event must exceed 15 \% of the center-of-mass energy. \end{enumerate} \end{enumerate} All events that have survived to this stage are finally subjected to a stringent, well developed set of cuts known as morcut \cite{fhm}: \begin{enumerate} \item Tracking. \begin{enumerate} \item At least 15 \% of the hits in the drift chamber must be associated with tracks. \item The ratio of bad tracks to good tracks must be less than 1.15. \end{enumerate} \item Energy. \begin{enumerate} \item The total energy of the charged and neutral tracks in the event must exceed 30\% of the center-of-mass energy. \item The energy measured in the octant shower counters must exceed 250 MeV. \end{enumerate} \item Topology. \begin{enumerate} \item The event must not be consistent with a radiative Bhabha. \item The event must not be classified as a beam wall collision. \item The Fox-Wolfram \cite{foxwo} parameter $H_1/H_0$ which measures momentum imbalance must be less than 0.4. \item The Fox-Wolfram parameter $H_2/H_0$ which measures event shape (0 = spherical, 1 = back-to-back 2 track event) must be less than 0.98. \end{enumerate} \end{enumerate} The non-hadronic background for events which have passed morcut is only a few percent, while a Monte Carlo simulation of D decays shows that 98 percent of the events that are successfully reconstructed pass morcut.
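\par The ``good track'' definition given above can be summarized schematically as follows (a sketch of the counting logic only, with the cut values as listed; it is not the actual CLEO selection code):
\begin{verbatim}
def is_good_track(z_doca_mm, rphi_doca_mm, avg_residual_um, n_dr_layers):
    """A track is 'good' if it fails no more than one of the four tests above."""
    tests = [
        abs(z_doca_mm) < 50.0,       # z at the DOCA within 50 mm of the origin
        abs(rphi_doca_mm) < 5.0,     # r-phi DOCA to the nominal beam spot under 5 mm
        avg_residual_um < 700.0,     # average helix-fit residual under 700 microns
        n_dr_layers >= 8,            # at least 8 drift chamber layers used
    ]
    return tests.count(False) <= 1
\end{verbatim}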
\chapter{Technics} After the data has been collected and processed, additional ``second order'' analysis systems need to be developed to fully exploit the underlying physics. Here we detail the other relevant tools essential to this analysis. They are: reconstruction of secondary vertices, charged particle momentum correction, hadron identification, and Monte Carlo simulation. Also presented is a discussion of particle decay kinematics, which forms the basis for a number of analysis procedures used in the following chapters. To conclude, the analysis architecture used to derive the results of this thesis is presented. \section{Reconstruction of Secondary Vertices} As discussed in chapter 1, the tremendous diversity in the strength of the four forces causes a difference in the decay times associated with each force of several orders of magnitude. At CESR energies, strong and electromagnetic processes evolve at such a rate that they cannot be distinguished from the primary vertex. Particles that decay weakly, in contrast, can travel from order 100 microns (charm, \ifmmode{\ifmmode{\mathchar"11C}\else{$\mathchar"11C$}\fi}\else{$\ifmmode{\mathchar"11C}\else{$\mathchar"11C$}\fi$}\fi\ decays) to tens of millimeters (\ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi, \ifmmode{\ifmmode{\mathchar"7003}\else{$\mathchar"7003$}\fi}\else{$\ifmmode{\mathchar"7003}\else{$\mathchar"7003$}\fi$}\fi\ decays). At CLEO, these particles are reconstructed from their decays into purely charged modes. For the first class of decays, the decay length is in the realm of the extrapolation error of the individual tracks. Evidence for these vertices comes from the observation of systematic offsets from zero in distributions of extrapolated decay lengths and impact parameters. The vertices of the second group are often clearly visible with decent event display graphics. Decays of \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi's and \ifmmode{\ifmmode{\mathchar"7003}\else{$\mathchar"7003$}\fi}\else{$\ifmmode{\mathchar"7003}\else{$\mathchar"7003$}\fi$}\fi's have been historically referred to as ``vees'' because their 2 prong decay topology resembles the letter v. Since the track finding algorithm performs searches for individual tracks, an additional program is required to isolate vees. The vee finder examines all good tracks that have a full 3-D reconstruction, with the twofold mission of finding pairs of tracks which come from a vertex distinct from the common event vertex, and properly determining the momentum of the original neutral particle. Vectors of the two daughter particles must be re-evaluated, since they were determined at the point of closest approach to the drift chamber origin. The algorithm used in this analysis was developed by M. Ogg and M. Mestayer. The search is initiated by parametrizing the tracks as circles in the r-$\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi$ plane. This is a preferred starting point since the r-$\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi$ tracking resolution is about a factor of five better than that in z. Vee candidates are formed from pairs of oppositely charged tracks where the sum of the absolute values of the track DOCAs exceeds 1 mm. The two circles are tested for intersection, and only those candidates with 2 intersection points are considered.
If either solution is consistent with originating from the primary vertex (z of the vertex less than 3 cm from the origin, and radius of the vertex in the r-$\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi$ plane with respect to the average beam spot $\rm (r_v)$ less than 0.3 cm ) the candidate is rejected. Each solution is required to have $\rm r_v$ in the range of 0.5 cm to 50 cm, and the reconstructed vee vector must point away from the origin, as determined by requiring the normalized dot product of the vee vector and the vector drawn from the origin to the secondary vertex be positive. The selection procedure is completed with the definition of a vee quality factor: $$\chi^2_{_V} ={ \left( { {\Delta z}\over{{\sigma}_z} }\right)}^2 + { \left( { { b_V}\over{ {\sigma}_b} }\right)}^2 $$ where the parameter $\Delta z$ is the z difference of the two candidate charged tracks at their r-$\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi$ intersection point, and $b_V$ represents the impact parameter of the fit secondary vertex with respect to the run by run average x-y position of the beam spot. The nominal rms errors ${\sigma}_z$ and ${\sigma}_b$ were chosen from Monte Carlo studies to be 10 mm and 2 mm, respectively. Application of a cut on $\chi^2_{_V}$ improves the purity of the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ sample by rejecting poorly determined and incorrect vertices. Vee candidates are subject to a loose cut of $ \chi^2_{_V} \leq 12.0$ in order to provide a maximal vee base available to the user. In practice, the author uses $\chi^2_{_V}$ cuts of 2-3 for \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi's. Should both solutions pass all cuts, the solution with smaller $\Delta z$ is selected. The variables involved in the vee finder as discussed above are visually presented in Figure \ref{fig:c4f1}. \begin{figure}[htp!] \centering \includegraphics[scale=0.56]{c4f1t} \caption{The r-$\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi$ intersection of two helical tracks. The intersection occurs at the point $\rm (x_V,y_V)$ at radius $r_V$. The reconstructed vector has impact parameter $b_V${.}} \label{fig:c4f1} \end{figure} Once the position of the secondary vertex has been determined, the momenta of the two daughter tracks are redetermined with respect to the new vertex, and added to form the vee three vector. \par Two measures of the robustness of the vee finder are the reconstruction efficiency and FWHM (width) of the vee invariant mass peak. We examine the performance with respect to the decay \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi, which is vital to this analysis. Details on \ifmmode{\ifmmode{\mathchar"7003}\else{$\mathchar"7003$}\fi}\else{$\ifmmode{\mathchar"7003}\else{$\mathchar"7003$}\fi$}\fi\ reconstruction can be found elsewhere \cite{skip}. \begin{figure}[htp!] 
\centering \includegraphics[scale=0.6]{c4f2t} \caption{Efficiency for reconstructing the decay \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi\ in $\rm c \bar c$ events versus \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ momentum.} \label{fig:c4f2} \end{figure} Figure~\ref{fig:c4f2} contains a plot of the efficiency for reconstructing \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi\ versus \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ momentum, as determined from Monte Carlo simulation. The efficiency folds in hadronic event selection, single particle tracking, and secondary vertex finding. The efficiency is seen to plateau at $\approx 50 \%$ from 1 to 3 GeV. The momentum dependence of the mean and width of the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ observed in the data is shown in Figure~\ref{fig:c4f3} \begin{figure}[htp!] \centering \includegraphics[scale=0.525]{c4f3t} \caption{Observed mean of the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ mass peak versus \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ momentum.} \label{fig:c4f3} \end{figure} and Figure~\ref{fig:c4f4}. \begin{figure}[htp!] \centering \includegraphics[scale=0.55]{c4f4t} \caption{Observed FWHM of the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ mass peak versus \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ momentum.} \label{fig:c4f4} \end{figure} The invariant mass spectrum is fit to a Gaussian signal and a polynomial background. A systematic deviation from the known \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ mass versus momentum is prominent at low \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ momentum. This is due to energy loss of the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ daughters. The correction for this effect will be presented in the next section. The width obeys an approximately linear relationship as a function of momentum. Convolving this with the observed \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ momentum spectrum yields an average value for the width of the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ mass peak of 9 MeV. The behavior of both the mean and width is in agreement with Monte Carlo simulation of this decay in the CLEO detector. \section{Corrections to Charged Particle Momenta} Tracks in the vertex detector appear largely as straight line segments, making track curvature (hence momentum) measurements the sole responsibility of the drift chamber. Energy lost in interactions with material before the drift chamber will manifest itself in systematically lower invariant masses for reconstructed particles. \begin{table}[ht!]
\centering \caption{Material Preceding the Drift Chamber Detector} \begin{tabular}{|c|c|c|} \hline outer radius (cm) & Material & energy loss (MeV) \\ \hline 7.57 & silver coated beryllium & 0.246 \\ \hline 8.02 & Carbon filament tube & 0.322 \\ \hline 16.44 & VD gas + Carbon filament tube & 0.208 \\ \hline 17.44 & Carbon filament tube & 0.270 \\ \hline \end{tabular} \label{t:4p1} \end{table} The material preceding the drift chamber is collected in Table~\ref{t:4p1}. The material corresponds to the beam pipe, inner and outer VD supports, and drift chamber inner tube, respectively. As described earlier, we choose the Bethe-Bloch form $\rm {dE \over dx} = { A \over \sin\Theta} \cdot {1 \over \beta^2} \cdot \Bigl [ \ln(2m_e \beta^2 \ifmmode{\mathchar"10D}\else{$\mathchar"10D$}\fi^2 / I_0) -\beta^2 \Bigr]$ to model the expected energy loss. The algorithm used was developed by P. Avery \cite{avery3}. The average momentum loss $\rm {dp\over dx} = {1\over \beta}\cdot{dE \over dx}$ is calculated based on the track's momentum and cell entrance angle, for several different mass hypotheses. The momentum loss is added to the measured track in such a way that only the magnitude, not the direction, is altered. The average corrections for pions, kaons, and protons are shown in Figure \ref{fig:c4f5}. \begin{figure}[htp!] \centering \includegraphics[scale=0.55]{c4f5t} \caption{ Average energy loss corrections for tracks as a function of momentum and mass hypothesis. } \label{fig:c4f5} \end{figure} The net result when calculating the invariant mass of several charged tracks is to shift the peak of the mass distribution upward 1-2 MeV, with the FWHM of the distribution largely unaffected. Monte Carlo simulations mirror the effects of the correction observed in the data. \par We can now re-address the properties of \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi's after energy loss corrections are applied. The observed \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ mass (Figure~\ref{fig:c4f6}) \begin{figure}[htp!] \centering \includegraphics[scale=0.575]{c4f6t} \caption{Invariant mass of \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ candidates with $ \chi^2_{_V} \leq 3.0$ from data.} \label{fig:c4f6} \end{figure} is found to be $497.8 \pm 0.1$ MeV, in good agreement with the world average of $497.72 \pm 0.07$ MeV. The \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ mass as a function of momentum (Figure~\ref{fig:c4f7}) \begin{figure}[htp!] \centering \includegraphics[scale=0.575]{c4f7t} \caption{The \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ mass as a function of momentum, after application of the energy loss correction.} \label{fig:c4f7} \end{figure} is now centered about the correct mass to within a few tenths of an MeV over the entire momentum range. \section{Hadron Identification} Hadron (\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi$}\fi, \ifmmode{{\particletwo K}}\else{{\particletwo K}}\fi, \ifmmode{{\particleone p}}\else{{\particleone p}}\fi) identification is accomplished by coherently combining information from the three detector elements with this capability: DR1.5, dE/dX, and TF. Complete details of the hadron identification system can be found in \cite{skip}. The hadron identification algorithm is unsophisticated in nature yet powerful in performance.
For each device, a ``probability" is calculated for each of the three mass hypothesis $\rm p_{HYP i}$ ( 1 = \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi$}\fi, 2 = \ifmmode{{\particletwo K}}\else{{\particletwo K}}\fi, 3 = \ifmmode{{\particleone p}}\else{{\particleone p}}\fi) defined by: $$ \rm p_{HYP i} = \exp^{- {1 \over2}\left( { M_{DEV} - E_{HYPi}} \over { \sigma_{HYPi} } \right)^{\Large 2}}$$ where $\rm M_{DEV}$ are the measured values (mean of lowest 50\% pulse height for DR1.5 and dE/dx, time for the TF), $\rm E_{HYPi}, \sigma_{HYPi}$ are the expected measurement and resolution for each device, respectively. The distributions of $\rm M_{DEV}$ for each of the three devices \begin{figure}[htp!] \centering \includegraphics[scale=0.525]{c4f8t} \caption{Mean of the lowest 50\% of the pulse heights from the drift chamber detector versus track momentum.} \label{fig:c4f8} \end{figure} \begin{figure}[htp!] \centering \includegraphics[scale=0.525]{c4f9t} \caption{Mean of lowest 50\% of the pulse heights from the dE/dx chamber detector versus track momentum.} \label{fig:c4f9} \end{figure} \begin{figure}[htp!] \centering \includegraphics[scale=0.575]{c4f10t} \caption{Measured velocity in the Time of Flight detector versus track momentum measured in the drift chamber.} \label{fig:c4f10} \end{figure} (Figures \ref{fig:c4f8} - \ref{fig:c4f10}) all show distinctive \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi$}\fi, \ifmmode{{\particletwo K}}\else{{\particletwo K}}\fi, and \ifmmode{{\particleone p}}\else{{\particleone p}}\fi\ bands. $\rm E_{HYPi} \ and \ \sigma_{HYPi}$ are the expected detector responses for the mean and width of three species, as determined from non-trivial measurements from the data. Since dE/dx and the TF detectors are separated form the drift chamber by the superconducting coil, these devices can only be used if successful matches are made to drift chamber tracks. The overall hypothesis is calculated from the product of the probabilities for each functional device: $$\rm P_{HYPi} = \prod p_{HYPi}$$ $\rm P_{HYPi}$ is set to 1 for each hypothesis if there was no information available. To discern among the species, we define a normalized weight: $$\rm W_{HYPj} = { P_{HYPj} \over {\sum_{i=1}^3 P_{HYPi}} }$$ The normalized weight is set to zero if there was no information available $ \left( \rm \sum_{i=1}^3 P_{HYPi} =3 \right) $ or the information available favored none of the three hypotheses $\rm \left( \sum_{i=1}^3 P_{HYPi} \leq 0.001 \right) $ \par The utility of this approach is pragmatically illustrated for the case of \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP. The \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ candidates have momentum greater than 2.5 GeV, with the momentum of pions greater than 0.3 GeV. \begin{figure}[p!] 
\centering \includegraphics[scale=0.6]{c4f11t} \caption{ \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ candidates with momentum greater than 2.5 GeV. a) $ W_{K} \geq 0.1 $. b) $\rm W_{K} \geq 0.7$ and $\rm W_{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi} \geq 0.2$. } \label{fig:c4f11} \end{figure} Figure~\ref{fig:c4f11} a) shows the invariant mass plot where the kaon candidate has $\rm W_{\ifmmode{{\particletwo K}}\else{{\particletwo K}}\fi} \geq 0.1$, while in b) the kaon candidate is tightly identified $\rm W_{\ifmmode{{\particletwo K}}\else{{\particletwo K}}\fi} \geq 0.7$ and the pion candidates are loosely identified$\rm W_{\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi$}\fi} \geq 0.2$. The reduction in the background is startling. Obtaining a \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ mass peak with a signal to noise of $1:1$ was a significant experimental achievement. This has allowed CLEO to measure the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ lifetime with the world's second largest \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ sample. \section{Monte Carlo Simulation} In order to complete physics measurements, computer modeling of the detector and the 10 GeV \ifmmode{e^+e^-}\else{$e^+e^-$}\fi environment is required. For this analysis, Monte Carlo methods are used to determine the detector acceptance for specific \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ decay modes, along with backgrounds from \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ decays. We simulate the byproducts of \ifmmode{e^+e^-}\else{$e^+e^-$}\fi\ annihilations using a modified \cite{art} version of the LUND \cite{lund} QQJET generator. For this analysis all Monte Carlo events were generated using $\rm c\bar c$ jets, including gluon radiation. The center-of-mass energy was defined to be 10.56 GeV, approximately the average run energy of this data set. Effects of initial state radiation of the beams were not compensated. The strategy for studying a particular decay mode was to force the particle (\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi, \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi) to decay into the mode in question, while the anti-particle (\ifmmode{\overline{{\particleone D}}\tiny^0, \ifmmode{{\particleone D}^-}\else{{\particleone D}$^-$}\fi) was allowed to decay freely. Each $\rm c\bar c$ event thrown was selected for further analysis only if it contained the particle under study. 
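The event-selection logic of this generation strategy amounts to a simple filter, summarized in the sketch below (Python, purely illustrative; the generator callable and the event record structure are assumed stand-ins, not the actual interface of the modified QQJET program).
\begin{verbatim}
# Illustrative sketch of the forced-decay event filter described above.
# The generator callable and the particle attributes are assumptions.

def contains_forced_decay(event, parent, daughters):
    """True if the event contains `parent` decaying to `daughters`."""
    for particle in event.particles:
        if (particle.name == parent
                and sorted(particle.daughter_names) == sorted(daughters)):
            return True
    return False

def generate_forced_sample(generate_ccbar_event, n_wanted,
                           parent="D+", daughters=("K0S", "pi+")):
    """Keep only c-cbar events containing the particle under study."""
    kept = []
    while len(kept) < n_wanted:
        event = generate_ccbar_event()   # one c-cbar event at 10.56 GeV
        if contains_forced_decay(event, parent, daughters):
            kept.append(event)
    return kept
\end{verbatim}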
Since much of this analysis involves neutral kaons, when the \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ meson decayed it was forced only to decay to a \ifmmode{{\particleone K}^0}\else{{\particleone K}$^0$}\fi\ or \ifmmode{\overline{{\particleone K}}\tiny^0, the event being selected only if the final state \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi\ was thrown. In this way the Monte Carlo was free from bias from either an excessive number of \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi's or charged pions. All efficiencies are calculated with respect to the final state \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi, and later rescaled for the branching ratio $\rm B( \ifmmode{\overline{{\particleone K}}\tiny^0\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi) \cdot B(\ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi)$. \par \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ mesons are fragmented according to the Peterson recipe. Because of the symmetric LUND fragmentation scheme, when two objects are produced in the fragmentation process each with a mass of order the beam energy, the first object tends to receive a larger share of the available energy-momentum. This distorts the fragmentation distribution. This effect is mitigated by calculating efficiencies over a small momentum range and performing a summation. This method also makes calculation of Monte Carlo parameters insensitive to the fragmentation model. \par The collection of vectors and vertices for each selected event is then propagated through a detector simulation to mimic CLEO raw data. Detector response in terms of effective efficiency and resolution is folded into the process. The ``false data" is passed through the CLEO compress system, and is subjected to the final tracking algorithm DUET. The data at this stage is identical to real data, and can be analyzed by the same program. The entire process requires $\simeq$ 1.5 VAX 8600 CPU hours for about 1000 events, making Monte Carlo generation one of the most CPU-intensive tasks in an analysis project. \par The detector resolution in most cases dominates the width of an observed mass peak. Monte Carlo predicted widths have been in respectable agreement with the data, indicating a proper simulation of the tracking. Information for neutral particles is approximated to first order based on their trajectories and shower counter efficiencies. The hadronic event selection criteria require 250 MeV deposited in the octant shower counters.
Thorough modeling of this detector would require the CPU-intensive EGS shower simulation, so this constraint was relaxed for Monte Carlo events. While charged particles lose energy as they traverse the detector, response of the devices that measure this energy loss is not simulated. The efficiency for identifying hadrons is measured \cite{skip} directly from the data (Figure \ref{fig:c4f12}) using ``pure" samples of pions (\ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi), kaons (\ifmmode{\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi}\else{$\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^+}\else{{\particleone K}$^+$}\fi\ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi {\rm and } \ifmmode{{\particleone D}^{*+}}\else{{\particleone D}$^{*+}$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi, \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi), and protons (\ifmmode{\ifmmode{\mathchar"7003}\else{$\mathchar"7003$}\fi}\else{$\ifmmode{\mathchar"7003}\else{$\mathchar"7003$}\fi$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone p}}\else{{\particleone p}}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi$}\fi). \begin{figure}[htp!] \centering \includegraphics[scale=0.62]{c4f12t} \caption{Efficiencies for detecting kaons with weights $ W_K \geq 0.1$ (squares), $ W_K \geq 0.4 $ (crosses), and $ W_K \geq 0.7$ (circles). Also shown are the efficiencies for measuring pions with the same kaon weights. For tracks with $|\cos\Theta| > 0.6$ only information from the drift chamber is available.} \label{fig:c4f12} \end{figure} These efficiencies are injected into the Monte Carlo by simply throwing a random number between 0 and 1, and comparing it to the expected efficiency. While this empirical approach assures to first order proper detector response, the selection of the modes used to produce the ``pure sample" might introduce systematic effects. \section{Two Body Decay Kinematics} Important insight into the decay mechanism of a parent particle can be gleaned by examining the angular distribution of the decay products. Observation of an angular distribution consistent with an anticipated polarization (or lack thereof) can help substantiate a particular decay hypothesis. Also, an anisotropy of the angular distribution of the background can be used to enhance the signal. A well quantified formalism \cite{jeffr} exists for analyzing decays which are either two body or consist of a series of sequential two body decays.
Termed the helicity formalism, the following relation can be derived for the angular distribution of a two body decay in the rest frame of the decaying particle: $$ \rm { d \sigma \over d\Omega_f }(\theta_f, \ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi_f) = \sum_{\lambda_1,\lambda_2} {\Biggl | {\left( { 2J + 1 \over 4\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi } \right)}^{1\over 2} D^{J \ \ast}_{M \lambda} \left(\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi_f, \theta_f, \ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi_f \right) A_{\lambda_1,\lambda_2} \Biggr |}^{2} $$ Here $\theta_f, \ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi_f$ are the angles of the two body decay axis with respect to the spin quantization (z) axis of the parent. J and M are the spin and z projection of the parent particle. ${\lambda_1,\lambda_2}$ are the helicities of the daughters, as defined by $\left( \lambda_i = \vec {s_i} \cdot \hat{p_i} \right)$, and $\lambda$ is the overall helicity $\lambda_1-\lambda_2$, which is subject to the constraint $\rm J \geq | \lambda |$. The $D^{j }_{m' m}$ functions have the definition: $$ D^{j }_{m' m} \left( \alpha, \beta, \ifmmode{\mathchar"10D}\else{$\mathchar"10D$}\fi \right)= e^{-i\alpha m'} d^j_{m' m}(\beta)e^{-i\ifmmode{\mathchar"10D}\else{$\mathchar"10D$}\fi m}$$ where $d^j_{m' m}$ are the ``d" functions \cite{rpp}. The ``d" functions are connected to the spherical harmonics through the relation: $$ d^{l }_{m 0} \propto Y^l_m(\theta,\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi)e^{-im\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi}$$ The factor $A_{\lambda_1,\lambda_2}$ is related to the decay matrix element, but does not contain any angular information. We can neglect the overall normalization (and in most instances the $\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi$ dependence) to get at the general character of the decay: $$\rm { d \sigma \over d\Omega_f }(\theta_f, \ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi_f) \propto \sum_{\lambda_1,\lambda_2} | d^J_{M \lambda}(\theta_f) |^2$$ The first case to consider is the frequently encountered decay of a pseudoscalar decaying into two pseudoscalars (P \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ PP). Since everything in this decay is in a state of zero angular momentum, J=M=$\lambda$=0 and $$\rm { d \sigma \over d\Omega_{PP} }(\theta_{PP}, \ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi_{PP}) \propto | Y^0_0|^2 = 1$$ hence the decay axis (PP) is isotropically distributed. \begin{figure}[p!] \centering \includegraphics[scale=0.6]{c4f13t} \caption{ a) Rest frame decay angle (RFDA) for a P \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ PP decay. b) polarization angle $\cos\Theta_{P'P''}$ for P \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny \ PV, V\ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ PP.} \label{fig:c4f13} \end{figure} Since there exists no particular quantization axis, we choose the reference direction to be that of the momentum of the parent particle in the laboratory frame. We define the normalized dot product of the decay axis and the reference vector to be the rest frame decay angle (RFDA). The second angle of interest results from a polarization in the decay chain pseudoscalar \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ pseudoscalar, vector (P \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ PV), where the vector particle subsequently decays into two pseudoscalars (V \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ $\rm P'P''$).
Because the parent has J=0, the difference of the helicities of the daughters is $\lambda_1-\lambda_2 = \lambda \equiv 0$. This polarizes the vector particle into the helicity zero state. The PV decay axis is also isotropically distributed. $$\rm { d \sigma \over d\Omega_{PV} }(\theta_{PV}, \ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi_{PV}) \propto | Y^0_0|^2 = 1$$ We now examine the decay of the vector particle in its rest frame. The quantization direction has already been defined as the PV decay axis. The two vector daughters $\rm P'P''$ are then boosted into the vector rest frame. The vector particle has $J=1, M=0$, and both daughter particles have helicity zero, by definition. The angular distribution of the $\rm P'P''$ decay axis in the V rest frame with respect to the PV decay axis (Figure \ref{fig:c4f13}) is $$\rm { d \sigma \over d\Omega_{P'P''} }(\theta_{P'P''}, \ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi_{P'P''}) \propto | Y^1_0|^2 = \cos^2\theta _{P'P''}$$ \subsection{Satellite Mass Peaks} The ramifications of this polarization are profound. In the laboratory frame, the two vector daughters will have markedly different total momenta, with one often being produced almost at rest. The invariant mass formed from 2 of the 3 final state particles, neglecting the particle produced with very small momentum, will differ from the parent mass largely by the mass of the neglected daughter. Such a distribution has a distinctly non-Gaussian shape, and is deemed a ``satellite'' mass peak. \begin{figure}[htp!] \centering \includegraphics[scale=0.65]{c4f14t} \caption{Monte Carlo simulation of the decay \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"11A}\else{$\mathchar"11A$}\fi^+}\else{$\ifmmode{\mathchar"11A}\else{$\mathchar"11A$}\fi^+$}\fi, \ifmmode{\ifmmode{\mathchar"11A}\else{$\mathchar"11A$}\fi^+}\else{$\ifmmode{\mathchar"11A}\else{$\mathchar"11A$}\fi^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^0}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^0$}\fi. The invariant mass spectrum is formed from the \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ and \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi.
} \label{fig:c4f14} \end{figure} Figure~\ref{fig:c4f14} shows a Monte Carlo simulation of the decay \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"11A}\else{$\mathchar"11A$}\fi^+}\else{$\ifmmode{\mathchar"11A}\else{$\mathchar"11A$}\fi^+$}\fi, \ifmmode{\ifmmode{\mathchar"11A}\else{$\mathchar"11A$}\fi^+}\else{$\ifmmode{\mathchar"11A}\else{$\mathchar"11A$}\fi^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^0}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^0$}\fi, where the invariant mass is formed from the \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ and \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi. The two-lobed structure corresponds to the cases in which the \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ is the slow or the fast daughter of the polarized \ifmmode{\ifmmode{\mathchar"11A}\else{$\mathchar"11A$}\fi^+}\else{$\ifmmode{\mathchar"11A}\else{$\mathchar"11A$}\fi^+$}\fi. Because the low mass lobe is much broader and in a region of extremely high background, it is not visible in the data. Information can be extracted from satellite peaks, provided a proper form can be determined to fit the peak. Extensive study by the author has demonstrated that a form termed the Bifurcated Gaussian provides very agreeable fits to satellite peaks. This curve is composed of two Gaussian peaks that share a mean, but have different areas and widths on either side of the mean. Continuity of the curve is guaranteed by forcing the constraint $ \rm { A_1 \over F_1} = { A_2 \over F_2}$, where $\rm A_i$ and $\rm F_i$ are the areas and widths of the two halves. \begin{figure}[htp!] \centering \includegraphics[scale=0.560]{c4f15t} \caption{Invariant mass spectrum for the decay \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi. A satellite peak is observed in the 1.4 - 1.6 GeV region. } \label{fig:c4f15} \end{figure} Figure~\ref{fig:c4f15} shows the invariant mass distribution for \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi. A satellite mass peak is clearly evident in the 1.4-1.6 GeV region. The \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ peak is fit to a Gaussian peak, while the satellite peak is fit to a Bifurcated Gaussian. Fitting a satellite peak to a Gaussian shape underestimates the area by 10-15\%. \section{Analysis Architecture} Having collected all the techniques and machinery employed in this project, let us examine how they are used in concert to derive results. Since most of this analysis involves the reconstruction of exclusive decays using charged particle tracking and the invariant mass technique, the first step required is to prepare a combinatoric driver, sketched schematically below.
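The sketch that follows is a skeletal illustration of what such a driver does (Python, for exposition only; the four-momentum tuples and the input candidate lists are assumptions, and the actual driver used in this analysis is mode specific and not reproduced here).
\begin{verbatim}
# Illustration only: a generic combinatoric driver for a two-body
# hypothesis such as D+ -> K0S pi+.  Four-momenta are (E, px, py, pz).
from math import sqrt

def invariant_mass(p4_list):
    """Invariant mass of the sum of the supplied four-momenta."""
    E  = sum(p[0] for p in p4_list)
    px = sum(p[1] for p in p4_list)
    py = sum(p[2] for p in p4_list)
    pz = sum(p[3] for p in p4_list)
    m2 = E*E - (px*px + py*py + pz*pz)
    return sqrt(m2) if m2 > 0.0 else 0.0

def combinations_for_mode(k0s_p4s, pion_p4s):
    """Yield (mass, K0S index, pion index) for every K0S / pi+ pairing."""
    for i, k0s in enumerate(k0s_p4s):
        for j, pion in enumerate(pion_p4s):
            yield invariant_mass([k0s, pion]), i, j
\end{verbatim}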
The combinatoric driver generates suitable combinations of charged tracks and secondary vertices specific for each decay mode. The most difficult task in using the invariant mass technique is to isolate a signal. This is accomplished by developing a set of physically motivated cuts to simultaneously maximize the observed signal and provide the highest possible rejection of spurious combinations which form the background. For this analysis, the cut development was accomplished by writing out a binary word for each event containing all the relevant information into a disk file, which ranged in length from 2.5 - 100 K. This decreased the cut development time by a large factor, since after passing through the data once to create the file, the entire set could be reanalyzed in a matter of minutes. More importantly, this made the cut development largely an interactive process. After a signal has been isolated, the detector acceptance must be determined from Monte Carlo. Two methods are used to extract the Monte Carlo parameters. This redundancy assures that these parameters are properly determined, as well as demonstrating the correctness of the combinatoric driver. In the first method the Monte Carlo events are subjected to the full data analysis system. Since the Monte Carlo was generated with \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi's and \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi's decaying into the desired final state, decays of the antiparticles into analogous final states are eliminated. The second (TAGGER) method involves using additional stored information on the Monte Carlo data which ``tags'' the original Monte Carlo tracks to the tracks eventually found in the simulated event by the track finder. The decay products in the event are traced down to particles observable in the detector, these trajectories are compared to the found tracks, and the Monte Carlo trajectory associated with the most hits on the track is ``assigned" to the track. For each event the initial \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ or \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ Monte Carlo track is found, and its generated momentum is calculated. The decay daughters are found, and tested for tagged matches to drift chamber tracks. If all daughter tracks are matched, all standard analysis parameters for that group of daughter tracks are calculated. The Monte Carlo signals are fit to a Gaussian signal and a polynomial background, and the Monte Carlo parameters are calculated from a weighted average of the two methods. The agreement \begin{table}[ht!] \centering \caption{Comparison of Monte Carlo Efficiencies for Reconstructing the Decay Mode \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi} \begin{tabular}{|c|c|c|} \hline x range & Method 1 & Method 2 (TAGGER) \\ \hline $ 0 \leq {\rm x} < 0.375 $ &$0.381 \pm\ 0.009 $& $0.373 \pm 0.009$\\ \hline $ 0.375 \leq {\rm x} < 0.51 $&$0.346 \pm\ 0.009 $& $0.347 \pm 0.009$\\ \hline $ 0.51 \leq {\rm x} < 0.625 $&$0.358 \pm\ 0.009 $& $0.352 \pm 0.009$\\ \hline $ 0.625 \leq {\rm x} < 0.750 $&$0.371 \pm\ 0.009 $& $0.365 \pm 0.009$\\ \hline $ 0.750 \leq {\rm x} < 0.875 $&$0.363 \pm\ 0.010 $& $0.357 \pm 0.010$\\ \hline $ 0.875 \leq {\rm x} \leq 1.0 $&$0.369 \pm\ 0.020 $& $0.365 \pm 0.020$\\ \hline \end{tabular} \label{t:4p2} \end{table} between the two methods is excellent, as evidenced from the example listed in Table \ref{t:4p2} which compares the two methods in calculating the parameters for the decay \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi. Although we believe the extracted parameters to be correct for a given Monte Carlo, we add an additional 5\% systematic error for uncertainties in the generator and the simulation of the detector. When determining fragmentation distributions, because of limited statistics we choose, when performing fits to the data, to constrain the mean and the FWHM of the Gaussian signal to be within $\ifmmode{\pm}\else{$\pm$}\fi$ 3 standard deviations of the Monte Carlo parameters. This is a more physical approach than an exactly constrained fit (with or without smoothing), as it can adjust for systematic errors in the Monte Carlo simulation. The analysis structure is summarized in Figure~\ref{fig:c4f16}. \begin{figure}[htp!] \centering \includegraphics[scale=0.425]{c4f16t} \caption{Data analysis flow diagram. } \label{fig:c4f16} \end{figure} \chapter{ Production and Decay of Charged \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ Mesons} The \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ meson is the spin 0 charged ground state of the charmed, nonstrange mesons. Its mass is measured to be $1.8693 \pm\ 0.0006$ GeV. Initially, the properties of \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ mesons were studied in \ifmmode{e^+e^-}\else{$e^+e^-$}\fi\ experiments running on the \ifmmode{\ifmmode{\mathchar"120}\else{$\mathchar"120$}\fi''}\else{$\ifmmode{\mathchar"120}\else{$\mathchar"120$}\fi''$}\fi\ resonance. This was an optimal running location since the \ifmmode{\ifmmode{\mathchar"120}\else{$\mathchar"120$}\fi''}\else{$\ifmmode{\mathchar"120}\else{$\mathchar"120$}\fi''$}\fi\ decays into a \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ifmmode{\overline{{\particleone D}}\ pair produced almost at rest. Experiments performed at this energy were able to accumulate large samples of \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ mesons, and successfully reconstruct a vast number of \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ decay modes.
Not accessible to these experiments was the opportunity to study the charm hadronization process or to measure the lifetimes of the \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ mesons. \par In \ifmmode{e^+e^-}\else{$e^+e^-$}\fi\ annihilations at higher center of mass energies, charm production accounts for $ 4 \over 10$ ($4 \over 11$ beyond the $\Upsilon$ region) of the total hadronic cross section, allowing for a fairly copious production of charmed particles. This is offset by a superabundant combinatorial background, which impedes the isolation of charm decay signals. For the \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ and \ifmmode{{\particleone D}^{*+}}\else{{\particleone D}$^{*+}$}\fi\ mesons, this problem is obviated by exploiting the well known mass difference of the two states using the decay chain \ifmmode{{\particleone D}^{*+}}\else{{\particleone D}$^{*+}$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi, \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ X. Using this technique, \ifmmode{e^+e^-}\else{$e^+e^-$}\fi\ experiments in the energy range of $10-50$ GeV have been able to study the properties of these two mesons. No such kinematical artifice exists for \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ mesons, as the mass difference between the \ifmmode{{\particleone D}^{*0}}\else{{\particleone D}$^{*0}$}\fi\ and the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ forbids charged pion transitions. Additional difficulties complicate the reconstruction of \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ hadronic decays. The \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ has a large branching ratio into semileptonic modes $( \rm B ( \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{e^+}\else{$e^+$}\fi X ) = 17.0 \pm 1.9 \pm 0.7 \ \% )$ \cite{dmc}. and the mass splitting of the \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ and \ifmmode{{\particleone D}^*}\else{{\particleone D}$^*$}\fi\ mesons allows cascades from the \ifmmode{{\particleone D}^*}\else{{\particleone D}$^*$}\fi's to \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi's to occur only one third of the time. Since \ifmmode{{\particleone D}^*}\else{{\particleone D}$^*$}\fi\ mesons are favored in the charm hadronization process $\rm \left( {\ifmmode{e^+e^-}\else{$e^+e^-$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone D}^*}\else{{\particleone D}$^*$}\fi\ \over \ifmmode{e^+e^-}\else{$e^+e^-$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ + \ifmmode{{\particleone D}^*}\else{{\particleone D}$^*$}\fi\ } \right) \simeq 0.7 \pm 0.2 $, this manifests itself in an inclusive \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ cross section which is about a factor of two larger than the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi. Thus, the lack of a convenient reconstruction method and low production rates have inhibited experimental study of \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ mesons. 
\par The excellent charged particle tracking system coupled with hadron identification capabilities makes the CLEO experiment a highly competitive facility for studying charmed particles, including \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ mesons. The sample of reconstructed \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ mesons used in this analysis, on a mode by mode basis, is larger than that of the MARK III experiment. The MARK III group has been a leading source of information on \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ decays, and has measured the greatest number of \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ meson decay modes. Here we will detail the measurement of three Cabibbo-allowed hadronic decays of the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi. They are \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi, \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi, and \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP. For comparison and use in future chapters, we also present a measurement of \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi. Signal isolation techniques and corrections for physics backgrounds will be discussed. Fragmentation distributions and relative branching ratios will be compared, and total cross sections for \ifmmode{e^+e^-}\else{$e^+e^-$}\fi\ continuum production will be estimated. Information on the CLEO measurement of the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ lifetime can be found elsewhere \cite{haasp}. The measurements of the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ modes are the first to be done at an \ifmmode{e^+e^-}\else{$e^+e^-$}\fi\ experiment at an energy greater than the \ifmmode{\ifmmode{\mathchar"120}\else{$\mathchar"120$}\fi''}\else{$\ifmmode{\mathchar"120}\else{$\mathchar"120$}\fi''$}\fi, and the measurements of the fragmentation distributions of these decay modes are the first such measurements. \section{Preliminaries} \begin{table}[ht!]
\centering \caption{MARK III \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ Meson Branching Fractions} \begin{tabular}{|c|c|} \hline Decay Mode & Branching Fraction (\%)\\ \hline \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ \hfill & 3.2$\pm$\ 0.5 $\pm$\ 0.2\\ \hline \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi\ \hfill & 6.6 $\pm$\ 1.5 $\pm$\ 0.5\\ \hline \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ \hfill & 9.1 $\pm$\ 1.3 $\pm$\ 0.4\\ \hline \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi\ \hfill & 6.4 $\pm$\ 0.5 $\pm$\ 1.0\\ \hline \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^+}\else{{\particleone K}$^+$}\fi\ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi \hfill & 0.51 $\pm$\ 0.09 $\pm$\ 0.07\\ \hline \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi \hfill & 4.2 $\pm$\ 0.4 $\pm$\ 0.4 \\ \hline \end{tabular} \label{t:5p1} \end{table} All events under consideration here have passed the hadronic selection criteria. The invariant mass for \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ candidates is formed only from ``good tracks" which have been corrected for energy loss in the material preceding the drift chamber detector. For calculation of continuum cross sections, we used only data taken in the region of the \ifmmode{\Upsilon{\particleone (4S)}\ (77.7 \pb\ on resonance, 36 \pb\ below resonance). The 33 \pb\ of \ifmmode{\Upsilon{\particleone (3S)}\ data is also used for specific measurements; we choose not to use this data to calculate cross sections because of the high combinatorial backgrounds in this region and the possibility of contamination from the process \ifmmode{\Upsilon{\particleone (3S)}\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ $c\bar c${.} \par Since \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ mesons can also be produced in the decay of \ifmmode{{\particletwo B}}\else{{\particletwo B}}\fi\ mesons, this \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ production mechanism must be excluded.
A Monte Carlo study of 11.5 K \ifmmode{{\particletwo B}}\else{{\particletwo B}}\fi\ifmmode{\overline{{\particleone B}}\ decays where there was at least 1 \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ in the event found no \ifmmode{{\particletwo B}}\else{{\particletwo B}}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi X decays where the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ had a momentum greater than 2.5 GeV. We prudently select momentum cutoff for \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ candidates of $\rm p \geq 2.52 GeV \ (\rm x = {p\over p_{max}} \geq 0.51)$ to use the on resonance data. To calculate the differential cross section we elect to use the kinematical variable $ \rm x = { p\over p_{max}} = {p \over \sqrt{E^2_{beam} - m^2_{had}} }$. While the approximate ``light-cone" variable $ x^+ = \rm \ {{\kern -1.1em(E+P)}\over (E+P)_{max}} $ is described as being the most suitable for comparing fragmentation distributions at different energies, this is mitigated by radiative effects. This variable also suffers from systematic distortions at low $x^+$. The x variable ranges from 0 to 1 for all experiments and is much more useful in fitting and visualizing the data. Finally, for comparison of the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ meson decay rates, and to convert those measurements into cross sections we collect the most recent MARK III \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ meson branching ratios \cite{adler1} in Table \ref{t:5p1} and the measurements \begin{table}[ht!] \centering \caption{MARK III Resonant Substructure of Three Body Decays} \begin{tabular}{|c|c|c|} \hline Decay Mode & Fraction & Branching Fraction (\% )\\ \hline \multicolumn{3}{|c|}{\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ } \\ \hline \ifmmode{\overline{{\particleone K}}\tiny^{*0} \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi \hfill & 13 $\pm$\ 1 $\pm$\ 7 & 1.8 $\pm$\ 0.2 $\pm$\ 1.0 \\ \hline non-resonant \hfill & 79 $\pm$\ 7 $\pm$\ 15 & 7.2 $\pm$\ 0.6 $\pm$\ 1.8 \\ \hline \multicolumn{3}{|c|}{\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi\ } \\ \hline \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"11A}\else{$\mathchar"11A$}\fi^0}\else{$\ifmmode{\mathchar"11A}\else{$\mathchar"11A$}\fi^0$}\fi\ \hfill & 12 $\pm$\ 1 $\pm$\ 7 & 0.8 $\pm$\ 0.1$\pm$\ 0.5 \\ \hline \ifmmode{{\particleone K}^{*-}}\else{{\particleone K}$^{*-}$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ \hfill & 56 $\pm$\ 4 $\pm$\ 5 & 5.3 $\pm$\ 0.4 $\pm$\ 1.0 \\ \hline non-resonant \hfill & 33 $\pm$\ 5 $\pm$\ 10 & 2.2 $\pm$\ 0.3 $\pm$\ 0.7 \\ \hline \end{tabular} \label{t:5p2} \end{table} of resonant substructure of three 
body \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ meson decays \cite{adler2} in Table \ref{t:5p2}. \section{ \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi} \subsection{Signal Isolation} The first approach used in isolating a \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ signal is to use decay modes which contain a secondary vertex, in this circumstance a \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi. Since approximately 10 \% of the events contain a \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ candidate, this dramatically reduces the combinatorial background. We observe the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ through its decay into two charged pions. To enhance the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ signal we require that the secondary vertex quality factor, $\chi^2_V$, be less than 2.0, and that the invariant mass of the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ formed from its two daughter tracks be within 30 MeV of the nominal \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ mass. The effect of the $\chi^2_V$\ cut was tested by calculating the efficiency corrected number of \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ candidates for several different values of $\chi^2_V$. The numbers were found to be completely consistent. To test the loose mass cut, we formed \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ candidates where the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ candidates were selected from a similar mass band centered at 400 MeV (Figure \ref{fig:c5f1}); no enhancement is evident. \begin{figure}[htp!] \centering \includegraphics[scale=0.6]{c5f1t} \caption{ \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ candidates where the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ mass was selected from a side band region centered at 400 MeV.} \label{fig:c5f1} \end{figure} Since this is a two body decay, restrictive cuts are placed on the momentum of the track not associated with the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi. We require that the single pion track have a momentum greater than 0.4 GeV. This track was also required not to be consistent with originating from a secondary vertex.
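The cuts just described can be collected into a single schematic selection function, shown below (Python, illustrative only; the argument names and units are assumptions, not the variables of the actual analysis code).
\begin{verbatim}
# Schematic summary of the D+ -> K0S pi+ selection described above.
# Argument names and units (GeV) are illustrative assumptions.
K0S_MASS = 0.4977   # GeV

def pass_dplus_kspi_selection(chi2_v, ks_mass, pion_p,
                              pion_from_secondary_vertex, x):
    if chi2_v >= 2.0:                      # secondary vertex quality factor
        return False
    if abs(ks_mass - K0S_MASS) > 0.030:    # K0S mass within 30 MeV of nominal
        return False
    if pion_p <= 0.4:                      # bachelor pion momentum cut
        return False
    if pion_from_secondary_vertex:         # pion must not come from a vee
        return False
    return x >= 0.51                       # x = p / p_max of the D+ candidate
\end{verbatim}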
\par The \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ candidate four momentum is calculated by adding the four momenta of the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ and \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ candidates, where the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ four momentum was calculated from the three-momentum of the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ as determined from the secondary vertex finding algorithm, and defining the mass to be that of the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi. A plot of the invariant mass of \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ candidates with $\rm x \geq 0.51$ is shown in Figure \ref{fig:c5f2}. \begin{figure}[htp!] \centering \includegraphics[scale=0.6]{c5f2t} \caption{Mass spectrum for \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ candidates with $\rm x \geq 0.51${.}} \label{fig:c5f2} \end{figure} We fit the mass spectrum to a Gaussian signal plus a third order polynomial background. The signal was found at a mass of $ 1871.4 \pm 3.1 $ MeV/$c^2$ and a FWHM of $48.3 \pm 8.0$ MeV/$c^2$ which are consistent with a Monte Carlo simulation of this decay in the CLEO detector. \subsection{Background} In addition to the combinatorial background, which is a smoothly varying function of \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ candidate momentum, certain physics processes cause anomalous enhancements to the combinatorial background. The final state \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ is also a subset of the decays \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"11A}\else{$\mathchar"11A$}\fi^+}\else{$\ifmmode{\mathchar"11A}\else{$\mathchar"11A$}\fi^+$}\fi, \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^{*-}}\else{{\particleone K}$^{*-}$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi. Because of the polarization of the vector particle, combining the pseudoscalar daughter of the \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ with only one of the daughters of the vector particle will produce a satellite peak. Evidence for a satellite peak appears in the region of 1.4 to 1.7 GeV/$c^2$ and was excluded from the fit. \par A particularly difficult situation arises when the enhancement occurs in the signal region. This is a known problem for the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi, which is plagued by reflections from the \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi. 
A \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ final state which is identical to a \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ final state with the exception of one pion in the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ state replaced with a kaon in the \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ final state can produce an enhancement in the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ region if the \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ kaon is misconstrued as a pion. Possibilities also exist for a similar confusion of \ifmmode{{\particleone p}}\else{{\particleone p}}\fi\ from a \ifmmode{\ifmmode{\mathchar"7003}\else{$\mathchar"7003$}\fi_c}\else{$\ifmmode{\mathchar"7003}\else{$\mathchar"7003$}\fi_c$}\fi\ decay, however the reflection from this decay does not significantly overlap with the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ region. \par The author has developed a technique for quantifying $\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi \Leftrightarrow \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi$ reflections for two body decays. While examining the properties of the invariant mass \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{{\particleone K}^+}\else{{\particleone K}$^+$}\fi\ when the \ifmmode{{\particleone K}^+}\else{{\particleone K}$^+$}\fi\ was misidentified as a \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi, it was noticed that at high momentum $(\rm p \geq\ 2.5 \ GeV)$ the reflected mass had a strong dependence on the Rest Frame Decay Angle (RFDA - section 4.5), which is defined here as the cosine of the angle $\theta$ of the `pion' in the `\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi' center-of-mass frame with respect to the `\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi' direction in the laboratory frame. A Monte Carlo prediction for this dependence is shown in Figure \ref{fig:c5f3}. \begin{figure}[htp!] \centering \includegraphics[scale=0.60]{c5f3t} \caption{Reflected mass of \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{{\particleone K}^+}\else{{\particleone K}$^+$}\fi\ candidates with momentum $\geq$ 2.5 GeV versus the RFDA (see text).} \label{fig:c5f3} \end{figure} In a certain sector of RFDA $( \rm RFDA \leq -0.2) $ the \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ reflection does not contaminate the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ region. Thus it becomes possible to decompose the \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{{\particleone K}^+}\else{{\particleone K}$^+$}\fi\ reflection into parts which do (contaminated region) and do not (pure region) populate the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ region. \begin{figure}[p!] 
\centering \includegraphics[scale=0.65]{c5f4t} \caption{Monte Carlo Simulation of the decay \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{{\particleone K}^+}\else{{\particleone K}$^+$}\fi\ where the invariant mass has been calculated calling the \ifmmode{{\particleone K}^+}\else{{\particleone K}$^+$}\fi\ a \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi. The invariant mass is plotted for two regions of RFDA; a) RFDA $ < -0.2$ (pure region) and b) RFDA $\geq -0.2$ (contaminated region). Only the events in b) significantly overlap the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ region.} \label{fig:c5f4} \end{figure} Figure \ref{fig:c5f4} illustrates a Monte Carlo simulation of the distribution of reflected \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ mass when segregated by RFDA. In addition, the broad peak observed in the contaminated region would change the width of the observed \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ peak in that region depending on the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ : \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ ratio. \begin{table}[ht!] \centering \caption{Monte Carlo Predicted Contamination Ratios, $p \geq 2.5$ GeV } \begin{tabular}{|c|c|c|c|} \hline $ \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ : \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ $ &P = $\rm {N_{contaminated}\over N_{pure}}$&pure FWHM (MeV)&cont FWHM (MeV)\\ \hline Pure $\rm D^+$ & 1.59 $\pm$ 0.05 &40 $\pm$ 1 & 42 $\pm$ 1\\ \hline 4 : 1 & 2.01 $\pm$ 0.06 &39 $\pm$ 1 & 47 $\pm$ 1\\ \hline 2 : 1 & 2.47 $\pm$ 0.08& 39 $\pm$ 1 & 52 $\pm$ 1\\ \hline \end{tabular} \label{t:5p3} \end{table} Table \ref{t:5p3} contains a Monte Carlo study of the properties of the signal at the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ mass for various ratios of \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ : \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ production. The presence of a reflection from the decay \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{{\particleone K}^+}\else{{\particleone K}$^+$}\fi\ appears in the broadening of the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ signal width in the contaminated region and an increase in the ratio of events in the contaminated to pure region. To perform this measurement in the data we include the 33 \pb\ of \ifmmode{\Upsilon{\particleone (3S)}\ data. \begin{figure}[p!] \centering \includegraphics[scale=0.65]{c5f5t} \caption{ The \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi split into two regions on the basis of RFDA. a) Pure region (RFDA $< -0.2$) b) contaminated region (RFDA $\geq -0.2$). 
} \label{fig:c5f5} \end{figure} Figure \ref{fig:c5f5} shows the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ signal decomposed into pure and contaminated regions. We fit the distributions to the same form as the other \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ signals; however, we make no constraints on the mean and width of the signal since these properties could be altered by \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ reflections. The efficiency-corrected ratio of events found in the contaminated region to those found in the pure region is 1.6 $\pm$\ 0.4. The large error, which is due to the combinatorial background, unfortunately prevents a quantitative statement being made about the amount of \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ contamination. The width of the invariant mass peak is found to be 43.7 $\pm$\ 9.4 MeV/$c^2$ (47.4 $\pm$\ 8.0 MeV/$c^2$) in the pure (contaminated) region; the two widths are also consistent with each other. While we are prevented from making a strong statistical statement, it would seem unlikely that \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{{\particleone K}^+}\else{{\particleone K}$^+$}\fi\ contaminates \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ at more than a $\approx 20 \%$ level. \par From a theoretical perspective, \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{{\particleone K}^+}\else{{\particleone K}$^+$}\fi, which proceeds through nonspectator diagrams, is not anticipated to be large. Kamal \cite{kamal} predicts B(\ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{{\particleone K}^+}\else{{\particleone K}$^+$}\fi)/B(\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi) = 0.04 to 0.08.
Approximating that there are nearly 3 times as many \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi's as \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi's, it is expected that the ratio $\sigma(e^+e^{-}\rightarrow \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ )$B(\ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{{\particleone K}^+}\else{{\particleone K}$^+$}\fi\ ) / $\sigma(e^+e^{-}\rightarrow \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ )$B(\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ ) would be only a few percent (roughly $0.04/3$ to $0.08/3$, or 1--3\%). CLEO has searched \cite{pla} for the decay \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ and determined the upper limit B(\ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{{\particleone K}^+}\else{{\particleone K}$^+$}\fi\ )/ B(\ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi}\else{$\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ ) $<$ 0.55. Only one group has seen a signal for this decay mode. MARK III \cite{toki} finds the ratio of the above two \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ branching ratios to be $0.88 \pm 0.50${.} This number, however, has continued to enjoy preliminary status. The amount of contamination from \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ reflection for this decay mode can be described by $$ C = \left[ \sigma_{\ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi} \cdot {\rm B}( \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{{\particleone K}^+}\else{{\particleone K}$^+$}\fi) \right] K_R $$ where $K_R$ is a kinematical factor relating how much of the \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ signal actually reflects into the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ region and is approximately 0.5 for this case.
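\par Once $K_R$ and the product $\sigma_{\ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi} \cdot {\rm B}$ are specified, the contamination estimate is a single multiplication. The sketch below is illustrative only; its input of 2.6 pb is not a measured quantity, but merely the value that, together with $K_R = 0.5$, would reproduce the 1.3 pb systematic estimate quoted in the next paragraph.
\begin{verbatim}
# Illustrative evaluation of the reflection contamination
#   C = [ sigma_Ds * B(Ds -> K0bar K+) ] * K_R
# defined above.  The sigma*B input is a placeholder, not a measurement.

def reflection_contamination(sigma_times_br_pb, k_r):
    """Return the contamination C (in pb) reflected into the D+ region."""
    return sigma_times_br_pb * k_r

sigma_br = 2.6   # pb, placeholder value for sigma_Ds * B(Ds -> K0bar K+)
k_r = 0.5        # kinematical factor for the K0S pi+ mode, quoted above
print("C = %.1f pb" % reflection_contamination(sigma_br, k_r))   # C = 1.3 pb
\end{verbatim}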
Based on CLEO's measurement \cite{frag} of ${\sigma_{\ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi} \cdot {\rm B}(\ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi}\else{$\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi) = 5.8 \pm 1.0} $ $(x \geq 0.5)$ we estimate a systematic error of 1.3 \pb\ for potential contamination of the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ cross section from \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{{\particleone K}^+}\else{{\particleone K}$^+$}\fi reflections. \subsection{ Fragmentation Distribution} \begin{table}[ht!] \centering \caption{Fragmentation Distribution for \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi } \begin{tabular}{|c|c|c|c|} \hline x range & $\rm N_{obs}$ & efficiency & ${\rm {{d\sigma} / {dx}} }\cdot B$ (pb) \\ \hline 0.000 - 0.375 & 43 $\pm$ 32 & 0.367 $\pm$ 0.020 &25 $\pm$ 18 \\ \hline 0.375 - 0.510& 17 $\pm$ 12 &0.347 $\pm$ 0.019 &30 $\pm$ 21 \\ \hline 0.510 - 0.625 & 73 $\pm$ 17 &0.355 $\pm$ 0.019 &46 $\pm$ 11 \\ \hline 0.625 - 0.750 & 73 $\pm$ 14 &0.368 $\pm$ 0.019 &41 $\pm$ 9 \\ \hline 0.750 - 0.825 & 48 $\pm$ 10 &0.360 $\pm$ 0.019 &27 $\pm$ 6 \\ \hline 0.825 - 1.000 & 14 $\pm$ 5 &0.367 $\pm$ 0.024 &8 $\pm$ 2 \\ \hline \end{tabular} \label{t:5p4} \end{table} The results of the fits for the differential cross sections for the \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ mode are collected in Table \ref{t:5p4}. When quoting fragmentation distributions for modes which contain a \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi, we adopt the convention that $\rm N_{obs}$ is number observed in the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ mode, where the error is statistical only. The quoted efficiency ($\epsilon_{r}$) is the reconstruction efficiency which accounts for geometrical acceptance, tracking efficiency, and cuts for this decay mode where the \ifmmode{\overline{{\particleone K}}\tiny^0\ has decayed via the chain \ifmmode{\overline{{\particleone K}}\tiny^0\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi, \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi. 
The error on the efficiency includes both statistical and systematic effects combined in quadrature. The value ${\rm {{d\sigma} / {dx}} }\cdot B$ is defined as $(2.91)\,N_{\rm obs}/(\epsilon_{r}\,{\Delta x}\,L)$. Here $L$ is the integrated luminosity, $\Delta x$ is the width of the bin, and the factor of 2.91 accounts for $B$( \ifmmode{\overline{{\particleone K}}\tiny^0\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi) $\cdot$ $B$(\ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi). Differential cross sections will always be referenced to the original state of a \ifmmode{\overline{{\particleone K}}\tiny^0. Because of the greatly reduced luminosity and immense combinatorial background, the measurements below $\rm x = 0.51$ have disproportionately large errors. The most statistically significant information about the production cross section comes from a summation of the points above $\rm x = 0.51$. Performing this summation yields $ \rm \sigma_{\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi} \cdot B(\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi) = 14.8 \pm 1.7 \pm 0.6 \pm 1.3{}$ pb $(\rm x \geq 0.51)$, where the errors are statistical, systematic (fitting procedure and Monte Carlo simulation), and an estimate for \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ reflection contamination. \section{ \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi} The approach for isolating this decay is quite similar to \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi. We again rely on the $\chi^2_V$\ cut, though it is relaxed to 2.5, since in this case the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ is slower and more difficult to reconstruct. All tracks not associated with the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ were required to have a momentum greater than 200 MeV. The mass spectrum for \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ candidates with x greater than 0.51, displayed in Figure \ref{fig:c5f6}, \begin{figure}[htp!]
\centering \includegraphics[scale=0.6]{c5f6t} \caption{ Mass spectrum for \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi\ candidates having x $\geq\ 0.51${.}} \label{fig:c5f6} \end{figure} was fit to a Gaussian signal plus a fourth order polynomial background. A signal from the decay \ifmmode{{\particleone D}^{*+}}\else{{\particleone D}$^{*+}$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi, \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi\ appears in the mass spectrum in the high x bins and is excluded from the fit. The signal was found at a mass of $ 1876.0 \pm 2.6 $ MeV/$c^2$ and a FWHM of $26.3 \pm 6.3$ MeV/$c^2$. The FWHM is consistent with Monte Carlo simulation; however, the mass is 2.5 standard deviations away from the expected mass. This is caused by an upward fluctuation in the mass spectrum in the region $ 0.51 \leq x < 0.625${.} Fitting the mass spectrum for x greater than 0.625 yields a \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ mass of $1869.9 \pm 2.7 $ MeV/$c^2$, as expected. Because this is a four body decay, \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ reflections are extremely broad and do not significantly overlap with the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ region. \subsection{Fragmentation Distribution} \begin{table}[ht!]
\centering \caption{Fragmentation Distribution for \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi } \begin{tabular}{|c|c|c|c|} \hline x range & $\rm N_{observed}$ & efficiency & ${\rm {{d\sigma} / {dx}}}\cdot B$ (pb) \\ \hline 0.510 - 0.625 & 99 $\pm$ 33 &0.183 $\pm$ 0.010 &120 $\pm$ 41 \\ \hline 0.625 - 0.750 & 98 $\pm$ 28 &0.206 $\pm$ 0.011 &98 $\pm$ 29 \\ \hline 0.750 - 0.825 & 44 $\pm$ 15 &0.236 $\pm$ 0.013 &38 $\pm$ 13 \\ \hline 0.825 - 1.000 & 28 $\pm$ 8 &0.275 $\pm$ 0.014 &21 $\pm$ 6 \\ \hline \end{tabular} \label{t:5p5} \end{table} The fragmentation distribution for the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi\ mode is collected in Table \ref{t:5p5}, where the definitions are the same as in the previous section. The corresponding partial cross section is $\rm \sigma_{\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi} \cdot B(\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi) = 33.4 \pm 6.0 \pm 1.4$ pb, $(x\geq 0.51)$. The errors are statistical and systematic, respectively. \section{ \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP} The isolation of \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ requires a completely different approach. Here we rely primarily on the hadron identification system to produce a discernible signal. An additional advantage of this decay is that the kaon is produced with charge opposite to that of the two pions. This reduces the combinatorial background along with providing signal validation and reflection rejection through the use of right and wrong sign combinations. \par To enhance the signal we require that the kaon candidate have a weight $\rm W_{\ifmmode{{\particletwo K}}\else{{\particletwo K}}\fi} \geq 0.1$. In addition, we require that the kaon candidate possess at least 200 MeV of momentum. The motivation for this cut is simply that at low momentum it is difficult to produce a pure sample of kaons to measure the efficiency. Since at low momentum a particle's energy loss is maximal, saturation effects in the electronics may produce unusual responses. Since these effects may not be properly reflected in the measured efficiency, it is best to avoid this momentum region. We also make the usual cuts on the momenta of the pions (200 MeV).
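\par The preselection just described amounts to a short list of track-level requirements, and can be expressed as a simple boolean filter. The sketch below is illustrative only; the track attributes are hypothetical names, and the DOCA requirement introduced in the next paragraph is not included.
\begin{verbatim}
# Illustrative preselection for D+ -> K- pi+ pi+ candidates following the
# cuts described above.  Not the analysis code; the Track attributes are
# hypothetical stand-ins for the reconstructed quantities.
from dataclasses import dataclass

@dataclass
class Track:
    p: float        # momentum in GeV
    w_kaon: float   # kaon identification weight from the hadron ID system

def pass_kaon_cuts(kaon: Track) -> bool:
    """Kaon candidate: ID weight W_K >= 0.1 and momentum >= 200 MeV."""
    return kaon.w_kaon >= 0.1 and kaon.p >= 0.200

def pass_pion_cuts(pion: Track) -> bool:
    """Each pion: the usual momentum cut of 200 MeV."""
    return pion.p >= 0.200

def pass_preselection(kaon: Track, pi1: Track, pi2: Track) -> bool:
    return pass_kaon_cuts(kaon) and all(pass_pion_cuts(t) for t in (pi1, pi2))

# Example: a 0.9 GeV kaon with W_K = 0.3 and two 0.5 GeV pions passes.
print(pass_preselection(Track(0.9, 0.3), Track(0.5, 0.0), Track(0.5, 0.0)))
\end{verbatim}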
To further reduce the combinatorial background we require that all tracks have their DOCAs less than 8 mm. The invariant mass spectrum for \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ candidates with $\rm x \geq\ 0.51$ is presented in Figure \ref{fig:c5f7}. \begin{figure}[htp!] \centering \includegraphics[scale=0.55]{c5f7t} \caption{\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ candidates with $\rm x \geq\ 0.51$. The \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ has been observed with $\rm W_K \geq\ 0.1$.} \label{fig:c5f7} \end{figure} The mean of the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ peak is measured to be $1.8693 \pm\ 0.0013$ GeV, which is in excellent agreement with the world average of $1.8693 \pm\ 0.0006$ GeV. \subsection{Background} This decay mode can be contaminated by the \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ final state \ifmmode{{\particleone K}^+}\else{{\particleone K}$^+$}\fi\ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi. If either (but only one) of the kaons is misidentified, a reflection may occur. A misidentification yielding \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ could count as a valid \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ candidate, but one yielding \ifmmode{{\particleone K}^+}\else{{\particleone K}$^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ would not. We are therefore able to look for the effects of \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ reflections by examining wrong sign combinations. Figure \ref{fig:c5f8} \begin{figure}[htp!] \centering \includegraphics[scale=0.5]{c5f8t} \caption{ \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ candidates formed from combinations of the type \ifmmode{{\particleone K}^+}\else{{\particleone K}$^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi. No enhancement is observable.} \label{fig:c5f8} \end{figure} contains a plot of wrong sign combinations. No enhancements to the background are observed. We also note that because this is a three body decay, the reflected peak is much broader and peaks well below the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ region.
For this case $K_R \approx 0.1$ as opposed to the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi mode where $K_R \sim 0.5$. Enhancement to the background also occurs from the decay chain \ifmmode{{\particleone D}^*}\else{{\particleone D}$^*$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi, \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi, which shares the same final state. This is simply removed by excluding the \ifmmode{{\particleone D}^*}\else{{\particleone D}$^*$}\fi\ region from the fit. \subsection{Fragmentation Distribution} \begin{table}[ht!] \centering \caption{Fragmentation Distribution for \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP} \begin{tabular}{|c|c|c|c|} \hline x range & $\rm N_{obs}$ & efficiency & ${\rm {{d\sigma} / {dx}} }\cdot B$ (pb) \\ \hline 0.000 - 0.375 & -201 $\pm$ 149 & 0.405 $\pm$ 0.046 & -36 $\pm$ 30 \\ \hline 0.375 - 0.510& 221 $\pm$ 67 &0.421 $\pm$ 0.047 & 107 $\pm$ 39 \\ \hline 0.510 - 0.625 & 702 $\pm$ 96 &0.416 $\pm$ 0.047 & 129 $\pm$ 23 \\ \hline 0.625 - 0.750 & 609 $\pm$ 68 &0.461 $\pm$ 0.052 & 93 $\pm$ 15 \\ \hline 0.750 - 0.825 & 384 $\pm$ 49 &0.475 $\pm$ 0.052 & 57 $\pm$ 10 \\ \hline 0.825 - 1.000 & 119 $\pm$ 24 &0.493 $\pm$ 0.059 & 17 $\pm$ 4 \\ \hline \end{tabular} \label{t:5p6} \end{table} The Monte Carlo simulation of this decay consisted of the nonresonant \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ and \ifmmode{{\particleone K}^{*0}}\else{{\particleone K}$^{*0}$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ in a ratio of $6.1 : 1$ in accord with Table 5.2. Since the kaon efficiencies are not determined from Monte Carlo simulation, we include an additional term which is combined in quadrature with the error on the Monte Carlo efficiency and the error from the fitting procedure to estimate the overall systematic error. The summary of the differential cross section for the \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ mode is contained in Table \ref{t:5p6}. The data are fit to a Gaussian signal and a third order polynomial. 
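\par All of the mass spectra in this chapter are fit to the same general form, a Gaussian signal on top of a low order polynomial background. The sketch below illustrates such a fit on a synthetic spectrum using a generic least squares routine; it is not the fitting machinery actually used for the measurements quoted here.
\begin{verbatim}
# Minimal sketch of a Gaussian-plus-polynomial mass-spectrum fit.
# The binned spectrum is synthetic and for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def signal_plus_background(m, n_sig, mean, sigma, c0, c1, c2, c3):
    gauss = n_sig * np.exp(-0.5 * ((m - mean) / sigma) ** 2)
    return gauss + c0 + c1 * m + c2 * m ** 2 + c3 * m ** 3

# Synthetic spectrum: smooth background plus a peak near 1.869 GeV.
rng = np.random.default_rng(1)
edges = np.linspace(1.70, 2.05, 71)
centres = 0.5 * (edges[:-1] + edges[1:])
truth = signal_plus_background(centres, 80.0, 1.869, 0.010,
                               400.0, -150.0, 0.0, 0.0)
counts = rng.poisson(truth).astype(float)

p0 = [50.0, 1.87, 0.01, 300.0, -100.0, 0.0, 0.0]
popt, pcov = curve_fit(signal_plus_background, centres, counts, p0=p0)
print("mass = %.4f GeV" % popt[1])
print("FWHM = %.1f MeV" % (2.3548 * abs(popt[2]) * 1000.0))
\end{verbatim}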
Performing a summation of the points with $\rm x \geq 0.51$, we find a partial cross section $\rm \sigma_{\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi} \cdot B(\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP ) = 35.6 \pm 2.7 \pm 2.4$ pb, $(\rm x \geq\ 0.51)$. \section{\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi} \begin{table}[ht!] \centering \caption{Fragmentation Distribution for \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi} \begin{tabular}{|c|c|c|c|} \hline x range & $\rm N_{obs}$ & efficiency & ${\rm {{d\sigma} / {dx}} }\cdot B$ (pb) \\ \hline 0.000 - 0.375 & -50 $\pm$ 44 & 0.175 $\pm$ 0.010 & -61 $\pm$ 62 \\ \hline 0.375 - 0.510& 53 $\pm$ 21 &0.221 $\pm$ 0.012 & 143 $\pm$ 63 \\ \hline 0.510 - 0.625 & 239 $\pm$ 32 &0.249 $\pm$ 0.014 & 213 $\pm$ 33 \\ \hline 0.625 - 0.750 & 212 $\pm$ 25 &0.275 $\pm$ 0.016 & 159 $\pm$ 21 \\ \hline 0.750 - 0.825 & 134 $\pm$ 17 &0.284 $\pm$ 0.017 & 96 $\pm$ 14 \\ \hline 0.825 - 1.000 & 39 $\pm$ 8 &0.286 $\pm$ 0.025 & 28 $\pm$ 7.0 \\ \hline \end{tabular} \label{t:5p7} \end{table} For comparison with the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ measurements and for use in the following chapter we document the properties of the decay \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi. To detect this signal we make identical cuts on the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ and associated pions that we use in reconstructing \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi.
Our simulation of this decay included \ifmmode{{\particleone K}^{*-}}\else{{\particleone K}$^{*-}$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi, \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"11A}\else{$\mathchar"11A$}\fi^0}\else{$\ifmmode{\mathchar"11A}\else{$\mathchar"11A$}\fi^0$}\fi, and non resonant \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi in the ratio as prescribed by Table 5.2. This decay mode is free from reflections of any kind. \begin{figure}[htp!] \centering \includegraphics[scale=0.62]{c5f9t} \caption{Mass spectrum for \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi candidates having $\rm x \geq .51${.}} \label{fig:c5f9} \end{figure} Figure \ref{fig:c5f9} displays the invariant mass for \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi candidates. The curve is a second order polynomial with a Gaussian signal. Table \ref{t:5p7} collects the measurements of the differential cross section for this mode. Performing the canonical summation, we find $\rm \sigma_{\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi} \cdot B(\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi ) = 59.7 \pm 4.4 \pm 2.3$ pb, $(\rm x \geq\ 0.51)$. \section{ \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ Relative Production Rates} \begin{table}[ht!] 
\centering \caption{ \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ Cross Section Measurements} \begin{tabular}{|c|c|c|} \hline Mode & Experiment & $\rm \sigma \cdot B $ \\ \hline \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ & LGW & 0.14 $\pm$\ 0.05 (nb) \\ & MARK II & 0.14 $\pm$\ 0.03 (nb) \\ & MARK III & 0.135 $\pm$\ 0.012 $\pm$\ 0.010 (nb)\\ & CLEO & 14.8 $\pm$\ 1.7 $\pm$\ 0.6 $\pm$\ 1.3 (pb) \\ \hline \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi\ & MARK II & 0.51 $\pm$\ 0.18 (nb) \\ & MARK III & 0.305 $\pm$\ 0.031 $\pm$\ 0.030 (nb)\\ & CLEO & 33.4 $\pm$\ 6.0 $\pm$\ 1.4 (pb) \\ \hline \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ (inclusive) & LGW & 0.36 $\pm$\ 0.06 (nb) \\ & MARK II & 0.38 $\pm$\ 0.05 (nb) \\ & MARK III & 0.388 $\pm$\ 0.013 $\pm$\ 0.029 (nb)\\ & CLEO & 35.6 $\pm$\ 2.7 $\pm$\ 2.4 (pb) \\ \hline \end{tabular} \label{t:5p8} \end{table} \begin{table}[ht!] \centering \caption{ B( \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi\ ) / {B(\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi )} } \begin{tabular}{|c|c|} \hline Experiment & Ratio \\ \hline MARK II &3.6 $\pm$\ 1.5\\ \hline MARK III &2.3 $\pm$\ 0.3\\ \hline MARK III $ ( 2 \ast T) $&2.1 $\pm$\ 0.6 \\ \hline CLEO &2.3 $\pm$\ 0.5 \\ \hline \end{tabular} \label{t:5p9} \end{table} \begin{table}[ht!] 
\centering \caption{ B( \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ ) / {B(\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi)} } \begin{tabular}{|c|c|} \hline Experiment & Ratio \\ \hline LGW & 2.6 $\pm$\ 1.0\\ \hline MARK II &2.7 $\pm$\ 0.7 \\ \hline MARK III &2.9 $\pm$\ 0.4 \\ \hline MARK III $ ( 2 \ast T) $&2.8 $\pm$\ 0.6 \\ \hline CLEO & 2.4 $\pm$\ 0.4 \\ \hline \end{tabular} \label{t:5p10} \end{table} The production measurements of \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ mesons as observed in \ifmmode{e^+e^-}\else{$e^+e^-$}\fi\ annihilations are arranged in Table \ref{t:5p8} \cite{Hitlin}. The CLEO cross sections are only partial $( x \geq\ 0.51)$; nevertheless, they represent the most statistically significant information available and are perfectly acceptable for calculating relative rates. We have evaluated the relative rates from the various groups for $\rm \left( { B(\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi\ ) \over B(\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi ) } \right)$ (Table \ref{t:5p9}) and $\rm \left( { B(\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP ) \over B(\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi ) } \right)$ (Table \ref{t:5p10}). For the MARK III group, we also compare the branching ratios derived from the global fit double tag method $( 2 \ast T)$.
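\par The CLEO entries in Tables \ref{t:5p9} and \ref{t:5p10} follow from the partial cross sections quoted earlier in this chapter. A naive propagation, in which the quoted error components on each mode are added in quadrature and correlations between the modes are ignored, reproduces the tabulated ratios; the sketch below is only a consistency check of that arithmetic, not the procedure used to prepare the tables.
\begin{verbatim}
# Naive propagation of the quoted uncertainties into the CLEO relative
# rates.  Correlated systematics are ignored; consistency sketch only.
import math

def quadrature(errs):
    return math.sqrt(sum(e * e for e in errs))

def ratio_with_error(num, num_errs, den, den_errs):
    """Ratio of two sigma*B values, errors combined in quadrature."""
    r = num / den
    rel = math.sqrt((quadrature(num_errs) / num) ** 2 +
                    (quadrature(den_errs) / den) ** 2)
    return r, r * rel

# sigma*B in pb for x >= 0.51, with the quoted error components.
k0s_pi  = (14.8, (1.7, 0.6, 1.3))   # D+ -> K0bar pi+
k0s_3pi = (33.4, (6.0, 1.4))        # D+ -> K0bar pi+ pi+ pi-
kpipi   = (35.6, (2.7, 2.4))        # D+ -> K-    pi+ pi+

print("B(K0bar 3pi)/B(K0bar pi) = %.1f +- %.1f"
      % ratio_with_error(k0s_3pi[0], k0s_3pi[1], k0s_pi[0], k0s_pi[1]))
print("B(K pi pi)/B(K0bar pi)   = %.1f +- %.1f"
      % ratio_with_error(kpipi[0], kpipi[1], k0s_pi[0], k0s_pi[1]))
\end{verbatim}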
This analysis determines \cite{sblp} $$\rm \left( { B(\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi\ ) \over B(\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi ) } \right) = 2.3 \pm\ 0.5$$ and $$\rm \left( { B(\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP ) \over B(\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi ) } \right) = 2.4 \pm\ 0.4$$ We find these results to agree well with previous measurements. \section{Analysis of Fragmentation Distributions} \begin{table}[ht!] \centering \caption{ Results of Fitting to the Peterson Form} \begin{tabular}{|c|c|c|} \hline Mode & $ \epsilon_Q $& $\chi^2$/d.o.f.\\ \hline \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP & $0.21^{+6}_{-5} $& 10.7/4 \\ \hline \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi & $0.17^{+7}_{-5}$ & 3.7/4 \\ \hline \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi& $0.22^{+6}_{-5}$ & 9.0/4 \\ \hline \end{tabular} \label{t:5p11} \end{table} \begin{table}[ht!] 
\centering \caption{Results of Fitting to the Andersson Form} \begin{tabular}{|c|c|c|c|} \hline Mode & A & B & $\chi^2$/d.o.f.\\ \hline \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP & 1.3 $\pm$\ 0.15 & 0.39 $\pm$\ 0.05 & 2.4/3 \\ \hline \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi & 1.2 $\pm$\ 0.26 & 0.47 $\pm$\ 0.03 & 1.4/3 \\ \hline \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi & 1.3 $\pm$\ 0.17 & 0.42 $\pm$\ 0.04 & 9.0/4 \\ \hline \end{tabular} \label{t:5p12} \end{table} The precise shape of the fragmentation distribution reveals distinctive features of the hadronization process. Charmed hadrons make an excellent laboratory to study such phenomena, since they can be abundantly isolated in a variety of forms, and at different center of mass energies. Experimentally, measurements of fragmentation distributions are difficult to execute. This is largely because fragmentation models distinguish themselves most at low momentum, which for reasons of acceptance, feed down, and/or lack of data tends not to be measured well. Nonetheless, qualitative trends still exist. We choose to compare two fragmentation functions to our fragmentation data, one from each prevalent philosophy. We elect not to use models which explicitly include meson structure functions in their derivation (Kartvelishvili and Collins). The Peterson function $$ \rm D{^H_Q}(z) = {N \left(z\left[1-{1 \over z}-{\epsilon_Q\over(1-z)} \right] ^2\right)^{-1}}$$ of independent fragmentation is used as is the Andersson form $$\rm D{^H_Q}(z{^+})={N\over z{^+}}(1-z{^+})^a \exp({-b{{m_H}_\perp}^2\over z{^+}} )$$ representing string fragmentation. These functions happen to also be the most common in the literature. We note that the Andersson function is derived in terms of the light cone variable; analysis in terms of the x variable may not be optimal, but it is suitable here for our mainly didactic purposes. The values of A and B are only for comparison with similar fragmentation distributions binned in x, and should not be compared with theoretical expectations. When fitting to the Andersson form, we fix the parameter ${m_H}_\perp$ to be the known hadron mass. The results for the fits are exhibited in Table \ref{t:5p11} and Table \ref{t:5p12}.
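\par Both forms are compact enough to state directly in code. The sketch below defines them exactly as written above (with the Andersson form evaluated in x and ${m_H}_\perp$ fixed to the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ mass) and fits them to the bin centres of Table \ref{t:5p6} with a generic least squares routine. Fitting at bin centres is not the procedure used for Tables \ref{t:5p11} and \ref{t:5p12}, so the resulting parameters are illustrative and need not reproduce those tables exactly.
\begin{verbatim}
# The Peterson and Andersson forms, fit to the binned K- pi+ pi+
# fragmentation distribution of Table t:5p6 (bin centres, quoted errors).
# Illustrative cross-check only; not the thesis fitting procedure.
import numpy as np
from scipy.optimize import curve_fit

M_D = 1.869  # GeV, hadron mass used for the fixed transverse mass

def peterson(z, norm, eps):
    return norm / (z * (1.0 - 1.0 / z - eps / (1.0 - z)) ** 2)

def andersson(z, norm, a, b):
    return (norm / z) * (1.0 - z) ** a * np.exp(-b * M_D ** 2 / z)

x_centres = np.array([0.1875, 0.4425, 0.5675, 0.6875, 0.7875, 0.9125])
dsdx      = np.array([ -36.0,  107.0,  129.0,   93.0,   57.0,   17.0])  # pb
errors    = np.array([  30.0,   39.0,   23.0,   15.0,   10.0,    4.0])  # pb

pet, _ = curve_fit(peterson, x_centres, dsdx, p0=[100.0, 0.2], sigma=errors)
andr, _ = curve_fit(andersson, x_centres, dsdx, p0=[1000.0, 1.3, 0.4],
                    sigma=errors)
print("Peterson : N = %.0f  eps_Q = %.2f" % tuple(pet))
print("Andersson: N = %.0f  a = %.2f  b = %.2f" % tuple(andr))
\end{verbatim}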
Fits of both forms to the \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ and \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi\ modes are shown in \begin{figure}[htp!] \centering \includegraphics[scale=0.6]{c5f10t} \caption{Fits to the Andersson form (solid line) and Peterson form (dashed line) for \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP .} \label{fig:c5f10} \end{figure} \begin{figure}[htp!] \centering \includegraphics[scale=0.59]{c5f11t} \caption{Fits to the Andersson form (solid line) and Peterson form (dashed line) for \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi . } \label{fig:c5f11} \end{figure} Figures \ref{fig:c5f10} and \ref{fig:c5f11}. We have elected not to analyze the fragmentation distribution of the \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi\ mode because of the much weaker statistical significance. For comparison Figure \ref{fig:c5f12} \begin{figure}[htp!] \centering \includegraphics[scale=0.565]{c5f12t} \caption{The Fragmentation distributions for \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ (solid squares) and \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi\ (solid circles). The two modes have been scaled by their relative production ratios.} \label{fig:c5f12} \end{figure} displays the fragmentation distributions for the two \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ X modes, scaled by their relative production rates. Collectively we note that the shapes of the fragmentation distributions are consistent, as endorsed by the similarity of the parameters from the various fits. 
\par In interpreting the information from the experimentally observed fragmentation distributions, the historical trend has been to compare the values of the Peterson $\epsilon_Q$. This is an unfortunate tradition which shall be continued here. The main advantage of this approach is that $\epsilon_Q$ has an easily interpretable physical meaning, namely the parameter $\epsilon_Q$ is proportional to $m_{q\perp}^2/m_{Q\perp}^2$, where the transverse mass is defined as ${m_\perp}^2 \equiv {m^2 + {p_\perp}^2}$. The heavier the object is that combines with the charmed quark, the larger epsilon will be. Also, radiative effects will produce softer distributions which are again reflected in larger values of epsilon. CLEO has made a precise \cite{frag} measurement of the \ifmmode{{\particleone D}^{*+}}\else{{\particleone D}$^{*+}$}\fi\ fragmentation distribution and found $\epsilon_Q = 0.16 \pm\ 0.02$, and has also performed a fit to the fragmentation distribution of several \ifmmode{\ifmmode{\mathchar"7003}\else{$\mathchar"7003$}\fi_c}\else{$\ifmmode{\mathchar"7003}\else{$\mathchar"7003$}\fi_c$}\fi\ modes \cite{malp} and determined $\epsilon_Q = 0.30 \pm\ 0.10$. The results of this analysis fit agreeably into this scheme. The \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ mesons are softer than the \ifmmode{{\particleone D}^*}\else{{\particleone D}$^*$}\fi\ mesons, as they should be since \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi's are produced in the cascade of \ifmmode{{\particleone D}^*}\else{{\particleone D}$^*$}\fi's, and the fragmentation distribution of charmed baryons is softer than that of charmed mesons. In conclusion, we note that mapping experimentally determined parameters of fragmentation functions to theoretical predictions for those parameters is an extremely perilous task. Data gathered at different energies by different experiments must be reconciled for the choice of scaling variable and, more importantly, for QED and QCD radiative effects. Work by Bethke \cite{bethke} has attempted such a reconciliation, and determined $\langle z_c \rangle = 0.71 \pm\ 0.14$. This corresponds to a Peterson $\epsilon \approx 0.04$, which is more in line with naive expectations. Work has also been done by Galik \cite{galik} to evolve fragmentation distributions to different energies (a detailed analysis of the CLEO \ifmmode{{\particleone D}^*}\else{{\particleone D}$^*$}\fi\ fragmentation distribution, with comparison to similar measurements at different energies, including radiative effects, can be found elsewhere \cite{frag}).
\centering \caption{ Extrapolated Total Cross Sections (nb)} \begin{tabular}{|c|c|} \hline Decay Mode & $\sigma_{Total} $ \\ \hline \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP & 0.63 $\pm$\ 0.06 $\pm$\ 0.10 $\pm$\ 0.09 \\ \hline \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi & 0.74 $\pm$\ 0.11 $\pm$\ 0.12 $\pm$\ 0.11\\ \hline \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi & 0.81 $\pm$\ 0.15 $\pm$\ 0.20 $\pm$\ 0.12 \\ \hline \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi & 1.49 $\pm$\ 0.13 $\pm$\ 0.26 $\pm$\ 0.22 \\ \hline \end{tabular} \label{t:5p13} \end{table} Because of the large errors in the fragmentation distribution at low x, a measurement of the total cross section by summing the differential cross section will lose much of its statistical significance. Alternatively, we can attempt such a measurement by extrapolating from the part which is well measured. Both the Peterson and Andersson distributions predict about 63 \% of the fragmentation distribution resides above $\rm x = 0.51$. We estimate the systematic error for the extrapolation procedure by varying the observed value of $\epsilon_Q$ by $\pm\ 1 \ \sigma$ and determining the extrapolation factor. This, coupled with uncertainty in theoretical models and the inability to measure the end points of the fragmentation distribution well leads to an error of $ \simeq 15\%$. We base our extrapolation on the value of epsilon determined from the \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi\ and \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ modes, which are determined with the most precision. Normalizing out the branching ratios (Table 5.1), we find the results presented in Table \ref{t:5p13}. The first error is the statistical and systematic combined in quadrature, the second is uncertainty in the branching ratio, and the third term is due to the extrapolation error. 
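\par The extrapolation itself is a one line calculation once the fraction above $\rm x = 0.51$ and the branching ratio are chosen. The sketch below illustrates it for the \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ mode; the branching ratio shown is a placeholder standing in for the Table 5.1 value, which is not reproduced in this section.
\begin{verbatim}
# Extrapolation sketch: scale the partial cross section (x >= 0.51) by the
# ~63% of the fragmentation function predicted to lie above x = 0.51, then
# divide out the branching ratio.  The branching ratio is a placeholder.

def total_cross_section_nb(sigma_b_partial_pb, fraction_above_cut, br):
    return sigma_b_partial_pb / fraction_above_cut / br / 1000.0

sigma_b  = 35.6    # pb, sigma * B for D+ -> K- pi+ pi+, x >= 0.51
fraction = 0.63    # Peterson/Andersson fraction above x = 0.51
br       = 0.091   # placeholder branching ratio, for illustration only
print("sigma_total ~ %.2f nb" % total_cross_section_nb(sigma_b, fraction, br))
\end{verbatim}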
CLEO has recently measured \cite{frag} the total cross section for \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ to be $ \sigma_{\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi } = 1.24 \pm\ 0.21$ nb, which is consistent with this measurement of $ \sigma_{\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi } = 1.49 \pm\ 0.36$ nb. A previous CLEO \cite{bort} measurement for \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP, $ \sigma_{\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi } = 0.52 \pm\ 0.11 $ nb, may also be compared with our weighted average of the three decay modes, $ \sigma_{\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi } = 0.68 \pm\ 0.13 $ nb. The systematic error on the previous measurement may have been underestimated. \chapter{Penultimate} In this chapter we examine several unique aspects of the production and decay of charmed mesons. We shall detail a search for a rare \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ decay which may provide evidence for the role of hadronic final state interactions in charm decay. We measure the probability that a charmed meson will emerge from the \ifmmode{e^+e^-}\else{$e^+e^-$}\fi\ hadronization process in a state of non-zero angular momentum, and analyze the experimentally important transition rate for a charged \ifmmode{{\particleone D}^*}\else{{\particleone D}$^*$}\fi\ meson to decay into a \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ and a \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi. \section{\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0}\else{{\particleone K}$^0$}\fi\ifmmode{\overline{{\particleone K}}\tiny^0} \subsection{Motivation} Decays of charmed hadrons have provided valuable insight into the dynamics of the weak interaction. The nonleptonic sector is believed to be the source of the radically different lifetimes and semileptonic branching ratios of the charged and neutral \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ mesons. The mechanisms which produce these effects are not fully quantified. A possible component of the solution is to shorten the \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ lifetime by including a class of nonspectator decays known as \ifmmode{{\particletwo W}}\else{{\particletwo W}}\fi\ exchange, which are accessible to \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi's at the Cabibbo allowed level. The contribution of these processes was initially anticipated to be small based on helicity arguments.
The experimental observations \cite{halb2,bebek} of the decay \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi}\else{$\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi$}\fi\ifmmode{\overline{{\particleone K}}\tiny^0\ with a large branching fraction $(\sim 1\%)$ were initially considered evidence for \ifmmode{{\particletwo W}}\else{{\particletwo W}}\fi\ exchange. An alternate theory \cite{dono} proposed final state re-scattering of hadrons, not \ifmmode{{\particletwo W}}\else{{\particletwo W}}\fi\ exchange, as the origin of this mode. The quark diagrams leading to \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi}\else{$\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi$}\fi\ifmmode{\overline{{\particleone K}}\tiny^0\ through \ifmmode{{\particletwo W}}\else{{\particletwo W}}\fi\ exchange and final state interactions can be found in Figure \ref{fig:c6f1}. \begin{figure}[p!] \centering \includegraphics[scale=0.8]{c6f1t} \caption{\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi}\else{$\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi$}\fi\ifmmode{\overline{{\particleone K}}\tiny^0\ through a) \ifmmode{{\particletwo W}}\else{{\particletwo W}}\fi\ exchange and b)-c) final state interactions.} \label{fig:c6f1} \end{figure} The extent to which either of these theories successfully explains \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi}\else{$\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi$}\fi\ifmmode{\overline{{\particleone K}}\tiny^0\ remains to be resolved. \par The decay \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0}\else{{\particleone K}$^0$}\fi\ifmmode{\overline{{\particleone K}}\tiny^0\ is uniquely suited to study \cite{pham} the effect of final state interactions. At the quark level this decay proceeds through two classes of nonspectator diagrams \begin{figure}[p!] \centering \includegraphics[scale=0.675]{c6f2t} \caption{Quark diagrams contributing to the decay \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0}\else{{\particleone K}$^0$}\fi\ifmmode{\overline{{\particleone K}}\tiny^0. a) \ifmmode{{\particletwo W}}\else{{\particletwo W}}\fi\ exchange, b) sideways ``penguin.''} \label{fig:c6f2} \end{figure} (Figure \ref{fig:c6f2}). The unique feature of this decay is that both quarks present in the initial state are absent in the final state, which in this case is composed of an $\rm s \bar s$ and a $\rm d \bar d$ pair. There are two paths to the final state for each diagram, each of which contains one Cabibbo suppressed \ifmmode{{\particletwo W}}\else{{\particletwo W}}\fi\ vertex. In the limit of exact $\rm SU(3)_f$ symmetry, an $\rm s \bar s$ pair can be popped from the vacuum on equal footing with a $\rm d \bar d$ pair. We could then factorize the term $V_{ud}^{}V_{cd}^{\ast} + V_{us}^{}V_{cs}^{\ast}$, which then multiplies the matrix element for each of the two diagrams.
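As a rough numerical guide (taking only the familiar magnitudes of the K-M elements, $|V_{ud}| \approx |V_{cs}| \approx \cos\theta_C \approx 0.97$ and $|V_{us}| \approx |V_{cd}| \approx \sin\theta_C \approx 0.22$, with $V_{cd}$ negative in the usual phase convention), this factorized combination very nearly cancels: $$ V_{ud}^{}V_{cd}^{\ast} + V_{us}^{}V_{cs}^{\ast} \approx (0.97)(-0.22) + (0.22)(0.97) \approx 0 . $$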
This expression is the sum of the first two terms in the product of the first row of the K-M matrix with the conjugate of its second row. Unitarity of the K-M matrix demands $$V_{ud}^{}V_{cd}^{\ast} + V_{us}^{}V_{cs}^{\ast} + V_{ub}^{}V_{cb}^{\ast} \equiv 0 $$ In the limit of vanishing b \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ u coupling, $V_{ud}^{}V_{cd}^{\ast} + V_{us}^{}V_{cs}^{\ast} \approx 0$, and the amplitude for this decay vanishes. \par A calculation by Pham \cite{pham} based on the re-scattering of the modes \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^+}\else{{\particleone K}$^+$}\fi\ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi, and \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi, predicts B(\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0}\else{{\particleone K}$^0$}\fi\ifmmode{\overline{{\particleone K}}\tiny^0) $\simeq$ $1\over 2$ B(\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^+}\else{{\particleone K}$^+$}\fi\ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi) $\simeq 0.3 \% $. A branching ratio of 1\% or greater for this decay could not be produced by final state interactions and would represent a violation of the standard model. If a substantial branching ratio were found for this mode, this would confirm the role of hadronic final state interactions. Conversely, if this decay were strictly ruled out, the case for nonspectator processes would be strengthened. Since the general theoretical framework for describing charm decays does not fully implement either of these two processes, a better experimental understanding of this decay mode would lend valuable direction to charm decay theorists. \subsection{Signal Isolation} Experimentally this mode can be cleanly observed in \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\KSH, given good \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ mass resolution and reconstruction efficiency. In the previous chapter we have demonstrated the robustness of the CLEO detector in reconstructing \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ meson decay modes which contain a \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi. For quick comparison, we note that using the \ifmmode{\Upsilon{\particleone (4S)}\ data set, we have reconstructed $\sim 600$ high momentum \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi\ decays.
\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\KSH\ has a predicted branching ratio about twenty times smaller, and tacking on another factor of 3 to get $\rm B (\ifmmode{\overline{{\particleone K}}\tiny^0\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi) \cdot B(\ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi)$ optimistically reduces our sample expectations to 10. Since we cannot hope to make an absolute measurement of this decay rate with these statistics, we choose to study the properties of this decay through normalization to a well known decay mode. The decay mode which is most similar to \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\KSH\ is clearly \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi. They are both final states containing four charged pions, and differ only by one secondary vertex. Normalization to this decay mode provides maximal cancellation of systematic errors. \par To detect such a small signal we need to reduce the background to a minimum while maintaining reconstruction efficiency. We restrict our sample to those events in which the candidate \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ has a momentum greater than 2.5 GeV. Despite the loss of the low momentum \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi's, this cut has consistently produced invariant mass peaks with the highest signal to noise in all observed exclusive charm decays. Since we are performing a normalization and not a cross section measurement, we are free to take advantage of the \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi's which are copiously produced in \ifmmode{{\particletwo B}}\else{{\particletwo B}}\fi\ decay. Unfortunately, \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi's produced at low momentum have \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ daughters in the momentum range where the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ reconstruction efficiency begins to fall off, producing as much as a 20 \% loss in efficiency. Coupled with the rising background at low momentum, this makes searching for this decay mode with low momentum \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi's unprofitable. \par The data for this analysis includes 113.7 \pb\ taken on the \ifmmode{\Upsilon{\particleone (4S)}\ and 33 \pb\ gathered on the \ifmmode{\Upsilon{\particleone (3S)}. The event selection procedure and track corrections follow those outlined in section 5.1. Candidate \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi's are then formed from two \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ candidates. 
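For orientation, the optimistic yield estimate made above is simply $$ N_{\rm exp} \sim 600 \times {1 \over 20} \times {1 \over 3} \approx 10 \ {\rm events}, $$ which is why a measurement relative to a well known decay mode, rather than an absolute rate, is pursued here.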
To further improve the purity of the sample we also require that the number of track pairs consistent with secondary vertices in an event be less than five, and that $\chi^2_{V} \leq 3.0$ for each \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ candidate. Based on the measured \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ FWHM as a function of momentum, we demand that each \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ be within 2.5 standard deviations of the expected \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ mass. \subsection{Background} We have analyzed the background from the decay \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi. We simulated this mode using the resonant substructure measurement outlined in Table 5.2. For each Monte Carlo event we tagged the four final state \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ daughters for this decay with matches to drift chamber tracks. We then ran the event through our \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\KSH\ driver program, and analyzed \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\KSH\ pairs only if one of the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi's was the true \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ daughter and the vee finder had accidentally found one of the two \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ tracks to be one of the other \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ daughter pions. For the \ifmmode{{\particleone K}^{*-}}\else{{\particleone K}$^{*-}$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ mode, a plot of these events is shown in Figure \ref{fig:c6f3}. \begin{figure}[htp!] \centering \includegraphics[scale=0.62]{c6f3t} \caption{Monte Carlo simulation of the decay \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^{*-}}\else{{\particleone K}$^{*-}$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi, \ifmmode{{\particleone K}^{*-}}\else{{\particleone K}$^{*-}$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi\ which was passed through the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\KSH\ analysis program. One \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ candidate was the correct \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ daughter, while the other contained at least one other \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ daughter. a) no cuts, b) with \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ mass and $\chi^2_V$\ cuts.} \label{fig:c6f3} \end{figure} An anomalous enhancement occurs in the \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ region.
No such enhancement was found in either the nonresonant \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi\ or \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"11A}\else{$\mathchar"11A$}\fi^0}\else{$\ifmmode{\mathchar"11A}\else{$\mathchar"11A$}\fi^0$}\fi\ modes. To test that this was not a statistical fluctuation, we analyzed the properties of this distribution using the fact that the \ifmmode{{\particleone K}^{*-}}\else{{\particleone K}$^{*-}$}\fi\ is polarized in the helicity zero state (section 4.2). The \ifmmode{{\particleone K}^{*-}}\else{{\particleone K}$^{*-}$}\fi\ daughters then follow a $\cos^2\Theta_{P'P''}$ distribution in the \ifmmode{{\particleone K}^{*-}}\else{{\particleone K}$^{*-}$}\fi\ rest frame with respect to the \ifmmode{{\particleone K}^{*-}}\else{{\particleone K}$^{*-}$}\fi\ direction in the \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ rest frame. We calculate this angle from the Monte Carlo for the \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi\ daughter, and histogram the quantity for each ``fake'' \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\KSH\ candidate. We show this distribution for all events and for events in the \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ region in Figure \ref{fig:c6f4}. \begin{figure}[p!] \centering \includegraphics[scale=0.625]{c6f4t} \caption{$\cos\Theta_{P'P''}$ distributions A) without and B) with a \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ mass cut.} \label{fig:c6f4} \end{figure} The events in the \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ region are clustered where $\cos\Theta_{P'P''} \leq\ 0.7$. This corresponds to the case where the \ifmmode{{\particleone K}^{*-}}\else{{\particleone K}$^{*-}$}\fi\ daughter \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi\ ends up almost at rest in the \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ rest frame, and can end up being boosted in the same direction as the \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ daughter. We note that while this effect seems to be caused by the unique kinematics of this particular decay, the efficiency for such processes is exceedingly small.
The efficiency for a \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^{*-}}\else{{\particleone K}$^{*-}$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ to cause a \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\KSH\ candidate (where both \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ candidates have been classified as good) with a mass in the \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ region is of order 0.0006, while for a true \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\KSH\ decay the Monte Carlo efficiency is 0.194. We note, however, that for a \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0}\else{{\particleone K}$^0$}\fi\ifmmode{\overline{{\particleone K}}\tiny^0\ branching ratio of 0.1\%, and a \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^{*-}}\else{{\particleone K}$^{*-}$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ rate of 5.6\%, after scaling down the branching ratios to reach the four pion final state, we find the branching ratio times efficiency to be $2 \times 10^{-3}$ for \ifmmode{{\particleone K}^0}\else{{\particleone K}$^0$}\fi\ifmmode{\overline{{\particleone K}}\tiny^0\ and $7 \times 10^{-4}$ for \ifmmode{{\particleone K}^{*-}}\else{{\particleone K}$^{*-}$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi, differing by only a factor of two! To avoid any such confusion, we calculate the invariant mass of each \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ candidate with the other two tracks which form the second \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi, and require that the mass not be consistent with a \ifmmode{{\particleone K}^{*-}}\else{{\particleone K}$^{*-}$}\fi. This eliminates this anomalous enhancement while only slightly reducing the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\KSH\ reconstruction efficiency. We also note that this faking caused by \ifmmode{{\particleone K}^{*-}}\else{{\particleone K}$^{*-}$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ does not seem to result from gross track mis-measurement errors, as the reconstructed \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ has a momentum on average of 99\% of the Monte Carlo generated momentum. \subsection{ Calculation of Upper Limit} The mass spectrum for \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\KSH\ candidates that have passed all cuts is displayed in Figure \ref{fig:c6f5}. \begin{figure}[htp!] \centering \includegraphics[scale=0.62]{c6f5t} \caption{Invariant mass spectrum for \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\KSH.
The \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ candidate is required to have a momentum in excess of 2.5 GeV.} \label{fig:c6f5} \end{figure} We use a Monte Carlo procedure to determine the expected properties of the \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\KSH\ signal, where the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ is allowed to decay into the \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi\ mode. Subject to the cuts described above, we find a mean of 1.8645 $\pm$ .0002 GeV, a FWHM of .029 $\pm$ .003 GeV, and an overall reconstruction efficiency of $\epsilon_{_{\ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\KSH}}=.170\pm.007$. We fit the mass spectrum (Figure \ref{fig:c6f5}) using a Maximum Likelihood method to a polynomial background and a Gaussian signal representing the \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\KSH\ decay. We observe a signal $N_{\rm obs}\ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\KSH $ of $-2.7^{+2.7}_{-1.9}$ events at the \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ mass, where the errors are statistical only. The stability of the observed signal area and errors has been analyzed subject to variations in \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ mass, FWHM, and selection of background function. The results from the various fits are consistent, and we conservatively estimate a systematic error of 1.0 event. \begin{figure}[htp!] \centering \includegraphics[scale=0.65]{c6f6t} \caption{Invariant mass spectrum for \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi\ which is used to normalize the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\KSH\ signal.} \label{fig:c6f6} \end{figure} Figure \ref{fig:c6f6} displays the \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi\ signal which is used to normalize the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\KSH\ signal. In reconstructing this decay, we subject the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ to the same selection cuts as applied to the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\KSH\ mode, and we require that the two \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ daughter pions have a momentum greater than 200 MeV.
The signal is fit to a Gaussian signal consistent with Monte Carlo simulation of this decay and a polynomial background. For this decay mode we find $N_{\rm obs}\ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi = 811 \pm 64$, and ${\epsilon_{_{\ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi}}} = .260 \pm .011$ \par The ratio of the branching fractions is obtained from the following prescription, $$ { {\rm B} (\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0}\else{{\particleone K}$^0$}\fi\ifmmode{\overline{{\particleone K}}\tiny^0) \over {\rm B}(\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi) } = 2.91 \left({N_{\rm obs} \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\KSH \over {\epsilon_{_{\ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\KSH}}}}\right)\cdot \left({\epsilon_{_{\ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi}}\over{N_{\rm obs}\ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi}}\right)$$ which yields $-0.015 \pm 0.016$. The factor 2.91 accounts for B(\ifmmode{{\particleone K}^0}\else{{\particleone K}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi)$\cdot$B(\ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi). 
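Inserting the measured quantities into this prescription gives, explicitly, $$ 2.91 \times \left( { -2.7 \over 0.170 } \right) \times \left( { 0.260 \over 811 } \right) \approx -0.015 , $$ and the factor itself is simply $2.91 \simeq \left[ {\rm B}({\rm K}^0 \rightarrow {\rm K}^0_S) \, {\rm B}({\rm K}^0_S \rightarrow \pi^+\pi^-) \right]^{-1} \approx (0.50 \times 0.69)^{-1}$, the one extra kaon decay chain that must be divided out because the signal mode contains one more neutral kaon than the normalizing mode.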
This ratio is converted to an upper limit on B(\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0}\else{{\particleone K}$^0$}\fi\ifmmode{\overline{{\particleone K}}\tiny^0) utilizing the most recent \cite{adler1} Mark III value $\rm B(\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi) = 6.4 \pm 1.1 \% $. From this we find a 90\% confidence level upper limit of B(\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0}\else{{\particleone K}$^0$}\fi\ifmmode{\overline{{\particleone K}}\tiny^0\ ) $< .12\%$. The consistency of the normalization procedure has been checked by normalizing to the decay mode \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi. This result is subject to systematic uncertainties in both \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ reconstruction and \ifmmode{{\particleone K}^\pm}\else{{\particleone K}$^\pm$}\fi\ identification efficiency. The upper limit determined in this fashion gives B(\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0}\else{{\particleone K}$^0$}\fi\ifmmode{\overline{{\particleone K}}\tiny^0\ ) $< .17\%$. \par Two other measurements of this decay have been published. The Mark III \cite{adler1} collaboration has determined the upper limit B(\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0}\else{{\particleone K}$^0$}\fi\ifmmode{\overline{{\particleone K}}\tiny^0\ ) $< .46\%$. A signal for this decay has been claimed by the E-400 \cite{cuma} experiment. Using the \ifmmode{{\particleone D}^*}\else{{\particleone D}$^*$}\fi-\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi mass difference technique, they have observed a signal of 8.9 $\pm$\ 2.7 events. They elect to normalize to the decay mode \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^+}\else{{\particleone K}$^+$}\fi\ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi, which would seem to maximize the prospects for systematic errors. They determine $ { {\rm B} (\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0}\else{{\particleone K}$^0$}\fi\ifmmode{\overline{{\particleone K}}\tiny^0) \over {\rm B}(\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^+}\else{{\particleone K}$^+$}\fi\ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi ) } = 0.4 \pm 0.3$.
Normalizing away the denominator using the Mark III branching fraction $\rm B(\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^+}\else{{\particleone K}$^+$}\fi\ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi ) = .51 \pm .11 \%$, E-400 finds B(\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0}\else{{\particleone K}$^0$}\fi\ifmmode{\overline{{\particleone K}}\tiny^0) = .20 $\pm$\ .16 \%. This is consistent both with the upper limit determined from this analysis and with zero. It should also be noted that the E-400 group has not described any attempt to study the background to their signal, or examined the validity of the signal by exploring the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ side bands. They also have not normalized their \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^+}\else{{\particleone K}$^+$}\fi\ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ signal to any other \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ decay mode. \par In conclusion, more information is required to understand the decay \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0}\else{{\particleone K}$^0$}\fi\ifmmode{\overline{{\particleone K}}\tiny^0. CLEO is currently performing an analysis of this decay using a substantially larger $(\times 2)$ data set taken with a new 52 layer drift chamber. CLEO has observed 5 \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ decay modes with high statistical significance, and fails to observe a signal in this mode. CLEO has also determined that, in their tracking system, a non-negligible background for this decay mode can be produced from the decay \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^{*-}}\else{{\particleone K}$^{*-}$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi, \ifmmode{{\particleone K}^{*-}}\else{{\particleone K}$^{*-}$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi. The E-400 group has observed a statistically weak effect at the \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ mass in the \ifmmode{{\particleone K}^0_S}\else{{\particleone K}$^0_S$}\fi\KSH\ final state. They have not demonstrated the robustness of their normalization procedure, nor unambiguously attributed the signal to \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0}\else{{\particleone K}$^0$}\fi\ifmmode{\overline{{\particleone K}}\tiny^0. \section{ B(\ifmmode{{\particleone D}^*}\else{{\particleone D}$^*$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi) and Spinless \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ Meson Production.} \begin{table}[ht!]
\centering \caption{ Extrapolated Total Cross Sections for Vector \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ Mesons} \begin{tabular}{|c|c|} \hline Decay Chain & $\sigma_{Total}$ (nb) \\ \hline \ifmmode{{\particleone D}^{*+}}\else{{\particleone D}$^{*+}$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ & 0.77 $\pm$\ 0.14 \\ \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ & \\ \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\PIP\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^-$}\fi & \\ \hline \ifmmode{{\particleone D}^{*0}}\else{{\particleone D}$^{*0}$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^0}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^0$}\fi\ & 0.74 $\pm$\ 0.18\\ \ifmmode{{\particleone D}^{*0}}\else{{\particleone D}$^{*0}$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ifmmode{\ifmmode{\mathchar"10D}\else{$\mathchar"10D$}\fi}\else{$\ifmmode{\mathchar"10D}\else{$\mathchar"10D$}\fi$}\fi & \\ \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ & \\ \hline \end{tabular} \label{t:6p1} \end{table} In addition to extensive study of pseudoscalar charmed mesons, CLEO has made precise measurements of the charged and neutral vector D mesons. CLEO's measurements \cite{frag} of the total cross sections for the \ifmmode{{\particleone D}^{*+}}\else{{\particleone D}$^{*+}$}\fi\ and \ifmmode{{\particleone D}^{*0}}\else{{\particleone D}$^{*0}$}\fi\ mesons are summarized in Table \ref{t:6p1}. It was first noticed by the author that CLEO's unique situation of possessing a complete set of charged and neutral vector and pseudoscalar cross sections would allow for a novel investigation of the hadronization process. Let us define $\sigma_{P}$ as the cross section for direct production of a pseudoscalar charmed meson and $\sigma_{V}$ as the cross section for production of a vector charmed meson. We assume isospin symmetry, making $\sigma_{P}$ and $\sigma_{V}$ the same for charged and neutral particles.
Let us also define $\alpha$ as the branching ratio B(\ifmmode{{\particleone D}^{*+}}\else{{\particleone D}$^{*+}$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi ) and $\beta$ as the branching ratio B(\ifmmode{{\particleone D}^{*+}}\else{{\particleone D}$^{*+}$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi X), with the constraint $ \alpha + \beta = 1$. The total cross sections of \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ and \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ mesons are governed by $$ \sigma(\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi) = \sigma_{P} + \sigma_{V} +\alpha\sigma_{V}$$ $$ \sigma(\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi) = \sigma_{P} + \beta\sigma_{V}$$ After applying the constraint these become $$ \sigma(\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi) = \sigma_{P} + (1 + \alpha)\sigma_{V}$$ $$ \sigma(\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi) = \sigma_{P} + (1 - \alpha)\sigma_{V}$$ By adding and subtracting these two equations, we decouple them into one which contains $\sigma_{P}$ and one which contains $\alpha$. Solving these two equations for the aforementioned quantities, we find the relations $$ \left( {\sigma(\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi) + \sigma(\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi) \over 2} \right) - \sigma_{V} = \sigma_{P}$$ $$ \left( {\sigma(\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi) - \sigma(\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi) \over 2\sigma_{V}} \right) = \alpha$$ We note that in the derivation of the above equations we have excluded contributions from the ${\ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi}^{\ast\ast 0}(2420)$. This is a candidate for the spin 2 charmed meson; its spin and parity have not been established. Its production rate relative to the \ifmmode{{\particleone D}^*}\else{{\particleone D}$^*$}\fi\ is of order \cite{bort2} $12 \pm 5 \%$. This state has only been observed in cascades to the \ifmmode{{\particleone D}^*}\else{{\particleone D}$^*$}\fi. Provided this state does not have a substantial decay rate directly into the ground states, these relations are unaffected.
Using the results derived in the previous Chapter, $ \sigma_{\ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi } = 0.68 \pm\ 0.13 $ (nb) and $\sigma_{\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi } = 1.49 \pm\ .36$ (nb), and defining $\sigma_{V} = \sigma_{\ifmmode{{\particleone D}^{*0}}\else{{\particleone D}$^{*0}$}\fi } = 0.74 \pm\ 0.18$ (nb), we derive $$ \sigma_{P} = 0.35 \pm\ 0.26 \rm (nb)$$ $$\rm \alpha = B(\ifmmode{{\particleone D}^{*+}}\else{{\particleone D}$^{*+}$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi ) = 0.53 \pm\ 0.29 $$ To evaluate our measurement of B(\ifmmode{{\particleone D}^{*+}}\else{{\particleone D}$^{*+}$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi ), we first make the simple comparison of the \ifmmode{{\particleone D}^{*+}}\else{{\particleone D}$^{*+}$}\fi\ to \ifmmode{{\particleone D}^{*0}}\else{{\particleone D}$^{*0}$}\fi\ cross section where both particles cascade into a \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi, which then decays into the \ifmmode{{\particleone K}^-}\else{{\particleone K}$^-$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi\ final state. This yields $ B(\ifmmode{{\particleone D}^{*+}}\else{{\particleone D}$^{*+}$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi ) = 0.55 \pm\ 0.07 \pm\ 0.11$. Other measurements of this number include those of MARK I \cite{gold} (0.60 $\pm$\ 0.15), MARK II \cite{coles} (0.44 $\pm$\ 0.7), and MARK III \cite{hitlp} (0.55 $\pm$\ 0.02 $\pm$\ 0.06). The MARK II number does not assume conservation of isospin. We again find favorable agreement among the measurements. Lastly, we use our measurement of the direct \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ cross section to determine the fraction of charmed mesons which are produced in the vector state. We evaluate the quantity $ \sigma_{V} \over \sigma_{P} + \sigma_{V}$. Using our derived value for $ \sigma_{P} = 0.35 \pm\ 0.26 \rm (nb)$ and a weighted average of the \ifmmode{{\particleone D}^*}\else{{\particleone D}$^*$}\fi\ cross sections for $\sigma_{V}$ we derive $$ { \sigma_{V} \over \sigma_{P} + \sigma_{V}} = 0.68 \pm\ 0.18$$ Traditionally this measurement was limited by the knowledge of B(\ifmmode{{\particleone D}^{*+}}\else{{\particleone D}$^{*+}$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi ), which for some time suffered a 20 \% uncertainty. This method bypasses the need for that number. Theoretical predictions for this ratio are not well established; however, we note that simple spin statistics predicts 0.75. In retrospect, we note that the errors on the numbers derived here make them somewhat less competitive with previous measurements.
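For concreteness, the central values quoted above follow directly from the relations of the preceding section: $$ \sigma_{P} = { 1.49 + 0.68 \over 2 } - 0.74 \approx 0.35 \ {\rm (nb)} , \qquad \alpha = { 1.49 - 0.68 \over 2\,\sigma_{V} } \approx 0.5 , $$ and, taking for the vector fraction the error-weighted average of the two \ifmmode{{\particleone D}^*}\else{{\particleone D}$^*$}\fi\ cross sections in Table \ref{t:6p1} (approximately 0.76 nb, our estimate), $$ { \sigma_{V} \over \sigma_{P} + \sigma_{V}} \approx { 0.76 \over 0.35 + 0.76 } \approx 0.68 . $$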
We feel the originality of the method warrants its presentation, as it may find application in the future. We also note that the means of these measurements are in fine agreement with expectations, attesting to the consistency of several distinct detector functions. \chapter{Summary} We have made a broad study of the properties of charmed mesons produced in \ifmmode{e^+e^-}\else{$e^+e^-$}\fi\ annihilations. The hadronization properties of charmed quarks were studied in two ways. The fragmentation distributions of \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ and \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ mesons were compared using string inspired and independent fragmentation models. The Peterson function provided consistently poorer fits to the data. This was dominated in part by the highest x bin, which the function consistently undershot. It is noted that the Peterson function vanishes at x = 1 much more rapidly, as $(1-x)^2$, than the other models, which behave as $\sim(1-x)$. Radiative effects may soften the spectrum most severely at high x; however, these effects have not been examined (this is in part due to the fact that there is significant feed down from the vector states in pseudoscalar fragmentation distributions, making vector mesons the laboratory of choice to study these effects). The Peterson function also predicts a much larger distribution at low x than do the string functions, which cut off sharply at $\approx$ x = 0.3. This cutoff is more intuitively appealing, as catastrophic processes would be required in the hadronization process to produce mesons with very low x. We find the fragmentation distributions of the \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ and \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ mesons to be quite similar. The \ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi\ fragmentation distribution is softer than the \ifmmode{{\particleone D}^*}\else{{\particleone D}$^*$}\fi\ and harder than the \ifmmode{\ifmmode{\mathchar"7003}\else{$\mathchar"7003$}\fi_c}\else{$\ifmmode{\mathchar"7003}\else{$\mathchar"7003$}\fi_c$}\fi , as expected. It was found that about 70 \% of the charmed mesons are produced with at least 1 unit of angular momentum. This is further supported by the large difference between the inclusive \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ and \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ cross sections. The relative branching ratios were measured for three different \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ decay modes. They were found to be in good agreement with previous measurements done at SPEAR. No specific theoretical information can be gleaned from these relative rates. This is partly due to the fact that in \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ decays there are often interfering diagrams (Figure \ref{fig:c2f8}), so rates cannot be purely determined. This is not the case in \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ decays, where relative rates for two body decays often provide useful information. The second difficulty is that in general theoretical predictions tend to be limited to two body and quasi two body decays, and no such predictions exist for three and four body nonresonant decays.
We have also searched for nonspectator decays of the \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ as a contamination of the decay mode \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\overline{{\particleone K}}\tiny^0\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi. A novel technique was developed for this purpose, which would detect signs of \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ reflections in two ways. Due to limited statistics, we were unable to make a definitive statement, although no real sign of this reflection was observed. We also searched for the special decay mode \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0}\else{{\particleone K}$^0$}\fi\ifmmode{\overline{{\particleone K}}\tiny^0, which has been touted as evidence for hadronic final state interactions in charm decays. The E-400 group has measured a candidate signal for this decay and found B(\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0}\else{{\particleone K}$^0$}\fi\ifmmode{\overline{{\particleone K}}\tiny^0) = 0.20 $\pm$\ 0.16 \%. This analysis is unable to confirm the E-400 measurement, and places a 90 \% confidence level upper limit of B(\ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^0}\else{{\particleone K}$^0$}\fi\ifmmode{\overline{{\particleone K}}\tiny^0) $<$ 0.12 \%. In addition, we have determined that there is a non-negligible background to this mode from the decay \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{{\particleone K}^{*-}}\else{{\particleone K}$^{*-}$}\fi\ifmmode{\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+}\else{$\ifmmode{\mathchar"119}\else{$\mathchar"119$}\fi^+$}\fi. The theoretical understanding of charm decay has advanced greatly in recent years. This was fueled, in part, by experiments performed at \ifmmode{e^+e^-}\else{$e^+e^-$}\fi\ energies above the \ifmmode{\ifmmode{\mathchar"120}\else{$\mathchar"120$}\fi''}\else{$\ifmmode{\mathchar"120}\else{$\mathchar"120$}\fi''$}\fi, and by fixed target experiments. The ARGUS group at DORIS was the first to observe the decay \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi\ \ifmmode{\rightarrow}\else{$\rightarrow$}\fi\tiny\ \ifmmode{\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi}\else{$\ifmmode{\mathchar"11E}\else{$\mathchar"11E$}\fi$}\fi\ifmmode{\overline{{\particleone K}}\tiny^0, and the tensor meson candidate $\ifmmode{{\particletwo D}}\else{{\particletwo D}}\fi^{\ast\ast 0}(2420)$. CLEO was the first experiment to perform a high statistics study of the \ifmmode{{\particleone D}^0}\else{{\particleone D}$^0$}\fi, \ifmmode{{\particleone D}^+}\else{{\particleone D}$^+$}\fi, and \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi\ lifetimes. The E-691 group has precisely determined the lifetimes of these three mesons, along with measuring several rare decay modes.
One difficulty of these experiments has been the reliance on states reconstructed purely from charged tracks, neglecting final states with single or multiple neutral particles (this partially contributed to the undoing of the MARK III double tag method). The CLEO II detector, with exceptional charged and neutral particle reconstruction capabilities, should be an excellent tool for studying charm decays and spectroscopy. Further advances in charm decay theory will also require a better understanding of the charmed strange meson \ifmmode{{\particleone D}^+_s}\else{{\particleone D}$^+_s$}\fi, which has thus far been observed in only five decay modes. In conclusion, much has been learned, and much remains to be learned, about the charm sector. The study of charmed particles has deepened our knowledge of elementary particle physics. A fully quantified theory of charm decays will represent a great triumph for theorists and experimentalists alike.
\section{Introduction} The starting point of this note is the flow \begin{equation} \label{H1} \dot z = f(z), \end{equation} in which $f $ or its conjugate $\bar f$ is an entire function. A trajectory for (\ref{H1}) is a path $z(t)$ in the plane with $z'(t) = f(z(t)) \in \mathbb C$ for $t$ in some maximal interval $(\alpha, \beta) \subseteq \mathbb R$. By the existence-uniqueness theorem, such trajectories are either constant (with $z(t)$ a zero of $f$), periodic or injective. It was shown in \cite[Theorem 5]{kingneedham} that if $f$ is a polynomial in $z$ of degree $n \geq 2$ then there exist $n-1$ disjoint trajectories for (\ref{H1}) which tend to infinity in finite increasing time, that is, which satisfy $\beta \in \mathbb R$ and $\lim_{t \to \beta - } z(t) = \infty$.The following theorem for holomorphic flows with transcendental entire $f$ was proved in \cite[Theorem 1.1]{Latraj}. \begin{thm}[\cite{Latraj}] \label{thm0} Let the function $f$ be transcendental entire: then (\ref{H1}) has infinitely many pairwise disjoint trajectories which tend to infinity in finite increasing time. \end{thm} For meromorphic functions in general, such trajectories need not exist at all \cite{Latraj}, but a result was also proved in \cite{Latraj} for the case where $f$ is transcendental and meromorphic in the plane and the inverse function $f^{-1}$ has a logarithmic singularity over $\infty$: this means that there exist $M > 0$ and a component $U$ of the set $\{ z \in \mathbb C : \, |f(z)| > M \}$ such that $U$ contains no poles of $f$ and $\log f$ maps $U$ conformally onto the half-plane $H = \{ v \in \mathbb C : \, {\rm Re } \, v > \log M \}$ \cite{BE,Nev}. In this case \cite[Theorem 1.2]{Latraj}, (\ref{H1}) has infinitely many pairwise disjoint trajectories tending to infinity in finite increasing time from within a neighbourhood $\{ z \in U : |f(z)| > M' \geq M \}$ of the singularity. On the other hand, for entire $f$ in (\ref{H1}), it seems that trajectories which tend to infinity in finite increasing time are somewhat exceptional. For the simple example $\dot z = - \exp( -z)$, it is easy to check that all trajectories satisfy $\exp(z(t)) = \exp(z(0)) -t$ and so tend to infinity as $t$ increases, but take infinite time to do so unless $\exp( z(0))$ is real and positive. It will be shown that for transcendental entire $f$ there is, in a certain sense, zero probability of landing on a trajectory of (\ref{H1}) which tends to infinity in finite time. To state the theorem, let $f$ be transcendental entire and let \begin{equation} \label{Fdef1} z_0 \in \mathbb C, \quad f(z_0) \neq 0, \quad F(z) = \int_{z_0}^z \frac{du}{f(u)} . \end{equation} Then $F(z)$ is defined near $z_0$ and is real and increasing as $z$ follows the trajectory $\zeta_{z_0} (t)$ of (\ref{H1}) starting at $z_0$. Let $\delta $ be small and positive and take the pre-image $ L_\delta(z_0)$ of the real interval $(- \delta, \delta)$ under the function $- i F(z) $; then $ L_\delta(z_0)$ is perpendicular to $\zeta_{z_0} (t)$ at $z_0$. The proof of the following result is adapted from that of the Gross star theorem \cite[p.292]{Nev}. \begin{thm} \label{thmhol} Let $f$ be a transcendental entire function and let $z_0$ and $F$ be as in (\ref{Fdef1}). For small positive $\delta$ let $Y_\delta$ be the set of $y \in (- \delta, \delta)$ such that the trajectory of (\ref{H1}) starting at $F^{-1}(iy)$ tends to infinity in finite increasing time. Then $Y_\delta$ has Lebesgue measure $0$. 
\end{thm} Theorem \ref{thmhol} seems unlikely to be best possible, but an example from \cite{Volk} (see \S \ref{uncountable}) shows that there exists a transcendental entire $f$ for which (\ref{H1}) has uncountably many trajectories tending to infinity in finite increasing time. It seems natural to ask similar questions in respect of the antiholomorphic flow \begin{equation} \label{AH} \dot z = \frac{dz}{dt} = \bar g(z), \end{equation} where $g$ is a non-constant entire function. Equation (\ref{AH}) appears widely in textbooks as a model for incompressible irrotational plane fluid flow, and is linked to (\ref{H1}) insofar as if $f = 1/g$ then (\ref{AH}) has the same trajectories as (\ref{H1}), since $\bar g = f/|f|^2$, although zeros of one of $f$ and $g$ are of course poles of the other and in general the speeds of travel differ. The trajectories of (\ref{AH}) are determined by choosing $G$ with $G'(z) = g(z)$ and writing \begin{equation} \label{transform1} v = G(z), \quad \dot v = g(z) \dot z = |g(z)|^2 \geq 0 , \end{equation} which leads to the classical fact that trajectories for (\ref{AH}) are level curves of ${\rm Im} \, G(z)$ on which ${\rm Re} \, G(z)$ increases with $t$. By the maximum principle, ${\rm Im} \, G(z)$ cannot be constant on a closed curve. Thus, apart from the countably many which tend to a zero of $G' = g$, all trajectories for (\ref{AH}) go to infinity, but this leaves open the question as to how long they take to do so. If a non-constant trajectory $\Gamma$ of (\ref{AH}) passes from $z_1 $ to $ z_2$ along an arc meeting no zeros of $g$, then ${\rm Im} \, v = \beta$ is constant on $\Gamma$ and $X = {\rm Re} \, v$ increases from $X_1 = {\rm Re} \, G(z_1) $ to $X_2 = {\rm Re} \, G(z_2)$. Thus (\ref{transform1}) implies that the transit time is \begin{equation} \int_{X_1+i\beta}^{X_2+i \beta } \frac1{|g(z)|^2} \, dv = \int_{X_1+i \beta}^{X_2+i \beta} \left| \frac{dz}{dv} \right|^2 \, dv = \int_{X_1}^{X_2} \left| \frac{dz}{dX} \right|^2 \, dX . \label{transit} \end{equation} This formula shows that a zero of $g$ cannot be reached in finite time, because if $z$ tends to a zero $z_3$ of $g$ of multiplicity $m$ as $X \to X_3$ then, with $c_j$ denoting non-zero constants, \begin{eqnarray*} X - X_3 &=& G(z)-G(z_3) \sim c_1 (z-z_3)^{m+1}, \\ \left| \frac{dz}{dX} \right|^2 &=& \frac1{ |g(z)|^2 } \sim \frac{c_2}{ |X-X_3|^{2m/(m+1)} } \geq \frac{ c_2 }{|X - X_3|} . \end{eqnarray*} Suppose now that $G' = g$ is a polynomial of degree $n \geq 1$ in (\ref{AH}), (\ref{transform1}) and (\ref{transit}). If $S \in \mathbb R$ and $R$ is sufficiently large and positive then each pre-image under $v = G(z)$ of the half-line $v = r + iS, r \geq R,$ gives a trajectory of (\ref{AH}) which tends to infinity, on which (\ref{transform1}) delivers $$\frac{dt}{dv} = \frac1{|g(z)|^2} \sim \frac{c_3 }{ |z|^{2n}} \sim \frac{c_4}{ |v|^{2n/(n+1)}} .$$ Hence (\ref{transit}) implies that the transit time to infinity is finite for $n \geq 2$ and infinite for $n=1$. Thus, if $g$ is a non-linear polynomial, (\ref{AH}) always has uncountably many trajectories tending to infinity in finite increasing time, but this need not be the case for transcendental entire $g$. \begin{thm} \label{thmbbh} There exists a transcendental entire function $g$ such that (\ref{AH}) has no trajectories tending to infinity in finite increasing time. 
\end{thm} Theorem \ref{thmbbh} also marks a sharp contrast with Theorem~\ref{thm0}, and its proof rests on the following immediate consequence of a result of Barth, Brannan and Hayman \cite[Theorem 2]{BBH}. \begin{thm}[\cite{BBH}] \label{BBHthm} There exists a transcendental entire function $G$ such that any unbounded connected plane set contains a sequence $(w_n)$ tending to infinity on which $U = {\rm Re} \, G$ satisfies $(-1)^n U(w_n) \leq |w_n|^{1/2 } $. \end{thm} To establish Theorem \ref{BBHthm}, it is only necessary to take the plane harmonic function $v$ constructed in \cite[Theorem 2]{BBH}, with the choice of $\psi(r)$ given by \cite[p.364]{BBH}. With $U = v$, and $V$ a harmonic conjugate of $U$, elementary considerations show that the resulting entire function $G = U+iV$ cannot be a polynomial. On the other hand, in the presence of a logarithmic singularity of the inverse function over infinity, trajectories of (\ref{AH}) tending to infinity in finite increasing time exist in abundance. \begin{thm} \label{thm2} Let $g$ and $G$ be transcendental meromorphic functions in the plane such that $G'=g$ and either $G^{-1}$ or $g^{-1}$ has a logarithmic singularity over $\infty$. Then in each neighbourhood of the singularity the flow (\ref{AH}) has a family of pairwise disjoint trajectories $\gamma_Y, Y \in \mathbb R$, each of which tends to infinity in finite increasing time. \end{thm} Theorem \ref{thm2} applies in particular if $g$ or its antiderivative $G$ is a transcendental entire function and belongs to the Eremenko-Lyubich class $\mathcal{B}$, which plays a salient role in complex dynamics \cite{Ber4,EL,sixsmithEL} and is defined by the property that $F \in \mathcal{B}$ if the finite critical and asymptotic values of $F$ form a bounded set, from which it follows that if $F \in \mathcal{B}$ is transcendental entire then $F^{-1}$ automatically has a logarithmic singularity over $\infty$. A specific function to which Theorem \ref{thm2} may be applied is $g(z) = e^{-z} + 1$; here $g$ is in $\mathcal{B}$, but its antiderivative $G$ is not, and this example also gives uncountably many trajectories of (\ref{AH}) taking infinite time to reach infinity through the right half-plane. Theorem \ref{thm2} is quite straightforward to prove when the inverse of $G$ has a logarithmic singularity over infinity, but the method turns out to have a bearing on the following question of Rubel \cite[pp.595-6]{Linear}: if $f$ is a transcendental entire function, must there exist a path tending to infinity on which $f$ and its derivative $f'$ both have asymptotic value $\infty$? This problem was motivated by the classical theorem of Iversen \cite{Nev}, which states that $\infty$ is an asymptotic value of every non-constant entire function. For transcendental entire $f$ of finite order, a strongly affirmative answer to Rubel's question was provided by the following result \cite[Theorem 1.5]{Larubel}. \begin{thm}[\cite{Larubel}] \label{rubelthm} Let the function $f$ be transcendental and meromorphic in the plane, of finite order of growth, and with finitely many poles. Then there exists a path $\gamma$ tending to infinity such that, for each non-negative integer $m$ and each positive real number $c$, \begin{equation} \lim_{z \to \infty, z \in \gamma } \frac{ \log |f^{(m)}(z)|}{ \log |z|} = + \infty \quad \hbox{and} \quad \int_\gamma |f^{(m)}(z)|^{-c} |dz| < + \infty . 
\label{rr3} \end{equation} \end{thm} For functions of infinite order, Rubel's question appears to be difficult, although a path satisfying (\ref{rr3}) for $m=0$ is known to exist for any transcendental entire function $f$ \cite{LRW}. However, a direct analogue of Theorem \ref{rubelthm} goes through relatively straightforwardly for transcendental entire functions $f$ in the Eremenko-Lyubich class $\mathcal{B}$. \begin{thm} \label{thm1} Let $f$ be a transcendental meromorphic function in the plane such that $f^{-1}$ has a logarithmic singularity over $\infty$, and let $D \in \mathbb R$. Then there exists a path $\gamma$ tending to infinity in a neighbourhood of the singularity, such that $f(z) -iD$ is real, positive and increasing on $\gamma$ and (\ref{rr3}) holds for each integer $m \geq 0 $ and real $c > 0$. \end{thm} This paper is organised as follows: Theorem \ref{thmhol} is proved in \S\ref{pfthmhol}, followed by an example in \S\ref{uncountable} and the proof of Theorem \ref{thmbbh} in \S\ref{pfthmbbh}. It is then convenient to give the proof of Theorem \ref{thm1} in \S\ref{pfthm1}, prior to that of Theorem \ref{thm2} in \S\ref{pfthm2}. \section{Proof of Theorem \ref{thmhol}}\label{pfthmhol} Let $f$, $F$, $z_0$ and $\delta$ be as in the statement of Theorem \ref{thmhol}. For $y \in (- \delta, \delta)$ let $g(y) = F^{-1}(iy)$ and let $T(y)$ be the supremum of $s > 0$ such that the trajectory $\zeta_{g(y)}(t)$ of (\ref{H1}) with $\zeta_{g(y)}(0) = g(y)$ is defined and injective for $0 \leq t < s$. If the trajectory $\zeta_{g(y)}(t)$ is periodic with minimal period $S_y$ then $T(y) = S_y$ and $\zeta_{g(y')}(t)$ has the same period for $y'$ close to $y$ \cite{brickman}. Furthermore, if $\zeta_{g(y)}(t)$ tends to infinity in finite time then $T(y) < + \infty$, while if $T(y)$ is finite but $\zeta_{g(y)}(t)$ is not periodic then $\lim_{t \uparrow T(y)} \zeta_{g(y)}(t) = \infty$ \cite[Lemma 2.1]{Latraj}. Set $$ A = \{ iy + t: \, \, y \in (- \delta, \delta), \, 0 < t < T(y) \} , \quad B = \{ \zeta_{g(y)} (t) : \, y \in (- \delta, \delta), \, 0 < t < T(y) \}. $$ Then $G( iy + t ) = \zeta_{g(y)} (t) $ is a bijection from $A$ to $B$. For $u = \zeta_{g(y)} (t)$, where $y \in (- \delta, \delta)$ and $0 < t < T(y)$, let $\sigma_u$ be the subarc of $ L_\delta(z_0)$ from $z_0$ to $g(y)$ followed by the sub-trajectory of (\ref{H1}) from $g(y)$ to $u$, and define $F$ by (\ref{Fdef1}) on a simply connected neighbourhood $D_u$ of $\sigma_u$. Then $F$ maps $\sigma_u$ bijectively to the line segment $[0, iy]$ followed by the line segment $[iy, iy+t]$, and taking a sub-domain if necessary makes it possible to assume that $F$ is univalent on $D_u$, with inverse function defined on a neighbourhood of $[iy, iy+t]$. Let $y'$ and $t'$ be real and close to $y$ and $t$ respectively. Then the image under $F^{-1}$ of the line segment $[iy', iy' + t']$ is an injective sub-trajectory of (\ref{H1}) joining $g(y') \in L_\delta(z_0)$ to $F^{-1}(iy' +t') = \zeta_{g(y')} (t') = G(iy'+t')$, and so $T(y') \geq t'$. Thus $y \rightarrow T(y)$ is lower semi-continuous and $A$ is a domain, while $G: A \to B$ is analytic. Moreover, $A$ is simply connected, because its complement in $\mathbb C \cup \{ \infty \}$ is connected, and so is $B$. Furthermore, $F$ extends to be analytic on $B$, by (\ref{Fdef1}) and the fact that $f \neq 0$ on $B$, and $F \circ G$ is the identity on $A$ because $F(G(t)) = t$ for small positive $t$. 
For $N \in (0, + \infty ) $, let $M_N $ be the set of all $ y$ in $ (- \delta, \delta)$ such that $\zeta_{g(y)} (t)$ tends to infinity and $T(y) < N $. To prove Theorem \ref{thmhol}, it suffices to show that each such $M_N$ has measure $0$, and the subsequent steps will be adapted from the proof of the Gross star theorem \cite[p.292]{Nev} and its extensions due to Kaplan \cite{Kaplan}. Let $\Lambda_N \subseteq B$ be the image of $\Omega_N = \{ w \in A : \, {\rm Re} \, w < N \}$ under $G$, let $r$ be large and positive and denote the circle $|z| = r$ by $S(0, r)$. Then $S(0, r) \cap \Lambda_N$ is a union of countably many open arcs $\Sigma_r$. If $y \in M_N$ then $T(y) < N$ and as $t \to T(y)$ the image $z = G(iy+t) $ tends to infinity in $\Lambda_N$ and so crosses $S(0, r)$, and hence there exists $\zeta $ in some $ \Sigma_r$ with ${\rm Im} \, F(\zeta) = y$, since $F: B \to A$ is the inverse of $G$. Thus the measure $\mu_N$ of $M_N$ is at most the total length $s(r)$ of the arcs $F(\Sigma_r)$. It follows from the Cauchy-Schwarz inequality that, as $t \to + \infty$, \begin{eqnarray*} \mu_N^2 &\leq& s(t)^2 = \left( \int_{t e^{i \phi } \in \Lambda_N } |F'(t e^{i \phi } )| t \, d \phi \, \right)^2 \\ &\leq& \left( \int_{t e^{i \phi } \in \Lambda_N } |F'(t e^{i \phi } )|^2 t \, d \phi \, \right) \left( \int_{t e^{i \phi } \in \Lambda_N } t \, d \phi \, \right) \leq 2 \pi t \left( \int_{t e^{i \phi } \in \Lambda_N } |F'(t e^{i \phi } )|^2 t \, d \phi \, \right) . \end{eqnarray*} Thus $\mu_N = 0$, since dividing by $2 \pi t$ and integrating from $r$ to $r^2$ yields, as $r \to + \infty$, \begin{eqnarray*} \frac{ \mu_N^2 \log r }{2 \pi} &\leq& \int_r^{r^2} \int_{t e^{i \phi } \in \Lambda_N } |F'(t e^{i \phi } )|^2 \, t \, d \phi \, dt \leq \int_{\Lambda_N} |F'(t e^{i \phi } )|^2 \, t \, d \phi \, dt = \hbox{area $(\Omega_N)$} \leq 2 \delta N . \end{eqnarray*} \hfill$\Box$ \vspace{.1in} \section{An example}\label{uncountable} Suppose that $G$ is a locally univalent meromorphic function in the plane, whose set of asymptotic values is an uncountable subset $E$ of the unit circle $\mathbb T$. Suppose further that there exists a simply connected plane domain $D$, mapped univalently onto the unit disc $\Delta$ by $G$, such that the branch $\phi$ of $G^{-1}$ mapping $\Delta$ to $D$ has no analytic extension to a neighbourhood of any $\beta \in E$. Let $F = S(G)$, where $S$ is a M\"obius transformation mapping $\Delta$ onto $\{ w \in \mathbb C : \, {\rm Re} \, w < 0 \}$, and for $\beta \in E$ let $\alpha = S(\beta)$ and let $L$ be the half-open line segment $[\alpha -1, \alpha)$. Then $M = S^{-1}(L)$ is a line segment or circular arc in $\Delta$ which meets $\mathbb T$ orthogonally at $\beta$. Moreover, $\phi(M)$ is a level curve of ${\rm Im} \, F$ in $D$, which cannot tend to a simple $\beta$-point of $G$ in $\mathbb C$ because this would imply that $\phi$ extends to a neighbourhood of $\beta$. Hence $\phi(M)$ is a path tending to infinity in $D$, on which ${\rm Im} \, F(z)$ is constant and $F(z)$ tends to $\alpha$. Since $G$ and $F$ are locally univalent, $f = 1/F'$ is entire. As $t \to 0-$ write, on $\phi(M)$, $$ F(z) = \alpha + t, \quad \quad \frac{dt}{dz} = F'(z) = \frac1{f(z)}, \quad \frac{dz}{dt} = f(z), $$ so that $\phi(M)$ is a trajectory of (\ref{H1}) which tends to infinity in finite increasing time, and there exists one of these for every $\beta$ in the uncountable set $E$. 
A suitable $G$ is furnished by a construction of Volkovyskii \cite{Ermich,Volk}, in which $\mathbb T \setminus E$ is a union of disjoint open circular arcs $I_k = (a_k, b_k)$, oriented counter-clockwise. For each $k$, take the multi-sheeted Riemann surface onto which $(a_k - b_k e^z)/(1-e^z)$ maps the plane, cut it along a curve which projects to $I_k$, and glue to $\Delta$ that half which lies to the right as $I_k$ is followed counter-clockwise. This forms a simply connected Riemann surface $R$ with no algebraic branch points. By \cite[Theorem 17, p.71]{Volk} (see also \cite[p.6]{Ermich}), the $I_k$ can be chosen so that $R$ is parabolic and is thereby the image surface of a locally univalent meromorphic function $G$ in the plane. \hfill$\Box$ \vspace{.1in} \section{Proof of Theorem \ref{thmbbh}}\label{pfthmbbh} Following the notation of the introduction, suppose that $v=G(z)$ is a transcendental entire function with derivative $g$ in (\ref{AH}), (\ref{transform1}) and (\ref{transit}). \begin{prop} \label{propbbh} Let $\Gamma$ be a level curve tending to infinity on which $Y = {\rm Im} \, G(z) = \beta \in \mathbb R $ and $X = {\rm Re} \, G(z) $ increases, with $X \geq \alpha \in \mathbb R $, and assume that $\Gamma$ meets no zero of $g$. Suppose that $(z_n)$ is a sequence tending to infinity on $\Gamma$ such that $v_n = G(z_n ) = X_n + i \beta $ satisfies $v_n = o( |z_n|^2 ) $. Then the trajectory of (\ref{AH}) which follows $\Gamma$ takes infinite time in tending to infinity. \end{prop} Here it is not assumed or required that $X \to + \infty$ as $z \to \infty$ on $\Gamma$. \\ \\ \textit{Proof of Proposition \ref{propbbh}.} It may be assumed that $\Gamma$ starts at $z^*$ and $G(z^*) = \alpha + i \beta$. Denote positive constants, independent of $n$, by $C_j$. Then the Cauchy-Schwarz inequality gives, as $n $ and $z_n$ tend to infinity, \begin{eqnarray*} |z_n|^2 &\leq& \left( C_1 + \int_\alpha^{X_n} \left| \frac{dz}{dX} \right| \, dX \right)^2 \\ &\leq& 2 \left( \int_\alpha^{X_n} \left| \frac{dz}{dX} \right| \, dX \right)^2 \\ &\leq& 2 \left( \int_\alpha^{X_n} \, dX \right) \left( \int_\alpha^{X_n} \left| \frac{dz}{dX} \right|^2 \, dX \right) \\ &\leq& 2 \left( |v_n| + C_2 \right) \left( \int_\alpha^{X_n} \left| \frac{dz}{dX} \right|^2 \, dX \right) \\ &\leq& o \left( |z_n|^2 \right) \left( \int_\alpha^{X_n} \left| \frac{dz}{dX} \right|^2 \, dX \right) . \end{eqnarray*} Thus (\ref{transit}) shows that the transit time from $z^*$ to $z_n$ tends to infinity with $n$. \hfill$\Box$ \vspace{.1in} \textit{Proof of Theorem \ref{thmbbh}.} Let $G$ be the entire function given by Theorem \ref{BBHthm}, and set $g = G'$. As noted in the introduction, no trajectory of (\ref{AH}) can pass through a zero of $g$, and in any case it takes infinite time for a trajectory to approach a zero of $g$. Furthermore, if $\Gamma$ is a level curve, starting at $z^*$ say, on which ${\rm Im} \, G(z)$ is constant and $U(z) = {\rm Re} \, G(z)$ increases, and on which $g$ has no zeros, then there exists a sequence $z_n = w_{2n}$ which tends to infinity on $\Gamma$ and satisfies $$U(z^*) \leq U(z_n) \leq |z_n|^{1/2} , \quad |G(z_n)| \leq |U(z_n)| + O(1) \leq |z_n|^{1/2} + O(1).$$ Hence $\Gamma$ satisfies the hypotheses of Proposition \ref{propbbh}. It now follows that (\ref{AH}) has no trajectories tending to infinity in finite increasing time. Since time can be reversed for these flows by setting $s = -t$ and $dz/ds = - \bar g(z)$, the same example has no trajectories tending to infinity in finite decreasing time either.
\hfill$\Box$ \vspace{.1in} \section{Proof of Theorem \ref{thm1}}\label{pfthm1} Let $f$ be as in the hypotheses. Then there exist $M > 0$ and a component $U$ of $\{ z \in \mathbb C : \, |f(z)| > M \} $ such that $v = \log f(z)$ is a conformal bijection from $U$ to the half-plane $H$ given by ${\rm Re} \, v > N = \log M$; it may be assumed that $0 \not \in U$. Let $\phi: H \to U$ be the inverse function. If $u \in H$ then $\phi$ and $\log \phi$ are univalent on the disc $|w-u| < {\rm Re} \, u - N$ and so Bieberbach's theorem and Koebe's quarter theorem \cite[Chapter 1]{Hay9} imply that \begin{equation} \label{h3} \left| \frac{\phi''(u)}{\phi'(u)} \right| \leq \frac4{{\rm Re} \, u -N } , \quad \left| \frac{\phi'(u)}{\phi(u)} \right| \leq \frac{4 \pi}{{\rm Re} \, u -N } . \end{equation} \begin{lem} \label{lem1} Let $v_0 $ be large and positive and for $0 \leq k \in \mathbb Z$ write \begin{equation} \label{rub1} V_k = \left\{ v_0 + t e^{i \theta} : \, t \geq 0, \, - \, \frac{\pi}{2^{k+2}} \leq \theta \leq \frac{\pi}{2^{k+2}} \right\}, \quad G_k(v) = \frac{f^{(k)}(z)}{f(z)}, \quad z = \phi(v). \end{equation} Then there exist positive constants $d$ and $c_k $ such that $| \log \phi'(v) | \leq d \log ( {\rm Re} \, v )$ as $v \to \infty$ in $V_1$ and $| \log |G_k(v)| | \leq c_k \log ( {\rm Re} \, v )$ as $v \to \infty$ in $V_k$. \end{lem} \textit{Proof.} For $v \in V_1$, parametrise the straight line segment from $v_0$ to $v$ with respect to $s = {\rm Re} \, u$. Then (\ref{h3}) and the simple estimate $|du| \leq \sqrt{2} ds $ yield $| \log \phi'(v) | = O( \log ( {\rm Re} \, v ))$ as $v \to \infty$ in $V_1$. Next, the assertion for $G_k$ is trivially true for $k=0$, so assume that it holds for some $k \geq 0$ and write \begin{eqnarray*} G_{k+1}(v) &=& \frac{f^{(k+1)}(z)}{f(z)} = \frac{f^{(k)}(z)}{f(z)} \cdot \frac{f'(z)}{f(z)} +\frac{d}{dz} \left( \frac{f^{(k)}(z)}{f(z)} \right) \nonumber \\ &=& G_k(v) G_1(v) + \frac{G_k'(v)}{\phi'(v)} = \frac{G_k(v)}{\phi'(v)} \left( 1 + \frac{G_k'(v)}{G_k(v)} \right). \end{eqnarray*} Thus it suffices to show that $G_k'(v)/G_k(v) \to 0$ as $v \to \infty$ in $V_{k+1}$. By (\ref{rub1}) there exists a small positive $d_1$ such that if $v \in V_{k+1}$ is large then the circle $|u - v| = r_v = d_1 {\rm Re} \, v$ lies in $V_k$, and the differentiated Poisson-Jensen formula \cite[p.22]{Hay2} delivers $$ \frac{G_k'(v)}{G_k(v)} = \frac1{ \pi} \int_0^{2 \pi} \, \frac{ \log | G_k(v + r_v e^{i \theta } )|}{r_v e^{i \theta}} \, d \theta = O \left( \frac{ \log ( {\rm Re} \, v ) }{ {\rm Re} \, v } \right) \to 0 $$ as $v \to \infty$ in $V_{k+1}$. This proves the lemma. \hfill$\Box$ \vspace{.1in} To establish Theorem \ref{thm1}, take any $D \in \mathbb R$. Then there exist $v_1 \in [1, + \infty) $ and a path $$\Gamma \subseteq \{ v \in \mathbb C : \, {\rm Re} \, v > N , \, | {\rm Im} \, v | < \pi/4 \} \subseteq H$$ which is mapped by $e^v$ to the half-line $\{ t + iD : \, t \geq v_1 \}$. Thus $f(z) -i D = e^v -i D $ is real and positive for $z$ on $\gamma = \phi (\Gamma)$, and $\Gamma \setminus V_k $ is bounded for each $k \geq 0$. Now write, on $\Gamma$, $$ e^v = t+iD, \quad \frac{dv}{dt} = \frac1{t+iD}, \quad s = {\rm Re} \, v = \frac12 \ln (t^2+D^2) .$$ Hence, for any non-negative integers $k, m$, Lemma \ref{lem1} gives, as $v \to \infty$ on $\Gamma $, $$ \left| \frac{f^{(k)}(z)}{z^m} \right| = \left| \frac{f(z) G_k(v)}{z^m} \right| = \left| \frac{e^v G_k(v)}{\phi(v)^m} \right| \geq \frac{e^s }{ s^{c_k+md} } \geq e^{s/2} \to \infty .
$$ It then follows that, for $c > 0$, \begin{eqnarray*} \int_\gamma |f^{(k)}(z)|^{-c} \, |dz| &\leq & O(1) + \int_\Gamma e^{- cs/2} |\phi'(v)| \, |dv| \\ &\leq& O(1) + \int_\Gamma e^{- cs/4} \, |dv| \\ &=& O(1) + \int_{v_1}^{+\infty} \frac1{(t^2+D^2)^{1/2+c/8}} \, dt < + \infty . \end{eqnarray*} \hfill$\Box$ \vspace{.1in} \section{Proof of Theorem \ref{thm2}}\label{pfthm2} Suppose first that the inverse function of the antiderivative $G$ of $g$ has a logarithmic singularity over infinity, and take $D \in \mathbb R$. Then Theorem \ref{thm1} may be applied with $f = G$ and $m= c = 1$, giving a level curve $\gamma = \gamma_D$, lying in a neighbourhood of the singularity, on which ${\rm Im} \, G(z) = D$ and ${\rm Re} \, G(z) $ increases. This curve is a trajectory for (\ref{AH}), traversed in time $$ \int_\gamma \frac1{\bar g(z)} \, dz \leq \int_\gamma |G'(z)|^{-1} \, |dz| < + \infty ,$$ which completes the proof in this case. For the proof of the following lemma the reader is referred to the statement and proof of \cite[Lemma 3.1]{blnewqc}. \begin{lem}[\cite{blnewqc}] \label{lemfirstest} Let the function $\phi : H \to \mathbb C \setminus \{ 0 \}$ be analytic and univalent, where $H = \{ v \in \mathbb C : \, {\rm Re} \, v > 0 \}$, and for $v, v_1 \in H$ define $Z(v) = Z(v, v_1)$ by \begin{equation} \label{h1} Z(v, v_1) = \int_{v_1}^v e^{u/2} \phi'(u) \, du = 2 e^{v/2} \phi'(v) - 2 e^{v_1/2} \phi'(v_1) - 2 \int_{v_1}^v e^{u/2} \phi''(u) \, du . \end{equation} Let $\varepsilon $ be a small positive real number. Then there exists a large positive real number $N_0$, depending on $\varepsilon$ but not on $\phi$, with the following property. Let $v_0 \in H$ be such that $S_0 = {\rm Re} \, v_0 \geq N_0$, and define $v_1, v_2, v_3, K_2$ and $ K_3$ by \begin{equation*} \label{vjdef} v_j = \frac{2^j S_0}{128} + i T_0, \quad T_0 = {\rm Im} \, v_0, \quad K_j = \left\{ v_j + r e^{i \theta} : \, r \geq 0, \, - \frac{\pi}{2^j} \leq \theta \leq \frac{\pi}{2^j} \right\}. \end{equation*} Then the following two conclusions both hold:\\ (i) $Z = Z(v, v_1)$ satisfies, for $v \in K_2$, \begin{equation} \label{h2} Z(v, v_1 ) = \int_{v_1}^v e^{u/2} \phi'(u) \, du = 2 e^{v/2} \phi'(v) (1 + \delta (v) ), \quad | \delta (v) | < \varepsilon . \end{equation} (ii) $\psi = \psi (v, v_1) = \log Z(v, v_1) $ is univalent on a domain $H_1$, with $v_0 \in H_1 \subseteq K_3$, and $\psi(H_1)$ contains the strip \begin{equation} \label{Omegaimage} \left\{ \psi (v_0) + \sigma+ i \tau : \, \sigma \geq \log \frac18 \, , \, - 2 \pi \leq \tau \leq 2 \pi \right\} . \end{equation} \end{lem} \hfill$\Box$ \vspace{.1in} Assume henceforth that $g$ is as in the hypotheses of Theorem \ref{thm2} and the inverse function of $g$ has a logarithmic singularity over infinity. This time there exist $M > 0$ and a component $C$ of $\{ z \in \mathbb C : \, |g(z)| > M \} $ such that $\zeta = \log g(z)$ is a conformal mapping of $C$ onto the half-plane given by ${\rm Re} \, \zeta > \log M$. Since (\ref{AH}) may be re-scaled via $z = Mw$ and $g(z) = Mh(w) $, it may be assumed that $M = 1$ and $0 \not \in C$. In order to apply Lemma \ref{lemfirstest}, let $\phi: H \to C$ be the inverse function $z = \phi(v)$ of the mapping from $C$ onto $H$ given by $$ v = 2 \zeta = 2 \log g(z), \quad g(z) = e^{v/2} , $$ As in the proof of Theorem \ref{thm1}, (\ref{h3}) holds for $u \in H$, with $N = 0$. 
By (\ref{Omegaimage}) there exists $X_0 > 0$ such that $ Z(v, v_1)$ maps a domain $H_2 \subseteq H_1 \subseteq K_3 \subseteq H$ univalently onto a half-plane ${\rm Re} \, Z > X_0$. Hence, for any $Y_0 \in \mathbb R$, there exists a path $\Gamma $ which tends to infinity in $ H_1 \subseteq K_3$ and is mapped by $ Z(v, v_1)$ onto the half-line $L_0 = \{ X + i Y_0, \, X \geq X_0 + 1 \}$. Consider the flow in $H_2$ given by \begin{equation} \label{vflow} \phi'(v) \dot v = \overline{e^{v/2}} ; \end{equation} by (\ref{h2}) this transforms under $Z = Z(v, v_1)$ to \begin{equation} \label{wflow} \dot Z = \frac{dZ}{dv} \, \dot v = e^{v/2} \phi'(v) \dot v = | e^{v} | . \end{equation} Combining (\ref{h3}) and (\ref{h2}) shows that $ | e^{v} | \geq |Z(v)|^{3/2} $ for large $v$ on $\Gamma$. Hence there exists a trajectory of (\ref{wflow}) which starts at $X_0+1 + iY_0$ and tends to infinity along $L_0$ in time $$ T_0 \leq \int_{X_0+1}^\infty \left| \frac{dt}{dX} \right| \, dX \leq O(1) + \int_{X_0+1}^\infty (X^2 + Y_0^2)^{-3/4} \, dX < + \infty . $$ This gives a trajectory of (\ref{vflow}) tending to infinity along $\Gamma$ and taking finite time to do so, and hence a trajectory $\gamma$ of (\ref{AH}) in $C$, tending to infinity in finite increasing time. Since $Y_0 \in \mathbb R$ may be chosen at will, this proves Theorem \ref{thm2}. \hfill$\Box$ \vspace{.1in} {\footnotesize
2023-04-23T08:18:01.662Z
2021-05-13T02:10:00.000Z
redpajama/arxiv
arxiv_0000
1,055
5,213
ece751f52a3c14cc273632ba73f7b022d3a026a1
\section{Introduction} Enhancing a speech signal corrupted by interfering speakers has been one of the major challenges of speech signal processing. One way to tackle this problem is to use speech separation~\cite{makino2007blind}, which separates a speech mixture into all its sources. Research in speech separation has progressed rapidly with the advent of deep learning~\cite{hershey2016deep,yu2016permutation,luo2018tasnet}. However, there are two fundamental limitations with most separation techniques. First, separation requires knowing or estimating the number of sources in the mixture. Second, there is a global permutation ambiguity; the mapping between outputs and speakers is arbitrary. \gls{TSE}~\cite{zmolikova2019Journal} has been proposed as an alternative to enhance speech in a mixture. \gls{TSE} focuses on extracting only the speech signal of a target speaker, instead of separating all sources, by exploiting a speaker clue to identify which speaker to extract~\cite{zmolikova2017speaker,zmolikova2019Journal,delcroix2018single,ephrat2018looking,afouras2018conversation,Wang_voicefilter19,xu2019time,Chen2018DeepEN,xiao2019single,Jansky_20,Gu2019}. For example, we can use an enrollment utterance, which consists of a short recording containing only the voice of the target speaker~\cite{zmolikova2017speaker,delcroix2018single,Wang_voicefilter19}. Because \gls{TSE} estimates only the speech of the target speaker, it naturally alleviates the issues of separation systems, i.e., the processing is independent of the number of sources in the mixture, and there is no speaker ambiguity at the output. We can realize \gls{TSE} using a \gls{NN} conditioned on the target speaker clue, which directly estimates the target speech from the mixture~\cite{zmolikova2017speaker,delcroix2020improving,Wang_voicefilter19,xu2019time}. Such a \gls{TSE} system must thus perform both \emph{separation} and \emph{speaker identification} internally. Most studies about \gls{TSE} have assumed the target speaker was always actively speaking in the mixtures, i.e., the \emph{\gls{AS}} case. However, we argue that measuring \gls{TSE} performance in such conditions does not fully represent the \emph{speaker identification} capabilities of \gls{TSE} systems. Indeed, in practice, a target speaker may be silent, i.e., the \emph{\gls{IS}} case. In such a case, a \gls{TSE} system should output nothing or a zero signal. However, a \gls{TSE} system trained only on \gls{AS} conditions would always try to output a speech-like signal, which would cause \emph{false alarms}, i.e., false positives. It is thus essential to consider \gls{IS} conditions in the design and evaluation of \gls{TSE} systems. There have been only a few works dealing with the \gls{IS} issue of \gls{TSE}~\cite{zhang20m_interspeech,borsdorf21_interspeech,Zhang2021}. These works offer two different strategies to address the problem. The \gls{TSEIS} scheme trains a \gls{TSE} system to directly output zero signals for \gls{IS} cases by including \gls{IS} samples during training~\cite{zhang20m_interspeech,borsdorf21_interspeech}. The \gls{TSEV} scheme combines \gls{TSE} with speaker verification and detects \gls{IS} samples when the extracted signals do not match the target speaker characteristics of the enrollment~\cite{Zhang2021}\footnote{Note that in \cite{Zhang2021} the goal is not \gls{TSE} but speaker verification.}.
\gls{TSEIS} is a simpler system than \gls{TSEV}, but false alarms and \emph{miss detections}\footnote{Here, a miss detection or false negative means that the \gls{TSE} system wrongly predicts an \gls{AS} sample as \gls{IS}.} are potentially easier to control with \gls{TSEV}. However, these schemes have not been compared, and their impact on \gls{TSE} performance has not been fully revealed. In this paper, we address this shortcoming and perform a comprehensive comparison in terms of the detection of \gls{IS} and extraction performance in order to answer the following question: \emph{How well can \gls{TSE} systems handle \gls{IS} samples?} The contributions of this paper are as follows: (1) We propose two simple implementations of the \gls{TSEIS} and \gls{TSEV} schemes based on the SpeakerBeam \gls{TSE} framework~\cite{delcroix2020improving}, and perform a comprehensive experimental comparison in terms of extraction and \gls{AS}/\gls{IS} detection performance. (2) We reveal that a \gls{TSEIS} system trained with a modified \gls{SNR} loss can predict \gls{IS} in about 90\% of the cases but also significantly increases the number of extraction failures for \gls{AS} cases. (3) We show that we can build a \gls{TSEV} system from a \gls{TSE} system trained only with \gls{AS} samples. Such a simple \gls{TSEV} system can detect \gls{AS}/\gls{IS} better than a \gls{TSEIS}, while maintaining high extraction performance. (4) Finally, we reveal that the enrollment duration moderately impacts extraction performance but greatly affects \gls{AS}/\gls{IS} detection errors of \gls{TSEV}. With enrollments of 15 sec or more, we can achieve \gls{AS}/\gls{IS} detection with an \gls{EER} of about 5\%. The results of this study demonstrate the potential of current \gls{TSE} systems to detect and extract a target speaker. \section{Related works} Prior works~\cite{zhang20m_interspeech,borsdorf21_interspeech} considered \gls{TSE} with \gls{IS} cases. They introduced a modified \gls{SI-SNR}~\cite{roux2019sdr} loss to allow training a \gls{TSEIS} system with \gls{IS} samples. However, using a scale-invariant loss makes the output scale arbitrary. It may thus be challenging to determine if the output can actually be considered an \gls{AS} or not. Besides, it is unclear how well the system proposed in~\cite{borsdorf21_interspeech} can detect \gls{IS} cases since the approach was only evaluated with signal extraction measures and not in terms of \gls{AS}/\gls{IS} detection. In contrast, we propose using a modified \gls{SNR} loss to train our \gls{TSEIS} system, which preserves the scale at the output of the system and thus allows performing \gls{AS}/\gls{IS} detection based on the attenuation from the mixture. There have been two prior studies combining \gls{TSE} with speaker verification~\cite{rao19_interspeech, Zhang2021}, which are related to \gls{TSEV}. Both works aimed at improving speaker verification for speech in a mixture and used a \gls{TSE} system as a pre-processing step. However, as their goal was speaker verification and not \gls{TSE}, they did not evaluate their systems in terms of extraction performance, although, e.g., miss detection errors caused by zeroing out the output when \gls{AS} cases are detected as \gls{IS} can have a severe impact on extraction performance.
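To make the scale issue concrete, the following minimal NumPy sketch (an illustration under our own assumptions, not the implementation of~\cite{borsdorf21_interspeech}) compares a scale-dependent \gls{SNR} with the scale-invariant \gls{SI-SNR}: rescaling the estimate, e.g., towards silence, leaves the \gls{SI-SNR} unchanged, so the output level carries no information for \gls{IS} detection, whereas the scale-dependent \gls{SNR} reacts to it.
\begin{verbatim}
import numpy as np

def snr(est, ref, eps=1e-8):
    # Scale-dependent SNR in dB.
    return 10 * np.log10(np.sum(ref ** 2) / (np.sum((ref - est) ** 2) + eps))

def si_snr(est, ref, eps=1e-8):
    # Scale-invariant SNR in dB: project the estimate onto the reference first.
    alpha = np.dot(est, ref) / (np.dot(ref, ref) + eps)
    target, noise = alpha * ref, est - alpha * ref
    return 10 * np.log10(np.sum(target ** 2) / (np.sum(noise ** 2) + eps))

rng = np.random.default_rng(0)
ref = rng.standard_normal(16000)              # 1 s of "target" speech at 16 kHz
est = ref + 0.1 * rng.standard_normal(16000)  # a reasonably good estimate

for scale in (1.0, 1e-3):                     # rescale the estimate towards silence
    print(scale, round(snr(scale * est, ref), 1), round(si_snr(scale * est, ref), 1))
# The SI-SNR value is identical for both scales, while the SNR drops sharply,
# which is why a scale-dependent loss keeps the output level informative.
\end{verbatim}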
\section{TSE problem and baseline system} \label{sec:baseline} \subsection{Problem formulation} \gls{TSE} aims at extracting the speech of a target speaker, ${\mathbf{x}^s} \in \mathcal{R}^T$, from a mixture ${\mathbf y} \in \mathcal{R}^T$ defined as, \begin{align} {\mathbf y} = {\mathbf{x}^s} + \sum_{i \neq s} {\mathbf x}^i + {\mathbf n}, \end{align} where ${\mathbf x}^i$ and ${\mathbf n}$ represent the interference speech and the background noise signals, respectively. $T$ is the duration of the signal. We assume that we have an enrollment utterance of the target speaker, $\mathbf{a}^s \in \mathcal{R}^{T^a}$, of duration $T^a$. Note that when the target speaker is active, ${\mathbf{x}^s}$ is a speech signal, while when it is inactive, ${\mathbf{x}^s} = \mathbf{0}$, where $\mathbf{0}$ denotes a vector of all zeros. \subsection{SpeakerBeam} \begin{figure}[tb] \centering \centerline{\includegraphics[width=1\linewidth]{architecture.pdf}} \vspace{-2mm} \caption{Overview of a \gls{TSE} system and its extension to \gls{TSEV}.} \vspace{-5mm} \label{fig:sys_overview} \end{figure} We use time-domain SpeakerBeam~\cite{delcroix2020improving} as a basis for our study as it represents a typical enrollment-based neural \gls{TSE} system~\cite{Wang_voicefilter19,xu2019time}. The left part of Fig.~\ref{fig:sys_overview} shows a diagram of the system. It consists of two modules. (1) An auxiliary network that computes a target speaker embedding, $\mathbf{e}^s \in \mathcal{R}^N$, from the enrollment, $\mathbf{a}^s$. (2) An extraction network that estimates the target speech from the mixture given the speaker embedding. The operation of the network is summarized as follows, \begin{align} \mathbf{e}^s = & \Aux(\mathbf{a}^s),\\ \hat{\mathbf x}^{s} = & \Ext({\mathbf y}, \mathbf{e}^s), \end{align} where $\hat{\mathbf x}^{s} \in \mathcal{R}^T$ is the estimated target speech signal, and $\Aux(\cdot)$ and $\Ext(\cdot)$ represent the auxiliary and extraction \gls{NN}, respectively. With time-domain SpeakerBeam, both $\Aux(\cdot)$ and $\Ext(\cdot)$ are implemented with the 1-D convolutional blocks proposed for \gls{ConvTasNet}~\cite{luo2019conv}. The extraction network uses an element-wise multiplication~\cite{samarakoon2016subspace,delcroixIcassp19} to combine the embedding vector with the hidden representation obtained after the first convolutional block of the extraction network. \subsection{Training objective for active speaker cases} With SpeakerBeam, we jointly train both the auxiliary and extraction networks, which enables learning speaker embeddings optimal for the \gls{TSE} task. Speech separation and \gls{TSE} systems are usually trained using a time-domain criterion such as \gls{SNR} or \gls{SI-SNR}~\cite{roux2019sdr,luo2019conv}. We chose to use a scale-dependent loss to ensure that the system preserves the scale of the signals, as the scale may be important for detecting \gls{AS}/\gls{IS} samples. In particular, we use the negative thresholded \gls{SNR}~\cite{wisdom2020unsupervised} loss defined as, \begin{align} \mathcal{L}^{\text{active}} (\hat{\mathbf x}^{s}, {\mathbf x}^s )& = - 10 \log_{10} \left( \frac{\| {\mathbf x}^s \|^{2}}{\| {\mathbf x}^s - \hat{\mathbf x}^{s} \|^{2} + \tau \| {\mathbf x}^s \|^2} \right), \label{eq:ext_loss} \end{align} where $\tau$ is a threshold that we set at $\tau = 10^{-3}$. It prevents low-distortion training samples from dominating the gradient. We train our baseline \gls{TSE} model with only \gls{AS} training samples.
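For concreteness, the negative thresholded \gls{SNR} loss of Eq.~\eqref{eq:ext_loss} can be sketched in a few lines of PyTorch; this is only a minimal illustration of the formula, with tensor shapes and the batch reduction being our assumptions rather than the exact training code.
\begin{verbatim}
import torch

def thresholded_snr_loss(est, ref, tau=1e-3, eps=1e-8):
    # Negative thresholded SNR; est and ref are (batch, T) time-domain tensors.
    ref_energy = torch.sum(ref ** 2, dim=-1)
    err_energy = torch.sum((ref - est) ** 2, dim=-1)
    snr = 10 * torch.log10(ref_energy / (err_energy + tau * ref_energy + eps))
    return -snr.mean()
# With tau = 1e-3 the SNR saturates at 10*log10(1/tau) = 30 dB, so nearly
# perfect (low-distortion) training samples cannot dominate the gradient.
\end{verbatim}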
\section{Handling inactive speakers} \subsection{TSE-IS: Learning direct \gls{IS} detection with inactive loss} \label{ssec:SpeakerBeam-IS} The first approach for handling \gls{IS}, \gls{TSEIS}, consists of training a system to output zero signals for \gls{IS} cases. Loss functions derived from the \gls{SNR}, such as Eq.~\eqref{eq:ext_loss}, are ill-defined when the reference signal is zero. Thus, we cannot use such losses directly with \gls{IS} samples. This problem was first revealed for the training of separation systems that can accommodate a varying number of sources in the mixture~\cite{FUSS_wisdom2021s}, i.e., the number of sources can be less than the number of outputs of the separation system. In this case, a separation system thus needs to be able to output zero signals, which is similar to the \gls{IS} problem of \gls{TSE}. We propose to use the modified \gls{SNR} loss introduced in~\cite{FUSS_wisdom2021s} to train our \gls{TSEIS} system. The loss is defined as, \begin{align} \mathcal{L} (\hat{\mathbf x}^{s}, {\mathbf x}^s, {\mathbf y} )= &\left\{ \begin{array}{lc} \mathcal{L}^{\text{active}} (\hat{\mathbf x}^{s}, {\mathbf x}^s ), & \text{if } {\mathbf x}^s \neq \mathbf{0},\\ \mathcal{L}^{\text{inactive}} (\hat{\mathbf x}^{s}, {\mathbf y} ), & \text{if } {\mathbf x}^s = \mathbf{0}, \end{array} \right. \label{eq:active&inactive_loss} \end{align} where the inactive loss is given by \begin{align} \mathcal{L}^{\text{inactive}} (\hat{\mathbf x}^{s}, {\mathbf y} )=10 \log_{10} \left( \| \hat{\mathbf x}^{s} \|^{2} + \tau^{\text{inactive}} \| {\mathbf y} \|^2 \right), \end{align} and $\tau^{\text{inactive}}$ is a soft threshold set at $\tau^{\text{inactive} } = 10^{-2}$. $\mathcal{L}^{\text{inactive}}$ consists of the denominator term of Eq. (\ref{eq:ext_loss}) with a different setting for the soft threshold (i.e., ${\mathbf x}^s$ replaced by ${\mathbf y}$). We opt here for a scale-dependent \gls{SNR} loss, unlike \cite{borsdorf21_interspeech}, because we believe that the scale of the output signal may matter in practical applications for detecting \gls{IS} cases. For example, we can evaluate how well the system could internally detect \gls{AS}/\gls{IS} cases by looking at the output scale and, e.g., measuring the attenuation from the mixture, $\mathcal{A}^{\text{mixture}}= 10 \log_{10} \left(\frac{\| \hat{\mathbf x}^{s} \|^{2}}{\| {\mathbf y} \|^{2}} \right)$. We can thus define a classifier based on the attenuation as, \begin{align} c^{\text{Att}} = &\left\{ \begin{array}{lc} 1, & \text{if } \mathcal{A}^{\text{mixture}} > \eta^{\text{Att}},\\ 0, & \text{if } \mathcal{A}^{\text{mixture}} \le \eta^{\text{Att}}, \end{array} \right. \label{eq:classifier_att} \end{align} where $\eta^{\text{Att}}$ is a threshold. The target speaker is considered active when $c^{\text{Att}}=1 $ and inactive when $c^{\text{Att}}=0 $. We introduced the above classifier to measure the \gls{AS}/\gls{IS} detection capability of the system, but in practice, we do not need it as \gls{TSEIS} performs the \gls{AS}/\gls{IS} detection internally and directly outputs a speech signal or a zero signal. There is no increase in computational complexity compared to an existing \gls{TSE} system. However, it allows little control to, e.g., balance the false alarms or miss detection errors of the system at test time.
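The switching loss of Eq.~\eqref{eq:active&inactive_loss} and the attenuation-based decision of Eq.~\eqref{eq:classifier_att} can be sketched as follows, reusing \texttt{thresholded\_snr\_loss} from the previous sketch and assuming a single utterance, i.e., 1-D tensors; the threshold value passed to the classifier is illustrative only, whereas in the experiments the threshold is chosen at the operating point of interest, e.g., the \gls{EER}.
\begin{verbatim}
def inactive_loss(est, mix, tau_inactive=1e-2, eps=1e-8):
    # Inactive-speaker loss: push the output energy towards zero, with a
    # soft threshold expressed relative to the mixture energy.
    mix_energy = torch.sum(mix ** 2, dim=-1)
    out_energy = torch.sum(est ** 2, dim=-1)
    return (10 * torch.log10(out_energy + tau_inactive * mix_energy + eps)).mean()

def tse_is_loss(est, ref, mix):
    # Switch between the active and inactive losses for one training utterance.
    if torch.sum(ref ** 2) > 0:          # active target speaker
        return thresholded_snr_loss(est, ref)
    return inactive_loss(est, mix)       # inactive target speaker

def attenuation_classifier(est, mix, eta_att=-20.0, eps=1e-8):
    # AS/IS decision from the attenuation w.r.t. the mixture (in dB);
    # eta_att is an illustrative threshold, not a tuned value.
    att = 10 * torch.log10((torch.sum(est ** 2) + eps) / (torch.sum(mix ** 2) + eps))
    return int(att > eta_att)            # 1 = active speaker, 0 = inactive speaker
\end{verbatim}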
\subsection{TSE-V: Post \gls{AS}/\gls{IS} detection with speaker verification} \label{ssec:SpeakerBeamV} Another approach to handle \gls{IS} cases consists of using a \gls{TSE} system trained on \gls{AS} cases, which always tries to output a speech-like signal, and then performing post verification to check that the speech characteristics of the extracted speech, $\hat{\mathbf x}^{s}$, correspond to those of the enrollment. Figure \ref{fig:sys_overview} shows a schematic diagram of such a system. In this work, we propose using the auxiliary network to compute a speaker embedding for the extracted speech, ${\mathbf e}^{\hat{\mathbf x}^{s}} = \Aux(\hat{\mathbf x}^{s})$, since we showed in prior works that it could extract discriminative speaker embeddings~\cite{zmolikova2019Journal}. We then make the \gls{AS}/\gls{IS} decision by looking at the cosine similarity between the embeddings computed from the enrollment and from the extracted speech as, \begin{align} c^{\text{Cos}} = &\left\{ \begin{array}{lc} 1, & \text{if } \mathcal{C}({\mathbf e}^{\hat{\mathbf x}^{s}}, \mathbf{e}^s) > \eta^{\text{Cos}},\\ 0, & \text{if } \mathcal{C}({\mathbf e}^{\hat{\mathbf x}^{s}}, \mathbf{e}^s) \le \eta^{\text{Cos}}, \end{array} \right. \label{eq:classifier_cos} \end{align} where $\mathcal{C}({\mathbf e}^{\hat{\mathbf x}^{s}}, \mathbf{e}^s)$ is the cosine similarity between ${\mathbf e}^{\hat{\mathbf x}^{s}}$ and $\mathbf{e}^s$, and $\eta^{\text{Cos}}$ is a threshold. We can then define an extracted signal after detection as $\bar{{\mathbf x}}^s =c^{\text{Cos}} \hat{\mathbf x}^{s}$, which simply zeros out the samples detected as \gls{IS}. Note that this approach checks whether the extracted speech matches the enrollment characteristics. It can thus possibly detect not only \gls{IS} but also extraction failures. Such failures occur when the \gls{TSE} system wrongly outputs the mixture or the interference speakers instead of the target speech. Compared to \gls{TSEIS}, \gls{TSEV} increases the computational complexity slightly as it requires an additional pass through the auxiliary network. However, since the \gls{AS}/\gls{IS} detection is performed independently of the \gls{TSE} process, it allows better control at test time and also does not require the training of the \gls{TSE} module with \gls{IS} samples. Note that in contrast to our proposed \gls{TSEV}, the system proposed in~\cite{Zhang2021} used a pre-trained speaker embedding extractor and retrained it on extracted speech. We can view our \gls{TSEV} system as a simplified version of~\cite{Zhang2021}. \section{Experiments} We performed experiments using the LibriMix dataset~\cite{cosentino2020librimix}, which consists of noisy two-speaker mixtures derived from the LibriSpeech dataset~\cite{panayotov2015librispeech}. We used the open implementation of SpeakerBeam~\cite{spkbeam_code} based on the asteroid toolkit~\cite{Pariente2020Asteroid}. \subsection{Dataset} \begin{table}[tb] \centering \caption{Description of the dataset} \vspace{-3mm} \begin{tabular}{l@{}cccc@{}} \toprule & Train-100k & Train-360k & Val & Test \\ \midrule Nb. of mixtures & 13900 & 50800 & 3000 & 3000 \\ Nb. of Speakers & 251 & 921 & 40 & 40 \\ \bottomrule \end{tabular} \label{tab:dataset} \vspace{-5mm} \end{table} We performed experiments using the full-overlap (i.e., min version) two-speaker noisy mixtures of the LibriMix dataset. Table \ref{tab:dataset} provides more details about the dataset.
For each mixture, we randomly sampled enrollment utterances from the speakers in the mixture for \gls{AS} cases and from a different speaker for the \gls{IS} cases. In both cases, the enrollment differed from the utterances used in the mixture. At test time, we considered enrollment utterances from three speakers for each test mixture, i.e., two from the \glspl{AS} in the mixture and one from another speaker, i.e., an \gls{IS} case. \subsection{Experimental settings} We used the same network architecture for all experiments, which consists of the SpeakerBeam system provided in~\cite{spkbeam_code}, except that we used the training loss of Eq.~\eqref{eq:active&inactive_loss}. We followed a configuration similar to that of \gls{ConvTasNet}~\cite{luo2019conv}. We used blocks of eight stacked 1-D convolution layers for the auxiliary and extraction networks, repeated three times for the extraction network. We used an element-wise multiplication to combine the embedding vector with the hidden representation at the output of the first convolution block. We trained the systems for 200 epochs with the Adam optimizer~\cite{kingma2015adam}. We compared the following four \gls{TSE} systems. \\ \textbf{Baseline \gls{TSE}} corresponds to the baseline system of Section \ref{sec:baseline}, which was trained with the train-100k set with only \gls{AS} samples. It performs neither internal nor post \gls{AS}/\gls{IS} detection. \\ \textbf{\gls{TSEIS}} corresponds to the system described in Section \ref{ssec:SpeakerBeam-IS}, which was trained with the train-100k training set including 10\% of \gls{IS} cases, i.e., we used an enrollment from a speaker not present in the mixture and a zero signal as target for 10\% of the training samples.\\ \textbf{\gls{TSEV}} corresponds to the system described in Section \ref{ssec:SpeakerBeamV}. The \gls{TSE} module corresponds to the above baseline \gls{TSE} system trained with only \gls{AS} samples. At test time, we re-used the auxiliary network to compute the embedding vector for the extracted speech and performed \gls{AS}/\gls{IS} detection with Eq.~\eqref{eq:classifier_cos}. \\ \textbf{\gls{TSEV}(360)} consists of the \gls{TSE} module of the above \gls{TSEV} system retrained on \gls{AS} samples of the train-360k dataset for 100 epochs. It is used to measure the impact of using a larger training set with more speakers. \subsection{Evaluation metrics} We evaluated the systems in terms of the following evaluation metrics: (1) \textbf{\gls{EER}} measures the \gls{AS}/\gls{IS} detection errors using the \gls{DET} curves shown in Fig.~\ref{fig:DET} obtained with the classifiers of Eq.~\eqref{eq:classifier_att} and Eq.~\eqref{eq:classifier_cos} for \gls{TSEIS} and \gls{TSEV}, respectively. (2) \textbf{\Gls{SDRi}} measures the extraction performance for the \gls{AS} cases using the BSS eval toolkit~\cite{vincent2006performance}. We report the \gls{SDRi} before and after \gls{AS}/\gls{IS} detection, i.e., using $\hat{\mathbf x}^{s}$ or $\bar{{\mathbf x}}^s =c \hat{\mathbf x}^{s}$, respectively, where $c$ is given by either Eq.~\eqref{eq:classifier_att} or Eq.~\eqref{eq:classifier_cos} using the threshold that gives the \gls{EER}. We do not need to compute $\bar{{\mathbf x}}^s$ for \gls{TSEIS}, but we do so anyway to provide a fair comparison with \gls{TSEV}. \gls{SDRi}-after accounts for the impact of miss detection errors on the extraction performance. Note that samples detected as \gls{IS} are replaced by a zero signal, thus resulting in an SDR of 0 dB.
(3) \textbf{Failure rate (Fail)} is defined as $Fail = \frac{\mathit{NF}^{\text{AS}}}{N^{\text{AS}}}$, where $\mathit{NF}^{\text{AS}}$ is the number of \gls{AS} samples with \gls{SDRi} below 1~dB and $N^{\text{AS}}$ is the total number of \gls{AS} samples. Failures happen when, e.g., the \gls{TSE} system extracts the wrong speaker, outputs the mixture, or outputs a zero signal (when using \gls{TSEIS}). (4) \textbf{Failure and miss detection rate (Fail\&Miss)} is defined as $Fail\&Miss = \frac{\mathit{NFM}^{\text{AS}}}{N^{\text{AS}}}$, where $\mathit{NFM}^{\text{AS}}$ is the number of \gls{AS} samples that result in extraction or detection errors, i.e., \gls{SDRi} below 1~dB, miss detection, or both. It measures the total error rate for the \gls{AS} cases. For example, even if a sample is correctly detected as \gls{AS}, its extraction performance may be low, and it should thus be considered an error. (5) \textbf{Attenuation (Att.)} measures the attenuation from the mixture, $\mathcal{A}^{\text{mixture}}$, defined in Section \ref{ssec:SpeakerBeam-IS}. It shows how well a \gls{TSE} system can output zero signals for \gls{IS} cases. Note that we use only \gls{AS} samples to compute \gls{SDRi}, Fail and Fail\&Miss, but both \gls{AS} and \gls{IS} for \gls{EER} and Attenuation. \subsection{Experimental results} \begin{table}[tb] \caption{Extraction and detection performance with enrollments of an average duration of 10~sec. The input SDR is -1.8 dB. } \vspace{-4mm} \label{tab:results} \centering \begin{tabular}{@{}l@{}cccc@{}} \toprule & \gls{SDRi} before(after) & Fail& \gls{EER}& Fail\&Miss \\ & detection [dB] $\uparrow$ & [\%] $\downarrow$& [\%] $\downarrow$ &[\%] $\downarrow$ \\ \midrule Baseline TSE & 12.4 ( na )& 3.4& -&- \\ TSE-IS& 10.8 (11.4)& 8.6& 11.6& 13.4\\ TSE-V& 12.4 (11.9)& 3.4& 8.9& 10.5\\ \midrule TSE-V(360)& 13.6 (13.1)& 1.7& 6.3& 7.1\\ \bottomrule \end{tabular} \vspace{-2mm} \end{table} \begin{figure}[tb] \centering \centerline{\includegraphics[width=0.95\linewidth]{Attenutation.pdf}} \vspace{-2mm} \caption{Attenuation for each test sample. The first 6000 samples correspond to \gls{AS} and the last 3000 to \gls{IS} samples.} \vspace{-5mm} \label{fig:attenuation} \end{figure} Table \ref{tab:results} shows the extraction and \gls{AS}/\gls{IS} detection results for the different systems using enrollments with an average duration of 10 sec. Figure \ref{fig:attenuation} shows the attenuation with respect to the mixture, $\mathcal{A}^{\text{mixture}}$, as a function of the samples in the test set. The baseline \gls{TSE} system, which was trained only with \gls{AS} samples, achieves 12.4 dB \gls{SDRi} (the input SDR is -1.8~dB) and a failure rate of only 3.4\%. However, as seen in Fig.~\ref{fig:attenuation}, the attenuation values remain in a similar range for both \gls{AS} and \gls{IS} samples, meaning that it always outputs some signal even for \gls{IS} cases, which would cause many false alarms. The \gls{TSEIS} system, which we trained with \gls{IS} samples, can output zero signals. We observe in Fig.~\ref{fig:attenuation} that the attenuation is around -100 dB for most \gls{IS} cases while it remains close to 0~dB for most \gls{AS} cases. This confirms that \gls{TSEIS} can internally perform \gls{AS}/\gls{IS} detection. However, we also observe that learning with \gls{IS} has an impact on extraction performance for \gls{AS} cases. Indeed, around 10\% of the \gls{AS} test samples have attenuation around -100 dB (i.e., miss detections).
Consequently, the failure rate is high, i.e., close to 9\%, and the average \gls{SDRi} is lower than that of the baseline. The impact on \gls{SDRi} may be exaggerated as it includes miss detection errors, i.e., samples where the system wrongly outputs a signal close to zero. The \gls{SDRi} after detection is slightly better but remains lower than the baseline. We can also evaluate the \gls{AS}/\gls{IS} detection capability of the \gls{TSEIS} system by looking at the detection performance of a classifier based on the attenuation, as introduced in Eq.~\eqref{eq:classifier_att}. Figure \ref{fig:DET} shows the \gls{DET} curve and \gls{EER} of such a classifier. The \gls{TSE} module in \gls{TSEV} corresponds to the above baseline system, and thus the performance before detection is the same for the \gls{AS} cases. The proposed verification based on the cosine distance of the embeddings computed with the auxiliary network is simple yet effective. Indeed, it can detect \gls{AS}/\gls{IS} relatively well, with an \gls{EER} of less than 9\%. The \gls{SDRi} after detection is 0.5 dB lower because it includes miss detection errors. The total error rate on \gls{AS} cases, i.e., Fail\&Miss rate, is 10.5\%, which is better than the \gls{TSEIS} system by about 3\%. Overall, \gls{TSEV} achieves higher extraction and detection performance than \gls{TSEIS}. With the \gls{TSEV}(360) system, we also explore the impact of training with a larger training set that includes more speakers. Retraining on the larger training set improves \gls{SDRi} by about 1.2 dB, but, more importantly, it greatly reduces the failure rate, the \gls{EER}, and the combined Fail\&Miss rate. \begin{figure}[tb] \centering \centerline{\includegraphics[width=0.95\linewidth]{DET_curve_log.pdf}} \vspace{-4mm} \caption{\gls{DET} curves for \gls{AS}/\gls{IS} detection with \gls{TSEV} and \gls{TSEIS}. The black circles indicate the \gls{EER}. } \vspace{-5mm} \label{fig:DET} \end{figure} Figure \ref{fig:DET} plots the \gls{DET} curves for the \gls{AS}/\gls{IS} detection with \gls{TSEV} and \gls{TSEIS}. We observe that the miss rate rapidly increases for the \gls{TSEIS}, while the curve for \gls{TSEV} is much smoother. Consequently, it is more challenging to tune the false alarm or miss rate of \gls{TSEIS} at test time than that of \gls{TSEV}. \begin{figure}[tb] \centering \centerline{\includegraphics[width=0.95\linewidth]{enroll_length_EER_SDRi_sub.pdf}} \vspace{-4mm} \caption{Extraction and \gls{AS}/\gls{IS} detection performance as a function of the enrollment duration.} \vspace{-5mm} \label{fig:enroll_duration} \end{figure} Finally, Figure \ref{fig:enroll_duration} plots the \gls{SDRi} before detection and \gls{EER} as a function of the average enrollment duration. Here we varied the enrollment duration by concatenating from 1 to 5 enrollment utterances for each test sample, which resulted in the average utterance length varying from 5 to 25 seconds. Increasing the enrollment duration improves extraction performance moderately but greatly reduces \gls{EER} for \gls{TSEV} and \gls{TSEV}(360). For example, we can approach an \gls{EER} of 5\% with \gls{TSEV}(360) when using an enrollment utterance of 15 to 20 sec. We do not observe a similar trend for \gls{TSEIS}. The results of our experiments demonstrate that, with slight modifications, a \gls{TSE} system can handle \gls{IS} cases relatively well. The \gls{TSEV} approach provides better overall extraction and \gls{AS}/\gls{IS} detection performance than \gls{TSEIS}.
It also allows more control to tune the miss detection and false alarm rates at test time. However, \gls{TSEV} requires an additional verification step. \gls{TSEIS} can learn to internally detect \gls{IS} cases and directly output zero signals without increasing the computational complexity. Although \gls{TSEIS} performs worse than \gls{TSEV}, it could still be advantageous for, e.g., low-latency systems where the batch verification step would not be allowed. \section{Conclusion} A \gls{TSE} system must perform speech extraction and speaker identification. Most studies have focused on evaluating \gls{TSE} systems in terms of extraction performance and have mostly ignored the impact of false alarms when the target speaker is inactive. In this paper, we carried out a systematic comparison of two possible schemes to handle \gls{IS}. Our experiments revealed that we could exploit the auxiliary network of a \gls{TSE} system to perform speaker verification at the output and detect \gls{AS}/\gls{IS} cases. We can detect \gls{AS}/\gls{IS} cases with an \gls{EER} of around 5\%, using \gls{TSEV} trained with a relatively large number of speakers and enrollment utterances of more than 15 sec. This positive finding confirms the potential of \gls{TSE} systems. Our \gls{TSEV} system outperforms a \gls{TSEIS} system that can internally detect \gls{IS} and output zero signals. However, the \gls{TSEIS} system may remain attractive for, e.g., low-latency systems, which we plan to explore in our future works. \begin{comment} \begin{table}[th] \caption{Extraction and \gls{AS}/\gls{IS} detection performance for \gls{TSEV} and \gls{TSEIS} systems. } \label{tab:example} \centering \begin{tabular}{@{}l@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c@{}} \toprule name& SDRi (act. SDRi)& EER-Att & EER-Cos& Fail& Fail+ME \\ & [dB] $\uparrow$ & [\%] $\downarrow$&[\%] $\downarrow$ &[\%] $\downarrow$ &[\%] $\downarrow$ \\ \midrule SpkBeam& 12.1 (12.1)& 41.5& 10.8& 4.2& 12.6\\ SpkBeam + IS& 10.0 (12.2)& 12.1& 11.1& 11.3& 14.0\\ SpkBeam-360& 12.7 (12.7)& 44.3& 8.9& 2.9& 10.0\\ \bottomrule \end{tabular} \end{table} \end{comment} \begin{comment} \begin{table}[tb] \caption{Performance for mixtures of same and different sex. NEED TO ADD:: FA looking at when Enrl. + mixuture all same gender, enroll + 1 same gender and all diff?? } \label{tab:example} \centering \begin{tabular}{lcccccc } \toprule & \multicolumn{3}{c}{Diff}& \multicolumn{3}{c}{Same} \\ & SDRi & MI& FA & SDRi & MI& FA \\ & [dB] $\uparrow$ & [\%] $\downarrow$ &[\%] $\downarrow$ & [dB] $\uparrow$ & [\%] $\downarrow$&[\%] $\downarrow$ \\ \midrule SpkBeam &12.90 &11.0 &12.5 &11.36 &10.7 &9.0\\ SpkBeam + IS &10.62 &11.1 &13.0 &9.30 &11.1 &9.2\\ SpkBeam-360 &13.21 &9.9 &10.8 &11.60 &9.3 &8.3\\ SpkBeam-360 &13.34 &9.2 &9.7 &11.96 &8.5 &8.0\\ \bottomrule \end{tabular} \end{table} \subsection{Enrollment duration} \begin{figure}[tb] \begin{minipage}[b]{0.98\linewidth} \centering \centerline{\includegraphics[width=0.8\linewidth]{SDRi_enroll_length}} \centerline{a) SDRi}\medskip \end{minipage} \hfill \begin{minipage}[b]{0.98\linewidth} \centering \centerline{\includegraphics[width=0.8\linewidth]{EER (Att)_enroll_length}} \centerline{b) EER (Att)} \end{minipage} \medskip \hfill \begin{minipage}[b]{0.98\linewidth} \centering \centerline{\includegraphics[width=0.8\linewidth]{EER (Cos)_enroll_length.png}} \centerline{b) EER (Cos)}\medskip \end{minipage} \caption{Example for real mixtures.} \label{fig:res_real} \end{figure} \end{comment} \bibliographystyle{IEEEtran}
2023-04-23T08:18:02.974Z
2022-04-12T02:28:52.000Z
redpajama/arxiv
arxiv_0000
1,090
5,008
9b8d1f4beb3328df526daa2846225b5332484d10
\section{Introduction} Machine learning (ML) has progressed rapidly during the past decade, and ML models have been adopted in a wide range of applications, such as image recognition~\cite{HZRS16,KSH12}, speech recognition~\cite{GMH13,HDYDMJSVNSK12}, machine translation~\cite{BCB15}, etc. Most of the current ML applications are based on discriminative models, represented by classifiers. Generative models, on the other hand, have attracted an increasing amount of attention recently. The most representative type of generative model is the generative adversarial network (GAN)~\cite{GPMXWOCB14}. Due to the ability to produce novel samples from high-dimensional data distributions, GANs are finding appealing application scenarios, such as image-to-image translation~\cite{ZPIE17,LBK17,WLZTKC18}, image inpainting~\cite{ISI17}, text generation~\cite{AGLMZ17,VTBE15}, and sound generation~\cite{MKGKJSCB17,ODZSVGKSK16}. ML models have been shown to exhibit severe security and privacy vulnerabilities. Existing attacks, including adversarial examples~\cite{SZSBEGF14,EEFLRXPKS18}, membership inference~\cite{SSSS17,SZHBFB19}, and model stealing~\cite{TZJRR16,PMGJCS17,OSF19}, mainly focus on discriminative models. Recent research has also demonstrated the security and privacy risks of generative models. In particular, Hayes et al.~\cite{HMDC19} show that an adversary can effectively determine whether a data sample is used to train a target GAN. Chen et al.~\cite{CYZF20} further generalize this attack by proposing a taxonomy of membership inference scenarios against GANs. While most of the attacks against generative models focus on membership inference, other vulnerabilities are left largely unexplored. \subsection{Our Contributions} \mypara{Motivation} In this paper, we perform the first \emph{property inference attack} against GANs: an adversary aims to infer whether a target GAN's underlying training dataset exhibits a certain general property. Here, the general property is related to the macro-level information of the target GAN's training dataset. More importantly, the property is not related to the design purpose of the target GAN. For instance, if a GAN is trained to generate human faces, the property can be the proportion of white people in its training dataset. A successful property inference attack against a target GAN can lead to severe consequences. For instance, learning a property of a GAN's training dataset gives the adversary extra information about the data, which directly violates the intellectual property (IP) of the model owner, as the dataset is often expensive to collect. Also, an effective property inference attack can be used as a fairness auditing tool to make sure a GAN is not trained on biased data~\cite{BG18}. Moreover, this attack can be further leveraged as a stepping stone to perform more advanced attacks, such as membership inference~\cite{SSSS17}. \begin{figure*}[!t] \centering \begin{subfigure}{1.9\columnwidth} \includegraphics[width=\columnwidth]{pic/same_intuition.png} \caption{All white females} \label{figure:intuition_a} \end{subfigure} \begin{subfigure}{1.9\columnwidth} \includegraphics[width=\columnwidth]{pic/multi_intuition.png} \caption{Multiple groups} \label{figure:intuition_b} \end{subfigure} \caption{Samples generated by a WGAN trained on (a) 256 images of white females drawn from the CelebA dataset and (b) 256 images with a uniformly distributed demographic background (gender and pale skin) from the CelebA dataset.
We can see that almost all images in (a) are white females and images in (b) are rather diverse.} \label{figure:intuition} \end{figure*} \mypara{Attack Methodology} Our attack follows the intuition that the generated samples of a GAN can reflect its underlying training dataset's property. For instance, in \autoref{figure:intuition_a}, we can see a WGAN~\cite{ACB17} trained with faces of only white females mainly generates images of white females, while in \autoref{figure:intuition_b}, a WGAN trained with images from people with a diverse demographic background can produce a diverse set of images. We propose two attack scenarios, i.e., full black-box setting and partial black-box setting. For the former, we assume the adversary can just get samples blindly from the target GAN's generator. For the latter, the adversary can decide the input of the target GAN's generator, i.e., the latent code. Note that for both attack scenarios, the adversary does not have access to the target GAN's parameters, which means we focus on the most difficult setting for the adversary~\cite{SSSS17}. Both of our property inference attacks follow a general pipeline. The adversary first queries the target GAN model to obtain a set of generated samples. Then she relies on a property classifier to label these samples with respect to the target property. For instance, if the target property is the gender distribution of the GAN's training dataset, the property classifier is a gender classifier for the generated samples. In the end, she infers the target property by summarizing the results of the property classifier. For the partial black-box setting, since the adversary can choose the input, i.e., the latent code, for the target GAN, we propose a novel optimization framework which allows us to generate a set of latent codes, namely optimized latent code set, to reduce the number of queries to the target model. \mypara{Evaluation} Extensive experiments over four GANs on five different property inference tasks show that our attacks achieve very effective performance. In the gender prediction task on CelebA~\cite{LLWT15}, the average absolute difference between the inferred proportion and the ground truth proportion of males in the target training dataset is 2.4\% in the full black-box setting and 9.8\% in the partial black-box setting. In the age prediction task (proportion of youth) on AFAD~\cite{NZWGH16}, the average absolute difference is 9.7\% and 10.1\% in the full and partial black-box settings respectively. Meanwhile, in the income prediction task on the US Census dataset~\cite{UCIINCOME}, the average absolute difference is 2.9\% and 4.5\% in the full and partial black-box settings respectively. We further compare the two methodologies in detail and conclude that the partial black-box attack behaves better when using a limited number of generated samples (around 100 to 125), while the full black-box attack results in a high accuracy using a large number of random samples. We also observe that our full black-box attack works well even without any knowledge of the target training dataset. Also, our partial black-box attack works robustly with respect to different optimization starting points as well as the number, the structure, and the dataset of the shadow models. \mypara{Enhancing Membership Inference} We further enhance the state-of-the-art membership inference attack against GANs~\cite{CYZF20} leveraging our proposed property inference attacks. 
In detail, we calibrate a sample's membership prediction result based on its attributes and the property of the target GAN's training dataset obtained by our attack. Experimental result shows that our enhanced methodology increases the membership inference's AUC from 0.52 to 0.61 on the CelebA dataset. This further demonstrates the applicability of our proposed attacks. In summary, we make the following contributions in this paper: \begin{itemize} \item We propose the first property inference attack against generative models. \item We specify two attack scenarios and propose corresponding technical solutions relying on advanced ML techniques. \item We perform extensive experiments and demonstrate the efficacy of our proposed attacks. \item We show that our property inference attacks can serve as a building block to enhance other advanced attacks, in particular, membership inference against GANs. \end{itemize} \subsection{Organization} The rest of the paper is organized as follows. We introduce generative models, property inference attacks, and threat models used in this paper in \autoref{section:preliminaries}. Then, \autoref{section:attack} presents our attack workflow and detailed methodologies. The experiment setup and evaluation results are shown in \autoref{section:evaluation}. We further show how to leverage our attack to enhance membership inference in \autoref{section:MIA}. \autoref{section:related} discusses the related work and \autoref{section:conclusion} concludes this paper. \section{Preliminaries} \label{section:preliminaries} In this section, we first introduce generative models. Then, we present property inference attacks. The threat models considered in this paper are discussed in the end. \subsection{Generative Models} Machine learning models can be categorized into generative models and discriminative models. Discriminative models are mainly designed to solve classification problems, such as image recognition and text sentiment prediction. On the other hand, generative models aim to learn the underlying training data distribution and generate new data based on it. There exist various types of generative models, including Variational AutoEncoders (VAEs) and Generative Adversarial Networks (GANs). In this paper, we focus on GANs as they are the most popular generative models nowadays. A GAN is assembled with two neural networks, i.e., the generator and the discriminator. The generator takes random noise (latent code) as input and generates samples, while the discriminator performs adversarial training to distinguish the real and fake (generated) samples. In the training stage, these two networks are updated in turns: the generator learns to generate samples as realistic as possible while the discriminator learns to better separate real and fake samples. Mathematically, the loss function of a GAN is defined as the following. \[ \mathbb{E}_{x \sim \mathcal{D}_{\it{train}}}[\log \mathsf{D}(x)] + \mathbb{E}_{z \sim \mathcal{Z}}[\log(1-\mathsf{D}(\mathsf{G}(z)))] \] Here, $\mathsf{G}$ and $\mathsf{D}$ represent the generator and the discriminator, respectively. $ z \sim \mathcal{Z} $ denotes the latent code following a prior, normally multivariate Gaussian or uniform distribution. The training dataset of the GAN is represented by $\mathcal{D}_{\it{train}}$. As $\mathsf{G}$ is trained to minimize the loss and $\mathsf{D}$ aims to maximize the loss, the optimization for GAN follows a two-player minimax game. 
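To make the minimax game concrete, the following minimal sketch shows one alternating update step in PyTorch-style code; the fully connected architectures, latent dimension, and learning rate are illustrative assumptions rather than the configurations evaluated later in this paper, and the generator update uses the common non-saturating variant of its loss.
\begin{verbatim}
import torch
from torch import nn

# Hypothetical networks: any generator/discriminator pair works here.
G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(),
                  nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                  nn.Linear(256, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(x_real):
    b = x_real.size(0)
    z = torch.randn(b, 100)          # latent codes z ~ Z (Gaussian prior)
    x_fake = G(z)
    # Discriminator step: ascend log D(x) + log(1 - D(G(z)))
    opt_d.zero_grad()
    d_loss = (bce(D(x_real), torch.ones(b, 1))
              + bce(D(x_fake.detach()), torch.zeros(b, 1)))
    d_loss.backward()
    opt_d.step()
    # Generator step: non-saturating form, i.e., maximize log D(G(z))
    opt_g.zero_grad()
    g_loss = bce(D(x_fake), torch.ones(b, 1))
    g_loss.backward()
    opt_g.step()
\end{verbatim}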
After being first introduced by Goodfellow et al.~\cite{GPMXWOCB14}, GANs have attracted a lot of attention. Over the years, many works have been proposed to enhance the original GANs, such as WGAN~\cite{ACB17}, DCGAN~\cite{RMC15}, WGANGP~\cite{GAADC17}, TGAN~\cite{XV18}, PGGAN~\cite{KALL18}, and BigGAN~\cite{BDS19}. In this paper, we focus on WGANGP, PGGAN, DCGAN, and TGAN as they have achieved strong performance in various settings empirically. Note that our method is general and can be applied to other types of GANs as well. \subsection{Property Inference Attacks} Property inference attacks aim to extract some general properties of a target ML model's training dataset, which the target model owner does not intend to share. Also, these general properties are not related to the target GAN's original design purposes. For instance, if the target model is used to generate realistic human profile photos, the property can be the gender distribution of the samples in the training dataset. A successful property inference attack allows an adversary to gain insights into the training data of the target model, which may violate the intellectual property of the model owner as high-quality data is normally expensive to collect. Also, property inference can serve as an important building block for other more advanced attacks, such as membership inference attacks~\cite{CYZF20} (see \autoref{section:MIA}). Moreover, property inference attacks can serve as a fairness auditor for the target model's training dataset, e.g., whether the samples used to train a model are equally distributed among different genders~\cite{BG18}. Recently, Ganju et al.~\cite{GWYGB18} propose the first property inference attack against discriminative models, in particular fully connected neural networks. In this setting, the adversary is assumed to have white-box access to the target model and uses a meta classifier to predict the property of the corresponding training dataset. The features of the meta classifier are summarized over the parameters of the model. The authors propose two approaches for feature summarization, including neuron sorting and DeepSets. To train the meta classifier, the adversary needs to establish multiple shadow models. Different from~\cite{GWYGB18}, our attack is set up in more practical settings, where the victim GAN is completely black-box or only part of its parameters are accessible. So far, property inference attacks have concentrated on discriminative models in the white-box setting. In this paper, we propose the first set of property inference attacks against generative models, represented by GANs. More importantly, we focus on the black-box setting, which is the most difficult and realistic scenario for the adversary~\cite{SSSS17,SBBFZ20}. \subsection{Threat Models} \label{section:threat_model} Similar to the setting of discriminative models, the goal of the adversary here is to infer whether the target model's training dataset $\mathcal{D}_{\it{target}}$ has a certain property $\mathcal{P}$. We first assume the adversary has an auxiliary dataset $\mathcal{D}_{\it{auxiliary}}$ that comes from the same distribution as $\mathcal{D}_{\it{target}}$. The adversary leverages this auxiliary dataset to build local shadow GANs and classifiers for her attacks, i.e., $\mathcal{D}_{\it{auxiliary}}=\mathcal{D}_{\it{shadow}}\cup \mathcal{D}_{\it{classifier}}$ (see~\autoref{section:attack} for more details).
This is also the common assumption used in most of the works in machine learning privacy~\cite{SSSS17,GWYGB18,SZHBFB19,SBBFZ20}. We also assume the adversary only has access to the generator of the target model as the discriminator is normally disregarded after the training phase. For simplicity, we use $\mathsf{G}_{\it{target}}$ to represent the target model in the rest of the paper. We consider two scenarios for the adversary including \emph{full black-box setting} and \emph{partial black-box setting}. \mypara{Full Black-box Setting} This is the least knowledgeable setting for the adversary, where she can just get the generated samples blindly from the target black-box generator $\mathsf{G}_{\it{target}}$. This attack scenario provides a good simulation of the online closed-source API~\cite{CYZF20}. \mypara{Partial Black-box Setting} In this scenario, the adversary also has no knowledge about the parameters of the target GAN but can construct the latent code $z$ to generate the corresponding sample from $\mathsf{G}_{\it{target}}$. In this way, she can feed her chosen latent codes to obtain specific generated samples. Formally, we use the latent code set $\{z_{i}\}_{i=1}^{\vert\mathbf{X}\vert}$ to represent her chosen latent codes in our paper. Moreover, we assume the adversary knows the architecture of the target GAN, as well as the training algorithm. Such information can be directly inferred by performing hyperparameter stealing attacks~\cite{OASF18,WG18}. \section{Attack Methodology} \label{section:attack} In this section, we first introduce our general attack pipeline. Then, we present the details of our attacks under two different scenarios. \subsection{Attack Workflow} Our attacks are designed based on the intuition that the generated samples of a target GAN can reflect the underlying training dataset's property. For instance, if a GAN is mainly trained with images of white males, we expect the generated images are more likely to be white males compared to other demographic backgrounds. \autoref{figure:intuition_a} shows a WGAN trained only on images of white female mainly generates samples that are recognized as white females. Meanwhile, \autoref{figure:intuition_b} shows a WGAN trained with samples from a diverse demographic background generates a diverse set of samples. Therefore, an adversary can estimate a target GAN's underlying property $\mathcal{P}_{\it{infer}}$ by inspecting the corresponding property of its generated samples ($\mathcal{P}_{\it{real}}$). \begin{figure}[!t] \centering \includegraphics[width=0.9\columnwidth]{pic/strategy.pdf} \caption{Workflow of the general property inference attack strategy. With the help of a property classifier $f_{\property}$, the adversary obtains $\mathcal{P}_{\it{infer}}$ to inspect $\mathcal{P}_{\it{real}}$, the target underlying property in the training dataset of $\mathsf{G}_{\it{target}}$.} \label{figure:strategy} \end{figure} \autoref{figure:strategy} depicts the general attack workflow, which can be roughly categorized into three steps. In the first step, the adversary queries the target generator $\mathsf{G}_{\it{target}}$ to produce synthetic samples $\mathbf{X}=\{\mathsf{G}(z_1),\mathsf{G}(z_2),\cdots,\mathsf{G}(z_{\vert\mathbf{X}\vert})\}$. Here, $\mathsf{G}(z_i)$ represents the $i$th generated sample from the target GAN with respect to the corresponding latent code $z_i$. 
The concrete methods for generating samples, i.e., choosing latent codes, for both full and partial black-box settings, are presented later in \autoref{section:full_black-box} and \autoref{section:partial_black-box}, respectively. Next, the adversary constructs a \emph{property classifier} $f_{\property}$ tailored for classifying the previously generated samples with respect to the target property she is interested in. For instance, if the target property is the gender distribution of the samples (profile images) in the underlying training dataset, the property classifier then predicts the gender of each sample. The property classifier here is trained with part of the auxiliary dataset, i.e., $\mathcal{D}_{\it{classifier}}$ (as described in \autoref{section:preliminaries}) that is disjoint from the underlying training dataset of target GAN. In the end, the adversary predicts $\mathcal{P}_{\it{infer}}$ based on the output of the property classifier. Concretely, she computes a function $\phi$ over the prediction of the property classifier, defined as: \[ \phi\left(\{f_{\property}(\mathsf{G}_{\it{target}}(z_i))\}_{i=1}^{\vert\mathbf{X}\vert}\right) \] In this paper, our attack focuses on inferring a general property as the distribution of a certain attribute, such as the gender distribution of the samples in the target underlying training dataset. Therefore, $\phi$ is realized as a function to summarize the distribution of the target attribute. However, we emphasize that our methodology is general and can be applied to infer other types of property. \begin{figure*}[!t] \centering \includegraphics[width=1.55\columnwidth]{pic/opt_noise.pdf} \caption{Methodology of optimizing input latent codes $(\noise\leftarrow\noise^{*})$ with shadow models.} \label{figure:opt_noise} \end{figure*} \subsection{Full Black-box Adversary} \label{section:full_black-box} For a full black-box adversary, she can only obtain generated samples blindly from the target GAN. These acquired samples $\mathbf{X}$, generated from a random latent code set, are consumed by $f_{\property}$, and then $\phi$ to get the attack result $\mathcal{P}_{\it{infer}}$, just as presented in the basic attack strategy. More formally, the property inference attack through full black-box GANs can be described as the following, where the latent codes are just drawn randomly from a prior. \[ \phi\left(\{f_{\property}\left(\mathsf{G}_{\it{target}}\left(z_{i}^{\circ} \right)\right)\}_{i=1}^{\vert\mathbf{X}\vert}\right) \] Here, each latent code is denoted by $z_{i}^{\circ}$, as a member of the latent code set. \subsection{Partial Black-box Adversary} \label{section:partial_black-box} Different from the full black-box adversary, the partial black-box adversary can choose the latent code to feed into the target GAN. Thus, she can construct/craft a specific latent code set to allow the target GAN to generate corresponding samples that can help her to achieve an effective property inference. Crafting a latent code set is a training process. To this end, the adversary needs to establish a set of shadow models to simulate GANs trained with datasets of different properties. The process to construct the latent code set with the help of shadow models can be divided into three stages. \autoref{figure:opt_noise} provides a schematic overview of it. In the first stage, the adversary generates shadow training datasets for training shadow GANs. 
More formally, she samples $M$ shadow training datasets $\{\mathcal{D}_1,\mathcal{D}_2,\cdots,\mathcal{D}_M\}$ from $\mathcal{D}_{\it{shadow}}$ (obtained from the local auxiliary dataset presented in \autoref{section:preliminaries}) corresponding to $M$ shadow models. Each shadow dataset $\mathcal{D}_{k}$ is sampled to fulfill a certain property denoted by $\mathcal{P}_{k}$, and all the shadow training datasets' properties are uniformly distributed. In the next stage, the adversary trains each local shadow GAN $\mathsf{G}_{k}$ with the corresponding shadow training dataset $\mathcal{D}_{k}$. Note that each shadow GAN $\mathsf{G}_{k}$ has the same architecture as the target GAN $\mathsf{G}_{\it{target}}$ (see \autoref{section:preliminaries}). Finally, the adversary crafts an optimized latent code set, denoted by $\{\noises^{*}_i\}_{i=1}^{\vert\mathbf{X}\vert}$, over the $M$ shadow GANs. Mathematically, the optimization is defined as the following. \[ \argmin_{\{\noises^{*}_i\}_{i=1}^{\vert\mathbf{X}\vert} } \sum_{k=1}^{M} \mathcal{L} \left(\phi\left(\{f_{\property}(\mathsf{G}_k(\noises^{*}_i))\}_{i=1}^{\vert\mathbf{X}\vert}\right), \mathcal{P}_k\right) \] Here, $\mathcal{L}$ represents the adopted loss function, and we utilize the stochastic gradient descent (SGD) method to minimize it. At the beginning of the optimization, we need to set a random starting point for the latent code set. In \autoref{section:evaluation_part_black-box}, we present an experiment on how the optimization starting point affects the partial black-box attack. With the optimized latent code set, the adversary can infer the target property $\mathcal{P}_{\it{infer}}$ similar to the full black-box adversary: \[ \phi\left(\{f_{\property}\left(\mathsf{G}_{\it{target}}\left(z_i^{*}\right)\right)\}_{i=1}^{\vert\mathbf{X}\vert}\right) \] Here, each latent code is denoted by ${z}_i^{*}$, as a member of the optimized latent code set. \section{Evaluation} \label{section:evaluation} In this section, we first describe the datasets used in our experiments, followed by descriptions of the evaluated GAN models and the detailed experimental setup. We then present the results of our proposed attacks. \subsection{Dataset} GANs have been demonstrated to be successfully used in the image domain~\cite{ISI17,LBK17,WLZTKC18}, and current attacks against GANs are also demonstrated with computer vision targets~\cite{CYZF20}. Therefore, we mainly focus on GANs generating image outputs in this paper. We also experiment on tabular data to demonstrate that our attacks are general and can be applied to other domains. \mypara{MNIST} The MNIST database of handwritten digits~\cite{MNIST} is a commonly adopted benchmark repository for computer vision and machine learning projects. It includes 70,000 handwritten digits labeled with corresponding digit numbers. In this paper, we focus on inferring \emph{the proportion of 0s and 1s} used to train target GANs. Concretely, we construct a subset MNIST$_{\rm{01}}$ with over 6.9K 0s and 7.8K 1s and evaluate our inference attack on the proportion of 0s. We also show an extended experiment in \autoref{section:Multi} to evaluate our attack when the property has multiple classes (digits 0\textasciitilde9), based on the whole MNIST dataset. \mypara{CelebA} CelebA~\cite{LLWT15} is a benchmark dataset for face-related problems. This large-scale face attributes dataset contains more than 200K celebrity images, and each of them has 40 binary attributes.
In this paper, we focus on the gender attribute, which not only is easy for a property classifier to discriminate but also has a relatively balanced proportion on females and males (around 4:6). As a result, we intend to infer the \emph{gender distribution} of the samples used to train a target GAN. \begin{table*}[!t] \centering \caption{The settings for each experiment, describing the dataset, the property classifier task, and the target property.} \label{table:experiment_setting} \scriptsize \begin{tabular}{l | c | c | c | c | c | c | c } \toprule Task & Dataset & Property Classifier & Target Property & GAN structure & Size of $\mathcal{D}_{\it{target}}$ & Size of $\mathcal{D}_{k}$ & Size of $\mathcal{D}_{\it{classifier}}$ \\ \midrule $T_1$ & CelebA & Gender Classification & Proportion of Males & WGANGP & 40000 & 40000 & 82K \\ $T_2$ & AFAD$_{\rm{gender}}$ & Gender Classification & Proportion of Males & PGGAN & 10800 & 10800 & 92K \\ $T_3$ & AFAD$_{\rm{age}}$ & Age Classification & Proportion of Youth & PGGAN & 10800 & 10800 & 92K \\ $T_4$ & MNIST$_{\rm{01}}$ & Digit classification & Proportion of 0s & DCGAN & 3000 & 3000 & 8.8K \\ $T_5$ & Census Income & Income classification & Proportion of high-income & TGAN & 4200 & 4200 & 290k \\ \bottomrule \end{tabular} \end{table*} \mypara{AFAD} The Asian Face Age Dataset (AFAD)~\cite{NZWGH16} is a dataset proposed mainly for age estimation tasks. This dataset contains over 160K Asian faces, with the corresponding age and gender attributes. In this paper, we take advantage of both attributes and focus on inferring the \emph{gender distribution} and the \emph{age distribution} of the images used to train target GANs. Concretely, we construct two datasets from AFAD, i.e., AFAD$_{\rm{gender}}$ and AFAD$_{\rm{age}}$. AFAD$_{\rm{gender}}$ contains the same number of images (160K) as the normal AFAD. AFAD$_{\rm{age}}$, on the other hand, contains over 72K samples where $18 \le \rm{age} \le 20$ and $30 \le \rm{age} \le 39$ are chosen from AFAD. In this way, the \emph{age distribution} is described as the proportion of youth ($18 \le \rm{age} \le 20$) in the underlying training dataset. \mypara{US Census Income} The US Census Income Dataset~\cite{UCIINCOME} is used to learn to predict whether a person earns over \$50K a year. It includes 299,285 instances and each of them has 41 demographic and employment related attributes, such as age, gender, education, and citizenship. In this paper, we intend to infer the \emph{high-income distribution} (the proportion of records whose income is over \$50K) of the samples used to train a target GAN. \subsection{Models} \label{section:target_models} We first introduce the four GAN models that we focus on in this paper, i.e., DCGAN~\cite{RMC15}, WGANGP~\cite{GAADC17}, PGGAN~\cite{KALL18}, and TGAN~\cite{XV18}, as they are typical and representative models used in multiple applications like image generation, image-to-image translation, and super-resolution. Then, we describe the property classifier $f_{\property}$ which is used in both attack scenarios. \mypara{DCGAN} DCGAN bridges the gap between convolutional networks (CNNs) and unsupervised learning. Thanks to the combination of CNNs, DCGANs are stable to train in most settings and are proved to learn good representations of images for supervised learning and generative modeling. In this paper, the dimension of the latent code is chosen as 100, and the output size is set as 32$\times$32$\times$1, while our DCGANs are trained on MNIST. 
The structure we use in this paper is shown in \autoref{table:DCGAN_structure}, and the detailed hyper-parameters are listed below. The number of critic iterations per generator iteration $n_{critic}=1$, the batch size $m=100$, and parameters of the Adam optimizer $\alpha=0.0002$, $\beta_1=0.5$, $\beta_2=0.999$. \mypara{WGANGP} WGANGP is proposed to improve the training process of an ordinary Wasserstein GAN. With the help of an addition called gradient penalty, their proposed method enables stable training of a wide variety of GAN architectures. In this paper, we choose the dimension of each latent code as 100, and the pixel size of output images is 64$\times$64$\times$3. The structure of our WGANGP is shown in \autoref{table:WGANGP_structure}. Moreover, the hyper-parameters in the training process are configured as the following: the gradient penalty coefficient $\lambda=10$, the number of critic iterations per generator iteration $n_{critic}=3$, the batch size $m=100$, and parameters of the Adam optimizer $\alpha=0.0002$, $\beta_1=0.9$, $\beta_2=0.999$. \mypara{PGGAN} The key idea of PGGAN is to grow both generator and discriminator progressively with the resolution becoming higher. This training method has a better performance in training high-quality images. In this paper, we set the image size as 64$\times$64$\times$3, with consideration of the training cost. The remaining settings are the same as those provided in~\cite{KALL18}, including the dimension of input latent code is 512, the usage of WGANGP loss, and leaky ReLU with leakiness 0.2. The structure of PGGAN we use is shown in \autoref{table:PGGAN_structure}, and the detailed hyper-parameters are shown as below: the gradient penalty coefficient $\lambda=10$, the number of critic iterations per generator iteration $n_{critic}=1$, the batch size $m=36$, and parameters of the Adam optimizer $\alpha=0.001$, $\beta_1=0$, $\beta_2=0.99$. \mypara{TGAN} Tabular GAN is designed to generate tabular data like medical or educational records. With the help of mode-specific normalization for numerical variables and smoothing for categorical variables, TGAN can simultaneously generate discrete and continuous variables. In this paper, we set the dimension of the latent code to 200 and the size of the synthetic output to 503 (based on the total size of the one-hot vectors). All of the settings of the TGAN are the same as those provided in~\cite{XV18}, while the output and hidden state size of LSTM are 100, the size of the hidden vector is 100, the batch size is 200, and the parameters of the Adam optimizer are set as $\alpha=0.001$, $\beta_1=0.5$, $\beta_2=0.99$. \mypara{Property Classifier} For each inference task, we construct a property classifier (binary in our case) with the corresponding generated image size. For the gender classifiers, our neural networks start with two convolutional layers and follow with two fully connected layers, both obtaining over 95\% testing accuracy. And the age classifier starts with three convolutional layers followed with three fully connected layers, getting an accuracy of around 80\%. To discriminate the digit 0 and 1, our neural network has the same structure as the age classifier, with a testing accuracy of over 98\%. Additionally, we build up a four-layer fully connected classifier to recognize the high-income census with over 86\% testing accuracy. 
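For illustration, a ``c2f2''-style property classifier of the kind described above can be sketched as follows; the concrete channel and hidden sizes are placeholder assumptions rather than the exact configuration used in our experiments, and the model is trained on $\mathcal{D}_{\it{classifier}}$ with a standard cross-entropy loss.
\begin{verbatim}
import torch
from torch import nn

class PropertyClassifier(nn.Module):
    """Illustrative c2f2 classifier for 64x64x3 generated samples."""
    def __init__(self, num_classes=2):
        super().__init__()
        # two convolutional layers ...
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # ... followed by two fully connected layers
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
\end{verbatim}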
We also show extended experiments later in \autoref{section:property_classifier} to investigate the influence of the property classifier on the attack performance, i.e., the needlessness of the target model's training dataset and the irrelevance of architecture of the property classifier. For the former, we adopt two off-the-shelf gender classifiers (based on IMDB-WIKI dataset and the Audience dataset) and a locally trained model (based on EMNIST dataset). For the latter, we train five well-behaved classifiers with different architectures. \subsection{Experiment Setup} \begin{figure*}[!t] \centering \begin{subfigure}{0.4\columnwidth} \includegraphics[width=\columnwidth]{pic/sight_on_full_BB/rand1_p1.pdf} \caption{Evaluation on $T_1$} \label{figure:fullBBsight1} \end{subfigure} \begin{subfigure}{0.4\columnwidth} \includegraphics[width=\columnwidth]{pic/sight_on_full_BB/rand1_p2.pdf} \caption{Evaluation on $T_2$} \label{figure:fullBBsight2} \end{subfigure} \begin{subfigure}{0.4\columnwidth} \includegraphics[width=\columnwidth]{pic/sight_on_full_BB/rand1_p3.pdf} \caption{Evaluation on $T_3$} \label{figure:fullBBsight3} \end{subfigure} \begin{subfigure}{0.4\columnwidth} \includegraphics[width=\columnwidth]{pic/sight_on_full_BB/rand1_p4.pdf} \caption{Evaluation on $T_4$} \label{figure:fullBBsight4} \end{subfigure} \begin{subfigure}{0.4\columnwidth} \includegraphics[width=\columnwidth]{pic/sight_on_full_BB/rand1_p7.pdf} \caption{Evaluation on $T_5$} \label{figure:fullBBsight5} \end{subfigure} \caption{Full black-box attack performance. Each point depicts a target model with corresponding underlying property (Ground Truth) and inferred property (Inferred Proportion). The average curve gives an average attack result for target models with the same underlying property. The benchmark line refers to the best attack result, where the inferred proportion is exactly equal to the ground truth. The inferred result for each target model is obtained using 20K randomly generated samples.} \label{figure:fullBBsight} \end{figure*} \begin{figure*}[!t] \centering \begin{subfigure}{0.4\columnwidth} \includegraphics[width=\columnwidth]{pic/fullBB_wrt_num_samples/rand2_p1_euc.pdf} \caption{Evaluation on $T_1$} \label{figure:fullBBnumEUC1} \end{subfigure} \begin{subfigure}{0.4\columnwidth} \includegraphics[width=\columnwidth]{pic/fullBB_wrt_num_samples/rand2_p2_euc.pdf} \caption{Evaluation on $T_2$} \label{figure:fullBBnumEUC2} \end{subfigure} \begin{subfigure}{0.4\columnwidth} \includegraphics[width=\columnwidth]{pic/fullBB_wrt_num_samples/rand2_p3_euc.pdf} \caption{Evaluation on $T_3$} \label{figure:fullBBnumEUC3} \end{subfigure} \begin{subfigure}{0.4\columnwidth} \includegraphics[width=\columnwidth]{pic/fullBB_wrt_num_samples/rand2_p4_euc.pdf} \caption{Evaluation on $T_4$} \label{figure:fullBBnumEUC4} \end{subfigure} \begin{subfigure}{0.4\columnwidth} \includegraphics[width=\columnwidth]{pic/fullBB_wrt_num_samples/rand2_p7_euc.pdf} \caption{Evaluation on $T_5$} \label{figure:fullBBnumEUC5} \end{subfigure} \caption{Full black-box performance w.r.t.\ number of random samples, evaluated with the absolute difference between the inferred property and the underlying property. 
Each line presents the average behavior of the target models with the same property, and how the behavior changes with different numbers of random samples.} \label{figure:fullBBnum} \end{figure*} We evaluate the performance of our property inference under five different settings, including the proportion of males on CelebA ($T_1$), the proportion of males on AFAD ($T_2$), the proportion of youth on AFAD ($T_3$), the proportion of 0s on MNIST ($T_4$), and the proportion of high-income records on the US Census Income dataset ($T_5$). \autoref{table:experiment_setting} lists them in detail. For each task, we first split the dataset into three disjoint parts, i.e., a shadow dataset $\mathcal{D}_{\it{shadow}}$ to draw $\mathcal{D}_{k}$, a dataset to draw $\mathcal{D}_{\it{target}}$ with the same size as $\mathcal{D}_{\it{shadow}}$, and a classifier dataset $\mathcal{D}_{\it{classifier}}$ to train $f_{\property}$. We then split $\mathcal{D}_{\it{classifier}}$ with a proportion of 7:3 to train and test the property classifier. Note that all of the samples are drawn randomly, but we control the number of images with the target attribute. For instance, when we focus on inferring the gender distribution of the training dataset, we control the number of females and males. In this way, these split datasets follow the same distribution as the original datasets except for the target attribute. Moreover, the concrete size of each disjoint dataset is also presented in \autoref{table:experiment_setting}. With the aim of inferring the distribution of the target GAN's underlying training dataset, the property of each shadow dataset $\mathcal{P}_{k}$ can simply be set within a range of proportions covering the target property. In our experiments, we assume that $\mathcal{P}_{\it{target}},\mathcal{P}_{k}\in\{30\%,40\%,50\%,60\%,70\%\}$. In this way, we draw $\mathcal{D}_{\it{target}}$ and $\mathcal{D}_{k}$ randomly, while controlling the size and property of each dataset. Note that we assume the target property $\mathcal{P}_{\it{target}}$ is between 30\% and 70\%, so we simply set the property of each shadow dataset within the same range. Moreover, we also investigate how our partial black-box attack behaves when the properties of the shadow models do not cover the target property, as shown in \autoref{figure:out_of_range}. \begin{table}[!t] \centering \caption{Average FID for GAN models in each task.} \label{table:FID} \scriptsize \begin{tabular}{l | c | c | c | c } \toprule & WGANGP $T_1$ & PGGAN $T_2$ & PGGAN $T_3$ & DCGAN $T_4$ \\ \midrule Target GANs & 33.62 & 28.79 & 29.48 & 49.07\\ Shadow GANs & 35.17 & 29.28 & 29.02 & 51.23\\ \bottomrule \end{tabular} \end{table} For each inference task, our experiment basically goes as follows. In the training stage, we have trained eight target models for each $\mathcal{P}_{\it{target}}$, i.e., 40 target models in total, so as to examine the variation in inference accuracy. In the attack stage, we build up 20 shadow models for each $\mathcal{P}_k$, using the same GAN structures and training hyper-parameters as the target models. Consequently, 100 shadow models are involved in the evaluation of the partial black-box attack. The corresponding quantitative evaluation in terms of the Fréchet Inception Distance (FID) metric~\cite{HRUNH17} is shown in~\autoref{table:FID} to present the quality of our shadow and target GANs. We can find that our GAN models are in a reasonable range compared with the experiments (FID ranging from 14.86 to 53.08) in~\cite{CYZF20}.
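As a concrete illustration of the controlled sampling described above, the following sketch draws a dataset of a given size with a fixed male proportion; it assumes the auxiliary data has already been split into disjoint male and female pools and is a simplified illustration of how $\mathcal{D}_{\it{target}}$ and $\mathcal{D}_{k}$ can be drawn, not our exact tooling.
\begin{verbatim}
import random

def sample_with_property(pool_male, pool_female, size, male_proportion):
    """Draw `size` samples whose male proportion equals `male_proportion`."""
    n_male = int(size * male_proportion)
    dataset = (random.sample(pool_male, n_male)
               + random.sample(pool_female, size - n_male))
    random.shuffle(dataset)
    return dataset

# e.g., a shadow training set of 40000 CelebA images with 30% males:
# shadow_set = sample_with_property(males, females, 40000, 0.30)
\end{verbatim}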
Note that we will further discuss the influence of shadow models in \autoref{section:Shadow} based on different GAN structures and underlying datasets. \subsection{Attack Evaluation} In this paper, our proposed property inference attack produces a continuous property value, instead of a discrete classification label like~\cite{GWYGB18}. Therefore, we evaluate the effects of the attack based on the \emph{absolute difference} between the real property and the inference property. Formally, as we focus on inferring the proportion of a certain attribute, the absolute difference of our attack can be easily calculated as $\vert\mathcal{P}_{\it{infer}}-\mathcal{P}_{\it{real}}\vert$, where $\mathcal{P}_{\it{infer}}$ and $\mathcal{P}_{\it{real}}$ range from 0\% to 100\%, represented by the percentage. As the definition of the evaluation metric, the attack result is better when the calculated absolute difference is closer to 0\%. \begin{figure*}[!t] \centering \begin{subfigure}{0.4\columnwidth} \includegraphics[width=\columnwidth]{pic/sight_on_part_BB/opt1_p1.pdf} \caption{Evaluation on $T_1$} \label{figure:partBBsight1} \end{subfigure} \begin{subfigure}{0.4\columnwidth} \includegraphics[width=\columnwidth]{pic/sight_on_part_BB/opt1_p2.pdf} \caption{Evaluation on $T_2$} \label{figure:partBBsight2} \end{subfigure} \begin{subfigure}{0.4\columnwidth} \includegraphics[width=\columnwidth]{pic/sight_on_part_BB/opt1_p3.pdf} \caption{Evaluation on $T_3$} \label{figure:partBBsight3} \end{subfigure} \begin{subfigure}{0.4\columnwidth} \includegraphics[width=\columnwidth]{pic/sight_on_part_BB/opt1_p4.pdf} \caption{Evaluation on $T_4$} \label{figure:partBBsight4} \end{subfigure} \begin{subfigure}{0.4\columnwidth} \includegraphics[width=\columnwidth]{pic/sight_on_part_BB/opt1_p7.pdf} \caption{Evaluation on $T_5$} \label{figure:partBBsight5} \end{subfigure} \caption{Partial black-box attack performance. Each point depicts a target model with corresponding underlying property (Ground Truth) and inferred property (Inferred Proportion) based on an optimized latent code set. The average curve gives an average result for target models with the same underlying property. The benchmark line refers to the best attack result.} \label{figure:partBBsight} \end{figure*} \begin{figure*}[!t] \centering \begin{subfigure}{0.4\columnwidth} \includegraphics[width=\columnwidth]{pic/partBB_wrt_num_latent/opt2_p1_euc.pdf} \caption{Evaluation on $T_1$} \label{figure:partBBnumLEUC1} \end{subfigure} \begin{subfigure}{0.4\columnwidth} \includegraphics[width=\columnwidth]{pic/partBB_wrt_num_latent/opt2_p2_euc.pdf} \caption{Evaluation on $T_2$} \label{figure:partBBnumLEUC2} \end{subfigure} \begin{subfigure}{0.4\columnwidth} \includegraphics[width=\columnwidth]{pic/partBB_wrt_num_latent/opt2_p3_euc.pdf} \caption{Evaluation on $T_3$} \label{figure:partBBnumLEUC3} \end{subfigure} \begin{subfigure}{0.4\columnwidth} \includegraphics[width=\columnwidth]{pic/partBB_wrt_num_latent/opt2_p4_euc.pdf} \caption{Evaluation on $T_4$} \label{figure:partBBnumLEUC4} \end{subfigure} \begin{subfigure}{0.4\columnwidth} \includegraphics[width=\columnwidth]{pic/partBB_wrt_num_latent/opt2_p7_euc.pdf} \caption{Evaluation on $T_5$} \label{figure:partBBnumLEUC5} \end{subfigure} \caption{Partial black-box performance w.r.t.\ number of optimized samples. For each target model, we generate different numbers of optimized samples based on latent code sets and then obtain the inference results using these optimized samples. 
The red curve represents the average attack performance against all target models based on different numbers of optimized samples. We also provide box plots of our results for statistical analysis.} \label{figure:partBBnumL} \end{figure*} \subsection{Evaluation on Full Black-box Attack} \label{section:evaluation_full_black-box} We start by evaluating the full black-box method, which is the least knowledgeable setting for the adversary. \autoref{figure:fullBBsight} shows the experimental results of our proposed full black-box attack against all target models. The inferred property of each target model is represented as a single point, which depicts the black-box attack result based on 20K randomly generated samples against the corresponding underlying property (i.e., the ground truth). We also plot the average inference result for target models sharing the same property, as well as an ideal benchmark line for comparison, on which the inferred property is exactly equal to the underlying property. Overall, our results indicate the effectiveness of the full black-box attack, as we can clearly observe that the average inference curve corresponds closely to the benchmark line. For instance, in \autoref{figure:fullBBsight1}, focusing on target models with 30$\%$ males in the training dataset, our attack is well-behaved as the inferred proportion for each target model is very close to 30$\%$. Moreover, we can see that in tasks $T_1$ and $T_4$, our attack achieves quite a good attack performance, and the variances of the inference results on target models are smaller than for the others. Meanwhile, the result in task $T_3$ is not as good, as youth is hard to discriminate when using a local property classifier whose testing accuracy is only around 80{\%}. How $f_{\property}$ affects our attack behavior will be further explored in \autoref{section:property_classifier}. \mypara{Number of Random Samples} Next, we evaluate our full black-box attack performance in an ablation study on the influence of the number of randomly generated samples. We repeat our aforementioned full black-box evaluation with various numbers of random samples: $2^i$ ($i = 2, 3, \cdots, 16$). After obtaining inferred proportions for all the target models, we average the inference results for each underlying proportion and each number of samples for further analysis. \autoref{figure:fullBBnum} plots the average full black-box attack performance against different numbers of random samples. We can see that the absolute difference of the inference result is large when using a relatively small number of generated samples (less than 128). The inference results then become more \emph{accurate and stable} as the number of randomly generated samples increases. For instance, the green line in \autoref{figure:fullBBnumEUC2} depicts the average attack performance against target models with half male and half female training data, where the absolute difference decreases from 17.8\% to 3.7\% as the number of samples grows to 1024, indicating the inference result is getting closer to the ground truth. Moreover, focusing on $T_1$ and $T_2$, we can find that the inference results on models with 70$\%$ males in the underlying training dataset are always worse than the others. A possible explanation is that $f_{\property}$ has relatively higher accuracy on the female class. In that case, more males are misclassified as females when the proportion of males increases.
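This explanation can be made more precise with a back-of-the-envelope calculation. For a binary property, if $f_{\property}$ recognizes males with per-class accuracy $a_m$ and females with accuracy $a_f$, and every misclassified sample is counted towards the opposite class, the expected inferred proportion is approximately \[ \mathcal{P}_{\it{infer}} \approx a_m \cdot \mathcal{P}_{\it{real}} + (1-a_f)\cdot(1-\mathcal{P}_{\it{real}}). \] With hypothetical accuracies $a_m=0.93$ and $a_f=0.97$ (chosen only for illustration), a training dataset with 30\% males yields $\mathcal{P}_{\it{infer}}\approx 30.0\%$, whereas one with 70\% males yields $\mathcal{P}_{\it{infer}}\approx 66.0\%$, i.e., a noticeably larger error at the male-heavy end, which is consistent with the trend we observe.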
\subsection{Evaluation on Partial Black-box Attack} \label{section:evaluation_part_black-box} \begin{figure*}[!t] \centering \begin{subfigure}{0.5\columnwidth} \includegraphics[width=\columnwidth]{pic/partBB_wrt_num_shadow/opt3_p1_euc.pdf} \caption{Evaluation on $T_1$} \label{figure:partBBnumSEUC1} \end{subfigure} \begin{subfigure}{0.5\columnwidth} \includegraphics[width=\columnwidth]{pic/partBB_wrt_num_shadow/opt3_p2_euc.pdf} \caption{Evaluation on $T_2$} \label{figure:partBBnumSEUC2} \end{subfigure} \begin{subfigure}{0.5\columnwidth} \includegraphics[width=\columnwidth]{pic/partBB_wrt_num_shadow/opt3_p3_euc.pdf} \caption{Evaluation on $T_3$} \label{figure:partBBnumSEUC3} \end{subfigure} \begin{subfigure}{0.5\columnwidth} \includegraphics[width=\columnwidth]{pic/partBB_wrt_num_shadow/opt3_p4_euc.pdf} \caption{Evaluation on $T_4$} \label{figure:partBBnumSEUC4} \end{subfigure} \caption{Partial black-box performance w.r.t.\ number of shadow models. For each target model, we generate 100 samples from the latent code sets optimized based on different numbers of shadow models and then obtain the inference results using these optimized samples. The red curve shows the average performance of all target models through latent code sets optimized by different numbers of shadow models. We also provide box plots of our results for statistical analysis.} \label{figure:partBBnumS} \end{figure*} We then evaluate the partial black-box method, which relies on locally-trained shadow models. \autoref{figure:partBBsight} shows the partial black-box attack results against all target models. The inferred property of each target model is represented as a single point, which depicts the partial black-box attack result against the corresponding underlying property, using 100 optimized samples generated from an optimized latent code set. Similar to \autoref{figure:fullBBsight}, we further plot a benchmark line as a reference to the best attack result, as well as an average line for target models with the same underlying property. As we can see, the average inference curve lies close to the benchmark line, proving the effectiveness of our partial black-box attack. For instance, we get an average inferred proportion of 48.4$\%$ for the target models with half males and half females in the underlying training dataset as shown in \autoref{figure:partBBsight2} (the ground truth is 50$\%$ males). Note that we use only 100 generated samples to achieve our partial black-box attack in \autoref{figure:partBBsight}, it is reasonable that the inference result of each target GAN has a relatively large deviation. Similar to our results and analysis for the full black-box attack (\autoref{section:evaluation_full_black-box}), our partial black-box attack produces good results when aiming to infer the underlying distribution in tasks $T_1$, $T_2$, $T_4$ and $T_5$, while $T_3$ is the most difficult one to achieve an accurate inference. Moreover, we can find some obvious gaps between the benchmark line and the average curve, such as $\mathcal{P}_{real}=40\%$ in $T_1$ and $\mathcal{P}_{real}=70\%$ in $T_3$. They are possibly caused by a significant dissimilarity between the shadow datasets and the target datasets. In that case, the optimization phase within the partial black-box pipeline fails to construct an effective latent code set for the target models. 
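To make the optimization phase referred to above concrete, the following is a minimal PyTorch-style sketch of how an optimized latent code set could be derived from the shadow GANs, where the three arguments denote the local shadow generators, their known dataset properties, and the property classifier. It assumes a differentiable relaxation in which $\phi$ averages the classifier's soft scores and $\mathcal{L}$ is a squared error, which may differ from our exact implementation.
\begin{verbatim}
import torch

def optimize_latent_codes(shadow_gans, shadow_props, f_prop,
                          n_codes=100, dim=100, steps=500, lr=0.01):
    # Random starting point for the latent code set.
    z = torch.randn(n_codes, dim, requires_grad=True)
    opt = torch.optim.SGD([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = 0.0
        for gan, prop in zip(shadow_gans, shadow_props):
            # phi: average soft score of the target class over all codes.
            scores = torch.softmax(f_prop(gan(z)), dim=1)[:, 1]
            inferred = scores.mean()
            # L: squared error between inferred and known shadow property.
            loss = loss + (inferred - prop) ** 2
        loss.backward()
        opt.step()
    return z.detach()
\end{verbatim}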
\mypara{Number of Optimized Samples} In \autoref{section:evaluation_full_black-box}, one key finding is that the full black-box attack performance is influenced by the number of random samples. Here we also investigate how the number of optimized samples impacts the accuracy of our partial black-box attack; note that the adversary uses an optimized latent code set of the corresponding size to generate these optimized samples. \autoref{figure:partBBnumL} presents the attack performance on each task while using different numbers of optimized samples, in other words, latent code sets of different sizes. Fixing the number of optimized samples, we can obtain the inference results for all 40 target models. For each number, we use box plots to present all the attack results against the target models. In addition, we plot a red curve that highlights the average performance against the target models. In general, we can see that the partial black-box attack becomes more precise as the number of optimized samples increases. For instance, in \autoref{figure:partBBnumL}, when the number of optimized samples grows from 25 to 100, the average absolute difference over all target models changes from 12.5\% to 9.8\% in $T_1$, 8.7\% to 7.2\% in $T_2$, 14.9\% to 10.1\% in $T_3$, 7.7\% to 4.9\% in $T_4$ and 9.8\% to 4.5\% in $T_5$. However, a larger number of optimized samples also tends to cause more serious instability (e.g., the variance of the inference results), scaled here by the difference between the lower and upper quartile in the box plot. As shown in \autoref{figure:partBBnumLEUC1}, when choosing the number of optimized samples as 200 in $T_1$, the average absolute difference increases to 12.6\%, and the interquartile range grows to around 15\%. In our remaining experiments, after trading off the attack accuracy and the optimization time cost, we set the number of optimized samples as 100, i.e., the size of the optimized latent code set is equal to 100. \begin{figure}[!t] \centering \includegraphics[width=0.67\columnwidth]{pic/partBB_wrt_start_point/opt_4.pdf} \caption{Partial black-box performance w.r.t.\ optimization starting point. Each line plots an average performance of the target models in $T_2$ with the same underlying property, based on 5 latent code sets optimized from different starting points.} \label{figure:start_point} \end{figure} \mypara{Number of Shadow Models} \autoref{figure:partBBnumS} shows the partial black-box attack performance on all target models against the number of local shadow models used to optimize the latent code set. We leverage box plots to present the attack performance for all models' inference results, based on latent code sets optimized with different numbers of shadow models. Moreover, we also plot a curve to present the average inference result against the number of shadow models. As we can see, the average inference behavior changes only slightly when using different numbers of shadow models. For example, when increasing the number of shadow models from 50 to 100 in \autoref{figure:partBBnumSEUC3}, the average absolute difference of all target models changes slightly from 10.2\% to 10.8\% in $T_3$. Even though the median and average attack accuracies are slightly higher when optimizing the latent code set based on only 25 shadow models for most tasks, the variance of the attack results is quite large, even more than 10\% in $T_3$ (scaled by the difference between the lower and upper quartile in a box plot).
This means that the property inference attack suffers from more severe instability when there are only a few shadow models. Finally, considering both the accuracy and stability of the attack performance, we set the number of shadow models as 100 in our other experiments. Moreover, our results indicate that our proposed partial black-box attack is able to achieve high inference accuracy with just a limited number of shadow models, suggesting it is a practical and realistic threat in the real world. \mypara{Optimization Starting Point} Since we utilize the stochastic gradient descent (SGD) method to optimize a latent code set to implement our partial black-box attack, the optimized results may be affected by the optimization initialization, i.e., the optimization starting point. \autoref{figure:start_point} shows the performance of our partial black-box attack on $T_2$ with five different random starting points. Similar to \autoref{figure:fullBBsight} and \autoref{figure:partBBsight}, we show the average performance for all target models. As we can see, the five curves depicting the average inference results are close to each other. For instance, in \autoref{figure:start_point}, considering target models with 70\% males in the underlying training dataset, the average inference results range from 63.2\% to 65.2\% using five latent code sets based on different optimization starting points. This suggests that the optimization starting point does not noticeably affect our partial black-box attack performance. \begin{figure}[!t] \centering \includegraphics[width=0.67\columnwidth]{pic/sight_on_part_BB/out_of_range.pdf} \caption{Partial black-box performance w.r.t.\ out-of-range target property. Each bar represents the number of target models with the corresponding partial black-box attack results. All 21 target GANs have 20\% males and 80\% females in the underlying dataset and follow the same setting as $T_4$. The underlying property of the shadow models ranges from 30\% to 70\%.} \label{figure:out_of_range} \end{figure} \mypara{Target Property Out of Range} In the above experiments, we simply set the range of the property of the shadow models to cover the underlying property of the target models. When facing an unknown distribution, the adversary can also expand the property range covered by the shadow models to 0\% to 100\%. In this part, we examine our attack behavior when the target property is out of the range of the shadow models. \autoref{figure:out_of_range} shows the partial black-box attack results for target GANs with 20\% males in the underlying dataset based on shadow models with 30\% to 70\% males. As we can see in \autoref{figure:out_of_range}, for 38\% (8/21) of the target GANs, the partial black-box inference error is lower than 2\%. This indicates that our partial black-box attack still works when the target property is out of the range of the property of the shadow models. Compared with the full black-box scenario, our proposed attack methodology for the partial black-box scenario shows more stable performance, as it depends only loosely on the choice of several attack hyper-parameters, e.g., the optimization initialization and the number of shadow models.
In the former case, we choose a more general and straightforward way by directly sending random generated latent codes to get random samples. As it does not require any internal information of the target model (e.g., parameters, structures), this full black-box methodology is also applicable for the partial black-box adversary. In the latter case, we propose to leverage auxiliary knowledge of the target model to help search for optimized latent codes via SGD, and then send them to the target model. Our aforementioned experimental results show that the partial black-box adversary can achieve an accurate inference with a limited amount of latent codes. In this subsection, we present a more comprehensive comparison between the two attacks based on their attack stability and accuracy. As we can see in \autoref{figure:fullBBnum}, our full black-box attack can achieve a quite good attack performance based on over 256 random generated samples, so our comparison below will focus on using a relatively small amount of samples. \begin{figure}[!t] \centering \includegraphics[width=0.66\columnwidth]{pic/compare/com.pdf} \caption{Ratio of optimized samples behaving better than random ones. For each task, we obtain 80 random inference results and one optimized inference result based on a limited number of generated samples for each target GAN. Then we compare them to get the frequency of optimized samples behaving better than random ones and finally obtain the ratio considering all target models. In this way, we show the ratio of partial black-box attack (optimized) behaving better than the full black-box attack (random) against different numbers of samples.} \label{figure:comp2} \end{figure} \begin{figure*}[!t] \centering \begin{subfigure}{0.4\columnwidth} \includegraphics[width=\columnwidth]{pic/classifier/online1.pdf} \caption{Evaluation on $T_1$ with \\ IMDB-WIKI classifier} \label{figure:online1} \end{subfigure} \begin{subfigure}{0.4\columnwidth} \includegraphics[width=\columnwidth]{pic/classifier/online2.pdf} \caption{Evaluation on $T_2$ with \\ IMDB-WIKI classifier} \label{figure:online2} \end{subfigure} \begin{subfigure}{0.4\columnwidth} \includegraphics[width=\columnwidth]{pic/classifier/online3.pdf} \caption{Evaluation on $T_1$ with \\ Audience classifier} \label{figure:online3} \end{subfigure} \begin{subfigure}{0.4\columnwidth} \includegraphics[width=\columnwidth]{pic/classifier/online4.pdf} \caption{Evaluation on $T_2$ with \\ Audience classifier} \label{figure:online4} \end{subfigure} \begin{subfigure}{0.4\columnwidth} \includegraphics[width=\columnwidth]{pic/classifier/online5.pdf} \caption{Evaluation on $T_4$ with \\ EMNIST classifier} \label{figure:online5} \end{subfigure} \caption{Full black-box performance w.r.t.\ unknown training distribution. We adopt two off-the-shelf classifiers (based on IMDB-WIKI and Audience dataset) and a locally trained model (based on EMNIST dataset) to achieve our attacks. Each point depicts the full black-box attack performance with corresponding underlying property and inferred property. The blue curve plots the average performance of the target models with the same underlying property. The orange line marks the ideal attack result. The green line shows the dataset property reported by directly running the off-the-shelf classifier on the underlying training dataset.} \label{figure:online} \end{figure*} We first give a comparison based on the \emph{attack stability} for the proposed two methodologies. 
Based on the observation in \autoref{figure:fullBBnum}, we can find that the number of samples strongly affects the full black-box attack performance and the inference results are extremely unstable when using less than 256 samples. On the other hand, the partial black-box attack methodology is not affected too much by the number of optimized samples, as the red curves in \autoref{figure:partBBnumL} are much smoother than curves in \autoref{figure:fullBBnum}. For instance, the average absolute difference of our partial black-box attack result decreases from 8.2\% to 3.6\% in $T_4$ when the number of optimized generated samples increases from 25 to 200. In this way, the partial black-box methodology produces a relatively stable performance when using a limited amount of samples (less than around 128 to 256). We follow with a comparison of the \emph{attack accuracy} for the proposed two methodologies, based on the experiment established below. For each target model and specific size of the latent code set, we perform the partial black-box attack once using the corresponding number of optimized samples. In the meanwhile, because the full black-box attack presents a relatively unstable behavior, especially when the number of random samples is small, we repeat our full black-box attack 80 times to reduce observation randomness. Then we compare each full black-box result with the corresponding partial black-box result. Finally, for each tested number of samples, we respectively calculate the ratio of comparisons in which the optimized samples produce a more accurate inference with full consideration of all target models. The statistical result is plotted in \autoref{figure:comp2}. It shows that the partial black-box attack method is more likely to provide more accurate inferences than the full black-box attack when using a small number of samples (less than around 150). For tasks except $T_1$, we can find that inferences with optimized samples are better than those with random samples in most cases (i.e., the ratio is exceeding 0.5). Another observation is that, as the number of samples increases, more full black-box inferences outperform the partial black-box attack results. Overall, we give a conclusion for these two methodologies. \mypara{Partial Black-box Methodology} Our partial black-box attack achieves a better inference performance in both accuracy and stability with a limited number of optimized samples (around 150). Note that obtaining a large number of samples from the target GAN can be possibly detected as an abnormal event, our partial black-box attack is supposed to be a more stealthy one. Moreover, the partial black-box attack helps to reduce query charges when the adversary needs to pay for the generated samples. \mypara{Full Black-box Methodology} When the adversary is allowed to obtain a large number of samples, our full black-box methodology provides a more convenient way to achieve her attack, as it avoids the consumption to optimize the latent code set. Besides, we believe that our full black-box methodology has also presented realistic threats against generative models, as it provides a more generic and easy solution without any extra knowledge of the target model. \subsection{Evaluation on Property Classifier} \label{section:property_classifier} Both of our full and partial black-box attack pipelines include a property classifier $f_{\property}$, which directly impacts the final inference result. 
In this subsection, we study how the property classifier influences the property inference attack with respect to two factors: the training dataset distribution and the structure of the property classifier.
\mypara{Training Dataset from a Different Distribution} We first consider a strict setting in which the adversary has no knowledge of the target model's training dataset distribution. In this case, we investigate how the full black-box attack behaves when the property classifier is trained on a different dataset. In our experiment, we adopt three property classifiers, i.e., an off-the-shelf CNN model\footnote{\url{https://data.vision.ee.ethz.ch/cvl/rrothe/imdb-wiki/}} trained on the IMDB-WIKI dataset~\cite{RTG18}, an off-the-shelf CNN model\footnote{\url{https://github.com/dpressel/rude-carnie}} trained on the Audience dataset~\cite{LH15}, and a model locally trained with the EMNIST dataset~\cite{CATS17}. As the two off-the-shelf models are gender classifiers, we directly adopt them for the full black-box attack on $T_1$ and $T_2$. \autoref{figure:online3} and \autoref{figure:online4} show that the property classifier based on the Audience dataset yields good attack performance on both tasks. However, as shown in \autoref{figure:online2}, the IMDB-WIKI classifier results in a significant inference accuracy decline in $T_2$. This is possibly due to the relatively poor performance of the IMDB-WIKI classifier on the AFAD dataset. To verify this conjecture, we run the off-the-shelf property classifier directly on the target model's underlying training dataset and then calculate the dataset property. The green curve in \autoref{figure:online2} depicts the result. We can clearly see that the average attack results correspond closely to the dataset property recognized by the property classifier. For instance, when the proportion of male training samples is 30\%, the property classifier reports a male proportion of 56\%, which is close to the average inferred proportion of 59\%. This phenomenon reveals that the property classifier plays a pivotal role in the inference attack. Moreover, we also train a local classifier with the EMNIST dataset~\cite{CATS17} to carry out the full black-box attack on $T_4$. The EMNIST dataset is a variant of the full NIST dataset and shares the same image structure and parameters as the original MNIST dataset, but it is completely disjoint from the MNIST dataset. \autoref{figure:online5} shows that the EMNIST classifier still allows a relatively accurate property inference attack. Our results show that it is possible to achieve an accurate property inference attack even without knowledge of the training dataset distribution, as long as the adversary owns a property classifier that is sufficiently accurate on the target problem.
\begin{figure}[!t] \centering \includegraphics[width=0.6\columnwidth]{pic/classifier/structure.pdf} \caption{Full black-box performance w.r.t.\ property classifier architecture. Each line gives the average performance of target models with the same underlying property.
``c3f2'' means the classifier architecture begins with 3 convolutional layers followed by 2 fully connected layers.} \label{figure:structure} \end{figure} \begin{figure}[!t] \centering \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\columnwidth]{pic/shadow_gan/WGANGP_result_PGGANnoise.pdf} \caption{Evaluation on $T_1$} \label{figure:shadow1} \end{subfigure} \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\columnwidth]{pic/shadow_gan/PGGAN_result_WGANGPnoise.pdf} \caption{Evaluation on $T_2$} \label{figure:shadow2} \end{subfigure} \caption{Partial black-box performance w.r.t.\ unknown training distribution and structure of the target GAN. For $T_1$, we optimize the latent code set based on PGGAN with the IMDB-WIKI dataset. For $T_2$, we optimize the latent code set based on WGANGP with the CelebA dataset. For both tasks, we adopt a downloaded CNN model based on the IMDB-WIKI dataset as our property classifier. The blue curve plots the average performance of the target models with the same underlying property. The orange line marks the best attack result. The green line shows the training dataset property directly reported by the off-the-shelf classifier.} \label{figure:shadow} \end{figure}
\mypara{Classifier Architecture} Second, we examine the attack behavior based on property classifiers with different architectures. Our property classifier in $T_1$ is a convolutional neural network with two convolutional layers and two fully connected layers (shortened as c2f2 in this paper). Additionally, we adopt three other structures with different numbers of convolutional layers or fully connected layers (i.e., c2f3, c1f2, and c3f2). As shown in \autoref{figure:structure}, the average full black-box attack results in $T_1$ are very close to each other when using property classifiers with different structures. For instance, the average inference results range from 26\% to 31\% when the underlying training dataset contains 30\% males. As a result, our attack methodologies are only slightly influenced by the classifier architecture.
\subsection{Evaluation on Shadow Models with Fewer Hyper-parameters} \label{section:Shadow} So far we have assumed that the partial black-box adversary can train shadow models with the same structure and training hyper-parameters on an auxiliary dataset of the target GAN. In this subsection, we investigate the behavior of our partial black-box attack based on shadow models with a different structure and training dataset. Since the partial black-box adversary still controls the input latent code of the target GAN, we only require the size of the latent code layer of the shadow models to match that of the target model. For $T_1$, we use the PGGAN trained on the IMDB-WIKI dataset as shadow models, while the target model is WGANGP based on the CelebA dataset. \autoref{figure:shadow1} shows that the partial black-box attack result is close to the benchmark line. As a result, the latent code set optimized with shadow models still works well even though the adversary has no knowledge of the main structure and dataset of the target model. For $T_2$, we use the WGANGP trained on the CelebA dataset as shadow models. As shown in \autoref{figure:shadow2}, there is a certain deviation between the partial black-box attack result and the benchmark line. However, the inference result corresponds closely to the dataset property recognized by the off-the-shelf property classifier.
Moreover, the similarity between \autoref{figure:shadow} and \autoref{figure:online} demonstrates that our partial black-box methodology remains effective without knowledge of the structure and underlying dataset of the target model. This further confirms that the generated samples of the shadow and target GANs do not need to be similar for our partial black-box methodology to succeed.
\subsection{Evaluation on Multi-class Property} \label{section:Multi} As the five tasks shown above all focus on inferring the distribution of attributes with \emph{binary classes}, we present here an extra experiment to evaluate the performance of our attack on a property with \emph{multiple classes}, i.e., the distribution of the 10 digits in the training dataset. For a target DCGAN trained on MNIST with a specific digit distribution (covering digits 0 to 9), \autoref{figure:mnist_ori} shows how our full black-box attack behaves when inferring this multi-class property. We can clearly observe that the inferred distribution closely follows the underlying property. Moreover, the cosine similarity is over 0.99 when considering these two distributions as vectors. As a result, our attack performs well on inferring properties with not only binary but also multiple classes.
\begin{figure}[!t] \centering \includegraphics[width=0.75\columnwidth]{pic/mnist/mnist_ori.pdf} \caption{Full black-box performance w.r.t.\ multi-class property. The blue bar shows the distribution of each digit in the training dataset, while the orange one depicts the inferred distribution based on our full black-box attack.} \label{figure:mnist_ori} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.75\columnwidth]{pic/mnist/mnist_fake.pdf} \caption{Mitigation performance. The blue bar shows the original distribution of each digit in the training dataset, while the orange one depicts the inferred distribution based on our full black-box attack. The green bar shows the fake distribution after rebalancing the dataset.} \label{figure:mnist_fake} \end{figure}
\subsection{Discussion on Mitigation} \label{section:mitigate} \mypara{Local Property Classifier} One way to mitigate our attack is to introduce a local property classifier to pre-test the property of generated samples. In this way, even though the real generated samples expose the underlying property of the training dataset, the defender can release only a subset of the generated samples, selected with the help of the local property classifier, to hide this distribution.
\mypara{Rebalancing the Training Dataset} Another solution is to simply rebalance the training dataset with respect to the target property by adding new samples. \autoref{figure:mnist_fake} shows the full black-box attack performance after augmenting the training dataset to present a fake distribution. We find that the inferred property is closer to the fake distribution than to the real one, and the cosine similarity decreases from 0.99 to 0.88. This method is useful in most scenarios, but it can possibly jeopardize the performance of the target GAN.
\section{Enhancing Membership Inference Attack} \label{section:MIA} So far, we have demonstrated the effectiveness of our property inference attacks against GANs. Next, we investigate whether our property inference attacks can be used as a building block to launch other attacks. In particular, we focus on membership inference, one of the most well-established privacy attacks against GANs~\cite{HMDC19,CYZF20}.
\mypara{Methodology} In general, the membership inference attack intends to infer whether a target sample belongs to the underlying training dataset of a target GAN. The state-of-the-art attack in this field, proposed by Chen et al.~\cite{CYZF20}, follows three steps:
\begin{itemize}
\item Use a distance metric $L(\cdot,\cdot)$ to evaluate the reconstruction error of the target sample $x$ against the target GAN $(\mathsf{G}_v)$. In different scenarios, they use different methodologies to obtain the most similar generated sample $\mathcal{R}(x|\mathsf{G}_v)$.
\item Build a shadow GAN $(\mathsf{G}_r)$ to repeat the first step and obtain a reference reconstruction error. In this way, the calibrated reconstruction error $L_{cal}(\cdot,\cdot)$ can be calculated as:
\begin{equation} L_{cal}(x,\mathcal{R}(x|\mathsf{G}_v)) = L(x,\mathcal{R}(x|\mathsf{G}_v)) - L(x,\mathcal{R}(x|\mathsf{G}_r)) \end{equation}
\item Infer whether the target sample is in the training dataset based on a threshold. Formally, the attack works as
\begin{equation} \label{equation:MIA} \mathcal{A}(x) = \mathbbm{1}[L_{cal}(x,\mathcal{R}(x|\mathsf{G}_v)) < \epsilon] \end{equation}
i.e., when the calibrated reconstruction error is smaller than the threshold $\epsilon$, the sample is inferred to belong to the training dataset.
\end{itemize}
Our enhancement follows the intuition that a sample is more likely to be a member when it shares the same property as the majority of samples in the target GAN's training dataset. For instance, if the target GAN's training dataset contains more males than females (obtained by our property inference attacks), then a target male sample is more likely to be a member than a female sample. Based on this, we add an extra term to the threshold in \autoref{equation:MIA} to enhance membership inference. Formally, the membership inference attack is modified as follows.
\begin{equation} \label{equation:enhanced_MIA} \mathcal{A}(x) = \mathbbm{1}[L_{cal}(x,\mathcal{R}(x|\mathsf{G}_v)) < \epsilon + \lambda_{p}\frac{1}{N}\sum_{i}^N \sl{f}(\mathcal{P}_{\it{i}})] \end{equation}
where $\lambda_{p}$ controls the magnitude of our enhancement, $N$ refers to the number of considered attributes of the query sample, $\mathcal{P}_{\it{i}}$ is the proportion of the $i$th attribute in the training dataset, and $\sl{f}({\mathcal{P}_{\it{i}}}) = 2 {\times} {\mathcal{P}_{\it{i}}} - 1$. The term $\lambda_{p}\frac{1}{N}\sum_{i}^N \sl{f}(\mathcal{P}_{\it{i}})$ in \autoref{equation:enhanced_MIA} helps to calibrate a target sample's membership probability with respect to the property of the target model's training dataset. When the target sample shares the same attribute as a higher proportion of the underlying training dataset ($\mathcal{P}_{\it{i}}>50\%$), the new threshold rises, which leads to a higher inferred membership probability.
\mypara{Evaluation} We evaluate the performance of the enhanced membership inference attack with the additional knowledge of the gender distribution of samples in the training dataset, which is obtained by our property inference attack. We set up the target GANs using 2,048 CelebA samples while controlling the underlying property (the proportion of males); the structure is the same as for $T_1$ discussed in Section~\ref{section:target_models}. We adopt the same full black-box membership inference attack methodology as Chen et al.~\cite{CYZF20}. We set $\lambda_{p} = 2$, and each evaluation dataset has 2,048 members and 6,144 non-members (8,192 in total).
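To make the decision rule concrete, the following minimal Python sketch (our illustration; the function names, the $\ell_2$ distance, and the toy inputs are assumptions and are not taken from the implementation of \cite{CYZF20}) evaluates the calibrated reconstruction error and the property-adjusted threshold of \autoref{equation:enhanced_MIA}.
\begin{verbatim}
import numpy as np

def calibrated_error(L, x, rec_target, rec_shadow):
    """L_cal(x) = L(x, R(x|G_v)) - L(x, R(x|G_r))."""
    return L(x, rec_target) - L(x, rec_shadow)

def enhanced_threshold(eps, props, lambda_p=2.0):
    """eps + lambda_p * (1/N) * sum_i f(P_i), with f(P) = 2*P - 1.

    props[i] is the inferred proportion P_i of training samples that
    share the query sample's value of the i-th considered attribute.
    """
    f = 2.0 * np.asarray(props) - 1.0
    return eps + lambda_p * f.mean()

def is_member(L, x, rec_target, rec_shadow, eps, props, lambda_p=2.0):
    """Decision rule of the enhanced attack."""
    return calibrated_error(L, x, rec_target, rec_shadow) < \
           enhanced_threshold(eps, props, lambda_p)

# Toy usage with one attribute (gender): the property inference attack
# reported that 70% of the training samples share the query's gender.
l2 = lambda a, b: float(np.linalg.norm(a - b))
x     = np.random.rand(64, 64, 3)              # query sample (placeholder)
rec_v = x + 0.05 * np.random.rand(*x.shape)    # reconstruction by target GAN
rec_r = x + 0.10 * np.random.rand(*x.shape)    # reconstruction by shadow GAN
print(is_member(l2, x, rec_v, rec_r, eps=0.0, props=[0.7]))
\end{verbatim}
With a single attribute, the adjustment reduces to $\lambda_{p}(2\mathcal{P}_{1} - 1)$, so a proportion above $50\%$ relaxes the threshold and one below $50\%$ tightens it, exactly as discussed above.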
\begin{figure}[!t] \centering \includegraphics[width=0.66\columnwidth]{pic/MIA.pdf} \caption{Enhanced membership inference attack performance.} \label{figure:MIA} \end{figure}
As shown in \autoref{figure:MIA}, with the knowledge of the training dataset's underlying property, i.e., 30\% male samples, the AUC (area under the ROC curve) of our enhanced membership inference increases from 0.52 to 0.61. Furthermore, the more polarized the gender distribution is, the more pronounced the enhancement becomes. As a result, the extra term added to the threshold in \autoref{equation:enhanced_MIA} indeed calibrates the membership probability effectively, which further demonstrates the applicability of our property inference attacks.
\begin{figure}[!t] \centering \includegraphics[width=0.66\columnwidth]{pic/inaccurate_MIA.pdf} \caption{Enhanced membership inference attack performance w.r.t.\ inferred properties with deviations.} \label{figure:inaccurate_MIA} \end{figure}
\mypara{Impact of Inferred Properties} Since our property inference cannot deliver the exact proportion of the considered attribute in the target training dataset, we further evaluate our enhancement algorithm based on inferred proportions with deviations. As we can see in \autoref{figure:inaccurate_MIA}, our enhancement still works on a target GAN with 30\% males as long as the inferred proportion used is below 50\%. Moreover, the AUC of the enhanced membership inference changes only slightly when our property inference attack delivers a proportion smaller than the underlying property, but it drops to the baseline when the inferred proportion approaches 50\%. This further illustrates the applicability of our membership inference enhancement algorithm based on the property inference attack.
\section{Related Work} \label{section:related} \mypara{Membership Inference Attacks} Membership inference attacks try to infer whether a sample belongs to a specific dataset. Previous studies have demonstrated successful attacks against various targets, such as biomedical data~\cite{HSRDTMPSNC08,BBHM16,HZHBTWB19} and location data~\cite{PTC18}. Shokri et al.~\cite{SSSS17} introduce the first membership inference attack against machine learning models. The key idea is to leverage a set of shadow models to mimic the target model's behavior, and then train an attack model on the model outputs to discriminate member from non-member samples. Salem et al.~\cite{SZHBFB19} show that the membership inference attack can attain high accuracy even when relaxing the three assumptions in~\cite{SSSS17}. In recent years, membership inference attacks have been investigated in various other scenarios, e.g., white-box models~\cite{NSH19,LF20}, federated learning~\cite{MSCS19}, generative models~\cite{HMDC19,CYZF20}, machine unlearning~\cite{CZWBHZ21}, graph neural networks~\cite{HWWBSZ21,ONK21}, recommender systems~\cite{ZRWRCHZ21}, self-supervised models~\cite{HZ21,LJQG21}, label-only cases~\cite{CTCP21,LZ21}, etc. Despite current research efforts on the membership inference threat for generative models, a wide range of privacy issues of generative models still remain largely unexplored. To fill this gap, we present the first study of property inference attacks against GANs. Our results show that even with limited knowledge of and access to the target, it is still possible to infer sensitive properties of the training dataset accurately.
\mypara{Property Inference Attacks} Property inference attacks aim to infer properties of the target model or the training dataset which the producer does not intend to share. In fact, sensitive {\it{properties}} cover a wide range of information whose exposure can violate intellectual property. They can be model-related, such as the model structure and activation functions, or data-related, such as where the data were produced or the distribution of the training data. Our work belongs to the data-related property inference attacks against GANs. Ganju et al.~\cite{GWYGB18} propose the first property inference attack against discriminative models, which focuses on fully connected neural networks (FCNN), while ours focuses on GANs. As FCNN and GANs have different types of inputs and outputs, the information that our attack exploits is different from that in \cite{GWYGB18}. As a result, our attack relies on an optimized latent code set fed to the target model in the partial black-box case, while \cite{GWYGB18} uses the weights of the FCNN as its property classifier's input (since it is a white-box attack). Moreover, \cite{GWYGB18} works in the white-box scenario and only treats the inference attack as a binary prediction task, which cannot return a precise prediction of the target property. Different from Ganju et al.~\cite{GWYGB18}, our attacks aim to predict the target property in a far more precise fashion, by modeling the attack task as a regression problem. Furthermore, our proposed attacks work well in two more realistic scenarios: the full black-box and the partial black-box setting. Moreover, Carlini et al.~\cite{CLEKS19} demonstrate the secret leakage problem caused by unintended memorization in sequential generative models. Their work focuses on recovering specific sensitive training records from sequential models, while ours aims at inferring global information about the training dataset of another important kind of generative model--GANs. Besides the above, there also exists a wide range of studies on the security and privacy risks of ML models, such as model stealing~\cite{TZJRR16,OSF19,JCBKP20,YYZTHJ20}, model inversion~\cite{FJR15}, backdoor attacks~\cite{WYSLVZZ19,SWBMZ20,LMALZWZ18,CSBMSWZ21}, and other attacks in specific settings~\cite{SBBFZ20,MSCS19,SS19,HJBGZ21,LWHSZBCFZ22}.
\section{Conclusion} \label{section:conclusion} In this paper, we perform the first property inference attack against GANs, the goal of which is to infer macro-level information about a target GAN's underlying training dataset. We propose a general attack pipeline for two different attack scenarios, following the intuition that the generated samples can reflect the distribution of the underlying training dataset. In the full black-box setting, we rely on randomly generated samples and a property classifier to realize our attack. In the partial black-box setting, we introduce a novel optimization framework to reduce the number of queries with the help of shadow models. Comprehensive experiments show the effectiveness of both attack methodologies in a variety of setups including five property inference tasks, four datasets, and four victim GAN models. We also compare our two methodologies to verify the advantage of the partial black-box attack when using a limited number of samples, with respect to two factors, i.e., stability and accuracy. We additionally show the effectiveness of our full black-box attack without \emph{any} knowledge of the target model.
Moreover, we present how to leverage our property inference attack to enhance membership inference attacks, which demonstrates the applicability of the proposed property inference method. \section*{Acknowledgement} We thank all the anonymous reviewers for their insightful suggestions and comments to improve the paper. This work is supported by National Key R\&D Program (2020YFB1406900), National Natural Science Foundation of China (U21B2018, 61822309, 61773310, U1736205), Shaanxi Province Key Industry Innovation Program (2021ZDLGY01-02), and the Helmholtz Association within the project ``Trustworthy Federated Data Analytics'' (TFDA) (funding number ZT-I-OO1 4). \bibliographystyle{IEEEtranS}
\section*{Introduction} In this paper we investigate mathematically a process used by the building industry to protect and conserve cultural heritage objects and other structural works. Structures which are subject to weathering can be strengthened by filling the pores with a water-lime mixture. The mixture penetrates into the pore structure of the stone, and the calcium hydroxide reacts with carbon dioxide to form calcium carbonate and water. A solid layer is formed in the pore space which strengthens the material. The main problem is, however, that the consolidated layer is in practice rather thin and is located too close to the active boundary. In order to avoid possible ambiguity, it is necessary to explain that the term `consolidation' is to be interpreted here as a process of formation of calcium carbonate which has the property of binding the particles together. It might also be called `cementation' or `compaction'. Many works in the literature deal with these problems. Different practical strategies for the wetting and drying regime which lead to a more uniform distribution of the consolidant are compared in \cite{sli}. A mathematical model is proposed in \cite{ssv1993, ssv1995}, and \cite{cetal}, where the authors derive governing equations for moisture, heat, and air flow through concrete. A numerical procedure based on the finite element method is developed there to solve the set of equations and to investigate the influence of relative humidity and temperature. It is shown that the amount of calcium carbonate formed in a unit of time depends on the degree of carbonation,~i.\,e., the availability of calcium hydroxide, the temperature, the carbon dioxide concentration and the relative humidity in the pore structure of the concrete. An extension of the aforementioned papers by studying the hygro-thermal behavior of concrete in the special situation of high temperatures can be found in \cite{getal}. In the present case chemical reactions take place. Various approaches exist which describe such processes by models stemming from different backgrounds (e.\,g., from mixture theory or empirical models). An overview of the development of theories especially for porous media including chemical reactions is given in \cite{bd}. The interactions between the constituents of a porous medium are not necessarily of a chemical nature leading to a transformation of one set of substances into another. A simpler case is the mass exchange between the constituents by physical processes such as adsorption. Adsorption-diffusion processes have been studied by B. Albers (the former name of B. Detmann) e.\,g.~in \cite{ads}. Other works on sorption in porous solids including molecular condensation are \cite{bazant1} and \cite{bazant2}. In these works the diffusivities of water and carbon dioxide are assumed to be strongly dependent on pore humidity, temperature and also on the degree of hydration of concrete. The authors realized that the porosity becomes non-uniform in time. This observation is also of interest in the present case because the structure of the channels clearly changes with the progress of the reaction. A survey of consolidation techniques for historical materials is published in \cite{pd}. The influence of the particle size on the efficiency of the consolidation process is investigated in \cite{svmlma}. Experimental determination of the penetration depth is the subject of \cite{blhvs1,blhvs2,blhvs3}.
Different variants of the consolidants are studied in \cite{drsli,msm,poam,ss}, and experimental work on the mechanical interaction between the consolidant and the matrix material is carried out in \cite{mpe}. A further work dealing with chemical reactions and diffusion in concrete based on the mixture theory for fluids introduced by Truesdell and coworkers is by A.\,J.~Vromans et al.~\cite{vetal}. The model describes the corrosion of concrete with sulfuric acid, which means a transformation of slaked lime and sulfuric acid into gypsum releasing water. This is similar to the reaction we consider here. A similar topic is dealt with in \cite{VanBalen}, where it is shown how the carbonation process in lime mortar is influenced by the diffusion of carbon dioxide into the mortar pore system, by the kinetics of the lime carbonation reaction, and by the drying and wetting process in the mortar. Experimental results of $\mathrm{CaCO_{3}}$ precipitation kinetics can be found in \cite{Roques}. The porosity changes during the reaction. This was studied by Houst and Wittmann, who also investigated the influence of the water content on the diffusivity of $\mathrm{CO_{2}}$ and $\mathrm{O_{2}}$ through hydrated cement paste \cite{Houst}. An investigation of the physico-chemical characteristics of ancient mortars with comparison to a reaction-diffusion model by Zouridakis et al.~is presented in \cite{zour}. A slightly different reaction, which also involves sulfur, is studied mathematically in \cite{bohm} by B\"ohm et al., where the corrosion in a sewer pipe is modeled as a moving-boundary system. A strategy for predicting the penetration of carbonation reaction fronts in concrete was proposed by Muntean et al. in \cite{munt}. A simple 1D mathematical model for the treatment of sandstone neglecting the effects of chemical reactions is proposed in \cite{cnns} and further refined in \cite{bfngrt}. We model the consolidation process as a convection-diffusion system coupled with a chemical reaction in a 3D porous solid. The physical observation that only water can be evacuated from the porous body, while lime remains inside, requires a nonstandard boundary condition on the active part of the boundary. We choose a simple one-sided condition for the lime exchange between the interior and the exterior. The main result of the paper consists in proving rigorously that the resulting initial-boundary value problem for the PDE system in 3D has a solution satisfying natural physical constraints, including the boundedness of the concentrations proved by means of time-discrete Moser iterations. We also show the result of a numerical simulation in a simplified 1D situation. The structure of the present paper is the following. In Section \ref{mod}, we explain the modeling hypotheses and derive the corresponding system of balance equations with nonlinear boundary conditions. In Section \ref{math}, we give a rigorous formulation of the initial-boundary value problem, specify the mathematical hypotheses, and state the main result in Theorem \ref{t1}. The solution is constructed by a time-discretization scheme proposed in Section \ref{time}. The estimates independent of the time step size derived for this time-discrete system constitute the substantial step in the proof of Theorem \ref{t1}, which is obtained in Section \ref{limi} by passing to the limit as the time step tends to zero.
A numerical test for a reduced 1D system is carried out in Section \ref{nume} to illustrate a qualitative agreement of the mathematical result with experimental observations. \section{The model}\label{mod} We imagine a porous medium (sandstone, for example) the structure of which is to be strengthened by letting calcium hydroxide particles driven by water flow penetrate into the pores. In contact with the air present in the pores, the calcium hydroxide reacts with the carbon dioxide contained in the air and produces a precipitate (calcium carbonate) which is not water-soluble, remains in the pores, and glues the sandstone particles together. Unlike, e.\,g., in \cite{hkk,schK}, we do not consider the porosity as one of the state variables. The porosity evolution law is replaced with the assumption that the permeability decreases as a result of the calcium carbonate deposit in the pores. The chemical reaction is assumed to be irreversible and we write it as $Ca(OH)_2 + CO_2 \to CaCO_3 + H_2O$.\medskip Notation: \begin{itemize} \item[] $\dot c^W$ ... mass source rate of $H_2 O$ produced by the chemical reaction \item[] $\dot c^H$ ... mass source rate of $Ca(OH)_2$ produced by the chemical reaction \item[] $\dot c^P$ ... mass source rate of $Ca C O_3$ produced by the chemical reaction \item[] $\dot c^G$ ... mass source rate of $C O_2$ produced by the chemical reaction \item[] $m^W$ ... molar mass of $H_2 O$ \item[] $m^H$ ... molar mass of $Ca(OH)_2$ \item[] $m^P$ ... molar mass of $Ca C O_3$ \item[] $m^G$ ... molar mass of $C O_2$ \item[] $\rho^W$ ... mass density of $H_2 O$ \item[] $\rho^H$ ... mass density of $Ca(OH)_2$ \item[] $p$ ... capillary pressure \item[] $s$ ... water volume saturation \item[] ${h}$ ... relative concentration of $Ca(OH)_2$ \item[] $p^{\partial\Omega}$ ... outer pressure \item[] $h^{\partial\Omega}$ ... outer concentration of $Ca(OH)_2$ \item[] $v$ ... transport velocity vector \item[] $k(c^P)$ ... permeability of the porous solid \item[] $q$ ... liquid mass flux \item[] $q^H$ ... mass flux of $Ca(OH)_2$ \item[] $\gamma$ ... speed of the chemical reaction \item[] $\kappa$ ... diffusivity of $Ca(OH)_2$ \item[] $n$ ... unit outward normal vector \item[] $\sigma(x)$ ... transport velocity interaction kernel \item[] $\alpha(x)$ ... boundary permeability for water \item[] $\beta(x)$ ... boundary permeability for the inflow of $Ca(OH)_2$ \end{itemize} Mass balance of the chemical reaction: $$ \frac{\dot c^P}{m^P} = \frac{\dot c^W}{m^W} = -\frac{\dot c^H}{m^H} = -\frac{\dot c^G}{m^G}. $$ Water mass balance in an arbitrary subdomain $V$ of the porous body: $$ \frac{\,\mathrm{d}}{\,\mathrm{d} t}\int_V \rho^W s \,\mathrm{d} x + \int_{\partial V} q\cdot n \,\mathrm{d} S = \int_V \dot c^W \,\mathrm{d} x. $$ Calcium hydroxide mass balance in an arbitrary subdomain $V$ of the porous body: $$ \frac{\,\mathrm{d}}{\,\mathrm{d} t}\int_V \rho^H {h} \,\mathrm{d} x + \int_{\partial V} q^H\cdot n \,\mathrm{d} S = \int_V \dot c^H \,\mathrm{d} x. $$ Water mass balance in differential form: $$ \rho^W\dot s + \mathrm{\,div\,} q = \dot c^W. $$ Calcium hydroxide mass balance in differential form: $$ \rho^H\dot{h} + \mathrm{\,div\,} q^H = \dot c^H. $$ The water mass flux is assumed to obey the Darcy law: $$ q = -k(c^P) \nabla p, $$ with permeability coefficient $k(c^P)$ which is assumed to decrease as the amount $c^P$ of $Ca C O_3$ given by the formula $$ c^P(x,t) = \int_0^t \dot c^P(x,t')\,\mathrm{d} t' $$ increases and fills the pores. 
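For orientation, the mass balance of the chemical reaction stated above can be made quantitative. With the standard molar masses (inserted here for illustration only; they play no role in the analysis below)
$$ m^W \approx 18.0, \quad m^H \approx 74.1, \quad m^P \approx 100.1, \quad m^G \approx 44.0 \ \mathrm{g/mol}, $$
the balance relations give
$$ \dot c^P = -\frac{m^P}{m^H}\,\dot c^H \approx 1.35\,(-\dot c^H), \qquad \dot c^W = -\frac{m^W}{m^H}\,\dot c^H \approx 0.24\,(-\dot c^H), \qquad \dot c^G = \frac{m^G}{m^H}\,\dot c^H \approx 0.59\,\dot c^H, $$
so that each gram of consumed calcium hydroxide binds about $0.59$ g of carbon dioxide and produces about $1.35$ g of calcium carbonate and $0.24$ g of water; in particular $\dot c^W + \dot c^H + \dot c^P + \dot c^G = 0$, i.e., the total mass is conserved.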
The flux of $Ca(OH)_2$ consists of transport and diffusion terms: $$ q^H = \rho^H {h} v - \kappa s \nabla {h}. $$ The mobility coefficient $\kappa s$ in the diffusion term is assumed to be proportional to $s$: if there is no water in the pores, no diffusion takes place. We assume that the transport of $Ca(OH)_2$ at the point $x \in \Omega$ is driven by the water flux in a small neighborhood of $x$. In mathematical terms, we assume that there exists a nonnegative function $\sigma$ with support in a small neighborhood of the origin such that the transport velocity $v$ can be defined as $$ v(x,t) = \frac{1}{\rho^H}\int_{\Omega} \sigma(x-y) q(y,t) \,\mathrm{d} y. $$ The main reason for this assumption is a mathematical one. The strong nonlinear coupling between $s$ and $h$ makes it difficult to control the bounds for the unknowns in the approximation scheme. We believe that such a regularization of the transport velocity makes physical sense as well. The wetting-dewetting curve is described by an increasing function $f$: $$p=f(s).$$ We focus on modeling the chemical reactions. Capillary hysteresis, deformations of the solid matrix, and thermal effects are therefore neglected here. We plan to include them following the ideas of \cite{ak} in a subsequent study. The dynamics of the chemical reaction is modeled according to the so-called \textit{law of mass action}, which states that the rate of the chemical reaction is directly proportional to the product of the concentrations of the reactants. We assume it in the form \begin{equation}\label{reac} \dot c^P = \gamma m^P {h} s(1-s). \end{equation} Its meaning is that no reaction can take place if either no $Ca(OH)_2$ is available (that is, $h=0$), or no water is available (that is, $s=0$), or no $C O_2$ is available (that is, $s=1$), according to the hypothesis that the chemical reaction takes place dominantly at the contact between water and air. In order to reduce the complexity of the problem, we assume directly that the available quantity of $C O_2$ is proportional to the air content. On the boundary $\partial\Omega$ we prescribe the normal fluxes. For the normal component of $q$, we assume that it is proportional to the difference between the pressures $p$ inside and $p^{\partial\Omega}$ outside the body. For the flux of $Ca(OH)_2$, we assume that it can only point inward, proportionally to the difference of concentrations and to $s$; no outward flux is possible. An inward flux takes place only if the outer concentration $h^{\partial\Omega}$ is larger than the inner concentration $h$: $$ \left. \begin{array}{rcl} q\cdot n &=& \alpha(x) (p-p^{\partial\Omega})\\[1.5 mm] q^H \cdot n &=& -\beta(x) s (h^{\partial\Omega}- h)^+ \end{array} \right\rbrace \text{on }\partial\Omega.$$ \section{Mathematical problem}\label{math} Let $\Omega \subset \mathbb{R}^3$ be a bounded Lipschitzian domain. We consider the Hilbert triplet $V \subset H \equiv H' \subset V'$ with compact embeddings and with $H = L^2(\Omega)$, $V= W^{1,2}(\Omega)$.
For two unknown functions $s(x,t),h(x,t)$ defined for $(x,t) \in\Omega\times (0,T)$ the resulting PDE system reads \begin{align}\nonumber &\int_{\Omega} (\rho^W s_t\phi(x) + k(c^P) f'(s) \nabla s\cdot\nabla \phi(x))\,\mathrm{d} x + \int_{\partial\Omega} \alpha(x)(f(s) - f(s^{\partial\Omega}))\phi(x) \,\mathrm{d} S(x)\\ \label{e01} &\qquad = \gamma m^W \int_{\Omega}{h} s(1-s)\phi(x)\,\mathrm{d} x,\\ \nonumber &\int_{\Omega} (\rho^H h_t\psi(x) + (\kappa s \nabla {h} - \rho^H hv)\cdot\nabla\psi(x))\,\mathrm{d} x - \int_{\partial\Omega} \beta(x)s (h^{\partial\Omega} - h)^+\psi(x)\,\mathrm{d} S(x)\\ \label{e02} &\qquad = -\gamma m^H \int_{\Omega} {h} s(1-s) \psi(x)\,\mathrm{d} x. \end{align} for all test functions $\phi, \psi \in V$, where $s^{\partial\Omega} := f^{-1}(p^{\partial\Omega})$, and with initial conditions \begin{equation}\label{ini} s(x,0) = s^0(x), \quad h(x,0) = h^0(x) \quad \mathrm{for\ } x \in \Omega. \end{equation} \begin{hypothesis}\label{h1} The data have the properties \begin{itemize} \item[{\rm (i)}] $s^{\partial\Omega} \in L^\infty(\partial\Omega\times (0,T))$, $s^0 \in L^\infty(\Omega)\cap W^{1,2}(\Omega)$ are given such that $s^{\partial\Omega}_t \in L^2(\partial\Omega\times (0,T))$, $0<s^\flat\le s^{\partial\Omega}(x,t) \le 1$ for a.\,e. $(x,t) \in \partial\Omega {\times} (0,T)$, $0<s^\flat \le s^0(x) \le 1$ for a.\,e. $x \in \Omega$; \item[{\rm (ii)}] $h^{\partial\Omega} \in L^\infty(\partial\Omega\times (0,T))$, $h^0 \in L^\infty(\Omega)$ are given such that $0\le h^{\partial\Omega}(x,t) \le h^\sharp$ for a.\,e. $(x,t) \in \partial\Omega \times (0,T)$, $0 \le h^0(x) \le h^\sharp$ for some $h^\sharp > 0$ and for a.\,e. $x \in \Omega$; \item[{\rm (iii)}] $f:[0,1] \to \mathbb{R}$ is continuously differentiable, $0 < f^\flat \le f'(s) \le f^\sharp$ for $0\le s \le 1$; \item[{\rm (iv)}] $k$ is continuously differentiable and nonincreasing, $0 < k^\flat \le k(r) \le k^\sharp$ for $r\ge 0$; \item[{\rm (v)}] $\sigma: \mathbb{R}^3 \to [0,\infty)$ is continuous with compact support, $\int_{\mathbb{R}^3}\sigma(x)\,\mathrm{d} x = 1$; \item[{\rm (vi)}] $\alpha, \beta \in L^\infty(\partial\Omega)$, $\alpha(x)\ge 0$, $\beta(x) \ge 0$ on $\partial\Omega$, $\int_{\partial\Omega} \alpha(x)\,\mathrm{d} S(x) > 0$, $\int_{\partial\Omega} \beta(x)\,\mathrm{d} S(x) > 0$. \end{itemize} \end{hypothesis} The meaning of Hypothesis \ref{h1}\,(vi) is that the boundary $\partial\Omega$ is inhomogeneous, with different permeabilities at different parts of the boundary. The transport of water (supply of $Ca(OH)_2$) through the boundary takes place only on parts of $\partial\Omega$ where $\alpha >0$ ($\beta>0$, respectively). The remaining sections are devoted to the proof of the following result. \begin{theorem}\label{t1} Let Hypothesis \ref{h1} hold. Then system \eqref{e01}--\eqref{e02} with initial conditions \eqref{ini} admits a solution $(s,h)$ such that $s_t \in L^2(\Omega\times (0,T))$, $\nabla s \in L^\infty(0,T;L^2(\Omega;\mathbb{R}^3))$, $\nabla h \in L^2(\Omega\times(0,T);\mathbb{R}^3)$, $h_t \in L^2(0,T; W^{-1,2}(\Omega))$, $h \in L^\infty(\Omega\times (0,T))$, $s(x,t) \in [0,1]$ a.\,e., $h(x,t) \ge 0$ a.\,e. \end{theorem} We omit the positive physical constants which are not relevant for the analysis. The strategy of the proof is based on choosing a cut-off parameter $R>0$, replacing $h$ in the nonlinear terms with $Q_R(h) = \min\{h^+, R\}$, $1-s$ with $Q_R(1-s)$, and $v$ in \eqref{e02} with $v^R := (Q_R(|v|)/|v|)v$. 
We also extend the values of the function $f$ outside the interval $[0,1]$ by introducing the function $\tilde f$ by the formula $$ \tilde f(s) = \left\{ \begin{array}{ll} f(0) + f'(0) s & \mathrm{for\ } s<0,\\ f(s) & \mathrm{for\ } s \in [0,1],\\ f(1) + f'(1)(s-1) & \mathrm{for\ } s>1, \end{array} \right. $$ and consider the system \begin{align}\nonumber &\int_{\Omega} (s_t\phi(x) + k(c^P) \tilde f'(s) \nabla s\cdot\nabla \phi(x))\,\mathrm{d} x + \int_{\partial\Omega} \alpha(x)(\tilde f(s) - \tilde f(s^{\partial\Omega}))\phi(x) \,\mathrm{d} S(x)\\ \label{e1} &\qquad = \int_{\Omega} Q_R(h) s Q_R(1-s)\phi(x)\,\mathrm{d} x,\\ \nonumber &\int_{\Omega} (h_t\psi(x) + (s\nabla {h} - h v^R)\cdot\nabla\psi(x))\,\mathrm{d} x - \int_{\partial\Omega} \beta(x)s (h^{\partial\Omega} - h)^+\psi(x)\,\mathrm{d} S(x)\\ \label{e2} &\qquad = - \int_{\Omega} {h} s(1-s) \psi(x)\,\mathrm{d} x \end{align} for all $\phi, \psi \in V$. We first construct and solve in Section \ref{time} a time-discrete approximating system of \eqref{e1}--\eqref{e2}, and derive estimates independent of the time step. In Section \ref{limi}, we let the time step tend to $0$ and prove that the limit is a solution $(s,h)$ to \eqref{e1}--\eqref{e2}. We also prove that this solution has the property that $s \in [0,1]$, $h$ is positive and bounded, and $v$ is bounded, so that for $R$ sufficiently large, the truncations are never active and the solution thus satisfies \eqref{e01}--\eqref{e02} as well. \section{Time discretization}\label{time} For proving Theorem \ref{t1}, we first choose $n \in \mathbb{N}$ and replace \eqref{e1}--\eqref{e2} with the following time-discrete system with time step $\tau = \frac{T}{n}$: \begin{align}\nonumber &\int_{\Omega} \left(\frac1\tau (s_j - s_{j-1})\phi(x) + k(c^P_{j-1}) \tilde f'(s_j) \nabla s_j\cdot\nabla \phi(x)\right)\,\mathrm{d} x + \int_{\partial\Omega} \alpha(x)(\tilde f(s_j) - \tilde f(s^{\partial\Omega}_j))\phi(x) \,\mathrm{d} S(x) \\ \label{de1} &\qquad = \int_{\Omega} Q_R(h_{j-1}) s_j Q_R(1-s_j)\phi(x)\,\mathrm{d} x,\\ \nonumber &\int_{\Omega} \left(\frac1\tau(h_j {-} h_{j-1})\psi(x) + (s_{j-1}\nabla {h_j} {-} h_j v^R_{j-1})\cdot\nabla\psi(x)\right)\,\mathrm{d} x - \int_{\partial\Omega} \beta(x)s_{j-1} (h^{\partial\Omega}_j - h_j)^+\psi(x)\,\mathrm{d} S(x)\\ \label{de2} &\qquad = - \int_{\Omega} {h_j} s_{j-1}(1-s_{j-1}) \psi(x)\,\mathrm{d} x, \end{align} for $j=1, \dots, n$ with initial conditions $s_0 = s^0$, $h_0 = h^0$, and with $c^P_i = 0$ for $i\le 0$, with \begin{align}\label{qj} q_j(x) &= -k(c^P_j(x)) \nabla \tilde f(s_j(x)) & \mathrm{for\ }\ x \in \Omega,\\ \label{vj} v_j(x) &= \int_{\Omega} \sigma(x-y) q_j(y) \,\mathrm{d} y & \mathrm{for\ }\ x \in \Omega, \\ \label{sj} s^{\partial\Omega}_j(x) &= \frac{1}{\tau}\int_{t_{j-1}}^{t_j} s^{\partial\Omega}(x,t)\,\mathrm{d} t & \mathrm{for\ }\ x \in \partial\Omega,\\ \label{hj} h^{\partial\Omega}_j(x) &= \frac{1}{\tau}\int_{t_{j-1}}^{t_j} h^{\partial\Omega}(x,t)\,\mathrm{d} t & \mathrm{for\ }\ x \in \partial\Omega, \end{align} where $t_j = j\tau$ for $j=0,1,\dots, n$. Moreover, we define inductively \begin{equation}\label{cond} c^P_j - c^P_{j-1} = \tau h_j s_j (1-s_j) \ \mbox{ for } \ j=1,\dots, n. \end{equation} We now prove the existence of solutions to \eqref{de1}--\eqref{de2} and derive a series of estimates which will allow us to pass to the limit as $n \to \infty$. We denote by $C$ any positive constant depending possibly on the data and independent of $n$. 
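Before turning to the estimates, it may help to see the coupling of the unknowns in a very crude explicit 1D analogue of the scheme above. The following sketch is ours and purely illustrative: all parameter values are assumed, the wetting curve is taken linear, the convolution \eqref{vj} is replaced by $v\approx q$, the physical constants are omitted as in \eqref{e1}--\eqref{e2}, and no claim of accuracy is made; the analysis below concerns the implicit scheme \eqref{de1}--\eqref{de2}.
\begin{verbatim}
import numpy as np

# Crude explicit 1D sketch of the coupled water/lime system (illustration
# only; all values below are assumptions made for this example).
N  = 100                    # grid cells on (0, 1)
dx = 1.0 / N
dt = 2.0e-5                 # explicit diffusion stability: dt < dx^2 / 2
T  = 0.05

gamma, kappa = 5.0, 0.01    # reaction speed, Ca(OH)2 mobility (assumed)
alpha, beta  = 1.0, 1.0     # boundary permeabilities at x = 0 (assumed)
s_out, h_out = 0.9, 1.0     # outer saturation and lime concentration (assumed)

f  = lambda s: s                    # linear wetting curve p = f(s) (assumed)
df = lambda s: np.ones_like(s)
k  = lambda cP: 1.0 / (1.0 + cP)    # permeability decreasing in the deposit

s, h, cP = np.full(N, 0.1), np.zeros(N), np.zeros(N)

t = 0.0
while t < T:
    # Darcy flux q = -k(cP) f'(s) s_x at the N-1 interior faces
    kf = 0.5 * (k(cP[:-1]) + k(cP[1:])) * 0.5 * (df(s[:-1]) + df(s[1:]))
    q  = -kf * (s[1:] - s[:-1]) / dx

    # boundary fluxes at x = 0 (x = 1 kept impermeable for simplicity):
    # water enters while the outer pressure exceeds the inner one,
    # lime can only enter, proportionally to s and to (h_out - h)^+
    q0  = alpha * (f(s_out) - f(s[0]))
    qh0 = beta * s[0] * max(h_out - h[0], 0.0)

    react = gamma * h * s * (1.0 - s)      # law-of-mass-action rate

    # saturation: s_t + div q = reaction (water is produced)
    div_q = np.diff(np.concatenate(([q0], q, [0.0]))) / dx
    s = np.clip(s - dt * div_q + dt * react, 0.0, 1.0)

    # lime: h_t + div(h v - kappa s h_x) = -reaction, with v ~ q (upwind)
    sf = 0.5 * (s[:-1] + s[1:])
    jh = np.where(q > 0.0, h[:-1], h[1:]) * q - kappa * sf * (h[1:] - h[:-1]) / dx
    div_jh = np.diff(np.concatenate(([qh0], jh, [0.0]))) / dx
    h = np.maximum(h - dt * div_jh - dt * react, 0.0)

    cP += dt * react                       # accumulated CaCO3 deposit
    t  += dt

print("saturation near the active boundary:", s[:5])
print("deposit    near the active boundary:", cP[:5])
\end{verbatim}
The clipping of $s$ to $[0,1]$ and of $h$ to $[0,\infty)$ mimics the bounds which are proved rigorously for the implicit scheme in Lemma \ref{l1} below; for a quantitative 1D experiment we refer to Section \ref{nume}.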
For $\varepsilon >0$ we denote by $H_\varepsilon: \mathbb{R} \to \mathbb{R}$ the function \begin{equation}\label{f1} H_\varepsilon(r) = \left\{ \begin{array}{ll} 0 &\mathrm{for\ } r\le 0,\\[2mm] \frac{r}{\varepsilon} & \mathrm{for\ } r \in (0,\varepsilon),\\[2mm] 1 & \mathrm{for\ } r\ge \varepsilon \end{array} \right. \end{equation} as a Lipschitz continuous regularization of the Heaviside function, and by $\hat H_\varepsilon$ its antiderivative \begin{equation}\label{f2} \hat H_\varepsilon(r) = \left\{ \begin{array}{ll} 0 &\mathrm{for\ } r\le 0,\\[2mm] \frac{r^2}{2\varepsilon} & \mathrm{for\ } r \in (0,\varepsilon),\\[2mm] r - \frac{\varepsilon}{2} & \mathrm{for\ } r\ge \varepsilon \end{array} \right. \end{equation} as a continuously differentiable approximation of the ``positive part'' function. Note that we have $r H_\varepsilon(r) \le 2 \hat H_\varepsilon(r)$ for all $r \in \mathbb{R}$. \begin{lemma}\label{l1} Let Hypothesis \ref{h1} hold. Then for all $n$ sufficiently large there exists a solution $h_j, \, s_j$ of the time-discrete system \eqref{de1}--\eqref{de2} with initial conditions $s_0 = s^0$, $h_0 = h^0$, $c^P_i = 0$ for $i\le 0$, which satisfies the bounds: \begin{equation}\label{dest1} s^\flat \le s_j(x) \le 1 \quad a.\,e.\quad \mathrm{for\ } j=0,1,\dots, n. \end{equation} \begin{equation}\label{dest3} h_j(x) \ge 0 \quad a.\,e.\quad \mathrm{for\ } j=0,1,\dots, n. \end{equation} \end{lemma} \begin{proof} To prove the existence, we proceed by induction. Assume that the solution to \eqref{de1}-\eqref{de2} is available for $i=1, \dots, j-1$ with the properties \eqref{dest1}--\eqref{dest3}. Then Eq.~\eqref{de1} for the unknown $s:= s_j$ is of the form \begin{equation}\label{mono} \begin{aligned} &\int_{\Omega} (a_0(x,s)\phi(x) + a_1(x)\tilde{f}'(s)\nabla s \cdot \nabla\phi(x))\,\mathrm{d} x + \int_{\partial\Omega} \alpha(x)(\tilde f(s) - a_2(x))\phi(x) \,\mathrm{d} S(x) \\ &= \int_{\Omega} a_3(x)\phi(x)\,\mathrm{d} x, \end{aligned} \end{equation} where $$ a_0(x,s) = \frac1\tau s(x) - Q_R(h_{j-1}(x)) s(x) Q_R(1-s(x)) $$ and $a_k$, $k=1,2,3$, are given functions which are known from the previous step $j-1$. For $n>TR^2$ the function $s \mapsto a_0(x,s)$ is increasing. Hence, \eqref{mono} is a monotone elliptic problem, and a unique solution exists by virtue of the Browder-Minty Theorem, see \cite[Theorem 10.49]{rr}. Similarly, Eq.~\eqref{de2} is for the unknown function $h := h_j$ of the form \begin{align}\nonumber &\int_{\Omega} (a_4(x)h\psi(x) + (a_5(x)\nabla h - a_6(x) h) \cdot \nabla\psi(x))\,\mathrm{d} x -\int_{\partial\Omega} \beta(x) a_7(x)(a_8(x)- h)^+\psi(x)\,\mathrm{d} S(x)\\ \label{mono2} &\qquad = \int_{\Omega} a_9(x)\psi(x)\,\mathrm{d} x, \end{align} which we can solve in an elementary way in two steps. First, we consider the PDE \begin{align}\nonumber &\int_{\Omega} (a_4(x)h\psi(x) + (a_5(x)\nabla h - a_6(x) w) \cdot \nabla\psi(x))\,\mathrm{d} x -\int_{\partial\Omega} \beta(x) a_7(x)(a_8(x)- h)^+\psi(x)\,\mathrm{d} S(x)\\ \label{mono3} &\qquad = \int_{\Omega} a_9(x)\psi(x)\,\mathrm{d} x \end{align} with a given function $w \in L^2(\Omega)$. Here again the functions $a_k$, $k=4,\dots,9$, are known. We find a solution $h$ to \eqref{mono3} once more by the Browder-Minty Theorem. 
Since $a_4(x) \ge \frac1\tau$, $a_5(x) = s_{j-1}(x) \ge s^\flat$, and $a_6(x) = v_{j-1}^R \in [-R,R]$, we see that for $n > TR^2/2s^\flat$, the mapping which with $w$ associates $h$ is a contraction on $L^2(\Omega)$, and the solution to \eqref{mono2} is obtained from the Banach Contraction Principle. To derive the bounds for the solution, we first test \eqref{de1} by $\phi = H_\varepsilon(s_j-1)$ (or any other function of the form $g(s_j - 1)$ with $g$ Lipschitz continuous, nondecreasing, and such that $g(s) = 0$ for $s \le 0$). The right-hand side identically vanishes, whereas the boundary term and $\nabla s_j\cdot \nabla H_\varepsilon(s_j-1)$ are nonnegative, which yields that $$ \int_{\Omega} (s_j - s_{j-1})H_\varepsilon(s_j - 1)\,\mathrm{d} x \le 0. $$ From the convexity of $\hat H_\varepsilon$ we obtain that $(s_j - s_{j-1})H_\varepsilon(s_j - 1) \ge \hat H_\varepsilon (s_j-1)- \hat H_\varepsilon (s_{j-1}-1)$, hence, $$ \int_{\Omega} \hat H_\varepsilon (s_j-1) \,\mathrm{d} x \le \int_{\Omega} \hat H_\varepsilon (s_{j-1}-1) \,\mathrm{d} x. $$ We have by hypothesis $s_{j-1} \le 1$ a.\,e., and by induction we get \begin{equation}\label{dest1-r} s_j(x) \le 1 \quad a.\,e.\quad \mathrm{for\ } j=0,1,\dots, n. \end{equation} We further test \eqref{de1} by $\phi = -H_\varepsilon(s^\flat-s_j)$. Then both the boundary term and the elliptic term give a nonnegative contribution, and using again the convexity of $\hat H_\varepsilon$ we have \begin{align*} \frac1\tau \int_{\Omega} (\hat H_\varepsilon (s^\flat-s_j) - \hat H_\varepsilon (s^\flat-s_{j-1})) \,\mathrm{d} x &\le \int_{\Omega} -s_j H_\varepsilon(s^\flat-s_j) Q_R(h_{j-1})Q_R(1-s_{j}) \,\mathrm{d} x\\ &\le \int_{\Omega} (s^\flat-s_j) H_\varepsilon(s^\flat-s_j) Q_R(h_{j-1}) Q_R(1-s_j) \,\mathrm{d} x\\ &\le 2R^2\int_{\Omega} \hat H_\varepsilon (s^\flat-s_j)\,\mathrm{d} x, \end{align*} and from the induction hypothesis we get for $n >2TR^2$ that \begin{equation}\label{dest2} s_j(x) \ge s^\flat \quad a.\,e.\quad \mathrm{for\ } j=0,1,\dots, n. \end{equation} We have in particular $Q_R(1-s_j) = 1-s_j$ for $R\ge 1$ as well as $\tilde{f} = f$. Test \eqref{de2} by $\psi = -H_\varepsilon(-h_j)$. Then \begin{align*} &\frac{1}{\tau} \int_{\Omega} (\hat H_\varepsilon (-h_j) - \hat H_\varepsilon (-h_{j-1}))\,\mathrm{d} x + \int_{\Omega} s^\flat H'_\varepsilon(-h_j)|\nabla h_j|^2 \,\mathrm{d} x \le \int_{\Omega} h_j H'_\varepsilon(-h_j)v^R_{j-1}\cdot \nabla h_j \,\mathrm{d} x\\ &\qquad \le \frac{s^\flat}2\int_{\Omega} H'_\varepsilon(-h_j)|\nabla h_j|^2 \,\mathrm{d} x + \frac1{2s^\flat} \int_{\Omega} h_j^2 |v^R_{j-1}|^2 H'_\varepsilon(-h_j)\,\mathrm{d} x \end{align*} We have $0 \le h_j^2 H'_\varepsilon(-h_j) \le \varepsilon$ and $\lim_{\varepsilon \to 0} \hat H_\varepsilon(-h_j) = (-h_j)^+$, hence, passing to the limit as $\varepsilon \to 0$, by induction we get \eqref{dest3}. \end{proof} \begin{lemma}\label{2} Let Hypothesis \ref{h1} hold. Then $h_j$ satisfies the following estimate \begin{equation}\label{dest4} \int_{\Omega} h_j(x)\,\mathrm{d} x \le C \end{equation} independently of $j=0,1, \dots, n$. \end{lemma} \begin{proof} Test \eqref{de2} by $\psi = 1$. Note that the boundary term is bounded above by a multiple of $h^\sharp$ and the right-hand side is negative or zero, so that $$ \frac{1}{\tau} \int_{\Omega} (h_j - h_{j-1}) \,\mathrm{d} x \le \hat{C}, $$ that is, $$ \int_{\Omega} h_j\,\mathrm{d} x \le \tau\hat{C} + \int_{\Omega} h_{j-1}\,\mathrm{d} x. 
$$ Summing up over $j=1, \dots, j^*$ we get $$ \int_{\Omega} h_{j^*}(x)\,\mathrm{d} x \le \int_{\Omega} h_{0}(x)\,\mathrm{d} x + T\hat{C}, $$ for an arbitrary $j^*$, which yields \eqref{dest4}. \end{proof} The main issue will be a uniform upper bound for $h_j$ which will be obtained by a time-discrete variant of the Moser-Alikakos iteration technique presented in \cite{ali}. We start from some preliminary integral estimates of $s_j$. \begin{lemma}\label{3} Let Hypothesis \ref{h1} hold. Then $s_j$ satisfy the bounds \begin{equation}\label{dest5} \tau\sum_{j=1}^{n}\int_{\Omega} |\nabla s_j(x)|^2\,\mathrm{d} x +\tau\sum_{j=1}^{n}\int_{\partial\Omega} \alpha (x) s_j^2(x)\, dS(x)\le C, \end{equation} \begin{equation}\label{dest6} \frac{1}{\tau} \sum_{j=1}^{n} \int_{\Omega} |s_j - s_{j-1}|^2 \,\mathrm{d} x + \max_{j=1, \dots, n} \int_{\Omega} |\nabla s_j(x)|^2 \,\mathrm{d} x \le C \left(1+ \tau\sum_{j=0}^{n}\int_{\Omega} h^2_j(x)\,\mathrm{d} x \right). \end{equation} \end{lemma} \begin{proof} Test \eqref{de1} by $\phi = s_j$. From \eqref{dest4} we then obtain: \begin{align*} &\frac1{2\tau} \int_{\Omega} (s_j^2 - s_{j-1}^2) \,\mathrm{d} x + \frac{f^\flat}{2}\int_{\partial\Omega} \alpha (x) s_j^2(x)\, dS(x)+ k^\flat f^\flat\, \int_{\Omega} |\nabla s_j(x)|^2\,\mathrm{d} x \\ &\le C\int_{\partial\Omega} \alpha (x)\, dS(x)+ \int_{\Omega} h_{j-1}\,\mathrm{d} x \le C. \end{align*} Taking the sum with respect to $j=1,\dots, n$ yields $$ \tau\sum_{j=1}^{n}\int_{\Omega} |\nabla s_j(x)|^2\,\mathrm{d} x + \tau\sum_{j=1}^{n}\int_{\partial\Omega} \alpha (x) s_j^2(x)\, dS(x) \le C + \int_{\Omega} s_0^2 \,\mathrm{d} x \le C, $$ which is precisely \eqref{dest5}. Let us prove \eqref{dest6} now. Test \eqref{de1} by $\phi = f(s_j)- f(s_{j-1})$. Then \begin{align} \nonumber &\frac{f^\flat}{\tau} \int_{\Omega} |s_j - s_{j-1}|^2 \,\mathrm{d} x + \frac12 \int_{\Omega} \left(k(c^P_{j-1}) |\nabla f(s_j)|^2 - k(c^P_{j-2}) |\nabla f(s_{j-1})|^2\right)\,\mathrm{d} x\\ \nonumber &\qquad\qquad + \frac12 \int_{\partial\Omega} \alpha(x) (f^2(s_j) - f^2(s_{j-1})) \,\mathrm{d} S(x)\\ \nonumber & \qquad \le \frac12 \int_{\Omega} (k(c^P_{j-1}) - k(c^P_{j-2})) |\nabla f(s_{j-1})|^2\,\mathrm{d} x + \int_{\partial\Omega} \alpha(x) (f(s_j)f(s_j^{\partial\Omega}) - f(s_{j-1})f(s_{j-1}^{\partial\Omega})) \,\mathrm{d} S(x) \\ \nonumber &\qquad\qquad + \left( \int_{\partial\Omega} \alpha(x) f^2(s_{j-1})\,\mathrm{d} S(x)\right)^{1/2}\, \left( \int_{\partial\Omega} \alpha(x) |f(s_{j-1}^{\partial\Omega}) - f(s_{j}^{\partial\Omega})|^2 \,\mathrm{d} S(x)\right)^{1/2} \\ \label{dest6a} &\qquad\qquad + \int_{\Omega} h_{j-1}(f(s_{j})-f(s_{j-1}))\,\mathrm{d} x. \end{align} The function $k$ is nonincreasing and the sequence $\{c^P_j\}$ is nondecreasing, so the first integral in the right-hand side of \eqref{dest6a} is negative. By Hypotheses \ref{h1}\,(i),(iii) and \eqref{sj} we further have $$ \frac1\tau \sum_{j=1}^{n}\int_{\partial\Omega} \alpha(x) |f(s_{j}^{\partial\Omega}) - f(s_{j-1}^{\partial\Omega})|^2 \,\mathrm{d} S(x) \le C, $$ and from \eqref{dest5} it follows that $$ \tau \sum_{j=1}^{n}\int_{\partial\Omega} \alpha(x) f^2(s_{j-1})\,\mathrm{d} S(x) \le C, $$ hence, by H\"older's inequality for sums, $$ \sum_{j=1}^{n}\left( \int_{\partial\Omega} \alpha(x) f^2(s_{j-1})\,\mathrm{d} S(x)\right)^{1/2}\, \left( \int_{\partial\Omega} \alpha(x) |f(s_{j-1}^{\partial\Omega}) - f(s_{j}^{\partial\Omega})|^2 \,\mathrm{d} S(x)\right)^{1/2} \le C. 
$$ Applying Young's inequality to the last integral in the right hand side we similarly have $$ \int_{\Omega} h_{j-1}(f(s_{j})-f(s_{j-1}))\,\mathrm{d} x \le C\tau \int_{\Omega} h^2_{j-1}\,\mathrm{d} x + \frac {f^\flat}{2\tau}\int_{\Omega} |s_{j}-s_{j-1}|^2\,\mathrm{d} x. $$ Hence, taking the sum with respect to $j$ and using again \eqref{dest5}, we get \eqref{dest6}. \end{proof} \begin{remark} As a consequence of the definition of $v_j$ and $v^R_j$, we get for all $j=1, \dots, n$ that \begin{equation}\label{dest7} \mathop{\rm sup\,ess\,}_{x\in \Omega}|v^R_j(x)| \le \mathop{\rm sup\,ess\,}_{x\in \Omega}|v_j(x)| \le C\left(1 + \left(\int_{\Omega}|\nabla s_j(x)|^2\,\mathrm{d} x\right)^{1/2}\right). \end{equation} \end{remark} \begin{lemma}\label{3a} Let Hypothesis \ref{h1} hold. Then $h_j$ satisfies the bound \begin{equation}\label{dest8} \max_{j=1, \dots, n}\int_{\Omega}|h_j(x)|^2\,\mathrm{d} x + \tau \sum_{j=1}^{n}\int_{\Omega}|\nabla h_j(x)|^2 \,\mathrm{d} x \le C. \end{equation} \end{lemma} \begin{proof} Test \eqref{de2} by $\psi = h_j$. Note that the boundary term is bounded above by a multiple of $(h^\sharp)^2$. Then \begin{equation}\label{dstep8} \frac1{2\tau} \int_{\Omega} (h_j^2 - h_{j-1}^2)\,\mathrm{d} x + s^\flat \int_{\Omega} |\nabla h_j(x)|^2 \,\mathrm{d} x \le C + K_j \end{equation} with $K_j := \int_{\Omega} h_jv^R_{j-1}\cdot\nabla h_j\,\mathrm{d} x$. The evaluation of this integral constitutes the most delicate part of the argument. For simplicity, we denote by $|\cdot|_r$ the norm in $L^r(\Omega)$ for $1 \le r \le \infty$. We first notice that by H\"older's inequality and \eqref{dest7} we have $$ K_j \le C(1+ |h_j|_2|\nabla s_{j-1}|_2|\nabla h_j|_2). $$ Let us recall the Gagliardo-Nirenberg inequality for functions $u \in W^{1,p}(\Omega)$ on bounded Lipschitzian domains $\Omega \subset \mathbb{R}^N$ in the form \begin{equation}\label{gn1} |u|_q \le C\left(|u|_s + |u|_s^{1-\nu}|\nabla u|_p^{\nu}\right) \end{equation} which goes back to \cite{gag,nir} and holds for every $s\le p \le q$ such that $\frac1q \ge \frac1p - \frac1N$, where \begin{equation}\label{gn2} \nu = \frac{\frac1s - \frac1q}{\frac1N + \frac1s - \frac1p}. \end{equation} In our case we have \begin{equation}\label{gn} |h_j|_2 \le C\left(|h_j|_1 + |h_j|_1^{1-\nu}|\nabla h_j|_2^{\nu}\right) \end{equation} with $\nu = 3/5$. Hence, by \eqref{dest3} and \eqref{dest4}, $$ K_j \le C\left(1+ |\nabla s_{j-1}|_2|\nabla h_j|_2^{8/5}\right). $$ Using H\"older's inequality once again we obtain $$ \tau \sum_{j=1}^{n} K_j \le C\left(1+ \left(\tau \sum_{j=1}^{n}|\nabla s_{j-1}|_2^5 \right)^{1/5} \left(\tau \sum_{j=1}^{n} |\nabla h_j|_2^2\right)^{4/5}\right). $$ We have $$ \tau \sum_{j=1}^{n}|\nabla s_{j-1}|_2^5 \le \tau \max_{j=0, \dots, n} |\nabla s_j|_2^3 \sum_{j=0}^{n}|\nabla s_j|_2^2, $$ and \eqref{dest5}--\eqref{dest6} together with \eqref{gn} yield $$ \tau \sum_{j=1}^{n}|\nabla s_{j-1}|_2^5 \le C\left(1+ \left(\tau \sum_{j=1}^{n}|h_j|_2^2\right)^{3/2}\right) \le C\left(1+ \left(\tau \sum_{j=1}^{n}|\nabla h_j|_2^2\right)^{9/10}\right), $$ so that $$ \tau \sum_{j=1}^{n} K_j \le C\left(1+ \left(\tau \sum_{j=1}^{n}|\nabla h_j|_2^2\right)^{49/50}\right), $$ and we conclude by summing up over $j$ in \eqref{dstep8} that \eqref{dest8} is true. 
\end{proof} \begin{corollary} As an immediate consequence of \eqref{dest8} and of \eqref{dest6}, \eqref{dest7} we obtain \begin{align} \label{dest8b} \frac{1}{\tau} \sum_{j=1}^{n} \int_{\Omega} |s_j - s_{j-1}|^2 \,\mathrm{d} x + \max_{j=1, \dots, n} \int_{\Omega} |\nabla s_j(x)|^2 \,\mathrm{d} x \le C,\\ \label{dest8a} \max_{j=1, \dots, n}\mathop{\rm sup\,ess\,}_{x \in \Omega}|v_j(x)| \le C\left(1 + \max_{j=1, \dots, n} \left(\int_{\Omega} |\nabla s_j(x)|^2 \,\mathrm{d} x\right)^{\frac 12} \right) \le C \end{align} with a constant $C$ independent of $R$ and $n$. \end{corollary} \begin{corollary} The following estimate is a direct consequence of the inequality \eqref{dest8} \begin{equation}\label{dest9} \sum_{j=1}^{n}\int_{\Omega} |h_j - h_{j-1}|^2 \,\mathrm{d} x \le C. \end{equation} \end{corollary} \begin{proof} To get it we test \eqref{de2} by $\psi = h_j - h_{j-1}$. On the left-hand side we keep the term $$ \frac1{\tau}\int_{\Omega} |h_j - h_{j-1}|^2 \,\mathrm{d} x $$ and move all the other terms to the right-hand side. Thanks to \eqref{dest1} and \eqref{dest8a}, the right-hand side contains only quadratic terms in $h_j$, $h_{j-1}$, $\nabla h_j$, $\nabla h_{j-1}$. The boundary term can be estimated using the trace theorem, so that we get an inequality of the form \begin{equation}\label{dstep9} \frac1{\tau}\int_{\Omega} |h_j - h_{j-1}|^2 \,\mathrm{d} x \le C\left (1+\int_{\Omega} \left(|h_j^2| + |h_{j-1}|^2 + |\nabla h_j|^2 + |\nabla h_{j-1}|^2\right) \,\mathrm{d} x \right ), \end{equation} and it suffices to apply \eqref{dest8}. \end{proof} The next lemma shows global boundedness of $h_j$ by means of the Moser-Alikakos iteration technique. \begin{lemma} Let Hypothesis \ref{h1} hold. Then $h_j$ satisfies the bound: \begin{equation}\label{dest10} \max_{j=1, \dots, n}\mathop{\rm sup\,ess\,}_{x \in \Omega}|h_j(x)| \le C. \end{equation} \end{lemma} \medskip \begin{proof} Consider a convex increasing function $g:[0,\infty) \to [0,\infty)$ with linear growth, and test \eqref{de2} by $\psi = g(h_j)$. We define $$ G(h) = \int_0^h g(u)\,\mathrm{d} u, \quad \Gamma(h) = \int_0^h \sqrt{g'(u)} \,\mathrm{d} u. $$ This yields $$ \frac1\tau \int_{\Omega} (G(h_j) - G(h_{j-1}))\,\mathrm{d} x + s^\flat \int_{\Omega} |\nabla \Gamma(h_j)|^2\,\mathrm{d} x \le Cg(C) + C\int_{\Omega} h_j g'(h_j)|\nabla h_j| \,\mathrm{d} x, $$ hence, \begin{equation}\label{ge1} \max_{j=1, \dots, n} \int_{\Omega} G(h_j) \,\mathrm{d} x + \tau \sum_{j=1}^{n} \int_{\Omega} |\nabla \Gamma(h_j)|^2\,\mathrm{d} x \le G(C) + Cg(C) + C\tau \sum_{j=1}^{n}\int_{\Omega} h_j^2 g'(h_j) \,\mathrm{d} x \end{equation} with a constant $C$ independent of $n$ and of the choice of the function $g$. We now make a particular choice $g = g_{M,k}$ depending on two parameters $M>1$ and $k>0$, namely $$ g_{M,k}(h) = \left\{ \begin{array}{ll} \frac{1}{2k+1}h^{2k+1} & \mathrm{for\ } 0\le h \le M,\\[2mm] \frac{1}{2k+1}M^{2k+1} + M^{2k}(h-M) & \mathrm{for\ } h>M. \end{array} \right. $$ Then \begin{align*} g_{M,k}'(h) &= \min\{h, M\}^{2k} = \left\{ \begin{array}{ll} h^{2k} & \mathrm{for\ } 0\le h \le M,\\[2mm] M^{2k} & \mathrm{for\ } h>M, \end{array} \right.\\[3mm] G_{M,k}(h) &= \left\{ \begin{array}{ll} \frac{1}{(2k+2)(2k+1)}h^{2k+2} & \mathrm{for\ } 0\le h \le M,\\[2mm] \frac{1}{(2k+2)(2k+1)}M^{2k+2}+ \frac{1}{2k+1}M^{2k+1}(h-M) +\frac{1}2 M^{2k}(h-M)^2 & \mathrm{for\ } h>M, \end{array} \right.\\[3mm] \Gamma_{M,k}(h) &= \left\{ \begin{array}{ll} \frac{1}{k+1} h^{k+1} & \mathrm{for\ } 0\le h \le M,\\[2mm] \frac{1}{k+1} M^{k+1}+ M^{k}(h-M)& \mathrm{for\ } h>M. 
\end{array} \right. \end{align*} Note that for all $h\ge 0$, $M>0$ and $k > 0$ we have \begin{equation}\label{ge1-1} G_{M,k}(h) \le \Gamma_{M,k}^2(h) \le 4 G_{M,k}(h), \ \ h^2 g_{M,k}'(h) \le (k+1)^2 \Gamma_{M,k}^2(h), \ \ h g_{M,k}(h) \le (k+1) \Gamma_{M,k}^2(h). \end{equation} It thus follows from \eqref{ge1} that \begin{align}\nonumber &\max_{j=1, \dots, n} \int_{\Omega} \Gamma_{M,k}^2(h_j) \,\mathrm{d} x + \tau \sum_{j=1}^{n} \int_{\Omega} |\nabla \Gamma_{M,k}(h_j)|^2\,\mathrm{d} x\\ \label{ge2} &\qquad \le C \left((k+1)^2 \Gamma_{M,k}^2(C)+\tau \sum_{j=1}^{n}\int_{\Omega} h_j^2 g_{M,k}'(h_j) \,\mathrm{d} x\right) \end{align} with a constant $C$ independent of $k$ and $M$. We now apply again the Gagliardo-Nirenberg inequality \eqref{gn1} in the form $$ |u|_q \le C\left(|u|_2 + |u|_2^{1-\nu}|\nabla u|_2^\nu \right) $$ to the function $u=\Gamma_{M,k}(h_j)$, with $q = 10/3$ and $\nu = 3/5$. From \eqref{ge1-1}--\eqref{ge2} it follows that \begin{align}\nonumber &\frac{1}{k+1} \left(\tau \sum_{j=1}^{n}\left|h_j \sqrt{g_{M,k}'(h_j)}\right|_q^q\right)^{1/q} \le \left(\tau \sum_{j=1}^{n}|\Gamma_{M,k}(h_j)|_q^q\right)^{1/q}\\ \label{ge3} &\qquad \le C\max\left\{(k+1)^2\Gamma_{M,k}^2(C),\tau \sum_{j=1}^{n}\int_{\Omega} h_j^2 g_{M,k}'(h_j) \,\mathrm{d} x \right\}^{1/2}. \end{align} Let us start with $k=0$. The right-hand side of \eqref{ge3} is bounded independently of $M$ as a consequence of \eqref{dest8}. We can therefore let $M\to \infty$ in the left-hand side of \eqref{ge3} and obtain $$ \tau \sum_{j=1}^{n}|h_j|_q^q < \infty. $$ We continue by induction and put $\omega_i := (q/2)^i$ for $i \in \mathbb{N}$. Assuming that $$ \tau \sum_{j=1}^{n}|h_j|_{2\omega_i}^{2\omega_i} < \infty $$ for some $i \in \mathbb{N}$ (which we have just checked for $i=1$) we can estimate the right-hand side of \eqref{ge3} for $k= \omega_i -1$ independently of $M$, and letting $M\to \infty$ in the left-hand side we conclude that \begin{align*} \frac{1}{\omega_i}\left(\tau\sum_{j=1}^n|h_j|_{2\omega_{i+1}}^{2\omega_{i+1}}\right)^{\omega_i/2\omega_{i+1}} &\le C\max\left\lbrace\omega_i^2\Gamma_{M,\omega_i-1}^2(C),\tau\sum_{j=1}^{n}|h_j|_{2\omega_i}^{2\omega_i}\right\rbrace^{1/2}\\ &\le C\max\left\lbrace C^{2\omega_i},\tau\sum_{j=1}^{n}|h_j|_{2\omega_i}^{2\omega_i} \right\rbrace^{1/2}. \end{align*} which implies that \begin{equation}\label{ge4} \left(\tau\sum_{j=1}^{n}|h_j|_{2\omega_{i+1}}^{2\omega_{i+1}}\right)^{1/{2\omega_{i+1}}} \le (C\omega_i)^{1/\omega_i}\max\left\lbrace C,\left(\tau\sum_{j=1}^{n}|h_j|_{2\omega_i}^{2\omega_i}\right)^{1/2\omega_i} \right\rbrace \end{equation} with a constant $C>0$ independent of $n$ and $i$. For $i\in \mathbb{N}$ set $$ X_i := \max\left\lbrace C,\left(\tau \sum_{j=1}^{n}|h_j|_{2\omega_i}^{2\omega_i}\right)^{1/2\omega_i}\right\rbrace, $$ and $\Lambda_i = \log X_i$. From \eqref{ge4} it follows that $\Lambda_{i+1} \le \frac{1}{\omega_i}\log(C\omega_i) + \Lambda_i$. Summing up over $i \in \mathbb{N}$ we obtain \begin{equation}\label{ge5} \left(\tau \sum_{j=1}^{n}|h_j|_{2\omega_i}^{2\omega_i}\right)^{1/2\omega_i} \le \tilde{C} \end{equation} with a constant $\tilde{C}$ independent of $i$. The statement now follows in a standard way. For $\varepsilon>0$ and $j=1, \dots, n$ put $\Omega_{j,\varepsilon}:= \{x \in\Omega : |h_j(x)|\geq \tilde{C}+\varepsilon\}$, where $\tilde{C}$ is the constant from \eqref{ge5}. 
Then $$ \int_{\Omega} |h_j(x)|^{2\omega_i}\,\mathrm{d} x \ge |\Omega_{j,\varepsilon}|(\tilde{C}+\varepsilon)^{2\omega_i}, $$ so that $$ \tilde{C}^{2\omega_i} \ge \tau \sum_{j=1}^{n}\int_{\Omega}|h_j(x)|^{2\omega_i}\,\mathrm{d} x \ge \left(\tau \sum_{j=1}^{n}|\Omega_{j,\varepsilon}|\right)(\tilde{C}+\varepsilon)^{2\omega_i}. $$ Letting $i \to \infty$ we thus obtain $$ \tau \sum_{j=1}^{n}|\Omega_{j,\varepsilon}| \le \lim_{i\to\infty} \left(\frac{\tilde{C}}{\tilde{C}+\varepsilon}\right)^{2\omega_i} = 0. $$ Passing to the limit as $\varepsilon \to 0$, we obtain \eqref{dest10}. \end{proof} \section{Limit as $n \to \infty$}\label{limi} We now construct piecewise linear and piecewise constant interpolations of the sequences $\{s_j\}$, $\{h_j\}$ constructed in Section \ref{time}. Since we plan to let the discretization parameter $n$ tend to $\infty$, we denote them by $\{s_j^{(n)}\}$, $\{h_j^{(n)}\}$ to emphasize the dependence on $n$. For $x \in \Omega$ and $t\in ((j-1)\tau, j\tau]$, $j=1, \dots, n$ set \begin{equation}\label{le1} \begin{aligned} \bar s^{(n)}(x,t) &= s_j^{(n)}(x),\\[2mm] \underline s^{(n)}(x,t) &= s_{j-1}^{(n)}(x),\\[2mm] \hat s^{(n)}(x,t) &= s_{j-1}^{(n)}(x) + \frac{t- (j-1)\tau}{\tau} (s_j^{(n)}(x) - s_{j-1}^{(n)}(x)), \end{aligned} \end{equation} and similarly for $\bar h^{(n)}, \hat h^{(n)}, \underline h^{(n)}, \underline v^{(n)}, \underline c^{P,(n)}, \bar s^{\partial\Omega, (n)}, \bar h^{\partial\Omega, (n)}$ etc. By virtue of the above estimates we can choose $R$ sufficiently large, so that the truncations are never active, and we can rewrite the system \eqref{de1}--\eqref{de2} in the form \begin{align}\nonumber &\int_{\Omega} (\hat s^{(n)}_t\phi(x) + k(\underline c^{P,(n)}) f'(\bar s^{(n)}) \nabla \bar s^{(n)}\cdot\nabla \phi(x))\,\mathrm{d} x + \int_{\partial\Omega} \alpha(x)(f(\bar s^{(n)}) - f(\bar s^{\partial\Omega, (n)}))\phi(x) \,\mathrm{d} S(x)\\ \label{ne1} &\qquad = \int_{\Omega} \underline h^{(n)} \bar s^{(n)} (1-\bar s^{(n)})\phi(x)\,\mathrm{d} x,\\ \nonumber &\int_{\Omega} (\hat h^{(n)}_t\psi(x) + (\underline s^{(n)}\nabla \bar h^{(n)} - \bar h^{(n)} \underline v^{(n)})\cdot\nabla\psi(x))\,\mathrm{d} x - \int_{\partial\Omega} \beta(x)\underline s ({\overline h}^{\partial\Omega, (n)} - \bar h^{(n)})^+\psi(x)\,\mathrm{d} S(x)\\ \label{ne2} &\qquad = - \int_{\Omega} {\bar h^{(n)}} \underline s^{(n)}(1-\underline s^{(n)}) \psi(x)\,\mathrm{d} x. \end{align} The estimates derived in Section \ref{time} imply the following bounds independent of $n$: \begin{itemize} \item $\hat h^{(n)}$ are bounded in $L^2(0,T; V)$; \item $\hat s^{(n)}$ are bounded in $L^\infty(0,T; V)$; \item $\hat s^{(n)}_t$ are bounded in $L^2(\Omega\times (0,T))$, \item $\hat h^{(n)}_t$ are bounded in $L^2(0,T; V')$; \item $\hat h^{(n)}, \hat s^{(n)}$ are bounded in $L^\infty(\Omega\times (0,T))$. \end{itemize} The bound for $\hat h^{(n)}_t$ in $L^2(0,T; V')$ follows by comparison in \eqref{ne2}. Indeed, choosing arbitrary test functions $\psi \in V$ and $\zeta \in L^2(0,T)$ in \eqref{ne2}, we obtain from the above estimates that the inequality $$ \int_{\Omega} \hat h^{(n)}_t(x,t)\psi(x)\zeta(t) \,\mathrm{d} x \le C(1 + |\nabla \bar h^{(n)}(t)|_2)|\psi|_V|\zeta(t)| $$ holds for a.\,e. $t \in (0,T)$. Integrating over $(0,T)$ and owing to estimate \eqref{dest8}, we obtain the assertion. 
By the Aubin-Lions Compactness Lemma (\cite[Theorem 5.1]{li}) we can find a subsequence (still labeled by $(n)$ for simplicity) and functions $s,h$ such that \begin{itemize} \item $\hat h^{(n)}\to h$, $\hat s^{(n)}\to s$ strongly in $L^p(\Omega\times (0,T))$ for every $1\le p < \infty$; \item $\nabla \hat h^{(n)}\to \nabla h$ weakly in $L^2(\Omega\times (0,T); \mathbb{R}^3)$; \item $\nabla\hat s^{(n)}\to \nabla s$ weakly-* in $L^\infty(0,T; L^2(\Omega; \mathbb{R}^3))$. \end{itemize} In fact, the Aubin-Lions Lemma guarantees only compactness in $L^2(\Omega\times (0,T))$. Compactness in $L^p(\Omega\times (0,T))$ for $p>2$ follows from the fact that the functions are bounded in $L^\infty(\Omega\times (0,T))$ by a constant $K>0$, so that, for example, $$ \int_0^T \int_{\Omega} |\hat s^{(n)}(x,t) - s(x,t)|^p \,\mathrm{d} x\,\mathrm{d} t \le (2K)^{p-2}\int_0^T \int_{\Omega} |\hat s^{(n)}(x,t) - s(x,t)|^2 \,\mathrm{d} x\,\mathrm{d} t. $$ Note that for $t\in ((j-1)\tau, j\tau]$ we have $$ |\bar s^{(n)}(x,t) - \hat s^{(n)}(x,t)| \le |s_j^{(n)}(x) - s_{j-1}^{(n)}(x)|, $$ hence, $$ \int_0^T\int_{\Omega} |\bar s^{(n)}(x,t) - \hat s^{(n)}(x,t)|^2\,\mathrm{d} x\,\mathrm{d} t \le \tau \sum_{j=1}^n \int_{\Omega} |s_j^{(n)}(x) - s_{j-1}^{(n)}(x)|^2 \,\mathrm{d} x \le C\tau^2 $$ by virtue of \eqref{dest6} and \eqref{dest8}. Similarly, $$ \int_0^T\int_{\Omega} |\bar h^{(n)}(x,t) - \hat h^{(n)}(x,t)|^2\,\mathrm{d} x\,\mathrm{d} t \le \tau \sum_{j=1}^n \int_{\Omega} |h_j^{(n)}(x) - h_{j-1}^{(n)}(x)|^2 \,\mathrm{d} x \le C\tau $$ by virtue of \eqref{dest9}. The same estimates hold for the differences $\underline s^{(n)} - \hat s^{(n)}$, $\underline h^{(n)} - \hat h^{(n)}$. We conclude that \begin{itemize} \item $\bar h^{(n)}\to h$, $\bar s^{(n)}\to s$ strongly in $L^p(\Omega\times (0,T))$ for every $1\le p < \infty$; \item $\underline h^{(n)}\to h$, $\underline s^{(n)}\to s$ strongly in $L^p(\Omega\times (0,T))$ for every $1\le p < \infty$; \item $\nabla \bar h^{(n)}\to \nabla h$ weakly in $L^2(\Omega\times (0,T); \mathbb{R}^3)$; \item $\nabla\bar s^{(n)}\to \nabla s$ weakly-* in $L^\infty(0,T; L^2(\Omega; \mathbb{R}^3))$. \end{itemize} As a by-product of the arguments in \cite[Proof of Theorem 4.2, p.~84]{ne}, we can derive the trace embedding formula $$ \int_{\partial\Omega} |u|^2 \,\mathrm{d} S(x) \le C (|u|_2^2 + |u|_2|\nabla u|_2) $$ which holds for every function $u \in V$. Consequently, we obtain strong convergence also in the boundary terms \begin{itemize} \item $\bar h^{(n)}\big|_{\partial\Omega}\to h\big|_{\partial\Omega}$, $\bar s^{(n)}\big|_{\partial\Omega}\to s\big|_{\partial\Omega}$ strongly in $L^2(\partial\Omega\times (0,T))$. \end{itemize} We can therefore pass to the limit as $n \to \infty$ in all terms in \eqref{ne1}--\eqref{ne2} and check that $s,h$ are solutions of \eqref{e01}--\eqref{e02} modulo the physical constants provided $R$ is chosen bigger than the constants $C$ in \eqref{dest8a} and \eqref{dest10}. 
\section{Numerical test}\label{nume} \begin{figure}[htb] \label{fig1} \begin{center} \includegraphics[width=16.4cm]{obr2b.pdf} \caption{Numerical simulations for the system \eqref{n01}--\eqref{n06}.} \end{center} \end{figure} In order to illustrate the behavior of the solution, we propose a simplified 1D model with $\Omega = [0,1]$ described by the system \begin{align} \label{n01} \rho^W s_t(x,t) - k s_{xx}(x,t) &= \gamma m^W {h} s(1-s),\\ \label{n02} \rho^H h_t(x,t) - (\kappa s h_x - \rho^H hv)_x &= -\gamma m^H {h} s(1-s), \end{align} for $x \in (0,L)$ and $t \in (0,T)$, with boundary conditions \begin{align} \label{n03} k s_x(0,t) &= \alpha (s(0,t)-\bar s(0,t)),\\ \label{n04} \kappa s h_x(0,t) - \rho^H h(0,t)v(0,t) &= -\beta s(0,t) (\bar h(0,t)- h(0,t))^+,\\ \label{n05} k s_x(1,t) &= -\alpha s(1,t),\\ \label{n06} \kappa s h_x(1,t) - \rho^H h(1,t)v(1,t) &= 0, \end{align} with some constants $\alpha>0, \beta >0$. The data are chosen so as to model the following situation: We start with initial conditions $s^0 = s^\flat$, $h^0 = 0$, and in the time interval $[0,T/4]$, we choose $\bar s = 1$ and $\bar h = 1$. This corresponds to the process of filling the structure with lime water solution until the time $t=T/4$. Then, at time $t=T/4$ we start the process of drying by switching $\bar s$ to $s^\flat$ and $\bar h$ to $0$. With these boundary data, we let the process run in the time interval $[T/4, T]$. Figure 1 shows the spatial distributions across the profile $x \in [0,1]$ at successive times $t=T/4, T/2, 3T/4, T$. We have chosen a finer mesh size near the origin, where the solution exhibits higher gradients. High concentration of $Ca C O_3$ near the active boundary $x=0$ exactly corresponds to the measurements shown, e.\,g., in \cite{blhvs3,sli}. The parameters of our model cannot be easily taken from the available measurements, and a complicated identification procedure would be necessary. This is beyond the scope of this paper, whose purpose is to present a model to be validated by numerical simulations. For this qualitative study we have therefore chosen fictitious parameters with simple numerical values $\rho^W = \rho^H = m^W = m^H = \alpha = \beta = 1, s^\flat = 0, k=2\cdot 10^{-4}, \kappa=10^{-3}$ and $\gamma=10^{-2}$. The final time $T$ is determined by the number of time steps which are necessary to reach approximate equilibrium. In fact, the question of asymptotic stabilization for large times will be a subject of a subsequent study. \section*{Acknowledgments} The authors wish to thank Zuzana Sl\'{\i}\v zkov\'a and Milo\v s Drd\'ack\'y for stimulating discussions on technical aspects of the problem.
\section{Introduction}\label{s:intro} The FLRW metric is an exact solution of Einstein's equations obtained under the assumptions of spatial homogeneity and isotropy. It is well recognized for satisfactorily explaining several observational features of our Universe, including the large-scale distribution of galaxies and the near-uniformity of the CMB temperature \cite{Planck}. The FLRW metric \cite{cao} underpins the currently accepted cosmological model, which fits the available data sets reasonably well and accounts for the measured cosmic acceleration. Any departure of the cosmological space-time metric from the FLRW form would have far-reaching consequences for inflation theory as well as for fundamental physics. Alternative theories of gravity have long been considered in order to resolve some of the contradictions of conventional cosmology \cite{Clifton,paddy}. A potential candidate is $f(R, T)$ gravity, proposed recently by Harko et al. \cite{harko}. The recent detection of gravitational waves (GWs) by the Advanced LIGO collaboration has opened a new window for analysing the Universe \cite{abb1,abb2,abb3}. Apart from directly detecting GWs with the LIGO/VIRGO interferometers, one can also infer their existence indirectly by measuring the gradual decrease of the orbital period of stellar binary systems. Detecting nano-Hertz GWs with a pulsar timing array involves timing several millisecond pulsars, which are extremely stable celestial clocks, as discussed by Jenet \cite{jenet}. The correlation $C(\theta)$ between two pulsars depends on their angular separation ($\theta$), the polarization of the GW, and the graviton mass \cite{lee}. The number of GW polarization modes depends on the underlying theory of gravity. In the radiative regime, the polarization and dispersion of GWs in vacuum are two critical features that discriminate between competing gravity theories. GWs can have up to six possible polarization states in alternative metric theories, four more than GR permits. Hou et al. \cite{hou} carried out a detailed analysis of the polarization modes for the Horndeski theory. Using GW polarization, Alves et al. \cite{alves1} investigated the $f(R)$ framework. In the metric formalism of $f(R)$ gravity, the model, like other $f(R)$ theories, exhibits an additional scalar degree of freedom, so a scalar polarization mode of GWs exists in the theory. This scalar polarization appears in two different states: a massive longitudinal mode and a transverse massless breathing mode with non-vanishing trace \cite{gogoi}. Capozziello and Laurentis \cite{Capozziello}, using the Palatini formalism and conformal transformations, found new polarization states of gravitational radiation for the higher-order extended gravity $ f(R) = R + \alpha R^2 $. Later on, Alves et al. \cite{Alves2} studied the $ f (R, T) $ and $ f (R, T^{\phi}) $ theoretical models. In this article, we study the polarization modes of GWs for a potential that is a function of the scalar field, within the framework of the modified gravity $ f (R, T^{\phi}) $ for the vacuum system. In Sec. \ref{s1} we develop the basic formalism of the modified gravity. The scalar field structure and its equation of motion are developed in Sec. \ref{s2}. The polarization modes are analyzed using the Newman-Penrose (NP) formalism in Sec. \ref{s4}, and in Sec. \ref{s5} we conclude. 
\section{Basic formalism of the modified gravity} \label{s1} In the context of modified gravity \cite{harko}, for the vacuum system, the total action including the scalar field can be written as
\begin{equation} S = \int d^4 x \sqrt{-g} \big[f(R,T^{\phi})+ \mathcal{L(\phi, \partial_{\mu} \phi)} \big],\label{3} \end{equation} where $ R $ stands for the Ricci scalar, while $ T^{\phi} $ is the trace of the scalar field's energy-momentum tensor. Here $g$ denotes the determinant of the metric, whose signature is (-, +, +, +), and we use geometric units, $G = c = 1$. In what follows we write $\mathcal{L(\phi, \partial_{\mu} \phi)} = \mathcal{L}_{\phi}$, where $ \mathcal{L}_{\phi} $ is the standard Lagrangian density for a real scalar field ($ \phi $) \cite{moraes}, \begin{equation} \mathcal{L}_\phi = \frac{1}{2} \nabla_\alpha \phi \nabla^\alpha \phi -V(\phi). \label{4} \end{equation} Here $V(\phi)$ is a self-interacting potential. In this theory, matter fields are minimally coupled to gravity and do not couple to the scalar field. The stress-energy tensor is defined as \begin{equation} T^\phi_{\mu \nu} = -\frac{2}{\sqrt{-g}} \frac{\delta (\sqrt{-g} \mathcal{L} )}{\delta g^{\mu \nu}}. \end{equation} We assume that the Lagrangian density $ \mathcal{L} $ does not depend on the derivatives of the metric and is a function of the metric tensor components $g^{\mu \nu}$ only. Therefore, the energy-momentum tensor of the scalar field is \begin{equation} T^\phi_{\mu \nu} = \frac{1}{2}g_{\mu \nu} \nabla_\alpha \phi \nabla^\alpha \phi -g_{\mu \nu} V(\phi)-\nabla_\mu \phi \nabla_\nu \phi, \label{5} \end{equation} and the corresponding trace is given by \begin{eqnarray} T^\phi=\nabla_\alpha \phi \nabla^\alpha \phi - 4 V(\phi). \label{6} \end{eqnarray} The generalized form of the Einstein field equation in vacuum in the presence of the scalar field is obtained by varying the action $S$ with respect to the metric tensor components $g_{\mu \nu}$ and integrating, which yields \begin{equation} f_R R_{\mu \nu}-\frac{f}{2}g_{\mu \nu}= \frac{1}{2}T_{\mu \nu}^\phi +f_{T} T_{\mu \nu}^\phi -f_{T} g_{\mu \nu}\mathcal{L}_\phi.\label{9} \end{equation} Here, $f_R = f_R(R,T^\phi)$ and $f_T = f_T(R,T^\phi)$ denote $\partial f(R,T^\phi)/ \partial R$ and $\partial f(R,T^\phi)/ \partial T^\phi$, respectively. We assume that the modified gravity function $f(R,T^{\phi})$ is given by $f(R,T^{\phi})= R + \beta T^{\phi}$, where $\beta$ is an arbitrary constant. The field equation then takes the form \begin{equation} G_{\mu\nu} = \frac{1}{2}[T_{\mu\nu}^{\phi}+g_{\mu\nu}\beta T^{\phi}-2\beta \nabla_{\mu}\phi\nabla_{\nu}\phi].\label{10} \end{equation} \section{Scalar Field}\label{s2} On contracting and simplifying Eq. (\ref{9}), the Ricci scalar can be obtained as \begin{eqnarray} R = -\frac{1}{2 }[ 4 \beta T^{\phi} +T^\phi -2\beta\nabla_\mu \phi \nabla^\mu \phi].\label{11} \end{eqnarray} The equation of motion for the scalar field follows from the covariant divergence of the field Eq. (\ref{10}): \begin{eqnarray} (1+2 \beta)\Box \phi + (1+4\beta)\bigg(\frac{\partial V}{\partial \phi}\bigg) = 0.\label{12} \end{eqnarray} Since we are considering the vacuum system, we take the potential in the following form, \begin{equation} V(\phi) = \frac{1}{2} \mu^2 \phi^2 + \frac{1}{4} \lambda \phi^4, \label{18} \end{equation} where $\mu \textrm{~and~} \lambda$ are real constants. 
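As a simple illustration of the two regimes of the potential in Eq. (\ref{18}), the short Python sketch below plots $V(\phi)$ for $\mu^2>0$ and $\mu^2<0$. It is provided for illustration only and is not the code used to produce Fig. \ref{f2}; the numerical values of $\mu^2$ and $\lambda$ are arbitrary choices for visualization.
\begin{verbatim}
# Illustrative sketch: the quartic potential V(phi) of Eq. (18) for
# mu^2 > 0 and mu^2 < 0. Parameter values are arbitrary (plotting only).
import numpy as np
import matplotlib.pyplot as plt

def V(phi, mu2, lam):
    # Self-interacting potential V = (1/2) mu^2 phi^2 + (1/4) lam phi^4
    return 0.5 * mu2 * phi**2 + 0.25 * lam * phi**4

phi = np.linspace(-2.0, 2.0, 400)
lam = 1.0   # lambda > 0 keeps the potential bounded from below

plt.plot(phi, V(phi, mu2=+1.0, lam=lam), "r", label="mu^2 > 0")
plt.plot(phi, V(phi, mu2=-1.0, lam=lam), "b", label="mu^2 < 0")
# For mu^2 < 0 the minima sit at phi_0 = +/- sqrt(-mu2/lam), i.e. phi_0 != 0.
plt.xlabel("phi"); plt.ylabel("V(phi)"); plt.legend(); plt.show()
\end{verbatim}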
\begin{figure}[h!]\centering \includegraphics[width=8cm]{P.eps} \caption{Variation of the potential $V({\phi})$ with the scalar variable $ \phi $. The red curve shows the variation for $ \mu^2 >0 $, and the blue curve shows the variation for $ \mu^2 < 0 $. } \label{f2} \end{figure} We limit ourselves to first-order terms in $\phi$; as a result of this approximation, the third term of Eq. (\ref{12}) disappears. The potential $ V $ is expanded around the non-null minimum value $ V_0 $, and the field is expanded in terms of $\eta = \phi-\phi_0$. Following the same approach as before, under these assumptions and keeping only first-order terms, we obtain \begin{equation} \Box \phi + \Bigg(\frac{1+4 \beta }{1+2 \beta}\Bigg)\bigg(\frac{\partial V}{\partial \phi}\bigg) = 0. \label{19} \end{equation} In the linear regime the field equation admits a solution of the form \begin{equation} \phi(x) = \phi' + \phi_1 \exp{(iq_\rho x^\rho)},\label{15} \end{equation} where \begin{eqnarray} \phi' = \phi_0 - \Bigg(\frac{\mu^2 + \lambda \phi_0^2}{\mu^2 + 3 \lambda \phi_0^2 }\Bigg)\phi_0, \end{eqnarray} and \begin{equation} q_\mu q^\mu = (\mu^2+3\lambda \phi_0^2) \Bigg(\frac{1+4 \beta }{1+2 \beta}\Bigg). \label{14} \end{equation} The variation of the effective mass ($ m_{\phi} $) with the coupling constant is shown in Fig. \ref{f1}. From Eq. (\ref{14}), the restricted range is $ -0.50 \leq \beta \leq -0.25 $. \begin{figure}[h!]\centering \includegraphics[width=10cm]{m.eps} \caption{Variation of the mass function $m_{\phi}$ with the coupling constant $ \beta $.} \label{f1} \end{figure} The corresponding energy of the system can be written as \begin{equation} E = \pm \Bigg[ q^2 + (\mu^2+3\lambda \phi_0^2) \Bigg(\frac{1+4 \beta }{1+2 \beta }\Bigg) \Bigg]^{1/2} \end{equation} \begin{figure}[h!]\centering \includegraphics[width=9cm]{box+.eps}\\ \includegraphics[width=9cm]{box-.eps} \caption{Propagation of the perturbation of the vacuum scalar field. The upper panel shows the variation for $ \mu^2<0 $, and the lower panel shows the variation for $ \mu^2>0 $, with $ \beta = 0.5 $.} \end{figure} To first order, the minimally coupled scalar field gives rise to an effective cosmological constant, \begin{equation} \Lambda = \frac{V_0}{2} \bigg(4 \beta + 1 \bigg). \end{equation} With $\lambda$ a positive constant, as required for stability, the potential in Eq. (\ref{18}) can be classified into two cases: (i) $\mu^2 > 0$, and (ii) $\mu^2<0$. For $\mu^2 < 0$ the minimum of the scalar field is non-zero, and the effective cosmological constant is non-zero as well; it reads \begin{equation} \Lambda = -\frac{1}{2} \bigg[ \beta \bigg(\frac{\mu^4}{ \lambda}\bigg) + \frac{\mu^4}{4 \lambda} \bigg]. \end{equation} For $\mu^2 > 0$ the stable minimum of the scalar field is zero, which makes the effective cosmological constant ($ \Lambda $) vanish. \section{Polarization modes of the modified gravity}\label{s4} \subsection*{Newman-Penrose formalism} The Newman-Penrose (NP) \cite{New1,New2} method is used to identify the additional polarization modes; further information is available in \cite{eardley1,eardley2}. Tetrads are a set of normalized, linearly independent vectors $(e_t, e_x, e_y, e_z)$ that can be used to express the NP quantities corresponding to all six polarization modes of GWs at any spatial position. 
The NP tetrads $k,~l,~m,~\bar{m}$ can be used to represent these vectors. The real null vectors are \begin{eqnarray} k = \frac{1}{\sqrt{2}}(e_t+e_z),~~ l = \frac{1}{\sqrt{2}}(e_t-e_z), \end{eqnarray} and the other two complex null vectors are \begin{eqnarray} & & m = \frac{1}{\sqrt{2}}(e_x+ie_y),~~ \bar{m}= \frac{1}{\sqrt{2}}(e_x-ie_y).\\ & & -k.l=m. \bar{m}=1, ~~E_a = (k, l, m, \bar{m}).\nonumber \end{eqnarray} All other dot products vanish. In the NP notation, the independent components of the Riemann tensor $R_{\lambda \mu \kappa \nu}$ are represented by ten components of the Weyl tensor ($\Psi$'s), nine components of the traceless Ricci tensor ($\Phi$'s), and a curvature scalar ($\Lambda$). They are reduced to six by certain symmetry and differential properties: $\Psi_2 \textrm{~and~} \Phi_{22}$ are real, while $ \Psi_3\textrm{~and~}\Psi_4 $ are complex. These NP variables are associated with the following components of the Riemann tensor in the null tetrad basis: \begin{eqnarray} & &\Psi_2 = -\frac{1}{6} R_{lklk} \sim \textrm{longitudinal scalar mode,}\nonumber\\ & &\Psi_3 = -\frac{1}{2} R_{lkl \bar{m}} \sim \textrm{vector-x \& vector-y modes,}\nonumber\\ & &\Psi_4 = - R_{l\bar{m}l\bar{m}} \sim \textrm{+,} \times \textrm{tensorial mode,}\nonumber\\ & &\Phi_{22} = - R_{lml\bar{m}} \sim \textrm{breathing scalar mode.} \label{e30} \end{eqnarray} The additional nonzero NP variables are $\Phi_{11} = 3 \Psi_2 / 2$, $\Phi_{12}=\Phi_{21}= \Psi_3$ and $\Lambda = \Psi_2/2$, respectively; all of them can be expressed in terms of the variables in Eq. (\ref{e30}). The group E(2), the little group of the Lorentz group for massless particles, can be used to classify the four NP variables $\Psi_2, \Psi_3,\Psi_4, \textrm{~and~} \Phi_{22}$ according to their transformation properties. Under these transformations only $ \Psi_2 $ is invariant, and the amplitudes of the four NP variables are not observer-independent. The absence (zero amplitude) of some of the four NP variables, on the other hand, does not depend on the observer. The following relations for the Ricci tensor and the Ricci scalar hold: \begin{eqnarray} & & R_{lklk}= R_{lk},\nonumber \\ & & R_{lklm}= R_{lm},\nonumber\\ & & R_{lkl \bar{m}} = R_{l \bar{m}},\nonumber\\ & & R_{l \bar{m} l \bar{m}}= \frac{1}{2} R_{ll},\nonumber\\ & & R = -2R_{lklk}= 2R_{lk}.\label{e31} \end{eqnarray} Following Eq. (\ref{9}), the Ricci tensor can be written as \begin{eqnarray} R_{\mu \nu} = \frac{1}{2 \alpha}[ \alpha R g_{\mu \nu} + g_{\mu \nu} f(T^\phi) +T^\phi_{\mu \nu} -2f_T\nabla_\mu \phi \nabla^\mu \phi].\label{51} \end{eqnarray} Using Eq. (\ref{e30}) and Eq. (\ref{e31}), one finds \[ R_{lklk} \neq 0, R_{lml \bar{m}} \neq 0, R_{lklm} = R_{lkl \bar{m}} = 0. \] From the above relations and Eq. (\ref{e30}), one finds the following NP quantities: \[ \Psi_2 \neq 0 ; \Psi_3 = 0 ; \Psi_4 \neq 0, \textrm{~~and~~} \Phi_{22} \neq 0. \] Thus we obtain four polarization modes for the GW: the $+$ and $ \times $ tensorial modes, the breathing scalar mode, and the longitudinal scalar mode. \section{Conclusion}\label{s5} This work outlines the theoretical foundations of modified gravity, an approach intended to address the shortcomings and discrepancies of GR. These issues primarily manifest themselves at the infrared and ultraviolet ends, i.e., at cosmological and astrophysical scales on the one hand and at quantum scales on the other. 
The stability analysis of the scalar field depends on the form of the potential, and we have considered a spontaneous-symmetry-breaking type potential for our setup. The behaviour of the scalar field changes with the sign of the critical parameter $ \mu^2 $. The stable minimum of the scalar field for $\mu^2 > 0$ is zero, resulting in a vanishing effective cosmological constant ($ \Lambda $). For $\mu^2 < 0$, the minimum of the scalar field is non-zero, and the effective cosmological constant is non-zero as well. The variation of the potential is shown in Fig. \ref{f2}: the $\mu^2 > 0$ case is shown in red, whereas the $\mu^2 < 0$ case is shown in blue. Taking the scalar field Lagrangian in conjunction with the modified gravity action may give rise to a new set of Friedmann equations. Due to a mathematical constraint, the effective mass has a finite discontinuity: it is found that the effective mass is discontinuous in the range $ -0.50 \leq \beta \leq -0.25 $. The variation is shown in Fig. \ref{f1}. The post-Minkowskian limit of modified gravity, i.e., the problem of gravitational radiation, also deserves careful consideration. When the gravitational action is not simply the Hilbert-Einstein one, new polarizations emerge: in general, massive, massless, and ghost modes must be considered, whereas in GR only massless modes with two polarizations are present. This result necessitates a rethinking of GW physics. If GWs have nontensorial polarization modes, as mentioned, an observed signal, such as a stochastic cosmological background of GWs, would be a superposition of all of these modes. In Einstein's General Relativity, only the plus and cross polarization modes are present. The plus mode is described by $ P_+=R_{txtx}-R_{tyty} $, the cross mode by $ P_{\times}=R_{txty} $, the vector-x mode by $ P_{xz}=R_{txtz} $, the vector-y mode by $ P_{yz}=R_{tytz} $, the longitudinal mode by $ P_l=R_{tztz} $, and the transverse breathing mode by $ P_b=R_{txtx}+R_{tyty} $. For the potential $ V(\phi) = \frac{1}{2} \mu^2 \phi^2 + \frac{1}{4} \lambda \phi^4 $, in the framework of the modified gravity $ f(R, T^\phi) = R + \beta T^\phi $, we obtain four polarization modes of GWs: the $+$ and $ \times $ tensorial modes, the breathing scalar mode, and the longitudinal scalar mode. \section*{Acknowledgements} The work of SRC was supported by the Southern Federal University (SFedU), grant no. P-VnGr/21-05-IF. SRC is also thankful to Ranjini Mondol of IISc, Bangalore, for fruitful discussions that improved the manuscript.
\section{Introduction} Cellular networks are a vital component of a truly mobile augmented reality (AR) system/application such as ``Pokemon Go," as they offer the widest coverage to the end-users. With the rising demand for immersive AR experience, the AR market is set to cross \$100 Billion and the total mobile network traffic is expected to exceed 300 Exabytes per month in 2026 \cite{EriccsonMobilityReport2021}. However, mobile AR traffic is latency-critical, uplink-heavy, and bursty in nature and the current LTE/LTE-A (Long Term Evolution/Long Term Evolution-Advanced) networks lack the capability to offer a seamless mobile AR experience \cite{ARLTE}. Consequently, cellular operators have taken several measures such as dense deployment of small-cells (SCs) and access points (APs) and utilization of the unlicensed spectrum through LTE-WiFi coexistence. The prospect of effectively utilizing the unlicensed spectrum through \textit{LTE in unlicensed spectrum} (LTE-U) and \textit{LTE license assisted access} (LTE-LAA) appeals to the mobile operators. Hence, there is a rapid deployment of both LTE small-cells and Wi-Fi APs in the 5GHz band where 500 MHz of the unlicensed spectrum is shared by both LTE and Wi-Fi networks \cite{ACM,icdcn}. This work focuses on two aspects of LTE-WiFi coexistence \emph{viz.,} coexistence network performance analysis and time-critical optimization. To that end, a comparative performance analysis of unlicensed LTE standards (LTE-U/LAA) is done through network feature relationship parameters learned from network data. Thereafter, the learned feature relationships are utilized to reduce the time-cost of performance optimization in a dense coexistence network. \subsection{Motivation} With the proliferation of unlicensed coexistence networks, there has been a significant debate on the comparison of LTE-U and LAA standards and their performance. While cellular operators such as AT\&T and Verizon have opted in favor of LAA deployments \cite{ACM}, recent works claim that LTE-U may offer better coexistence with Wi-Fi under specific conditions \cite{UvsLaa}. The existing comparative studies of LTE unlicensed standards are lacking in three respects. First, they primarily rely on simulations and make several assumptions \cite{LTEvsWiFi, UvsLaa}. Secondly, the offered comparative analysis is based only on \textit{measurements}, \emph{i.e.,} by simply comparing several network performance evaluation metrics such as throughput, latency, number of re-transmissions, \emph{etc.} In contrast, \textit{feature relationship analysis} looks for patterns in network data that can reveal relationships between network variables such as dependence, correlation, causation, \emph{etc.} Finally, the variation in performance of LTE unlicensed variant with the variation in coexisting Wi-Fi standard is often overlooked. In addition, the impact of factors such as bandwidth allocation and signaling data is rarely studied. With the increase in the deployment of small-cells and access points, dense networks (DNs) with inter-site distance $\leq$ 10m, and ultra-dense networks (UDNs) with inter-site distance $\leq$ 5m, have proliferated in most urban centers \cite{dense2}. Thus, performance optimization of the rapidly growing dense coexistence networks is a major challenge. This becomes particularly important when time-critical mobile AR services/applications need to be supported by coexistence networks. 
However, the literature currently lacks network feature relationship (NFR) analysis from the perspective of dense LTE-WiFi coexistence networks. Further, to the best of our knowledge no existing study makes use of network feature relationships in dense coexistence network optimization. \subsection{Contributions} In this work, we address these concerns through the following contributions \begin{itemize} \item Study network feature relationship in dense coexistence networks such as SINR-Capacity relationship, through machine learning algorithms. \item Analyze the impact of factors such as the choice of LTE unlicensed standard, coexisting Wi-Fi standard, and bandwidth allocation on NFRs in coexistence networks. \item Compare LTE-LAA/LTE-U and Wi-Fi 802.11n/ac coexistence performance based on NFR parameters such as the choice of predictor variable, R-sq (model validity), residual error (absolute and normalized), outliers, \emph{etc.} \item Utilize NFRs to optimize dense coexistence network performance through network capacity and signal strength optimization. \end{itemize} The comparative analysis in this work is distinct from the state-of-the-art studies \cite{LTEvsWiFi, UvsLaa} in that it is not limited to measuring and analyzing individual network variables. It involves data-learning to discover feature relationship patterns which determine network performance. Further, the data is gathered through real-time experiments instead of simulations. For the experiments, dense and ultra-dense co-existence networks were implemented with the help of USRP NI-SDRs and WiFi APs. The \textit{learning model selection policy} considered for feature relationship analysis is also explicitly described for replication and validation. \begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth,trim=5mm -5mm 0mm -3mm]{Plots/LTE_INT.pdf} \caption{Interference in Dense Unlicensed Coexistence Networks} \label{LTEINT} \end{figure} \section {A Review of Related Works} \label{related} \subsection{Network Feature Relationships in Dense Networks} In the recent past, several state-of-the-art studies have used regression algorithms, decision trees, and other machine learning techniques for NFR analysis \cite{regmswim, regsigcom, regwcnc, icdcn}. Some of these works leverage the learned NFRs to improve network performance. For example, learning 802.11n feature relationships can facilitate improved configuration selection and enhanced rate adaption \cite{regmswim}. Yet, the current literature lacks a robust analysis of NFRs, such as the capacity-interference relationship (CIR) in unlicensed coexistence networks. Further, as shown in Figure~\ref{LTEINT}, densification of LTE-WiFi coexistence systems will exacerbate the adverse impact of interference and pose additional challenges. While densification may lead to an initial gain in LTE-WiFi coexistence system capacity, network performance eventually deteriorates with rise in density \cite{CoexDense1}. Moreover, the impact of factors \emph{e.g.,} unlicensed LTE variant, Wi-Fi standard, bandwidth allocated, and signaling data, \emph{etc.,} on dense coexistence CIR also remains unexplored. For example, the analysis presented in \cite{smkcomsnets} is limited to demonstrating how the SINR-Capacity relationship differs in regular and dense/ultra-dense networks, and fails to explore the impact of the factors listed above. Therefore, this work focuses on various aspects of the relationship between interference and network performance in a dense coexistence network. 
\subsection{Optimization Challenges in Dense Networks} The need for low association times and fast-handovers in a dense environment makes network optimization \textbf{time-critical}. However, consistent densification significantly increases network scale and complexity which leads to high convergence times and computational overhead to arrive at optimal solutions \cite{dense2}. This is a major challenge for ultra-low-latency AR applications as already the LTE/LTE-A deployments account for almost 30\% of the end-to-end AR latency \cite{ARLTE}. With densification, the latency problem will exacerbate and diminish the gains in throughput. Thus, it is important not only to study the impact of densification on NFRs but also ascertain how these feature relationships can be used to accelerate optimization in dense coexistence networks by making it computationally less expensive \cite{OptCIR}. Broadly speaking, wireless network performance can be optimized through three major frameworks \emph{viz.,} optimization, machine learning, and a hybrid approach that involves machine learning based optimization \cite{MLOPT, OptCIR}. This work paves the way for an empirical and practical approach to \textit{\textbf{network feature relationship based optimization}} (NeFRO). NeFRO adopts the hybrid model wherein feature relationships learned from network data serve as a constraint in network optimization formulations. By using the feature relationship equation for performance optimization, NeFRO accounts for the ambient network environment and is free from theoretical pre-suppositions. Due to these factors, NeFRO is shown to significantly reduce the time-costs in dense network performance optimization. \section{Experimental Set-up} This section describes the experimental platform designed to create a dense LTE-WiFi coexistence environment in the 5GHz unlicensed spectrum. The testbed is used to collect data for NFR analysis. \begin{figure}[h!] \centering \includegraphics[width=0.95\linewidth,trim=5mm -5mm 0mm -3mm]{Plots/CAM.pdf} \caption{Wi-Fi, LTE-LAA, and LTE-U: Channel Access Mechanisms} \label{COMAC} \end{figure} \subsection{Testbed Design} Two variants of LTE unlicensed operation have been standardized and released, viz., LTE-U and LTE-LAA, albeit with starkly different medium sensing and access mechanisms. LTE-U relies on a load-dependent duty-cycle mechanism based on Carrier Sense Adaptive Transmission (CSAT). On the other hand, LTE-LAA depends on a Listen-Before-Talk (LBT) mechanism which is similar to the CSMA/CD MAC protocol of Wi-Fi, making it relatively easier for LAA to coexist with IEEE 802.11 WLANs. The medium access mechanisms of the two LTE unlicensed variants and Wi-Fi are juxtaposed in Figure~\ref{COMAC}. \par\textbf{LAA-LTE/LTE-U Platform} The National Instruments \textit{NI RIO} testing-platform is used as the LAA/LTE-U testbed as shown in Figure~\ref{exp}~(a). The PHY on the NI Labview system is the standard PHY implementation as prescribed in the LTE-A 3GPP release. More technical details on the testbed are presented in Table~\ref{sim}. The system offers high operational flexibility through advanced user-defined configuration of signal transmission and reception. 
Several network parameters can be configured, such as the sub-carrier modulation scheme, resource block allocation, LAA transmission opportunity (TXOP), Energy Detection (ED) threshold, LBT category option, LTE-U duty cycle ON \& OFF, transmission power, OFDM parameters (\emph{e.g.,} 1 to 3 control channels), carrier frequency offset, and timing offset estimation. \par\textbf{Wi-Fi Platform} Netgear wireless routers are used to design the Wi-Fi testbed. The off-the-shelf Wi-Fi routers, supporting both 802.11n and 802.11ac in the 5 GHz band, serve as the typical Wi-Fi nodes. The Wi-Fi testbed supports easy modification and monitoring of parameters and functions in both the MAC and PHY layers of Wi-Fi, such as DIFS, CWmin, CWmax, channel bandwidth, and transmission power. \subsection{Experiment Design} All experiments are carried out in the typical setting of an indoor office at the University of Chicago campus. This work focuses mainly on gathering SNR and throughput data for NFR analysis. Other network parameters such as contention window size, request to send (RTS), clear to send (CTS), inter-beacon interval time, power range, channel assignment (static or dynamic), and bandwidth in the PHY layer are also configured as required. In the experiments, the LAA transmitter always uses the LBT protocol to sense whether the channel is available, and the maximum TXOP is 8 ms, which is similar to the transmission of LTE-A in licensed bands. The Power Spectral Density (PSD) of LAA transmissions is controlled so as to ensure that the power of the interference from LAA is below the Clear Channel Assessment (CCA) threshold of Wi-Fi communications. \begin{figure*}[ht!] \centering% \begin{tabular}{cc} \subfloat[Representative Testbed] {\includegraphics[width=.5\linewidth]{Plots/test-bed.pdf}}\hspace*{0.1cm}\hfill% \subfloat[Representative Topology] {\includegraphics[width=.40\linewidth]{Plots/topology.pdf}}\hspace*{0.1cm}\hfill% \end{tabular} \caption{Experimental Set-up} \label{exp} \end{figure*} Several experiments were designed to explore dense unlicensed coexistence performance by creating combinations of LAA/LTE-U, 802.11n/802.11ac, and different bandwidths (5/10/15/20 MHz). LAA and LTE-U use the same underlying mechanism of Dynamic Bandwidth Adaptation (DBA) for spectral efficiency as LTE-A. Therefore, while Wi-Fi APs generally operate in a bandwidth of 20 MHz, LAA and LTE-U possess the capability to support multiple bandwidths (1.4/3/5/10/15/20 MHz). Bandwidth is an important factor that may influence the capacity-interference relationship due to cross-talk interference. Therefore, this work considers bandwidth to be an important parameter for CIR analysis. Further, dense random topologies are considered where LAA/LTE-U/Wi-Fi nodes are placed at inter-nodal distances of 5m to 10m. A representative illustration is presented in Figure~\ref{exp}~(b). Apart from a small inter-nodal distance, a dense coexistence scenario in an indoor setting is also interesting due to the prevalence of significant multi-path fading and the presence of obstacles such as walls, furniture, objects, etc. 
\begin{table}[h]\centering \caption{Experiment Parameters} \resizebox{0.65\textwidth}{!}{\begin{tabular}{|c|c|} \hline \textbf{Parameter} & \textbf{Value} \\ \hline Number of nodes & 6 \\ \hline Transmission Power & 23 dBm \\ \hline Operating Frequency & 5 GHz \\ \hline LTE-U/LAA RF Transmission & Loopback \\ \hline LTE Transmission Channel & PDSCH, PDCCH \\ \hline Data Traffic & Full buffer \\ \hline Wi-Fi Channel Access Protocol & CSMA \\ \hline LAA Channel Access Protocol & LBT \\ \hline \multicolumn{2}{l}{\footnotesize *PDSCH - Physical Downlink Shared Channel} \end{tabular}} \label{sim} \end{table} \section{Network Feature Relationship Analysis Methodology} Regression is a popular machine learning paradigm used to determine the relationship between network parameters in continuous space~\cite{regmswim, regsigcom, regwcnc, icdcn}. Regression algorithms not only offer reliable feature relationships, but also provide insights into the relationship in terms of model validity, outliers, residual error \emph{etc.} Thus CIR is modeled as a bi-directional regression problem where the goal is to estimate or predict network capacity through SINR feature points, and vice versa. \subsection{Learning Algorithms for Relationship Analysis} Let $N$ represent the number of training points and let dimensionality of the feature vector be denoted by $D$. Then, the coexistence network data can be represented as $\{\v x_i, y_i\}_{i=1}^N$, where $\v x_i \in \mathbb{R}^D$ is the feature vector and $y_i \in \mathbb{R}$ is the ground truth value for $i^{th}$ training point. The goal is to learn a mapping $f: \v x_i \xrightarrow{} y_i$ where $x_i$ is the predictor (SINR or Capacity) and $y_i$ is the response (Capacity or SINR). This work considers the following basket of learning algorithms for the regression analysis: \begin{itemize} \item \textbf{Linear Regression} This group of algorithms learns a linear relationship by solving $\argmin_{\v w, b} \sum_{i=1}^N ||(\v w^{\top} \v x_i+b) - y_i||_2^2 + \alpha\v w^{\top}\v w$~\cite{murphy2012machine}. Here, the weight vector is denoted by $\v w \in \mathbb{R}^D$ and the bias term is $b \in \mathbb{R}$. Further, the weightage (importance) of the $l_2$-regularization term is controlled by the hyper-parameter denoted by $\alpha$, which is set to zero for Ordinary Least Squares Linear Regression~(OLS). However, for Ridge Regression~(RR), $\alpha$ is set through $k$-fold cross validation (kCV). \item\textbf{Kernel Ridge Regression} A non-linear mapping is expected to be more suitable for the SINR-Capacity relationship \cite{Manas}. Therefore, we make use of the Kernel Ridge Regression~\cite{murphy2012machine} that employs non-linear transformations such as Polynomial and Radial Basis Function~(RBF). Its goal is to solve $\argmin_{\v w, b} \sum_{i=1}^N ||K(\v w, \v x_i)+b - y_i||_2^2 + \alpha\v w^{\top}\v w$. Here, $\v w \in \mathbb{R}^D$ is the weight vector, $b \in \mathbb{R}$ is the bias term, and $\alpha$ is a hyper-parameter defined above. Finally, $K(a, b)$ is a kernel function which allows to compute dot product in an arbitrary large space without the need to explicitly project features in high dimensional space. Varying the kernel function as RBF and Polynomial leads to Kernel RBF Regression~(RBF) and Multi-variate Polynomial Regression~(MPR), respectively. 
\end{itemize} \subsection{Selection of Regression Models} Regression Model selection depends upon objective criteria such as R-sq, higher-order terms, \emph{etc.,} and some subjective value-judgments, \emph{e.g.,} selecting a model with a higher R-sq even if the higher-order terms are not significant. However, studies often discuss network feature relationships and existence of correlation without going into the details of the underlying regression models \cite{regsigcom}. Failure to highlight such details poses a challenge while replicating these studies. To avoid this problem, the model selection policy considered in this work is described below. \par\textbf{Regression Model Selection Policy} \label{RMSP} The regression algorithms are subjected to \textit{k-Fold Cross-validation (kCV)} averaged over 30 runs (for $k=5$). Feature relationship models are evaluated based on their R-sq or \textit{Regression Model Validity} (RMV). A high RMV value signifies the \textit{goodness} of the fit. Also, outlier detection and removal is performed using the Local Outlier Factor (LOF) algorithm. First, CIR models with 1--3 degree polynomials are learned and to avoid over-fitting of feature point data, the higher-order terms considered are limited to statistically significant cubic terms. Further, the optimal model is chosen on the basis of RMV via kCV as it best explains the feature relationship \cite{Reg2}. For example, between a CIR model learned from the baseline data-set and the model learned from the data-set processed through LOF outlier removal, the model and feature relationship with the higher RMV is considered. This work focuses primarily on quadratic CIR models for the following reasons. First, the \% difference in average RMVs of linear \& quadratic and quadratic \& cubic models is 3.63\% and 0.98\% respectively. Thus, as compared to quadratic models, the linear models exhibit a relatively weak CIR and the RMV gain in cubic models is very low. Second, CIR in wireless networks is expected to be quadratic \cite{2Gupta}. Finally, low convergence time is a primary constraint in dense network optimization. Whatever little gain the higher RMV of a cubic model might offer in performance optimization, will be irrelevant compared to the increase in the computational overhead of a third-degree polynomial constraint. \subsection{Analytical Methodology} To study the impact of dense network configuration on NFRs, it is necessary to isolate individual network parameters and observe the consequent variation in the feature relationship. \par\textbf{Comparative Themes} The analysis seeks to draw a comparison between the performance of LTE unlicensed variants (LTE-U and LTE-LAA) in coexistence with the Wi-Fi variants (802.11n/ac). We also study the impact of bandwidth allocation and the choice of predictor variable on CIR in these network configurations. Thus, a total of 32 Test Scenarios are considered (denoted by TS$_{i}$, where $i\in \{1\ldots32\}$). Each TS$_{i}$ indicates a unique unlicensed coexistence network scenario based on the LTE unlicensed variant (LTE-U/LTE-LAA), coexisting Wi-Fi standard (802.11n/ac), bandwidth allocated (5/10/15/20 MHz), and predictor variable (SINR/Capacity). For each TS$_{i}$, the CIR model is selected through the regression model selection policy outlined earlier. 
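To make the regression model selection policy concrete, the following Python sketch shows one possible realization of it using scikit-learn. It is an illustrative sketch rather than the exact code used in this work; the candidate hyper-parameters, the LOF neighborhood size, and the synthetic SINR--capacity data are assumptions made purely for demonstration.
\begin{verbatim}
# Illustrative sketch of the model selection policy (not the authors' code):
# k-fold cross-validated R^2 (RMV) is computed for candidate models, with and
# without LOF outlier removal, and the combination with the highest mean RMV
# is retained.
import numpy as np
from sklearn.linear_model import LinearRegression, RidgeCV
from sklearn.kernel_ridge import KernelRidge
from sklearn.neighbors import LocalOutlierFactor
from sklearn.model_selection import cross_val_score, KFold

def mean_rmv(model, X, y, k=5, runs=30):
    """Average R^2 (RMV) over repeated k-fold cross-validation."""
    scores = []
    for seed in range(runs):
        cv = KFold(n_splits=k, shuffle=True, random_state=seed)
        scores.append(cross_val_score(model, X, y, cv=cv, scoring="r2").mean())
    return np.mean(scores), np.std(scores)

def select_cir_model(X, y):
    candidates = {
        "OLS": LinearRegression(),
        "Ridge": RidgeCV(alphas=np.logspace(-3, 3, 13)),
        "Kernel-RBF": KernelRidge(kernel="rbf", alpha=1.0),
        "MPR (quadratic)": KernelRidge(kernel="poly", degree=2, alpha=1.0),
    }
    # Evaluate both the baseline data-set and the LOF-filtered data-set.
    inliers = LocalOutlierFactor(n_neighbors=20).fit_predict(X) == 1
    datasets = {"baseline": (X, y), "lof-filtered": (X[inliers], y[inliers])}
    best = None
    for dname, (Xd, yd) in datasets.items():
        for mname, model in candidates.items():
            rmv, sd = mean_rmv(model, Xd, yd)
            if best is None or rmv > best[0]:
                best = (rmv, sd, mname, dname)
    return best

# Example with synthetic SINR/capacity points standing in for measured data.
rng = np.random.default_rng(0)
sinr = rng.uniform(0, 30, size=(200, 1))              # dB, illustrative
cap = 2.0 + 0.8 * sinr[:, 0] - 0.01 * sinr[:, 0]**2 + rng.normal(0, 1, 200)
print(select_cir_model(sinr, cap))
\end{verbatim}
The same routine can be run with the roles of predictor and response exchanged (capacity predicting SINR), which is how the bidirectional analysis described above can be reproduced.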
\par\textbf{Comparison Parameters} The performance of different LTE-WiFi network configurations is evaluated through analysis of learning parameters such as model validity, standard deviation in RMV, residual standard deviation (RSD), outliers, \emph{etc.}. Trends of average network values observed in the experiments are used as well. For each of these parameters, two types of comparisons are carried out, \emph{viz.} scenario-specific comparison and component-specific comparison for LTE-WiFi-Predictor configurations. The former is aimed at a comparative analysis of individual network scenarios (\emph{e.g.,} LTE-U, 802.11n, at 5MHz vs. LTE-LAA, 802.11n, at 5MHz ) while the second is aimed at capturing component level trends (\emph{e.g.,} SINR as a predictor vs. Capacity as a predictor). Reliable inferences are drawn only if the findings are consistent at both levels of comparative analysis. Wherever possible, plausible explanations are offered. \section{CIR in Dense Unlicensed Coexistence Networks} \label{CIRDCN} CIR model parameters are analyzed, and the results are presented for scenario-specific comparisons in Figure~\ref{scenario}, and configuration-level trends in Figure~\ref{average}. Please note that only for Figure~\ref{average}~(b), a logarithmic scale is used to show ``\% Difference" due to a high variation in values. Based on these results, various aspects of unlicensed coexistence network performance are discussed ahead. Some results, such as those related to outliers, are mentioned during the course of the discussion itself. \subsection{Unlicensed LTE: LTE-U vs LAA} We begin with measurement based observations on average network capacity, as most comparative studies primarily focus on this metric \cite{UvsLaa}. In 75\% of the test-scenarios, LTE-LAA outperforms LTE-U in coexistence with corresponding Wi-Fi variant (n/ac). Likewise, in 87.5\% scenarios, 802.11ac outperforms 802.11n in coexistence with corresponding LTE variant (LTE-U/LAA). Further, LTE-LAA in coexistence with 802.11n/ac offers a higher SINR on average than LTE-U in all scenarios save one. The LBT mechanism of LAA is quite similar to the CSMA channel access protocol of Wi-Fi and leads to a higher network capacity on average in LTE-LAA. Further, LAA nodes sense the energy level on the medium (-72 dBm) prior to transmission which mitigates co-channel interference from Wi-Fi and other LAA APs, ensuring higher SINR on average than LTE-U. On the contrary LTE-U has a duty-cycle based channel access mechanism which leads to inefficient transmissions and packet-collisions in both, the LTE-U and Wi-Fi components of the coexistence system. \begin{figure*}[ht!] \centering% \begin{tabular}{cc} \subfloat[LTE-LAA vs. LTE-U] {\includegraphics[width=.45\linewidth]{Plots/UvLAA.pdf}}\hspace*{0.1cm}\hfill% \subfloat[802.11ac vs. 802.11n] {\includegraphics[width=.45\linewidth]{Plots/nVac.pdf}}\hspace*{0.1cm}\hfil \\ \subfloat[SINR vs. Capacity (P$_{var}$)] {\includegraphics[width=.45\linewidth]{Plots/Predictor.pdf}}\hspace*{0.1cm}\hfil \end{tabular} \caption{Test-scenario Specific Comparative Analysis} \label{scenario} \end{figure*} \par\textbf{Regression Model Validity (RMV)} LAA and LTE-U models fare equally well, in a scenario specific comparison with $\leq$5\% difference in RMVs in 13/16 comparisons (26/32 scenarios). CIR in LAA seems to be only slightly better as it outperforms LTE-U in the remaining 3 scenarios. 
In terms of average RMVs across all 32 scenarios, LAA and LTE-U are comparable, although LAA has a slight edge ($<$1\%). Likewise, in LAA-WiFi-Predictor configuration combinations, LAA has a slight edge (0--2\%). Prima facie, based on RMV alone, CIR does not seem to be impacted by the unlicensed LTE variant. However, RMV can not be considered to be the sole goodness-of-fit measure for feature relationships. Higher RMV is an indicator of the variation in dependent variable explained by the model, but it does not indicate how far the data-points lie from the regression line. Further, the standard deviation of RMV with kCV for a specific scenario must also be low. The analysis ahead explores these dimensions. \par\textbf{Residual Standard Deviation (RSD)} The capability of a feature relationship model to make accurate predictions is highly desirable for the model to be deployed in real-world network performance management. Thus, residual error or RSD is a measure of precision of the model's predictions and should ideally be low for a robust CIR. Higher residual error is observed in twice as many LTE-U scenarios as compared to LAA scenarios (5\% margin of error). On average, LTE-U scenarios have a 6\% higher RSD than LAA. Further, average residual error in all LTE-WiFi-Predictor network-configurations is lower for LAA when compared to LTE-U. Thus, LAA models seem to be more precise in their ability to predict coexistence network performance, regardless of the response variable (Capacity or SINR). \begin{figure*}[ht!] \centering% \begin{tabular}{cc} \subfloat[Avg. RMV and Residual Error] {\includegraphics[width=.5\linewidth]{Plots/R2RSD.pdf}}\hspace*{0.2cm}\hfill% \subfloat[Avg. \% Gain \& Std. Deviation in RMV] {\includegraphics[width=.5\linewidth]{Plots/GSTD.pdf}}\hspace*{0.1cm}\hfill% \end{tabular} \caption{Configuration-level Comparative Analysis} \label{average} \end{figure*} \par\textbf{Gain and Standard Deviation in RMV} It is important to notice the standard deviation (SD) in CIR model validities when subjected to kCV, especially after LOF outlier removal. While outlier reduction yields higher RMVs, the Gain in RMV should be accompanied with low SD in RMV, averaged across all kCV runs. Thus, we consider high Gain and low SD as a characteristic for stable CIR models. LTE-U fares much worse than LAA in terms of both Gain and SD. LAA outperforms LTE-U by 47.67\% in Gain and registers a 24.5\% lower SD, averaged across all scenarios. A similar trend can be observed in LTE-WiFi-Predictor combinations as well. Thus, LAA has a higher Gain post-outlier-removal along with a lower SD, which demonstrates robustness of the LAA CIR models. \par\textbf{Outliers} For a network system, the outlier \% may be considered to be a good indicator of the degree of fluctuation in network performance, and consequently the ability of a network to deliver the promised Quality of Service (QoS). However, the selection of outlier detection algorithm is a subjective choice. While this work steers clear of making inferences based on outliers, we compare the outliers in LTE-U and LAA data detected by LOF algorithm with the outliers detected by ``Minitab," a standard tool for data-analysis \cite{minitab}. Minitab's outlier detection algorithm labels samples with extreme ``leverage points" and ``large residuals" as outliers. As expected the percentage of data-points labeled as outliers is different in LOF and Minitab. 
However, LTE-U has a higher fraction of outliers than LAA in both LOF (by 9.11\%) and Minitab (by 5.14\%). The high fluctuation in LTE-U can be attributed to the greater susceptibility of an LTE-U node to unpredictable interference from Wi-Fi APs in its proximity. This primarily happens during the LTE-U ON state, as there are no energy detection thresholds in LTE-U. Unlike LTE-U, Wi-Fi uses an energy detection threshold of -62 dBm and a preamble detection threshold of -82 dBm. Similar to Wi-Fi, the LBT mechanism in LAA has an energy threshold of -72 dBm, making it less vulnerable to interference from Wi-Fi APs and ensuring fewer extreme network performance fluctuations. Thus, LAA seems to offer a more reliable performance from the perspective of end-user QoS experience. \par\textbf{LTE-LAA vs LTE-U: A Feature Relationship Perspective} A clear pattern emerges after the analysis of the various learning model parameters. Residual error, standard deviation in RMV, and outlier \% in LTE-U are higher than in LAA, while the post-outlier-removal Gain in RMV is lower. This is true for the majority of test-scenarios regardless of the choice of Wi-Fi variant, predictor variable, and bandwidth allocated. Thus, CIR in LTE-LAA networks is qualitatively better in terms of the spread of data along the expected curve fit. This implies that LAA offers greater consistency in network performance and lower fluctuations in system variables such as the signal strength or the throughput at the end-user device. This finding correlates strongly with industry trends. The Global Mobile Suppliers Association (GMSA) report states that 38 operators in 21 countries have made investments in LAA as compared to only 11 operators investing in LTE-U. In terms of global deployments, 30 operators are planning to deploy or are actively deploying LAA networks in 18 countries, in contrast to LTE-U, which is being deployed in only 3 countries. Further, LTE-U deployments are designed with an upgrade path to LAA and eLAA \cite{LAA2}. Clearly, LAA is the industry's preferred choice for LTE unlicensed networks. From a data-learning perspective, this appears reasonable, as LAA offers a more robust network performance than LTE-U. \subsection{Wi-Fi: 802.11n vs 802.11ac} \par\textbf{Measurement Based Analysis} 802.11ac outperforms 802.11n in 87.5\% of scenarios in terms of average network capacity. This is expected, as 802.11ac supports 80 MHz channels (with optional support up to 160 MHz), higher modulation schemes (256 QAM), and 8x8 Multi-user Multiple-input Multiple-output (MU-MIMO), among other features. \par\textbf{Feature Relationship Analysis} 802.11ac is slightly better than 802.11n in the scenario-specific RMV comparison, while in terms of component-specific average RMV the two are comparable. The post-outlier-removal Gain in 802.11n is much higher, even though the average RMVs are comparable. However, 802.11ac has a lower deviation in model validities, which implies more reliable CIR models than 802.11n. In terms of residual error, 802.11ac registers lower error in 33\% more models as compared to 802.11n. This signifies more accurate predictive modeling in 802.11ac. \par\textbf{802.11ac vs 802.11n: A Feature Relationship Perspective} The CIR analysis reveals only a marginal advantage in coexistence performance for 802.11ac as compared to 802.11n. 
The trends are underwhelming because the 802.11ac standard supports compressed \textit{beamforming}, which along with channel state information (CSI) is quite efficient in mitigating link-conflicts \cite{ac}. Hence, a stronger relationship between network capacity and SINR was expected. However, the observations can be reasonably explained through two facts. First, in an LTE-WiFi coexistence system, the unlicensed LTE (LTE-U/LAA) subsystem has a greater impact on the performance of the Wi-Fi subsystem than the latter has on the former. Thus, the unlicensed LTE subsystem is the primary determinant of the overall system performance. Second, the adverse impact of LTE-U on coexisting Wi-Fi (n/ac) performance is much worse than that of LAA on Wi-Fi \cite{UvsLaa}. The duty cycling mechanism of LTE-U, combined with LTE-U transmissions at energy thresholds lower than those prescribed by Wi-Fi, causes interference to Wi-Fi transmissions \cite{LTEU-WiFi1}. LAA's LBT avoids collisions with Wi-Fi transmissions and leads to better coexistence system performance. This is observed in the LTE-WiFi-Predictor combination analysis as well. Thus, from a data analysis perspective, the unlicensed LTE is the dominant subsystem in the coexistence paradigm and determines the overall system performance. Further, the feature relationship analysis of network-data also supports the findings from measurement-based studies that LTE-U has a higher adverse impact on Wi-Fi performance as compared to LAA \cite{UvsLaa}. Another major takeaway is that it seems more appropriate to study the Wi-Fi (n/ac/ax) subsystem's performance only in conjunction with the coexisting unlicensed LTE (LTE-U/LAA) or 5G NR-U subsystem. \subsection{Choice of Network Predictor Variable} A bidirectional regression analysis reveals the impact that the choice of predictor variable, \emph{e.g.,} SINR (P$_{SINR}$) or Capacity (P$_{Cap}$), has on network feature relationships. We find that network capacity is a much better predictor of SINR than SINR is of network capacity. This is a pattern that can be clearly and consistently seen across all CIR model parameters and all comparative themes without any ambiguity. In the scenario-specific comparison, the RMV of P$_{SINR}$ models is always either comparable to, or lower than, that of P$_{Cap}$ models. The RMV of P$_{Cap}$ models is higher on average for both LTE-U and LAA components when compared to the RMV of the corresponding P$_{SINR}$ models. P$_{Cap}$ models also exhibit a significantly higher post-outlier-removal Gain and a lower average standard deviation in RMV. Finally, the residual error is higher in P$_{SINR}$ on average, and in twice as many scenarios, when compared to P$_{Cap}$. It may seem counter-intuitive that it is more accurate to predict the expected values of SINR for given values of network capacity than the reverse. However, recent analysis of operator data gathered from public LAA deployments shows that high SINR does not always guarantee high throughput in coexistence deployments, as end-user QoS depends on other factors such as resource block allocation \cite{icdcn}. On the other hand, for high throughput a high SINR is a necessary, if not sufficient, condition. Thus, the direction of NFR analysis and the choice of predictor have a clear effect on the learned network model, regardless of the unlicensed LTE and Wi-Fi variants considered.
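A compact way to see this asymmetry is to fit the second-degree CIR model in both directions and compare the resulting R-sq and residual errors under k-fold cross-validation (kCV). The sketch below does so with scikit-learn; the pipeline construction and the synthetic data are illustrative assumptions and not the exact tooling or measurements behind the results reported here.
\begin{verbatim}
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
sinr = rng.uniform(5, 30, 300)
cap = 1.5 * sinr + 0.04 * sinr**2 + rng.normal(0, 5, 300)

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())

def fit_direction(x, y):
    # Mean and std. dev. of R-sq under 10-fold CV, plus the residual
    # standard deviation of a fit on the full sample.
    X = x.reshape(-1, 1)
    r2 = cross_val_score(model, X, y, cv=10, scoring="r2")
    resid = y - model.fit(X, y).predict(X)
    return r2.mean(), r2.std(), resid.std()

print("P_SINR (SINR -> Capacity):", fit_direction(sinr, cap))
print("P_Cap  (Capacity -> SINR):", fit_direction(cap, sinr))
\end{verbatim}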
Further, this also indicates that other variables may also be relevant to the unlicensed coexistence NFR analysis such as resource block allocation, physical cell-id, \emph{etc.} \subsection{Impact of Bandwidth} From Figure~\ref{bandwidth}~(a), prima facie it appears that when throughput is the response variable, the residual error of the models increases consistently with bandwidth. This pattern seems consistent for both LTE-U and LAA models. This would make sense as well, because with higher bandwidth allocation there is a greater possibility of fluctuation in network capacity values in real-world systems due to poor resource allocation and temporal variation in factors such as interference. To confirm this pattern, we normalized the coexistence data and learned the feature relationships and associated parameters again. The data was normalized as $\hat{\mathbf{z}} = \frac{\mathbf{z} - \mathbf{\mu}}{\mathbf{\sigma}}$, where $\mathbf{\mu}, \sigma$ are the mean and the standard deviation of the data. As a result, the processed data is zero mean and unit variance, and thus more suited to evaluate the impact of bandwidth. Prior to normalization, in 11 out of 12 scenario-specific comparisons the RSD had increased with an increase in bandwidth. However, after normalization, in almost half the scenarios there is no increase in residual error with increase in bandwidth and the earlier trend is non-existent. This finding has serious implications for QoS promised to the end-user. Cellular operators attempt to satisfy the guaranteed user demand according to the data plan. Had higher bandwidth allocation exhibited an association (if not causation) with greater fluctuation in network performance, it would be worrisome. However, this does not seem to be the case. \begin{figure*}[ht!] \centering% \begin{tabular}{cc} \subfloat[RSD and Bandwidth] {\includegraphics[width=.5\linewidth]{Plots/RSDBW.pdf}}\hspace*{0.2cm}\hfill% \subfloat[Normalized RSD and Bandwidth] {\includegraphics[width=.5\linewidth]{Plots/RSDBWN.pdf}}\hspace*{0.1cm}\hfill% \end{tabular} \caption{Impact of Bandwidth} \label{bandwidth} \end{figure*} \begin{figure}[t!] \centering \includegraphics[width=0.95\linewidth,trim=5mm -5mm 0mm -3mm]{Plots/nefro.pdf} \caption{Network Feature Relationship based Optimization} \label{nefro} \end{figure} \section{NFRs and Dense Network Optimization} The feature relationships learned from network-data can be further utilized in improving dense network performance. \subsection{Network Feature Relationship based Optimization (NeFRO)} To facilitate the use of NFRs in network performance enhancement, this work proposes the \textit{\textbf{Network Feature Relationship based Optimization}} (NeFRO) framework. The high-level schema of the NeFRO framework is outlined in Figure~\ref{nefro}. First, data is collected for a network deployment periodically. In each epoch, network feature relationship analysis is performed using machine learning algorithms. Strong NFRs are identified and selected for possible utilization in network performance optimization. These NFRs are fed to a \textit{constraint selector module} that selects relevant constraints for the optimization model/formulation. The module compares an NFR learned from network data for a network feature-point set $\{f_1, f_2,\ldots,f_n\}$ with available theoretical constraints relevant to the feature point set. While the NFR is more ``suitable," as it is derived from actual network data, it still has to be tested for \textit{convergence time viability}. 
Thus the constraint selector module compares the NFR with the theoretical constraint for complexity, and selects the more viable constraint for network optimization. Although the illustration highlights the process-flow for a coexistence network, the NeFRO approach will apply similarly to network optimization in all wireless networks, with minor modifications, if required. \par\textbf{Benefits of the NeFRO Approach} There are several advantages of the proposed NeFRO framework over conventional network optimization. First, since the learned NFRs are grounded in empirical data, they reflect the ambient network conditions. Therefore, it is more practical to use them in network performance optimization than theoretical constraints involving similar network variables. Second, NFRs can be used ``as is" in optimization without making any assumptions, unlike theoretical constraints which need to be justified through assumptions. Finally, if the learned NFRs are less complex than the theoretical constraints, it automatically solves the problem of arbitrary or forced relaxation of constraints. Even if the NFRs are of a comparable complexity and require similar computational overhead, they have the advantage of reflecting the actual network parameter dynamics, which facilitates a more informed network optimization. \subsection{Implementation and Validation of NeFRO} Convergence time and accuracy trade-off is a primary challenge in dense network performance optimization \cite{dense5}. Therefore, NeFRO envisions the twin objectives of \textit{convergence time reduction}, while maintaining high \textit{accuracy}, vis-à-vis the baseline optimization model. The validation of NeFRO is done by implementing it on recent state-of-the-art studies on coexistence network optimization. \par\textbf{Validation Methodology} The validation methodology involves the following steps. First, works with two optimization objectives are considered, \emph{viz.} network signal strength optimization and network capacity optimization. The proposed optimization models are implemented on GAMS \cite{GAMS}, as per the network configuration and specifications of the testbed/experiments. Second, the baseline optimization models are implemented for the test-scenarios considered in this work. Further, two values are observed, (a) the optimal value of network performance metric (SINR or Capacity), and (b) the \textit{convergence time} required by the formulation to arrive at the optimal value. Thereafter, the complex theoretical SINR-Capacity constraint in each of the proposed optimization formulation is replaced with the second-degree polynomial CIR equation derived from feature relationship analysis in this work. Please note that the baseline models that optimize network capacity are considered for test-scenarios with SINR as the predictor, and vice-versa. \par\textbf{Evaluation of NeFRO} Two yardsticks are considered to carry out the performance evaluation of NeFRO. First, is the closeness of the ``NeFRO Optimal" value generated by the NeFRO model, to the optimal value generated by the baseline literature model. This is referred to as the \textbf{Accuracy} of the NeFRO model. Accuracy can be defined as, the \textit{```\% difference in the optimal value generated by the baseline model and the NeFRO-optimal value."} Second, is the reduction in the time taken by the NeFRO model to arrive at the optimal value. This is defined as \textbf{Convergence Time Fraction} (CTF). 
CTF indicates \textit{``what fraction (\%) of the baseline model's convergence time is NeFRO's convergence time."} \footnote{For example, if baseline model takes $10ms$ to converge at the optimal solution, and NeFRO requires $9ms$ to arrive at the NeFRO-optimal value, then CTF is 90\%} Thus, NeFRO is evaluated on its ability to offer a \textit{low CTF} while maintaining \textit{high Accuracy}, with respect to the baseline optimization model. Please note that the state-of-the-art optimization models are implemented for the small-scale dense unlicensed coexistence scenarios implemented on the experimental testbed. We expect that in a real-world network of a much higher scale and density, the benefits of NeFRO will be far more pronounced. \par\textbf{Baseline Optimization Models Considered} Four recent works are considered that propose formulations to optimize coexistence network performance. Two of these works aim at optimizing network capacity, while the other two optimize signal strength available to the UEs. A brief description is presented, starting with the capacity optimization works. An optimal resource allocation scheme aimed at maximizing LTE-LAA capacity in a LTE-WiFi coexistence network is proposed in \cite{Cap1}. Another study proposes an LBT-compliant channel access approach for both LTE-U/LAA in the 5GHz band that seeks to maximize system throughput, while also mitigating the impact of interference from the unlicensed LTE on the Wi-Fi subsystems capacity \cite{Cap2}. Further, \cite{SINR2} seeks to enhance and optimize network signal strength for LTE-U/LAA coexistence networks through strategic optimal placement of nodes. Finally, the model proposed in \cite{SINR1}, aims to optimize network performance by taking into account the spectrum usage of Wi-Fi APs in addition to the optimal placement of nodes. Henceforth, the capacity optimization models \emph{viz.,} \cite{Cap1} and \cite{Cap2}, are referred to as COM$_1$ and COM$_2$, respectively. Likewise, signal-strength optimization models \emph{viz.,} \cite{SINR1} and \cite{SINR2}, are referred to as SOM$_1$ and SOM$_2$, respectively. \begin{figure*}[ht!] \centering% \begin{tabular}{cc} \subfloat[NeFRO vs. COM$_1$ ] {\includegraphics[width=.45\linewidth]{Plots/LAANCOM1.pdf}}\hfill% \subfloat[NeFRO vs. COM$_2$] {\includegraphics[width=.45\linewidth]{Plots/LAANCOM2.pdf}}\hfill \\ \subfloat[NeFRO vs. SOM$_1$] {\includegraphics[width=.45\linewidth]{Plots/LAANSOM1.pdf}}\hfill% \subfloat[NeFRO vs. SOM$_2$] {\includegraphics[width=.45\linewidth]{Plots/LAANSOM2.pdf}}\hfill \end{tabular} \caption{NeFRO Performance in LAA Capacity and SINR Optimization} \label{nefroLAA} \end{figure*} \subsection{Optimization Results and Performance Evaluation} The results of the optimization simulations run on GAMS are presented in Figure~\ref{nefroLAA} and Figure~\ref{nefroLTU}, for LAA and LTE-U test-scenarios, respectively. Further, Figures~\ref{nefroLAA}~(a), \ref{nefroLAA}~(b), \ref{nefroLTU}~(a), and \ref{nefroLTU}~(b), present results for test-scenarios where the objective is to optimize network capacity. The remaining figures show results for signal-strength optimization test-scenarios. It can be discerned that NeFRO performs remarkably well by reducing the required convergence times while delivering NeFRO-optimal values very close to the optimal results of the respective models. A scenario-specific evaluation of NeFRO can be performed by observing the difference in the length of bars of Accuracy and CTF for a particular test-scenario. 
The greater the difference in their height, the lower is the trade-off, and the better is the NeFRO performance. Two points are noteworthy. First, in LAA scenarios NeFRO offers a significant reduction in convergence time, while in LTE-U scenarios, the CTF is somewhat subdued. Network optimization in LTE-U is inherently more challenging due to its channel access mechanism. Hence, it is more computationally intensive, and requires a longer convergence time. Second, for LAA scenarios the difference in NeFRO performance for capacity optimization and SINR optimization is negligible. However, in LTE-U, there appears to be a difference in NeFRO performance for these two objectives. Particularly, the CTF for SINR optimization in LTE-U is rather low. The average performance of NeFRO across all test-scenarios for the four optimization models is presented in Table~\ref{nefroT}. On average, when compared to SOM$_1$ and SOM$_2$, the CTF of NeFRO is lower than its average Accuracy, showing a marginal gain. However, Figure~\ref{nefroLTU}~(d) shows that for two scenarios there seems to be no overall gain from NeFRO as compared to SOM$_2$. Thus one dimension that needs to be further investigated is the variation in accuracy and convergence time-trade off with application of NeFRO. It is possible that the correlation or association between the RMV of the learned model and the network performance metric which is the objective of the optimization (SINR or Capacity), may explain this variation. In general, NeFRO outperforms the baseline model across all test-scenarios, and both unlicensed LTE variants, by significantly reducing the convergence time. The average Accuracy, as shown in Table~\ref{nefroT}, is very high as well. Further, NeFRO seems to perform better in LTE-LAA scenarios as compared to LTE-U, which can be expected based on the discussion and findings presented in this work. Thus, the NeFRO framework stands validated. \begin{figure*}[ht!] \centering% \begin{tabular}{cc} \subfloat[NeFRO vs. COM$_1$ ] {\includegraphics[width=.45\linewidth]{Plots/LTEUNCOM1.pdf}}\hfill% \subfloat[NeFRO vs. COM$_2$] {\includegraphics[width=.45\linewidth]{Plots/LTEUNCOM2.pdf}}\hfill \\ \subfloat[NeFRO vs. SOM$_1$] {\includegraphics[width=.45\linewidth]{Plots/LTEUNSOM1.pdf}}\hfill% \subfloat[NeFRO vs. SOM$_2$] {\includegraphics[width=.45\linewidth]{Plots/LTEUNSOM2.pdf}}\hfill \end{tabular} \caption{NeFRO Performance in LTE-U Capacity and SINR Optimization} \label{nefroLTU} \end{figure*} \begin{table} [hbt \caption{Performance Trends in Test-scenarios} \centering \small \begin{tabular}{|m{1.5cm}|m{0.85cm}|m{0.85cm}|m{0.85cm}|m{0.85cm}||m{0.85cm}|m{0.85cm}|m{0.85cm}|m{0.85cm}|} \hline \multicolumn{1}{|c|}{\textbf{NeFRO}}&\multicolumn{4}{|c||}{\textbf{LTE-LAA Scenarios} ($\%$)}&\multicolumn{4}{|c|}{\textbf{LTE-U Scenarios} ($\%$)}\\ \cline{2-9} \multicolumn{1}{|c|}{\textbf{Parameter}}&\textit{COM$_1$}&\textit{COM$_2$}&\textit{SOM$_1$}&\textit{SOM$_2$}&\textit{COM$_1$}&\textit{COM$_2$}&\textit{SOM$_1$}&\textit{SOM$_2$}\\ \hline CTF&76.46&78.25&79.89&76.02 &90.10&89.05&94.17&93.60\\ \hline Accuracy&95.04&93.31&92.28&93.82 &94.97&96.12&96.38&97.16\\ \hline \end{tabular} \label{nefroT} \vspace{-0.1cm} \end{table} \section{Conclusion and Future Direction} This work presented a comparative study of unlicensed coexistence networks through network feature relationship analysis. Network-data was collected through comprehensive real-world experiments and then analyzed through a family of regression algorithms. 
The relevance of network feature relationships was highlighted by analyzing LTE-WiFi networks on a variety of regression model parameters such as R-sq, residual error, \emph{etc.} Several insightful inferences were made on aspects such as the impact of bandwidth, residual error, and outliers on coexistence network performance. Further, NeFRO, a feature-relationship-based optimization framework, was proposed and validated through signal strength and capacity optimization. NeFRO reduced convergence times by as much as 24\% while offering accuracy as high as 97.16\% on average. In the future, we will investigate the convergence time and accuracy trade-off by considering feature relationships of varying degrees. Further, studying the association between the R-sq of the learned models and the network performance metrics is also a relevant topic. The impact of control/signaling data on network feature relationships will be explored as well. Most importantly, we intend to implement an AR system on a simulator and employ NeFRO to reduce latency. \bibliographystyle{splncs04}
\section{Introduction} Spin qubits in semiconductor quantum dots are a promising technique to realize universal quantum computation due to their potential scalability via combining the quantum technologies with the state-of-the-art semiconductor industry \cite{Chatterjee.2021}. Currently, fast and high-fidelity control is still crucial for spin qubits to realize fault-tolerant quantum tasks. Thus various types of qubits using the electron spin degree of freedom in quantum dot systems have been tested over recent years, including single-dot spin qubits \cite{Loss.1998,Veldhorst.2014, Huang.2019,Noiri.2022}, double-dot singlet-triplet qubits \cite{Petta.2005,Foletti.2009, Zhang.2017, Abadillo.2019, Pascal.2020,Federico.2021}, as well as hybrid qubits \cite{Shi.2012, Frees.2019}, triple-dot exchange-only qubits \cite{Divincenzo.2000, Laird.2010}, and resonant qubits \cite{Taylor.2013,Medford.2013b}. Among these possible candidates, singlet-triplet qubits are particularly standing out, since their use can implement all-electrical gate operation control with long coherence times and fast gate operation. Gate operation for singlet-triplet qubits is implemented via tuning the Heisenberg exchange interaction between the two spins with gate times on the order of nanoseconds, thanks to the large exchange interaction ($\sim$ GHz) \cite{Petta.2005}. Recently, researchers have experimentally observed that the relaxation time of the spin states in a quantum dot can be as long as 9 s, and the Overhauser noise can be substantially suppressed \cite{Ciriano-Tejel.2021} by using the isotopic purification technique in the silicon platform. Nevertheless, singlet-triplet qubits in a quantum dot setup are still sensitive to the background charge noise, which occurs in the vicinity of the quantum dot \cite{Bermeister.14,Chan.18}. This hinders high-fidelity operation for either single or two-qubit quantum gates. Much work has been devoted to mitigating the charge noise, including working near the charge noise sweet spots \cite{Martins.2016} and designing gates using dynamical corrected gates \cite{Wang.2014,Throckmorton.17}. Alternatively, the geometric phase \cite{Berry.84} is believed to be useful to combat the noise effect. After a cyclic evolution in the parameter space, the quantum state can acquire an extra global phase factor, i.e., the Berry phase, under the adiabatic condition. Inspired by Berry's idea, it is realized that the global property of the geometric phase can be a powerful tool for quantum computation \cite{Pachos.99,Zanardi.99,Duan.01,Zhu.02,Zhu.03}. However, the adiabatic condition hinders wide application of geometric gates owing to the overly long evolution time, which renders more decoherence. Recently, a universal set of quantum gates based on the nonadiabatic geometric phase, namely, the Aharonov-Anandan phase \cite{Aharonov.87}, has been realized in superconducting qubits \cite{abdumalikov.13, xu.18,Tao.18, Liu.19, egger.19, xu.20,Xujing.20,Tao.20,Lisai20,Ding.21a,Ding.21b}, trapped ions \cite{ai.20,ai.21,Guo.2021}, semiconductor quantum dots \cite{Solinas.03a, Mousolou.14, Mousolou.17b,Zhang.2020}, etc. The point of constructing a geometric gate is to cancel out the accompanied dynamical phase during the cyclic evolution, leaving only the wanted geometric phase. Typically, this can be realized by introducing a microwave field to operate its time-dependent phase to ensure that the quantum state is always evolving along the longitude of the Bloch sphere \cite{Zhao.17,Zhang.2020}. 
By applying microwave-driven pulses on the detuning, a recent experiment \cite{Takeda.20} showed 99.6\% single-qubit gate fidelity for a singlet-triplet qubit in a silicon-based semiconductor quantum dot. On the other hand, the Rabi frequency reported there is only a few MHz. This small value for the singlet-triplet system is comparable to the typical values obtained with the electric-dipole spin resonance technology in similar devices for single-dot spin qubits \cite{Yoneda.18}. Therefore the gate time of the desired geometric gate is typically on the order of microseconds, which is much longer than that of the traditional dynamical gate operated without a microwave field. To fully employ the advantage of geometric gates, one has to seek ways to enable fast and appropriate operation. A good compromise is to use Landau-Zener interferometry \cite{Shevchenko.10,Stehlik.12,Wang.16} to drive the quantum state through a cyclic evolution using the dc-gating pulse. However, a recent experiment \cite{Wang.16} indicates that the dynamical phase is difficult to remove and a more complicated technique such as spin echo is needed, making this route impractical for quantum computation. Here we propose a framework to realize both single- and two-qubit nonadiabatic geometric gates without the external microwave field, so that the gate time can be only several nanoseconds. By only modulating the time-dependent exchange interaction, the quantum state in the parameter space can evolve along a specific geodesic line where no dynamical phase is introduced. Thus our method is simple and experimentally feasible. Avoiding the microwave field not only enables short gate durations but also simplifies the control complexity of the system. This could be another potential advantage of this approach, especially considering power dissipation and addressability in the context of scaling to large qubit numbers \cite{Pascal.2020}. By numerically performing randomized benchmarking \cite{Emerson.2005,Knill.2008,Easwar.2012} and calculating the filter function \cite{Green.2012,Green.2013,Paz-Silva.2014} under a realistic $1/f$ charge noise environment, we surprisingly find that all gate fidelities can be higher than 99\%, surpassing the conventional dynamical gates. Our results indicate that singlet-triplet qubits might benefit from the geometric operation to obtain high-fidelity control. We emphasize that our method is not only suited to the exchange-coupled singlet-triplet spin qubits but can also be readily extended to other systems that can be described by an Ising-type interacting Hamiltonian \cite{Buterakos.21}, such as the superconducting transmon qubits \cite{Collodo.20} and the capacitively coupled charge qubits \cite{Shinkai.09}. \begin{figure}[bp] \includegraphics[width=0.9\columnwidth]{figure1.pdf} \caption{(a) Schematic of the evolution path used to induce a geometric gate. The dressed state $|\psi_{+}\rangle$ evolves along the cyclic path A-B-C-D-E-A to obtain the global geometric phase and hence the desired geometric gate. The normal vector (also, the rotation axis) with respect to the plane B-C-D is denoted as $\boldsymbol{r}=h \hat{x}+J_{2} \hat{z}$. The angle between this vector and the $x$ axis is $\gamma/2$. (b) Energy levels of the double-dot system as a function of the detuning $\epsilon$, which is used to control the exchange interaction $J$.
(c) A lateral four-quantum-dot system with each dot labeled by 1, 2, 3, and 4, from the left to the right to enable two-qubit operation for singlet-triplet qubits, where dots 1 and 2 form qubit a, and dots 3 and 4 form qubit b. The quantum dots are coupled via the exchange interaction denoted by $J_{i,i+1}$ ($i=1,2,3$).} \label{fig:path} \end{figure} \section{MODEL}\label{sec:model} The control Hamiltonian for a singlet-triplet qubit is \cite{Wang.2014} \begin{equation} H_{\rm{ST}}(t)=\frac{h}{2}\sigma_x+\frac{J[\epsilon{(t)}]}{2}\sigma_z, \label{eq:Hamiltonian1} \end{equation} where $\sigma_x$ and $\sigma_z$ are Pauli matrices. The computational basic states are the spin triplet state $|0\rangle = |\rm{T}(1,1)\rangle = (\left|\uparrow\downarrow\right\rangle+\left|\downarrow \uparrow\right\rangle) / \sqrt{2}$ and the singlet state $ |1\rangle = |\rm{S}(1,1)\rangle = (\left|\uparrow\downarrow\right\rangle-\left|\downarrow\uparrow\right\rangle)/\sqrt{2} $. Here, we define the spin state $\left|\downarrow\uparrow\right\rangle = c_{1 \downarrow}^{\dagger} c_{2\uparrow}^{\dagger}|\mathcal{V}\rangle$, where $c_{i \tau}^{\dagger}$ ($i=1,2$) denotes creating an electron with spin $\tau$ at the $i$th quantum dot, and $|\mathcal{V}\rangle$ denotes the vacuum state. $h=g \mu \Delta B$ refers to the magnetic field gradient across the two quantum dots, where $g$ is the electron $g$ factor, $\mu$ is the Bohr magneton, and $\Delta B$ denotes the difference of the magnetic field between the double quantum dots. Experimentally, $h$ can set to be any desired constant value from several MHz to $\sim$ GHz, by either the dynamical nuclear polarization \cite{Bluhm.10} or micromagnet \cite{Watson.2018} technique, in both GaAs \cite{Nichol.17} and silicon \cite{Watson.2018} heterostructure. The exchange interaction $J[\epsilon{(t)}]$ can be controlled via operating the detuning $\epsilon$ with respect to the gate voltage, which refers to the energy splitting between the spin singlet state $ |\rm{S}(1,1)\rangle$ and the triplet state $ |\rm{T}(1,1)\rangle$, as seen in Fig.~\ref{fig:path} (b). Here we consider a phenomenological model $J[\epsilon{(t)}]=J_{0}\mathrm{exp}[\epsilon(t)/\epsilon_{0}]$, which is well fitted from the experimental data for both GaAs \cite{Dial.2013,Cerfontaine.14,Pascal.2020} and silicon \cite{Wu.14} system. According to Ref.~\cite{Cerfontaine.14}, $J_{0}=1\ \mathrm{ns}^{-1}$, $\epsilon_{0}=0.272\ \mathrm{meV}$, and the detuning is constrained as $-5\epsilon_{0}<\epsilon<5\epsilon_{0}$, which implies $0.007J_{0}\leqslant J \leqslant 148 J_{0}$ . In our simulation we consider $J_{\mathrm{min}}\ll h$, such that we assume $0\leqslant J \leqslant J_{\rm{max}}$. Since both $J$ and $h$ can be obtained to be on the order of $\sim$ GHz, the time induced from such coupling is with the order of nanosecond. In the absence of noise, the Hamiltonian in Eq. (\ref{eq:Hamiltonian1}) leads to the rotation with the form as \begin{equation}\label{eq:dynamical naive1} R(J,\phi)=e^{-\frac{i}{\hbar}\frac{h\sigma_x+J\sigma_z}{2}\frac{\phi}{\sqrt{J^2+h^2}}}, \end{equation} where $R(J,\phi)$ denotes the rotation in the $x$-$z$ plane by an angle $\phi$, and the rotation axis in the plane is determined by $J/h$. In this work we assume $J$ as a square pulse, namely, $H_{\rm{ST}}(t)$ is a piecewise Hamiltonian and therefore $R(J,\phi)$ is a one-piece rotation in a specific time duration. 
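As a concrete illustration of Eq.~(\ref{eq:dynamical naive1}), the short sketch below constructs the square-pulse propagator numerically: a constant-$J$ pulse applied for a time $t=\phi/\sqrt{J^2+h^2}$ implements $R(J,\phi)$. The NumPy/SciPy code, the unit convention $\hbar=h=1$, and the choice $J=h$ (which yields the Hadamard-like rotation used in the composite sequences below) are for illustration only and are not tied to a particular experimental pulse shape.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H_ST(J, h=1.0):
    # Singlet-triplet control Hamiltonian with hbar = 1 and h as energy unit.
    return 0.5 * h * sx + 0.5 * J * sz

def R(J, phi, h=1.0):
    # A square pulse of duration t = phi / sqrt(J^2 + h^2) gives R(J, phi).
    t = phi / np.sqrt(J**2 + h**2)
    return expm(-1j * H_ST(J, h) * t)

# Example: J = h produces a pi rotation about (x + z)/sqrt(2),
# i.e. the Hadamard-like pulse U(x + z, pi) used in composite sequences.
print(np.round(R(1.0, np.pi), 3))
\end{verbatim}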
Hereafter, we define $U(\boldsymbol{r}, \phi)=\exp \left(-i \frac{\boldsymbol{\sigma} \cdot \boldsymbol{r}}{|\boldsymbol{r}|} \frac{\phi}{2}\right)$ as rotation around an axis defined by vector $\boldsymbol{r}$. In this way we have $R(J, \phi)=U(h \hat{x}+J \hat{z}, \phi)$. The rotation out of the $x$-$z$ plane can be implemented by a $z$ rotation sandwiched between two $x$ rotations \cite{nielsen.2002,Zhang.2017}: \begin{equation} U(\hat{r},\xi)=U(\hat{x},\xi_1)U(\hat{z},\xi_2)U(\hat{x},\xi_3). \label{eq:rotation-p1} \end{equation} As $h>0$ always exists, $R(J,\phi)$ cannot implement a single $z$-axis rotation. But a $z$-axis rotation with arbitrary rotation angle can be decomposed into three pieces of composite sequences (the Hadamard-$x$-Hadamard sequence) \cite{Wang.2014}, i.e., \begin{equation} U(\hat{z},\xi_0)=-U(\hat{x}+\hat{z},\pi)U(\hat{x},\xi_0)U(\hat{x}+\hat{z},\pi). \label{eq:rotation-p2} \end{equation} Then, by inserting Eq.~(\ref{eq:rotation-p2}) into Eq.~(\ref{eq:rotation-p1}), one finds \begin{small} \begin{equation} U(\hat{r},\xi)=U(\hat{x},\xi_1)U(\hat{x}+\hat{z},\pi)U(\hat{x},\xi_2)U(\hat{x}+\hat{z}, \pi)U(\hat{x},\xi_3). \label{eq:dynamical naive} \end{equation} \end{small} The rotation implemented by the pulse in Eq.~(\ref{eq:Hamiltonian1}) is sensitive to noise. There are mainly two noise sources resulting in gate error. One is the charge noise, which brings fluctuation to the detuning ($\delta \epsilon$) and further leads to the error in the exchange interaction labeled by $\delta J =g[J]\delta\epsilon$, with $g[J] \propto J$ \cite{Wang.2014}. Another is the Overhauser (nuclear spin) noise, which is a time-dependent fluctuation in the background of the nuclear spin bath, adding a small term into the Hamiltonian: $h\rightarrow h+\delta h$. A recent experiment based on the GaAs system indicated that, the standard deviation for Overhauser noise is about $\sigma_{\delta h}=2 \pi \times 2\ \mathrm{MHz}$, when taking $h=\ 2 \pi \times 40\ \mathrm{MHz}$ \cite{Pascal.2020}. In a silicon-based semiconductor quantum dot, the isotopic purification technique can strongly suppress the Overhauser noise \cite{Huang.2019}. A previous work \cite{Kalra.14} has shown that it can be as low as $\delta h / h=2 \times 10^{-5}$. In this work we focus on the silicon system and therefore neglect the Overhauser noise effect. Meanwhile, in a silicon heterostructure it would introduce the unwanted valley-spin coupling leading to relaxation \cite{Klinovaja.2012,Jock.2022}. According to the latest experiment \cite{Ciriano-Tejel.2021}, the relaxation time with the state-of-the-art silicon-based platform has reached $T_1=9 \ \rm{s}$. In this way we also neglect the relaxation effect and assume the evolution is unitary. Considering the noise effect, below any mentioned rotation $U(\hat{r},\xi)$ or $R(J,\phi)$ is with the error term. In the following we call them the naive dynamical gates. In this work our aim is to design the geometric gate to improve the naive dynamical gates. \section{Single-qubit geometric gates} \label{sec:singlegeo} Here we introduce how to construct the geometric gate via the control Hamiltonian in Eq.~(\ref{eq:Hamiltonian1}), and the evolution time can be generally separated into three parts. 
The Hamiltonian in each interval should satisfy \begin{equation} \begin{array}{lll} H_{1}(t)=\frac{h}{2}\sigma_x+\frac{J_1}{2}\sigma_z,\ \ & T_{\rm{A}} \leqslant t\leqslant T_{\rm{B}} \\ H_{2}(t)=\frac{h}{2}\sigma_x+\frac{J_2}{2}\sigma_z,\ \ & T_{\rm{B}} < t \leqslant T_{\rm{D}} \\ H_{3}(t)=\frac{h}{2}\sigma_x+\frac{J_3}{2}\sigma_z,\ \ & T_{\rm{D}} < t\leqslant T, \end{array} \label {eq:Hamiltoniian-paths} \end{equation} with $J_{i}({i=1,2,3})$ in each part satisfying \begin{equation}\label{eq:paths} \begin{aligned} &\int_{T_{\rm{A}}}^{T_{\rm{B}}}\sqrt{J_1^2+h^2}dt=\frac{\pi}{2}-\theta+2 n_1 \pi,\ \ & T_{\rm{A}} \leqslant t\leqslant T_{\rm{B}} \\ &\int_{T_{\rm{B}}}^{T_{\rm{D}}}\sqrt{J_2^2+h^2}dt=\pi,\ \ & T_{\rm{B}} < t \leqslant T_{\rm{D}} \\ &\int_{T_{\rm{D}}}^{T}\sqrt{J_3^2+h^2}dt=\frac{\pi}{2}+\theta+2 n_2 \pi, & T_{\rm{D}} < t\leqslant T. \end{aligned} \end{equation} Here $\theta$ is the parameter determined by the chosen rotation axis, as seen below. $n_i\ (i=1,2)$ depends on the values of $\theta$: \begin{equation} n_1=\left\{\begin{array}{ll} 1, & \theta > \pi/2 \\ 0, & \theta \leqslant \pi/2 \\ \end{array}\right.\ \ n_2=\left\{\begin{array}{ll} 0, & \theta \geqslant -\pi/2 \\ 1, & \theta< -\pi/2 \\ \end{array}\right.. \end{equation} On the other hand, since $h$ retains a constant value during all the gate operation processing, we take $h=1$ as our energy unit in the remainder of this work. The exchange interaction value is therefore equivalent to the ratio $J/h$. The corresponding evolution operators in each segment are \begin{equation}\begin{array}{lll} U_1(t,T_{\rm{A}})=e^{-\frac{i}{\hbar} \int_{T_{\rm{A}}}^{t} H_1(t')d t'},\ \ & T_{\rm{A}} \leqslant t\leqslant T_{\rm{B}} \\ U_2(t,T_{\rm{B}})=e^{-\frac{i}{\hbar} \int_{T_{\rm{B}}}^{t} H_2(t')d t'},\ \ & T_{\rm{B}} < t\leqslant T_{\rm{D}} \\ U_3(t,T_{\rm{D}})=e^{-\frac{i}{\hbar} \int_{T_{\rm{D}}}^{t} H_3(t')d t'},\ \ & T_{\rm{D}} < t\leqslant T. \\ \end{array} \label{eq:evo} \end{equation} Taking $J_1=J_3$ throughout this work, the total evolution operator at the final time will be \begin{widetext} \begin{equation} \begin{aligned} U_{\rm{g}}(T,{T_{\rm{A}}}) &={U_3}\left({T},{T_{\rm{D}}}\right){U_2}\left( {T_{\rm{D}}},{T_{\rm{B}}}\right){U_1}\left({T_{\rm{B}}},{T_{\rm{A}}}\right)\\ &=\left( \begin{array}{ccc} \frac{i(J_1-J_2)\cos{\theta}-\sqrt{J_1^2+1}(J_1J_2+1)}{(J_1^2+1)\sqrt{J_2^2+1}} & \frac{(J_2-J_1)(\sqrt{J_1^2+1}\sin{\theta}+iJ_1\cos{\theta})}{(J_1^2+1)\sqrt{J_2^2+1}} \\ \frac{(J_1-J_2)(\sqrt{J_1^2+1}\sin{\theta}-iJ_1\cos{\theta})}{(J_1^2+1)\sqrt{J_2^2+1}} & -\frac{i(J_1-J_2)\cos{\theta}+\sqrt{J_1^2+1}(J_1J_2+1)}{(J_1^2+1)\sqrt{J_2^2+1}} \\ \end{array}\right). \label{eq:evolution1} \end{aligned} \end{equation} \end{widetext} By setting $J_1=0,\ J_2=\tan({\gamma/2})$, we can further simplify $U_{\rm{g}}(T,T_{\rm{A}})$ as \begin{small} \begin{equation}\label{eq:evolution2} \begin{aligned} U_{\rm{g}}(\hat{r},{\gamma}) &=\left(\begin{array}{ccc} -\cos{\frac{\gamma}{2}}-i\sin{\frac{\gamma}{2}}\cos{\theta} & \sin{\frac{\gamma}{2}}\sin{\theta} \\ -\sin{\frac{\gamma}{2}}\sin{\theta} & -\cos{\frac{\gamma}{2}}+i\sin{\frac{\gamma}{2}}\cos{\theta} \\ \end{array}\right)\\ &=-e^{-i \frac{\gamma}{2} ( \sin{\theta} \sigma_y-\cos{\theta} \sigma_z)}. \end{aligned} \end{equation} \end{small}$U_{\rm{g}}({\hat{r},\gamma})$ represents rotation around the axis $\hat{r}=(0,\sin\theta,-\cos\theta)$ on the $y$-$z$ plane by an angle $\gamma$, where the rotation axis is determined by $\theta$. 
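The construction can be verified numerically by multiplying the three square-pulse segments of Eqs.~(\ref{eq:Hamiltoniian-paths})--(\ref{eq:evo}) and comparing the product with Eq.~(\ref{eq:evolution2}). The sketch below (NumPy/SciPy, with $\hbar=h=1$) is a minimal consistency check rather than part of the noise simulations presented later; the comparison is made up to a global phase, since the $2n_{i}\pi$ additions in Eq.~(\ref{eq:paths}) only affect the overall sign.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def segment(J, duration, h=1.0):
    # Propagator of one constant-J piece of the piecewise Hamiltonian.
    return expm(-1j * (0.5 * h * sx + 0.5 * J * sz) * duration)

def U_geo(theta, gamma, h=1.0):
    # Three-piece evolution with J1 = J3 = 0 and J2 = tan(gamma / 2).
    J2 = np.tan(gamma / 2)
    t1 = (np.pi / 2 - theta) % (2 * np.pi)
    t2 = np.pi / np.sqrt(J2**2 + h**2)
    t3 = (np.pi / 2 + theta) % (2 * np.pi)
    return segment(0.0, t3) @ segment(J2, t2) @ segment(0.0, t1)

theta, gamma = np.pi / 2, np.pi / 4
U = U_geo(theta, gamma)
target = -expm(-1j * (gamma / 2) * (np.sin(theta) * sy - np.cos(theta) * sz))

# Gate overlap |Tr(U^dag target)| / 2 equals 1 when the two agree up to phase.
print(abs(np.trace(U.conj().T @ target)) / 2)
\end{verbatim}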
Other rotations that are out of the $y$-$z$ plane can be implemented by using either the sequence $y$-$z$-$y$ or $z$-$y$-$z$, similar to the case in Eq.~(\ref{eq:rotation-p1}). To demonstrate that $U_{\rm{g}}(\hat{r},{\gamma})$ is the desired geometric gate, we introduce the orthogonal dressed states: \begin{eqnarray}\label{eq:states} \left|\psi_{+}\right\rangle&=&\cos\frac{\theta}{2}|0\rangle -i \sin\frac{\theta}{2}|1\rangle, \notag\\ \left|\psi_{-}\right\rangle&=&i\sin\frac{\theta}{2} |0\rangle-\cos\frac{\theta}{2}|1\rangle. \end{eqnarray} For an arbitrary operator $U_{\rm{g}}({\hat{r},\gamma})$ with respect to $\theta$, its evolution can be visualized by the Bloch sphere using the dressed states, as shown in Fig.~\ref{fig:path}(a). The evolution of the dressed state $|\psi_{+}\rangle$ (the case for $|\psi_{-}\rangle$ is similar) follows the process as: \begin{equation} \begin{aligned} |\psi_{+}\rangle \stackrel{U_{1}}{\longrightarrow}|\psi_{+}^{\rm{B}}\rangle \stackrel{U_{2}}{\longrightarrow} |\psi_{+}^{\rm{D}}\rangle \stackrel{U_{3}}{\longrightarrow} e^{i(\frac{\gamma}{2}+\pi)}|\psi_{+}\rangle, \end{aligned}\label{eq:evpath} \end{equation} where \begin{equation} \begin{aligned} |\psi_{+}^{\rm{B}}\rangle&=\cos \frac{\pi}{4}|0\rangle-i\sin \frac{\pi}{4}|1\rangle,\\ |\psi_{+}^{\rm{D}}\rangle&=e^{i \pi} e^{i\frac{\gamma}{2}}\left(\cos \frac{\pi}{4}|0\rangle+i\sin \frac{\pi}{4}|1\rangle\right). \end{aligned}\label{eq:evpath2} \end{equation} Specifically, the dressed state starts from the given point A at the initial time $T_{\rm{A}}$. Under the action of $U_{1}(T_{\rm{B}},T_{\rm{A}})$, it travels along the longitude denoted by A-B to point B at $T_{\rm{B}}$ and the state turns to be $|\psi_{+}^{\rm{B}}\rangle$. Then it evolves to $|\psi_{+}^{\rm{D}}\rangle$ along the path denoted by B-C-D due to $U_{2}(T_{\rm{D}},T_{\rm{B}})$. Here the evolution path B-C-D is not along the longitude but a specific geodesic of the Bloch sphere. Finally, it goes back to the starting point A along path D-E-A owing to $U_{3}(T,T_{\rm{D}})$, where the path shares the same longitude as that of path A-B. The overall effect of $U(T,{T_{\rm{A}}})$ is to drive the dressed state $|\psi_{\pm}\rangle$ to fulfill a cyclic evolution with the path A-B-C-D-E-A and thus obtain a corresponding global phase $\pm(\frac{\gamma}{2}+\pi)$. Therefore, $U_{\rm{g}}(\hat{r},{\gamma})$ can also be described as \begin{equation} \begin{aligned} U_{\rm{g}}(\hat{r},{\gamma})=e^{i(\frac{\gamma}{2}+\pi)}|\psi_{+}\rangle\langle\psi_{+} |+e^{i(-\frac{\gamma}{2}-\pi)}|\psi_{-}\rangle\langle\psi_{-}|. \end{aligned}\label{eq:statedress} \end{equation} One can easily verify that for the designed evolution path in this work, the parallel transport condition \cite{erik.15} for the geometric gate is always satisfied for each segment, i.e., \begin{equation} \begin{array}{lll} \langle\psi_{\pm}| {U_1^{\dag}(t,T_{\rm{A}})}H_1(t)U_1(t,T_{\rm{A}})|\psi_{\pm}\rangle=0, \\ \langle\psi_{\pm}^{\rm{B}}|{U_2^{\dag}(t,T_{\rm{B}})}H_2(t)U_2(t,T_{\rm{B}})|\psi_{\pm}^{\rm{B}}\rangle=0,\\ \langle\psi_{\pm}^{\rm{D}}|{U_3^{\dag}(t,T_{\rm{D}})}H_3(t)U_3(t,T_{\rm{D}})|\psi_{\pm}^{\rm{D}}\rangle=0.\\ \end{array} \end{equation} Therefore the global phase that is obtained is the pure geometric phase, and $U_{\rm{g}}(\hat{r},{\gamma})$ represents the pure geometric gate. We note that this approach here is different from our previous one in Ref.~\cite{Zhang.2020}, where a silicon-based single-dot spin qubit rather than the singlet-triplet qubit is employed. 
The single-dot spin qubit there is resonantly driven by the microwave field. For the microwave-driven Hamiltonian, one can easily design the time-dependent phase such that the dressed states can evolve always along the longitude, and the dynamical phase is then canceled out. However, this is not applicable for the singlet-triplet qubit system here. For comparison, the path B-C-D during the second part in this work is no longer a longitude but a specific geodesic line determined by the exchange interaction. Meanwhile, as stated above, the small Rabi frequency in Ref.~\cite{Zhang.2020} would also prolong the gate duration and lead to complexity for the control system. \begin{figure} \includegraphics[width=1\columnwidth]{figure2.pdf} \caption{The pulse shapes for the geometric gates $U_{\rm{g}}(\hat{y},\pi/4)$ in (a) and $U_{\rm{g}}(\hat{z},\pi/4)$ in (b) are compared with that of the corresponding naive dynamical gates $U(\hat{y},\pi/4)$ and $U(\hat{z},\pi/4)$. The black solid line denotes the naive dynamical gate, while the red solid line implies the geometric gate. The time unit is determined by $t_0=1/h$. } \label{fig:pulse} \end{figure} Next we analyze the robustness of the geometric gates compared with the naive dynamical gates. Here we consider two typical rotations, i.e., $U_{\rm{g}}(\hat{y},\pi/4)$ [$U(\hat{y},\pi/4)$] and $U_{\rm{g}}(\hat{z},\pi/4)$ [$U(\hat{z},\pi/4)$]. For the geometric gate $U_{\rm{g}}(\hat{y},\pi/4)$ with $\theta=\pi/2$ and $\gamma=\pi/4$, the used parameters are $J_1=0$, $J_2=\tan(\pi/8)$, and the time duration with respect to each interval is $T_{\rm{AB}}=0$, $T_{\rm{BD}}=\pi\cos{(\pi/8)}$, and $T_{\rm{DA}}=\pi$, while for the corresponding naive dynamical gate $U(\hat{y},\pi/4)$ used in Eq.~(\ref{eq:dynamical naive}) we have $\xi_1=3\pi/2$, $\xi_2=\pi/4$, and $\xi_3=\pi/2$. For $U_{\rm{g}}(\hat{z},\pi/4)$, with $\theta=\pi$, $\gamma=\pi/4$, the parameters used are $J_1=0$, $J_2=\tan(\pi/8)$, $T_{\rm{AB}}=T_{\rm{DA}}=3\pi/2$, and $T_{\rm{BD}}=\pi\cos{(\pi/8)}$. For the naive dynamical gate $U(\hat{z},\pi/4)$ designed using the expression in Eq.~(\ref{eq:rotation-p2}), we have $\xi_0=\pi/4$. The pulse shapes for the geometric and the naive dynamical gates are plotted in Fig.~\ref{fig:pulse}. In Fig.~\ref{fig:robustness} we show their fidelities as a function of $\delta\epsilon$. Here $\delta\epsilon$ is assumed to be a quasi static noise and its time-dependent effect will be considered later. In all the considered region $-0.1\leqslant\delta\epsilon\leqslant0.1$, which implies $-0.1\leqslant\delta J/J\leqslant0.1$, the geometric gate can outperform its counterpart, i.e., the naive dynamical gate. We notice that for both two naive gates, when $\delta\epsilon$ becomes large the related fidelity drops quickly, whereas the fidelity for the geometric gates varies slowly as $\delta\epsilon$ is increasing. In addition, the advantage of the geometric gates over the naive ones becomes more and more pronounced when $\delta\epsilon$ is large. \begin{figure} \includegraphics[width=1\columnwidth]{figure3.pdf} \caption{The fidelity is as a function of $\delta\epsilon$, where the charge noise is $\delta J\rightarrow g[J]\delta \epsilon$. The geometric gates $U_{\rm{g}}(\hat{y},\pi/4)$ in (a) and $U_{\rm{g}}(\hat{z},\pi/4)$ in (b) are compared with the naive dynamical gates $U(\hat{y},\pi/4)$ and $U(\hat{z},\pi/4)$. 
} \label{fig:robustness} \end{figure} On the other hand, the performance of the geometric gate in the real experimental noise environment in a semiconductor quantum dot remains to be verified, where the noise typically varies over time. For the piecewise control Hamiltonian, the filter function \cite{Green.2012,Green.2013,Paz-Silva.2014} is a powerful tool to evaluate the fidelity for the time-dependent noise. The detail of the filter function is described in Appendix~\ref{appx:filter function}, where it is defined as $F_{i}(\omega)$ ($i=x,z$) in the frequency domain with the noise appearing in the $\sigma_{i}$ term of the Hamiltonian. Note that for the single-qubit case, the charge noise exists in the $\sigma_{z}$ direction. Therefore the filter function is \begin{equation}\label{eq:ffF} \begin{aligned} \mathcal{F}=1-\frac{1}{\pi} \int_{\omega_{\mathrm{ir}}}^{\omega_{\mathrm{uv}}} \frac{d\omega}{\omega^{2}} F_{z}(\omega)S(\omega), \end{aligned} \end{equation} where $S(\omega)$ is the noise power spectral density in the frequency domain, and $\omega_{\mathrm{uv}}$ and $\omega_{\mathrm{ir}}$ are the cutoff frequency. The filter functions for rotation $U_{\rm{g}}(\hat{y},\pi/4)$ [$U(\hat{y},\pi/4)$] and $U_{\rm{g}}(\hat{z},\pi/4)$ [$U(\hat{z},\pi/4)$] are shown in Figs.~\ref{fig:Filter} (a) and \ref{fig:Filter} (b), respectively. For both two rotations, the lines for the geometric gates are below the ones for the naive dynamical gates, which qualitatively implies the smaller infidelity for the geometric gates compared to their dynamical counterparts. \begin{figure} \includegraphics[width=0.95\columnwidth]{figure41.pdf} \includegraphics[width=0.95\columnwidth]{figure42.pdf} \includegraphics[width=0.95\columnwidth]{figure43.pdf} \caption{The filter function in panels (a) and (b) are responsible for $U_{\rm{g}}(\hat{y},\pi/4)$ [$U(\hat{y},\pi/4)$] and $U_{\rm{g}}(\hat{z},\pi/4)$ [$U(\hat{z},\pi/4)$], respectively. For the single-qubit case, we only consider the $z$-component noise which is described by $F_{z}(\omega)$. While the filter function for the two-qubit CZ gate is present in (c)- -(f). The superscript $i=1,2$ denotes the $i$th block, and the subscript $i=x,z$ denotes the noise appearing in the effective $\tilde{\sigma}_{i}$ term. The used parameters are $h/(2\pi)=1\ \rm{GHz}$ \cite{Nichol.17}, $S(\omega)=A_{J}/(\omega t_0)^{\alpha}$, with $A_{J} t_{0}=10^{-4}$, and $\alpha=1$. The cutoffs are $\omega_{\rm{ir}}=50\ \rm{kHz}$ and $\omega_{\rm{uv}}=1\ \rm{MHz}$. } \label{fig:Filter} \end{figure} Here we consider the $1/f^{\alpha}$ noise, which is the typical noise model to describe the time-dependent charge noise in a semiconductor quantum dot. The power spectral density with respect to the charge noise can be written as \cite{yang.2016} \begin{equation} S(\omega)=\frac{A_{J}}{(\omega t_0)^{\alpha}}, \label{1fnoise} \end{equation} where $A_{J}$ is the noise amplitude, and the exponent $\alpha$ denotes how much the noise is correlated. $t_0=1/h$ is the time unit, and we take $h/(2\pi)=1\ \rm{GHz}$ \cite{Nichol.17}. The noise amplitude can be determined by \cite{Zhang.2017} \begin{equation} \begin{aligned} \int_{\omega_{\mathrm{ir}}}^{\omega_{\mathrm{uv}}} \frac{A_{J}}{\left(\omega t_{0}\right)^{\alpha}}d\omega=\pi\left(\frac{\sigma_{J}}{Jt_0}\right)^{2}, \label{eq:integ} \end{aligned} \end{equation} where $\sigma_{J}$ represents the standard deviation for charge noise. 
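Equation~(\ref{eq:integ}) fixes the noise amplitude once the control-dependent ratio $\sigma_{J}/J$, the exponent $\alpha$, and the frequency cutoffs are specified; the values adopted in this work are quoted in the next paragraph. The sketch below evaluates $A_{J}t_{0}$ for the barrier- and tilt-control cases as a simple numerical cross-check (for $\alpha=1$ only the ratio of the cutoffs matters); the quadrature routine and unit choices are implementation details, not part of the cited experiments.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

h = 2 * np.pi * 1e9                     # rad/s, i.e. h / (2 pi) = 1 GHz
t0 = 1.0 / h                            # time unit
w_ir, w_uv = 2 * np.pi * 50e3, 2 * np.pi * 1e6
alpha = 1.0

def A_J_t0(sigma_over_J):
    # Solve  A_J * int dw / (w t0)^alpha = pi * (sigma_J / (J t0))^2  for A_J.
    integral, _ = quad(lambda w: 1.0 / (w * t0) ** alpha, w_ir, w_uv)
    A_J = np.pi * (sigma_over_J / t0) ** 2 / integral
    return A_J * t0                     # dimensionless combination A_J t0

print(A_J_t0(0.00426))   # barrier control, ~2.0e-5
print(A_J_t0(0.0563))    # tilt control,    ~3.3e-3
\end{verbatim}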
Typically, in a semiconductor quantum dot environment~\cite{Barnes.2016} we have $\alpha=1$ for the charge noise, and the cutoffs are $\omega_{\mathrm{ir}}=50\ \mathrm{kHz}$ and $\omega_{\mathrm{uv}}=1\ \mathrm{MHz}$. In experiments, the detuning can be operated via either symmetric (barrier) control or tilt control, which corresponds to $\sigma_{J}/J=0.00426$ for barrier control and $0.0563$ for tilt control, respectively \cite{Martins.2016}. Therefore the noise amplitude range is about $2.0\times10^{-5}\leqslant A_{J}t_{0}\leqslant 3.3\times10^{-3}$. In our simulation we consider a medium value of $A_{J}t_{0}=10^{-4}$ unless specified otherwise. We find that the fidelities related to $U_{\rm{g}}(\hat{y},\pi/4)$ and $U(\hat{y},\pi/4)$ are $\mathcal{F}_{\rm{nai}}=99.974\%$ and $\mathcal{F}_{\rm{geo}}=99.997\%$, while for $U_{\rm{g}}(\hat{z},\pi/4)$ and $U(\hat{z},\pi/4)$ the fidelities are likewise $\mathcal{F}_{\rm{nai}}=99.974\%$ and $\mathcal{F}_{\rm{geo}}=99.997\%$. \begin{figure} \includegraphics[width=1\columnwidth]{figure51.pdf} \includegraphics[width=1\columnwidth]{figure52.pdf} \caption{Randomized benchmarking for naive dynamical and geometric gates under $1/f^{\alpha}$ charge noise. The standard randomized benchmarking results are shown in (a) and (b), while the interleaved randomized benchmarking results are shown in (c) and (d) with respect to $U_{\rm{g}}(\hat{y},\pi/4)$ [$U(\hat{y},\pi/4)$] and $U_{\rm{g}}(\hat{z},\pi/4)$ [$U(\hat{z},\pi/4)$], respectively. The parameters used are $h/(2\pi)=1\ \rm{GHz}$ \cite{Nichol.17} and $S(\omega)=A_{J}/(\omega t_0)^{\alpha}$ with $A_{J} t_{0}=10^{-4}$. The noise exponent is $\alpha=1$ and 2, corresponding to the left and right column, respectively. The cutoffs are $\omega_{\rm{ir}}=50\ \rm{kHz}$ and $\omega_{\rm{uv}}=1\ \rm{MHz}$.} \label{fig:RB} \end{figure} Besides the filter function, randomized benchmarking \cite{Emerson.2005,Knill.2008,Easwar.2012} is another effective technique to estimate the average error for either all the gates on the Bloch sphere or a specific gate from the Clifford group. The former is related to standard benchmarking, while the latter to interleaved benchmarking. The basic idea of standard randomized benchmarking \cite{Wang.2014} is that, for a given noise spectrum, we average the fidelity over many gate sequences which are randomly drawn from the single-qubit Clifford group composed of 24 specific gate operations, and over random noise realizations. For each run of the sequences in our simulation, the noise takes the $1/f$ form described in Eq.~(\ref{1fnoise}). The interleaved benchmarking is a slight variant of the standard randomized benchmarking, where the specific gate to be estimated and the randomly chosen Clifford gate sequence interleave with each other \cite{Easwar.2012}. To ensure convergence, we have averaged the benchmarking over 1000 realizations. The standard randomized benchmarking results are shown in Figs.~\ref{fig:RB} (a) and \ref{fig:RB} (b). By fitting the resulting fidelity curve to $\left(1+e^{-d n}\right) / 2$, one can obtain the average error per gate $d$, where $n$ denotes the number of Clifford gates used \cite{yang.2016}. The corresponding average fidelity per gate is therefore $\mathcal{F}=1-d$. For the typical value of $\alpha=1$ in Fig.~\ref{fig:RB} (a), the average fidelities for the naive dynamical gate and the geometric gate are $\mathcal{F}_{\rm{nai}}=99.965\%$ and $\mathcal{F}_{\rm{geo}}=99.972\%$.
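The extraction of $d$ from the averaged sequence fidelities amounts to a one-parameter fit of the decay model quoted above. The sketch below illustrates this step with SciPy's \texttt{curve\_fit} on synthetic data; the decay rate used to generate the fake sequence fidelities is an arbitrary placeholder and not one of the benchmarking results of this work.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(n, d):
    # Average sequence fidelity model: (1 + exp(-d n)) / 2.
    return 0.5 * (1.0 + np.exp(-d * n))

# Synthetic averaged sequence fidelities (placeholder decay rate d = 3e-4).
n = np.arange(1, 2001, 50)
rng = np.random.default_rng(2)
data = rb_decay(n, 3e-4) + rng.normal(0, 1e-3, n.size)

popt, _ = curve_fit(rb_decay, n, data, p0=[1e-3])
d_fit = popt[0]
print("average error per gate d:", d_fit)
print("average fidelity per gate:", 1.0 - d_fit)
\end{verbatim}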
On the other hand, the charge noise spectrum with $\alpha=2$ \cite{Struck.2020} has also been observed in a recent experiment. The corresponding randomized benchmarking result is shown in Fig.~\ref{fig:RB} (b), where the fidelities for the two types of gates are $\mathcal{F}_{\rm{nai}}=99.468\%$ and $\mathcal{F}_{\rm{geo}}=99.565\%$. With the standard randomized benchmarking results in mind, one can further calculate the interleaved randomized benchmarking fidelity \cite{Easwar.2012} as $\mathcal{F}_{\rm{in}}=1- \left(1-p_{\rm{in}} / p_{\rm{st}}\right)/2$, where $p_{\rm{st}}$ and $p_{\rm{in}}$ are the depolarizing parameters for the standard randomized benchmarking and the interleaved randomized benchmarking, respectively, which are determined by $p=e^{-d}$. The interleaved randomized benchmarking results for the gates $U_{\rm{g}}(\hat{y},\pi/4)$ and $U(\hat{y},\pi/4)$ in Fig.~\ref{fig:RB} (c) are $\mathcal{F}_{\rm{nai}}=99.980\%$ and $ \mathcal{F}_{\rm{geo}}=99.998\%$, while for the gates $U_{\rm{g}}(\hat{z},\pi/4)$ and $U(\hat{z},\pi/4)$ in Fig.~\ref{fig:RB} (d) the results are $\mathcal{F}_{\rm{nai}}=99.970\%$ and $ \mathcal{F}_{\rm{geo}}=99.995\%$. \begin{figure} \includegraphics[width=1\columnwidth]{figure61.pdf} \includegraphics[width=1\columnwidth]{figure62.pdf} \caption{Average error per gate $d$ vs charge noise amplitude $A_{J}t_0$ for $1/f^{\alpha}$ noise, where the noise exponent $\alpha=1$ in (a) and 2 in (b). (c) Improvement ratio $\kappa$ vs $\alpha$. } \label{fig:error} \end{figure} To fully reveal the superiority of the geometric gate, we further consider its performance compared to the naive dynamical one for a wide range of $\alpha$ and noise amplitudes $A_{J}$. In Figs.~\ref{fig:error} (a) and \ref{fig:error} (b) we show the standard average error per gate $d$ as a function of the noise amplitude $A_{J}t_0$ considering the $\alpha$ values related to the recent experiment \cite{Struck.2020}. For $\alpha=1$ in Fig.~\ref{fig:error} (a), when $A_{J}t_0$ is small, the two lines for the naive and geometric gates are almost parallel (the related experimental noise amplitude $A_{J}t_{0}$ is from $10^{-5}$ to $10^{-3}$). However, when $A_{J}t_0$ is large enough (surpassing $10^{-3}$), these two lines are increasingly overlapping, which means there is no improvement for the naive dynamical gates. For $\alpha=2$ in Fig.~\ref{fig:error} (b), the line standing for the geometric gate is lower than the one for the naive dynamical gates in the whole considered noise amplitude. This means the geometric gate is more powerful for large $\alpha$. In Fig.~\ref{fig:error} (c) we further introduce an improvement ratio $\kappa$, which is defined as the error of the naive dynamical gates divided by that of the geometric ones in the small noise amplitude region. We can see that this improvement ratio $\kappa$ is increasing as $\alpha$ becomes larger. When $\alpha\leqslant0.9$, $\kappa$ is less than 1, which means the geometric gate performs worse. For the typical noise exponent $\alpha=1$, we have $\kappa=1.1$, while for the case $\alpha=2.85$, the improvement ratio is 1.56. \section{Two-qubit geometric gates}\label{sec:twoqubit} The Hamiltonian of the exchange-coupled two singlet-triplet qubits, as shown in Fig.~\ref{fig:path}(b), is in an Ising-like form \cite{EdwinBarnes.2022}, namely, the qubits are coupled via the form of $\sigma_{z}\otimes\sigma_{z}$ interaction. 
This model is also applied for the capacitively coupled charge qubits in the semiconductor quantum dot \cite{Shinkai.09}, the superconducting transmon qubits operating in the dispersive regime \cite{Collodo.20}, and also the nuclear magnetic resonance (NMR) system \cite{Vandersypen.05}. For our case, the two-qubit Hamiltonian for the two double-dot systems is \cite{Klinovaja.2012,Li.2012} \begin{widetext} \begin{equation}\label{eq:Hamiltonian2} \begin{aligned} H_{d0} &=\frac{J_{12}}{2}\tilde{\sigma}_a^x \otimes \tilde{I}_b +\frac{J_{34}}{2}\tilde{I}_a\otimes\tilde{\sigma}_b^x -\frac{J_{23}}{4}\tilde{\sigma}_a^z\otimes\tilde{\sigma}_b^z+h_a\tilde{\sigma}_a^z\otimes \tilde{I}_b+h_b\tilde{I}_a\otimes\tilde{\sigma}_b^z \\ &=\left(\begin{array}{cccc} -J_{23}/4+h_b+h_a & J_{34}/2 & J_{12}/2 & 0 \\ J_{34}/2 & J_{23}/4-h_b+h_a & 0 & J_{12}/2 \\ J_{12}/2 & 0 & J_{23}/4+h_b-h_a & J_{34}/2 \\ 0 & J_{12}/2 & J_{34}/2 & -J_{23}/4-h_b-h_a \end{array}\right). \end{aligned} \end{equation} For convenience, here we have redefined the basis states as $\{|\tilde{0}\tilde{0}\rangle,\ |\tilde{0}\tilde{1}\rangle,\ |\tilde{1}\tilde{0}\rangle,\ |\tilde{1}\tilde{1}\rangle\}=\{| \uparrow_a\downarrow_a,\uparrow_b\downarrow_b\rangle,\ | \uparrow_a\downarrow_a,\downarrow_b\uparrow_b\rangle,\ | \downarrow_a \uparrow_a,\uparrow_b\downarrow_b\rangle,\ |\downarrow_a\uparrow_a,\downarrow_b\uparrow_b\rangle\}$. In the following we use this basis state to describe the two-qubit operation. (The operation under the logical basis is shown in Appendix~\ref{appx:logic}.) The Pauli matrices are thus slightly different from before: \begin{equation} \begin{aligned} \tilde{\sigma}_i^x &= |\uparrow\downarrow\rangle_i\langle\downarrow\uparrow|_i + |\downarrow\uparrow\rangle_i\langle\uparrow\downarrow|_i, \\ \tilde{\sigma}_i^z &= |\uparrow\downarrow\rangle_i\langle\uparrow\downarrow|_i - |\downarrow\uparrow\rangle_i\langle\downarrow\uparrow|_i, \\ \tilde{I}_i&=|\uparrow\downarrow\rangle_i\langle\uparrow\downarrow|_i + |\downarrow\uparrow\rangle_i\langle\downarrow\uparrow|_i, \end{aligned} \end{equation} where $i=a,b$ denotes the qubit number, and $h_{i}$ is the gradient for each qubit. One can easily find that $\tilde{\sigma}_i^x=\sigma_i^z$ and $\tilde{\sigma}_i^z=\sigma_i^x$. $J_{k,k+1} (k=1,2,3)$ denotes exchange couplings between neighboring dots. Assuming $ h_a,\ h_b\ll J_{23}$ \cite{Li.2012} and setting $J_{12}=0$, $H_{d0}$ turns to be a block-diagonal matrix: \begin{equation}\label{eq:Hamiltonian3} \begin{aligned} H_d=\left(\begin{array}{cccc} -J_{23}/4 & J_{34}/2 & 0 & 0 \\ J_{34}/2 & J_{23}/4 & 0 & 0 \\ 0 & 0 & J_{23}/4 & J_{34}/2 \\ 0 & 0 & J_{34}/2 & -J_{23}/4 \end{array}\right). \end{aligned} \end{equation} In this way we can decompose $H_{d}$ into two independent subsystems with each subsystem being a $2 \times 2$ matrix: \begin{equation} H_{d1}=\left(\begin{array}{ll} -J_{23}/4 & J_{34}/2 \\ J_{34}/2 & J_{23}/4 \\ \end{array}\right), H_{d2}=\left(\begin{array}{ll} J_{23}/4 & J_{34}/2 \\ J_{34}/2 & -J_{23}/4 \\ \end{array}\right). \label{eq:25} \end{equation} Thus we can treat each block as the single-qubit case and design the corresponding geometric operation similar so that in Sec.~\ref{sec:singlegeo}. For the first block $H_{d1}$, we aim to design a single geometric gate. 
Similarly to the single-qubit case in Eq.~(\ref{eq:paths}) the three-piece evolution needs to satisfy \begin{equation}\label{eq:path2} \begin{aligned} &\int_{T_{\rm{A}}}^{T_{\rm{B}}}\sqrt{(-\frac{J_{23}^{(1)}}{4})^2 +(\frac{J_{34}}{2})^2}dt=\frac{\frac{\pi}{2}-\chi}{2}+2 m_1\pi, & T_{\rm{A}} \leqslant t \leqslant T_{\rm{B}} \\ &\int_{T_{\rm{B}}}^{T_{\rm{D}}}\sqrt{(-\frac{J_{23}^{(2)}}{4})^2 +(\frac{J_{34}}{2})^2}dt=\pi/2, & T_{\rm{B}} < t \leqslant T_{\rm{D}} \\ &\int_{T_{\rm{D}}}^{T}\sqrt{(-\frac{J_{23}^{(3)}}{4})^2+(\frac{J_{34}}{2})^2}dt =\frac{\frac{\pi}{2}+\chi}{2}+2 m_2\pi, & T_{\rm{D}} < t \leqslant T, \end{aligned} \end{equation} while for the second block $H_{d2}$, the geometric gate requires \begin{equation}\label{eq:path3} \begin{aligned} &\int_{T_{\rm{A}}}^{T_{\rm{B}}}\sqrt{(\frac{J_{23}^{(1)}}{4})^2 +(\frac{J_{34}}{2})^2}dt=\frac{\frac{\pi}{2}-\chi}{2}+2 m_1\pi, & T_{\rm{A}} \leqslant t \leqslant T_{\rm{B}} \\ &\int_{T_{\rm{B}}}^{T_{\rm{D}}}\sqrt{(\frac{J_{23}^{(2)}}{4})^2+(\frac{J_{34}}{2})^2}dt=\pi/2, & T_{\rm{B}} < t \leqslant T_{\rm{D}} \\ &\int_{T_{\rm{D}}}^{T}\sqrt{(\frac{J_{23}^{(3)}}{4})^2+(\frac{J_{34}}{2})^2}dt =\frac{\frac{\pi}{2}+\chi}{2}+2 m_2\pi, & T_{\rm{D}} < t \leqslant T, \end{aligned} \end{equation} where $\chi$ is similar to $\theta$, as shown in the single-qubit case, whose parameter is determined by the chosen rotation axis as seen below. In addition, we assume $J_{23}^{(1)}=J_{23}^{(3)}$. $m_i\ (i=1,2)$ depends on the values of $\chi$: \begin{equation} m_1=\left\{\begin{array}{ll} 1, & \chi > \pi/2 \\ 0, & \chi \leqslant \pi/2 \\ \end{array}\right.\ \ m_2=\left\{\begin{array}{ll} 0, & \chi \geqslant -\pi/2 \\ 1, & \chi< -\pi/2 \\ \end{array}\right.. \end{equation} Here we assume $J_{23}^{(i)}$ ($i=1,2,3$) is time dependent and the others remain unchanged. By setting $J_{23}^{(1)}=J_{23}^{(3)}=0$ and $J_{23}^{(2)}=2 J_{34}\tan{(\gamma/2)}$, one can acquire a two-qubit perfect entangling gate depending on the chosen value of $\gamma$ as \begin{small} \begin{equation}\label{eq:U2} \begin{aligned} U_{\rm{ent}}=\left(\begin{array}{cccc} -\cos{\frac{\gamma}{2}}+i\cos{\chi}\sin{\frac{\gamma}{2}}& -\sin{\frac{\gamma}{2}}\sin{\chi} & 0& 0 \\ \sin{\frac{\gamma}{2}}\sin{\chi}& -\cos{\frac{\gamma}{2}}-i\cos{\chi}\sin{\frac{\gamma}{2}} & 0& 0 \\ 0& 0& -\cos{\frac{\gamma}{2}}-i\cos{\chi}\sin{\frac{\gamma}{2}} & \sin{\frac{\gamma}{2}}\sin{\chi} \\ 0& 0& -\sin{\frac{\gamma}{2}}\sin{\chi} & -\cos{\frac{\gamma}{2}}+i\cos{\chi}\sin{\frac{\gamma}{2}} \end{array}\right), \end{aligned} \end{equation} \end{small} \end{widetext} where the so-called perfect entangling gate can generate the maximally entangled states, e.g., the $\footnotesize{\text{CNOT}}$ gate \cite{Calderon-Vargas.2015}. Generally, whether a two-qubit gate belongs to a perfect entangling gate can be verified by calculating the local invariants with respect to the matrix of this gate. A detailed description of the local invariant is given in Appendix~\ref{appx:entangling}, where the local invariant $G_{i}$ ($i=1,2,3$) is defined. When taking $\gamma=\pi/2$, we calculate the local invariants of $U_{\rm{ent}}$: $G_1=G_2=0,G_3=1$, which satisfies the condition for the perfect entangling operation \cite{Calderon-Vargas.2015}. 
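This can also be cross-checked numerically: the sketch below builds $U_{\rm{ent}}$ from Eq.~(\ref{eq:U2}) at $\gamma=\pi/2$ and compares its local (Makhlin-type) invariants with those of the standard controlled-phase matrix. We assume here the common convention in which the complex invariant is split into real and imaginary parts; the normalization of Appendix~\ref{appx:entangling} may differ, so the code should be read as an independent consistency check rather than a reproduction of the appendix.
\begin{verbatim}
import numpy as np

def u_ent(gamma, chi):
    c, s = np.cos(gamma / 2), np.sin(gamma / 2)
    v1 = np.array([[-c + 1j * np.cos(chi) * s, -s * np.sin(chi)],
                   [s * np.sin(chi), -c - 1j * np.cos(chi) * s]])
    v2 = np.array([[-c - 1j * np.cos(chi) * s, s * np.sin(chi)],
                   [-s * np.sin(chi), -c + 1j * np.cos(chi) * s]])
    U = np.zeros((4, 4), dtype=complex)
    U[:2, :2], U[2:, 2:] = v1, v2
    return U

def local_invariants(U):
    # Invariants evaluated in the magic (Bell) basis; two gates with equal
    # invariants are equivalent up to single-qubit rotations.
    Q = np.array([[1, 0, 0, 1j], [0, 1j, 1, 0],
                  [0, 1j, -1, 0], [1, 0, 0, -1j]]) / np.sqrt(2)
    UB = Q.conj().T @ U @ Q
    m = UB.T @ UB
    g1 = np.trace(m) ** 2 / (16 * np.linalg.det(U))
    g2 = (np.trace(m) ** 2 - np.trace(m @ m)) / (4 * np.linalg.det(U))
    return np.round([g1.real, g1.imag, g2.real], 6)

CZ = np.diag([1, 1, 1, -1]).astype(complex)
print(local_invariants(u_ent(np.pi / 2, 0.3)))  # chi value is arbitrary
print(local_invariants(CZ))                     # both give (0, 0, 1)
\end{verbatim}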
It is of great interest that if we further take $\chi=0$, $U_{\rm{ent}}$ is equivalent to a controlled-phase ($\footnotesize{\text{CZ}}$) gate \cite{Watson.2018}: \begin{equation}\label{eq:UCZ} \begin{aligned} U_{\rm{CZ}}=e^{-i\frac{\gamma}{2}}\left(\begin{array}{cccc} 1& 0 & 0& 0 \\ 0& e^{i\gamma} & 0& 0 \\ 0& 0& e^{i\gamma} & 0 \\ 0& 0& 0 & 1 \end{array}\right). \end{aligned} \end{equation}
The control Hamiltonian in Eq.~(\ref{eq:Hamiltonian3}) can alternatively implement a dynamical two-qubit gate, using a method similar to that in Sec.~\ref{sec:model}. For the first block, an arbitrary rotation can be obtained via a composite pulse sequence like Eq.~(\ref{eq:dynamical naive}), \begin{equation} \begin{aligned} U_{d1}(\hat{\tilde{r}},\eta)=& U(\hat{\tilde{x}},\eta_1)U(-\hat{\tilde{z}}+\hat{\tilde{x}},\pi)U(\hat{\tilde{x}},\eta_2) \\ & U(-\hat{\tilde{z}}+\hat{\tilde{x}}, \pi)U(\hat{\tilde{x}},\eta_3), \label{eq:dynamical naive21} \end{aligned} \end{equation} while for the second block, \begin{equation} \begin{aligned} U_{d2}(\hat{\tilde{r}},\eta)=& U(\hat{\tilde{x}},\eta_1)U(\hat{\tilde{z}}+\hat{\tilde{x}},\pi)U(\hat{\tilde{x}},\eta_2) \\ & U(\hat{\tilde{z}}+\hat{\tilde{x}}, \pi)U(\hat{\tilde{x}},\eta_3). \label{eq:dynamical naive22} \end{aligned} \end{equation} To acquire the dynamical perfect entangling gate $U_{\rm{ent}}$, we set $\eta_1=\pi+\chi$, $\eta_2=\gamma$, and $\eta_3=\pi-\chi$. Normally, it is difficult to perform a two-qubit randomized benchmarking simulation, since there are more than 10000 elements in the two-qubit Clifford group. We therefore calculate the fidelity of $U_{\rm{ent}}$ via the filter function, similarly to the single-qubit case. The charge noise leads to errors in both the effective $\tilde{\sigma}_{z}$ and $\tilde{\sigma}_{x}$ terms of the Hamiltonian for each block: $\delta J_{23} \propto J_{23}$ and $\delta J_{34} \propto J_{34}$. For simplicity, here we assume these two noise sources are independent of each other. For each block, the fidelity is \begin{equation} \begin{aligned} \mathcal{F}^{(i)}\simeq& 1-\frac{1}{\pi} \int_{\omega_{\mathrm{ir}}}^{\omega_{\mathrm{uv}}} \frac{\mathrm{d} \omega}{\omega^{2}} [S(\omega) \tilde{F}_{x}^{(i)}(\omega)+S(\omega) \tilde{F}_{z}^{(i)}(\omega)], \label{eq:f3} \end{aligned} \end{equation} where $\mathcal{F}^{(i)}$ ($i=1,2$) denotes the fidelity for the $i$th block, while $\tilde{F}_{x}^{(i)}$ and $\tilde{F}_{z}^{(i)}$ represent the $\tilde{x}$- and $\tilde{z}$-component filter functions for the corresponding block. The filter function results are shown in Figs.~\ref{fig:Filter}(c)--\ref{fig:Filter}(f). We find that the curves for the geometric gates lie below those for the naive dynamical gates, indicating that the geometric gates in each block have higher fidelity than the naive gates. With the fidelity of each block in mind, we can further calculate the fidelity for the entire evolution matrix. The specific expression is derived in Appendix~\ref{appx:two qubit fidelity}, where \begin{equation}\label{eq:fidelity5} \mathcal{F}=\frac{1}{5}+\frac{1}{5}(\mathcal{F}^{(1)}+\mathcal{F}^{(2)})^2. \end{equation}
In Table~\ref{ap:table} we show the fidelity for several values of $\chi$. In all cases, the fidelities of the geometric gates surpass those of their dynamical counterparts. Over the region $-\pi/2 \leqslant \chi \leqslant \pi/2$, the fidelity of the geometric gates surpasses 99\%. 
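As a consistency check of Eq.~(\ref{eq:fidelity5}), the block fidelities listed in Table~\ref{ap:table} reproduce the quoted overall values; for example, for the geometric gate at $\chi=-\pi/2$:
\begin{verbatim}
# Eq. (fidelity5): F = 1/5 + (1/5) * (F1 + F2)^2
F1 = F2 = 0.99692              # geometric block fidelities at chi = -pi/2
F = 0.2 + 0.2 * (F1 + F2) ** 2
print(F)                       # ~0.99508, matching the overall value in the table
\end{verbatim}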
Specifically, when taking $\chi=-\pi/2$, the fidelity of the geometric gate reaches its largest value of $99.508\%$. \begin{table} \caption{Fidelity of the perfect entangling gate $U_{\rm{ent}}$ for several values of $\chi$, where $\gamma=\pi/2$. When taking $\chi=0$, $U_{\rm{ent}}$ is equivalent to a $\footnotesize{\text{CZ}}$ gate \cite{Watson.2018}. $\mathcal{F}_{\rm{nai}}^{(i)}$ and $\mathcal{F}_{\rm{geo}}^{(i)}$ denote the fidelities of the naive dynamical and geometric gates for the $i$th block, respectively. $\mathcal{F}_{\rm{nai}}$ and $\mathcal{F}_{\rm{geo}}$ are the overall fidelities.} \scalebox{0.85}{ \begin{tabularx}{9.5cm}{ccccccc} \hline \hline $\chi$ \ \ \ & $\mathcal{F}^{(1)}_{\rm{nai}}$ \ \ \ & $\mathcal{F}^{(1)}_{\rm{geo}}$ \ \ \ & $\mathcal{F}^{(2)}_{\rm{nai}}$ \ \ \ & $\mathcal{F}^{(2)}_{\rm{geo}}$ \ \ \ & $\mathcal{F}_{\rm{nai}}$ \ \ \ & $\mathcal{F}_{\rm{geo}}$ \\ \hline $-\pi/2$ & 98.538\% & 99.692\% & 98.538\% & 99.692\% & 97.678\% & 99.508\% \\ $ -\pi/4$ & 98.525\% & 99.688\% & 98.525\% & 99.688\% & 97.658\% & 99.503\% \\ $0$ & 98.509\% & 99.685\% & 98.509\% & 99.685\% & 97.633\% & 99.496\% \\ $\pi/4$ & 98.490\% & 99.680\% & 98.490\% & 99.678\% & 97.603\% & 99.488\% \\ $\pi/2$ & 98.451\% & 99.674\% & 96.859\% & 98.451\% & 97.540\% & 99.479\% \\ \hline \hline \end{tabularx}} \label{ap:table} \end{table} \section{Conclusion}\label{sec:conclusion} In conclusion, we have proposed a framework to realize nonadiabatic geometric gates for singlet-triplet qubits in semiconductor quantum dots. By only modulating the time-dependent exchange interaction between neighboring quantum dots, both single- and two-qubit geometric gates can be implemented without introducing an extra microwave field. The results clearly show that the achieved geometric gate is not only superior to its counterpart, the naive dynamical gate, with a high fidelity surpassing 99\%, but can also realize high-speed gate operation with a gating time of nanoseconds. Our results indicate the superiority of the geometric gate, which has great potential for implementing robust quantum computing. \section*{ACKNOWLEDGMENTS}\label{sec:ack} This work was supported by the National Natural Science Foundation of China (Grants No. 11905065 and No. 11874156), and the Science and Technology Program of Guangzhou (Grant No. 2019050001).
\subsection*{The scheme} The model we consider is a quantum simulator that has the underlying ability to be a universal quantum computer (one that can run an arbitrary algorithm), and the task is to learn how to use it as such. For this reason, we take the simulator as consisting of $n$ qubits with some interactions between them such that they form a fully connected graph where every qubit is (directly or indirectly) interacting with every other qubit. Furthermore we require that the timescale associated with this interaction is much shorter than the decoherence time in order for significant entanglement to be built up. In addition to this we need the ability to perform the following operations on each qubit individually: preparation in a complete basis set of states, fast rotations by applying strong Hamiltonians, and measurement in a complete basis set. Such a system can be described by the Hamiltonian \begin{equation} H = \sum_i\left(f_x^i(t) \sigma_x^i + f_y^i(t) \sigma_y^i\right) + \sum_{<i,j>} H^{i, j}, \label{eq:systemHam} \end{equation} where the first sum is over all qubits and the time-dependent control functions $f(t)$ are to be determined. The second sum is over all connected qubits and $H^{i, j}$ is the interaction between qubit $i$ and $j$. The choice of $\sigma_x$ and $\sigma_y$ for the controls is for convenience, any two Hamiltonians will work, and does not have to be the same for all qubits. As the controls are found \emph{in situ}, it is not necessary to know beforehand the form of the interactions $H^{i, j}$. These requirements are significant, but much easier than demanding direct control over two qubit operations, and correspond to the state-of-the-art in systems involving trapped ions \cite{Johanning2009, Lanyon2011, Blatt2012}, cold atoms \cite{Bloch2012, Labuhn2015}, NMR \cite{Peng2010, Cai2013, Silva2016} or superconducting circuits \cite{Houck2012, OMalley2015}. In these systems there already exist quantum simulators powerful enough to do simulations, and satisfy our requirements, but are not currently usable as computers as it is not known how to perform logic gates on them \cite{DiVincenzo2000, Johnson2014}. Our numerical results show that the scheme developed here scales well for a range of systems where $H^{i, j}$ is of the Ising type ($\sigma_z\otimes\sigma_z$). As Ising machines are useful for a wide range of quantum simulations and can be built with many different technologies \cite{Lanyon2011, Labuhn2015, McMahon2016}, this is a result with wide ranging applicability. In the model we consider, the connectedness of the qubits and the ability to do fast single qubit operations guarantees that the two core requirements of our proposed optimisation scheme are satisfied: there exists a universal gate set that can be reached at short times \cite{Dodd2002}, and process tomography can be performed \cite{Poyatos1997}. While other systems satisfy these requirements and the approach detailed here would work, we focus on this model for clarity. As single qubit operations are assumed, the gates that controls are needed for are entangling ones, canonically the controlled-not (C-NOT) gate; these are vastly harder to perform using conventional methods and typically have much lower fidelities. The steps for finding such a gate in the \emph{in situ} scheme are outlined in \figref{fig:flowchart}. 
These are very general and almost the same as in classical numerical optimisation: a guess for the optimal values is generated, these are fed into a function that computes their effect, the distance between this and the desired outcome is calculated, and this generates another guess for the optimal values. This is iterated until the values get close enough to the goal, or the process terminates unsuccessfully after some timeout condition is reached. The difficulty with doing this for a quantum simulator of the type discussed above is in computing what unitary is produced by a given choice of control parameters; this requires both an accurate model of a high dimensional system and an exponentially large classical computer to solve it. Neither of those things can be done for a quantum system large enough to be an interesting quantum computer. \begin{figure} \centering \includegraphics[trim = 1.5cm 0.5cm 1.5cm 0cm, width=.85\columnwidth]{flowchart} \caption{Outline of the process used in optimal control, the two red steps in the middle are done \emph{in situ} in our scheme while the others are classical processing. The starting point is an initial set of controls that parametrise the strength of the control Hamiltonians over the gate duration, in our examples these are generated randomly. The evolution of the system with these parameters is then calculated. On a classical computer this requires solving the time-dependent Schr\"odinger equation numerically for a model of the system, while in our scheme this is simply implementing the controls on the simulator. Evaluating the gate fidelity in the classical case is straightforward but, when done in situ, requires some form of tomography to measure it. We derived a tight bound for this gate fidelity in \eqref{eq:fidbound} that can be measured efficiently. If this fidelity is above a threshold, the process terminates successfully, otherwise the control parameters are updated based on the results of the latest and previous runs, and the process repeats. Such an approach can be used in a wide range of contexts, such as to perform quantum logic using random walks \cite{Lahini2018}.} \label{fig:flowchart} \end{figure} We eliminate these twin difficulties by using the quantum simulator to compute the effects of the control pulse on itself. This works because the simulator with a trial set of controls is guaranteed to be an accurate model of itself with those controls. The propagation step is therefore done \emph{in situ}, but the method by which the control parameters are updated remains purely classical. This is because the information extracted from the quantum simulator (the gate fidelity) and the parametrisation of the control pulses are purely classical. An upshot of this is that the myriad of different methods to do numerical optimisation that have already been developed and work for quantum systems can be used in this protocol directly. In order to use the quantum simulator as a universal computer, this optimisation procedure needs to be repeated for a universal set of gates. As single qubit operations are assumed, it is sufficient to find a complete set of C-NOT gates. The minimum number of these gates, such that every quantum circuit can be implemented, is $n-1$. In practice we expect it to be more efficient, and produce shorter circuits, to find the $\tfrac{1}{2} n (n-1)$ C-NOT gates that act between every pair of qubits. As is shown in the numerical section, this is readily achieved even for pairs of qubits that are not directly interacting. 
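A minimal Python skeleton of this closed loop is sketched below; it is illustrative only, with \texttt{measure\_fidelity} standing in for the experimental preparation--evolution--measurement runs, and with hypothetical choices of finite-difference step size and learning rate.
\begin{verbatim}
import numpy as np

def in_situ_optimise(n_ctrl, measure_fidelity, f_targ=0.999,
                     max_upds=1000, eps=1e-2, lr=0.1):
    """Finite-difference steepest ascent on a fidelity measured in situ.

    measure_fidelity(u) is a placeholder for running the piecewise-constant
    control amplitudes u on the simulator and estimating the gate fidelity.
    """
    u = np.random.uniform(-1, 1, n_ctrl)        # random initial controls
    for _ in range(max_upds):
        f0 = measure_fidelity(u)
        if f0 >= f_targ:
            return u, f0                         # success
        grad = np.zeros(n_ctrl)
        for j in range(n_ctrl):                  # one extra fidelity per control
            du = np.zeros(n_ctrl)
            du[j] = eps
            grad[j] = (measure_fidelity(u + du) - f0) / eps
        u = u + lr * grad                        # classical update step
    return u, f0                                 # timeout condition reached
\end{verbatim}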
The question of whether the scheme works when the system Hamiltonian varies in time uncontrollably is important for an experimental implementation. If this change in time happens slowly compared to the time it takes to find a control pulse for a given gate, then it does not impede the ability of the scheme to find that pulse. However, it may mean that the pulse no longer produces the required dynamics at a later point in time when it is being used as part of an algorithm. In this case, it would be required to eventually rerun the optimisation scheme in order to correct for this drift. On the other hand, if the stochastic fluctuations in the Hamiltonian are fast compared to the evolution time required for a gate, the problem is a different one. Repeatedly evolving the system with the same controls (necessary in order to measure the fidelity) will result in the system evolving with a different Hamiltonian on each repeat. This noise in the Hamiltonian thus translates into a lower fidelity being measured. Therefore, as long as the fast fluctuations in the Hamiltonian are small, they are not expected to prevent the scheme from finding a successful pulse but will limit the maximum possible fidelity. \subsection*{Local gate fidelity}\label{sec:LocalFidelity} The measure used to gauge how close the system evolution is to the desired unitary is typically the gate fidelity \cite{Gilchrist2005}. This is a function of the dynamical map $M$, which describes the evolution of the system under a set of controls (including potential decoherence which acts on the system), and the target unitary $U$. It is defined as $F(M, U) = \bra{\psi} \rho_M \ket{\psi}$ where $\ket{\psi} = U \otimes \mathds{1} \ket{\Omega}$ (with $\ket{\Omega} = \sum_k \ket{kk}$ being a maximally entangled state) is the Choi state of $U$; and $\rho_M = (M \otimes id) \ket{\Omega}\bra{\Omega}$ is the Choi state of $M$. This distance measure is bounded between $0$ and $1$, with the upper limit being reached only when $M(\cdot) = U(\cdot)U^\dagger$. In the case where the system evolution $V$ is unitary, this simplifies to $F(V, U) = |\tfrac{1}{d} \Tr{V^\dagger U}|^2$. When the propagation step of \figref{fig:flowchart} is done classically, the whole unitary describing the evolution of the system is calculated as an exponentially large matrix from which the gate fidelity must be calculated. In the \emph{in situ} scheme this is no longer the case; the only thing which is accessible is a quantum state after it has been evolved by the quantum simulator under some control parameters. The standard method of extracting the gate fidelity is to perform a variant of process tomography, known as certification. This requires preparing the system in a specific state, evolving it, and then performing a set of measurements. The number of different preparation-measurement combinations, $N_{\text{meas}}$, is of order $O(d^2)=O(2^{2n})$, and thus scales exponentially \cite{DaSilva2011}. However, it is possible to do exponentially better for cases of interest where the target gate has a tensor product structure, $U = \bigotimes U_i$ where each $U_i$ is a unitary which acts on a small number of qubits. This would typically be a single C-NOT on one pair of qubits and identity on the rest, $\text{C-NOT}_{1,2} \otimes \mathds{1}_3 \otimes \mathds{1}_4 \otimes ...$, but it could be several simultaneous non-overlapping C-NOTs or even larger gates such as Toffoli. 
No matter what the exact form is, provided that the target can be decomposed into a tensor products of unitaries, the fidelity over the whole system is bounded by the \emph{local estimator} $F_{LE}$ according to \begin{equation} F(M, U) \ge F_{LE}(M,U) = 1 - \sum_i (1 - F(M_i, U_i)) \label{eq:fidbound} \end{equation} where $M_i(\rho_i) = M(\rho_i \bigotimes_{j\ne i} \tfrac{1}{d_j} \mathds{1}_j)$. This is the reduced dynamical map acting on subsystem $i$ where the other subsystems have been initialised in the maximally mixed state. This result is proved in the appendix below based on existing approaches \cite{Cramer2010}. \begin{figure*} \centering \includegraphics[scale=0.5]{IsingFidelity} \includegraphics[scale=0.5]{HeisenbergFidelity} \\ \small{a) Ising Chain \hspace{0.32\linewidth} b) Heisenberg Chain} \caption{ \bf Comparison of the gate fidelity with the local estimator of the fidelity during an optimisation run. \normalfont The gate fidelity and its local estimator, \eqref{eq:fidbound}, are plotted as a function of iteration step for one complete run of the \emph{in situ} optimisation scheme. The system is a five-qubit nearest-neighbour chain with the Hamiltonian of \eqref{eq:systemHam}; Ising on the left where $H^{i, j} = \sigma_z \otimes \sigma_z$, and Heisenberg on the right where $H^{i,j} = \sigma_x\otimes\sigma_x + \sigma_y\otimes\sigma_y + \sigma_z\otimes\sigma_z$. The target is a C-NOT gate on the first two qubits and identity on the others. The algorithm minimised the infidelity of the local estimator. The exact infidelity is plotted at each step for comparison. It is lower in both cases at all iteration steps, and highly correlated with the estimated infidelity, such that minimising the former also minimises the latter almost monotonically and the landscape remains trap free. Furthermore the difference between the two decreases rapidly as the infidelity approaches $0$. In the Heisenberg case the true gate fidelity converges slower than for the Ising chain; this behaviour is closely mapped onto the local estimator. This demonstrates the validity of maximising the local estimator of the fidelity as a proxy for maximising the true gate fidelity.} \label{fig:FidelityCurves} \end{figure*} The advantage of this local estimator to the fidelity is that it only requires certification to be performed over small dimensional subsystems of $1$ or $2$ qubits. As the size of these subsystems does not increase as the system is scaled up, the cost of measuring the fidelity does not increase exponentially with the number of qubits. Applying existing results for certification to each term in \eqref{eq:fidbound} sequentially gives $N_{\text{meas}}=O(\sum_i d_i^2)=O(n)$. As is discussed in the methods section below, it is possible to remove this linear scaling by noting that each term in \eqref{eq:fidbound} can be recovered in parallel. This results in a constant cost, $N_{\text{meas}}=O(\max_i d_i^2)=O(1)$, a vast improvement over the previous exponential scaling, $O(2^{2n})$. Beyond being efficiently recoverable, this estimator to the fidelity is useful for a number of reasons. It is a lower bound on the gate fidelity, so we are guaranteed that the true fidelity is at least as good. It converges to the exact fidelity in the limit that $F(M, U) \to 1$, this is important as we are most interested in having a measure of how good a gate is when it is close to the target. As can be seen in \figref{fig:FidelityCurves}, it is well behaved numerically and the initial convergence is fast. 
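To illustrate \eqref{eq:fidbound} concretely, the following Python sketch (numpy/scipy; a toy three-qubit example with a randomly perturbed target, along the lines of the perturbation test described below, using the spectral norm for $||H||_2$) computes the reduced maps $M_i$ for a leading two-qubit block and a trailing single qubit, their Choi fidelities against the target factors, and checks the bound $F \ge F_{LE}$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def ptrace(rho, d_keep, d_rest, keep_front=True):
    """Partial trace keeping either the leading or trailing tensor factor."""
    if keep_front:
        return np.einsum('ajbj->ab', rho.reshape(d_keep, d_rest, d_keep, d_rest))
    return np.einsum('jajb->ab', rho.reshape(d_rest, d_keep, d_rest, d_keep))

def local_choi_fidelity(V, U_block, d_block, keep_front=True):
    """F(M_i, U_i) with M_i(rho) = Tr_rest[ V (rho x 1/d_rest) V^dag ]."""
    d_rest = V.shape[0] // d_block
    mix = np.eye(d_rest) / d_rest
    F = 0.0j
    for k in range(d_block):
        for l in range(d_block):
            E_kl = np.zeros((d_block, d_block), complex)
            E_kl[k, l] = 1.0
            rho_in = np.kron(E_kl, mix) if keep_front else np.kron(mix, E_kl)
            out = ptrace(V @ rho_in @ V.conj().T, d_block, d_rest, keep_front)
            F += U_block[:, k].conj() @ out @ U_block[:, l]
    return (F / d_block**2).real

# Toy example: target = C-NOT on the first two qubits, identity on the third.
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], complex)
I2 = np.eye(2, dtype=complex)
U_T = np.kron(CNOT, I2)

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
H = (A + A.conj().T) / 2
H *= 0.1 / np.linalg.norm(H, 2)          # small random perturbation, ||H||_2 = 0.1
V = expm(-1j * H) @ U_T                  # imperfect implementation of U_T

F_true = abs(np.trace(V.conj().T @ U_T) / 8) ** 2
F_cnot = local_choi_fidelity(V, CNOT, 4, keep_front=True)
F_id = local_choi_fidelity(V, I2, 2, keep_front=False)
F_LE = 1 - ((1 - F_cnot) + (1 - F_id))
print(F_true, F_LE, F_true >= F_LE)      # the bound of eq. (fidbound) holds
\end{verbatim}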
Increasing the number of qubits would increase the number of terms in \eqref{eq:fidbound} but not their structure, therefore we expect the qualitative features to remain the same as it is scaled up. The minimum value of the local fidelity is $1-n$, while the true gate fidelity cannot go below $0$, so it may be expected that the convergence to $1$ is slower in larger systems as the local fidelity has a larger range to cover. The scaling behaviour of the local fidelity was investigated by considering a target $U_T = \text{C-NOT} \otimes\mathds{1}\otimes\mathds{1}...$ and comparing the gate fidelity and local fidelity between it and the unitary $U = e^{-i H} U_T$ , where $H$ is a random Hamiltonian generated by a Gaussian distribution and normalised to $||H||_2 = 0.1$. As the number of qubits was increased from $3$ to $14$, the true gate fidelity averaged over different random $H$ stayed the same ($~99.5\%$) while the local fidelity dropped linearly, by less than $0.2\%$ per qubit. This is in accordance with our intuition that the local fidelity behaves similarly for different numbers of qubits, with the principle difference being the linearly increasing number of terms in the sum of \eqref{eq:fidbound}. \subsection*{Numerical investigation} The local fidelity detailed in the previous section shows that it is possible to estimate the fidelity of a quantum gate efficiently as the size of the system increases, removing one direct barrier from the scalability of the \emph{in situ} optimisation scheme. There are, however, other factors that determine the time the protocol takes which need to be taken into account to assess its scalability. This requires an expression for the total time required to construct a control sequence for a gate in terms of the number of qubits in the system. As this is an optimisation problem that would be done `numerically' on a hybrid classical-quantum computer, analytic expressions could not be obtained. In order to investigate this we conducted simulations of the protocol on a purely classical computer. We explored systems from $3$ to $9$ qubits; memory constraints on the cluster we used made larger systems unfeasible due to the difficulty of evolving (and doing gradient based optimisation of) operators. The average time needed to find a control sequence for a gate can be expressed as: \begin{equation} T_{\text{total}} = T_{\text{run}} N_{\text{runs}} / p_{\text{succ}} \label{eq:time} \end{equation} where $T_{\text{run}}$ is the time it takes to do one run of a control sequence on the quantum simulator, $N_{\text{runs}}$ is the number of sequences that are run on the simulator until the protocol halts, and $p_{\text{succ}}$ is the probability that the protocol halts with a control pulse that reaches the desired fidelity. $T_{\text{run}}$ can be decomposed as $T_{\text{run}} = T_{\text{init}} + T_{\text{gate}} + T_{\text{meas}}$ which is the time to initialise the system, evolve the system under the interaction and control Hamiltonians, and then measure it respectively. $T_{\text{init}}$ and $T_{\text{meas}}$ are determined by the type of system being used; we take them as fixed and independent of the number of qubits. The gate time, on the other hand, is a free parameter that must be decided before starting the \emph{in situ} optimisation. The total number of runs can be similarly decomposed as \begin{equation} N_{\text{runs}} = N_{\text{meas}}\;N_{\text{prec}}\;N_{\text{fids}}\;N_{\text{upds}}. 
\label{eq:cost} \end{equation} $N_{\text{meas}}$ is the number of times the experiment with the same control pulse, but with different input states and measurement basis, must be repeated in order to measure the gate fidelity once. As the previous section showed, this is $O(1)$ for the local estimator to the fidelity, which is the measure used henceforth. $N_{\text{prec}}$ is the number of times the fidelity must be measured to acquire sufficient statistics such that the fidelity is known to the desired precision. $N_{\text{fids}}$ is the number of different fidelities that need to be measured for the optimisation algorithm to update the control sequence. It is $1$ for gradient-free methods, while for steepest-ascent methods it is $1$ plus the number of gradients (when they are measured by finite difference). $N_{\text{upds}}$ is the number of the times the control sequences must be updated, corresponding to the number of times the scheme goes around the loop of \figref{fig:flowchart}. The scaling relation of the terms in \eqref{eq:time} depends on the underlying classical algorithm. We used a steepest ascent gradient method similar to Gradient Ascent Pulse Engineering (GRAPE) algorithm \cite{Khaneja2005} which is commonly used for optimising quantum control on classical computers with great success. In this approach, each of the independent Hamiltonians that can be controlled are taken as piecewise-constant with $N_{\text{ts}}$ time-slots of equal widths that span the full gate time $T_{\text{gate}}$. We used analytical gradients in the simulations, for computational efficiency \cite{Machnes2011}, which restricted us to piecewise constant and fixed $T_{\text{gate}}$. The precision to which the fidelities need to be measured experimentally also needs to be specified. This could be done by either fixing $N_{\text{prec}}$ itself, or by repeatedly measuring the fidelity until the error of the mean is below a specified value. We approximated the latter approach numerically by rounding each local fidelity measurement to some numerical accuracy $A_{\text{num}}$ and calculated the equivalent $N_{\text{prec}}$. The \emph{in situ} scheme therefore requires $T_{\text{gate}}$, $N_{\text{ts}}$ and $A_{\text{num}}$ to be chosen beforehand, as well as a target fidelity $F_{\text{targ}}$, and to know the number of different control Hamiltonians which, multiplied by $N_{\text{ts}}$, gives the total number of controls $N_{\text{ctrl}}$. In order to simulate this completely numerically we also need to specify exactly what the control Hamiltonians and the constant interaction Hamiltonians are for a given system. This is not the case were this scheme done \emph{in situ} experimentally. In an experiment $T_{\text{gate}}$ minimisation could be included in optimisation objectives as the gradients would be calculated via a finite-difference method. Alternative pulse parametrisation to piecewise constant could also be used, as best suits the experimental setup. All the different parameters mentioned above are summarised in \figref{fig:parametersummary}. 
\begin{figure} \small \begin{center} \begin{tabular}{| c | l |} \hline Parameter & Description \\ \hline $T_{\text{gate}}$ & Evolution time for the gate \\ $N_{\text{ts}}$ & Number of timeslots for control pulse \\ $A_{\text{num}}$ & Accuracy of fidelity measurements \\ $F_{\text{targ}}$ & Target fidelity for the desired gate \\ \hline $N_{\text{runs}}$ & Number of (\#) runs in total\\ $N_{\text{meas}}$ & \# different input-output pairs\\ $N_{\text{prec}}$ & \# repeats for required fidelity accuracy\\ $N_{\text{fids}}$ & \# different fidelities to update controls\\ $N_{\text{upds}}$ & \# control updates needed\\ $p_{\text{succ}}$ & probability of success\\ $N_{\text{ctrl}}$ & \# parameters in control pulse\\ \hline \end{tabular} \end{center} \caption{The different parameters defined in the text, summarised here for convenience. The top four are those which need to be fed into the classical optimiser in order for it to run GRAPE \emph{in situ}; other classical protocols could be used, which would require different parameters. The bottom seven are used to quantify the efficiency of the scheme.} \label{fig:parametersummary} \end{figure} We conducted a number simulations of this approach with a Hamiltonian of the type described in \eqref{eq:systemHam} for different number of qubits and interaction topologies. They were completed using the quantum optimal control modules in QuTiP \cite{Johansson2012, Johansson2013, qutipweb}. These provide methods for optimising a control pulse to some fidelity measure. The GRAPE implementation in QuTiP is described in the documentation, available at \cite{qutipweb}. The code used to perform the numerical simulations is available in an open-source repository \cite{qinsitu2018}. The \emph{local Choi fidelity} measure customisation, and a method for automating locating the $p_{\text{succ}}$ threshold, were developed for this study; they are fully described in the code documentation. The optimal $T_{\text{gate}}$ and $N_{\text{ts}}$ were determined by trialling a range of alternatives. As the result of the optimisation depends on the initial random control amplitudes (uniformly distributed in $[1,1]$), each scenario was repeated multiple times to gain reliable statistics. A high performance computing cluster was necessary for completing sufficient repetitions of the optimisation simulations of the larger systems in a reasonable time (the 9 qubit optimisations each required around four days of Intel Xeon CPU E5-2670 0 2.90GHz core processing time and were repeated hundreds of times). This is because the processing time required to optimise a pulse scales exponentially with system size due to the need to exponentiate the Hamiltonians in order to compute propagators. This difficulty precisely highlights the need to optimise pulses \emph{in situ} for quantum systems of the size that would perform a useful quantum computation. 
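For reference, a minimal sketch of the classical counterpart of this optimisation, using QuTiP's pulse-optimisation module on a five-qubit Ising chain with the Hamiltonian of \eqref{eq:systemHam}, is shown below. The parameter values follow the Ising-chain row of \figref{fig:topologies}; the module's built-in trace fidelity is used here as a stand-in for the local Choi fidelity customisation described above, and the exact import paths may differ between QuTiP versions.
\begin{verbatim}
import numpy as np
import qutip as qt
import qutip.control.pulseoptim as cpo
from qutip.qip.operations import cnot   # location may vary with QuTiP version

n = 5
sx, sy, sz, si = qt.sigmax(), qt.sigmay(), qt.sigmaz(), qt.qeye(2)

def embed(op, k):
    """op acting on qubit k of the n-qubit register."""
    return qt.tensor([op if j == k else si for j in range(n)])

# Drift: nearest-neighbour Ising couplings sigma_z (x) sigma_z
H_d = sum(embed(sz, k) * embed(sz, k + 1) for k in range(n - 1))
# Controls: piecewise-constant sigma_x and sigma_y pulses on every qubit
H_c = [embed(s, k) for k in range(n) for s in (sx, sy)]

U_0 = qt.qeye([2] * n)
U_targ = qt.tensor(cnot(), si, si, si)   # C-NOT on the first two qubits

result = cpo.optimize_pulse_unitary(
    H_d, H_c, U_0, U_targ,
    num_tslots=12, evo_time=np.pi,       # Ising-chain values from the table
    fid_err_targ=1e-3,                   # F_targ = 0.999
    init_pulse_type='RND')
print(result.fid_err, result.termination_reason)
\end{verbatim}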
\begin{figure} \small \begin{center} \begin{tabular}{| c | c | c | c | c |} \hline Topology & Coupling & $T_{\text{gate}}$ & $N_\text{ts}$ & $N_\text{upds}$ \\ \hline chain & Ising & $\pi$ & $12$ & $60$ \\ star & Ising & $\pi$ & $12$ & $214$ \\ fully connected & Ising & $12\pi$ & $160$ & $295$ \\ chain & Heisenberg & $16\pi$ & $160$ & $585$ \\ star & Heisenberg & $12\pi$ & $160$ & $1043$ \\ fully connected & Heisenberg & $12\pi$ & $160$ & $881$ \\ \hline \end{tabular} \end{center} \caption{The cost of performing the \emph{in situ} optimisation scheme is investigated for a range of different 5 qubit systems for the Hamiltonian of \eqref{eq:systemHam}. The differences between the systems are their topology (a linear chain with nearest neighbour interactions, a star where all interact with a central qubit only, or fully connected where the interaction strengths are also randomised) and the interaction type. In each case the Hamiltonian used means that $N_{\text{ctrl}}=10$, the target operation is a C-NOT gate on two qubits and identity on the rest, $F_{\text{targ}} = 0.999$, and $p_{\text{succ}} > 0.98$. These simulations where done with full numerical precision. We see that, for five qubits, all six systems find the desired entangling gate, and do so at reasonable experimental cost. This indicates that the approach works for a range of possible quantum simulators.} \label{fig:topologies} \end{figure} We found numerically that, for a range of examples, there exist values of $T_{\text{gate}}$ and $N_{\text{ts}}$ such that the \emph{in situ} scheme converges. \figref{fig:topologies} shows typical values of the most important parameters for a variety of topologies and interaction types. We consistently found that Ising systems were easier to find controls for than Heisenberg systems. In particular, our results suggest that in Heisenberg systems a GRAPE-based algorithm may require a $T_{\text{gate}}$ that scales exponentially with the number of qubits in order for the optimisation to succeed. We found that this discrepancy also existed in purely classical optimisation techniques. This suggests that Heisenberg systems are intrinsically harder to solve with optimal control methods than Ising ones, and that this does not depend on whether an \emph{in situ} or classical approach is used. This is consistent with \figref{fig:FidelityCurves} where we compare the local estimator to the fidelity with the true fidelity for a $5$ qubit Heisenberg chain as a function of the iteration step as it is being optimised. The local estimator tracks the true fidelity steadily whether the underlying system is Heisenberg or Ising, the notable difference between the two plots is the plateau in the Heisenberg case. This shows that optimising the system is significantly harder and appears for both the local estimator and the true fidelity. An exponential scaling in the required gate time would make such an approach infeasible for a quantum computer. Regardless of these numerical difficulties, it can be shown \cite{Dodd2002} that it is possible to do fast entangling gates on such Heisenberg systems by using fast local unitaries and Trotter compositions to decouple the system into simple disconnected components. The problem is therefore with the choice of the particular optimisation algorithm that struggles to find the solution. 
It may be the case that using a different algorithm inside our \emph{in situ} protocol, such as parametrising the control Hamiltonians as a Fourier series rather than as piecewise-constant, would find control pulses for shorter gate times. \begin{figure*}[t] \centering a)\includegraphics[scale=0.48]{numIterPlot} b)\includegraphics[scale=0.48]{accPlot} \caption{\bf Numerical simulation of the experimental cost of finding a C-NOT gate in an Ising chain using steepest ascent \emph{in situ}. \normalfont The number of updates and the fidelity accuracy required for the optimisation protocol to succeed is plotted for a chain of qubits with the Hamiltonian of \eqref{eq:systemHam} with nearest-neighbour Ising interactions, a gate time of $T_{\text{gate}}=4\pi$, and $N_{\text{ts}}=48$ timeslots. The target in each case is a CNOT gate between two qubits in the middle of the chain, separated by one other qubit. The gate fidelity used is $F_{LE}$, therefore the true gate fidelity will be a little higher. Since the cost of the $n=3$ case is significantly lower than for the other, it has been omitted from all fits.\\ Figure a) shows how $N_{\text{upds}}$ scales with the number of qubits for different target gate fidelities (error bars are twice the standard error). For this plot, the accuracy to which the fidelity is measured, $A_{\text{num}}$, is picked to give a $p_{\text{succ}} = 50\%$ success rate. We see a strong linear relation in the number of iterations required as a function of the number of qubits giving $N_{\text{upds}} = O(n)$.\\ Figure b) shows how the accuracy to which the local fidelity needs to be measured, $A_{\text{num}}$, scales with the number of qubits for different target fidelities (error bars are 5 times the standard error). The data is expected to have an $O(1/n)$ scaling as, in order to reach a gate infidelity of $\epsilon$, the fidelity ought to require a measurement accuracy $O(\epsilon)$. As this is calculated from the sum of the fidelities of the subsystem, we conjectured that they each need to be measured to an accuracy $O(\epsilon/n)$. The data points lie very close to a $c/n$ curve, providing strong support for this argument. However, the constant $c$ does not appear to have quite a linear relationship with $\epsilon$; we did not investigate this further as it does not affect scalability.\\ The fidelity accuracy for the $p_{\text{succ}} = 50\%$ is estimated using an interpolation of $p_{\text{succ}}$ values for a range of $A_{\text{num}}$. Between 25 and 45 points are used in the interpolation. Each of these points are the average over a number of repetitions: 200 for $n=3,4$; 100 for $n=5$; 50 for $n=6,7$. The method for selecting the $A_{\text{num}}$ values for the simulations and the interpolations are described in more detail in the code repository \cite{qinsitu2018}.} \label{fig:numIterPlot} \end{figure*} For the case of an Ising chain we also varied the number of qubits in order to hypothesise a likely scaling for $T_{\text{total}}$ in terms of the number of qubits in the system and found that a polynomial scaling matched very well. As the cost of doing a classical simulation of the \emph{in situ} optimisation of the Ising chain is much lower than for the others, we also picked this system to investigate the impact of measurement noise by introducing a finite value of $A_{\text{num}}$, which parametrises the sensitivity of the system to measurement noise. 
The results are shown in \figref{fig:numIterPlot} and give us a good estimate of $N_{\text{upds}}=O(n)$ and $A_{\text{num}}=O(1/n)$. The latter implies that $N_{\text{prec}} = O(n^2)$, due to the central limit theorem that states the number of repetitions required scales quadratically with the desired accuracy which gives $N_{\text{prec}} \propto A_{\text{num}}^{-2} = O(n^2)$. Putting this together with the previous results that $N_{\text{fids}} = O(n)$ for gradient based optimisation and $N_{\text{meas}} = O(1)$ for the local estimator fidelity gives $N_{\text{runs}} = O(n^4)$. As this is done with a constant gate time and with a constant success probability, this implies that the time required to find a control sequence that implements a C-NOT gate on an Ising chain using a steepest-gradient \emph{in situ} scheme scales as $T_{\text{total}} = O(n^4)$. The other system we investigated in depth was an Ising ring where the target C-NOT was between two next-nearest-neighbour or two randomly located qubits. \figref{fig:ChainRing} shows the required $N_{\text{upds}}$ for up to $9$ qubits for this system. This graph is in agreement with the previous results of \figref{fig:numIterPlot} that the number of iterations required grows slowly with the number of qubits; in this case the results even show signs of being sub-linear. An unexpected feature shared by both sets of results is the low cost of the $3$ qubit case. Our intuition is that it is due to the additional symmetries present and the ease with which the single qubit not part of the target C-NOT gate can be kept disentangled. The reason for picking the ring topology and not-nearest-neighbour target gate was to check whether the Ising chain results were unique, to demonstrate the applicability of the \emph{in situ} approach to different systems, and to test its ability to reach more complex gates. Specifically, it shows that the scalability of the scheme did not rely on boundary effects or on qubits being adjacent to each other. While a quantum computer could be built using only nearest-neighbour gates, being able to entangle two arbitrary qubits in the time of a single gate drastically reduces the potential run time of algorithms. \begin{figure*} \centering \includegraphics[scale=0.5]{9QubitsChain} \includegraphics[scale=0.5]{9QubitsRing} \\ \small{a) Ising Chain \hspace{0.35\linewidth} b) Ising Ring} \caption{\bf Number of control pulse updates needed to find a C-NOT gate in an Ising chain and ring. \normalfont The number of updates required for the optimisation protocol to succeed is plotted for a chain (left) and a ring (right) of qubits with the Hamiltonian of \eqref{eq:systemHam} with nearest-neighbour Ising interactions $\sigma_z\otimes\sigma_z$, a gate time of $T_{\text{gate}}=4\pi$, $N_{\text{ts}}=48$ timeslots, and a gate fidelity of $F_{LE} = 0.999$. Full numerical accuracy was used in these simulations. Each data point represents repeated optimisations: 100 for $n < 8$; 96 for $n = 8$; 30 for $n = 9$. The number of successful optimisations is $p_{\text{succ}} > 90\%$ in all cases. The error bars are twice the standard error.\\ In both cases there is no evidence that the scaling is more than linear, even disregarding the $n=3$ data. As we did not have an obvious model to fit to these points, no best-fit is shown. The value of $N_{\text{upds}}$ is slightly higher for the ring than the chain, but appears to have a smaller gradient in $n$. 
For both graphs the results for nearest-neighbour and random qubits are statistically indistinguishable. This highlights that the optimisation scheme operates equally well in both cases and works more efficiently than a naive dynamical decoupling protocol.} \label{fig:ChainRing} \end{figure*} \subsection*{Discussion} The polynomial scaling observed numerically in these two different cases is encouraging evidence that the protocol may indeed be efficient. Some of the components that make up this scaling come from numerical data, so several fits are possible. However, the points lie so close to a linear fit in \figref{fig:numIterPlot} that a different fit, such as an exponential one, would diverge only slowly. \figref{fig:ChainRing} suggests that corrections to the fit are more likely to make it sub-linear than more costly. Although it is clear that the results presented here do not form an absolute proof of the scalability of an \emph{in situ} control scheme for all quantum simulators, they are at the very least strong evidence that this is a powerful approach to take for moderately large systems of a few tens of qubits. Systems of such a size are interesting as they correspond to the state-of-the-art that can be realised experimentally. Using the \emph{in situ} scheme for such systems would likely find control sequences for entangling gates that are not currently known, and where purely classical numerical optimisation schemes would fail due to the enormous computational requirements. Furthermore, testing these predictions in such experiments would extend these results to numbers of qubits that are completely unattainable for a purely classical computer to model, and test this protocol closer to full-scale universal quantum computation. One potential difficulty in optimal control is the existence of traps: local maxima of the fidelity that optimisation algorithms converge to which are not the global maxima. The question of whether traps exist in unitary control using the standard gate fidelity has been well studied \cite{Ho2009, Pechen2011, Russell2016}, and the conclusion is that generic quantum control landscapes are almost always trap free. This may also apply to the local estimator of the fidelity; traps were not a problem for the numerical simulations we performed, and we found no evidence of any new traps in \figref{fig:FidelityCurves} or elsewhere. The numerical results presented here used GRAPE, which decomposed the control pulses into piece-wise constant functions. A potentially more powerful approach, and one which is harder to do classically but may be easier to implement physically, would be to decompose them by frequency \cite{Bartels2013}, such as in CRAB \cite{Doria2011, Caneva2011} and GOAT \cite{Machnes2015}. Such algorithms are slow to run classically due to the difficulty of exponentiating the time-dependent Hamiltonian, a step which is bypassed in the \emph{in situ} scheme. They typically require fewer parameters to describe a successful control pulse and thus may prove faster than GRAPE when done experimentally. A different variation would be to change from a gradient based algorithm to a geometric or genetic one, or even to use machine-learning algorithms to learn about the system \cite{Palittapongarnpim2016}. Ideas from robust control \cite{Daems2013, Barnes2015} may also be usable in an \emph{in situ} framework, in order to make the approach more resilient to fluctuations in the system Hamiltonian. 
A future direction to take this work is to apply it to another important aspect of quantum computation: error correction. The protocol detailed here can be used in much the same way by changing the target operation from a C-NOT gate to one that protects some logical qubits. Preliminary results show that with a tuneable interaction the system can discover decoherence-free subspaces and simple error-correcting codes this way. Work remains on identifying the most useful tasks to optimise for, and on showing the scalability of this approach. \subsection*{Acknowledgements} This work was supported by EPSRC through the Quantum Controlled Dynamics Centre for Doctoral Training, the EPSRC Grant No. EP/M01634X/1, and the ERC Project ODYCQUENT. We are grateful to HPC Wales for giving access to the cluster that was used to perform the numerical simulations. Many thanks to Stephen Glaser and David Leiner for discussions on possible implementations. \small \bibliographystyle{unsrtnat-etal}
\section{Introduction} Motion prediction is important for finding the middle ground between pure teleoperation and autonomous control of robotic systems. It allows the robot to anticipate the future motions of the users and, consequently, their intention, and assist them in performing a given task. To improve the performance of motion prediction algorithms, it is beneficial to ground the prediction in experimentally validated computational models of human movement \cite{wolpert2000}. Optimal control is used extensively in computational motor control, and provides a powerful framework for explaining a wide range of empirical phenomena associated with human motion \cite{Flash1985, Flash2013, Todorov2004}. In this view, it is hypothesized that human motion is driven by well-defined rewards or cost functions. The complementary Inverse Optimal Control (IOC) framework attempts to identify the structure and parameters of these cost functions from a set of observed trajectories \cite{todorov_inverse}. Thus, IOC allows for the transition from modeling of human motion to motion prediction in a particular task \cite{mainprice_ioc}. However, the accuracy of the model that is used for a particular problem is critical to the success of IOC-based approaches. While many studies considered optimal control for modeling reaching trajectories in free space \cite{Flash1985, Todorov2004, Scott2004, Diedrichsen2010}, there has been much less effort towards modeling reaching in the presence of obstacles using optimal control \cite{wolpert_obstacle}. This, in turn, hinders the development of efficient IOC-based approaches for prediction. In the current paper, we propose a stochastic optimal control framework for modeling human reaching trajectories in the presence of obstacles. This framework is designed to be incorporated in motion prediction for a variety of applications of teleoperation in cluttered spaces. Our proposed framework is built on experimental studies that suggest that reaching movements amongst obstacles are optimized considering the likelihood of collision \cite{chapman1, chapman2, mon-williams}, and that obstacle avoidance is sensitive to human perception of free space \cite{chapman2}. In line with these findings, the proposed optimal control model incorporates probabilistic collision avoidance constraints to ensure that the likelihood of collision is below a specified threshold. We also consider signal-dependent noise in human movement control \cite{Harris98}, and the uncertainty in the perception of the size of the obstacle to model the error in estimation of free space. \noindent \textbf{Contributions:} Our main result is a reformulation of the optimal control problem proposed in \cite{wolpert_obstacle}, which was shown to be effective in modeling reaching movements in the presence of obstacles. The proposed reformulations approximate a difficult non-linear and non-convex optimal control problem by a parametric quadratic optimization problem. We substitute the chance constraints with a family of surrogate constraints \cite{bharath_iros15}. Satisfaction of each member of the family of surrogate constraints can be mapped to a lower bound probability with which the original chance constraints would be satisfied. Further, we show that the parameters of the reformulated quadratic optimization problem can be tuned to generate a diverse class of trajectories. 
To make the optimal control computationally tractable, we adapt \cite{Flash1985} and approximate the hand dynamics as a stochastic triple integrator system. Thus, our formulation does not address all the features of human reaching. Instead, we focus on capturing how the parameters of our optimal control model that represent the risk-seeking behavior of humans can explain the trade-off between movement velocity and obstacle clearance in the vicinity of an obstacle. The rest of the paper is organized as follows. Section \ref{rel} reviews the previous studies which considered collision avoidance within the context of optimal control. Section \ref{foc} presents the optimal control problem, followed by a series of reformulations to convert it into a tractable parametric quadratic optimization problem. Section \ref{sim} presents simulation results that demonstrate how the parameters of the reformulated problem result in a diverse set of trajectories and control costs. In Section \ref{disc} we discuss the results of our simulations in light of the existing experimental findings on reaching movements among obstacles and present future directions. \section{Related Work}\label{rel} \noindent \textbf{Optimal Control or Optimization based Obstacle Avoidance in Robotics} Optimal control or optimization is used extensively to plan collision-avoiding trajectories that also optimize a specified cost function \cite{chomp}, \cite{stomp}. In \cite{stomp}, optimal control is applied to stochastic systems with additive noise, and collision avoidance is ensured by introducing a penalty on trajectories that come close to the obstacles. An expectation over the cost is taken, which suggests that the optimization is risk neutral; that is, it does not model the probability of collision avoidance. Trajectory optimizers like \cite{traj_opt}, \cite{muller2014risk} incorporate a penalty on the probability of collision avoidance. Some studies like \cite{chance_avoid1}, \cite{chance_avoid2} put hard constraints on the probability of collision avoidance. However, \cite{chance_avoid1}, \cite{chance_avoid2} assumed an additive noise model. We aim at planning trajectories for the human hand, which is modeled as a stochastic system with signal-dependent noise \cite{Harris98}. An optimal control framework presented in \cite{vandenberg} addresses collision avoidance under signal-dependent noise, but only for single integrator systems. In contrast, our formulation incorporates higher-order dynamics. \noindent \textbf{Obstacle Avoidance in Computational Motor Control} Optimal control or optimization has been an important tool for studying arm movements in the computational motor control community. These works include both deterministic \cite{Flash1985}, \cite{soechting}, \cite{kang}, \cite{uno} as well as stochastic models \cite{Todorov2004}, \cite{Harris98}, \cite{tops},\cite{todorov_feedback}. Works like \cite{soechting}, \cite{kang}, \cite{uno} consider the full arm motion in their analysis. However, the arm dynamics are highly non-linear, and their integration with probabilistic collision avoidance constraints would result in a computationally intractable optimal control problem. Thus, in contrast to these works, we focus solely on the hand trajectories. Reaching trajectories in the presence of obstacles were studied in computational motor control for understanding movement coordination. 
Experimental studies \cite{chapman1}, \cite{chapman2}, \cite{mon-williams}, \cite{Tressilian} investigated the effects of obstacle position and size on obstacle avoidance. In particular, \cite{Tressilian} observed that the obstacle avoidance strategy exhibited by human subjects during reach-to-grasp movements consisted of two basic but coupled components, namely moving around the obstacle or slowing down near it. An optimal control model for single-obstacle avoidance was proposed in \cite{wolpert_obstacle}. They solved the optimal control problem using simulated annealing. Simple obstacle configurations, predominantly with a single obstacle, were considered. Our proposed approach differs from \cite{wolpert_obstacle} in terms of the technical approach followed to solve the optimal control problem. In particular, we exploit some efficient structures in the problem. Moreover, we consider complex obstacle configurations to highlight the interaction between parameters, control cost and probability of collision avoidance. Our proposed approach also differs from \cite{todorov_obstacle}, wherein obstacle avoidance is included as a cost function and, consequently, the probability of collision avoidance is not modeled. Although \cite{todorov_flexible} analyzes collision avoidance behavior in the presence of obstacles, the presented optimal control formulation does not explicitly include collision avoidance constraints or costs. Rather, collision avoidance is used as a test case to study the variability of reaching movements as explained by stochastic optimal control compared to other models. \section{Proposed Forward Optimal Control (FOC)}\label{foc} \subsection{Dynamics and Task Description} We consider the task of reaching movements in a 2D cluttered environment. We chose a simple linear model for the movement of the end point of the hand, namely a triple integrator system. We denote the state of the hand at time instant $t$ by $\textbf{X}^t = (x^t, y^t, \dot{x}^t, \dot{y}^t, \ddot{x}^t, \ddot{y}^t)$, where the individual state variables are modeled as Gaussian distributions. The parameters of the distributions, i.e., their means and variances, are obtained from the following discrete-time dynamics with the jerk $U =(u_x,u_y)=(\dddot{x},\dddot{y})$ as the control input. \small \begin{equation} X^{t+1} = \textbf{A} X^{t}+\textbf{B}(U^{t}+\varepsilon_{U}^t), \label{linear} \end{equation} \normalsize \noindent where $\textbf{A}$ and $\textbf{B}$ represent state transition and control scaling matrices of dimensions conforming to those of the state, and \small \begin{equation} \varepsilon_U^t= \sum_{i=1}^{2}\phi_i \textbf{M}_iU \label{varepsidef} \end{equation} \begin{equation} \textbf{M}_1= \begin{bmatrix} c_x & 0 \\ 0 & 0 \\ \end{bmatrix}, \textbf{M}_2 = \begin{bmatrix} 0 & 0 \\ 0 & c_y \\ \end{bmatrix}. \end{equation} \normalsize The term $\varepsilon^{t}_U$ in (\ref{varepsidef}) represents the time-varying signal-dependent noise, and is formulated in terms of constant scaling matrices $\textbf{M}_i$ and $\phi_i$, which are zero-mean unit-variance normal random variables. This form of (\ref{varepsidef}) ensures that the standard deviation of the noise grows linearly with the magnitude of the control signal \cite{Todorov2004}; the constants $c_x,c_y$ determine the magnitude of the noise as a fraction of the control input. \subsection{Optimal Control} \noindent The discrete-time optimal control problem can be represented by the following set of equations. 
\small \begin{eqnarray}\label{opt1} \min J_{opt} = J_{U^t}+J_{X^t} \\\nonumber Pr(C_j^t(x^{t},y^{t},x_j,y_j, R_j)\leq 0 )\geq \eta , j = 1,2,\ldots,n,\nonumber \end{eqnarray} \normalsize \vspace{-0.4cm} \small \begin{equation} J_{U^t} = \Vert U \Vert ^{2}, J_{X^t} = \sum_{t=t_0}^{t=t_f}E[L(X^{t},U^{t})], \label{cont_statecost} \end{equation} \normalsize \vspace{-0.3cm} \small \begin{equation} L(X^{t},U^{t}) = \sum_{i=1}^{6}w_i(X_i^t-X_i^{t_f})^2, \end{equation} \vspace{-0.3cm} \begin{equation} R_j \sim N(\mu_{R_j}, \sigma_{R_j}^2). \label{pernoise} \end{equation} \normalsize \noindent The objective function in (\ref{opt1}) consists of a control effort term and a state-dependent term which penalizes the end point variance of the trajectory. The term $w_i$ determines the relative weighting between the components of the state-dependent cost term. The constraints $C_j(.)\leq 0$ in (\ref{opt1}) represent the collision avoidance requirement in a deterministic setting. Thus, the set of inequalities in (\ref{opt1}) require that the collision avoidance condition be satisfied with at least a lower bound probability $\eta$. The terms $x_j$, $y_j$ and $R_j$ denote the position and size of the $j^{th}$ obstacle. To model the uncertainty in the estimation of obstacle size, $R_j$ is defined as a normally distributed random variable. The optimization (\ref{opt1}) is difficult to solve due to the constraints on the probability of collision avoidance, known as chance constraints, which are computationally intractable \cite{chance1}. Hence, we next reformulate these chance constraints into a tractable form and show that the reformulation naturally leads to an efficient optimization structure. \noindent \textbf{Reformulating Chance Constraints:} We follow \cite{bharath_iros15} and substitute $Pr(C_j^t(.))$ with: \vspace{-0.2cm} \small \begin{eqnarray}\label{meanvar} Pr(C_j^t(x^{t},y^{t},x_j,y_j, R_j)\leq 0 )\geq \eta\\\nonumber \Rightarrow E[C_j^t(.)]+k\sqrt{Var[C_j^t(.)]}\leq 0, \eta \geq \frac{k^2}{1+k^2}, \end{eqnarray} \normalsize \noindent where $E[C_j^t(.)]$ and $Var[C_j^t(.)]$ represent the expectation and variance of the constraints $C_j^t(.)$ with respect to the random variables $x^t,y^t$. This suggests that satisfaction of the deterministic surrogate in (\ref{meanvar}) ensures satisfaction of the original probabilistic constraints with at least a probability $\frac{k^2}{1+k^2}$. In \cite{bharath_iros15}, it is shown that computing an analytical expression for $E[C_j^t(.)]$ and $Var[C_j^t(.)]$ in terms of the random variable arguments $x^t,y^t,R_j$, etc., is simpler than computing that for $Pr(C_j^t(.))$. We can further simplify (\ref{meanvar}) by approximating obstacle regions in 2D as circles. This simplifies the collision avoidance inequality $C_j^t(.)$: \small \begin{equation} C_j^t: -(x^t-x_j)^2-(y^t-y_j)^2 +R_j^2\leq 0. \label{C_i} \end{equation} \normalsize \noindent Because (\ref{C_i}) is purely concave in terms of the hand position variables $x^t$ and $y^t$, an affine upper bound can be obtained by linearizing $C_j^t$ around an initial trajectory guess $(x_*^t,y_*^t)$ \cite{sqp}: \vspace{-0.3cm} \small \begin{equation} C_j^t\approx ^*C_j^t+\bigtriangledown_{x^t} C_j^t(x^t-x_*^t)+\bigtriangledown_{y^t} C_j^t(y^t-y_*^t)\leq 0, \label{affine} \end{equation} \normalsize \noindent where $^*C_j^t$ is obtained by evaluating (\ref{C_i}) at $(x_*^t,y_*^t)$. 
Similarly, $\bigtriangledown_{x^t} C_j^t$ and $\bigtriangledown_{y^t} C_j^t$ represent the partial derivatives of $C_j^t(.)$ with respect to $x^t$ and $y^t$, evaluated at $(x_*^t,y_*^t)$. The affine approximation (\ref{affine}) can be further improved by updating $(x_*^t,y_*^t)$ during the course of the optimization. This sequential linearization of concave constraints forms the basis of the \emph{convex-concave procedure} \cite{sqp}.

\noindent In light of (\ref{affine}), $E[C_j^t(.)]$ and $Var[C_j^t(.)]$ take the form
\small
\begin{eqnarray}\label{expec}
E[C_j^t(.)] = \sigma^2_{R_j}\\\nonumber
+h_1(\mu_{x^t},x^t_*,\mu_{y^t},y^t_*,\sigma_{x^t}^2,\sigma_{y^t}^2,\mu_{x_j},\mu_{y_j},\mu_{R_j})
\end{eqnarray}
\normalsize
\vspace{-0.4cm}
\small
\begin{eqnarray}\label{var}
Var[C_j^t(.)] = C_{R_j}\sigma_{R_j}^2+2\sigma_{R_j}^4\\\nonumber
+h_2(\mu_{x^t},x^t_*,\mu_{y^t},y^t_*,\sigma_{x^t}^2,\sigma_{y^t}^2,\mu_{x_j},\mu_{y_j},\mu_{R_j}),
\end{eqnarray}
\normalsize
\noindent where the terms $(\mu_{x^t},\mu_{y^t})$ and $(\sigma_{x^t}^2,\sigma_{y^t}^2)$ represent the mean and variance of the hand position $(x^t,y^t)$. The term $C_{R_j}$ and the functions $h_1(.)$ and $h_2(.)$ are given in (\ref{cri})-(\ref{h2}). It can be noted that $h_2(.)$ can be represented as a sum of squares and is thus non-negative.
\small
\begin{eqnarray}
C_{R_j} = 4\mu_{R_j}^2\label{cri}\\
h_1 = \mu_{R_j}^2+2\mu_{x^t}\mu_{x_j}-\mu_{x_j}^2+2\mu_{y^t}\mu_{y_j}-\mu_{y_j}^2-2\mu_{x^t} x^t_*\\\nonumber
-2\mu_{y^t}y^t_*+(x^t_*)^2+(y^t_*)^2\label{h1}\\
h_2 = 2(2\mu_{x_j}^2\sigma_{x^t}^2+2\mu_{y_j}^2\sigma_{y^t}^2-4\mu_{x_j}\sigma_{x^t}^2(x^t_*)^2-4\mu_{y_j}\sigma_{y^t}^2(y^t_*)^2\label{h2}\\\nonumber
+ 2\sigma_{x^t}^2(x^t_*)^2+2\sigma_{y^t}^2(y^t_*)^2)
\end{eqnarray}
\normalsize

\noindent \textbf{Reformulated Optimal Control Problem:} To arrive at the final reformulated version of (\ref{opt1}), we make the following sequence of observations. The second term of the surrogate constraints proposed in (\ref{meanvar}) is non-negative. Thus, for a given $k$, the surrogate constraints (\ref{meanvar}) are satisfied when the first term, $E[C_j^t(.)]$, is sufficiently negative and the second term, $\sqrt{Var[C_j^t(.)]}$, is sufficiently small in magnitude. Due to (\ref{var}) and (\ref{h2}), we note that $\sqrt{Var[C_j^t(.)]}$ is a non-decreasing function of the positional variance $(\sigma^2_{x^t},\sigma^2_{y^t})$ at each point of the trajectory. Thus, making $\sqrt{Var[C_j^t(.)]}$ small is equivalent to minimizing the positional variance at each point of the trajectory. In light of these arguments, the FOC (\ref{opt1}) can be replaced with the following simpler problem.
\vspace{-0.3cm}
\small
\begin{eqnarray}\label{opt2}
\min J_{aug}= \Vert U \Vert ^{2}+\sum_{t=t_0}^{t=t_f}E[L(X^{t},U^{t})]+\lambda\sum_{t= t_0}^{t_f}(\sigma^2_{x^t}+\sigma^2_{y^t})\\\nonumber
E[C_j^t(.)]+\tau \leq 0
\end{eqnarray}
\normalsize
The original trajectory optimization (\ref{opt1}) has been converted to the new formulation (\ref{opt2}) by replacing the parameter $\eta$, which represented the probability of avoidance in (\ref{opt1}), with two new parameters $\tau$ and $\lambda$. The positive constant $\tau$ can be manipulated to make $E[C_j^t(.)]$ as negative as required and consequently control the clearance from a given set of obstacles. Similarly, $\lambda$ is a positive constant which can be manipulated to minimize the positional variance at each point along the trajectory. Hence, we can manipulate $\tau$ and $\lambda$ to achieve a particular probability of avoidance $\eta$; a numerical illustration of the surrogate quantities involved is given below.
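The surrogate quantities $E[C_j^t(.)]$ and $Var[C_j^t(.)]$ entering (\ref{meanvar}) and (\ref{opt2}) are straightforward to evaluate numerically. The following sketch is ours and is only illustrative: it assumes a Gaussian hand position independent of the obstacle size $R_j$ and treats the obstacle center as deterministic, which is a simplification of the full expressions (\ref{expec})-(\ref{h2}). It computes the mean and variance of the linearized constraint at a single time step, together with the deterministic check $E[C_j^t(.)]+k\sqrt{Var[C_j^t(.)]}\leq 0$ with $k=\sqrt{\eta/(1-\eta)}$.
\begin{verbatim}
import numpy as np

def surrogate_terms(mu_x, mu_y, var_x, var_y, x_lin, y_lin,
                    x_obs, y_obs, mu_R, var_R):
    # Mean and variance of the constraint linearized around (x_lin, y_lin):
    #   C ~ -(x_lin-x_obs)^2 - (y_lin-y_obs)^2 + R^2
    #       + gx*(x - x_lin) + gy*(y - y_lin)
    gx = -2.0 * (x_lin - x_obs)      # partial derivative w.r.t. x
    gy = -2.0 * (y_lin - y_obs)      # partial derivative w.r.t. y
    c_star = -(x_lin - x_obs)**2 - (y_lin - y_obs)**2
    # For R ~ N(mu_R, var_R):  E[R^2] = mu_R^2 + var_R,
    #                          Var[R^2] = 4*mu_R^2*var_R + 2*var_R^2
    mean = c_star + mu_R**2 + var_R + gx * (mu_x - x_lin) + gy * (mu_y - y_lin)
    var = gx**2 * var_x + gy**2 * var_y + 4 * mu_R**2 * var_R + 2 * var_R**2
    return mean, var

def surrogate_satisfied(mean, var, eta):
    # Deterministic surrogate of the chance constraint:
    # E[C] + k*sqrt(Var[C]) <= 0 guarantees Pr(C <= 0) >= k^2/(1+k^2) = eta.
    k = np.sqrt(eta / (1.0 - eta))
    return mean + k * np.sqrt(var) <= 0.0

m, v = surrogate_terms(mu_x=0.30, mu_y=0.10, var_x=1e-4, var_y=1e-4,
                       x_lin=0.30, y_lin=0.10, x_obs=0.45, y_obs=0.15,
                       mu_R=0.05, var_R=1e-4)
print(surrogate_satisfied(m, v, eta=0.94))
\end{verbatim}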
Moreover, each $\eta$ can be mapped to various choices of $\tau$ and $\lambda$, leading to a diverse set of collision avoidance behaviors. Within this diverse set, $\tau$ determines the geometry of the path, and $\lambda$ determines the velocity profile along the path. The reformulated FOC (\ref{opt2}) is very different from those typically used in the context of human motion modeling. A central hypothesis in current frameworks is that the relative weighting of each term in the cost function can be tuned to produce a diverse set of trajectories. The FOC (\ref{opt2}) takes a different approach -- its parameters appear not only in the cost function but also in the constraints. The reformulated FOC (\ref{opt2}) can be solved in one shot if the right values of $\tau$ and $\lambda$ are given. For the cases where such values are not available, we derive a framework for mapping a probability of collision avoidance $\eta$ to $\tau$ and $\lambda$, solving (\ref{opt2}) in the process.

\noindent \textbf{Solutions in Different Homotopies:} The linearization of the collision avoidance constraints (\ref{C_i}) to obtain the affine inequalities (\ref{affine}) inherently limits the solution trajectories of (\ref{opt2}) to be locally optimal. The physical interpretation is that the FOC (\ref{opt2}) cannot search over solution trajectories belonging to different homotopies. Existing optimal control approaches capable of searching over different homotopies either reformulate the collision avoidance constraints (\ref{C_i}) through the use of binary variables \cite{milp1} or introduce additional constraints which model topological information about the different possible homotopies \cite{homotopy1}, \cite{homotopy2}, \cite{homotopy3}. However, adopting such approaches would significantly increase the complexity of our optimization (\ref{opt2}). Instead, we opt for an approximate solution: we vary the initial trajectory guess to produce optimal trajectories in different homotopies. In particular, an initial guess for each homotopy is pre-computed, stored, and recalled as and when required. This initial guess could even be computed from sampling-based planners. Some existing works on stochastic optimal control based collision avoidance also adopt a similar approach \cite{vandenberg_coll}. Our approximate approach is also motivated by our eventual goal of using the proposed formulation for learning reaching movements. In that context, a data set of initial guesses in different homotopies can be obtained from user demonstrations.

\noindent \textbf{Efficiently Solving the Proposed FOC:}
\noindent Algorithm~\ref{algo1} summarizes a sequential quadratic programming (SQP) routine for solving the FOC (\ref{opt2}). The optimization starts with an initial guess trajectory $(x^t_*,y^t_*)$ and the initialization of an index counter $i$ and two non-negative variables $\tau$ and $\lambda$. The outermost loop checks whether the constraints are satisfied and whether the reduction in the cost function between two consecutive iterations is within a specified threshold $\xi$. If either of these checks is violated, then the algorithm proceeds to the inner step, where we check whether the surrogate constraints (\ref{meanvar}) are satisfied. If not, then we increment the value of $\tau$ by $\delta$ and increase $\lambda$ by a factor of $\Delta$.
Thereafter, (\ref{opt2}) is solved with the current values of $\tau$ and $\lambda$, and the solution obtained is used to update the initial guess trajectory, which in turn is used to obtain a better estimate of $C_j^t(.)$ through (\ref{affine}) for the next iteration. Algorithm~\ref{algo1} has two important features. Firstly, $E[C_j^t(.)]$ is affine and $J_{aug}$ is convex quadratic in terms of the control variables. Thus, solving (\ref{opt2}) for a given $\tau$ and $\lambda$ amounts to solving a quadratic programming (QP) problem. This in turn can be accomplished efficiently through open-source solvers like CVX \cite{CVX}. Secondly, Algorithm~\ref{algo1} is different from the standard SQP routines used to solve general non-convex problems in the sense that it does not require a trust-region update. This, in turn, is because the affine approximation of $C_j^t$ in (\ref{affine}) acts as a global upper bound for the original collision constraints (\ref{C_i}). Each $\eta$ can be mapped to numerous combinations of $\tau$ and $\lambda$. This redundancy is captured in Algorithm~\ref{algo1} by manipulating the update rates of $\tau$ and $\lambda$. We discuss this in more detail in Section~\ref{sim} with the help of specific examples.

\begin{algorithm}
\caption{Sequential Quadratic Programming for solving the FOC}\label{algo1}
\begin{algorithmic}
\small
\State \textbf{Initialization}: initial guess for the optimal trajectory $x^t_*,y^t_*$
\State $i = 0$, $\tau = 0$, $\lambda = 1$
\While{$\vert J^{i+1}_{opt}-J^i_{opt}\vert>\xi$ \textbf{ or } $E[C_j^t(.)]+k\sqrt{Var[C_j^t(.)]}> 0$}
\If{$E[C_j^t(.)]+k\sqrt{Var[C_j^t(.)]}>0$}
\State $\tau \leftarrow \tau+\delta$
\State $\lambda \leftarrow \Delta\lambda$
\EndIf
\State $U \leftarrow \arg\min J_{aug}$ \quad s.t. $E[C_j^t(.)]+\tau\leq 0$
\State Update $x^t_*,y^t_*$ through $U$
\State $i \leftarrow i+1$
\EndWhile
\normalsize
\end{algorithmic}
\end{algorithm}

\section{Simulation Results}\label{sim}

\subsection{Collision Avoidance Strategies}

To ensure collision avoidance, humans can choose to maintain high clearance from the obstacles, resulting in a large deviation from straight-line paths. Alternatively, they can choose to reduce the deviation but compensate for it by moving with high precision near the obstacles (reduced positional variance). In light of the signal-dependent noise (\ref{varepsidef}), moving with precision near the obstacles requires moving with low velocities. For ease of exposition, from here on we will refer to the slowing-down strategy as ``Low Velocity'' or \textbf{LV} and to the strategy of maintaining large clearance from the obstacles as ``High Clearance'' or \textbf{HC}. Both strategies can be modeled through (\ref{opt2}) by using appropriate values for the parameters $\tau$ and $\lambda$. For example, Fig.~\ref{plotcomp_config1} shows two solution trajectories of (\ref{opt2}) between the same start and goal configurations. The probability of avoidance $\eta$ for both trajectories is $0.94$. However, the two trajectories achieve this probability of collision avoidance through different combinations of $\tau$ and $\lambda$. The trajectory resulting from strategy \textbf{LV} was obtained with $\tau=0.0009, \lambda = 2.28\times10^6$, while that resulting from strategy \textbf{HC} was obtained with $\tau=0.0012, \lambda = 0.9\times10^6$. These values of $\tau$ and $\lambda$ were obtained using different update rates for $\tau$ and $\lambda$ in Algorithm~\ref{algo1}.
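To clarify where the update rates enter, the following schematic (ours, not the implementation used for the experiments) renders the outer loop of Algorithm~\ref{algo1} in Python. The helpers \texttt{solve\_qp\_surrogate} and \texttt{worst\_surrogate} are hypothetical stand-ins for, respectively, solving the convex QP (\ref{opt2}) around the current guess (e.g.\ with an off-the-shelf convex solver, analogous to CVX \cite{CVX}) and evaluating the largest value of $E[C_j^t(.)]+k\sqrt{Var[C_j^t(.)]}$ over all obstacles and time steps.
\begin{verbatim}
import numpy as np

def run_foc(solve_qp_surrogate, worst_surrogate, initial_guess,
            delta=1e-4, Delta=10.0, xi=1e-3, max_iter=100):
    # Schematic outer loop of Algorithm 1.  'solve_qp_surrogate(tau, lam, guess)'
    # is assumed to solve the convex QP around 'guess' and return (U, new_guess, J);
    # 'worst_surrogate(guess)' returns max over j,t of E[C] + k*sqrt(Var[C]).
    tau, lam = 0.0, 1.0
    guess, J_prev = initial_guess, np.inf
    for _ in range(max_iter):
        if worst_surrogate(guess) > 0.0:   # chance-constraint surrogate violated
            tau += delta                   # demand more clearance (shifts E[C])
            lam *= Delta                   # penalize positional variance (slow down)
        U, guess, J = solve_qp_surrogate(tau, lam, guess)
        if abs(J_prev - J) < xi and worst_surrogate(guess) <= 0.0:
            break                          # converged and feasible
        J_prev = J
    return U, guess, tau, lam
\end{verbatim}
A larger increment $\delta$ for $\tau$ pushes the iterates toward strategy \textbf{HC}, while a smaller $\delta$ leaves the burden on $\lambda$ and yields strategy \textbf{LV}, as illustrated next.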
For simulating strategy \textbf{LV} we used $\delta = 0.00005$, $\Delta = 10$ in the update rules of $\tau$ and $\lambda$, and for simulating strategy \textbf{HC} we used $\delta = 0.0001$, $\Delta = 10$. Since $\tau$ controls the clearance from the obstacles, setting a higher update rate for $\tau$ resulted in trajectories belonging to strategy \textbf{HC}. On the other hand, a lower update rate for $\tau$ puts a higher emphasis on $\lambda$, and consequently on the manipulation of positional variance through velocity control for collision avoidance, thus resulting in trajectories belonging to strategy \textbf{LV}. The velocity profiles shown in Fig.~\ref{plotcomp_dev_vel_config1} demonstrate that a higher $\lambda$ forces the velocity magnitude along the trajectory closer to the obstacle (strategy \textbf{LV}) to be small during the initial stages, i.e., while the trajectory is near the obstacles. Consequently, the positional variance is reduced and the desired probability of collision avoidance is maintained. In contrast, the trajectory with higher clearance from the obstacle (strategy \textbf{HC}) has the liberty to move at a faster velocity and let the variance of the movement grow. The velocity magnitude along the trajectory resulting from strategy \textbf{LV} eventually increases, but this happens towards the end of the movement, after crossing the obstacles.

\begin{figure}[!h]
\centering
\subfigure[]{ \includegraphics[width= 4.3cm, height=3.2cm] {strat_comp_config1.eps} \label{plotcomp_dev_config1} }\hspace{-0.6cm}
\subfigure[]{ \includegraphics[width= 4.3cm, height=3.2cm] {strat_comp_vel_config1.eps} \label{plotcomp_dev_vel_config1} }\hspace{-0.8cm}
\caption{Demonstration of the effect of the choice of $\tau$ and $\lambda$ on the collision avoidance strategies. Two sets of trajectories between the same start and goal locations and with the same probability of avoidance $\eta$ were computed. However, to generate these two trajectories we used different sets of $\tau$ and $\lambda$ to achieve the specified probability of avoidance. The trajectories shown in green were computed using $\tau=0.0012, \lambda = 0.9\times10^6$, while the trajectories shown in red were computed using $\tau=0.0009, \lambda = 2.28\times10^6$.}
\label{plotcomp_config1}
\end{figure}

\subsection{Mapping Avoidance Strategies to Control Cost}

If we were to derive a variant of the optimization (\ref{opt2}) for a system with additive constant-variance noise, the probability of collision avoidance $\eta$ would depend solely on the clearance from the obstacles. Thus, an increase in $\eta$ would directly lead to an increase in arc lengths and, consequently, control costs. However, to develop a framework that is suitable for modeling human arm movements, we incorporated signal-dependent noise \cite{Harris98}. In the presence of signal-dependent noise, the control cost of trajectories depends on the probability of avoidance $\eta$ and, more importantly, on the combination of $\tau$ and $\lambda$ that is used in the optimization (\ref{opt2}) to achieve this $\eta$. In other words, the control cost depends on the strategy that is used to achieve a particular probability of collision avoidance. In Figs.~\ref{strat_comp_traj1}-\ref{strat_comp_traj2_vel} we present simulated trajectories that correspond to both strategies \textbf{LV} and \textbf{HC} for probabilities of collision avoidance $\eta=0.86$ and $\eta=0.95$. The paths that resulted from strategy \textbf{HC} indeed have higher clearance from the obstacles.
In contrast, the paths that resulted from strategy \textbf{LV} have lower clearance and thus rely heavily on modifying the velocity magnitudes, and consequently the positional variance, for collision avoidance. As a result, paths resulting from strategy \textbf{HC} have higher arc lengths than paths resulting from strategy \textbf{LV}. In Fig.~\ref{cost_intra_hom} the ratio of control costs for trajectories resulting from the two strategies is presented as a function of $\eta$. For low $\eta$, paths resulting from strategy \textbf{LV}, which have lower arc lengths, are less costly. But as $\eta$ increases, the higher arc length paths resulting from strategy \textbf{HC} become less costly. The observations discussed above are apparent from the structure of the optimization (\ref{opt2}). Increasing either $\tau$, $\lambda$, or both leads to an increase in the control cost. At low values of $\eta$, there is very little restriction on the growth of positional variance and thus the control cost is dictated by $\tau$, which controls the arc length. But as $\eta$ increases, the effect of $\lambda$ becomes prominent. This is consistent with the significant reduction in positional variance that is depicted in Fig.~\ref{strat_comp_traj2} and the corresponding skewed velocity profile shown in Fig.~\ref{strat_comp_traj2_vel}. Since trajectories resulting from strategy \textbf{LV} have less clearance from the obstacles, they require a higher value of $\lambda$ to achieve the same $\eta$ (similar to the result shown in the previous section). Thus, at higher probabilities, trajectories resulting from strategy \textbf{LV} become more costly in spite of having lower arc lengths.

\begin{figure}[!h]
\centering
\subfigure[]{ \includegraphics[width= 4.4cm, height=3.2cm] {strat_comp_traj1.eps} \label{strat_comp_traj1} }\hspace{-0.8cm}
\subfigure[]{ \includegraphics[width= 4.4cm, height=3.2cm] {strat_comp_traj1_vel.eps} \label{strat_comp_traj1_vel} }\vspace{-0.7cm}
\subfigure[]{ \includegraphics[width= 4.4cm, height=3.2cm] {strat_comp_traj2.eps} \label{strat_comp_traj2} }\hspace{-0.8cm}
\subfigure[]{ \includegraphics[width= 4.4cm, height=3.2cm] {strat_comp_traj2_vel.eps} \label{strat_comp_traj2_vel} }\vspace{-0.7cm}
\subfigure[]{ \includegraphics[width= 8.3cm, height=4.3cm] {cost_intra_hom.eps} \label{cost_intra_hom} }
\caption{Control costs vary with the probability of avoidance. (a)-(d) Movements with different strategies between the same start and goal locations, the same obstacle configurations, and with noise level $c_x, c_y=0.15$. (a), (c) present the paths with standard deviation ellipses for the two strategies. The obstacles are represented as blue filled circles and the grey shaded regions around them represent the uncertainty about the size of the obstacle. (b), (d) present the velocity profiles. (e) The ratio of the control costs between the two strategies, $\frac{J_U^{LV}}{J_U^{HC}}$, as a function of $\eta$.}
\vspace{-0.5cm}
\end{figure}

The results presented above were obtained with $c_x=c_y=0.15$ in (\ref{varepsidef}); that is, the noise was $15\%$ of the control input. Next, we examined how the cost ratio shown in Fig.~\ref{cost_intra_hom} changes with a reduction in noise. Fig.~\ref{cost_lessnoise} depicts the ratio of control costs for trajectories resulting from strategies \textbf{LV} and \textbf{HC} for $c_x=c_y=0.05$. With lower noise, strategy \textbf{LV} becomes less costly even for higher probabilities. This result agrees with common intuition.
With less noise there is no need to ensure high clearance from the obstacles, making strategy \textbf{HC} redundant. In fact, for a zero-noise system, the trajectory with the least control cost would just graze the obstacle. We would like to highlight that Fig.~\ref{cost_intra_hom} and Fig.~\ref{cost_lessnoise} are intended to demonstrate the general trend in the ratio of control costs. An in-depth analysis of the exact values and their dependence on the initial conditions of the optimization is beyond the scope of the current study.

\begin{figure}[!h]
\centering
\includegraphics[width= 8.3cm, height=3.7cm] {cost_inter_hom_loewnoise.eps}
\caption{The ratio of the control costs between the two strategies, $\frac{J_U^{LV}}{J_U^{HC}}$, as a function of $\eta$ for a noise level of $c_x, c_y=0.05$.}
\label{cost_lessnoise}
\vspace{-0.4cm}
\end{figure}

\subsection{Modeling Choice of Homotopies}

In this section, we discuss how the choice of collision avoidance strategy, or in other words the choice of $\tau$ and $\lambda$ for a given $\eta$, affects the control cost of trajectories in different homotopies.

\subsubsection{Strategy \textbf{LV}}

In Figs.~\ref{plot1comp_config1} and \ref{plot2comp_config1}, solution trajectories of (\ref{opt2}) having the same start and goal positions but belonging to different homotopies and having different probabilities of avoidance $\eta$ are depicted. The trajectories in both homotopies were generated by choosing values of $\tau$ and $\lambda$ that ensure collision avoidance by slowing down near the obstacles and reducing positional variance (strategy \textbf{LV}) rather than by taking a large deviation from them. Thus, as $\eta$ increases from 0.9 (Fig.~\ref{plot1comp_config1}) to 0.965 (Fig.~\ref{plot2comp_config1}), we observe only a small change in arc length, but a significant change in the positional variance along the trajectories. Moreover, since trajectories of homotopy 2 move through a more cluttered environment, the reduction of positional variance along them is higher than that along trajectories of homotopy 1. It is possible to relate the change in positional variance as $\eta$ increases to the change in the control costs through the velocity profiles. Firstly, in contrast to Fig.~\ref{plot1comp_vel_config1}, the velocity profiles shown in Fig.~\ref{plot2comp_vel_config1} are skewed; i.e., they have low magnitudes during the initial phases and a peak which is shifted towards the right. This is to ensure that velocity magnitudes (and thus positional variance) are low near the obstacles and reach their peak only after crossing the obstacles. Since trajectories in homotopy 2 require a larger reduction in positional variance, the skewness observed in their velocity profiles is also higher. Finally, the skewness in the velocity profiles is accompanied by higher peak velocities. This is because of the fixed-final-time paradigm of the optimization (\ref{opt2}). Since velocity magnitudes are low during the initial phases of the trajectories, this needs to be compensated for by moving faster in obstacle-free space to ensure that the goal position is reached in the specified time. Now, it is easy to deduce that a skewed velocity profile with higher peaks implies higher accelerations and jerks and, consequently, higher control costs. To summarize, for collision avoidance strategy \textbf{LV}, maintaining a high $\eta$ requires a larger reduction in positional variance, leading to larger skewness in the velocity profiles and consequently higher control costs.
However, since trajectories in homotopy 2 require a larger reduction in positional variance, the control costs along them increase at a higher rate than those along trajectories in homotopy 1. We demonstrate this last observation in Fig.~\ref{cost_inter_hom}, which shows the ratio of control costs along homotopy 1 and homotopy 2 for various values of $\eta$. For lower values (up to $\eta =0.9$), the costs along homotopy 1 and homotopy 2 are similar owing to their similar velocity profiles. However, for higher $\eta$, the cost along homotopy 1 is significantly lower than that along homotopy 2.

\begin{figure}[!h]
\centering
\subfigure[]{ \includegraphics[width= 4.6cm, height=3.5cm] {plot1comp_config11.eps} \label{plot1comp_config1} }\hspace{-0.8cm}
\subfigure[]{ \includegraphics[width= 4.1cm, height=3.5cm] {plot1comp_vel_config11.eps} \label{plot1comp_vel_config1} }\vspace{-0.7cm}
\subfigure[]{ \includegraphics[width= 4.6cm, height=3.5cm] {plot2comp_config11.eps} \label{plot2comp_config1} }\hspace{-0.8cm}
\subfigure[]{ \includegraphics[width= 4.1cm, height=3.5cm] {plot2comp_vel_config11.eps} \label{plot2comp_vel_config1} }\vspace{-0.7cm}
\subfigure[]{ \includegraphics[width= 8.3cm, height=3.7cm] {cost_inter_hom.eps} \label{cost_inter_hom} }
\caption{Movements between the same start and goal locations and obstacle configurations but with different probabilities of avoidance. (a), (c) present the paths with standard deviation ellipses for the two homotopies. The obstacles are represented as blue filled circles and the grey shaded regions around them represent the uncertainty about the size of the obstacle. (b), (d) present the velocity profiles. (e) The ratio of the control costs between the two homotopies, $\frac{J_U^{H_1}}{J_U^{H_2}}$, as a function of $\eta$. For the chosen avoidance strategy \textbf{LV}, the control costs along the two homotopies are similar for low $\eta$. For higher $\eta$, the control cost along homotopy 1 is significantly lower.}
\end{figure}

\begin{figure}[!h]
\centering
\subfigure[]{ \includegraphics[width= 4.6cm, height=3.5cm] {hom_comp_traj1.eps} \label{hom_comp_traj1} }\hspace{-0.8cm}
\subfigure[]{ \includegraphics[width= 4.1cm, height=3.5cm] {hom_comp_vel_traj1.eps} \label{hom_comp_traj1_vel} }\vspace{-0.7cm}
\subfigure[]{ \includegraphics[width= 4.6cm, height=3.5cm] {hom_comp_traj2.eps} \label{hom_comp_traj2} }\hspace{-0.8cm}
\subfigure[]{ \includegraphics[width= 4.1cm, height=3.5cm] {hom_comp_vel_traj2.eps} \label{hom_comp_traj2_vel} }\vspace{-0.7cm}
\subfigure[]{ \includegraphics[width= 8.3cm, height=3.7cm] {cost_inter_hom_strat1.eps} \label{cost_inter_hom_strat1} }
\caption{Movements between the same start and goal locations and obstacle configurations but with different probabilities of avoidance $\eta$. The results are similar to those shown in Fig.~4, but the trajectories are now computed with respect to strategy \textbf{HC}, which places higher emphasis on clearance from the obstacles for obstacle avoidance. (e): Ratio of control costs along the two homotopies, $\frac{J_U^{H_1}}{J_U^{H_2}}$, with respect to strategy \textbf{HC}.}
\end{figure}

\subsubsection{Strategy \textbf{HC}}

Here we re-analyze the cost along the homotopies for the same configuration as shown in Fig.~4, but with respect to strategy \textbf{HC}, where there is a greater reliance on clearance from the obstacles for collision avoidance. The trajectories along both homotopies are shown in Figs.~\ref{hom_comp_traj1} and \ref{hom_comp_traj2}.
Comparing these trajectories with Figs.~\ref{plot1comp_config1} and \ref{plot2comp_config1} demonstrates that there is a significant increase in clearance from the obstacles as $\eta$ increases. Thus, less restriction is required on the growth of positional variance and, consequently, the velocity profiles along trajectories in both homotopies become very similar even at higher $\eta$ (Fig.~\ref{hom_comp_traj2_vel}). This is very different from the comparison shown in Fig.~\ref{plot2comp_vel_config1}. The similarity in velocity profiles in turn results in similar control costs along both homotopies (Fig.~\ref{cost_inter_hom_strat1}). In particular, the control costs along homotopies 1 and 2 are similar for a larger range of $\eta$. The lowest ratio of costs is $0.67$ in Fig.~\ref{cost_inter_hom_strat1}, for $\eta = 0.9615$. In comparison, the ratio was $0.33$ in Fig.~\ref{cost_inter_hom} for the same $\eta$.

\section{Discussion and Future Work} \label{disc}

In this paper, we presented a stochastic optimal control problem with signal-dependent noise and probabilistic collision avoidance constraints as a model of human reaching among obstacles. We then reformulated it as a parameter optimization problem. The parameters $\tau$ and $\lambda$, which appear in the reformulated optimization problem (\ref{opt2}), serve as a mapping between the probability of collision avoidance $\eta$ and possible collision avoidance strategies. The parameter $\tau$ models the clearance from the obstacles, and the parameter $\lambda$ models the effect of slowing down near the obstacles. In our simulations, we demonstrate that the effect of these two parameters on movement paths and velocity profiles is in agreement with the experimental findings reported in \cite{Tressilian} for reach-to-grasp movements around obstacles. Specifically, that work discussed two basic but coupled strategies of collision avoidance, which consist of moving around the obstacle and slowing down. We showed how each avoidance strategy results in a unique variation of control costs with respect to $\eta$, both within and across homotopies. These variations in control costs can be used as a basis for predicting user behavior between a given start and goal position and for a given obstacle environment. For example, in Figs.~\ref{strat_comp_traj1}-\ref{strat_comp_traj2} and \ref{cost_intra_hom}, we showed that risk-seeking behavior (low $\eta$) is more likely to use strategy \textbf{LV} for collision avoidance, as it requires less control effort. In contrast, risk-averse behavior (high $\eta$) would likely choose strategy \textbf{HC}. We also showed how the control cost along different homotopies is dictated by the choice of avoidance strategy. This variation in control cost can be used to predict homotopy selection by the human. In particular, if two competing homotopies have similar control costs, then the human would have equal affinity towards either of them, leading to random behavior. However, as the ratio of control costs departs from unity, the selection would incline towards the homotopy with the lower control cost, leading to a more well-defined pattern. Our proposed framework has the following limitations. Firstly, we used a very simple dynamic model and thus do not necessarily capture every aspect of the motion of the human arm. A second-order linear mechanical system or a non-linear model of a serial-link robotic manipulator would be better alternatives.
The second-order mechanical system can be incorporated easily because, as long as the system is linear, the structure of the optimization (\ref{opt2}) does not change. In contrast, incorporating even a simple planar two-link manipulator model is more challenging and may require methods similar to those proposed in \cite{todorov_obstacle}. Secondly, the fixed-final-time paradigm of the optimization (\ref{opt2}) is not equipped to capture the increase in the traversal time of reaching motions due to the presence of obstacles. A possible solution to this could be developed using time-scaling concepts \cite{bharath_iros15}. Our current study is limited to developing and approximating an efficient solution for the computational framework and demonstrating the homotopies and strategies that can be explained within it; we do not test our predictions against real reaching data. From our simulation study, we conclude that if the parameters $\tau$ and $\lambda$ are known, the possible choice of homotopy as well as the choice of trajectory within that homotopy can be predicted. Thus, our current efforts are focused on developing an inverse optimization framework which can automatically recover these parameters from example trajectories demonstrated by a human.

\bibliographystyle{IEEEtran}
\section{Introduction} \subsection{} Let $F$ be a non-archimedean locally compact field. Let $\bf G$ be a quasi-split connected reductive group over $F$, and let $r$ be a complex representation of the Langlands dual group ${}^{\rm L}G$ of $\bf G$. For interesting choices of $\bf G$ and $r$, the Langlands-Shahidi method attaches a $\gamma$-factor $\gamma(s,\pi,r,\psi)$ to a smooth non-trivial character $\psi$ of $F$ and an irreducible smooth generic representation $\pi$ of ${\bf G}(F)$. Let $\mathcal{W}_F$ denote the absolute Weil group of $F$, and $\mathcal{W}_F'$ its Weil-Deligne group. The local Langlands correspondence --still mostly conjectural-- associates to $\pi$ a Langlands parameter $\phi_\pi: \mathcal{W}_F' \rightarrow {}^{\rm L}G$. Whenever such a correspondence is established for $\bf G$, we should have the following main equality: \begin{equation}\label{maineq} \gamma(s,\pi,r,\psi) = \gamma(s,r \circ \phi_\pi,\psi), \end{equation} where the $\gamma$-factors on the right are the ones defined by Langlands and Deligne. \subsection{} A noteworthy example arises from a trio $({\bf G},{\bf H},{\bf L})$ of quasi-split groups such that: $\bf G$ is the restriction of scalars ${\rm Res}_{E/F}{\rm GL}_2$, where $E$ is a cubic separable extension of $F$, so that $G = {\bf G}(F) = {\rm GL}_2(E)$; $\bf H$ is an ambient quasi-split group (over $F$) semisimple and simply connected of type $D_4$, with triality corresponding to $E/F$ (see \S~\ref{AsaiL}); and $\bf L$ is the maximal Levi subgroup of $\bf H$ obtained by removing the central root $\alpha_2$ from the Dynkin diagram of $\bf H$: \begin{center} \begin{tikzpicture} \draw[circle] (180:1) node {$\bullet$}; \draw[circle] (180:1) node[left] {$\alpha_1$}; \draw[circle] (195:1) arc (195:285:1); \draw[circle] (300:1) node {$\bullet$}; \draw[circle] (300:1.35) node {$\alpha_4$}; \draw[circle] (315:1) arc (315:405:1); \draw[circle] (60:1) node {$\bullet$}; \draw[circle] (60:1.35) node {$\alpha_3$}; \draw[circle] (75:1) arc (75:165:1); \draw[circle] (0:0) node {$\bullet$} node[right] {$\alpha_2$}; \draw[circle] (60:.125) -- (60:.875); \draw[circle] (180:.125) -- (180:.875); \draw[circle] (300:.125) -- (300:.875); \end{tikzpicture} \end{center} Writing $L = {\bf L}(F)$, there is a natural homomorphism of $L$-groups ${}^{\rm L}\iota : {}^{\rm L}G \rightarrow {}^{\rm L}L$, and a corresponding homomorphism $\iota : {\bf L} \rightarrow {\bf G}$. We use the Langlands-Shahidi method (see \cite{Sh1990} and \cite{LS}), applied to the pair $({\bf H},{\bf L})$ to produce $\gamma$-factors for $G$. More precisely, the adjoint action of ${}^{\rm L}L$ on the Lie algebra of the unipotent radical of the parabolic ${}^{\rm L}P$ of ${}^{\rm L}H$ with Levi ${}^{\rm L}L$ produces an $8$-dimensional representation $r_\mathcal{A}$ of ${}^{\rm L}L$. The Langlands-Shahidi method gives a $\gamma$-factor $\gamma(s,\tau,r_\mathcal{A},\psi)$ for any irreducible smooth generic representation $\tau$ of $L$. If $\pi$ is an irreducible smooth generic representation of $G$, we define Asai cube $\gamma$-factors for $\pi$ by setting \begin{equation}\label{Asai:gamma:def} \gamma(s,\pi,{}^\otimes{\rm I},\psi) \mathrel{\mathop:}= \gamma(s,\tau,r_\mathcal{A},\psi), \end{equation} where $\tau$ is a (generic) irreducible component of $\pi \circ \iota$. \subsection{} For ${\rm GL}_2$, the local Langlands correspondence was established in \cite{Ku}, see also the more detailed account of \cite{BuHe}. 
Let $\pi$ be as above, and let $\sigma$ be the corresponding degree $2$ representation of the Weil-Deligne group $\mathcal{W}_E'$ of $E$. Then, $\mathcal{W}_E'$ can be seen as an index $3$ subgroup of $\mathcal{W}_F'$, and we let ${}^\otimes{\rm I}(\sigma)$ be the $8$-dimensional representation of $\mathcal{W}_F'$ obtained from $\sigma$ by tensor induction from $\mathcal{W}_E'$ to $\mathcal{W}_F'$. The main equality in this case takes the form: \begin{equation}\label{maineq:Asai} \gamma(s,\pi,{}^\otimes{\rm I},\psi) = \gamma(s,{}^\otimes{\rm I}(\sigma),\psi). \end{equation} We establish it in \S~\ref{proof:GL2class}. Prasad and Schulze-Pillot posed the question of equality between Langlands-Shahidi $\varepsilon$-factors with those on the Galois side \cite{PrSP2008}; in \S~\ref{consequences} we extend equality \eqref{maineq:Asai} to non-generic $\pi$ and derive equality for $L$-functions and $\varepsilon$-factors: \begin{align}\label{maineq:L3} L(s,\pi,{}^\otimes{\rm I}) &= L(s,{}^\otimes{\rm I}(\sigma)) \\ \nonumber \varepsilon(s,\pi,{}^\otimes{\rm I},\psi) &= \varepsilon(s,{}^\otimes{\rm I}(\sigma),\psi). \end{align} As a consequence of \eqref{maineq:Asai}, we also prove stability of Asai cube $\gamma$-factors after twisting by a suitably highly ramified character in \S~\ref{stability:prop}. \subsection{Remark} An analogous $\gamma$-factor can be defined via the integral representation of Garrett \cite{Ga1987}, and also Piatetski-Shapiro and Rallis \cite{PSRa1987}. This approach can be very helpful in determining the location of poles and non-zero regions, see the work of Ikeda, e.g. \cite{Ik1992}. We do not address the question of equality of those factors with ours. \subsection{} The proof of \eqref{maineq:Asai} is local-global in nature, and its principle is well known: by the multiplicativity property of $\gamma$-factors, one reduces to the case where $\pi$ is supercuspidal. And if so, one uses instances where the global Langlands correspondence for ${\rm GL}_2$ has been established \cite{JaLa, La, Tu1978} --more details in \S~\ref{sketch:proof} and a full proof in \S~\ref{proof:GL2class}. The idea of a local to global argument goes back to the early days of class field theory; in the context of Langlands correspondences it has been used repeatedly since the 1970's. In situations closely related to ours, it was used by H. Kim in \cite{Ki2002} to establish the main equality \eqref{maineq} for a triple product of representations of ${\rm GL}_2(F)$, and by M. Krishnamurthy in \cite{Kr2003} for a product $\pi \times \Pi$, where $\pi$ is a representation of ${\rm GL}_2(F)$ and $\Pi$ of ${\rm GL}_2(E)$, $E/F$ quadratic. Specifically, we use the remark by Weil that local Galois groups are solvable \cite{We1974}; that idea was used again by Ramakrishnan in \cite{Ra2000}. A local-global method was used when $F$ has positive characteristic in recent work of the authors \cite{GaLo, HeLo2011, HeLo2013a, HeLo2013b}. \subsection{} In the global situation, we use a cubic separable extension $l/k$ of global fields, and we have to consider the algebra $k_v \otimes_k l$ for all places $v$ of $k$. Since such an algebra is not necessarily a field, we are forced to consider other $\gamma$-factors than the Asai cube ones, in particular, those of \cite{Ki2002} and \cite{Kr2003} mentioned above. Rather than relying on those specific instances, however, we prove in one swoop a more general result. Namely, Theorem~\ref{GL2class:thm} below, which illustrates the full extent of the method. 
We say that a connected reductive group $\bf G$ is of ${\rm GL}_n$-\emph{kind} if ${\bf G}$ is isomorphic to a product ${\bf G}_1 \times \cdots \times {\bf G}_t$, where for each $i$ we have ${\bf G}_i = {\rm Res}_{l_i/k}{\rm GL}_{n_i}$, $n_i \leq n$. Here, ${\rm Res}_{l_i/k}{\rm GL}_{n_i}$ denotes restriction of scalars with respect to a separable extension $l_i$ over a base field $k$. The Langlands-Shahidi method applies to a pair of quasi-split connected reductive groups $({\bf H},{\bf L})$ such that there is a parabolic subgroup $\bf P$ of the ambient group $\bf H$, with Levi component $\bf L$. The unipotent radical of $\bf P$ is denoted by $\bf N$ and has Lie algebra $\mathfrak{n}$. Let $\rho$ be an irreducible component of the adjoint representation of ${}^{\rm L}L$ on ${}^{\rm L}\mathfrak{n}$. We then say that a representation $r$ of ${}^{\rm L}G$ is an \emph{LS-representation} if there is such a pair $({\bf H},{\bf L})$ and an $L$-map ${}^{\rm L}\iota : {}^{\rm L}{\bf G} \rightarrow {}^{\rm L}{\bf L}$, inducing a separable isogeny on derived groups, such that $r = \rho \circ {}^{\rm L}\iota$. Then, the Langlands-Shahidi method attaches a $\gamma$-factor $\gamma(s,\pi,r,\psi)$ to any generic irreducible smooth representation $\pi$ of $G$ --the $\gamma$-factor depends only on $r$, not on the way $r$ is written $\rho \circ {}^{\rm L}\iota$ for some $\iota$. \begin{theorem}\label{GL2class:thm} Let $r$ be an LS-representation of ${}^{\rm L}G$ for a group $\bf G$ of ${\rm GL}_2$-kind. Let $\pi$ be a smooth irreducible generic representation of $G$, and let $\phi_\pi$ be its Langlands parameter. Let $\psi : F \rightarrow \mathbb{C}^\times$ be a non-trivial character. Then \begin{equation*} \gamma(s,\pi,r,\psi) = \gamma(s,r \circ \phi_\pi,\psi). \end{equation*} \end{theorem} Equality \eqref{maineq:Asai} is a special case. The proof of Theorem~\ref{GL2class:thm} occurs in \S~\ref{proof:GL2class}; its application to the Asai cube factors is found in \S~\ref{AsaiL}. \subsection{Remark} Compatibility with the local Langlands correspondence, i.e. equality \eqref{maineq}, can be stated for all LS-representations of groups of ${\rm GL}_n$-kind, since the local Langlands correspondence is known for ${\rm GL}_n$ \cite{La1973, LaRaSt1993, HaTa2001, He2000, Sc2013}. Equality \eqref{maineq} holds for the Rankin-Selberg $\gamma$-factors $\gamma(s,\pi_1\times\pi_2,\psi)$, essentially by definition of the local Langlands correspondence. Another case occurs when $G = {\rm GL}_n(F)$ and $r$ is the exterior square or the symmetric square representation of ${}^{\rm L}G$. In that case, for a local field $F$ of characteristic zero, \eqref{maineq} was proved up to a root of unity in \cite{He2010} and equality in \cite{CoShTs2017}; when $F$ has positive characteristic, exact equality was proved in \cite{HeLo2011} (and twisted exterior square or symmetric square $\gamma$-factors, for $G={\rm GL}_n(F) \times F^\times$, were treated in \cite{GaLo2015}). Still another case is when $G = {\rm GL}_n(E)$, $E/F$ quadratic separable, and $r$ gives quadratic Asai $\gamma$-factors: for $F$ of characteristic zero, see \cite{He2010} for equality up to a root of unity and work in progress of D. Shankman, adapting \cite{CoShTs2017} for true equality; for $F$ of characteristic $p$, \cite{HeLo2013b} establishes equality \eqref{maineq}.
In fact, when $F$ has positive characteristic, the method of \cite{HeLo2011,HeLo2013a,HeLo2013b} was generalized by Gan and Lomel\'i in \cite{GaLo}, yielding \eqref{maineq} for LS representations of groups of ${\rm GL}_n$-kind; so when $F$ has positive characteristic, Theorem~\ref{GL2class:thm} is in fact a consequence of \cite{GaLo}. Nonetheless, in \S~\ref{p>0} we indicate how to adapt \cite{HeLo2011} and \cite{HeLo2013b} to the present situation, as it gives a more direct proof than \cite{GaLo}. \subsection{}\label{sketch:proof} For the benefit of the reader, we now explain the local-global argument in the proof of \eqref{maineq:Asai} (for the Asai cube factors), when $F$ has characteristic zero. The general case of Theorem~\ref{GL2class:thm} is fully proved in \S~\ref{proof:GL2class}, using a similar pattern. Let $k$ be a number field. The principal obstacle is the lack of a complete Langlands correspondence for ${\rm GL}(2)$ over $k$, so we need to use the partial results available in that direction. More precisely, as in the positive characteristic case, we use a list of properties of the $\gamma$-factors $\gamma(s,\pi,r,\psi)$. All but one of these properties are local, and the corresponding properties for $\gamma(s,r \circ \phi_\pi,\psi)$ are easy to prove. One of these properties, namely multiplicativity, concerns the case where $\pi$ is a component of a representation ${\rm Ind}_P^G \rho$ induced from a proper parabolic subgroup of $G$. Multiplicativity and an inductive argument help us reduce the proof of \eqref{maineq:Asai} to the case where $\pi$ is cuspidal. To treat the cuspidal case, our tool is the global functional equation, which we now describe in the Asai cube case. Let $l$ be a cubic extension of $k$, and $\Pi$ a cuspidal automorphic representation of ${\bf G}(\mathbb{A}_k) = {\rm GL}_2(\mathbb{A}_l)$ where ${\bf G} = {\rm Res}_{l/k}{\rm GL}_2$. Then for each place $v$ of $k$, $\Pi_v$ is a generic irreducible representation of $G_v = {\bf G}(k_v) = {\rm GL}_2(k_v \otimes_k l)$, so that the Langlands-Shahidi method produces $\gamma$-factors $\gamma(s,\Pi_v,{}^\otimes{\rm I},\Psi_v)$ for any non-trivial character $\Psi$ of $k \backslash \mathbb{A}_k$. The global functional equation is \begin{equation}\label{globalFE} L^S(s,\Pi,{}^\otimes{\rm I}) = \prod_{v \in S} \gamma(s,\Pi_v,{}^\otimes{\rm I},\Psi_v) L^S(1-s,\widetilde{\Pi},{}^\otimes{\rm I}), \end{equation} where $S$ is a finite set of places of $k$, containing the infinite ones, such that $l/k$, $\Pi$ and $\Psi$ are unramified for $v \notin S$, and $L^S$ denotes the product of (unramified) local $L$-functions ranging over all $v\notin S$ --which by construction will be compatible with Galois $L$-functions. Also, $\widetilde{\Pi}$ is the representation contragredient to $\Pi$. The proof of \eqref{maineq:Asai} then proceeds as follows for $\pi$ cuspidal. We choose a cubic extension of number fields $l/k$, with a place $v_0$ giving the local extension $E/F$, and a cuspidal automorphic representation $\Pi$ as above, giving $\pi$ at $v_0$. We assume that $\psi$ is the component at $v_0$ of a global additive character $\Psi$ (local properties later allow us to derive the case where $\psi$ is arbitrary). We apply the functional equation to $\Pi$. The set $S$ not only has to contain $v_0$, but we also need to adjust the global situation so that at places in $S$, different from $v_0$, equality \eqref{maineq:Asai} is valid.
The case of Archimedean places comes from Shahidi \cite{Sh1985,Sh1990}, but at a finite place $v \in S \setminus \left\{ v_0 \right\}$ we need to assume that $\Pi_v$ is somewhat less complicated than $\pi$, for example, $\Pi_v$ not cuspidal or a level zero cuspidal representation. As mentioned before, that procedure necessarily yields situations which do not give rise to Asai cube factors. For example, at a finite place distinct from $v_0$ the extension $l/k$ may be split or partially split; the former case requires a triple Rankin-Selberg product and the latter a quadratic Asai $\gamma$-factor. This explains why we consider the more general result of Theorem~\ref{GL2class:thm}, which also has intrinsic interest. The main problem, however, is to get a corresponding functional equation on the Galois side. While over function fields the functional equation for $L$-functions of $\ell$-adic Galois representations is a consequence of results of Deligne and Laumon, over number fields it is available only for complex representations of Weil groups. Only if $\Pi$ corresponds to a complex representation $\Sigma$ of the Weil group $\mathcal{W}_l$ of $l$, can we compare the functional equation \eqref{globalFE} with the one for $\Sigma$: \begin{equation}\label{galoisFE} L^S(s,{}^\otimes{\rm I}(\Sigma)) = \prod_{v \in S} \gamma(s,{}^\otimes{\rm I}(\Sigma_v),\Psi_v) L^S(1-s,{}^\otimes{\rm I}(\widetilde{\Sigma})). \end{equation} The comparison is done term by term: the $L$-functions in \eqref{globalFE} and \eqref{galoisFE} are the same because $\Pi$ corresponds to $\Sigma$, and the $\gamma$-factors at places in $S$ distinct from $v_0$ are the same by the inductive argument alluded to above. Equality \eqref{maineq:Asai} for $\pi$ then follows. Our way to produce corresponding $\Pi$ and $\Sigma$ is to start from the Weil group side, with $l/k$ chosen as mentioned above giving $E/F$ at $v_0$. The representation $\sigma$, corresponding to $\pi$ cuspidal, is irreducible, hence has finite image in the projective linear group; this image is isomorphic to $A_4$ or $S_4$, or is dihedral. In each case, we produce a representation $\Sigma$ of $\mathcal{W}_l$, yielding $\sigma$ at the place $v_0$, and with the same image as $\sigma$ in the projective linear group. By the strong Artin conjecture of Langlands and Tunnell \cite{La,Tu1981}, there is an automorphic cuspidal representation $\Pi$ of ${\rm GL}_2(\mathbb{A}_l)$ corresponding to $\Sigma$, and we can proceed as sketched. \subsection*{Acknowledgments} The authors would like to thank D. Prasad and F. Shahidi for mathematical discussions. The first author thanks his home institutions, Universit\'e Paris-Saclay and CNRS; in addition to Pontificia Universidad Cat\'olica de Valpara\'iso for a visit (supported by MEC Grant 80170039) where a second version of the paper was written. The second author would like to thank IH\'ES and Universit\'e d'Orsay for their hospitality during visits in 2016; in addition to IISER, Pune and TIFR, Mumbai during January and February 2017; he was supported in part by Project VRIEA/PUCV 039.367 and FONDECYT Grant 1171583. \section{LS method and ${\rm GL}(2)$}\label{LSmethod} For the moment, let $F$ denote either a local field or a global one. We write $\mathfrak{p}$ to denote the characteristic of $F$, where we allow $\mathfrak{p}$ to be either a prime $p$ or $0$. Fix $\mathfrak{p}$.
As mentioned in the introduction, we say that a connected reductive group ${\bf G}/F$ is of ${\rm GL}_n$-\emph{kind} if ${\bf G}$ is isomorphic to a product ${\bf G}_1 \times \cdots \times {\bf G}_t$, where each ${\bf G}_i$ is a group ${\rm Res}_{E_i/F}{\rm GL}_{n_i}$, $n_i \leq n$, obtained via restriction of scalars with respect to a separable extension $E_i$ over the base field $F$. Given an algebraic group $\bf H$ defined over $F$, we write $H={\bf H}(F)$. We write ${\bf Z}_{\bf H}$ to denote the center of $\bf H$. We let $\bf H$ be an ambient quasi-split connected reductive group defined over $F$, together with a subgroup $\bf L$ which is the Levi component of a maximal parabolic subgroup $\bf P$ of $\bf H$. Let $\bf N$ be the unipotent radical of $\bf P$ with Lie algebra $\mathfrak{n}$. The Langlands-Shahidi method \cite{Sh1990,LS} applies to a representation $\rho$ which is an irreducible component of the adjoint representation of ${}^{\rm L}L$ on ${}^{\rm L}\mathfrak{n}$. We work with an extended version of these representations for groups $\bf G$ of ${\rm GL}_2$-kind. An \emph{LS-representation} $r$ of ${}^{\rm L}G$ is one obtained via a pair $({\bf H},{\bf L})$ of quasi-split connected reductive groups and a representation $\rho$ as above, together with a map of L-groups ${}^{\rm L}\iota : {}^{\rm L}{\bf G} \rightarrow {}^{\rm L}{\bf L}$, inducing a separable isogeny on derived groups, and such that $r = \rho \circ {}^{\rm L}\iota$. \subsection{Notation} From \S~\ref{LSmethod} to \S~\ref{consequences} of the article, we let $F$ denote a local field and $k$ a global one. Let $\mathscr{A}_n(\mathfrak{p})$ be the class of tuples $(F,\psi,{\bf G},\pi,r)$ where $F$ is a locally compact field of characteristic $\mathfrak{p}$, $\psi$ is a non-trivial additive character of $F$, ${\bf G}$ is a quasi-split reductive group of ${\rm GL}_n$-kind over $F$, $\pi$ is a smooth irreducible representation of $G = {\bf G}(F)$, and $r$ is an LS-representation of ${}^{\rm L}G$. We say that a tuple $(F,\psi,{\bf G},\pi,r) \in \mathscr{A}_n(\mathfrak{p})$ is generic if $\pi$ is generic. If $\pi$ is a smooth irreducible representation of $G$, we can adapt Silberger's reasoning \cite{Si1979} in order to show that $\pi$ reduces in $L$ to a sum of finitely many irreducibles. Then, the Langlands-Shahidi method attaches a $\gamma$-factor \begin{equation}\label{general:def} \gamma(s,\pi,r,\psi) \mathrel{\mathop:}= \gamma(s,\tau,r,\psi) \text{ for } (F,\psi,{\bf G},\pi,r) \in \mathscr{A}_2(\mathfrak{p}), \end{equation} where $\tau$ is the (unique) generic component of the restriction of $\pi$ to $L$. We also let $\mathscr{A}_n^{\rm glob}(\mathfrak{p})$ be the class of tuples $(k,\Psi,{\bf G},\Pi,r,S)$ where $k$ is a global field of characteristic $\mathfrak{p}$, $\Psi$ a non-trivial character of $\mathbb{A}_k/k$ (note that $\Psi_v$ is unramified almost everywhere), $\bf G$ a quasi-split reductive group over $k$ of ${\rm GL}_n$-kind, $\Pi$ a cuspidal automorphic representation of ${\bf G}(\mathbb{A}_k)$ (we recall that such representations are globally generic), $r$ an LS-representation of ${}^{\rm L}G$, and $S$ a finite set of places of $k$ containing the Archimedean ones, such that ${\bf G}_v$, $\Pi_v$ and $\Psi_v$ are unramified for $v \notin S$. Clearly such a tuple gives, for each place $v$ of $k$, a local tuple $(k_v,\Psi_v,{\bf G}_v,\Pi_v,r_v)$ where $r_v$ is obtained by composing $r$ with the natural map ${}^{\rm L}G_v \rightarrow {}^{\rm L}G_k$.
\subsection{Properties of LS $\gamma$-factors}\label{LS:axioms} \begin{enumerate} \item[(i)] \emph{(Naturality).} Let $(F,\psi,{\bf G},\pi,r) \in \mathscr{A}_2(\mathfrak{p})$ be generic. Let $\eta: F' \rightarrow F$ be an isomorphism extending to $G' = {\bf G}(F') \cong {\bf G}(F) = G$. Let $(F',\psi',{\bf G},\pi',r) \in \mathscr{A}_2(\mathfrak{p})$ be the tuple obtained via $\eta$. Then \begin{equation*} \gamma(s,\pi,r,\psi) = \gamma(s,\pi',r,\psi'). \end{equation*} \item[(ii)] \emph{(Isomorphism)}. Let $(F,\psi,{\bf G},\pi_j,r) \in \mathscr{A}_2(\mathfrak{p})$, $j=1$, $2$, be generic. If $\pi_1 \cong \pi_2$, then \begin{equation*} \gamma(s,\pi_1,r,\psi) = \gamma(s,\pi_2,r,\psi). \end{equation*} \item[(iii)] \emph{(Dependence on $\psi$).} Let $(F,\psi,{\bf G},\pi,r) \in \mathscr{A}_2(\mathfrak{p})$ be generic. Given $a \in F^\times$, let $\psi^a : F \rightarrow \mathbb{C}^\times$ be the character given by $\psi^a(x) = \psi(ax)$. Then there is a rational character $z: F^\times \rightarrow Z_G$, $a \mapsto z(a) \in Z_G$, such that \begin{equation*} \gamma(s,\pi,r,\psi^a) = \omega_{\pi}(z(a)) \left| a \right|_F^{\dim(r)(s-\frac{1}{2})} \cdot \gamma(s,\pi,r,\psi). \end{equation*} \item[(iv)] \emph{(Multiplicativity).} Let $(F,\psi,{\bf G},\pi,r) \in \mathscr{A}_2(\mathfrak{p})$ be generic. Let $\bf P$ be a parabolic subgroup of $\bf G$, with Levi subgroup $\bf M$, and assume that $\pi$ is the unique generic component of \begin{equation*} {\rm ind}_P^G \rho, \end{equation*} where $\rho$ is a generic smooth representation of $M$. Then $\bf M$ is of ${\rm GL}_2$-kind, the restriction of $r$ to ${}^{\rm L}M$ is a sum of LS representations $r_j$, $j \in \mathscr{J}$, and \begin{align*} \gamma(s,\pi,r,\psi) = \prod_{j \in \mathscr{J}} \gamma(s,\rho,r_j,\psi). \end{align*} \item[(v)] \emph{(Compatibility at representations with Iwahori fixed vectors).} Assume $F$ is non-archimedean and let $(F,\psi,{\bf G},\pi,r) \in \mathscr{A}_2(\mathfrak{p})$ be generic, such that $\pi$ has fixed vectors under Iwahori subgroups of $G$. Let $\phi_\pi : \mathcal{W}_F' \rightarrow {}^{\rm L}G$ be the Langlands parameter corresponding to $\pi$. Then \begin{equation*} \gamma(s,\pi,r,\psi) = \gamma(s,r \circ \phi_\pi,\psi). \end{equation*} \item[(vi)] \emph{(Compatibility at archimedean places).} Assume that $F$ is Archimedean and such that $(F,\psi,{\bf G},\pi,r) \in \mathscr{A}_2(\mathfrak{p})$ is generic. Let $\phi_\pi : \mathcal{W}_F \rightarrow {}^{\rm L}G$ be the Langlands parameter corresponding to $\pi$. Then \begin{equation*} \gamma(s,\pi,r,\psi) = \gamma(s,r \circ \phi_\pi,\psi). \end{equation*} \end{enumerate} Compatibility at archimedean places is the subject of \cite{Sh1985}. There is one global property that connects the local theory of $\gamma$-factors with partial $L$-functions defined by \begin{equation*} L^S(s,\Pi,r) = \prod_{v \notin S} L(s,\Pi_v,r_v), \end{equation*} for $(k,\Psi,{\bf G},\Pi,r,S) \in \mathscr{A}_2(\mathfrak{p})^{\rm glob}$. \begin{enumerate} \item[(vii)] \emph{(Functional equation).} Let $(k,\Psi,{\bf G},\Pi,r,S) \in \mathscr{A}_2(\mathfrak{p})^{\rm glob}$. Then \begin{equation*} L^S(s,\Pi,r) = \prod_{v \in S} \gamma(s,\Pi_v,r,\Psi_v) L^S(1-s,\widetilde{\Pi},r). \end{equation*} \end{enumerate} In \cite{HeLoFuture} we adapt Theorem~3.5 of \cite{Sh1990} and Theorem~4.1 of \cite{LS}, which give the existence and uniqueness of a system of $\gamma$-factors on $\mathscr{A}_2(\mathfrak{p})$ with Properties (i)--(vii). 
\subsection{Properties of the corresponding Galois $\gamma$-factors}\label{Galois:axioms} Let $(F,\psi,{\bf G},\pi,r)$ be a local tuple, and let $\phi_\pi$ be the $L$-parameter attached to $\pi$ via the local Langlands correspondence. Galois $\gamma$-factors $\gamma(s,r \circ \phi_\pi,\psi)$ possess the analogous properties of naturality ${\rm (i)}_\mathscr{G}$, and isomorphism ${\rm (ii)}_\mathscr{G}$ --in fact, when we talk about ``the'' $L$-parameter attached to $\pi$, we rather mean its equivalence class. Dependence on $\psi$ is, on the outset, more precise than for $\gamma(s,\pi,r,\psi)$: \begin{equation*} {\rm (iii)}_\mathscr{G} \ \gamma(s,r \circ \phi_\pi,\psi^a) = \det(r \circ \phi_\pi)(a) \left| a \right|_F^{\dim(r)(s-\frac{1}{2})} \cdot \gamma(s,r \circ \phi_\pi,\psi), \end{equation*} where $\det(r \circ \phi_\pi)$ is seen as a character of $F^\times$ via class field theory. And, multiplicativity is exactly parallel to the same property for $\gamma(s,\pi,r,\psi)$: \begin{enumerate} \item[${\rm (iv)}_\mathscr{G}$] With the hypothesis of Property~${\rm (iv)}$ of \S~\ref{LS:axioms}, the $L$-parameter $\phi_\pi$ factorizes as $\phi_\pi \circ i$, where $i$ is the inclusion of ${}^{\rm L}M$ into ${}^{\rm L}G$, and then \begin{align*} \gamma(s,r \circ \phi_\pi,\psi) = \prod_{j \in \mathscr{J}} \gamma(s,r_j \circ \phi_\pi,\psi). \end{align*} \end{enumerate} Properties ${\rm (v)}$ and ${\rm (vi)}$ of \S~3.2 are already statements about equality between LS and Galois $\gamma$-factors. For the final property, we consider a global tuple $(k,\Psi,{\bf G},\Pi,r,S)$, and we \emph{assume} that there is a global Weil group $L$-parameter \begin{equation*} \Phi_\Pi: \mathcal{W}_k \rightarrow {}^{\rm L}G, \end{equation*} which corresponds to $\Pi$ (at all places); it then satisfies the functional equation \begin{equation*} {\rm (vii)}_\mathscr{G} \ L^S(s,r \circ \Phi_\Pi) = \prod_{v \in S} \gamma(s,r \circ \Phi_{\Pi_v},\Psi_v) L^S(1-s,r \circ \Phi_{\widetilde{\Pi}}), \end{equation*} where partial $L$-functions are given by \begin{equation*} L^S(s,\Phi_\Pi,r) = \prod_{v \notin S} L(s,\Phi_{\Pi_v},r_v). \end{equation*} Our main point in the proof of Theorem~\ref{GL2class:thm} will be to ensure this assumption. \section{Proof of Theorem~\ref{GL2class:thm}}\label{proof:GL2class} \subsection{}\label{123} In this section, we prove Theorem~\ref{GL2class:thm} for $\mathfrak{p}=0$ in \S\S~\ref{begin:p0}--\ref{nonD:0}, then for $\mathfrak{p}>0$ in \S~\ref{p>0}. But first, we gather arguments which are the same in both situations. Let $F$ be non-archimedean, and ${\bf G}/F$ a connected reductive group of ${\rm GL}_2$-kind. \medskip \noindent{\bf 1)} Let $\pi$ be a smooth irreducible generic representation of $G$. Then there is a parabolic subgroup $\bf P$ of $\bf G$ with Levi subgroup $\bf M$ and a smooth irreducible generic supercuspidal representation $\rho$ of $M$ such that $\pi$ is a component of ${\rm Ind}_P^G(\rho)$. By the multiplicativity properties, \S~\ref{LS:axioms}(iv) and \S~\ref{Galois:axioms}${\rm (iv)}_\mathscr{G}$, if Theorem~\ref{GL2class:thm} is valid for the tuples $(F,\psi,{\bf M},\rho,r_j)$, then it is valid for $(F,\psi,{\bf G},\pi,r)$. We shall thus reason by induction on $n_G = \dim{\bf G} - \dim{\bf C}$, where $\bf C$ is the maximal $F$-split torus in the center of $\bf G$. If $\bf M$ is a proper Levi subgroup of $\bf G$ then $n_G > n_M$, so to prove Theorem~\ref{GL2class:thm} we may assume that $\pi$ is supercuspidal. 
\medskip \noindent{\bf 2)} Choose a global field $k$ with a place $v$ and an isomorphism $\eta$ of $k_v$ onto $F$. Assume Theorem~\ref{GL2class:thm} is proved for tuples $(F,\psi,{\bf G},\pi,r)$ such that $\psi \circ \eta$ is the component at $v$ of a character of $\mathbb{A}_k/k$. Choose such a tuple, and a character $\Psi$ of $\mathbb{A}_k/k$ with $\Psi_v = \psi \circ \eta$. For $b \in k^\times$, $\psi^{\eta(b)}\circ\eta$ is the component at $v$ of $\Psi^b : x \mapsto \Psi(bx)$, so we get by our assumption \begin{equation*} \gamma(s,\pi,r,\psi^{\eta(b)}) = \gamma(s,r\circ\phi_\pi,\psi^{\eta(b)}). \end{equation*} Then, by applying \S~\ref{LS:axioms}(iii) and \S~\ref{Galois:axioms}${\rm (iii)}_\mathscr{G}$, we conclude that \begin{equation*} \omega_\pi(z(\eta(b)) = \det(r\circ\phi_\pi)(\eta(b)), \end{equation*} for $b \in k^\times$ and $z : F^\times \rightarrow Z_G$ the rational character of \S~\ref{LS:axioms}(iii). But both $\omega_\pi \circ z$ and $\det(r \circ \phi_\pi)$ are continuous characters of $F^\times$. Since they are equal on $\eta(k^\times)$, which is dense in $F^\times$, they are equal. Thus, for all $a \in F^\times$ we get \begin{equation*} \gamma(s,\pi,r,\psi^a) = \gamma(s,r\circ\phi_\pi,\psi^a). \end{equation*} \noindent{\bf 3)} To apply {\bf 2)} we first need to lift our tuple $(F,\psi,{\bf G},\pi,r)$ to a global one. The following lemma (Lemma~3.6 of \cite{He1983}, Lemma~4.13 of \cite{DeAntwerp}) will also be used later. \begin{lemma}\label{GF:lemma} Let $\widetilde{E}/F$ be a finite Galois extension. Then there exist a global field $k$, a finite Galois extension $l/k$, with a place $v_0$ of $k$, and an isomorphism $\eta$ of $k_{v_0}$ onto of $F$ inducing an isomorphism of $k_{v_0} \otimes_k l$ onto $\widetilde{E}$. In particular, $l$ has only one place $w_0$ above $v_0$ and the decomposition subgroup of ${\rm Gal}(l/k)$ at $w_0$ is itself, and identifies with ${\rm Gal}(\widetilde{E}/F)$ via $\eta$. \end{lemma} \begin{remark}\label{rmk:0p} {\rm (i)} Assume that $F$ is a finite extension of $\mathbb{Q}_p$ (hence $\mathfrak{p}=0$). Applying the above procedure to an $\widetilde{E}$ which is Galois over $\mathbb{Q}_p$, we find a number field $l$ with only one place above $p$. {\rm (ii)} When $F$ has characteristic $2$ (so $\mathfrak{p}=2$), we shall in fact use a stronger result of Gabber-Katz \cite{Ka1986}. \end{remark} Using Lemma~\ref{GF:lemma}, we can lift ${\bf G}/F$ and its LS representation $r$ to a number field $k$. Let us briefly provide the argument for this (it applies to a general situation, not necessarily of ${\rm GL}_2$-kind \cite{HeLoFuture}). Recall that $r = \rho \circ {}^{\rm L}\iota$ has an underlying pair of connected quasi-split reductive groups $({\bf H},{\bf L})$ defined over $F$, together with an L-group homomorphism ${}^{\rm L}\iota : {}^{\rm L}G \rightarrow {}^{\rm L}L$. We choose a large enough finite Galois extension $\widetilde{E}$ of $F$ in $\overline{F}$, such that $\bf G$, $\bf H$ and $\bf L$ are split over $\widetilde{E}$. The action of $\mathcal{W}_F$ on the based root data of the (pinned) groups $\bf G$, $\bf H$, $\bf L$ factors through ${\rm Gal}(\widetilde{E}/F)$. 
Using the lemma we choose a number field $k$, and extension $l$, getting at the place $v_0$ an isomorphism of ${\rm Gal}(l/k)$ onto ${\rm Gal}(\widetilde{E}/F)$; in particular, we obtain based root data with an action of ${\rm Gal}(l/k)$, determining (uniquely up to unique isomorphism) pinned groups ${\bf G}_k$, ${\bf H}_k$, ${\bf L}_k$ together with an L-homomorphism ${}^{\rm L}\iota_k : {}^{\rm L}G_k \rightarrow {}^{\rm L}L_k$. As the action of ${}^{\rm L}L$ on ${}^{\rm L}\mathfrak{n}$ can be seen on the root datum, $\rho$ corresponds to an irreducible representation $\rho_k$ of ${}^{\rm L}L_k$ and we put $r_k = \rho_k \circ {}^{\rm L}\iota_k$. At the place $v_0$, the isomorphism $l_{v_0} \simeq \widetilde{E}$ gives unique isomorphisms of pinned groups $G_k \otimes k_{v_0} \simeq G$, compatible with the based root data, and similarly for $H$ and $L$. On the L-group side, if we choose a separable algebraic closure $\bar{k}$ of $k$ containing $l$, we get an L-group ${}^{\rm L}G_k$, and an embedding ${}^{\rm L}G \rightarrow {}^{\rm L}G_k$ at the place $v_0$, and similarly for ${}^{\rm L}H$ and ${}^{\rm L}L$. Composing $r_k$ with the embedding ${}^{\rm L}G \rightarrow {}^{\rm L}G_k$, we retrieve $r$ back. Of course, to only lift the group ${\bf G}/F$ of ${\rm GL}_2$-type, it is easier. Indeed, if $\bf G$ is the product of ${\rm Res}_{E_i/F}{\rm GL}_{n_i}$, then $\widetilde{E}$ contains the $E_i$'s, and we can take ${\bf G}_k$ to be the product of ${\rm Res}_{l_i/k}{\rm GL}_{n_i}$, where $l_i$ is the sub-extension of $l$ corresponding to $E_i$. \subsection{}\label{lemma2:ss} Before continuing, we shall need another well-known lemma. We include a proof for the convenience of the reader, where we use the following notation for a global field $k$: {\bf (i)} if $\mathfrak{p}=0$, interpret $v \in \left\{ \infty \right\}$ to mean that $v$ is an archimedean place of $k$; {\bf (ii)} if $\mathfrak{p}>0$, we take $\infty$ to be a fixed place at infinity for $k$, $v_0 \neq \infty$. \begin{lemma}\label{GL1:glob} Let $k$ be a global field, $v_0$ a finite place of $k$, $\lambda$ a multiplicative character of $k_{v_0}^\times$. Then there exists a character $\chi = \otimes \chi_v$ of $\mathbb{A}_k/k^\times$ such that: \begin{enumerate} \item[(i)] $\chi_{v_0} \simeq \lambda$; \item[(ii)] $\chi_v$ is unramified for $v \notin \left\{ v_0, \infty \right\}$; \item[(iii)] if $\mathfrak{p}>0$, then $\chi_\infty$ is tame. \end{enumerate} \end{lemma} \begin{proof} First let $\mathfrak{p}=0$. For each finite place $v$ of $k$, the group of units $U_v$ of $k_v^\times$ is compact and so is the product \begin{equation*} U = \prod_{v\ {\rm finite}} U_v. \end{equation*} We have $k^\times \cap U = \left\{ 1 \right\}$ in $\mathbb{A}_k^\times$, and $k^\times U$ is closed in $\mathbb{A}_k^\times$. The character of $k^\times U$ which is trivial on $k^\times$ and $U_v$ for $v \neq v_0$, and given by $\lambda$ on $U_{v_0}$, extends to a (unitary) character $\chi'$ of $\mathbb{A}_k^\times$, trivial on $k$ and $U_v$ for $v \neq v_0$, and $\chi_{v_0}'$ differs from $\lambda$ by a power of the absolute value character of $k_{v_0}^\times$. Modifying $\chi'$ accordingly by a power of the absolute value id\`ele class character of $\mathbb{A}_k^\times$, we find the desired $\chi$. \medskip Now assume $\mathfrak{p}>0$. For a place $v$ of $k$, let $U_v^1$ be the pro-$p$ radical of $U_v \subset k_v^\times$. 
The groups \begin{equation*} U' = U_{\infty}^1 \times \prod_{v \notin \{ v_0, \infty\}} U_v \text{ and } U = U_{v_0} \times U' \end{equation*} are compact and we have a map $\Phi : U \rightarrow k^\times \backslash \mathbb{A}_k^\times$, with ${\rm Im}\,\Phi$ compact. Also, ${\rm Ker}\,\Phi = \{ 1 \}$. Indeed, let $x \in {\rm Ker}\,\Phi$, so that $x_v \in U_v$ for all places $v$ of $k$. Then $x$ must be a constant function on the smooth projective curve $X$ with function field $k$. But, $x_\infty \in 1+\mathfrak{p}_\infty$, gives $x_\infty=1$. Hence, $x=1$. The character \begin{equation*} \lambda\vert_{U_{v_0}} \otimes \mathds{1}_{U'} \text{ of } U, \end{equation*} gives a character of ${\rm Im}\,\Phi$, which is a closed subgroup of $k^\times \backslash\mathbb{A}_k^\times$. Then, we can extend this character to one of $k^\times \backslash\mathbb{A}_k^\times$, which we denote by $\chi'$. At $k_{v_0}$, the character $\chi_{v_0}' \cdot \lambda^{-1}$ is unramified, hence a power of the absolute value character of $k_{v_0}^\times$. Twisting back $\chi'$ by a power of the absolute value id\`ele class character of $\mathbb{A}_k^\times$, we find the desired $\chi$. \end{proof} \subsection{}\label{begin:p0} We now begin the proof when $\mathfrak{p}=0$. So we let $\pi$ be a supercuspidal representation of $G$. As $G$ is a product of ${\rm GL}_{n_i}(E_i)$'s, $\pi$ is a tensor product of cuspidal representations $\pi_i$ of ${\rm GL}_{n_i}(E_i)$; if $\sigma_i$ is the representation of $\mathcal{W}_{E_i}$ corresponding to $\pi_i$, there is an unramified character $\eta_i$ such that $\eta_i \sigma_i$ has finite image. It will be convenient to choose the extension $\widetilde{E}$ of $F$ in Lemma~\ref{GF:lemma} to contain, not only the $E_i$'s, but also the fixed field of ${\rm Ker}\,(\eta_i \sigma_i)$. As explained above, Lemma~\ref{GF:lemma} allows us to choose a number field $k$ and an extension $l$ so that ${\bf G}/F$ and $r$ lift to $k$. For each $i$, we let $l_i$ be the subfield of $\widetilde{E}$ corresponding to $E_i$ via the lemma. We shall assume that $\psi$ is the component at $v_0$ of a character $\Psi$ of $k \backslash \mathbb{A}_k$, which is enough by \S~\ref{123} 2). It remains to lift $\pi$ appropriately to apply the reasoning sketched at the end of the introduction for the Asai cube case. The point is to globalize each $\sigma_i$ to a representation $\Sigma_i$ of $\mathcal{W}_{l_i}$, which corresponds via the global Langlands correspondence to a cuspidal automorphic representation $\Pi_i$ of ${\rm GL}_{n_i}(\mathbb{A}_{l_i})$. We then get from the $\Pi_i$'s a cuspidal automorphic representation $\Pi$ of ${\bf G}_k(\mathbb{A}_k)$, and from the $\Sigma_i$'s a corresponding global parameter $\Phi_\Sigma : \mathcal{W}_k \rightarrow {}^{\rm L}G_k$. As indicated in \S~\ref{sketch:proof}, we can then write the two global functional equations (vii) and ${\rm (vii)}_\mathscr{G}$ for a large enough finite set $S$ of places of $k$. Provided we have equality of the factors on both sides at places $v \in S \setminus \{ v_0 \}$, we get equality at $v_0$. Hence our desired equality \eqref{maineq:Asai}. \subsection{}\label{dihedral:0} Let us first treat the case where each $\sigma_i$ of dimension $2$ has dihedral image in the projective linear group --note that if the residual characteristic $p$ of $F$ is odd, we are necessarily in this dihedral situation \cite{We1974}. 
There is then for each $i$ such that $n_i=\dim \sigma_i = 2$ a quadratic separable extension $D_i$ of $E_i$ in $\overline{F}$ and a character $\chi_i$ of $D_i^\times$ such that $\sigma_i$ is induced from the character of $\mathcal{W}_{D_i}$ corresponding to $\chi_i$ via class field theory. By our choice of $\widetilde{E}$, $k$ and $l$, there is a quadratic extension $m_i$ of $l_i$ in $l$ giving $D_i/E_i$ at the place $v_0$. By Lemma~\ref{GL1:glob} in ${\rm char}\,\mathfrak{p}=0$, we choose a character $\mathcal{X}_i$ of $\mathbb{A}_{m_i}^\times/m_i^\times$ with component $\chi_i$ at $v_0$, and unramified at other finite places. We take for $\Sigma_i$ the dimension $2$ representation of $\mathcal{W}_{l_i}$ induced from $\mathcal{X}_i$ (or rather the character of $\mathcal{W}_{m_i}$ corresponding to it). For each $i$ such that $n_i=\dim\sigma_i=1$, we use Lemma~\ref{GL1:glob} to globalize $\sigma_i$ to a character $\Sigma_i$ of $\mathcal{W}_{l_i}$, unramified at finite places different from $v_0$. By construction if $n_i=1$, and Jacquet-Langlands \cite{JaLa} if $n_i=2$, there is a cuspidal automorphic representation $\Pi_i$ of ${\rm GL}_{n_i}(\mathbb{A}_{l_i})$ corresponding to $\Sigma_i$ via the global Langlands correspondence. In this way, we get a cuspidal automorphic representation $\Pi$ of ${\bf G}_k(\mathbb{A}_k)$ and a global parameter $\Phi_\Pi : \mathcal{W}_k \rightarrow {}^{\rm L}G_k$ corresponding to it via the global Langlands correspondence, as globally assumed in \S~\ref{Galois:axioms}. At a finite place $v \neq v_0$, $\Sigma_i$ is either unramified (if $n_i=1$ for example) or reducible, so that $\Pi_{i.v}$ is a principal series. In either case, the desired equality holds at $v$ by property (v) of \S~\ref{LS:axioms} or by the induction assumption of \S~\ref{123} 1). We conclude that equality holds at $v_0$ as well, which is what we wanted. \subsection{}\label{nonD:0} Let us now treat the case where $F$ has residue characteristic $2$ and some of the $\sigma_i$'s of dimension $2$ may have image in $A_4$ or $S_4$ in the projective linear group. We use Remark~\ref{rmk:0p}~(i) and choose $k$ (and $l$) to have only one place above $2$. For the indices $i$ such that $\sigma_i$ has dimension $1$, we globalize as before, using Lemma~\ref{GL1:glob}. For the indices $i$ such that $\sigma_i$ has dimension $2$, as we have ensured, the representation $\eta_i \sigma_i$ of \S~\ref{begin:p0} has finite image and factorizes through ${\rm Gal}(l/k)$, we get a representation of $\mathcal{W}_{l_i}$ yielding $\eta_i\sigma_i$ at the place $v_0$, which we can twist by a power of the absolute value character to obtain a representation $\Sigma_i$ of $\mathcal{W}_{l_i}$ yielding $\sigma_i$ at the place $v_0$. The image of $\Sigma_i$ in the projective linear group is dihedral, $A_4$ or $S_4$ so there is by \cite{JaLa,La,Tu1978} a corresponding cuspidal automorphic representation $\Pi_i$ of ${\rm GL}_2(\mathbb{A}_{l_i})$. Since finite places of $l_i$ different from $v_0$ have odd residue characteristic, at such places $\Sigma_i$ is either reducible or dihedral. We then proceed as before in comparing two global functional equations for $\Sigma$ and $\Pi$: at finite places of $k$ in $S$ distinct from $v_0$, equality holds because we already treated the dihedral case, or by induction, and consequently it holds at $v_0$ as well. \subsection{Case of $\mathfrak{p}>0$}\label{p>0} We use the same reasoning by induction as in the characteristic zero case, so we can assume that $\pi$ is cuspidal. 
If all of the $n_i$'s are $1$, then $\bf G$ is a product of induced tori, and the result is known, more generally, for any torus \cite{HeLoFuture}. So we can assume $n_i=2$ for at least one $i$. As in the characteristic zero case of \S~\ref{begin:p0}, we choose a Galois extension $\widetilde{E}$ of $F$ in $\overline{F}$, using Lemma~\ref{GF:lemma} to lift $\widetilde{E}/F$ to a global situation $l/k$. Following \S~\ref{begin:p0}, and using the same notation, we obtain representations $\sigma_i$, $\pi_i$. If for each $i$ such that $n_i=2$, $\sigma_i$ has dihedral image in the projective linear group, we can use Lemma~\ref{GL1:glob}, now in the characteristic $\mathfrak{p}>0$ case. Following \S~\ref{dihedral:0}, for each $i$ such that $n_i=2$ we choose $D_i$ and $\chi_i$. To $D_i$ corresponds in $l$ a quadratic extension $m_i$ of $l_i$, and we extend $\chi_i$ to a character $\mathcal{X}_i$ of $\mathbb{A}_{m_i}^\times/m_i^\times$ unramified outside $v_0$ and some other place $v_1$, where it is tame. We choose $v_1$ to be above a place of $l_i$ split in $m_i$. So at that place of $l_i$, $\Sigma_i$ (obtained by inducing from $\mathcal{X}_i$) is reducible. The same reasoning as in \S~\ref{dihedral:0} then yields the result. We have now proved our result for $\mathfrak{p}>2$, since all the $\sigma_i$'s of dimension $2$ are dihedral. If $\mathfrak{p}=2$ and one $\sigma_i$ of dimension $2$ has image $A_4$ or $S_4$ in the projective linear group, we use the result of Gabber and Katz \cite{Ka1986}. Namely, let $F$ be a non-archimedean local field of characteristic $p$, with residue field $\mathbb{F}_q$. We let $k = \mathbb{F}_q(t)$, and we choose an isomorphism $F \simeq \mathbb{F}_q (\!( t )\!)$, so that $F$ is the completion of $k$ at $0$. Then our Galois extension $\widetilde{E}/F$ can be globalized (uniquely, as it turns out) to a Galois extension $l$ of $k$, unramified outside $0$ and $\infty$, and tame at infinity. Now, following \S~\ref{nonD:0}, we globalize $\sigma_i$ to $\Sigma_i$, which is unramified outside of $0$ and $\infty$ and tame at infinity; in particular, if $n_i=2$, $\Sigma_i$ is dihedral at $\infty$. Moreover, there is a cuspidal automorphic representation $\Pi_i$ of ${\rm GL}_{n_i}(\mathbb{A}_{l_i})$ corresponding to $\Sigma_i$, see Tunnell \cite{Tu1978}. We can then proceed as before, because at places other than $0$ and $\infty$ we are in an unramified situation, whereas at $\infty$ we have already treated the dihedral representations $\Sigma_{i,\infty}$ of dimension $2$. We can thus conclude equality at $0$. \begin{remark} The method of \cite{GaLo} would proceed a bit differently, first producing a globalization $\Pi_i$ of $\pi_i$ which is (at most) tamely ramified at $\infty$ and unramified outside $\left\{ 0, \infty \right\}$, and then invoking the proof by Drinfeld \cite{Dr1978} of the global Langlands conjecture for ${\rm GL}_2$ over $l_i$ to obtain an ($\ell$-adic) Galois representation $\Sigma_i$ associated to $\Pi_i$. \end{remark} \section{Consequences}\label{consequences} In this section we first draw consequences of the main equality: on twisting with unramified characters first, and with highly ramified characters second, thus proving stability. We then recall the definition of $L$-functions and $\varepsilon$-factors, and we extend Theorem~\ref{GL2class:thm} from generic to general smooth irreducible representations $\pi$. Compatibility with the local Langlands correspondence for $L$-functions and $\varepsilon$-factors is an immediate corollary. 
\subsection{Twisting with characters}\label{character} Let $\bf G$ be our (pinned) quasi-split reductive group of ${\rm GL}_2$-kind defined over $F$. If $\bf G$ is the product of ${\rm Res}_{E_i/F}{\rm GL}_{n_i}$, $n_i \leq 2$, a character $\chi$ of $G = {\bf G}(F) = \prod_{i}{\rm GL}_{n_i}(E_i)$ is a product $\chi = \prod_i \chi_i \circ \det$ where each $\chi_i$ is a character of $E_i^\times$. Thus $\chi$ factorizes through the maximal quotient torus $Y$ of $G$, $Y = \prod_i E_i^\times = \prod_i {\bf Y}_i(F)$, where ${\bf Y}_i = {\rm Res}_{E_i/F}{\bf G}_m$. Dually, there is an embedding ${}^{\rm L}Y \rightarrow {}^{\rm L}G$ with $\widehat{\bf Y}$ going (isomorphically, in fact) to the center $Z_{\widehat{\bf G}}$ of $\widehat{\bf G}$. If $\pi$ is a smooth irreducible representation of $G$, twisting $\pi$ by $\chi$ should correspond, via the local Langlands correspondence, to multiplying $\phi_\pi : \mathcal{W}_F \rightarrow {}^{\rm L}G$ by a map $\phi_\chi : \mathcal{W}_F \rightarrow Z_{\widehat{G}}$ determined by $\chi$ via the recipe of \cite{Bo1979}. And in fact it does: if $\tau_i$ is a smooth irreducible representation of ${\rm GL}_{n_i}(E_i)$ corresponding to a parameter $\phi_{\tau_i} : \mathcal{W}_{E_i} \rightarrow {\rm GL}_{n_i}(\mathbb{C})$, then twisting by $\tau_i$ by $\chi_i \circ \det$ corresponds to multiplying $\phi_{\tau_i}$ by the character $\eta_i$ of $\mathcal{W}_{E_i}$ corresponding to $\chi_i$ via class field theory. The $\eta_i$'s yield a parameter $\eta : \mathcal{W}_F \rightarrow {}^{\rm L}Y = \widehat{Y} \rtimes \mathcal{W}_F$ and twisting $\pi$ by $\chi$ indeed corresponds to twisting $\phi_\pi$ by the map given by the first component of $\eta$. \subsection{Unramified characters} The group ${\rm Hom}_F({\bf G},{\bf G}_m)$ is a finitely generated free $\mathbb{Z}$-module, and the group of unramified characters $X_{\rm nr}(G)$ is isomorphic to a quotient of ${\rm Hom}_F({\bf G},{\bf G}_m) \otimes \mathbb{C}$: if $(\chi_1, \ldots, \chi_e)$ is a basis of ${\rm Hom}_F({\bf G},{\bf G}_m)$, then $\chi_1 \otimes s_1 + \cdots + \chi_e \otimes s_e$ corresponds to the unramified character $g \mapsto \prod_{i=1}^e \left| \chi_i(g) \right|_F^{s_i}$. Let $r$ be an LS-representation of ${}^LG$, obtained via a pair $({\bf H},{\bf L})$ with $\bf H$ semisimple simply connected, and $\iota : {\bf L} \rightarrow {\bf G}$. Then ${\rm Hom}_F({\bf L},{\bf G}_m)$ has rank $1$, and a non-zero element is $\delta = \det({\rm Ad}_{\bf L}\vert_\mathfrak{n})$, where $\mathfrak{n}$ is the Lie algebra of the unipotent radical $\bf N$ of the standard parabolic subgroup $\bf P$ of $\bf G$ with Levi subgroup $\bf L$. We let $\tilde{\delta}$ be the unramified character corresponding to $\delta \otimes (1/\left\langle \delta,\alpha^\vee \right\rangle)$ if $\bf P$ is obtained by omitting the simple root $\alpha$. Now $\iota$ induces a morphism ${\rm Hom}_F({\bf G},{\bf G}_m) \rightarrow {\rm Hom}_F({\bf L},{\bf G}_m)$ and an analogous morphism when tensoring with $\mathbb{C}$, which we still write as composition with $\iota$. If $\chi$ is an unramified character of $G$, the corresponding unramified character $\chi \circ \iota$ of $L$ has the form $\tilde{\delta}^{s_0}$ for $s_0 \in \mathbb{C}$ well defined up to $(2\pi i/\log q_F) \mathbb{Z}$. 
From the known property of unramified character twists for the LS-representation of ${}^{\rm L}L$ yielding $r$, Theorem~5.1 of \cite{LS}, we get: \begin{enumerate} \item[${\rm (viii)}$] \emph{(Twists by unramified characters).} Let $(F,\psi,{\bf G},\pi_i,r) \in \mathscr{A}_2(\mathfrak{p})$ be generic. Let $\chi$ be an unramified character of $G$, with $\chi \circ \iota = \tilde{\delta}^{s_0}$, then \begin{equation*} \gamma(s,\pi \otimes \chi,r,\psi) = \gamma(s+s_0,\pi,r,\psi). \end{equation*} \end{enumerate} \subsection{Highly ramified characters}\label{stability:prop} We shall take highly ramified characters in the following manner: we choose $\chi \in {\rm Hom}_F({\bf G},{\bf G}_m)$ yielding a morphism $G \rightarrow F^\times$, and compose it with a character $\lambda : F^\times \rightarrow \mathbb{C}^\times$. The following property is a corollary to Theorem~\ref{GL2class:thm}. \begin{enumerate} \item[${\rm (ix)}$] \emph{(Stability).} For $i = 1$ and $2$, let $(F,\psi,{\bf G},\pi_i,r) \in \mathscr{A}_2(\mathfrak{p})$ be generic and such that their central characters are equal. Let $\chi \in {\rm Hom}_F({\bf G},{\bf G}_m)$ be such that $\chi \circ \iota \neq 0$. Then for all sufficiently ramified characters $\lambda : F^\times \rightarrow \mathbb{C}^\times$: \begin{equation*} \gamma(s,\pi_1 \otimes (\lambda \circ \chi),r,\psi) = \gamma(s,\pi_2 \otimes (\lambda \circ \chi),r,\psi). \end{equation*} \end{enumerate} \begin{proof} We translate to the Weil group side, where a corresponding property has been proved by Deligne-Henniart. More precisely, it follows from \cite{DeHe} that if $\sigma_1$ and $\sigma_2$ are two representations of $\mathcal{W}_F$ with the same dimension and determinant, then for all sufficiently ramified characters $\eta$ of $\mathcal{W}_F$, we have \begin{equation*} \gamma(s,\sigma_1 \otimes \eta,\psi) = \gamma(s,\sigma_2 \otimes \eta,\psi). \end{equation*} If $\phi_{\pi_i} : \mathcal{W}_F \rightarrow {}^{\rm L}G$ is the parameter of $\pi_i$, the parameter for $\pi_i \otimes(\lambda \circ \chi)$ is $\phi_{\pi_i} \cdot \phi_{\lambda \circ \chi}$ where $\phi_{\lambda \circ \chi}$ is described in \S~\ref{character}: explicitly $\chi : {\bf G} \rightarrow {\bf G}_m$ yields an L-map ${}^{\rm L}\chi : {}^{\rm L}{G}_m \rightarrow {}^{\rm L}G$ and ${}^{\rm L}\chi(\mathbb{C}^\times)$ is contained in the $\mathcal{W}_F$-fixed part of the center $Z_{\widehat{G}}$ of $\widehat{G}$; $\phi_{\lambda \circ \chi}$ is the map $\mathcal{W}_F \rightarrow {}^{\rm L}G$ obtained by composing ${}^{\rm L}\chi$ with the character $\mathcal{W}_F \rightarrow \mathbb{C}^\times$ corresponding to $\lambda$ via class field theory. Now $r = \rho \circ {}^{\rm L}\iota$ where $\rho$ is an irreducible representation of ${}^{\rm L}L$ and ${}^{\rm L}\iota : {}^{\rm L}G \rightarrow {}^{\rm L}L$ is dual to $\iota : {\bf L} \rightarrow {\bf G}$. Because $\iota$ induces an isogeny on derived subgroups ${}^{\rm L}\iota(Z_{\widehat{G}})$ is contained in the center $Z_{\widehat{L}}$ of $\widehat{L}$, and ${}^{\rm L}\iota \circ \phi_{\lambda \circ \chi}$ is a character with values in $Z_{\widehat{L}}^{\mathcal{W}_F}$. So that \begin{equation} r(\phi_{\pi_i} \cdot \phi_{\lambda \circ \chi}) = [ \rho \circ {}^{\rm L}\iota(\phi_{\pi_i}) ] \cdot \omega_\rho({}^{\rm L}\iota \circ \phi_{\lambda \circ \chi}), \end{equation} where $\omega_\rho$ is the central character of $\rho$. 
The assumption on $\chi$ implies that $\omega_\rho({}^{\rm L}\iota\circ{}^{\rm L}\chi)$ is a non-trivial (algebraic) character of $\mathbb{C}^\times$, and composing with sufficiently ramified characters of $\mathcal{W}_F$ still gives sufficiently ramified characters. It remains to verify that $r(\phi_{\pi_1})$ and $r(\phi_{\pi_2})$ have the same determinant: the above result of \cite{DeHe} then gives the result. To prove $\det r(\phi_{\pi_1}) = \det r(\phi_{\pi_2})$, we use a property of the local Langlands correspondence: if $\pi$ is a smooth irreducible representation of ${\rm GL}_n(E)$ (where $E$ is a finite extension of $F$ in $\overline{F}$) and $\phi_\pi : \mathcal{W}_E \rightarrow {\rm GL}_n(\mathbb{C})$ is its L-parameter, then $\det \circ\,\phi_\pi : \mathcal{W}_E \rightarrow \mathbb{C}^\times$ corresponds to the central character $\omega_\pi$ of $\pi$ via class field theory. Write ${\bf G} = \prod_j {\bf G}_j$, where ${\bf G}_j = {\rm Res}_{E_j/F}{\rm GL}_{n_j}$, $n_j \leq 2$, and accordingly $\pi_i = \otimes_j \pi_{i,j}$, where $\pi_{i,j}$ is a smooth irreducible representation of $G_j = {\rm GL}_{n_j}(E_j)$. If $\phi_{i,j} : \mathcal{W}_{E_j} \rightarrow {\rm GL}_{n_j}(\mathbb{C})$ is the parameter of $\pi_{i,j}$, then $\phi_{\pi_i} : \mathcal{W}_F \rightarrow {}^{\rm L}G$ is obtained as follows: we see $\phi_{i,j}$ as a morphism $\mathcal{W}_{E_j} \rightarrow {}^{\rm L}{\rm GL}_{n_j}/E_j = {\rm GL}_{n_j}(\mathbb{C}) \ltimes \mathcal{W}_{E_j}$ (given by the identity on the second component), giving an induced morphism $\phi_{i,j}' : \mathcal{W}_F \rightarrow {}^{\rm L}{G}_{j} = \widehat{G}_j \ltimes \mathcal{W}_F$, and $\phi_{\pi_i}$ is the ``product'' of these morphisms in the sense that on the first component $\widehat{G} = \prod_j \widehat{G}_j$ it is the product of the $\phi_{i,j}'$. Now $\det \circ\,r$ is a $\mathcal{W}_F$-invariant algebraic character of $\widehat{G}$, which on each component comes from some power of the determinant character of ${\rm GL}_{n_j}(\mathbb{C})$. Thus $\det r(\phi_{\pi_i})$ is a character of $\mathcal{W}_F$; expanding the determinant, we find that it is given in an explicit way by the characters $\det \circ\,\phi_{i,j}$ of $\mathcal{W}_{E_j}$, i.e., by the central characters $\omega_{\pi_{i,j}}$. If $\omega_{\pi_1} = \omega_{\pi_2}$, then certainly we have $\omega_{\pi_{1,j}} = \omega_{\pi_{2,j}}$ for each $j$, and consequently $\det r (\phi_{\pi_1}) = \det r (\phi_{\pi_2})$. \end{proof} \subsection{$L$-functions and $\varepsilon$-factors}\label{L3:axioms} We first recall how to obtain $L$-functions and $\varepsilon$-factors from $\gamma$-factors; with the same construction, we extend the definition of $\gamma$-factors from generic to general smooth irreducible representations. We begin by recording the local functional equation for generic $\gamma$-factors, which follows either from uniqueness or from Theorem~\ref{GL2class:thm} and the corresponding property for Galois $\gamma$-factors. \begin{enumerate} \item[${\rm (x)}$] \emph{(Local functional equation).} Let $(F,\psi,{\bf G},\pi,r) \in \mathscr{A}_2(\mathfrak{p})$ be generic. Then \begin{equation*} \gamma(s,\pi,r,\psi) \gamma(1-s,\tilde{\pi},r,\overline{\psi}) = 1. \end{equation*} \end{enumerate} We say that $(F,\psi,{\bf G},\pi,r) \in \mathscr{A}_2(\mathfrak{p})$ is tempered if $\pi$ is a tempered representation of $G$. Observe that for $\bf G$ of ${\rm GL}_2$-kind, a tempered representation is also generic. 
The following property is the tempered $L$-function conjecture, originally stated in \cite{Sh1990} and proved for Langlands-Shahidi $L$-functions in \cite{HeOp2013}. Furthermore, we observe that for $\bf G$ of ${\rm GL}_2$-kind (even ${\rm GL}_n$-kind), given a tempered representation $\pi$, its L-parameter $\phi_\pi$ is bounded. If $\tau$ is the generic component of the smooth representation $\pi \circ \iota$ of $L$, then $\phi_{\tau}$ is also bounded. We thus obtain an alternative proof of the tempered $L$-function conjecture for $\bf G$ of ${\rm GL}_2$-kind, which now follows from the Galois side, in view of Theorem~\ref{GL2class:thm}. \begin{enumerate} \item[${\rm (xi)}$] \emph{(Tempered $L$-functions).} For $(F,\psi,{\bf G},\pi,r) \in \mathscr{A}_2(\mathfrak{p})$ tempered, let $P_{\pi}(t)$ be the unique polynomial with $P_{\pi}(0) = 1$ and such that $P_{\pi}(q_F^{-s})$ is the numerator of $\gamma(s,\pi,r,\psi)$. Then \begin{equation*} L(s,\pi,r) = \dfrac{1}{P_{\pi}(q_F^{-s})} \end{equation*} is holomorphic and non-zero for $\Re(s) > 0$. \end{enumerate} \noindent The above two properties lead us to the following well-defined notion of $\varepsilon$-factors. \begin{enumerate} \item[${\rm (xii)}$] \emph{(Tempered $\varepsilon$-factors).} Let $(F,\psi,{\bf G},\pi,r) \in \mathscr{A}_2(\mathfrak{p})$ be tempered. Then \begin{equation*} \varepsilon(s,\pi,r,\psi) = \gamma(s,\pi,r,\psi) \dfrac{L(s,\pi,r)}{L(1-s,\tilde{\pi},r)} \end{equation*} is a monomial in $q_F^{-s}$. \end{enumerate} If we start with a tempered (generic) representation, the $L$-factor depends holomorphically on twists by unramified characters of $G$. This allows us to use the Langlands classification, which we now state, in order to address the general case. \begin{enumerate} \item[${\rm (xiii)}$] \emph{(Langlands classification).} Let $(F,\psi,{\bf G},\pi,r) \in \mathscr{A}_2(\mathfrak{p})$. Let $\bf P$ be a parabolic subgroup of $\bf G$, with Levi subgroup $\bf M$, and such that $\pi$ is the unique quotient of \begin{equation*} {\rm ind}_P^G \rho \otimes \chi, \end{equation*} where $\rho$ is a tempered representation of $M$ and $\chi$ is an unramified character in the Langlands situation. Let $\phi_\pi$ be the $L$-parameter attached to $\pi$ via the local Langlands correspondence, and let $\phi_\tau$ be the $L$-parameter of $\tau = \rho \otimes \chi$. Then \begin{equation*} \phi_\pi: \mathcal{W}_F' \xrightarrow{\phi_\tau} {}^{\rm L}M \longrightarrow {}^{\rm L}G. \end{equation*} \end{enumerate} It is via the Langlands classification that we extend the definition of $\gamma$-factors from generic to general smooth irreducible representations, and that we pass from tempered $L$-functions and $\varepsilon$-factors to the general setting. We observe that, while we do obtain multiplicativity for $\gamma$-factors, we do not have multiplicativity of local $L$-functions and root numbers in general. The construction of $L$-functions and $\varepsilon$-factors from $\gamma$-factors follows the idea of Shahidi \cite{Sh1990}. Now, let $(F,\psi,{\bf G},\pi,r) \in \mathscr{A}_2(\mathfrak{p})$ be tempered (hence generic). A tempered representation $\pi$ corresponds to a bounded $L$-parameter $\phi_\pi$, and its $L$-function is indeed the one obtained from the numerator of the corresponding Galois $\gamma$-factor, as in Property ${\rm (xi)}$. For tempered representations, Galois $\gamma$-factors, $L$-functions and $\varepsilon$-factors behave well under all unramified twists. 
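To illustrate the mechanics of this construction in the simplest situation (an orientation example only, with the standard conventions for Tate's local factors): take $\chi$ an unramified unitary character of $F^\times$ with $\chi(\varpi_F) = \alpha$ and $\psi$ of conductor $\mathcal{O}_F$. Then \begin{equation*} \gamma(s,\chi,\psi) = \dfrac{1 - \alpha\, q_F^{-s}}{1 - \alpha^{-1} q_F^{-(1-s)}}, \end{equation*} so that $P_\chi(q_F^{-s}) = 1 - \alpha\, q_F^{-s}$ and $L(s,\chi) = (1 - \alpha\, q_F^{-s})^{-1}$ is indeed holomorphic and non-zero for $\Re(s)>0$, while $\varepsilon(s,\chi,\psi) = \gamma(s,\chi,\psi)\, L(s,\chi)/L(1-s,\chi^{-1}) = 1$ is a monomial (of degree zero) in $q_F^{-s}$.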
Given a general $(F,\psi,{\bf G},\pi,r) \in \mathscr{A}_2(\mathfrak{p})$, local Galois factors are compatible with the Langlands classification. We thus conclude the proof of Theorem~\ref{GL2class:thm} and obtain the following corollary. \begin{corollary}\label{L3:thm} Let $(F,\psi,{\bf G},\pi,r) \in \mathscr{A}_2(\mathfrak{p})$ and let $\phi_\pi$ be the $L$-parameter such that $\pi \leftrightarrow \phi_\pi$ are related via the local Langlands correspondence. Then \begin{align*} \gamma(s,\pi,r,\psi) &= \gamma(s,r \circ \phi_\pi,\psi), \\ L(s,\pi,r) &= L(s,r \circ \phi_\pi), \\ \varepsilon(s,\pi,r,\psi) &= \varepsilon(s,r \circ \phi_\pi,\psi). \end{align*} \end{corollary} \section{Asai cube representation}\label{AsaiL} In this section we show in detail that the Asai cube factors can indeed be obtained as LS-factors for a group of ${\rm GL}_2$-kind; applying Theorem~\ref{GL2class:thm} then gives \eqref{maineq:Asai}. Let $F$ be a local field or a global one; we fix a separable algebraic closure $\overline{F}$ of $F$ and let $\mathcal{W}_F$ be the Weil group of $\overline{F}/F$. We are given a cubic separable extension $E$ of $F$ in $\overline{F}$, with Galois closure $\widetilde{E}$, and we consider the group ${\bf G} = {\rm Res}_{E/F}{\rm GL}_2$. The $L$-group of $\bf G$ is the semi-direct product ${}^{\rm L}G = {\rm GL}_2(\mathbb{C})^J \rtimes \mathcal{W}_F$, where $J = {\rm Hom}_F(E,\bar{F})$ and $\mathcal{W}_F$ acts on ${\rm GL}_2(\mathbb{C})^J$ via its natural action on $J$. The obvious $8$-dimensional representation of ${\rm GL}_2(\mathbb{C})^J$ on $\otimes_{j \in J} \mathbb{C}^2$ extends to a representation ${}^\otimes{\rm I}$ of ${}^{\rm L}G$, where again $\mathcal{W}_F$ acts on $\otimes_{j \in J} \mathbb{C}^2$ by permuting the factors $\mathbb{C}^2$ via its natural action on $J$. Since $E$ is a cubic extension of $F$ in $\bar{F}$, we can consider the Weil group $\mathcal{W}_E$ of $\bar{F}$ over $E$ as a subgroup of index $3$ of $\mathcal{W}_F$, and ${}^\otimes{\rm I}$ is obtained by tensor induction from the representation of ${\rm GL}_2(\mathbb{C})^J \rtimes \mathcal{W}_E$ on $\mathbb{C}^2$, where ${\rm GL}_2(\mathbb{C})^J$ acts via its component at the embedding given by the inclusion of $E$ in $\bar{F}$, and $\mathcal{W}_E$ acts trivially. We call ${}^\otimes{\rm I}$ the \emph{Asai cube representation} of ${}^{\rm L}G$. \subsection{} All our reductive groups over $F$ are quasi-split and pinned, i.e., equipped with a Borel subgroup and a Levi subgroup (in fact, a torus) of that Borel subgroup, both defined over $F$, and a $\mathcal{W}_F$-equivariant choice of isomorphisms of the root subgroups corresponding to the simple roots (over $\overline{F}$) with the additive group ${\bf G}_a$ \cite{Bo1979}. For ${\rm GL}_2$, we take the standard pinning and $\bf G$ is equipped with the induced one [\emph{loc.\,cit.}]. To produce the $\gamma$-factor we need a (pinned) quasi-split reductive group $\bf H$ over $F$, which we take semisimple simply connected of type $D_4$, with the triality action determined by $E$ (see below). We consider the parabolic subgroup $\bf P$ of $\bf H$ (containing the fixed Borel subgroup) obtained by omitting the central root of the Dynkin diagram, and let $\bf L$ be its Levi subgroup containing the fixed maximal torus. We shall produce a morphism of pinned reductive groups $\iota : {\bf L} \rightarrow {\bf G}$ defined over $F$ and inducing a central isogeny on the derived subgroups. The pinning of $\bf H$ gives a based root datum $\mathcal{R}_H$ with an action of $\mathcal{W}_F$ \cite{Bo1979} factoring through the Galois group of $\widetilde{E}$ over $F$. 
Since $\mathcal{R}_H$ with its action of $\mathcal{W}_F$ determines $\bf H$ up to unique isomorphism, we shall work only with $\mathcal{R}_H$ (or rather its dual root datum $\mathcal{R}_H^\vee$). Similarly $\iota$ induces a $\mathcal{W}_F$-equivariant morphism of based root data $\iota_\mathcal{R} : \mathcal{R}_L \rightarrow \mathcal{R}_G$, by which $\iota$ is uniquely determined, so again we only need to describe $\iota_\mathcal{R}$. \subsection{} On the dual side, we choose a complex (pinned) reductive group $\widehat{\bf H}$, with based root datum the dual $\mathcal{R}_H^\vee$ of $\mathcal{R}_H$; the natural action of $\mathcal{W}_F$ on $\mathcal{R}_H^\vee$ produces an L-group ${}^{\rm L}H = \widehat{H} \ltimes \mathcal{W}_F$. We let $\widehat{\bf P}$ be the parabolic subgroup of $\widehat{\bf H}$ corresponding to $\bf P$ (and containing the underlying Borel subgroup). We let $\widehat{\bf L}$ be the Levi subgroup of $\widehat{\bf P}$ containing the underlying maximal torus $\widehat{\bf T}$, and write $\widehat{\bf N}$ for the unipotent radical of $\widehat{\bf P}$. Then $\widehat{\bf L}$ has based root datum $\mathcal{R}_L^\vee$ and the L-group ${}^{\rm L}L = \widehat{L} \ltimes \mathcal{W}_F$ is a subgroup of ${}^{\rm L}H$; it acts naturally on the Lie algebra ${}^{\rm L}\mathfrak{n}$ of $\widehat{\bf N}$, and it is that action that we use to construct our $\gamma$-factors (see \S~\ref{adjoint} below). \subsection{} We now specify $\mathcal{R}_H$ explicitly, or rather the dual root datum $(X,\Phi,X^\vee,\Phi^\vee)$ that we work with: here $X$ is the group of characters of $\widehat{\bf T}$ and $\Phi$ is the set of roots of $\widehat{\bf T}$ in $\widehat{\bf H}$, while $X^\vee$ is the group of cocharacters of $\widehat{\bf T}$ and $\Phi^\vee$ is the set of coroots, equipped with a bijection $\alpha \mapsto \alpha^\vee$ of $\Phi$ onto $\Phi^\vee$. Finally, the group $\mathcal{W}_F$ acts linearly on $X$ (via a finite quotient), and that action preserves $\Phi$; the dual action on $X^\vee$ preserves $\Phi^\vee$, compatibly with the bijection $\alpha \mapsto \alpha^\vee$. Our group ${}^{\rm L}H$ has type $D_4$, and to describe its root datum we use the notation of Bourbaki \cite{Bou}. However, we prefer to separate the roles of $X$ and $X^\vee$ (and not identify them via some Killing form), so that we let $X$ be the set of vectors in $V = \mathbb{R}^4$ with integer coordinates adding to an even number; we write $(e_1, \ldots, e_4)$ for the canonical basis of $V$, and choose $\alpha_1 = e_1 - e_2$, $\alpha_2 = e_2 - e_3$, $\alpha_3 = e_3 - e_4$ and $\alpha_4 = e_3 + e_4$ as a basis of $\Phi$. The vector space $V^\vee$ dual to $V$ has the basis $(e_1^\vee, \ldots, e_4^\vee)$ dual to $(e_1, \ldots, e_4)$; the simple coroots are $\alpha^\vee_1 = e^\vee_1 - e^\vee_2$, $\alpha^\vee_2 = e^\vee_2 - e^\vee_3$, $\alpha^\vee_3 = e^\vee_3 - e^\vee_4$ and $\alpha^\vee_4 = e^\vee_3 + e^\vee_4$; the lattice $X^\vee$ in $V^\vee$ is generated by $\frac{1}{2}(e_1^\vee + e_2^\vee + e_3^\vee + e_4^\vee)$ and $e_2^\vee$, $e_3^\vee$, $e_4^\vee$. Writing $\mathfrak{S}$ for the group of permutations of $\left\{ 1,3,4 \right\}$, we have a natural action of $\mathfrak{S}$ on $V$ preserving $X$, fixing $\alpha_2$ and permuting $\alpha_1$, $\alpha_3$, $\alpha_4$ according to the indices. 
\begin{center} \begin{tikzpicture} \draw[circle] (180:1) node {$\bullet$}; \draw[circle] (180:1) node[left] {$\alpha_1$}; \draw[circle] (195:1) arc (195:285:1); \draw[circle] (300:1) node {$\bullet$}; \draw[circle] (300:1.35) node {$\alpha_4$}; \draw[circle] (315:1) arc (315:405:1); \draw[circle] (60:1) node {$\bullet$}; \draw[circle] (60:1.35) node {$\alpha_3$}; \draw[circle] (75:1) arc (75:165:1); \draw[circle] (0:0) node {$\bullet$} node[right] {$\alpha_2$}; \draw[circle] (60:.125) -- (60:.875); \draw[circle] (180:.125) -- (180:.875); \draw[circle] (300:.125) -- (300:.875); \end{tikzpicture} \end{center} \noindent Any group homomorphism $\mathcal{W}_F \rightarrow \mathfrak{S}$ then gives a based root datum with an action of $\mathcal{W}_F$, determining a pinned reductive group $\bf H$ over $F$. If $E$ is our fixed cubic extension of $F$ in $\bar{F}$, any identification of $J = {\rm Hom}_F(E,\bar{F})$ with $\left\{ 1, 3, 4 \right\}$ gives a homomorphism $\mathcal{W}_F \rightarrow \mathfrak{S}$ producing the group ${}^{\rm L}H$ we seek. \begin{remark}\label{GFasai} \emph{If we start with a global field $k$ and a cubic separable extension $l$ of $k$, we are producing groups ${\bf H}/k$ and ${}^{\rm L}H_k$. If $v$ is a place of $k$, the root datum for the $L$-group ${}^{\rm L}H_{k_v}$ of ${\bf H} \otimes_{k} k_v$ is obtained from the root datum of ${}^{\rm L}H_k$ by composing the action of $\mathcal{W}_k$ with the homomorphism $\mathcal{W}_{k_v} \rightarrow \mathcal{W}_k$ coming from the completion (such a homomorphism depends on an isomorphism of $\bar{k}$ with the algebraic closure of $k$ in $\bar{k}_v$, but changing it changes ${}^{\rm L}H_{k_v}$ to an isomorphic group). Note that even if the homomorphism $\mathcal{W}_k \rightarrow \mathfrak{S}$ is surjective, the local homomorphism $\mathcal{W}_{k_v} \rightarrow \mathfrak{S}$ might not be. For example, at a split place $v$ of a global cubic extension $l/k$ we have $l_v \simeq k_v \times k_v \times k_v$; the local homomorphism $\mathcal{W}_{k_v} \rightarrow \mathfrak{S}$ is trivial. This explains why we cannot restrict ourselves to the cases where $E/F$ is always a cubic extension, and why it is better to deal with a general situation of ${\rm GL}_2$-kind.} \end{remark} \subsection{Adjoint representation}\label{adjoint} The roots of $\widehat{\bf T}$ in $\widehat{\bf L}$ are the roots in $\Phi$ which are linear combinations of $\alpha_1$, $\alpha_3$, $\alpha_4$, whereas the roots of $\widehat{\bf T}$ in ${}^{\rm L}\mathfrak{n}$ are the (positive) roots in $\Phi$ where $\alpha_2$ appears with a positive coefficient. The adjoint representation of $\widehat{\bf L}$ on ${}^{\rm L}\mathfrak{n}$ has two irreducible components $r_i$, $i = 1$, $2$, and the corresponding roots of $\widehat{\bf T}$ are the roots in $\Phi$ where $\alpha_2$ appears with coefficient $i$. We use the LS-representation of ${}^{\rm L}G$ coming from $r_1$ via a morphism ${}^{\rm L}\iota : {}^{\rm L}G \rightarrow {}^{\rm L}L$ that we now construct; we write $r_\mathcal{A}$ for the resulting representation. We now relate ${}^{\rm L}L$ and ${}^{\rm L}G$, and $r_\mathcal{A}$ with ${}^\otimes{\rm I}$. With the chosen identification of $J$ with $\left\{ 1, 3, 4 \right\}$, ${}^{\rm L}G$ becomes ${\rm GL}_2(\mathbb{C})^{\left\{ 1,3,4 \right\}} \rtimes \mathcal{W}_F$; let us describe the corresponding root datum and relate it to the root datum for ${}^{\rm L}L$. 
For a root datum $(Y,\Psi,Y^\vee,\Psi^\vee)$ for ${}^{\rm L}G$, we can take \begin{equation*} Y = Y_1 \oplus Y_3 \oplus Y_4 \text{ with } Y_i = \mathbb{Z} e_i \oplus \mathbb{Z} f_i, \end{equation*} with $\alpha_i = e_i - f_i$ for $i = 1, 3, 4$ as simple roots; similarly \begin{equation*} Y^\vee = Y^\vee_1 \oplus Y^\vee_3 \oplus Y^\vee_4 \text{ with } Y^\vee_i = \mathbb{Z} e^\vee_i \oplus \mathbb{Z} f^\vee_i, \end{equation*} where $\alpha^\vee_i = e^\vee_i - f^\vee_i$ for $i = 1,3,4$ -- the duality is the obvious one, and $\mathcal{W}_F$ acts via its action on $\left\{ 1,3,4 \right\}$. Consider the quotient ${}^{\rm L}\overline{G}$ of ${}^{\rm L}G$ by the central subgroup made out of central elements $(x_1,x_3,x_4)$ in ${\rm GL}_2(\mathbb{C})^{\left\{ 1,3,4 \right\}}$ such that $x_1 x_3 x_4 = 1$; the corresponding root datum is $(Z,\Psi,\bar{Z}^\vee,\overline{\Psi}^\vee)$, where $Z$ is the sublattice of $Y$ of elements $a_1e_1 + b_1f_1 + a_3e_3 + b_3f_3 + a_4e_4 + b_4f_4$ such that $a_1 + b_1 = a_3 + b_3 = a_4 + b_4$; then $\bar{Z}^\vee$ is the corresponding quotient of $Y^\vee$ and $\overline{\Psi}^\vee$ is the image of $\Psi^\vee$ in $\bar{Z}^\vee$. The action of $\mathcal{W}_F$ is, again, obtained via the action on $\left\{ 1,3,4 \right\}$. One verifies immediately that the root datum $(Z,\Psi,\bar{Z}^\vee,\overline{\Psi}^\vee)$ is isomorphic to the one for ${}^{\rm L}L$ by sending $\alpha_i$ to $\alpha_i$, for $i = 1$, $3$, $4$, and $f_1 + f_3 + f_4$ to $\alpha_2$, so that dually $\bar{\alpha}_i^\vee$ in $\bar{Z}^\vee$ is sent to $\alpha_i^\vee$, for $i =1$, $3$, $4$, whereas the image of $e_1^\vee + f_1^\vee$ in $\bar{Z}^\vee$ (which is the same as the image of $e_3^\vee + f_3^\vee$ or $e_4^\vee + f_4^\vee$), is sent to $\alpha_1^\vee + 2 \alpha_2^\vee + \alpha_3^\vee + \alpha_4^\vee$ (so that $e_1^\vee$ is sent to $\alpha_1^\vee + \alpha_2^\vee + \frac{1}{2}(\alpha_3^\vee + \alpha_4^\vee)$). It follows that we can choose an isomorphism $\varphi$ of ${}^{\rm L}\overline{G}$ onto ${}^{\rm L}L$, compatible with the isomorphism of root data just described. Since the representation $r_1$ of $\widehat{\bf L}$ has weights $\alpha_2$, $\alpha_1 + \alpha_2$, $\alpha_2 + \alpha_3$, $\alpha_2 + \alpha_4$, $\alpha_1 + \alpha_2 + \alpha_3$, $\alpha_1 + \alpha_2 + \alpha_4$, $\alpha_2 + \alpha_3 + \alpha_4$, $\alpha_1 + \alpha_2 + \alpha_3 + \alpha_4$, we see that through $\widehat{\bf G} \rightarrow \overline{\widehat{\bf G}} \rightarrow \widehat{\bf L}$, $r_1$ gives rise to the tensor product representation of $\widehat{\bf G} = {\rm GL}_2(\mathbb{C})^{\left\{ 1,3,4 \right\}}$, $(g_1, g_3, g_4) \mapsto g_1 \otimes g_3 \otimes g_4$. That identification even extends to ${}^{\rm L}G$ and its action via $r_\mathcal{A}$: indeed in the representation of ${}^{\rm L}L$ on ${}^{\rm L}\mathfrak{n}$ we can choose bases for the root subspaces so that the action of $\mathfrak{S}$ (hence $\mathcal{W}_F$) on these vectors is given by its action on the roots via the permutation of $\left\{ 1,3,4 \right\}$. It is then clear that via ${}^{\rm L}G \rightarrow {}^{\rm L}L$, $r_1$ does indeed give ${}^\otimes{\rm I}$. \subsection{Remark} It appears to be impossible to construct a quasi-split group ${\bf H}'$ over $F$, with a Levi subgroup ${\bf L}'$ isomorphic to $\bf G$ and giving rise to the representation $r$ by the LS method; this is where the extra datum of $\iota: {\bf L} \rightarrow {\bf G}$ is important. We study this issue in detail in \cite{HeLoFuture}.
\section{Introduction} The distribution within galaxies of the atomic elements created via stellar evolution is a key observable affected by the various processes associated with galaxy evolution. In principle, the physical mechanisms associated with specific processes affecting galaxies can be deciphered from a spatially-resolved characterization of the distribution of atomic elements heavier than Helium (located either in the gas phase or locked in subsequent generations of stars), provided that the spatial and temporal enrichment of the interstellar medium (ISM) in the system can be constrained. In practice, the co-existence of multiple evolutionary mechanisms effective on different spatial and temporal scales significantly complicates such an analysis for any given system. Evolutionary mechanisms include galactic gaseous outflows, gaseous inflows, stellar and gaseous radial migration (e.g. through the presence of a bar), gravitational interactions, mergers, accretion events and other forms of perturbations, including strangulation, harassment, and stripping. In recent years, integral field spectrographs (IFSs) have largely supplanted long-slit spectrographs in studies designed to characterize the abundance distribution of heavy elements in galaxies. Among other benefits, the ability of IFS to measure abundances throughout the full two-dimensional extent of a galaxy (or a large part thereof) and detect azimuthal and radial trends has often been praised. In practice, the relatively small field-of-views (FoVs) and/or large size (on-sky) of the spatial pixels (a.k.a spaxels) of many IFSs have so far restricted the feasibility of performing such combined analysis of both sub-kpc trends with larger kpc-scales variations in systems located beyond a few Mpc. Instead, gaseous abundance distributions are often characterized via the slope of the azimuth-averaged gradient; an approach usually driven by the lack of spatial resolution and/or signal-to-noise (S/N), but that also allows for a more straight-forward comparison between galaxies at higher redshift, modulo the issues inherent to comparing oxygen abundances derived using different methodologies \citep[e.g.][]{Kewley2008, Lopez-Sanchez2012}. The advent of the MUSE (Multi-Unit Spectroscopic Explorer) IFS has opened a new observational window as the combination of a relatively large FoV (1$\times$1\,square arcmin) and small spaxels (0.2$\times$0.2\,square arcsec) coupled to the excellent seeing of Cerro Paranal is now allowing to map galaxies out to 2-3 effective radius (R$_{e}$) and (simultaneously) down to sub-kpc scales out to distance of $\sim$100\,Mpc. MUSE is effectively pushing outwards our ability to study galaxies as complex systems down to the crucial sub-kpc scales (the scale of individual H\,{\smaller II} region complexes): a feat previously restricted to galaxies closer-in via IFS with larger FoV such as PMAS/PPAK \citep[e.g. via the PINGS and CALIFA surveys;][]{Rosales-Ortega2010,Sanchez2012,Sanchez2012a,Sanchez2014} and VIRUS-P \citep[e.g. via the VENGA survey; ][]{Blanc2013, Kaplan2016}. This push is particularly interesting from a galaxy evolution perspective, as MUSE effectively allows the study of galaxies inside a wider range of environments, including that of compact groups and clusters, with unprecedented spatial resolution. In this article, we present follow-up MUSE observations of the nearly face-on spiral galaxy HCG~91c. 
Initial observations of this galaxy with the Wide Field Spectrograph \citep[WiFeS;][]{Dopita2007,Dopita2010} mounted on the ANU 2.3m telescope \citep{Mathewson2013} at Siding Spring observatory revealed spatially rapid and localized variations of the oxygen abundance in the system associated with at least one star-forming complex. HCG~91c is a member of a compact group of galaxies, which could indicate a possible influence of the environment on the chemical content within the galaxy. However, the WiFeS data was insufficient to distinguish between the possibility that gas of different metallicity had fallen in from another galaxy (or from the Intergalactic Medium), or whether the variation was caused by secular processes. The new MUSE data presented in Sec.~\ref{sec:obs} (thanks to the large FoV and 3$\times$ better spatial resolution of this instrument) now allows us to do this. Our analysis procedure is described in Sec.~\ref{sec:analysis}. We present our results in Sec.~\ref{sec:results} and discuss them in Sec.~\ref{sec:discussion}. Our conclusions are summarized in Sec.~\ref{sec:conclusion}. \section{Observations and data reduction}\label{sec:obs} HCG~91c was observed during the second Science Verification run for MUSE (mounted at the Nasmyth B focus of the Unit 4 -Yepun of the Very Large Telescope on Cerra Paranal), under program ID 60.A-9317(A) (P.I.: Vogt). The observation strategy for this program is described in detail in \cite{Vogt-thesis}, and summarized here for completeness. A total of 12 individual exposures (eleven of 1050\,s and one of 525\,s) on-target separated in 6 observation blocks (OBs) were acquired over 4 distinct nights between 2014, August 20 and 2014, August 25. In each OB, two exposures on-target were surrounding the observation of a dedicated empty sky field located near-by. Each on-target exposure was reduced individually using the \textsc{reflex} \citep{Freudling2013} MUSE workflow (v1.6), including the construction of the individual datacubes. Out of the 12 on-target exposures, 3 were acquired under seeing conditions $>$0.8\,arcsec, measured from the reconstructed individual datacubes using a star in the FoV (and consistent with the reported values of the Slow Guiding System). The other 9 exposures were acquired under seeing conditions $<$ 0.7 arcsec: these 9 exposures were combined together to form the final data cube presented in this article via the dedicated \textsc{reflex} \textsc{muse\_exp\_combine.xml} workflow. The combined datacube thus corresponds to $8\cdot1050+1\cdot525=8925$\,s on-source, and the spatial full-width at half-maximum (FWHM) of stars in the FoV are measured to be 0.6\,arcsec in the V-band. Spectrally, the cube extends from 4750\,\AA\ to 9350\,\AA\ in steps of 1.25\,\AA\footnote{The MUSE spectral resolution increases from $R\cong1700$ to $R\cong3500$ between 4750\,\AA\ and 9350\,\AA.}, with $319\times316$ spaxels. The individual exposures were acquired at four distinct position angles (P.A.; 0$^{\circ}$, 90$^{\circ}$, 180$^{\circ}$, 270$^{\circ}$) and with sub-arcsec spatial offsets to best remove the background-level artefacts associated with the 24 individual integral field units inside MUSE. The reduced datacube was uploaded to the ESO science archive facility following the recommendations of the \textit{Phase 3} stage of observations with ESO facilities\footnote{\url{https://www.eso.org/sci/observing/phase3.html}}. 
A pseudo-RGB image of the final datacube is presented in Figure~\ref{fig:RGB}, where the R, G \& B-bands correspond to the sum of the cube slices in the spectral ranges [7500\,\AA; 9300\,\AA], [6000\,\AA; 7500\,\AA] \& [4800\,\AA; 6000\,\AA], respectively. The spiral structure of the galaxy is readily evident, extending throughout the entire optical disk. Two foreground stars are visible at [22$^{h}$09$^{m}$14.\!\!$^{s}$02; -27$^{\circ}$47$^{\prime}$15.\!\!$^{\prime\prime}$1] and [22$^{h}$9$^{m}$12.\!\!$^{s}$37; -27$^{\circ}$46$^{\prime}$48.\!\!$^{\prime\prime}$4], and provide a visual scale for the spatial resolution of the image. Numerous background galaxies are also visible as redder sources across the FoV. At the distance of HCG~91c \citep[104\,Mpc, see Table~\ref{table:HCG91c} and][]{Vogt2015}, 10\,arcsec correspond to 5\,kpc. \begin{figure} \centerline{\includegraphics[width=\hsize]{./fig1.pdf}} \caption{Pseudo-RGB image of HCG~91c, where the R, G \& B colors correspond to the sum of the reduced MUSE cube slices in the spectral ranges [7500\,\AA; 9300\,\AA], [6000\,\AA; 7500\,\AA] \& [4800\,\AA; 6000\,\AA], respectively. The spatial FWHM of the datacube is $\sim$0.6\,arcsec in the V-band. The star-forming complexes found by \cite{Vogt2015} (then dubbed ``C1'', ``C2'' and ``C3'') to have discrepant oxygen abundances with respect to their immediate surroundings are marked with white circles. Two foreground stars (indicative of the spatial resolution of the data) are marked with red boxes.} \label{fig:RGB} \end{figure} \begin{table} {\smaller \caption{Basic characteristics of HCG~91c.}\label{table:HCG91c} \flushleft{ \begin{tabular}{l p{2.2cm} l} \hline \hline Property & Value & Reference\\ \hline Names & HCG~91c &\\ & ESO 467 - G 013 & \\[0.5ex] R.A. [J2000] & 22$^{h}$09$^{m}$07.7$^{s}$ &\\[0.5ex] Dec. [J2000] & -27$^{\circ}$48$^{\prime}$34$^{\prime\prime}$ & \\[0.5ex] Redshift & 0.024377 & \cite{Vogt2015}\\[0.5ex] Distance & 104 Mpc &\\[0.5ex] Spatial scale & 504 pc arcsec$^{-1}$ &\\[0.5ex] R$_{25}$ & 26.75$\pm$3.25 arcsec & \cite{deVaucouleurs1991}\\[0.5ex] Rotation velocity & 100$\pm$11 km s$^{-1}$ & \cite{Vogt2015}\\ \quad(at radii$>$22~kpc) & \\[0.5ex] Star formation rate & 2.19 M$_{\odot}$ yr$^{-1}$ & \cite{Bitsakis2014}\\ & 2.10$\pm$0.06 M$_{\odot}$ yr$^{-1}$ & \cite{Vogt2015}\\[0.5ex] Stellar mass & 1.86$\times$10$^{10}$ M$_{\odot}$ & \cite{Bitsakis2014} \\[0.5ex] H\,\textsc{\smaller I} mass & 2.3$\times10^{10}$ M$_{\odot}$ & \cite{Borthakur2010}\\ \hline \end{tabular} }} \end{table} \section{Data post-processing}\label{sec:analysis} With $>10^{5}$ spectra in the final MUSE datacube, any manual data processing step becomes extremely costly time-wise. Each second invested for the analysis of a single spectra would immediately require $\sim$28\,hours to manually perform the same task for all the spaxels in the MUSE datacube (one after another). To circumvent this time sink (also associated with the processing of large number of IFS observations from other instruments), several tools have been developed to process IFS data products in an automated fashion: recent examples include \textsc{pycasso} \citep{CidFernandes2013}, \textsc{lzifu} \citep[][]{Ho2016}, and \textsc{pipe3d} \citep[][]{Sanchez2016,Sanchez2016a}. For our analysis, we have developed our own post-processing tool called \textsc{brutus}: a set of \textsc{python} modules designed to automatically process datacubes from integral field spectrographs. 
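To give a concrete (and deliberately simplified) sense of the kind of operation that must be automated for every spaxel, the following \textsc{python} snippet fits a single gaussian to the H$\alpha$ line of one spectrum; it is an illustrative sketch only, and not the actual \textsc{brutus} implementation, which ties several lines together, among other steps.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gauss_cont(lam, amp, mu, sigma, cont):
    # Single gaussian emission line on top of a flat continuum
    return cont + amp * np.exp(-0.5 * ((lam - mu) / sigma) ** 2)

def fit_halpha(lam, flux, z=0.024377):
    # Fit H-alpha (rest wavelength 6562.8 A) in one redshifted spectrum
    mu0 = 6562.8 * (1.0 + z)
    win = (lam > mu0 - 30.0) & (lam < mu0 + 30.0)
    p0 = [flux[win].max() - np.median(flux[win]), mu0, 2.5,
          np.median(flux[win])]
    return curve_fit(gauss_cont, lam[win], flux[win], p0=p0)

# E.g., looping over a (n_lambda, ny, nx) datacube:
# for y in range(ny):
#     for x in range(nx):
#         popt, pcov = fit_halpha(lam, cube[:, y, x])
\end{verbatim}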
\textsc{brutus} is hosted on Github, and is made freely available to the community\footnote{\url{http://fpavogt.github.io/brutus}}. It is designed with a modular structure in mind \citep[inspired by \textsc{pywifes};][]{Childress2014,Childress2014a}, allowing users to choose which processing steps are to be run (or not). A detailed description of the code is outside the scope of this article, but for completeness we list in Appendix~\ref{app:brutus} the specific steps employed to process the datacube of HCG~91c. The emission line flux maps for H$\alpha$ and [O\,\textsc{\smaller III}]$\lambda$5007 constructed using \textsc{brutus} are presented in Figure~\ref{fig:flux-maps}. In this work, we restrict our analysis to spaxels with SNR(H$\alpha$, H$\beta$)$\geq$5 and SNR$\geq$3 for all other lines: a good detection of the first two Hydrogen Balmer lines ensures reliable measurements of the tied velocity and velocity dispersions, hence leading to stable fits for the other lines for SNRs$\geq$3. A significant detection of the first two Hydrogen Balmer lines is also essential to ensure a reliable correction of the extragalactic attenuation on a spaxel-by-spaxel basis. In the present case, the extragalactic attenuation is corrected with \textsc{brutus} via the $H\alpha$ to H$\beta$ line flux ratio and the theoretical model of a turbulent dust screen from \cite{Fischera2005}, with $R_V=3.08$ and $R_V^A=4.3$ which in practice results in a correction function very similar to that of \cite{Calzetti2000} across the MUSE spectral range. \begin{figure} \centerline{\includegraphics[width=\hsize]{./fig2a.pdf}} \centerline{\includegraphics[width=\hsize]{./fig2b.pdf}} \caption{H$\alpha$ (top) and [O\,\textsc{\smaller III}]$ \lambda$5007 flux map of HCG~91c. The emission lines were fitted with tied gaussian components in each spaxel using \textsc{brutus}. For the H$\alpha$ map, all spaxels with SNR$\geq$5 are shown. For the [O\,\textsc{\smaller III}]$ \lambda$5007 map, all spaxels with SNR$\geq$3 are shown.} \label{fig:flux-maps} \end{figure} Whereas a spaxel-based analysis best exploits the high spatial resolution of MUSE observations, the SNR in the strong emission lines (and in particular H$\beta$) is too little for numerous star-forming regions in the outskirts of the galaxy to perform a reliable analysis. Hence, we supplement the spaxel-based approach with an aperture-based one. Namely, we use \textsc{brutus} to detect H\,\textsc{\smaller II} regions in the data cube automatically, by detecting local maxima in the integrated H$\alpha$ flux map (see Fig.~\ref{fig:flux-maps}). Our approach is somewhat reminiscent of that adopted by \cite{Sanchez2012}, but our codes are entirely different in practice (see Appendix~\ref{app:brutus} for details). For HCG~91c, we defined 556 circular apertures associated with individual local maxima in the H$\alpha$ flux map, all of which are presented in Fig.~\ref{fig:apertures}. \begin{figure} \centerline{\includegraphics[width=\hsize]{./{fig3}.pdf}} \caption{H$\alpha$ flux map of HCG~91c overlaid with the 556 apertures associated with local maxima. 
The apertures were first identified automatically by \textsc{brutus}, and subsequently inspected and adjusted manually using a dedicated interactive module inside the code.} \label{fig:apertures} \end{figure} \subsection{Deriving the oxygen abundance and ionization parameter estimates} From the set of strong emission line fluxes derived from both the spaxel-based and aperture-based spectral fitting, we derive estimates of the gas-phase oxygen abundance 12+$\log$(O/H) and ionization parameter $\log$(Q). Here, we rely on the \textsc{pyqz} \textsc{python} module v0.8.1, first introduced (as v0.4) in \cite{Dopita2013}, with the full propagation of observational errors subsequently implemented in v0.6 \citep[see Appendix B in][]{Vogt2015}. The latest embodiment of the code relies on the photo-ionization models from the \textsc{mappings v} code\footnote{\url{https://miocene.anu.edu.au/Mappings/}} (Sutherland et al., in prep.), and is now hosted on a dedicated Github repository\footnote{\url{http://fpavogt.github.io/pyqz}}. The principle and physical basis of the code however remain unchanged with respect to \cite{Dopita2013}: specific line ratio spaces --in which the photo-ionization model grids are \emph{flat, without wraps} and cleanly allow one to disentangle the influence of 12+$\log$(O/H) and $\log$(Q)-- are used to derive estimates of these two parameters associated with a given set of strong line fluxes. For this analysis, we rely on the plane-parallel photo-ionization models with $P_k=5.0$ and $\kappa=\infty$ \citep[i.e. a Maxwell-Boltzmann energy distribution for the electrons, see][]{Nicholls2012,Nicholls2013}. The propagation of observational errors is achieved through the generation of 400 (in this case) random realizations of line fluxes according to the error distribution of the measured lines (assumed to be gaussian), and the subsequent reconstruction of the full probability density function in the 12+$\log$(O/H) \emph{vs.} $\log$(Q) plane from these 400 individual estimates, using Kernel Density Estimate (KDE) techniques. For this work, we adopt the \textsc{mappings} 5.1 models with the local Galactic concordance (LGC) abundance set, in which the \textit{local region} reference abundance corresponds to 12+$\log$(O/H) = 8.76 (Nicholls et al., 2016, MNRAS, submitted). Several of the \textsc{pyqz} diagnostic grids involve the [O\,\textsc{\smaller II}]$\lambda\lambda$3726,3729 emission lines, which at the redshift of HCG~91c do not fall within the MUSE spectral range. Here, we derive our estimates of 12+$\log$(O/H) and $\log$(Q) from the combination of the following two diagnostic grids that do not employ that line: \begin{eqnarray} \log \left( \frac{[\mathrm{N}\,\textsc{\smaller II}]\lambda 6583}{[\mathrm{S}\,\textsc{\smaller II}]\lambda\lambda 6716, 6731}\right) &vs& \log\left( \frac{[\mathrm{O}\,\textsc{\smaller III}]\lambda 5007}{\mathrm{H}\beta} \right),\mathrm{ and} \nonumber \\ \log \left( \frac{[\mathrm{N}\,\textsc{\smaller II}]\lambda 6583}{[\mathrm{S}\,\textsc{\smaller II}]\lambda\lambda 6716, 6731}\right) &vs& \log\left( \frac{[\mathrm{O}\,\textsc{\smaller III}]\lambda 5007}{[\mathrm{S}\,\textsc{\smaller II}]\lambda\lambda 6716,6731} \right). \end{eqnarray} The distribution of each of the 556 apertures in both line ratio planes is shown in Fig.~\ref{fig:pyqz_grids}.
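To make the error propagation scheme described above concrete, the sketch below illustrates (outside of \textsc{pyqz} itself, and with purely hypothetical line fluxes and errors) how 400 gaussian realizations of the measured fluxes translate into a cloud of points in the first of the two line ratio planes, whose density can then be reconstructed with a KDE. In the actual analysis, each realization is further interpolated onto the \textsc{mappings} grids to yield the corresponding 12+$\log$(O/H) and $\log$(Q) estimates.
\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
n_mc = 400   # number of random flux realizations, as in the text

# Hypothetical line fluxes and 1-sigma errors for one aperture (arbitrary units).
fluxes = {'NII6583': 1.20, 'SII6716': 0.55, 'SII6731': 0.40,
          'OIII5007': 0.80, 'Hb': 1.00}
errors = {'NII6583': 0.05, 'SII6716': 0.03, 'SII6731': 0.03,
          'OIII5007': 0.04, 'Hb': 0.05}

def draw(line):
    """Gaussian realizations of one emission line flux."""
    return rng.normal(fluxes[line], errors[line], n_mc)

nii = draw('NII6583')
sii = draw('SII6716') + draw('SII6731')
oiii = draw('OIII5007')
hb = draw('Hb')

# Coordinates of the 400 realizations in the first diagnostic plane.
x = np.log10(nii / sii)      # log [NII]6583 / [SII]6716,6731
y = np.log10(oiii / hb)      # log [OIII]5007 / Hbeta

# Kernel Density Estimate of the resulting probability density.
kde = gaussian_kde(np.vstack([x, y]))
\end{verbatim}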
Both diagnostics are in excellent agreement throughout most of the abundance range --except for the star-forming regions with 12+$\log$(O/H)$\lesssim 8.1$, which tend to be less enriched according to the first diagnostic grid. This mismatch most likely stems from problems within the theoretical models at the low-abundance end, rather than solely from a miscorrection of the extragalactic reddening (which is nonetheless likely to be playing a role as well). Indeed, a reddening correction issue would have likely affected all apertures, which is not seen here. \begin{figure} \centerline{\includegraphics[width=\hsize]{./{fig4}.pdf}} \caption{Emission line ratio diagnostic grids from the \textsc{mappings} 5.1 photo-ionization code, for a plane-parallel geometry with $P_k=5.0$ and $\kappa=\infty$. Each circle corresponds to one simulation with a specific abundance and ionization parameter. Small crosses indicate the location of each of the 556 circular apertures, color-coded as a function of the combined (where suitable) abundance value derived from both diagnostics. Orange crosses mark the apertures for which \textsc{pyqz} could not derive a reliable abundance: either because of a too large discrepancy between the two grids, or because the estimates fall outside both grids.} \label{fig:pyqz_grids} \end{figure} A detailed investigation of this mismatch --complicated by the lack of the [O\,\textsc{\smaller II}]$\lambda\lambda$3726,3729 in the MUSE spectral range-- is outside the scope of this article. It may for example be that our model with $P_k=5.0$ is not suitable for these star-forming regions, and/or this mismatch may simply reflect genuine limitations of the \textsc{mappings v} models at the low abundance end. In this analysis, we choose to also use the apertures for which only one diagnostic grid returns a suitable estimate, noting and stressing that this choice does not affect our conclusions. In particular, we note that the trends discussed in the next Section are present when considering either diagnostic grid separately, or both together. \section{Results}\label{sec:results} The maps of 12+$\log$(O/H) and $\log$(Q) for both the spaxel-based case and the aperture-based case are presented in Figs.~\ref{fig:Z-maps} and \ref{fig:Q-maps}. The spatial resolution of these maps (and the gain provided by MUSE) can be compared with those obtained with the WiFeS integral field spectrograph described in \cite{Vogt2015}: the general features identified with WiFeS remain (in particular the sharp decrease in the oxygen abundance at $\sim$6\,kpc North from the galaxy center), but the significant improvement in the spatial resolution (both in terms of seeing and spaxel size) also allows us to better resolve structures within the disk, as well as to detect and characterize H\,\textsc{\smaller II} regions at larger radii than with WiFeS. \begin{figure} \centerline{\includegraphics[width=\hsize]{./fig5a.pdf}} \centerline{\includegraphics[width=\hsize]{./fig5b.pdf}} \caption{Spaxel-based map of the oxygen abundance (top) and ionization parameter (bottom) in HCG~91c, constructed using the \textsc{brutus} and \textsc{pyqz} codes.
The color bars span the full range of values covered by the \textsc{mappings v} simulations, highlighting the large range of oxygen abundances and narrower range of ionization parameters throughout the galaxy.} \label{fig:Z-maps} \end{figure} \begin{figure} \centerline{\includegraphics[width=\hsize]{./fig6a.pdf}} \centerline{\includegraphics[width=\hsize]{./fig6b.pdf}} \caption{Same as Fig.~\ref{fig:Z-maps}, but for the 556 apertures associated with local maxima in the H$\alpha$ flux map of the galaxy, identified with \textsc{brutus}.} \label{fig:Q-maps} \end{figure} Local enhancements of the ionization parameter are present throughout the disc, and are associated with individual star-forming complexes. The largest values of $\log$(Q) are found in the outskirts of the galaxy, beyond the effective radius ($R_{e}$=5.1\,kpc). In terms of the oxygen abundance, the picture revealed by MUSE is clearly more complex than that discussed by \cite{Vogt2015} from the WiFeS observations of the system. We present in Figs.~\ref{fig:gradient} and \ref{fig:azimuth} the oxygen abundance gradient of HCG~91c observed by MUSE, both global and along specific azimuths. When considering all spaxels and/or apertures at once, the oxygen abundance in HCG~91c displays a linear decline of $-0.082\pm0.001$\,dex\,kpc$^{-1}\equiv -0.418\pm0.005$\,dex\,R$_{e}^{-1}$ from 1\,R$_{e}$ (5.1\,kpc) to 2\,R$_{e}$ (10.2\,kpc), with a flattening inwards of $\sim$0.8\,R$_{e}$ (4\,kpc), and (possibly) outwards of 2.1\,R$_{e}$ (11\,kpc): a trend already detected with WiFeS \citep{Vogt2015}. Modulo a reduced scatter, both the spaxel-based and aperture-based gradients display similar trends, indicating that (as one would expect) the distance to HCG~91c is large enough not to affect a spaxel-based approach through the resolution of the temperature structures of H\,\textsc{\smaller II} regions. The consistency between the spaxel-based and aperture-based approach also indicates that the presence of Diffuse Ionized Gas (DIG) in HCG~91c is not affecting our ability to derive reliable estimates of 12+$\log$(O/H) and $\log$(Q). We note that large areas void of clearly-identified star-forming regions (e.g. the inter-arm region to the South-West of the galaxy center) --and thus possibly dominated by DIG emission-- have too little S/N to be processed reliably by \textsc{pyqz}. \begin{figure*} \centerline{\includegraphics[scale=0.5]{./fig7a.pdf}} \centerline{\includegraphics[scale=0.5]{./fig7b.pdf}} \caption{Global oxygen abundance gradient (top panel) and the gradient for regions within 0.3\,arcsec of the azimuth 79.5$^{\circ}$ East-of-North (bottom panel; i.e. the area between the white \& black dashed lines). Individual spaxels are shown as colored squares with the associated 1-$\sigma$ errors in grey. Measurements for apertures (with associated 1-$\sigma$ errors) are shown in red. Whereas globally the gradient in HCG~91c can be described as linear with a flattening inwards of 4\,kpc and (possibly) outwards of 11\,kpc, the trends are highly non-linear for individual azimuths. Rapid and localized variations, detected in individual spaxels and integrated apertures alike, are present at all radii.
Ellipses marking the deprojected effective radius R$_{e}$ (5.1\,kpc, orange) and the 1.5\,R$_{e}$ radius (dashed-orange) from the galaxy center are shown in all oxygen abundance maps (both spaxel-based and aperture-based).} \label{fig:gradient} \end{figure*} \begin{figure*} \centerline{\includegraphics[scale=0.5]{./fig8a.pdf}} \centerline{\includegraphics[scale=0.5]{./fig8b.pdf}} \caption{Same as Fig.~\ref{fig:gradient} but for azimuths 258.0$^{\circ}$ (top) and 322.5$^{\circ}$ (bottom) East-of-North, respectively. For completeness, the individual oxygen abundance gradients of HCG~91c extracted for all azimuths in steps of 0.5$^{\circ}$ have been stacked into a movie available as supplementary material. \textit{Note to arXiv readers: until publication, the movie will be available at \texttt{http://www.sc.eso.org/$\sim$fvogt/supp\_mat/HCG91c/O\_gradient.mp4}}} \label{fig:azimuth} \end{figure*} A flattening of the oxygen abundance gradient beyond 2\,R$_{e}$ has been identified as a generic feature in CALIFA galaxies \citep{Sanchez2014,Sanchez-Menguiano2016,Zinchenko2016}, with the flattening of the gradient inside $\sim$0.5\,R$_{e}$ identified primarily in galaxies with stellar masses $\log(M_{\star}/M_{\odot})\geq10.5$. The gradient slope derived by \textsc{pyqz} for HCG~91c is somewhat steeper than the universal gradient measured with CALIFA galaxies \citep[$-0.075$\,dex\,$R_{e}^{-1}$ with a scatter of 0.016\,dex\,$R_{e}^{-1}$, see][]{Sanchez-Menguiano2016}. This mismatch is due to the different techniques used to derive the oxygen abundance values (as illustrated in Fig.~\ref{fig:O3N2}): compared to the O3N2 calibration of \cite{Marino2013}, \textsc{pyqz} and the underlying \textsc{mappings v} photo-ionization models return a wider range of abundances in HCG~91c, leading to a steeper gradient. Taking this scaling difference into account, the oxygen abundance gradient in HCG~91c (between 1\,R$_{e}$ and 2\,R$_{e}$) is consistent with the universal gradient reported by \cite{Sanchez-Menguiano2016}. \begin{figure} \centerline{\includegraphics[scale=0.5]{./fig9.pdf}} \caption{Comparison of the oxygen abundance values derived via \textsc{pyqz} v0.8.1 (i.e. using the \textsc{mappings v} photo-ionisation models; this work) and the O3N2 calibration of \cite{Marino2013}, as employed by \cite{Sanchez-Menguiano2016}, for all the aperture spectra in HCG~91c. \textsc{pyqz} estimates span a wider range of oxygen abundances, so that \textsc{pyqz}-derived gradient slopes are steeper than those derived by \cite{Sanchez-Menguiano2016}. Despite a scatter of $\sim$0.1\,dex at the lower abundance end, the correspondence between the two techniques remains linear over the span of oxygen abundances in HCG~91c.} \label{fig:O3N2} \end{figure} Our ability to spatially resolve sub-kpc scales for all azimuths in HCG~91c also reveals additional features of the oxygen abundance distribution throughout the galaxy, beyond the existence of a global gradient. Two distinct types of behavior are present: the first on sub-kpc scales, and the second on kpc scales. We describe them separately in the next Sections. \subsection{The sub-kpc-scale variations of 12+log(O/H)} The global gradient visible in Fig.~\ref{fig:gradient} (top) contains a vertical scatter of $\sim$0.2\,dex in the oxygen abundance present at all radii in HCG~91c. When considering only the aperture-based measurements, the vertical scatter remains of the order of $\sim$0.15\,dex despite intrinsically smaller measurement errors.
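The numbers quoted above (the slope between 1 and 2\,R$_{e}$, and the residual vertical scatter about it) can be extracted from the aperture measurements with a straightforward linear fit. The following sketch shows one way to do so; the input arrays (deprojected radii and \textsc{pyqz} abundances for the 556 apertures) are placeholders for the actual data products.
\begin{verbatim}
import numpy as np

# Placeholder inputs: deprojected galactocentric radius [kpc] and
# 12+log(O/H) for the 556 apertures, e.g. as saved by the analysis scripts.
r_kpc = np.loadtxt('apertures_radius_kpc.txt')
logoh = np.loadtxt('apertures_logoh.txt')

r_e = 5.1                                    # effective radius [kpc]
sel = (r_kpc >= r_e) & (r_kpc <= 2. * r_e)   # fit between 1 and 2 R_e

# Linear gradient and residual (vertical) scatter about it.
slope, intercept = np.polyfit(r_kpc[sel], logoh[sel], 1)
resid = logoh[sel] - (slope * r_kpc[sel] + intercept)

print('gradient : %.3f dex/kpc = %.3f dex/R_e' % (slope, slope * r_e))
print('scatter  : %.3f dex (rms)' % np.std(resid))
\end{verbatim}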
It becomes evident that this vertical scatter is real (rather than related to measurement errors) when inspecting the oxygen abundance gradient as a function of the azimuth, as illustrated by the lower panel of Fig.~\ref{fig:gradient} and the panels of Fig.~\ref{fig:azimuth}. Restricting the azimuthal range of the gradient diagram reveals the presence of spatially localized and coherent variations in the measured oxygen abundances. The variations are of the order of 0.1-0.2\,dex over distances $\leq$1\,kpc, significant at more than 5$\sigma$ in most cases, and detected both in individual spaxels and integrated apertures (both consistent with one another). These rapid variations can be visually identified in the maps of the oxygen abundance (see Fig.~\ref{fig:Z-maps}), the clearest example of which is located 15\,arcsec East of the galaxy center. \subsection{The kpc-scale variations of 12+log(O/H)} An alternative approach to visualizing the gaseous oxygen abundance distribution in HCG~91c is presented in Fig.~\ref{fig:dewrapped}, where we \emph{de-wrap} the oxygen abundance gradient onto the \emph{azimuth-distance} plane. In this projection, all points at a given height are located at the same distance from the galaxy center. This projection facilitates the identification and inspection of spiral structures beyond the effective radius, at the cost of reduced clarity closer to the galaxy center (stretched horizontally across the diagram). \begin{figure*} \centerline{\includegraphics[width=\textwidth]{./fig10.pdf}} \caption{The \emph{de-wrapped} abundance distribution of HCG~91c. Each spaxel in the datacube was reprojected onto the azimuth-distance plane according to its location with respect to the galaxy center. For spaxels associated with a given aperture but without a reliable \emph{individual} measure of 12+$\log$(O/H), the aperture-derived value of 12+$\log$(O/H) is shown instead. Spiral arms can be identified and tracked from 2\,kpc outwards in this de-projected space. The iso-distance ellipses shown in the different panels of Fig.~\ref{fig:gradient} and Fig.~\ref{fig:azimuth} become horizontal lines in this projection. Inclined purple lines mark the boundaries of spiral structures displaying a rapid variation of the oxygen abundance. The star-forming complexes found by \cite{Vogt2015} to have discrepant oxygen abundances with respect to their immediate surroundings are marked with white circles.} \label{fig:dewrapped} \end{figure*} Large kpc-scale coherent trends in the oxygen abundance distribution pattern are present throughout HCG~91c, in addition to the spatially localized variations described previously. These features, the most prominent of which are traced by purple lines in Fig.~\ref{fig:dewrapped}, are located at the edges of the spiral arms of the galaxy, as illustrated in Fig.~\ref{fig:spiral_arms}. In other words, the variations in the gaseous oxygen abundance are sharper and more abrupt when moving across the spiral structure, and more gradual when moving along the spiral arms. The rapid variation of the oxygen abundance measured with WiFeS to the North of the galaxy center \citep{Vogt2015} corresponds to the (then) unresolved \emph{crossing} of a spiral structure at that location, which the MUSE datacube reveals is not an isolated case, but rather the sharpest example of a behavior present at all azimuths. \begin{figure} \centerline{\includegraphics[width=\hsize]{./fig11.pdf}} \caption{White-light image of HCG~91c, reconstructed by collapsing the entire MUSE datacube.
The galaxy center is marked with an orange dot. The projected effective radius $R_{e}$ and $1.5\cdot R_{e}$ ellipses are shown in orange and dashed-orange, respectively. The locations of the coherent, kpc-scale variations of the oxygen abundance traced in Fig.~\ref{fig:dewrapped} are shown with purple curves. These effectively trace the edge of several of the spiral structures of HCG~91c.} \label{fig:spiral_arms} \end{figure} \section{Discussion}\label{sec:discussion} The detection of sub-kpc scale, spatially localized variations of the oxygen abundance in HCG~91c --a spiral galaxy-- is not overly surprising. In near-by grand design spirals for example, \cite{Bresolin2009} report that the intrinsic scatter of the oxygen abundance is of the order of 0.2\,dex throughout the disk. \cite{Croxall2016} measure an intrinsic dispersion in 12+$\log$(O/H) of 0.074\,dex in NGC~5457 \citep[M101, see also][]{Kennicutt1996}. In the same galaxy, \cite{Li2013} report two cases of locally lower and higher oxygen abundances for two H\,\textsc{\smaller II} regions compared to their immediate surroundings. \cite{Rosolowsky2008} detect a scatter of 0.21\,dex in M33 \citep[but see also][who, from a sample of 25 H\,\textsc{\smaller II} regions, only finds a scatter of 0.06\,dex]{Bresolin2011}. \cite{Sanders2012} find localized variations of the oxygen abundance associated with individual star-forming regions of the order of 0.3\,dex in M31, and \cite{Berg2015} report a scatter of 0.165\,dex in NGC~0628 \citep[see also][]{Rosales-Ortega2011}. Although caution must certainly be used when comparing these different studies with one another, the presence of localized variations of the oxygen abundance of star-forming regions with respect to their immediate surroundings within spiral galaxies remains unequivocal. From this perspective, the case of HCG~91c clearly illustrates the importance of characterizing the gaseous oxygen abundance throughout the entire optical discs of galaxies \emph{while} distinguishing individual star-forming complexes within, in order to capture both pc-scale and kpc-scale trends. For long slit studies of near-by systems, the case of HCG~91c underscores the risk of selection biases when deriving local abundance scatter and global trends from a handful of locations within a galaxy's disc. Ideally, such studies should characterize different galactocentric radii (both on and between spiral arms) and azimuths using not one but several H\,\textsc{\smaller II} regions in the immediate vicinity of one another for any given location. The full mapping of the oxygen abundance throughout HCG~91c also reveals large-scale azimuthal variations unambiguously associated with galactic structures (i.e. the spiral arms). Observationally, similar behaviors were already reported for different systems, with varying degrees of certainty. For example, \cite{Sanchez2015} detected (with MUSE) possible azimuthal variations in the oxygen abundance gradient of NGC~6754 associated with the spiral pattern of this galaxy \citep[see also the recent re-analysis of this target by][]{Sanchez-Menguiano2016a}. In NGC~5457, \cite{Croxall2016} identify a mild correlation of the oxygen abundance distribution with the spiral arms, from a sample of $\sim$100 H\,\textsc{\smaller II} regions observed with the Large Binocular Telescope. Such a correlation suggests a picture where ISM enrichment occurs preferentially along the spiral structures, rather than across them.
HCG~91c lies within a compact group, but its very regular H\,\textsc{\smaller I} envelope is a clear indication that the galaxy has not yet interacted strongly with the group environment \citep{Vogt2015}. It thus appears more likely that the larger-scale abundance variations tracing the spiral pattern in HCG~91c are an intrinsic behaviour of the system, rather than a consequence of the environment. In particular, a (set of) self-driven mechanism(s) appears more plausible than the different environment-driven hypotheses discussed in \cite{Vogt2015}. Still, the question remains: to what extent are gas phase abundance variations in HCG~91c (local or global) affected by dynamics, and to what extent do they reflect an enhanced star formation efficiency within the spiral structure (that keeps the metals in place)? Theoretically, \cite{Grand2016} presents evidence for the ability of spiral patterns to transfer comparatively more metal-rich stars from the inner regions of the galaxy to the outer parts along the spiral arms \citep[see also][]{Grand2012}. As these enriched stars end their lives in comparatively less enriched regions of the galaxy, they could contribute to a local enrichment of the ISM at larger radii. Alternatively, spiral arms in HCG~91c might have merely been acting as gravitational sinks, effectively trapping heavy elements while favouring star-formation activity, thus leading to the present distribution of the oxygen abundance in HCG~91c. Differentiating between these scenarios (and others) would require a better understanding of the nature of the spiral structure in HCG~91c \citep[including its stability and coherence over time, see e.g.][]{Dobbs2014,Baba2015}. From that perspective, a combined \emph{gas+stars} analysis of the MUSE observations of HCG~91c bridging gaseous and stellar abundance (and kinematics) appears highly desirable, but is outside the scope of this article. \section{Conclusions}\label{sec:conclusion} Here we have presented MUSE observations of HCG~91c, as a follow-up to observations acquired with the WiFeS integral field spectrograph \citep{Vogt2015}. Using the new \textsc{brutus} tool designed to process the data cubes of integral field spectrographs, we have measured the oxygen abundance and ionization parameter associated with star-forming regions throughout the disc of HCG~91c out to $\sim$2\,R$_{e}$, using both a spaxel-based and an aperture-based approach. We confirm the presence of rapid abundance variations in the galaxy initially detected with WiFeS. These variations can be separated into two distinct types: first, sub-kpc-scale variations associated with individual star-forming regions; and second, kpc-scale variations correlated with the spiral structure of the galaxy (specifically with the boundaries of spiral arms). The kpc-scale variations thus provide observational evidence that ISM enrichment is preferentially occurring along the spiral structure of HCG~91c, and less easily across it. As for the sub-kpc-scale variations of the oxygen abundance in HCG~91c, they are reminiscent of the behavior of star-forming regions observed in near-by grand design spirals. The MUSE observations of HCG~91c confirm the unique ability of this instrument to spatially resolve oxygen abundance gradients, characterize intrinsic scatter and map non-radial variations of the abundance of H\,\textsc{\smaller II} regions in galaxies.
The instrument's unique combination of a large FoV and small spaxel size \citep[soon to be fully exploited with the GALACSI ground-layer adaptive optics system and the Four-Laser Guide Star Facility, see][]{Stuik2006,Calia2014} is effectively pushing the distance (out to $\sim$100\,Mpc) at which abundance maps can both cover the optical discs of galaxies out to $\geq2\text{R}_{e}$ and resolve sub-kpc structures. Such targets form the ideal link between projects targeting extremely near-by systems down to pc scales \citep[e.g.][]{Kreckel2016} and the high-redshift Universe: provided observations are performed under excellent seeing conditions, and with exposure times suitable to detect all strong emission lines with sufficient S/N to characterize H\,\textsc{\smaller II} regions up to 2\,R$_{e}$ and beyond. \begin{acknowledgements} We thank the anonymous referee for a prompt and constructive report. This research has made use of \textsc{brutus}, a Python module to process data cubes from integral field spectrographs hosted at \url{http://fpavogt.github.io/brutus/}. \textsc{brutus} relies on \textsc{statsmodel} \citep{Seabold2010}, \textsc{ppxf} \citep{Cappellari2004}, \textsc{fit\_kinematic\_pa} (as described in Appendix C of \cite{Krajnovic2006}), \textsc{matplotlib} \citep{Hunter2007}, \textsc{astropy}, a community-developed core Python package for Astronomy \citep{AstropyCollaboration2013}, \textsc{photutils}, an affiliated package of \textsc{astropy} for photometry, \textsc{aplpy}, an open-source plotting package for Python hosted at \url{http://aplpy.github.com}, \textsc{montage}, funded by the National Science Foundation under Grant Number ACI-1440620 and previously funded by the National Aeronautics and Space Administration's Earth Science Technology Office, Computation Technologies Project, under Cooperative Agreement Number NCC5-626 between NASA and the California Institute of Technology, and \textsc{mpfit}, a Python script that uses the Levenberg-Marquardt technique \citep{More1978} to solve least-squares problems, based on an original \textsc{fortran} code that is part of the \textsc{minpack}-1 package. This research has also made use of the \textsc{aladin} interactive sky atlas \citep{Bonnarel2000}, of \textsc{saoimage ds9} \citep{Joye2003} developed by the Smithsonian Astrophysical Observatory, of NASA's Astrophysics Data System, and of the NASA/IPAC Extragalactic Database \citep[NED;][]{Helou1991} which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 60.A-9317[A]. This work was co-funded under the Marie Curie Actions of the European Commission (FP7-COFUND). EP acknowledges support from the Spanish MINECO through project AYA2014-57490-P. LVM acknowledges support from AYA2015-65973-C3-1-R and AYA2014-52013-C2-1-R grants (MINECO Spain/FEDER, UE). \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction}\label{Sec:Intro} The problem of reconstruction of spike-trains, and of similar signals, from noisy moment measurements, and the closely related problem of robustly solving the classical Prony system, is a well-known problem in Mathematics and Engineering. It is of major practical importance, and, in cases where the nodes nearly collide, it presents major mathematical difficulties. It is closely related to a spike-train ``super-resolution problem'' (see \cite{Aza.Cas.Gam,Bat1,Can.Fer1,Can.Fer2,Con.Hir,Dem.Ngu,Dem.Nee.Ngu,Don,Duv.Pey,Fer,Hec.Mor.Sol,Lia.Fan,Mor.Can,Moi,Pet.Plo} as a small sample). The aim of the present paper is to investigate the possible amplification of the measurements error $\epsilon$ in the reconstruction process, caused by the fact that some of the nodes of $F$ near-collide. Recently this problem has attracted the attention of many researchers. In particular, in \cite{Aki.Bat.Yom,Bat1,Can.Fer2,Dem.Nee.Ngu,Don} it was shown (in different settings of the problem) that if $s$ spikes of $F$ are near-colliding in an interval of size $h\ll 1$, then a strong ``noise amplification'' occurs: up to a factor of $(\frac{1}{h})^{2s-1}$. Specifically, in \cite{Aki.Bat.Yom} a parametric setting (the same as in the present paper) was considered (see, as a small sample, \cite{Bat.Yom3,Bey.Mon2,Blu.Dra.Vet.Mar.Cou,Pet.Plo,Pet.Pot.Tas,Plo.Wis,Yom} and references therein). In this setting, signals are assumed to be members of a parametric family with a finite number of parameters. The parameters of the signal are then considered as unknowns, while the measurements provide a system of algebraic equations in these unknowns. It was announced in \cite{Aki.Bat.Yom} that the strongest ``noise amplification'' occurs along the algebraic curves $S$ (``Prony curves''), defined in the signal parameter space by the $2s-1$ initial equations of the classical ``Prony system'' (system \ref{eq:Prony.system1} below). However, \cite{Aki.Bat.Yom} provided neither detailed proofs, nor explicit constants in the error bound, nor the explicit description of the curves $S$. In the present paper we consider reconstruction of spike-train signals of an a priori known form $F(x)=\sum_{j=1}^{d}a_{j}\delta\left(x-x_{j}\right)$, from their moments $m_0(F),\ldots,\allowbreak m_{N-1}(F)$, $N\ge 2d$, in the case where two nodes $x_i,x_{i+1}$ near-collide. That is, $x_{i+1}-x_{i}=h\ll 1$. \smallskip In Section \ref{Sec:setting} we introduce the $\epsilon$-error set $E_\epsilon(F)$, consisting of all signals $F'$ for which the moments of $F'$ differ from the moments of $F$ by at most $\epsilon$. The set $E_\epsilon(F)$ presents the distribution of all the possible reconstructed signals $F'$, caused by independent errors, not exceeding $\epsilon$, in each of the moment measurements $m_k(F)$. Thus the geometry of $E_\epsilon(F)$ reflects the patterns of the possible error amplification in the reconstruction process. In this paper we are mostly interested in the lower bounds for the error in nodes reconstruction. We thus consider the projection $E^x_\epsilon(F)$ of the error set $E_\epsilon(F)$ to the nodes space. This set represents the error amplification in nodes reconstruction. In particular, the ``radius'' $\rho^x_\epsilon(F)$ of $E^x_\epsilon(F)$ provides a lower bound on the nodes reconstruction accuracy of any reconstruction algorithm (see a more detailed description of this fact in Section \ref{Sec:setting}).
\smallskip One of our two main results - Theorem \ref{thm:main2} in Section \ref{Sec:Error.amplification.main} - is that {\it for $F$ with two nodes at a distance $h$, and for any $\epsilon$ of order $h^3$ (or larger) we have $\rho^x_\epsilon(F)\ge Ch.$ Consequently, the presence of near-colliding nodes implies a massive amplification of the measurements error in the process of nodes reconstruction - up to $h^{-2}$ times.} \smallskip In order to prove Theorem \ref{thm:main2} we start in this paper the investigation of the geometry of the error sets $E_\epsilon(F)$ and $E^x_\epsilon(F)$ (which, as we believe, is important by itself). First we provide in Section \ref{sec:examples} numerical simulations and visualizations, which suggest that for $\epsilon \sim h^3$ the $\epsilon$-error set $E_\epsilon(F)$ is an ``elongated curvilinear parallelepiped'' of width $\sim h$, stretched up to the size $\sim 1$ along a certain curve $S_F$ (while its projection onto the nodes space, $E^x_\epsilon(F)$, is stretched along the projection $S^x_F$ of $S_F$). These experiments suggest also that as $h\to 0$, the sets $E_\epsilon(F)$ and $E^x_\epsilon(F)$ concentrate closer and closer around the curves $S_F$ (respectively, $S^x_F$). \smallskip Next, we give in Section \ref{Sec:Error.amplification.main} an independent definition of the ``Prony curves'' $S_F$, ``discovered'' in Section \ref{sec:examples}: for each $F$ the Prony curve $S_F,$ passing through $F$ in the signal parameter space, is defined by the requirement that along it the first three moments $m_0,m_1,m_2$ do not change. An explicit parametric description of the curves $S_F$ is given in Section \ref{sec:parm.Prony.curves}. \smallskip Our second main result - Theorem \ref{thm:main1} in Section \ref{Sec:Error.amplification.main} - is that {\it indeed, as suggested by the visualizations in Section \ref{sec:examples}, the set $E_\epsilon(F)$ contains a ``sufficiently long'' part of the Prony curve $S_F$ around $F$}. As a consequence, we obtain Theorem \ref{thm:main2}. Let us stress that all the constants in Theorems \ref{thm:main1} and \ref{thm:main2} are explicit (and reasonably realistic). \smallskip The proofs are given in Section \ref{Sec:proofs}. \smallskip Finally, in Section \ref{sec:compl.nodes}, we compare two approaches to the reconstruction problem for real spike-train signals: from their moments, and from their Fourier samples (which can be interpreted as the moments of an appropriate signal $\tilde F$ with complex nodes). Recently in \cite{Aki.Gol.Yom} a trigonometric reconstruction method for $\tilde F$ was suggested, which uses, as an input, {\it only three complex moments $m_0(\tilde F),m_1(\tilde F),m_2(\tilde F)$}. According to the approach of the present paper, we would expect for the trigonometric method (for $\tilde F$ with two nodes at a distance $h\ll 1$, and for $\epsilon\sim h^3$) a worst case reconstruction error of order $\sqrt \epsilon$, while for the Prony inversion we show it to be of order $\epsilon^{\frac{1}{3}}$. We pose some open questions related to this apparent contradiction. \medskip The authors would like to thank the referee for constructive criticism, as well as for remarks and suggestions, which allowed us to significantly improve the presentation.
\section{Setting of the problem}\label{Sec:setting} Assume that our signal $F(x)$ is a spike-train, that is, a linear combination of $d$ shifted $\delta$-functions: \begin{equation} \label{eq:equation.model.delta} F(x)=\sum_{i=1}^{d}a_{i}\delta\left(x-x_{i}\right), \end{equation} where $a=(a_1,\ldots,a_d) \in {\mathbb R}^d, \ x=(x_1,\ldots,x_d) \in {\mathbb R}^d.$ We assume that the form (\ref{eq:equation.model.delta}) is a priori known, but the specific parameters $(a,x)$ are unknown. Our goal is to reconstruct $(a,x)$ from $N\ge 2d$ moments $m_k(F)=\int_{-\infty}^\infty x^k F(x)dx, \ k=0,\ldots,N-1$, which are known with a possible absolute error of no more than $\epsilon>0$. \smallskip The moments $m_k(F)$ are expressed through the unknown parameters $(a,x)$ as $m_k(F)=\sum_{i=1}^d a_i x_i^k$. Hence our reconstruction problem is equivalent to solving the (possibly over-determined) {\it Prony system} of algebraic equations, with the unknowns $a_i,x_i$: \begin{equation}\label{eq:Prony.system1} \sum_{i=1}^d a_i x_i^k = m_k(F), \ k= 0,1,\ldots,N-1. \end{equation} This system appears in many theoretical and applied problems. There exists a vast literature on Prony and similar systems, in particular, on their robust solution in the presence of noise - see, as a small sample, \cite{Aza.Cas.Gam,Bey.Mon2,Lia.Fan}, \cite{Ode.Bar.Pis}-\cite{Pot.Tas}, and references therein. We present the spike train reconstruction problem in a geometric language of spaces and mappings. Let us denote by ${\cal P}={\cal P}_d$ the parameter space of signals $F$, $$ {\cal P}_d=\{(a,x)=(a_1,\ldots,a_d,x_1,\ldots,x_d)\in {\mathbb R}^{2d}, \ x_1<x_2<\ldots<x_d \}, $$ and by ${\cal M}={\cal M}_{N} \cong {\mathbb R}^{N}$ the moment space, consisting of the $N$-tuples of moments $(m_0,m_1,\ldots,m_{N-1})$. We will identify signals $F$ with their parameters $(a,x)\in {\cal P}.$ \smallskip The Prony mapping $PM=PM_{d,N}:{\cal P}_d\to {\cal M}_{N}$ is given by $$ PM(F)= \mu = (\mu_0,\ldots,\mu_{N-1}) \in {\cal M}, \ \mu_k=m_k(F), \ k=0,\ldots,N-1. $$ Inversion of the Prony mapping is equivalent to reconstruction of a spike-train signal $F$ from its moments (or to solving Prony system (\ref{eq:Prony.system1})). \smallskip The aim of this paper is to investigate the amplification of the measurements error $\epsilon$ in the reconstruction process, in case of two near-colliding nodes. We are interested in effects, caused by the geometric nature of system (\ref{eq:Prony.system1}), independently of the specific method of its inversion. \smallskip The error amplification is reflected by the geometry of the $\epsilon$-error set $E_\epsilon(F)$, which is defined as follows: \begin{definition}\label{def:error.set} The $\epsilon$-error set $E_\epsilon(F)$ consists of all signals $F'\in {\cal P}_d$, for which the moments of $F'$ differ from the moments of $F$ by at most $\epsilon$: $$ E_\epsilon(F) = \{F'\in {\cal P}_d, \ |m_k(F')-m_k(F)|\le \epsilon, \ k=0,\ldots, N-1\}. $$ Equivalently, $E_\epsilon(F)=PM_{d}^{-1}(Q^{N}_\epsilon(F))$, where $Q^{N}_\epsilon(F)\subset {\cal M}_{N}$ is the $N$-dimensional $\epsilon$-cube centered at $PM(F)\in {\cal M}_{N}$. \end{definition} The $\epsilon$-error set $E_\epsilon(F)$ presents the distribution of possible reconstructed signals $F'$, caused by the independent errors, not exceeding $\epsilon$, in each of the moment measurements $m_k(F)$. 
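\smallskip Definition \ref{def:error.set} is straightforward to explore numerically. The following minimal sketch (in \textsc{python}, with an arbitrary example signal) evaluates the Prony map of a two-spike signal and tests whether a perturbed signal belongs to $E_\epsilon(F)$; it is only an illustration of the definition, not a reconstruction algorithm.
\begin{verbatim}
import numpy as np

def prony_map(a, x, N):
    """Moments m_k(F) = sum_i a_i * x_i**k for k = 0, ..., N-1."""
    a, x = np.asarray(a, float), np.asarray(x, float)
    return np.array([np.sum(a * x**k) for k in range(N)])

def in_error_set(F_prime, F, eps, N):
    """Is F' = (a', x') in the eps-error set of F = (a, x)?"""
    return np.all(np.abs(prony_map(*F_prime, N) - prony_map(*F, N)) <= eps)

# Example: two near-colliding nodes at -h and h, and a small perturbation.
h = 0.1
F = ([0.5, 0.5], [-h, h])
F_prime = ([0.55, 0.45], [-0.9 * h, 1.1 * h])
print(in_error_set(F_prime, F, eps=2 * h**3, N=4))
\end{verbatim}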
Yet another convenient description of $E_\epsilon(F)$ is as the set of solutions of the Prony system \begin{equation}\label{eq:Prony.system2} \sum_{i=1}^d a_i x_i^k = m_k(F)+\epsilon_k, \ k= 0,1,\ldots,N-1, \end{equation} with all the possible errors $\epsilon_k$ satisfying $|\epsilon_k|\leq \epsilon, \ k= 0,1,\ldots,N-1.$ Notice that the $\epsilon$-error set $E_\epsilon(F)$ depends on $N$, the number of moments which we use in the reconstruction. Since $N$ is assumed to be fixed, we do not indicate it in the notation. \smallskip In this paper we are mainly interested in the accuracy of the nodes reconstruction, which is determined by the geometry of the projection $E^x_{\epsilon}(F)$ of the set $E_{\epsilon}(F)$ onto the nodes space. Accordingly, we define the worst case error $\rho^x_\epsilon(F)$ in the Prony reconstruction of the nodes of $F$ as follows: \begin{definition}\label{def:wrst.case.error} For $F=(a,x)\in {\cal P}_d$, the worst case error $\rho^x_\epsilon(F)$ in the reconstruction of the nodes of $F$ is defined by \begin{equation}\label{eq:worst.case.error} \rho^x_\epsilon(F)=\sup_{F'=(a',x')\in E_\epsilon(F)} ||x'-x||, \end{equation} where $||\cdot||$ denotes the Euclidean norm in the space of the nodes. \end{definition} In fact, $\frac{1}{2}\rho^x_\epsilon(F)$ bounds from below the worst case error in nodes reconstruction with {\it any} reconstruction algorithm $A$. Indeed, we can informally argue as follows: let $F''=(a'',x'')\in E_\epsilon(F)$ be a signal for which the supremum in (\ref{eq:worst.case.error}) is nearly achieved. Assume that we apply $A$ to both signals $F$ and $F''$, and the (adversarial) noise is zero for $F$ and is equal to the difference of the moments of $F$ and $F''$ in the second case. Thus $A$ obtains as the input in both cases the moments of $F$. Whatever result $\tilde F=(\tilde a, \tilde x)$ the algorithm $A$ produces as an output, at least one of the distances $||x-\tilde x||$ or $||x''-\tilde x||$ will not be smaller than $\frac{1}{2}\rho^x_\epsilon(F)$. \section{Visualization of the error sets $E_\epsilon(F)$}\label{sec:examples} For a given $h, \ 0<h<1,$ consider a signal $F(x)=\frac{1}{2}\delta(x+h)+\frac{1}{2}\delta(x-h) \in {\cal P}_2.$ We put $N=2d=4$. The moments of $F$ are $$ m_0(F)=1, \ m_1(F)=0, \ m_2(F)=h^2, \ m_3(F)=0. $$ For a given $\epsilon>0$ we consider the $\epsilon$-cube $Q^4_\epsilon(F)\subset {\cal M}_{4}$ centered at $(1,0,h^2,0)\in {\cal M}_{4}$, and the $\epsilon$-error set $E_\epsilon(F)=PM_{2,4}^{-1}(Q^4_\epsilon(F)).$ Equivalently, $E_\epsilon(F)$ is defined in ${\cal P}_2$ by the inequalities $$ |m_0(F')-1| \le \epsilon, \ |m_1(F')| \le \epsilon, \ |m_2(F')-h^2| \le \epsilon, \ |m_3(F')| \le \epsilon. $$ \begin{figure} \centering \includegraphics[scale=0.80]{h10.eps} \caption{The error set $E_{\epsilon}(F)$ and its projection $E_{\epsilon}^x(F)$ for $h=0.1$, $\epsilon=2h^3=0.002$ and $F(x)=\frac{1}{2}\delta(x-0.1)+\frac{1}{2}\delta(x+0.1)$.} \label{fig:figure1} \hspace{0.5cm} \centering \includegraphics[scale=0.80]{h5.eps} \caption{The error set $E_{\epsilon}(F)$ and its projection $E_{\epsilon}^x(F)$ for $h=0.05$, $\epsilon=2h^3=0.00025$ and $F(x)=\frac{1}{2}\delta(x-0.05)+\frac{1}{2}\delta(x+0.05)$.} \label{fig:figure2} \end{figure} The $\epsilon$-error set $E_\epsilon(F)$ is a four-dimensional subset of ${\cal P}_2\cong {\mathbb R}^4,$ and its direct visualization is problematic.
Instead, Figures \ref{fig:figure1} and \ref{fig:figure2} show the projection of $E_\epsilon(F)$ onto the three-dimensional coordinate subspace of ${\cal P}_2$, spanned by the two node coordinates $x_1,x_2$ and the first amplitude $a_1$, as well as its further projection $E^x_\epsilon(F)$ onto the nodes plane $x_1,x_2$. Notice that, by the first Prony equation $a_1+a_2=m_0(F)+\epsilon_0$ in (\ref{eq:Prony.system2}), we have $a_2=m_0(F)-a_1+\epsilon_0,$ with $|\epsilon_0|\le \epsilon.$ Thus the projections of $E_\epsilon(F)$ shown in Figures \ref{fig:figure1} and \ref{fig:figure2} give a rather accurate (up to $\epsilon$) representation of the true error set. \medskip Let us stress a natural scaling in our problem, reflected in Figures \ref{fig:figure1} and \ref{fig:figure2}: {\it the scale in nodes is of order $h$, while the scale in the amplitudes is of order $1$}. \medskip Figures \ref{fig:figure1} and \ref{fig:figure2} suggest that $E_\epsilon(F)$ is an ``elongated curvilinear parallelepiped'', with the sizes of its two largest edges of orders $1$ and $h,$ respectively. (The third and the fourth edges, of orders $h^2$ and $h^3,$ respectively, are not visible). $E_\epsilon(F)$ is stretched up to the size $\sim 1$ along a certain curve $S$, depicted in the pictures. Correspondingly, $E^x_\epsilon(F)$ is stretched up to the size $\sim h$ along the projection curve $S^x$. A comparison between Figures \ref{fig:figure1} and \ref{fig:figure2} also suggests that as $h$ (and $\epsilon \sim h^3$) decrease, the error set concentrates closer and closer along the curve $S$. (Compare the conjectured general description of $E_\epsilon(F)$ at the end of Section \ref{Sec:Error.amplification.main}.) \smallskip Below we analyse the structure of the error set $E_\epsilon(F)$ in some detail, and show that {\it $S$ is an algebraic curve, which we call the ``Prony curve''}. We show that for $F$ as above, the projection $S^x$ of $S$ onto the node subspace is the hyperbola $x_1x_2=-h^2$, while $a_1$ is expressed on this curve through $x_1,x_2$ as $a_1=\frac{x_2}{x_2-x_1}.$ In Section \ref{sec:parm.Prony.curves} we study such ``Prony curves'' in detail. \smallskip Numerically, the figures above were constructed via the following procedure: we construct a four-dimensional regular net $Z\subset Q^4_\epsilon(F)\subset {\cal M}_{4},$ with a sufficiently small step. For each point $z\in Z,$ its Prony preimage $w=PM^{-1}(z)\in {\cal P}_2$ is calculated, and the projection of $w$ onto the space $(a_1,x_1,x_2)$ is plotted. Some other visualisation results can be found in \cite{Aki}. \smallskip In what follows we assume that {\it inversion of the Prony map, or the solving of (\ref{eq:Prony.system2}) (when possible), is accurate, and that the reconstruction error is caused only by the measurement errors $\epsilon_k$.} \section{Prony curves and error amplification: main results}\label{Sec:Error.amplification.main} We will consider signals $F(x)$ of the form (\ref{eq:equation.model.delta}), with two near-colliding nodes $x_i$ and $x_{i+1}, \ 1 \leq i\leq d-1.$ In the present paper we study the geometry of the reconstruction error, allowing perturbations only of the cluster nodes $x_i,x_{i+1},$ and of their amplitudes $a_i,a_{i+1}$. Therefore the positions and the amplitudes of the other nodes are not relevant for our results.
However, in order to avoid possible collisions of the cluster nodes with their neighbors in the process of deformation, we will always assume that for $x_{i+1}-x_i=h>0$, the distances to the neighboring nodes from the left and from the right satisfy $x_i-x_{i-1}\geq 3h, \ x_{i+2}-x_{i+1}\geq 3h.$ We do not assume formally that $h \ll 1,$ but this is the case where the geometric patterns we describe become apparent. \smallskip For each signal $F$ and index $i$ the Prony curve $S=S_{F,i}$ passing through $F$ is obtained by varying only the nodes and amplitudes $(a_i,x_i),(a_{i+1},x_{i+1})$, while preserving the first three moments. More accurately, we have the following definition: \begin{definition}\label{def:Prony.curves} Let $F(x)=\sum_{j=1}^{d}a_{j}\delta\left(x-x_{j}\right)\in {\cal P}_d$ and let $i, \ 1 \leq i\leq d-1,$ be fixed. The Prony curve $S=S_{F,i}\subset {\cal P}_d$ consists of all the signals $$ F'(x)= \sum_{j=1}^{d}a'_{j} \delta(x-x'_{j})\in {\cal P}_d $$ for which $a'_j=a_j, x'_j=x_j, \ j\ne i,i+1,$ and $m_k(F')=m_k(F)$ for $k=0,1,2.$ \end{definition} By definition, we always have $F\in S_{F,i}$. In this paper we concentrate on an ``$h$-local'' part $S_{F,i}(h)$ around $F$ of the Prony curve $S_{F,i}$, consisting of all $$ F'(x)= \sum_{j=1}^{d}a'_{j} \delta(x-x'_{j})\in S_{F,i}, $$ for which the nodes $(x'_i,x'_{i+1})$ belong to the disk $D \subset \mathbb{R}^2$ of radius $\frac{1}{2}h$ centered at $(x_i,x_{i+1}).$ In particular, node collision cannot happen on $S_{F,i}(h)$ (see Lemma \ref{lem:bdd.dist11} below). Compare also Figure \ref{fig:figure3}. \smallskip Notice, however, that the Prony curves $S_{F,i}$ are {\it global algebraic curves (possibly singular)}. Their explicit global parametrization is described in Section \ref{sec:parm.Prony.curves} below, and one can show that in some cases they can pass through the node collision points (with amplitudes tending to infinity). We believe that the Prony curves and their multi-dimensional generalizations play an important role in the understanding of multi-node collision singularities. \medskip The following definition specifies the type of signals we will work with: \begin{definition}\label{def:i.h.M.clusters} A signal $F=\sum_{j=1}^{d}a_{j}\delta\left(x-x_{j}\right)\in {\cal P}_d$ is said to form an $(i,h,M)$-cluster, with given $h, \ 0< h <1, \ M>0,$ and $i, \ 1 \leq i\leq d-1,$ if $x_{i+1}-x_i=h,$ and $|a_i|,|a_{i+1}|\leq M$. \end{definition} For $F$ forming an $(i,h,M)$-cluster, let $\kappa=\frac{x_{i+1}+x_i}{2}$ be the center of the interval $[x_i,x_{i+1}]$. We put $C(F)=18M(1+|\kappa|)^N$ (notice that $C(F)$ depends on $\kappa$ and $M$, but not on $h$). One of our main results is the following: \begin{theo}\label{thm:main1} Let $F\in {\cal P}_d$ form an $(i,h,M)$-cluster. Then for each $\epsilon\ge C(F)h^3$ the $\epsilon$-error set $E_\epsilon(F)$ contains the local Prony curve $S_{F,i}(h).$ \end{theo} \begin{figure}[H] \centering \includegraphics[width=\textwidth]{h10t.eps} \caption{The error set $E_{\epsilon}(F)$ and its projection $E_{\epsilon}^x(F)$ for $h=0.1$, $\epsilon=2h^3=0.002$ and $F(x)=\frac{1}{2}\delta(x-0.1)+\frac{1}{2}\delta(x+0.1)$. The circle depicts the boundary of the disk $D$.
In bold is the local Prony curve $S_{F,i}(h)$ and its projection.} \label{fig:figure3} \end{figure} Figure \ref{fig:figure3} suggests that in fact the $\epsilon$-error set $E_\epsilon(F)$ ``concentrates'' around the local Prony curve $S_{F,i}(h).$ Already the fact that this curve is inside $E_\epsilon(F)$ (provided by Theorem \ref{thm:main1}) implies an important conclusion about the worst case reconstruction error. Indeed, as we show below, the projection $S^x_{F,i}(h)$ of the local Prony curve $S_{F,i}(h)$ onto the nodes space has a length of order $h$. Consequently, in the presence of an $h$-cluster in $F$, and for each $\epsilon\ge C(F)h^3$, there are signals $F'\in E_\epsilon(F),$ with the nodes $x'_i,x'_{i+1}$ at a distance $\sim h$ from $x_i,x_{i+1}$. That is, a massive error amplification from $h^3$ to $h$ occurs in this case. \smallskip Our second main result presents this fact formally, in terms of the worst case error: \begin{theo}\label{thm:main2} Let $F\in {\cal P}_d$ form an $(i,h,M)$-cluster. Then for each $\epsilon\ge C(F)h^3$ the worst case reconstruction error $\rho^x_\epsilon(F)$ in nodes of $F$ is at least $\frac{1}{2}h$. \end{theo} We see that a measurements error $\epsilon\sim h^3$ can be amplified up to the factor $\sim h^{-2}$ in reconstruction of the nodes of $F$. In particular, for $d=2,$ i.e., in the case of exactly two nodes in $F$, and for $N=4$, we get (assuming that $M=1$ and $x_1,x_2\in [-1,1]$) that $C(F)\le 288.$ So the minimal accuracy $\epsilon$ required to keep the error in the nodes reconstruction less than $\frac{1}{2}h$ is $\epsilon \le 288 h^3$. For $h=0.01$ we get $\epsilon\leq 0.0003$. \smallskip In \cite{Aki.Bat.Yom}, in order to show that the length of the curve $S^x_{F,i}(h)$ is of order $h$, we use the inverse function theorem, combined with estimates from \cite{Bat.Yom2} of the Jacobian of the Prony mapping. As a result, the constants become much less explicit. \smallskip We expect that the results of Theorems \ref{thm:main1} and \ref{thm:main2} can be extended to an accurate description of the $\epsilon$-error set in the case of clusters with more than two nodes, using an appropriate version of the ``quantitative inverse function theorem''. Informally, we expect the following general result to be true: \smallskip \noindent {\it Let the nodes $x_1,\ldots,x_d$ of $F$ form a cluster of size $h\ll 1.$ Then for $\epsilon \le O(h^{2d-1})$ the $\epsilon$-error set $E_\epsilon(F)$ is a ``non-linear coordinate parallelepiped'' $\Pi_{h,\epsilon}(F)$ with respect to the moment coordinates $m_k(F')$, centered at $F$. Its width in the direction of the moment coordinate $m_k, \ k=0,\ldots,2d-1,$ is of order $\epsilon h^{-k}.$ In particular, the maximal stretching of $\Pi_{h,\epsilon}(F)$, of order $\epsilon h^{-(2d-1)},$ occurs along the Prony curve $S_{2d-2}(F)$.} \smallskip However, an application of the approach based on the inverse function theorem will significantly reduce the domain of applicability of the results, and will make the constants less explicit. On the other hand, we believe that the explicit parametric description of the Prony curves, given in Section \ref{sec:parm.Prony.curves} below, can be extended to clusters with more than two nodes. It becomes significantly more complicated, but promises potentially better understanding of the geometry of error amplification.
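\smallskip Before turning to the proofs, the following small numerical check (a sketch under the setting of Section \ref{sec:examples}: $m_0=1$, $m_1=0$, $m_2=h^2$, and the parametrization of the Prony curve quoted there) illustrates the mechanism behind Theorem \ref{thm:main1}: along the curve $x_1x_2=-h^2$ the first three moments are preserved exactly, while $m_3$ moves away from $m_3(F)=0$ only at order $h^3$.
\begin{verbatim}
import numpy as np

h = 0.05
m0, m1, m2 = 1.0, 0.0, h**2   # moments of F(x) = (delta(x+h)+delta(x-h))/2

def on_prony_curve(x1):
    """Signal on S_F: x2 from x1*x2 = -h^2, amplitudes from (m0, m1)."""
    x2 = -h**2 / x1
    a1 = (m0 * x2 - m1) / (x2 - x1)
    a2 = (-m0 * x1 + m1) / (x2 - x1)
    return a1, a2, x1, x2

def moment(a1, a2, x1, x2, k):
    return a1 * x1**k + a2 * x2**k

for x1 in np.linspace(-1.3 * h, -0.7 * h, 5):  # roughly the local part S_F(h)
    a1, a2, x1, x2 = on_prony_curve(x1)
    m = [moment(a1, a2, x1, x2, k) for k in range(4)]
    # m[0], m[1], m[2] reproduce (m0, m1, m2) exactly; |m[3]| stays ~ h^3.
    print('x1=%+.4f  x2=%+.4f  |m3|=%.2e  (h^3=%.2e)'
          % (x1, x2, abs(m[3]), h**3))
\end{verbatim}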
\section{Proofs}\label{Sec:proofs} The proof of Theorems \ref{thm:main1} and \ref{thm:main2}, given below, is based on a detailed explicit description of the Prony curves, on one hand, and of a behavior of the moments $m_k, \ k\ge 3$ on these curves, on the other. \subsection{Parametrization of the Prony curves}\label{sec:parm.Prony.curves} We denote by $S^x_{F,i}$ the projection of the Prony curve $S_{F,i}$ onto the node plane spanned by the node coordinates $x'_i,x'_{i+1}$. \begin{theo}\label{thm:two.nodes} The curve $S^x_{F,i}$ is a hyperbola in the plane $x'_i,x'_{i+1}$ defined by the equation $$ m_0(F)x'_ix'_{i+1}-m_1(F)(x'_i+x'_{i+1})+m_2(F)=0. $$ The original curve $S_{F,i}$ is parametrized through $x'_i,x'_{i+1}$ in $S^x_{F,i}$ as $$ a'_i=\frac{m_0(F)x'_{i+1}-m_1(F)}{x'_{i+1}-x'_i}, \ a'_{i+1} = \frac{-m_0(F)x'_i+m_1(F)}{x'_{i+1}-x'_i}, \ (x'_i,x'_{i+1})\in S^x_{F,i}. $$ \end{theo} \noindent{\bf Proof: } Since all the nodes and amplitudes are fixed on the Prony curve $S_{F,i}$, but $(a_i,x_i),(a_{i+1},x_{i+1})$, we can work only with the partial signals $a_i\delta (x-x_i)+a_{i+1}\delta (x-x_{i+1})$. In other words, we can consider the case of exactly two nodes, i.e. signals of the form $F(x)=a_1\delta (x-x_1)+a_2\delta (x-x_2)\in {\cal P}_2$. In this case there is only one choice $i=1$ for the index $i$ in the definition of the Prony curves $S_{F,i}$, and we denote them by $S_F$. \smallskip Alternatively, we can consider algebraic curves $S(m_0,m_1,m_2)$ in ${\cal P}_2$, defined by the equations \begin{equation}\label{eq:Prony.fol.2.111} \begin{array}{c} a_1+a_2=m_0,\\ a_1x_1+a_2x_2=m_1,\\ a_1x^2_1+a_2x^2_2=m_2,\\ \end{array} \end{equation} for any moments $m_0,m_1,m_2$. If we put $m_k=m_k(F), \ k=0,1,2,$ we get $S(m_0,\allowbreak m_1,m_2)=S_F.$ Since we are interested in the behavior of the nodes $x_1,x_2$ along the curve $S$, we will consider also the node parameter space ${\cal P}^x_2=\{(x_1,x_2)\},$ and the projections $S^x(m_0,m_1,m_2) \subset {\cal P}^x_2$ of the Prony curves $S(m_0,m_1,m_2)$ to the node space ${\cal P}^x_2$. The following proposition is an extended version of Theorem \ref{thm:two.nodes}. \begin{proposition}\label{Prop:two.nodes11} The curves $S^x(m_0,m_1,m_2) \subset {\cal P}^x_2$ are hyperbolas in the plane $x_1,x_2$ defined by the equation \begin{equation}\label{eq:Prony.fol.2.11} m_0x_1x_2-m_1(x_1+x_2)+m_2=0. \end{equation} They form a two-parametric family, depending only on the ratio of the moments $(m_0:m_1:m_2)$. The corresponding curves $S(m_0,m_1,m_2) \subset {\cal P}_2$ are parametrized as \begin{equation}\label{eq:Prony.fol.2.21} a_1=\frac{m_0x_2-m_1}{x_2-x_1}, \ a_2 = \frac{-m_0x_1+m_1}{x_2-x_1}, \ (x_1,x_2)\in S^x(m_0,m_1,m_2). \end{equation} \end{proposition} \noindent{\bf Proof: } We get from the first two equations of (\ref{eq:Prony.fol.2.111}) the following expressions for $a_1,a_2$ through $x_1,x_2$: \begin{equation}\label{eq:Prony.fol.121} a_2=m_0-a_1, \ a_1x_1+(m_0-a_1)x_2=m_1, \end{equation} and hence \begin{equation}\label{eq:Prony.fol.131} a_1=\frac{m_0x_2-m_1}{x_2-x_1}, \ a_2 = \frac{-m_0x_1+m_1}{x_2-x_1}. \end{equation} The curve $S(m_0,m_1,m_2)$ is defined by all the three equations of (\ref{eq:Prony.fol.2.111}). 
Substituting (\ref{eq:Prony.fol.131}) into the last equation of (\ref{eq:Prony.fol.2.111}), we see that the projection $S^x= S^x(m_0,m_1,m_2)$ of $S(m_0,m_1,m_2)$ onto the $(x_1,x_2)$-subspace ${\cal P}^x_2 \subset {\cal P}_2$ is obtained in ${\cal P}^x_2$ as the solution of the third degree equation \begin{equation}\label{eq:Prony.fol.141} \frac{m_0x_2-m_1}{x_2-x_1}x^2_1+\frac{-m_0x_1+m_1}{x_2-x_1}x^2_2=m_2. \end{equation} An explicit description of the curve $S^x$ can be obtained as follows: we can rewrite the left hand side of equation (\ref{eq:Prony.fol.141}) in the form \begin{equation}\label{eq:Prony.fol.151} \frac{1}{x_2-x_1}[m_0(x^2_1x_2-x_1x^2_2)+m_1(x^2_2-x^2_1)]=-m_0x_1x_2+m_1(x_1+x_2), \end{equation} which leads to the equation \begin{equation}\label{eq:Prony.fol.161} m_0x_1x_2-m_1(x_1+x_2)+m_2=0 \end{equation} for the curve $S^x(m_0,m_1,m_2).$ So this curve is a hyperbola with the center at the point $(\frac{m_1}{m_0},\frac{m_1}{m_0})$, and with the asymptotes $x_1=\frac{m_1}{m_0}, \ x_2=\frac{m_1}{m_0}.$ Equation (\ref{eq:Prony.fol.161}) is homogeneous in $(m_0,m_1,m_2)$ and hence its solution depends only on the ratio of the moments $(m_0:m_1:m_2)$. Applying expressions (\ref{eq:Prony.fol.131}) we complete the proof of Proposition \ref{Prop:two.nodes11} and of Theorem \ref{thm:two.nodes}. $\square$ \medskip We expect that the explicit description of the Prony curves given above can be combined with the analysis of the Prony mapping from the point of view of Singularity Theory, given in \cite{Bat.Yom3,Yom}, including, in particular, the representation of signals $F$ in the ``bases of finite differences'' introduced in \cite{Bat.Yom3,Yom}. \subsection{Moments on the Prony curves} In this section we describe the behavior of the moments $m_k(F), \ k\geq 3,$ along the Prony curve $S_{F,i}$, on its $h$-local part $S_{F,i}(h).$ \begin{theo}\label{thm:moments.on.S} Let $F=\sum_{j=1}^{d}a_{j}\delta\left(x-x_{j}\right)\in {\cal P}_d$ form an $(i,h,M)$-cluster, and let $\kappa=\frac{x_{i+1}+x_i}{2}$ be the center of the interval $[x_i,x_{i+1}]$. Then for any $F' \in S_{F,i}(h)$ we have $m_k(F')-m_k(F)=0, \ k=0,1,2,$ while $$ |m_k(F')-m_k(F)|\leq 18M(1+|\kappa|)^kh^3, \ k\geq 3. $$ \end{theo} \noindent{\bf Proof: } As in the previous section, it is sufficient to consider the case of exactly two nodes. By the assumptions, for the signal $F(x)=a_1\delta (x-x_1)+a_2\delta (x-x_2)\in {\cal P}_2$ we have $x_2=x_1+h$, and $|a_1|, |a_2| \leq M$. To simplify the expressions we shall assume that $h\leq 1$. Let us show first that the distance between the nodes remains uniformly bounded from below along $S^x_F(h)$. \begin{lemma}\label{lem:bdd.dist11} For each $F'(x)=a'_1\delta (x-x'_1)+a'_2\delta (x-x'_2)\in S_F(h)$ we have $$ x'_2-x'_1 \geq \frac{1}{4}h. $$ \end{lemma} \noindent{\bf Proof: } The point $(x_1,x_2)\in {\cal P}^x_2$ is at the distance $\frac{1}{\sqrt 2}h$ from the diagonal $\{x_1=x_2\}$. So the disk $D$ is at a distance of at least $(\frac{1}{\sqrt 2}-\frac{1}{2})h > 0.2h$ from the diagonal. Therefore for any $(x'_1,x'_2)\in D$ we have $x'_2-x'_1>0.2\sqrt 2 h > \frac{1}{4}h.$ In particular, this is true for each point of $S^x_F(h)$. $\square$ \smallskip Next we show that the amplitudes $a'_1, a'_2$ are uniformly bounded on $S_F(h)$. \begin{lemma}\label{lem:bdd.ampl11} For each $F'(x)=a'_1\delta (x-x'_1)+a'_2\delta (x-x'_2)\in S_F(h)$ we have $$ |a'_1|, |a'_2| \leq 8M.
$$ \end{lemma} \noindent{\bf Proof: } By expressions (\ref{eq:Prony.fol.2.21}) in Proposition \ref{Prop:two.nodes11} we have \begin{equation}\label{eq:Prony.fol.2.1311} a'_1=\frac{m_0 x'_2-m_1}{x'_2-x'_1}, \ a'_2 = \frac{-m_0 x'_1+m_1}{x'_2-x'_1}. \end{equation} We can write $$ a'_1=\frac{m_0 x'_2-m_1}{x'_2-x'_1}=\frac{m_0x_2-m_1}{x'_2-x'_1}+\frac{m_0(x'_2-x_2)}{x'_2-x'_1}, $$ and hence $$ |a'_1|\leq |\frac{m_0x_2-m_1}{x_2-x_1}|\cdot |\frac{x_2-x_1}{x'_2-x'_1}|+|m_0|\cdot |\frac{x'_2-x_2}{x'_2-x'_1}|, $$ or \begin{equation}\label{eq:Prony.fol.2.14} |a'_1|\leq |a_1|\cdot|\frac{x_2-x_1}{x'_2-x'_1}|+|m_0|\cdot |\frac{x'_2-x_2}{x'_2-x'_1}|. \end{equation} By Lemma \ref{lem:bdd.dist11} we have $x'_2-x'_1 \geq \frac{1}{4}h,$ while by the assumptions $x_2-x_1=h$. Since the point $(x'_1,x'_2)$ belongs to the disk $D$ of radius $\frac{1}{2}h$ centered at $(x_1,x_2)$, we have also $|x'_2-x_2|\leq \frac{1}{2}h.$ Therefore (\ref{eq:Prony.fol.2.14}) implies \begin{equation}\label{eq:Prony.fol.2.15} |a'_1|\leq 4|a_1|+2|m_0|\leq 8M, \end{equation} since by the assumptions $|a_1|, |a_2| \leq M$, and hence $|m_0|=|a_1+a_2|\leq 2M$. The bound for $|a'_2|$ is obtained exactly in the same way. $\square$ \smallskip In order to estimate the differences $m_k(F')-m_k(F)$ we now shift the origin into the middle point $\kappa=\frac{x_1+x_2}{2}$ between the nodes $x_1,x_2$. For $$ F(x)=\sum_{j=1}^d a_j\delta (x-x_j)\in {\cal P}_d $$ denote by $F^\kappa(x)$ the shifted signal $F^\kappa(x)=F(x-\kappa)$. \medskip The following proposition describes the action of the coordinate shift on the moments of general spike-trains (of course, this results remains valid for the moments of any measure on $\mathbb R$). \begin{proposition}\label{prop:shift} $$ m_k(F)=\sum_{l=0}^k \binom{k}{l}(-\kappa)^{k-l} m_l(F^\kappa), \ m_k(F^\kappa) = \sum_{l=0}^k \binom{k}{l}(\kappa)^{k-l} m_l(F). $$ \end{proposition} \noindent{\bf Proof: } $$ m_k(F^\kappa)= \sum_{j=1}^d a_j (\kappa+x_j)^k =\sum_{j=1}^d a_j \sum_{l=0}^k \binom{k}{l}\kappa^{k-l}x_j^l = $$ $$ = \sum_{l=0}^k \binom{k}{l}\kappa^{k-l} \sum_{j=1}^d a_j x_j^l= \sum_{l=0}^k \binom{k}{l}\kappa^{k-l} m_l(F). $$ Replacing $\kappa$ by $-\kappa$ we get the second expression. $\square$ \smallskip Finally we come to estimating the differences $m_k(F')-m_k(F)$. Since by Proposition \ref{prop:shift} the shifted moments are expressed through the original moments of the same and of smaller orders, we see that along the curve $S_F$ the first three shifted moments do not change. Applying Proposition \ref{prop:shift} in the opposite direction, we can write, for $k\geq 3$, \begin{equation}\label{eq:shifted.diff} |m_k(F')-m_k(F)| \leq \sum_{l=3}^k \binom{k}{l}|\kappa|^{k-l}|m_l(F'^\kappa)-m_l(F^\kappa)|. \end{equation} By the choice of $\kappa$ we have $|x_1-\kappa|=|x_2-\kappa|=h/2.$ For $(x'_1,x'_2)\in D$ we have $|x'_1-\kappa|,|x'_2-\kappa|\leq h$. Hence we obtain, using Lemma \ref{lem:bdd.ampl11}, $$ |m_l(F^\kappa)|=|a_1(x_1-\kappa)^l+a_2(x_2-\kappa)^l|\leq 2M(\frac{h}{2})^l, $$ $$ |m_l(F'^\kappa)|=|a'_1 (x'_1-\kappa)^l+a'_2(x'_2-\kappa)^l|\leq 16Mh^l. $$ Consequently, $|m_l(F'^\kappa)-m_l(F^\kappa)|\leq (16+2(\frac{1}{2})^l)Mh^l \leq 18Mh^l.$ Substituting this into equation (\ref{eq:shifted.diff}) we get $$ |m_k(F')-m_k(F)| \leq 18M\sum_{l=3}^k \binom{k}{l}|\kappa|^{k-l}h^l \leq $$ $$ \leq 18Mh^3\sum_{l=3}^k \binom{k}{l}|\kappa|^{k-l}\leq 18M(1+|\kappa|)^kh^3. $$ This completes the proof of Theorem \ref{thm:moments.on.S}. 
$\square$ \subsection{Proof of Theorem \ref{thm:main1}} We have to show that for each $\epsilon\ge C(F)h^3$, with $C(F)=18M(1+|\kappa|)^N,$ the $\epsilon$-error set $E_\epsilon(F)$ contains the local Prony curve $S_{F,i}(h).$ By Theorem \ref{thm:moments.on.S} we have for any $F'\in S_{F,i}(h)$ and for each $k\leq N$ $$ |\ m_k(F')-m_k(F)|\leq 18M(1+|\kappa|)^kh^3 \leq 18M(1+|\kappa|)^Nh^3=C(F)h^3 \le \epsilon, $$ and therefore $S_{F,i}(h)\subset E_\epsilon(F).$ This completes the proof. $\square$ \subsection{Proof of Theorem \ref{thm:main2}} By definition, for $F=(a,x)\in {\cal P}_d$ the worst case error $\rho^x_\epsilon(F)$ in reconstruction of the nodes of $F$ is $$ \rho^x_\epsilon(F)=\sup_{F'=(a',x')\in E_\epsilon(F)} ||x'-x||. $$ The projection $S^x_{F,i}(h)$ of the $h$-local Prony curve $S_{F,i}(h)$ to the coordinate plane of $(x'_i,x'_{i+1})$ is a hyperbola, passing through the point $(x_i,x_{i+1})$, and it crosses the boundary of the disk $D$ of radius $\frac{1}{2}h$ centered at $(x_i,x_{i+1})$ at exactly two points. Let $F''=(a'',x'')$ be one of the corresponding endpoints of $S_{F,i}(h).$ Then the distance between the nodes of $F''$ and the nodes of $F$ is exactly $\frac{1}{2}h$. By Theorem \ref{thm:main1} we have $S_{F,i}(h)\subset E_\epsilon(F),$ and therefore $F''\in E_\epsilon(F)$. We conclude that $$ \rho^x_\epsilon(F)=\sup_{F'=(a',x')\in E_\epsilon(F)} ||x'-x||\ge ||x''-x||=\frac{1}{2}h. $$ This completes the proof. $\square$ \section{A case of complex nodes}\label{sec:compl.nodes} The goal of this section is to compare two approaches to the reconstruction problem for real spike-train signals with exactly two nodes, namely reconstruction from their moments and from their Fourier samples, and to pose some related open questions. For a signal $$ F(x)=a_1\delta(x-x_1)+a_2\delta(x-x_2)\in {\cal P}_2 $$ we have for its Fourier transform $f_s(F):={\cal F}(F)(s)=a_1e^{isx_1}+a_2e^{isx_2}$. Taking samples $f_k(F)$ at the points $s=0,1,\ldots,k,\ldots,$ we get $f_k(F)=a_1e^{ikx_1}+a_2e^{ikx_2}$. We see immediately that the Fourier samples $f_k(F)=a_1e^{ikx_1}+a_2e^{ikx_2}$ coincide with the moments $m_k(\tilde F)$ for a signal $\tilde F(x)=a_1\delta(x-e^{ix_1})+a_2\delta(x-e^{ix_2})$ with the {\it complex nodes} $e^{ix_1},e^{ix_2}$. Recently in \cite{Aki.Gol.Yom} a trigonometric reconstruction method for $\tilde F$ from its moments $m_k(\tilde F)$ was introduced, which uses the following four real measurements: $$ |m_0|,|m_1|,|m_2|, \ \text {and the imaginary part} \ \Im m_1. $$ Notice that each complex moment provides (at least, formally) {\it two real measurements}: its real and imaginary parts. So taking four moments $m_k, \ k=0,1,2,3,$ as in a true complex Prony system, gives us eight real equations, while the signals $F$ and $\tilde F$ have only four real parameters: $a_1,a_2,x_1,x_2$. \smallskip This leads us to the following question: {\it what real measurements (coming from the real or from the complex moments) do we really need? Can we improve the reconstruction accuracy by a ``correct choice'' of the measurements?} \smallskip The last question is directly connected to the main results of the current paper, because of the following fact: the trigonometric reconstruction method of \cite{Aki.Gol.Yom} uses as an input only {\it three complex moments $m_0,m_1,m_2$}.
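\smallskip To make the correspondence above concrete, the following short numerical sketch (ours, given purely for illustration and not part of \cite{Aki.Gol.Yom}) evaluates the Fourier samples of a real two-node signal $F$, checks that they coincide with the moments of the complex-node signal $\tilde F$, and prints the four real measurements used by the trigonometric method.
\begin{verbatim}
import numpy as np

# Arbitrary test values for a real two-node signal F.
a1, a2, x1, x2 = 0.8, 1.4, 0.35, 0.65
z1, z2 = np.exp(1j * x1), np.exp(1j * x2)     # complex nodes of the signal F~

for k in range(6):
    f_k = a1 * np.exp(1j * k * x1) + a2 * np.exp(1j * k * x2)   # Fourier sample of F
    m_k = a1 * z1**k + a2 * z2**k                               # moment of F~
    assert np.isclose(f_k, m_k)

m = [a1 * z1**k + a2 * z2**k for k in range(3)]
print("|m0|, |m1|, |m2|, Im m1 =",
      abs(m[0]), abs(m[1]), abs(m[2]), m[1].imag)   # the four real measurements
\end{verbatim}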
According to the approach of the present paper, we would expect for $F$ (or $\tilde F$) with two nodes at a distance $h\ll 1$ the worst case node error amplification factor to be of order $(\frac{1}{h})^{l-2}$, where $l$ is the number of moments used. For the trigonometric method $l=3$, and this would lead to an amplification factor of order $\frac{1}{h}$, while for the Prony inversion it is shown above to be of order $(\frac{1}{h})^{2}$, an apparent contradiction. Our initial experiments (partially reported in \cite{Aki.Gol.Yom}) indicate that also for the trigonometric method the worst case error amplification factor is of order $(\frac{1}{h})^{2}$. This leads to the following question, which may be important for better understanding the patterns of error amplification in different methods of spike-train reconstruction: \smallskip \noindent {\it Is it possible to extend the approach of the present paper to the analysis of the error amplification in trigonometric reconstruction? Where do we (presumably) lose, in the trigonometric method, the accuracy gained by not using the fourth moment?} \smallskip For the reader's convenience we briefly recall below the main steps of the trigonometric reconstruction method of \cite{Aki.Gol.Yom}. \subsection{Trigonometric reconstruction: main steps}\label{sec:trig.reconstr} Consider a signal $F(u)$ with {\it complex} nodes $x, y$ of the form (in this section, we use notations from \cite{Aki.Gol.Yom}): \begin{equation}\label{eq:unit.circle} F(u)=a\delta(u-x)+b\delta(u-y), \ x=e^{i\phi}, \ y=e^{i\theta}, \ \phi,\theta,a,b \in {\mathbb R} \end{equation} Our measurements are the complex moments \begin{equation}\label{eq:unit.circle2} m_k=\int_{\mathbb C} x^k F(x)dx=ae^{ik\phi}+be^{ik\theta}. \end{equation} We can write the moments for signal \eqref{eq:unit.circle} in the following form: \begin{equation}\label{eq:unit.circle3} m_k = ax^k + by^k. \end{equation} \subsection{Recovery of phase difference} Introduce the following definitions for the phases of $x$ and $y$: $$ \phi = -2\pi \mu, \quad \theta = -2\pi \nu, \quad \Delta = \phi - \theta. $$ From \eqref{eq:unit.circle}, we have $$ x = e^{i \phi} = \cos \phi + i \sin \phi, \quad y = e^{i \theta}= \cos \theta + i \sin \theta. $$ Consider the real and imaginary parts of the moment $m_k$: \begin{equation} \label{eq:TProny2D-ReIm} \begin{cases} \Re m_k = a \cos k\phi + b \cos k\theta \\ \Im m_k = a \sin k\phi + b \sin k\theta \end{cases} \end{equation} Now, we get $$ \begin{cases} (\Re m_k)^2 = a^2 \cos^2 k\phi + b^2 \cos^2 k\theta + 2ab \cos k\phi \cos k\theta \\ (\Im m_k)^2 = a^2 \sin^2 k\phi + b^2 \sin^2 k\theta + 2ab \sin k\phi \sin k\theta \end{cases} $$ Let $M_k = |m_k|$. So $$ M_k^2 = |m_k|^2 = a^2 + b^2 + 2ab \cos k\Delta. $$ Since $M_0^2 = a^2 + 2ab + b^2$, we get \begin{equation} \label{eq:temp1} 2\sin^2 \frac{k\Delta}{2}= -\dfrac{M_k^2 - M_0^2}{2ab}. \end{equation} It follows from \eqref{eq:temp1} that $$ \dfrac{\sin^2 \Delta/2}{\sin^2 \Delta} \equiv \dfrac{1}{4 \cos^2(\Delta/2)} = \dfrac{M_1^2 - M_0^2}{M_2^2 - M_0^2}, $$ and hence $$ \dfrac{1+\cos \Delta}{2} = \dfrac{1}{4} \dfrac{M_2^2 - M_0^2}{M_1^2 - M_0^2} \quad \text{or} \quad \Delta = \arccos \Bigg( \dfrac{2 M_1^2 - M_0^2 - M_2^2}{2(M_0^2 - M_1^2)} \Bigg). $$ \subsection{Amplitudes recovery} Recall that $$ \cos \Delta - 1= \dfrac{M_1^2 - M_0^2}{2ab} \qquad \text{and} \qquad \cos\Delta + 1 = \dfrac{1}{2} \dfrac{M_2^2 - M_0^2}{M_1^2 - M_0^2}.
$$ Thus $$ 2 = \dfrac{1}{2} \dfrac{M_2^2 - M_0^2}{M_1^2 - M_0^2} - \dfrac{M_1^2 - M_0^2}{2ab} \quad \text{or} \quad \dfrac{M_1^2 - M_0^2}{ab} = \dfrac{M_2^2 + 3M_0^2 - 4M_1^2}{M_1^2 - M_0^2}. $$ So we get $$ ab = \dfrac{(M_1^2 - M_0^2)^2}{M_2^2 + 3M_0^2 - 4M_1^2}. $$ Now, in order to find the unknown amplitudes, we have to solve the system $$ \begin{cases} a + b = M_0, \\ ab = \dfrac{(M_1^2 - M_0^2)^2}{M_2^2 + 3M_0^2 - 4M_1^2}, \end{cases} $$ which is reduced to the quadratic equation $$ a^2 - M_0 a + \dfrac{(M_1^2 - M_0^2)^2}{M_2^2 + 3M_0^2 - 4M_1^2} = 0 $$ with the discriminant $$ D = M_0^2 - 4\dfrac{(M_1^2 - M_0^2)^2}{M_2^2 + 3M_0^2 - 4M_1^2} $$ Now, we obtain the amplitudes: $$ a = \dfrac{M_0 \pm \sqrt{D}}{2}, \quad b = \dfrac{M_0 \mp \sqrt{D}}{2}. $$ \subsection{Phases recovery} Let $\phi = \theta + \Delta$. Then the system \eqref{eq:TProny2D-ReIm} has the form $$ \begin{cases} \Re m_1 = a \cos (\theta + \Delta) + b \cos \theta, \\ \Im m_1 = a \sin (\theta + \Delta) + b \sin \theta \end{cases} $$ or $$ \begin{cases} \Re m_1 = a \Big( \cos\theta \cos\Delta - \sin\theta \sin\Delta \Big) + b \cos \theta, \\ \Im m_1 = a \Big( \sin\theta \cos\Delta + \cos\theta \sin\Delta \Big) + b \sin \theta \end{cases} $$ and $$ \begin{cases} \Re m_1 = (a\cos\Delta + b)\cos\theta + (-a\sin\Delta) \sin\theta, \\ \Im m_1 = (a\sin\Delta) \cos\theta + (a\cos\Delta + b)\sin\theta. \end{cases} $$ Thus, we get $$ \theta = -\arccos \Big( \dfrac {\Re m_1 (a\cos\Delta + b) + \Im m_1 (a\sin\Delta)} {(a\cos\Delta + b)^2 + (a\sin\Delta)^2} \Big), \quad \phi = \theta + \Delta. $$ \clearpage \bibliographystyle{amsplain}
\section{Introduction} Quantum technologies received great attention as means to improve resolution and precision of metrological tasks by reducing statistical errors due to quantum noise~\cite{Giovannetti_QuMetReview_2004,Walther_4photon_2004,Mitchell_superresolution_2004,Nagata_superresolution_2007,Higgins_entanglement-free_2007,Matthews_multiphoton-wg_2009,Afek_high-noon_2010,Advances_in_Qmet}. Far less attention has been paid to their ability to reduce systematic errors. However, statistical and systematic errors are of equal importance in any measurement, and the latter ones are typically more difficult to account for. Notable examples for quantum-improved measurements are the use of single-electron sources for a more accurate definition of the Ampere~\cite{Poirier_ampere_2016}, and quantum correlated ``twin photon beams'' towards establishing absolute and universal optical power standards~\cite{Migdall_power_1995}. In this letter we demonstrate a new use of quantum optics to reduce systematic errors in the technologically prominent application of spectrally-resolved white-light interferometry (WLI). WLI is used for precise measurements of chromatic dispersion, \textit{i.e.} the second derivative of the wavelength-dependent optical phase. Classical WLI requires, however, precise interferometer equalisation~\cite{Naganuma_subwavelength_1990,Diddams_dispersion-WLI_1996} and is influenced by third-order dispersion~\cite{MethodsComparison,Galle_thesis_2014} which leads to systematic errors that are difficult to account for.\\ We eliminate all those drawbacks by inferring chromatic dispersion using energy-time entangled photon pairs and coincidence counting to measure spectral correlation functions. In addition, we exploit photon-number correlations to achieve a two-fold resolution enhancement. Our results demonstrate that this new strategy outperforms the precision and accuracy of previous quantum~\cite{Brendel_dispersion_1998,Nasr_dispersion_2004} and state-of-the-art techniques~\cite{Naganuma_subwavelength_1990,Diddams_dispersion-WLI_1996}. Moreover, as our approach is essentially alignment-free, it enables using the same interferometer in a user-friendly fashion for analysing a wide variety of different optical materials, in terms of type, optical properties, length, \textit{etc.}. \subsection{Standard white-light interferometry} The standard scheme for WLI is shown in \figurename~\ref{Fig1}(a). The emission of a white-light source is directed to an interferometer in which the reference arm is free-space (with well known optical properties) and the other arm comprises the sample under test (SUT). Recombining both arms at the output beam-splitter leads to an interference pattern for which the intensity follows $I \propto 1 + \cos \left( \phi(\lambda) \right)$, with $\phi(\lambda) = \tfrac{2\,\pi}{\lambda} \left(n(\lambda) \cdot L_{\rm s} - L_{\rm r} \right)$. Here, $\lambda$ represents the wavelength, $L_{\rm r}$ and $L_{\rm s}$ are the physical lengths of the reference arm and the SUT, respectively, and $n(\lambda)$ is the effective refractive index of the SUT. It is worth noting that interference is only observed when the interferometer is precisely balanced to within the larger of: the coherence length of the white-light source and the coherence length imposed by the resolution of the spectrometer, which is typically on the order of microns to millimetres~\cite{Naganuma_subwavelength_1990,Diddams_dispersion-WLI_1996}. 
In this case, the phase term reads (more details are given in the supplementary information): \begin{eqnarray} \phi(\lambda) &\approx& 2\,\pi\,L_{\rm s} \Bigg( \frac{1}{2} \, \frac{d^2 n}{d \lambda^2}\Bigg|_{\lambda_0} \cdot \frac{(\Delta \lambda) ^2}{\lambda_0 + \Delta \lambda} \nonumber\\ &\,& + \frac{1}{6} \, \frac{d^3 n}{d \lambda^3}\Bigg|_{ \lambda_0} \cdot \frac{(\Delta \lambda) ^3}{\lambda_0 + \Delta \lambda} \Bigg) + \phi_{\rm off}. \label{ClassicalPhase} \end{eqnarray} Here, $\lambda_0$ represents the stationary phase point, \textit{i.e.} the wavelength at which the absolute phase difference between the two interferometer arms is exactly zero. In standard WLI, $\lambda_0$ is extracted experimentally by identifying the symmetry point of the observed interferogram~\cite{Naganuma_subwavelength_1990,Diddams_dispersion-WLI_1996}. Additionally, $\Delta \lambda = \lambda - \lambda_0$, and $\phi_{\rm off}$ is a constant offset phase. Provided that $L_{\rm s}$ is precisely known, the optical material parameters $\frac{d^2n}{d \lambda^2}\Big|_{\lambda_0}$ and $\frac{d^3n}{d \lambda^3}\Big|_{\lambda_0}$ can be extracted from a fit to the data as a function of $\Delta \lambda$. It is noteworthy that the three free parameters, \textit{i.e.} $\lambda_0$, $\frac{d^2n}{d \lambda^2}\Big|_{\lambda_0}$ and $\frac{d^3n}{d \lambda^3}\Big|_{\lambda_0}$, are usually all interdependent in a non-trivial fashion such that uncertainties on one parameter systematically induce uncertainties on the others. As a matter of fact, the high number of required fitting parameters and the necessity to re-equilibrate the interferometer for every new SUT represent the main limiting factors of this technique~\cite{MethodsComparison,Galle_thesis_2014}. However, more accurate optical measurements are eagerly demanded in almost all fields where optics is involved. A special focus is set on the optical parameter $\tfrac{d^2n}{d \lambda^2}\Big|_{\lambda_0}$ as it is directly related to the chromatic dispersion coefficient $D = - \frac{\lambda_0}{c} \cdot \frac{d^2n}{d \lambda^2}\Big|_{\lambda_0}$, in which $c$ represents the speed of light~\cite{Hlubina_one_percent,Laurent_dispersion,Kardas_160fs,Hlubina_sub_100fs,Ye_few_percent,MethodsComparison,Galle_280fs,GDD_mirrors}. Chromatic dispersion causes optical pulse broadening and more accurate measurements on $D$ would have tremendous repercussions for optimising today's telecommunication networks, developing new-generation pulsed lasers and amplifiers, designing novel linear and nonlinear optical components and circuits, as well as for assessing the properties of biological tissues. \begin{figure} \includegraphics{Fig1_bis} \caption{Typical experimental setups for standard spectrally-resolved WLI \textbf{(a)}, and Q-WLI \textbf{(b)}. BS, beam-splitter; SUT, sample under test; SPD, single photon detector. \&-symbol, time-tagging and coincidence logic.\label{Fig1}} \end{figure} \section{Materials and Methods} \subsection{Quantum white-light interferometry} \figurename~\ref{Fig1}(b) depicts the new experimental schematic dedicated to spectrally-resolved quantum WLI (Q-WLI), intended to overcome the above-mentioned issues. The quantum white-light source is composed of a continuous-wave pump laser and a non-linear crystal in which energy-time entangled photon pairs are generated through spontaneous parametric downconversion~\cite{Kaiser_source,Alibart_PPLN_2016}. 
This process obeys energy conservation, \textit{i.e.} $\frac{1}{\lambda_{\rm p}} = \frac{1}{\lambda_1} + \frac{1}{\lambda_2}$. Here, $\lambda_{\rm p,1,2}$ represent the vacuum wavelengths of the pump laser photons and of the two individual photons of each generated pair. Another implication of energy conservation is that the degenerate vacuum wavelength of the emission spectrum is $\lambda^* = 2\,\lambda_{\rm p}$. We send the paired photons to the interferometer; however, as opposed to standard WLI, we now intentionally unbalance it. This provides us with two advantages: first, we avoid single-photon interference; and second, we obtain a means to distinguish events where both photons take opposite paths (strongly delayed arrival times at the interferometer's outputs) or the same path (near zero arrival time difference)~\cite{Kaiser_source}. We post-select the latter ones by considering only two-photon coincidence detection events in which both the single photon detector (SPD) and the single-photon sensitive spectrometer fire simultaneously. Our target is now to observe quantum interference between those two-photon contributions, which necessitates that they are coherent and indistinguishable. Coherence is ensured by operating the interferometer at a path length difference that is shorter than the coherence length of the pump laser ($\sim$100\,m) such that the photon-pair contributions \textit{are in phase}~\cite{Franson_energy_time_1989}. Indistinguishability concerns mainly the temporal envelope of the photon-pair wave packet, which is distorted from its original shape by the dispersion-induced temporal walk-off between the individual photons in the SUT. For standard fibres, this means that path length differences up to $\sim$10\,m are acceptable~\cite{Vergyris_sagnac_2017}.\\ Thus, provided that the interferometer is operated in these conditions, near zero arrival time coincidence detection results in a two-photon $N00N$-state: \begin{equation} |\psi \rangle = \frac{\left( |2 \rangle_{\rm r} |0 \rangle_{\rm s} + {\rm e}^{{\rm i} \phi_{N00N}} |0 \rangle_{\rm r} |2 \rangle_{\rm s} \right)}{\sqrt{2}}. \end{equation} Here, the ket vectors, indexed by s and r, indicate the number of photons in the reference and SUT arm, respectively, and $\phi_{N00N} = \phi(\lambda_1) + \phi(\lambda_2)$. We obtain the spectral dependence of $\phi_{N00N}$ by computing $\phi(\lambda_1)$ and $\phi(\lambda_2)$ according to equation~\ref{ClassicalPhase} and respecting energy conservation during the downconversion process: \begin{equation} \phi_{N00N}(\lambda) \approx \frac{d^2 n}{d \lambda^2}\Bigg|_{\lambda^*} \cdot \frac{\pi \, L_{\rm s} \cdot (\Delta \lambda)^2}{\frac{\lambda^*}{2}+\Delta \lambda} + \phi_{\rm off}. \label{QuantumPhase} \end{equation} Here, $\phi_{\rm off} = \frac{4\,\pi \left( n(\lambda^*) \,L_{\rm s} - L_{\rm r} \right) }{\lambda^*}$ is an offset term, and $\Delta \lambda = \lambda - \lambda^*$. The phase-dependent two-photon coincidence rate $R$ is then $R \propto 1+\cos \left( \phi_{N00N} \right)$.
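To illustrate the doubled spectral phase sensitivity implied by equations~\ref{ClassicalPhase} and \ref{QuantumPhase}, the following short numerical sketch (our own illustration, with arbitrarily chosen parameters rather than the values of the actual setup) evaluates both phases for a toy fibre and compares how many fringes accumulate over the same spectral window.
\begin{verbatim}
import numpy as np

# Assumed toy parameters, for illustration only.
lam0 = 1560.5e-9    # stationary-phase / degeneracy wavelength [m]
L_s  = 1.0          # sample length [m]
d2n  = -3.3e9       # d^2 n / d lambda^2 [1/m^2], typical for SMF near 1550 nm

dlam = np.linspace(0.0, 50e-9, 2001)                           # detuning [m]
# classical phase of the main text (second-order term only) and two-photon N00N phase
phi_cl   = 2*np.pi*L_s * 0.5*d2n * dlam**2 / (lam0   + dlam)
phi_noon =   np.pi*L_s *     d2n * dlam**2 / (lam0/2 + dlam)

fr_cl, fr_noon = abs(phi_cl[-1])/(2*np.pi), abs(phi_noon[-1])/(2*np.pi)
print(f"fringes over 50 nm: classical ~{fr_cl:.1f}, N00N ~{fr_noon:.1f}, "
      f"ratio ~{fr_noon/fr_cl:.2f}")        # ratio close to 2
\end{verbatim}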
In the past, numerous studies have investigated the term $\phi_{\rm off}$, as it allows measuring optical phase-shifts at constant wavelengths with doubled resolution compared to the standard approach~\cite{Crepsi_protein-concentration_2012,Ono_microscopy_2013,Israel_microscopy_2014}.\\ We access here, for the first time, the wavelength-dependent term in equation~\ref{QuantumPhase} by recording $R$ as a function of $\Delta \lambda$, \textit{i.e.} the two-photon coincidence rate is measured as a function of the paired-photons' wavelengths. This leads to several pertinent purely quantum-enabled features: due to the use of an energy-time entangled two-photon $N00N$-state, the required precision on equilibrating the interferometer is $\sim$10\,m instead of microns to millimetres in standard WLI~\cite{Naganuma_subwavelength_1990,Diddams_dispersion-WLI_1996,MethodsComparison,Galle_thesis_2014}. This is particularly interesting for improving the ease-of-use as no re-alignment is necessary when changing the SUT; compared to equation~\ref{ClassicalPhase}, the third-order term $\tfrac{d^3n}{d \lambda^3}$ in equation~\ref{QuantumPhase} is cancelled thanks to energy-time correlations~\cite{Nasr_QOCT_2004}; furthermore, the wavelength at which chromatic dispersion is measured, $\lambda^*$, does not have to be extracted from the data, as it is exactly twice the wavelength of the continuous-wave pump laser, $\lambda_{\rm p}$, and can therefore be known with extremely high accuracy. This means that the quantum strategy allows data fitting using exactly one free parameter, namely $\tfrac{d^2n}{d \lambda^2}\Big|_{\lambda^*}$ which is an essential step towards absolute optical property determination with high precision without systematic errors. Finally, due to the use of a two-photon $N00N$-state, a doubled resolution on $\tfrac{d^2n}{d \lambda^2}\Big|_{\lambda^*}$ is achieved, allowing to perform measurements on shorter samples and components compared to standard WLI, \textit{i.e.} down to the technologically interesting mm to cm scale. \subsection{Detailed optical setup and data acquisition} To benchmark standard and quantum approaches, we employ a 1\,m long SMF28e fibre from Corning as SUT. For all measurements, we employ the same interferometer and actively stabilise it using a reference laser and a piezoelectric transducer on one mirror in the reference arm (more details are provided in the methods section). This ensures that $\phi_{\rm off}$ remains constant.\\ For chromatic dispersion measurements using classical WLI, we use a state-of-the-art superluminescent diode. At the output of the interferometer we measure an average spectral intensity of $\sim$125\,pW/nm from $1450$ to $1650\rm\,nm$. Interferograms are recorded using a spectrometer from Anritsu (model MS9710B) with 0.1\,s integration time and 0.5\,nm resolution which are standard parameters for this kind of measurement~\cite{Naganuma_subwavelength_1990,Diddams_dispersion-WLI_1996}.\\ For the Q-WLI approach, the light source is made of a 780.246\,nm laser, pumping a \mbox{type-0} periodically-poled lithium niobate waveguide (PPLN/W). We stabilise the laser wavelength against the $F=2 \rightarrow F'=2 \times 3$ hyperfine crossover transition in atomic $^{87}$Rb, such that $\lambda_{\rm p}$ and $\lambda^*$ are known with a precision on the order of 1\,fm. 
The quasi-phase matching in the PPLN/W is engineered so as to generate energy-time entangled photon pairs around the degenerate wavelength of $\lambda^*=1560.493\rm\,nm$ with a bandwidth of about 140\,nm~\cite{Alibart_PPLN_2016}. To detect the paired photons, we use an InGaAs SPD (IDQ 220) at one interferometer output. The single photon spectrometer at the other output is made of a wavelength-tunable 0.5\,nm bandpass filter, followed by another InGaAs SPD (IDQ 230). To avoid saturation of these detectors, the spectral intensity at the interferometer output is reduced to about 25\,fW/nm, which is partially compensated by increasing the integration time to 8\,s.\\ All measurements are repeated 100 times on the same SUT in order to infer the statistical accuracy of both WLI and Q-WLI approaches. \section{Results and discussion} \subsection{Statistical analysis for comparing measurement precision} Typical interference patterns for chromatic dispersion measurements using both methods are shown in \figurename~\ref{Fig2}(a,b). With the Q-WLI setup, we find twice as many interference fringes for the same spectral bandwidth, which is a direct consequence of the doubled phase sensitivity of the two-photon $N00N$-state. \begin{figure}[h] \includegraphics{Fig2_bis} \caption{Typical measurements acquired for inferring chromatic dispersion in a 1\,m long standard single-mode fibre using standard WLI \textbf{(a)}, and Q-WLI \textbf{(b)}. Red dots, data points; Blue lines, appropriate fits to the data following equations~\ref{ClassicalPhase} and \ref{QuantumPhase} from which $D$ is extracted. Error bars assume Poissonian photon number statistics. For standard WLI, normalization is obtained by measuring two reference spectra. For Q-WLI, normalization is performed \textit{on-the-fly} by counting non-zero arrival time difference coincidences. For more details, refer to the supplementary information.\label{Fig2}} \end{figure} After acquiring $2 \times 100$ measurements on the same SUT, we infer the precision of both approaches. The results of the statistical data analysis are shown in \figurename~\ref{Fig3}. For standard WLI, we obtain, on average, $D=17.047\rm\, \frac{ps}{nm \cdot km}$ at $\lambda_0 \approx 1560.5\rm\,nm$ with a standard deviation of $\sigma_{\rm classical} = 0.051\rm\, \frac{ps}{nm \cdot km}$. This result is amongst the best reported to date in the literature~\cite{Hlubina_one_percent,Laurent_dispersion,Kardas_160fs,Hlubina_sub_100fs,Ye_few_percent,MethodsComparison,Galle_280fs}. For Q-WLI, we measure, on average, $D=17.035\rm\, \frac{ps}{nm \cdot km}$ at $\lambda^*=1560.493\rm\,nm$ with a significantly better standard deviation of $\sigma_{N00N} = 0.021\rm\, \frac{ps}{nm \cdot km}$. In our two sets of data, we observe a difference of $0.012\rm\, \frac{ps}{nm \cdot km}$ between the central values, which is larger than the deviation expected from statistical uncertainties ($0.006\rm\, \frac{ps}{nm \cdot km}$). Polarisation mode dispersion is also excluded as it would introduce at most an offset of $0.003\rm\, \frac{ps}{nm \cdot km}$. Consequently, the difference in central values must originate from systematic errors. In this sense we compute that, for standard WLI, the difference is explained by either a slight wavelength offset of the spectrometer ($<0.2\,\rm nm$) or a slightly unbalanced interferometer ($\sim$1.5$\,\rm\mu m$).
Both types of errors induce an error on the fitting parameter $\lambda_0$ which is translated to an error in $\frac{d^2n}{d \lambda^2}\Big|_{\lambda_0}$~\cite{Naganuma_subwavelength_1990,Diddams_dispersion-WLI_1996}. At this point, we emphasise again that in our Q-WLI approach, $\lambda^*$ is essentially known with absolute accuracy and an unbalanced interferometer does not influence the measurement. As Q-WLI presents less sources of systematic errors, it is therefore natural to consider that Q-WLI determines chromatic dispersion with absolute accuracy. We further emphasise that our measurements performed with Q-WLI involve $\sim$62 times less photons transmitted through the SUT compared to standard WLI. It is therefore interesting to compare the achievable precision normalised to the number of transmitted photons. For each standard and quantum interferogram, the number of photons reaching the interferometer outputs is $N_{\rm std} \approx 2.0 \cdot 10^{10}$ and $N_{\rm quant} \approx 3.1 \cdot 10^8$, respectively. Consequently, the standard and quantum methods achieve precisions of $\left(\Delta D \right)_{{\rm std}} = 7146\,{\rm \frac{ps}{nm \cdot km}} / \sqrt{N_{\rm std}}$ and $\left(\Delta D \right)_{{\rm quant}} = 372\,{\rm \frac{ps}{nm \cdot km}} / \sqrt{N_{\rm quant}}$, respectively. In other words, in addition to being more prone to systematic errors, the standard measurement requires $369 \times$ more photons for achieving the same precision as Q-WLI. \begin{center} \begin{figure}[h] \includegraphics{Fig3_bis} \caption{Histogram of inferred chromatic dispersion coefficients after 100 repetitions on the same SUT for both standard (blue) and quantum-enhanced (red) measurements, respectively. Fits to the data assume a normal distribution.\label{Fig3}} \end{figure} \end{center} \subsection{Device calibration using Q-WLI} Another advantage provided by Q-WLI lies in straightforward device calibration. All the optical components in the interferometer actually show small residual chromatic dispersion, and this undesired offset needs to be evaluated and subtracted from the data in order to avoid systematic errors. In both cases, this implies performing a measurement without any SUT.\\ Note that in standard WLI, removing the SUT significantly unbalances the interferometer, and in order to observe interference the length of the reference arm has to be reduced accordingly (typically on the order of 1\,m). This procedure is technically challenging, time-consuming, and might lead to additional systematic errors.\\ At this point, Q-WLI shows its ability for user-friendly operation. Even after removing the SUT, interference is observed, without any interferometer realignment. \begin{figure}[h] \includegraphics{Fig4_bis} \caption{Experimental results when using Q-WLI for inferring residual chromatic dispersion in our interferometer without the SUT. Red dots, data points; Blue lines, appropriate fit to the data.\label{Fig4}} \end{figure} \figurename~\ref{Fig4} shows the experimental results that we have obtained when measuring chromatic dispersion in our bare interferometer, \textit{i.e.} without the SUT. It turns out that in our interferometer, residual chromatic dispersion amounts to $\sim$10\% of the measured values on the 1\,m SUT. For all the data discussed above, except for the raw data in \figurename~\ref{Fig2}(a) and (b), we have subtracted the residual chromatic dispersion. 
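As a quick consistency check of the figures quoted in this section (our own back-of-envelope arithmetic, using the rounded values reported above), one can verify the photon-normalised precisions and the relation between the dispersion coefficient and the second derivative of the refractive index:
\begin{verbatim}
# Back-of-envelope checks using the rounded values quoted in the text.
c    = 299792458.0            # speed of light [m/s]
lam  = 1560.5e-9              # wavelength [m]
D    = 17.047e-6              # 17.047 ps/(nm km) expressed in SI units [s/m^2]

print(-D * c / lam)           # d2n/dlam2 ~ -3.3e9 1/m^2, from D = -(lam/c) d2n/dlam2
print((7146 / 372) ** 2)      # ~369: photon-cost ratio of WLI vs Q-WLI at equal precision
print(0.051 / 0.021)          # ~2.4: ratio of the two standard deviations
\end{verbatim}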
\section{Conclusions} We have introduced and demonstrated the concept of spectrally-resolved Q-WLI exploiting energy-time entangled two-photon $N00N$-states. Compared to standard measurements, the $N00N$-state permits achieving a two times higher phase sensitivity. More strikingly, the peculiar use of such quantum states of light reduces the number of free parameters for fitting experimental data from three to one, representing a major advantage for determining optical properties with high precision and absolute accuracy. In addition, our setup does not require a balanced interferometer for performing the measurement which represents a significant time-saving advantage compared to standard WLI. This is of particular interest for device calibration and when considering measurements on a large set of samples. As an exemplary demonstration, we have applied our scheme to infer chromatic dispersion in a standard single-mode fibre, obtaining 2.4 times more precise results compared to state-of-the-art realisations, despite using $\sim$62 times less photons. We note that the sensitivity of our approach could be further doubled by using a double-pass configuration~\cite{Laurent_dispersion}, towards achieving measurements on short samples, such as optical components and waveguide structures (mm to cm length scale). Such measurements would also be of interest for medical applications where precise knowledge on chromatic dispersion in tissues is required to yield optimal image quality in optical coherence tomography~\cite{Drexler_OCT_1999}. In this perspective, the reduced number of photons required for quantum white-light interferometry is also highly interesting for measurements performed on photosensitive biological samples~\cite{Frigault_Live_2009,Celebrano_Single_Molecule_2011,Marek_Direct_2014}. Regarding optical telecommunication systems, by rotating the polarisation of the entangled photon pairs, our setup could be also used for measuring polarisation mode dispersion in optical components, which would lead to refinement of manufacturing processes. In addition, total measurement times could be reduced far below 1\,s by employing high-speed superconducting detectors showing $\sim$3 orders of magnitude higher saturation levels compared to the InGaAs SPDs employed here~\cite{Zwiller_detector_2017}. Alternatively, quantum-inspired strategies may also prove to be suitable~\cite{Mazurek_qinspired_2013,Manceau_qinspired_2017}. In summary, we believe that combining fundamental and conceptual advantages enabled by quantum light is a very promising approach for the future development and improvement of applications requiring absolute and high-precision measurements of optical properties. \section{Acknowledgements} The authors acknowledge financial support from the Foundation Simone \& Cino Del Duca, the European Commission for the FP7-ITN PICQUE project (grant agreement No 608062), l'Agence Nationale de la Recherche (ANR) for the CONNEQT, SPOCQ and SITQOM projects (grants ANR-EMMA-002-01, ANR-14-CE32-0019, and ANR-15-CE24-0005, respectively), and the iXCore Research Foundation. We also thank M.~T. P\'erez Zaballos for help on statistical data treatment, as well as M. Mitchell, T. Debuisschert, A. Levenson, and P. Neumann for fruitful discussions.
\section{Introduction} \label{sec:intro} \begin{figure*} \includegraphics[width=\linewidth]{pipeline.png} \centering \caption{Pipeline of the kernel computation for a polyhedron. In the first step, we compute the Axis Aligned Bounding Box (AABB) of the polyhedron; then, we iterate on each face $f$ of the polyhedron (black edges) and cut AABB with the plane induced by $f$ (red edges).} \label{fig:pipeline} \end{figure*} The concept of \emph{geometric kernel} of a polygon, a polyhedron, or more generally of a shape, is a pillar of computational geometry. Roughly speaking, the kernel of a closed geometric shape $S$ is the locus of the points internal to $S$ from which the whole shape $S$ is visible. The kernel of a convex shape coincides with the shape itself, while the concept is particularly interesting for non-convex shapes and, in particular, polytopes. The kernel of a non-convex shape may instead be empty; for instance, it is always empty for non simply-connected objects. In the two-dimensional scenario, that is when the shape is a polygon, the standard way of computing the kernel is by intersecting appropriate half-planes generated from its edges. This problem has been tackled since the 70s, when \cite{ShamosHoey} presented an algorithm able to perform the kernel computation in $O(e\log e)$ operations, where $e$ is the number of edges of the polygon, as the intersection of $e$ half-planes. After that, an optimal algorithm able to run in $O(n)$ operations over an $n$-sided polygon has been proposed in \cite{LeePreparata}. Famous computational tools and libraries like \textit{Boost} \cite{BoostLibrary}, \textit{Geogram} \cite{levy2015geogram}, \textit{CGAL} \cite{fabri2009cgal}, or \textit{Libigl} \cite{jacobson2017libigl} currently implement routines to compute intersections between polygons and planes, which can be used to estimate the kernel. In the first attempts to solve the volumetric version of the problem, for example in \cite{PreparataShamos}, the natural approach has been to extend the 2D method (which we call \textit{geometric}) to the 3D case from a theoretical point of view, but it was soon dismissed as unattractive for computational reasons. It was replaced by a new approach (which we call \textit{algebraic}) which makes use of linear algebra and homogeneous coordinates, and that is the state of the art for computing 3D kernels currently implemented by libraries like CGAL. Over the years, the polygon kernel computation has become popular to address several problems based on simple polygon analysis, such as star-components decomposition and visibility algorithms that are of interest in robotics, surveillance, geometric modeling, computer vision and, recently, in the emerging field of additive manufacturing \cite{demir2018near}. Today, the geometric kernel of a polytope is a pivotal piece of information for understanding the geometrical quality of an element in the context of finite elements analysis. While in the past years finite element methods were only designed to work on convex elements like triangles/tetrahedra or quadrangles/hexahedra \cite{ciarlet2002finite}, recent and more complex methods like the Mimetic Finite Difference Method \cite{lipnikov2014mimetic}, the Virtual Elements Method \cite{beirao2013basic}, the Discontinuous Galerkin Method \cite{cockburn2012discontinuous} or the Hybrid High Order Method \cite{di2019hybrid} are able to deal with non-convex polytopes.
This enrichment of the class of admissible elements led researchers to further investigate the concept of the geometric quality of a polytope, and to define quality measures and metrics for the mesh elements, whether they are polygons \cite{attene2021benchmark,sorgente2022role} or polyhedra \cite{sorgente2021polyhedral}. In this setting, the geometric kernel is often associated with the concepts of \textit{shape regularity} and \textit{star-shapedness} of an element. For example, as analyzed in \cite{sorgente2021vem}, most of the error estimates regarding the Virtual Elements Method (but the same holds for other polytopal methods) are based on the theory of polynomial approximation in Sobolev spaces, assuming the star-shapedness of the elements \cite{Brenner-Scott:2008,dupont1980polynomial}. As a consequence there are a number of sufficient geometrical assumptions on the computational domain for the convergence of the method, which require an estimate of the kernel. When dealing with non-trivial meshes \cite{antonietti2022high, berrone2019parallel}, such quality measures/metrics/indicators require computing the kernel of thousands of polytopes, each of them with a limited number of faces and vertices, in the shortest possible time. A preliminary algorithm for the computation of the kernel of a polyhedron using the geometric approach was introduced in \cite{SorgenteKernel}. There, we experimentally showed how this type of approach in practice can significantly outperform the algebraic one, for instance, when polyhedral elements have a limited number of faces and vertices or several faces are co-planar. In this paper we optimize the algorithm, for instance by preliminarily estimating the position of all vertices with respect to the planes induced by the faces, by introducing the use of exact geometric predicates \cite{shewchuk1997adaptive}, and by adopting a random strategy for visiting the faces of a polyhedron that identifies more rapidly polyhedra whose kernel is empty. We also simplify the description of the algorithmic routines and we introduce new scientific results on a larger variety of models that confirm the validity of a geometry-based kernel algorithm. The paper is organized as follows. In Section~\ref{sec:preliminary} we introduce some notation and discuss the difference between the geometric and the algebraic approach. In Section~\ref{sec:kernel} we detail the algorithm for the construction of the kernel of a polyhedron. In Section~\ref{sec:tests} we exhibit some examples of computed kernels and analyze the performance of the algorithm, comparing it with an implementation of the algebraic approach and with its previous version \cite{SorgenteKernel}. In Section~\ref{sec:conclusions} we sum up pros and cons of the algorithm and draw some conclusions. \section{Preliminary concepts} \label{sec:preliminary} We introduce some notations and preliminary concepts that we will use in the rest of the paper. \subsection{Notations} \label{subsec:notations} We define a \textit{polyhedron} as a finite set of plane polygons such that every edge of a polygon is shared by exactly one other polygon and no subset of polygons has the same property \cite{PreparataShamos}. The vertices and the edges of the polygons are the \textit{vertices} and the \textit{edges} of the polyhedron; the polygons are the \textit{faces} of the polyhedron. In this work we only consider \textit{simple} polyhedra, which means that there is no pair of nonadjacent faces sharing a point.
A polyhedron $P$ is said to be \textit{convex} if, given any two points $p_1$ and $p_2$ in $P$, the line segment connecting $p_1$ and $p_2$ is entirely contained in $P$. It can be shown that the intersection of convex polyhedra is a convex polyhedron \cite{PreparataShamos}. Two points $p_1$ and $p_2$ in $P$ are said to be \textit{visible} from each other if the segment $(p_1,p_2)$ does not intersect the boundary of $P$. The \textit{kernel} of $P$ is the set of points in the interior of $P$ from which all the points in $P$ are visible. The first obvious consideration is that the kernel of a polyhedron is a convex polyhedron. If $P$ is convex its kernel coincides with its interior, because any two points inside a convex polyhedron are visible from each other. A polyhedron may also not have a kernel at all; in this case we say that its kernel is \textit{empty}. Last, a polyhedron $P$ is called \textit{star-shaped} if there exists a sphere, with non-zero radius, completely contained in its kernel. A polyhedron is star-shaped if and only if its kernel is not empty, therefore star-shapedness can be thought as an indicator of the existence of a kernel. Star-shapedness is weaker than convexity, and it is often used in the literature as many theoretical results in the theory of polynomial approximation in Sobolev spaces rely on this condition \cite{dupont1980polynomial, Brenner-Scott:2008}. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{tent.png} \caption{A sequence of parametric polyhedra whose kernel is progressively shrinking. The kernel is the polyhedron delimited by the red edges.} \label{fig:tent} \end{figure} In order to give a visual example of these concepts, in Fig.~\ref{fig:tent} we present a parametric object shaped like a tent, with the parameter regulating the height of the ``entrance'' from the basis. This object is not convex, but in the first examples it is star-shaped and it has a proper kernel, delimited by the red edges. As the parameter increases, the set of points from which the whole polyhedron is visible becomes smaller, and so does the star-shapedness radius. The last example of Fig.~\ref{fig:tent} is not star-shaped anymore, i.e. the kernel is empty. \subsection{The algebraic approach to the kernel computation} As observed in Section~\ref{sec:intro}, the state of the art algorithm for the computation of the kernel of a polygon follows a geometric approach: the kernel is found as the intersection of half-planes originating from its edges. We use the term ``geometrical'' because the algorithm computes repeatedly a sequence of geometric intersections between polygons and planes. This idea was afterwards optimized until obtaining an algorithm able to run in $O(n)$ operations, which has been proven to be optimal \cite{LeePreparata}. When facing the 3D version of the problem, one natural way could be to extend the 2D algorithm, which is well studied and documented, to the higher dimension. The problem with the 3D case is that, whereas two convex polygons with respectively $n_1$ and $n_2$ vertices can be intersected in time $O(n)$, being $n=n_1+n_2$, two convex polyhedra with the same parameters are intersected in time $O(n\log n)$, thus the generalization of the two-dimensional instance would yield an $O(n\log^2 n)$ algorithm. This is in contrast with the result shown in \cite{PreparataShamos}, where a lower bound for the intersection of convex polyhedra is established at $O(n\log n)$. 
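For concreteness, the two-dimensional geometric approach recalled at the beginning of this subsection can be sketched in a few lines: the kernel of a simple polygon with counter-clockwise oriented boundary is the intersection of the half-planes lying to the left of its directed edges. The following Python sketch is ours and is given purely for illustration; it is not the implementation of any of the cited libraries, nor the optimal algorithm of \cite{LeePreparata}.
\begin{verbatim}
def clip_left(poly, a, b):
    """Keep the part of the convex polygon `poly` lying to the left of the line a->b."""
    def side(p):
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    out = []
    for i in range(len(poly)):
        p, q = poly[i], poly[(i + 1) % len(poly)]
        sp, sq = side(p), side(q)
        if sp >= 0:
            out.append(p)
        if sp * sq < 0:                     # the polygon edge crosses the clipping line
            t = sp / (sp - sq)
            out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    return out

def polygon_kernel_2d(poly):
    xs, ys = zip(*poly)                     # start from the bounding box of the polygon
    ker = [(min(xs), min(ys)), (max(xs), min(ys)),
           (max(xs), max(ys)), (min(xs), max(ys))]
    for i in range(len(poly)):              # clip by the half-plane of every directed edge
        ker = clip_left(ker, poly[i], poly[(i + 1) % len(poly)])
        if len(ker) < 3:
            return []                       # empty kernel
    return ker

# L-shaped polygon (counter-clockwise): its kernel is the unit square [0,1] x [0,1]
print(polygon_kernel_2d([(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]))
\end{verbatim}
The three-dimensional analogue replaces the half-plane clipping by polyhedron--plane intersections, whose higher cost is what motivated the algebraic alternative discussed next.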
Therefore the geometric approach to the 3D problem was soon dismissed as unattractive, and alternative ways have been explored. A new algorithm was formulated, based on the so-called ``double duality trick'', which makes use of linear algebra and homogeneous coordinates. Thanks to this algebraic approach, described in \cite[Section 7.3.2]{PreparataShamos}, it is possible to compute the intersection of $n$ half-spaces in time $O(n\log n)$ \cite{PreparataShamos}. This algorithm can be implemented inside the framework of the CGAL library, although there is currently not an explicit routine for computing the kernel of a polyhedron and one has to connect the function for the intersection of half-spaces to the polyhedron data structure. The whole routine implies: \begin{enumerate} \item solving a linear problem to find a point in the interior of the polyhedron; \item translating the polyhedron so that the interior point is the origin; \item calculating the dual polyhedron, defined as the convex hull of the vertices dual to the original faces in regard to the unit sphere (i.e., halfspaces at distance $d$ from the origin are dual to vertices at distance $1/d$); \item calculating the resulting polyhedron, which is the dual of the dual polyhedron; \item translating the origin back to the interior point. \end{enumerate} While from a theoretical point of view the cited results are indubitable, we believe that in many practical situations the geometric approach could perform better than the algebraic one. This is due to the fact that the cost of solving a linear problem cannot fall below a certain bound, while the intersection of half-spaces can become extremely cheap in some circumstances. For instance if the number of faces and vertices of the polyhedron is low, this method could be more efficient than the algebraic one, and there may be other situations in which one could take advantage of a wise treatment of the geometrical operations. \subsection{Data structure} \label{subsec:data} We adopt the following data structure inherited by the \textit{cinolib} library \cite{livesu2019cinolib}, in which the code has been written: \begin{itemize} \item \textit{Points:} array of unordered 3D points. % \item \textit{Face:} array of unsigned integers associated to a \textit{Points} array, representing the indices of the vertices of a face ordered counter-clockwise. % \item \textit{Polyhedron:} struct composed by a field \textit{verts} of type \textit{Points} containing the vertices and a field \textit{faces} of type \textit{array $<$Face$>$} containing the faces of a polyhedron. % \item \textit{Plane:} class defining a plane in the Hessian form, composed by a 3D point $n$ indicating the unit normal of the plane (i.e. the $a,b,c$ coefficients of the plane equation) and a number $d$ indicating the distance of the plane from the origin (i.e. the $d$ coefficient of the plane equation). % The plane class also contains three additional points $p_1,p_2,p_3$ lying on the plane, useful for the Shewchuck exact predicates \cite{shewchuk1997adaptive}. % \item \textit{Sign:} array of labels (BELOW, ABOVE or INTER) used to store information on the position of the elements of a polyhedron (points, edges or faces) with respect to an unspecified plane. 
\end{itemize} Given a plane $p$, the elements of a polyhedron are classified as follows: \begin{itemize} \item A point $v$ is labelled as BELOW, ABOVE or INTER provided that the function \textit{orient3d}$(p.p_1,p.p_2,p.p_3,v)$ in the Shewchuck exact predicate library \cite{shewchuk1997adaptive} is negative, positive or zero, within a tolerance of $10^{-8}$; \item An edge $e$ is labelled as BELOW (resp. ABOVE) if both its endpoints are BELOW or INTER (resp. ABOVE or INTER), and as INTER if one point is ABOVE and the other one is BELOW. \item A face $f$ is labelled as BELOW if all of its points are BELOW, as ABOVE if none of its points is BELOW and as INTER otherwise. \end{itemize} The classification of points is computed at the top level of the algorithm with Shewchuck exact predicates, defining the symbol CINOLIB$\_$USES$\_$EXACT$\_$PREDICATES at compilation time. For edges and faces instead, we define a function \textit{classify(S)} which determines the classification of an edge or a face from an array $S$ of type \textit{Sign} containing the classification of its points. The introduction of the labels and the points evaluation at the beginning of the process are two main computational novelties with respect to the previous version of the algorithm \cite{SorgenteKernel}. \section{Polyhedron kernel algorithm} \label{sec:kernel} In this section, we illustrate our method for computing the kernel of a polyhedron with a geometric approach. It has a modular structure composed of four nested algorithms, each one calling the next one in its core part. It is modular in the sense that each algorithm can be entirely replaced by another one performing the same operation(s). This property is particularly useful for making comparisons: one could, for instance, use different strategies for computing the intersection between a polygon and a plane and simply replace Algorithm~\ref{alg:polygon-plane}, comparing the efficiency of the different variants. \subsection{Polyhedron Kernel} \label{subsec:polyhedron_kernel} With Algorithm~\ref{alg:kernel} we tackle the general problem: given a polyhedron $P$, we want to find the polyhedron $K$ representing its kernel. In addition to $P$ we also need as input an array containing the outward normals of its faces, as it is not always possible to determine the orientation of a face only from its vertices (for example with non-convex faces). We require the face normals explicitly, and not simply a boolean indicating the orientation of the faces, because these points will be used to define the planes containing the faces of $P$. We initialize $K$ with the axis aligned bounding box (AABB) of $P$, i.e. the box with the smallest volume within which all the vertices of $P$ lie, aligned with the axes of the coordinate system. Then we recursively ``slice'' $K$ with a number of planes, generating a sequence of convex polyhedra $K_i$, $i=1,\dots,\#$\textit{P.faces}, such that $K_i\subseteq K_{i-1}$. For each face $f$ of $P$ we define the plane $p$ containing its vertices and with normal vector given by the opposite of the face normal $N(f)$, that is to say, $p.n:=-N(f)$. We consider the plane together with the direction indicated by $p.n$, which is equivalent to considering the half-space originating in $p$ and containing its normal vector. Given this plane, in a \textit{Sign} array $S$ we store the classification of all the points in $K.verts$ according to their position with respect to $p$.
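The labelling rules above can be summarised by the following minimal sketch (an illustration of ours in Python, not the actual cinolib/C++ implementation; a plain floating-point determinant with a tolerance replaces the exact predicates, and its sign convention is only meant to reproduce the three-way labelling):
\begin{verbatim}
import numpy as np

BELOW, ABOVE, INTER = -1, 1, 0
TOL = 1e-8

def orient3d(p1, p2, p3, v):
    # sign of det(p2-p1, p3-p1, v-p1): positive on one side of the plane
    # through p1, p2, p3, negative on the other, zero (within TOL) on the plane
    d = np.linalg.det(np.array([p2 - p1, p3 - p1, v - p1]))
    return BELOW if d < -TOL else ABOVE if d > TOL else INTER

def classify_edge(s1, s2):
    if {s1, s2} == {BELOW, ABOVE}:
        return INTER
    if BELOW in (s1, s2):
        return BELOW
    if ABOVE in (s1, s2):
        return ABOVE
    return INTER          # degenerate case: both endpoints lie on the plane

def classify_face(signs):
    if all(s == BELOW for s in signs):
        return BELOW
    if not any(s == BELOW for s in signs):
        return ABOVE
    return INTER

# toy usage: the plane z = 0 through (0,0,0), (1,0,0), (0,1,0)
P1, P2, P3 = np.array([0., 0, 0]), np.array([1., 0, 0]), np.array([0., 1, 0])
signs = [orient3d(P1, P2, P3, np.array(v, dtype=float))
         for v in [(0, 0, 1), (0, 0, -1), (2, 3, 0)]]
print(signs, classify_face(signs))    # [1, -1, 0] and 0, i.e. an INTER face
\end{verbatim}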
For improving the efficiency, we can set the sign of all points belonging to $f$ to zero without evaluating their position. In general $p$ will separate $K$ into two polyhedra, and between those two we keep the one containing the vector $p.n$, which therefore points towards the interior of the element. This operation is performed by the \textit{Polyhedron-Plane Intersection} algorithm detailed in Section \ref{subsec:polygon_plane}, which replaces $K$ with the new polyhedron. The order in which we consider the faces is not relevant from a theoretical point of view, but turns out to have a huge impact on the performance. For instance, if we imagine to compute the kernel of a ``ring'', which is obviously empty, visiting the faces in the order they are stored may take a very long time, especially if the tessellation of the object is fine. This is because, generally, the faces of a tessellated model are numbered somehow coherently with their neighbors. For this reason, we optionally propose to visit the faces in random order, or \emph{shuffle} $P.faces$, and to return an empty polyhedron if after a ``slice'' $K$ has less than three faces. In this way, the empty kernel of a ring with thousands of faces could be detected in just three or four iterations. When this command is turned on, we say the algorithm is run in \textit{shuffle} mode. We point out that cutting a convex polyhedron with a plane will always generate two convex polyhedra, and since we start from the bounding box (which is convex), we are guaranteed that $K$ will always be a convex polyhedron. No matter how weird the initial element $P$ is, from this point on we will only be dealing with convex polyhedra and convex faces. Last, we could as well start with considering the polyhedron's convex hull instead, but it would be less efficient because the convex hull costs in general $O(n\log n)$ while the AABB is only $O(n)$, and we would still need to intersect the polyhedron with each of its faces. \begin{algorithm}[htbp] \caption{Polyhedron Kernel} \label{alg:kernel} \begin{algorithmic}[1] \Require Polyhedron $P$, Points $N$ (faces normals); \Ensure Polyhedron $K$ \State $K$ := AABB of $P$; \State \textit{[optional]} shuffle $P.faces$; \For{Face $f$ in $P.faces$} \State Plane $p$ := plane containing $f$ with normal $-N(f)$; \State Sign $S$ := orient3d$(p.p_1, p.p_2, p.p_3, K.verts)$; \State $K$ := Polyhedron-Plane Intersection($K$, $S$, $p$); \If{size($K.faces$) $<3$} return NULL; \EndIf \EndFor \State \Return $K$; \end{algorithmic} \end{algorithm} \subsection{Polyhedron-Plane Intersection} \label{subsec:polyhedron_plane} With the second algorithm we want to intersect a polyhedron $P$ with a plane $p$, given in $S$ the position of the vertices of $P$ with respect to $p$. The intersection will in general determine two polyhedra, and between these two we are interested in the one containing the normal vector of $p$ (conventionally called the one ``above'' the plane and indicated with $A$). This algorithm is inspired from \cite{ahn2008geometric}, where the authors define an algorithm for the intersection of a convex polyhedron with an half-space. 
\begin{figure}[htbp] \centering \begin{tabular}{cc} \includegraphics[width=.46\linewidth]{clipping.png} & \includegraphics[width=.46\linewidth]{capping.png}\\ (a) & (b) \end{tabular} \caption{Intersection of a polyhedron with a plane: (a) clipping and (b) capping of a cube.} \label{fig:polyhedron-plane} \end{figure} The first part of Algorithm~\ref{alg:polyhedron-plane} is called the ``clipping'' part (recalling the terminology from \cite{ahn2008geometric}) and consists in clipping each face of $P$ with the plane $p$, see Fig.~\ref{fig:polyhedron-plane}(a). It corresponds to the \textit{for} loop: we iterate on \textit{P.faces}, each time extracting from $S$ the labels $f_s$ of the vertices $f_v$ of the current face and using the \textit{classify} function. Faces classified as BELOW are discarded, ABOVE faces are added to $A$ together with their vertices, and INTER faces are split by the \textit{Polygon-Plane Intersection} algorithm. While we visit every face only once, the same does not hold for vertices, therefore we check if a vertex is already in $A.verts$ before adding it. This simple idea of checking in advance the faces classification resolves several implementation issues and in some cases significantly improves the efficiency of the algorithm. By doing this, we make sure that only the faces properly intersected by the plane are passed to Algorithm~\ref{alg:polygon-plane}, so that we do not need to implement all the particular cases of intersections in a single point or along an edge or of faces contained in the plane. In addition, for every face not passed to Algorithm~\ref{alg:polygon-plane} we have an efficiency improvement, and this happens frequently in models with many coplanar faces like the ones considered in Section~\ref{subsec:refinements}. If, at the end of this step, $A$ contains at least three INTER points, given that $A$ and all its faces are convex, these vertices will define a ``cap'' face of $A$ completely contained in $p$, see Fig.~\ref{fig:polyhedron-plane}(b). We can optimize the algorithm by storing in a Sign array the classification of the vertices in $A$, updating it with the sign of every vertex added in the switch loop. Note that in this case we do not need to use \textit{orient3d}: we already know the sign of the old vertices and the new vertices will obviously be of type INTER. In our data structure, the vertices of the faces are ordered counter-clockwise (CCW): in order to sort the points contained in \textit{capV} we project them onto a plane, drop one coordinate and apply the algorithm proposed in \cite{baeldung} for 2D points. Note that if the cap face was not convex it would make no sense to order its vertices, but the intersection between a plane and a convex polyhedron will always generate convex faces. Last, we need to check that this new face is not already present in $A$: for example if $p$ was tangent to $P$ along a face, this face could be added to $A$ both as an ABOVE face and as a cap face. If this is not the case, we add \textit{capF} to $A.faces$ but we do not need to add any vertex from \textit{capV}, as we can assume they are all already present in $A.verts$. 
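A possible way to implement the counter-clockwise ordering of the cap vertices is sketched below (our illustration in Python; it is neither the cinolib code nor necessarily the exact procedure of \cite{baeldung}). Since the cap face is convex and all its vertices lie on the cutting plane, it suffices to project them onto that plane, drop one coordinate and sort them by angle around their centroid.
\begin{verbatim}
import numpy as np

def sort_cap_ccw(cap_pts, plane_n):
    cap_pts = np.asarray(cap_pts, dtype=float)
    drop = int(np.argmax(np.abs(plane_n)))        # drop the coordinate along the normal
    keep = [i for i in range(3) if i != drop]
    pts2d = cap_pts[:, keep]
    centroid = pts2d.mean(axis=0)
    angles = np.arctan2(pts2d[:, 1] - centroid[1], pts2d[:, 0] - centroid[0])
    order = np.argsort(angles)                    # valid because the cap face is convex
    if plane_n[drop] < 0:                         # keep CCW as seen from the normal side
        order = order[::-1]
    return order

# toy example: four points on the plane z = 0, normal (0, 0, 1)
cap = [(1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0)]
print(sort_cap_ccw(cap, np.array([0.0, 0.0, 1.0])))   # [3 0 1 2], a CCW ordering
\end{verbatim}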
\begin{algorithm}[htbp]
\caption{Polyhedron-Plane Intersection} \label{alg:polyhedron-plane}
\begin{algorithmic}[1]
\Require Polyhedron $P$, Sign $S$, Plane $p$
\Ensure Polyhedron $A$
\For{Face $f$ in $P.faces$}
\State Points $f_v:=$ vertices in $P.verts$ relative to $f$;
\State Sign $f_s$ := $S_{|f_v}$
\Switch{classify$(f_s)$}
\Case{BELOW} break;
\EndCase
\Case{ABOVE}
\State $A.verts\leftarrow$ $f_v$, $A.faces\leftarrow f$;
\EndCase
\Case{INTER}
\State (\textit{V},\textit{F}):=Polygon-Plane Intersection$(f_v,f,f_s,p)$;
\State $A.verts\leftarrow$ \textit{V}, $A.faces\leftarrow$ \textit{F};
\EndCase
\EndSwitch
\EndFor
\State Points \textit{capV} := vertices in $A.verts$ which are INTER;
\If{size(\textit{capV}) $<3$} return $A$;
\EndIf
\State Face \textit{capF} := indices of \textit{capV} vertices ordered CCW;
\If{\textit{capF} $\notin$ $A.faces$} $A.faces\leftarrow$ \textit{capF};
\EndIf
\State \Return $A$;
\end{algorithmic}
\end{algorithm}
\subsection{Polygon-Plane Intersection}
\label{subsec:polygon_plane}
Algorithm~\ref{alg:polygon-plane} describes the intersection of a polygon (representing a face of the polyhedron), defined by an array of 3D points \textit{polyV} and an array of indices \textit{polyF}, with a plane $p$.
As before, we also require as input an array \textit{polyS} containing the positions of the vertices of \textit{polyV} with respect to $p$.
In analogy to Algorithm~\ref{alg:polyhedron-plane}, the intersection will in general determine two polygons, and we are only interested in the one above the plane, see Fig.~\ref{fig:polygon-line-plane}(a), defined by points \textit{aboveV} and indices \textit{aboveF}.
We generically say that a vertex $v$ is added to \textit{above} meaning that $v$ is added to \textit{aboveV} and its index $id_v$ is added to \textit{aboveF}.
This time we iterate on the edges of \textit{polyF}, extract the signs $s_1, s_2$ of the edge endpoints and switch between the three possible classifications of the edge.
In order to avoid duplicates, for each couple of consecutive points $v_1, v_2$, we only allow adding to \textit{above} the second point $v_2$ or the intersection point $v$, but never $v_1$.
We are here taking advantage of the fact that all faces are oriented coherently.
If the edge is of type BELOW we ignore it, unless $v_2$ is INTER (i.e. it lies exactly on the plane), in which case we add it to \textit{above}.
In case of ABOVE edges we add $v_2$ to \textit{above}.
For edges of type INTER we perform the \textit{Line-Plane Intersection} algorithm and find a new point $v$.
Its index $id_v$ is set to the maximum value in \textit{polyF} plus one, to make sure that we are not reusing the index of an existing point.
We always add $v$ to \textit{above}, and if $v_1$ is BELOW we also add $v_2$.
As already noted in Section~\ref{subsec:polyhedron_plane}, treating the weak intersections (the BELOW and ABOVE cases) separately makes the code simpler and more efficient.
\begin{figure}[htbp]
\centering
\begin{tabular}{cc}
\includegraphics[width=.35\linewidth]{polygon-plane.png} &
\includegraphics[width=.35\linewidth]{line-plane.png}\\
(a) & (b)
\end{tabular}
\caption{(a) Intersection between a polygon and a plane, with the above part coloured in green. (b) Intersection between a line and a plane.}
\label{fig:polygon-line-plane}
\end{figure}
\begin{algorithm}[htbp]
\caption{Polygon-Plane Intersection} \label{alg:polygon-plane}
\begin{algorithmic}[1]
\Require Points \textit{polyV}, Face \textit{polyF}, Sign \textit{polyS}, Plane $p$.
\Ensure Points \textit{aboveV}, Face \textit{aboveF}.
\For{$i=1$ : size(\textit{polyF})}
\State $id_1:=$ \textit{polyF}$(i)$, $id_2:=$ \textit{polyF}$(i+1)$;
\State $v_1:=$ \textit{polyV}$(id_1)$, $v_2:=$ \textit{polyV}$(id_2)$;
\State $s_1:=$ \textit{polyS}$(id_1)$, $s_2:=$ \textit{polyS}$(id_2)$;
\Switch{classify($s_1,s_2$)}
\Case{BELOW}
\If{$v_2$ is INTER}
\State \textit{aboveV} $\leftarrow v_2$, \textit{aboveF} $\leftarrow id_2$;
\EndIf
\EndCase
\Case{ABOVE}
\State \textit{aboveV} $\leftarrow v_2$, \textit{aboveF} $\leftarrow id_2$;
\EndCase
\Case{INTER}
\State $v:=$ Line-Plane Intersection$(v_1,v_2,p)$;
\State $id_v:=$ max(\textit{polyF})+1;
\State \textit{aboveV} $\leftarrow v$, \textit{aboveF} $\leftarrow id_v$;
\If{$v_1$ is BELOW}
\State \textit{aboveV} $\leftarrow v_2$, \textit{aboveF} $\leftarrow id_2$;
\EndIf
\EndCase
\EndSwitch
\EndFor
\State \Return \textit{aboveV}, \textit{aboveF};
\end{algorithmic}
\end{algorithm}
\subsection{Line-Plane Intersection}
\label{subsec:line_plane}
This last algorithm computes the intersection point between a line, given as a couple of vertices, and a plane.
It is a very simple and well-known procedure, and we report it here only for completeness.
The intersection vertex $v$ is defined as the linear combination of the vertices $v_1$ and $v_2$ with a coefficient $t$, which may in principle also fall outside the standard range $[0,1]$.
The coefficient $t$ is found as the negative ratio between two scalar products involving the plane normal $n$ and a point $s$ on the plane, distinct from $v_1$ and $v_2$, see Fig.~\ref{fig:polygon-line-plane}(b).
The normal is $p.n$, and for the point $s$ we can use one of the three points $p.p_1, p.p_2, p.p_3$ stored in the plane.
If the denominator $D$ vanishes, it means either that the line is contained in the plane (if $N=0$ as well) or that the line does not intersect the plane.
We treat these exceptions as errors, because in Algorithm~\ref{alg:polygon-plane} we only call this algorithm after checking that the edge $(v_1,v_2)$ properly intersects the plane $p$.
\begin{algorithm}[htbp]
\caption{Line-Plane Intersection} \label{alg:line-plane}
\begin{algorithmic}[1]
\Require vertices $v_1,v_2$, Plane $p$.
\Ensure vertex $v$.
\State $N:=(p.n)\cdot(v_1-p.p_1)$;
\State $D:=(p.n)\cdot(v_2-v_1)$;
\Assert{$D!=0$}
\State $t:=-N/D$;
\State \Return $v:=v_1+t\ (v_2-v_1)$;
\end{algorithmic}
\end{algorithm}
\subsection{Computational complexity}
\label{subsec:computational_complexity}
Taking advantage of the modular organisation of our algorithms, we can estimate the computational cost of each algorithm separately and then combine them into a single formula.
Let us consider the case of an input polyhedron $P$ with $n_v$ vertices and $n_f$ faces.
In Algorithm~\ref{alg:kernel} we start by computing the polyhedron's AABB, which is $O(n_v)$; then, for each face, we estimate the position of the vertices of $K$ with respect to a plane with \textit{orient3d}.
Unfortunately, we have to re-compute the signs of all the $n_{kv}$ vertices of $K$ at every iteration, as the plane changes from one step to the other: this means $O(n_{kv} n_f)$ operations.
At step zero we have $n_{kv}=8$, since $K$ is the bounding box; in the worst case, that is when the object is convex, $n_{kv}$ can grow up to $n_v$, because $K$ eventually coincides with $P$.
If the object is not convex, $n_{kv}$ can remain significantly lower than $n_v$, which translates into a much lower computational cost.
In both cases, the difference between $n_{kv}$ and $n_v$ is also related to the number of coplanar faces of the model, which get agglomerated into a single face of the kernel with a consequent reduction of the number of vertices.
If the kernel is empty, the algorithm stops as soon as we end up with fewer than three faces; therefore we can replace $n_f$ with the number of iterations needed to detect the non star-shapedness.
This number cannot be estimated precisely, but we can drastically reduce it with the shuffle mode.
The good news is that, once we have the sign of all the vertices with respect to all the planes, we are essentially done: the computational costs of Algorithms~\ref{alg:polyhedron-plane} and~\ref{alg:polygon-plane} amount to visiting arrays and copying parts of them into other arrays, and Algorithm~\ref{alg:line-plane} only consists of 4 operations.
We point out that navigating and duplicating arrays has a negligible cost in this scenario, as the faces of a generic polyhedron hardly contain more than 10 vertices.
The only relevant operation in Algorithm~\ref{alg:polyhedron-plane} is the sorting of the vertices of the cap face, which does not always exist.
This is done with a QuickSort routine, which is on average $O(n\log n)$ for a cap face with $n$ vertices.
Since $n$ is much smaller than $n_{kv}$ and the number of cap faces is always smaller than $n_f$, this cost is negligible compared to $O(n_{kv} n_f)$.
In summary, we can set an upper bound on the computational cost of our algorithm at $O(n_v n_f)$, but both $n_v$ and $n_f$ can significantly decrease if the model is not convex or has coplanar faces, and even more if it is not star-shaped.
In the next section we show how, in practice, on small polyhedra or on objects with many co-planar faces the geometric method requires less computational time than the algebraic approach, and remains competitive on many examples, even complex ones.
\section{Tests and discussions}
\label{sec:tests}
We test our method in different settings, comparing its performance to the results obtained with our implementation of the algebraic method in CGAL.
The comparison between the current method (including its shuffle version) and its previous version from \cite{SorgenteKernel} is presented in Section~\ref{subsec:comparison}.
Experiments have been performed on a MacBook Pro equipped with a 2.3 GHz Intel Core i5 processor with four cores and 16 GB of RAM.
The source code is written in C++ and is accessible at \url{https://github.com/TommasoSorgente/polyhedron_kernel}, together with all datasets.
In the following subsections we present some plots and tables; we point out here some remarks on the adopted notation.
Regarding plots, we colour the CGAL computational time in blue and our computational time in red, both in logarithmic scale.
On the $x$-axis, depending on the context, we report the number of elements in the mesh or the number of vertices of the single model.
In the tables we first report the number of elements or vertices of the mesh.
Since all the considered objects have genus zero and their surface is purely triangular, Euler's formula implies that the number of faces is approximately equal to twice the number of vertices (from $V-E+F=2$ and $2E=3F$ it follows that $F=2V-4$).
Therefore we only indicate the number of vertices, but the number of faces is easily computable.
Then the computational times (in seconds) are shown, together with the ratio between the CGAL time and ours.
Note that the ratios are computed from the original time values, while the tables report truncated times; therefore the ratios do not exactly correspond to the quotient of the values in the previous columns.
\subsection{Polyhedral meshes}
\label{subsec:meshes}
\begin{figure*}[htbp]
\centering
\includegraphics[width=\linewidth]{meshes.png}
\caption{Polyhedral meshes and time plots from datasets \textit{poly-parallel}, \textit{poly-poisson} and \textit{poly-random}, with non-tetrahedral elements highlighted in blue.}
\label{fig:meshes}
\end{figure*}
First, we test our algorithm in the setting it was developed for, i.e. the computation of the kernels of the elements of a 3D tessellation.
To do so, we used the datasets from \cite{sorgente2021polyhedral}, available for download at \url{https://github.com/TommasoSorgente/vem-indicator-3D-dataset}.
The meshes contained in these datasets are typical examples of the tessellations which can be found in numerical analysis for the approximation of a PDE.
We focus our attention on the polyhedral datasets: \textit{poly-parallel}, \textit{poly-poisson} and \textit{poly-random}.
Each of them contains five tessellations of the unit cube with decreasing mesh size, from 100 to 100K vertices.
The resulting meshes contain between 100 and 600K elements; most of them are tetrahedra, but $20\%$ of them are generic polyhedra (in blue in Fig.~\ref{fig:meshes}), obtained as the union of two tetrahedra.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{elements.png}
\caption{Examples of non-convex elements found in the polyhedral meshes from Section~\ref{subsec:meshes} and the corresponding kernels.}
\label{fig:elements}
\end{figure}
Non-tetrahedral elements are generated by the agglomeration of two tetrahedral elements, therefore they may also be non-convex, see Fig.~\ref{fig:elements}.
\begin{table}[htbp]
\caption{Computational times (in seconds) for polyhedral meshes.}
\label{table:time:mesh}
\centering
\begin{tabular}{ccccc}
\hline\noalign{\smallskip}
dataset & $\#$elements & our & CGAL & ratio\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\textit{poly-parallel} & 130 & 0.04 & 0.21 & 4.89 \\
 & 1647 & 0.17 & 1.79 & 10.69 \\
 & 16200 & 1.68 & 19.4 & 11.55 \\
 & 129600 & 13.94 & 142.36 & 10.21 \\
 & 530842 & 53.47 & 588.43 & 11 \\
\noalign{\smallskip}\hline
\textit{poly-poisson} & 140 & 0.04 & 0.3 & 8.05 \\
 & 1876 & 0.29 & 2.54 & 8.91 \\
 & 16188 & 2.64 & 24.79 & 9.38 \\
 & 146283 & 24.24 & 212.23 & 8.75 \\
 & 601393 & 86.77 & 770.66 & 8.88 \\
\noalign{\smallskip}\hline
\textit{poly-random} & 147 & 0.03 & 0.19 & 6.8 \\
 & 1883 & 0.28 & 2.62 & 9.41 \\
 & 18289 & 2.91 & 27.11 & 9.33 \\
 & 161512 & 26.51 & 228.08 & 8.6 \\
 & 598699 & 80.15 & 735.55 & 9.18 \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
In Table~\ref{table:time:mesh} we report, for each mesh of each dataset, the number of elements, the computational times for both methods and the ratio between the CGAL time and ours.
Moreover, at the bottom of Fig.~\ref{fig:meshes} we plot the times against the number of elements in the mesh.
Both methods scale linearly with respect to the number of elements, since the kernel is computed separately and independently for each element.
Our method performs 8 to 11 times faster than CGAL, that is, roughly one order of magnitude.
As the elements of these meshes have either 4 faces, if they are tetrahedra, or 6 faces, if they are the union of two tetrahedra, computing their kernel geometrically turns out to be much faster than solving a linear problem.
In this case we did not use the shuffle mode, as the number of faces is so small that the visiting order is not relevant.
\subsection{Refinements}
\label{subsec:refinements}
As a second setting for our tests, instead of increasing the number of elements we measure the asymptotic behaviour of the methods as the number of faces and vertices of a single element explodes.
We selected two polyhedra from the dataset \textit{Thingi10K} \cite{zhou2016thingi10k}: the so-called \textit{spiral} (ThingiID: 60246) and \textit{vase} (ThingiID: 85580).
These models are given in the form of a triangular surface mesh, but we treat them as a single volumetric cell, analyzing the performance of both algorithms as we refine them.
In Table~\ref{table:time:refinements} we report the computational times and the ratio for each refinement.
\begin{figure}[th]
\centering
\includegraphics[width=.8\linewidth]{spiral.png}
\caption{Original \textit{spiral} model and its first refinement, with identical kernels.}
\label{fig:spiral}
\end{figure}
The \textit{spiral} model is refined through a midpoint strategy: each face is subdivided by connecting its barycenter to its vertices.
As a consequence, the planes induced by its faces remain the same and the kernels of the refined models are all equal (Fig.~\ref{fig:spiral}).
On this example our method performs on average 5.77 times better than the algebraic method (see Table~\ref{table:time:refinements}), and the computational time scales at a constant rate (see the plot in Fig.~\ref{fig:spiral}).
Our implementation takes advantage of the fact that Algorithm~\ref{alg:polyhedron-plane} recognises the several coplanar faces and always performs Algorithm~\ref{alg:polygon-plane} the same number of times, independently of the number of faces.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{vase.png}
\caption{Original \textit{vase} model and its first refinement: small perturbations in the faces lead to slightly different kernels.}
\label{fig:vase}
\end{figure}
The \textit{vase} model is more complex, as it presents a curved surface which generates a lot of different planes defining the kernel.
Moreover, we refined this model using Loop's subdivision algorithm, which generates faces lying on completely new planes.
This explains the difference between the two kernels in Fig.~\ref{fig:vase}: the general shape is similar, but the more faces we add to the model, the more faces we find on the resulting kernel.
Our geometric method improves the performance of the algebraic one by a factor of around 2 in the first refinements, but in the last two meshes the complexity increases drastically and CGAL turns out to be faster (see Table~\ref{table:time:refinements}).
In this case, an efficient treatment of the faces is not sufficient to hide the quadratic nature of the geometric approach.
Even the shuffle mode did not particularly improve the performance, since the object is star-shaped.
\begin{table}[htbp]
\caption{Computational times (in seconds) for the \textit{spiral} and \textit{vase} refinements.}
\label{table:time:refinements}
\centering
\begin{tabular}{ccccc}
\hline\noalign{\smallskip}
mesh & $\#$vertices & our & CGAL & ratio \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\textit{spiral} & 64 & 0.004 & 0.01 & 3.07 \\
 & 250 & 0.007 & 0.04 & 6.27 \\
 & 994 & 0.02 & 0.14 & 6.56 \\
 & 3970 & 0.08 & 0.43 & 5.38 \\
 & 15874 & 0.32 & 1.77 & 5.54 \\
 & 63490 & 1.05 & 8.24 & 7.86 \\
\hline
\textit{vase} & 99 & 0.02 & 0.03 & 1.55 \\
 & 390 & 0.04 & 0.12 & 2.56 \\
 & 1554 & 0.31 & 0.92 & 2.94 \\
 & 6210 & 3.24 & 6.1 & 1.88 \\
 & 24261 & 47.49 & 37.24 & 0.78 \\
 & 36988 & 196.7 & 56.75 & 0.29 \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\subsection{Complex models}
\label{subsec:complex_models}
Last, we compute the kernel of some more complex models, taken again from the dataset \textit{Thingi10K} and treated as single volumetric cells.
Even though our method is designed for polyhedra of relatively small size, as we already saw in Section~\ref{subsec:refinements}, our algorithms are still able to compute the kernel of objects with thousands of vertices and faces.
We filtered the \textit{Thingi10K} dataset selecting only ``meaningful'' models: objects with one connected component, genus zero, Euler characteristic greater than zero, closed, not degenerate and of size smaller than 1 MB.
Note that, even after applying these filters, the majority of the models are not star-shaped, i.e. their kernel is empty.
We discarded a few models for computational and stability reasons, for instance because one of the two algorithms failed to process them.
The final collection, which we will call the \textit{Thingi dataset}, contains exactly 1806 distinct volumetric models.
\begin{figure*}[htbp]
\centering
\includegraphics[width=.9\linewidth]{time_thingi.png}
\caption{Thingi dataset times. From left to right: the whole Thingi dataset, models with empty kernel, models with non-empty kernel.}
\label{fig:time:thingi}
\end{figure*}
In Fig.~\ref{fig:time:thingi} we show the time distribution for the whole Thingi dataset, with a particular focus on the difference between models with empty kernel and models with non-empty kernel.
Globally, the overall cost of computing all kernels is 173 seconds with our method against 518 seconds with CGAL, an improvement by a factor of about 3.
When the kernel is empty our algorithm is always faster than CGAL: the main reason for this is the use of the shuffle mode, which makes it extremely cheap to recognise non star-shaped elements.
When the model is star-shaped the distinction between the two methods is not so clear anymore, as the results mainly depend on the shape and size of the object.
\begin{figure*}[ht!]
\centering
\begin{tabular}{cccc}
\includegraphics[width=.2\linewidth]{plus.png} &
\includegraphics[width=.2\linewidth]{star.png} &
\includegraphics[width=.2\linewidth]{flex.png} &
\includegraphics[width=.2\linewidth]{cross.png} \\
\textit{plus} & \textit{star} & \textit{flex} & \textit{cross} \\
\vspace{0.3cm} \\
\includegraphics[width=.2\linewidth]{part.png} &
\includegraphics[height=.2\linewidth]{superellipse.png} &
\includegraphics[width=.2\linewidth]{boteye.png} &
\includegraphics[width=.2\linewidth]{button.png} \\
\textit{part} & \textit{super-ellipse} & \textit{bot-eye} & \textit{button} \\
\vspace{0.3cm} \\
\includegraphics[height=.2\linewidth]{rt4arm.png} &
\includegraphics[width=.2\linewidth]{ball.png} &
\includegraphics[height=.2\linewidth]{acorn.png} &
\includegraphics[width=.2\linewidth]{muffin.png} \\
\textit{rt4-arm} & \textit{ball} & \textit{acorn} & \textit{muffin} \\
\end{tabular}
\caption{Examples of our kernel evaluation for complex models. In the top row, models on which the geometric method is more efficient; in the middle row, models for which the performances are similar; in the bottom row, models on which the algebraic method is preferable.}
\label{fig:complex}
\end{figure*}
To further investigate this point, in Fig.~\ref{fig:complex} we present the kernel computation for 10 selected star-shaped examples from this dataset.
In the top row we have models on which the geometric method is by far more efficient: \textit{plus} (ThingiID: 1120761), \textit{star} (ThingiID: 313883), \textit{flex} (ThingiID: 827640) and \textit{cross} (ThingiID: 313882).
In the middle row, models for which the performances are similar: \textit{part} (ThingiID: 472063), \textit{super-ellipse} (ThingiID: 40172), \textit{bot-eye} (ThingiID: 37276) and \textit{button} (ThingiID: 1329185).
Then in the bottom row we show models on which the algebraic method is preferable: \textit{rt4-arm} (ThingiID: 39353), \textit{ball} (ThingiID: 58238), \textit{acorn} (ThingiID: 815480), \textit{muffin} (ThingiID: 101636).
The computational times, together with those relative to the whole dataset, are reported in Table~\ref{table:time:complex}.
\begin{table}[th]
\caption{Computational times (in seconds) for complex models. For the \textit{Thingi} dataset, the first number indicates the number of models instead of the number of vertices.}
\label{table:time:complex}
\centering
\begin{tabular}{ccccc}
\hline\noalign{\smallskip}
mesh & $\#$vertices & our-shuffle & CGAL & ratio \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\textit{plus} & 448 & 0.004 & 0.09 & 22.75\\
\textit{star} & 9633 & 0.4 & 5.15 & 12.93\\
\textit{flex} & 834 & 0.02 & 0.27 & 12.76\\
\textit{cross} & 3914 & 0.19 & 2.1 & 11.12\\
\textit{part} & 5382 & 2.58 & 6.94 & 2.69\\
\textit{super-ellipse} & 290 & 0.02 & 0.04 & 2.05\\
\textit{bot-eye} & 453 & 0.03 & 0.03 & 0.96\\
\textit{button} & 1227 & 0.1 & 0.08 & 0.75\\
\textit{rt4-arm} & 655 & 0.13 & 0.09 & 0.67\\
\textit{ball} & 660 & 0.24 & 0.04 & 0.15\\
\textit{acorn} & 4114 & 4.35 & 0.55 & 0.13\\
\textit{muffin} & 8972 & 11.73 & 0.54 & 0.04\\
\textit{\textbf{Thingi dataset}} & 1806 & 172.88 & 518.2 & 2.99\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
Once again, we notice that the size of the model impacts the performance of our method.
Looking at Fig.~\ref{fig:time:thingi} we can see that the models for which our method performs worse than CGAL are all in the right part of the plot, corresponding to models with a high number of vertices.
At the same time, the number of vertices of the element, by itself, is not sufficient to justify the superiority of one method over the other.
For example, models \textit{star} and \textit{flex} have very different sizes and times, but their ratios are quite similar; the same holds for models \textit{part} and \textit{super-ellipse}, or \textit{ball} and \textit{acorn}.
The shape of the object also plays an important role: over models with numerous adjacent co-planar faces like \textit{plus}, \textit{star} (whose bottom is completely flat) and \textit{cross}, our method is preferable even when the size grows.
As already seen in Section~\ref{subsec:refinements}, the presence of coplanar faces significantly improves the performance of our method.
Vice versa, over elements with significant curvature like \textit{rt4-arm}, \textit{acorn} or \textit{muffin}, the algebraic method performs similarly or better than ours, even on relatively small models like \textit{bot-eye}.
Over these models it is still possible to compute a correct kernel with the geometric approach, but the ratio between the CGAL time and ours is in the order of $10^{-1}$ or even $10^{-2}$.
\subsection{Comparison with the previous version}
\label{subsec:comparison}
With respect to its introduction in \cite{SorgenteKernel}, we believe that the algorithm is now easier to read and understand, thanks to the introduction of labels for storing the position of vertices, edges and faces with respect to a plane.
Another significant difference is that the evaluation of the position of the vertices is now computed once and for all, at the top level (in Algorithm~\ref{alg:kernel}), while in the previous version every algorithm contained some vertex evaluations: this further reduces the computational complexity.
In addition, we switched from inexact predicates (in \cite{SorgenteKernel} we used the equivalent of \textit{orient3d-fast}) to the exact \textit{orient3d}, which resulted in increased precision in the treatment of nearly-coplanar faces, for a small extra cost which is easily compensated by the other improvements.
This switch required a small modification of the \textit{Plane} class: in view of the usage of \textit{orient3d}, for every plane we also store three points lying on it.
Last, the introduction of the shuffle mode made the computation of the kernels of non star-shaped objects almost immediate, marking a huge difference with respect to the algebraic method and to our previous implementation.
In Table~\ref{table:comparison} we report the differences between the old version of the algorithm and the current version, in standard and in shuffle mode.
Over the meshes from Section~\ref{subsec:meshes} there are almost no differences between the three versions.
For the refined models, there is an improvement by a factor of 2 in the computation of the \textit{vase} refinements (we report the sum of the times for all the refinements).
With complex models we have the greatest differences; in these cases we can also appreciate the advantage brought by the shuffle mode, which was not significant in the other tests.
\begin{table}[htbp]
\caption{Performance comparison (times in seconds) between the current implementation and the one presented in \cite{SorgenteKernel}. Only models with significant differences are reported.}
\label{table:comparison}
\centering
\begin{tabular}{cccc}
\hline\noalign{\smallskip}
mesh & our-shuffle & our & our\cite{SorgenteKernel} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\textit{spiral} (sum) & 1.04 & 1.01 & 1.03\\
\textit{vase} (sum) & 245.61 & 194.91 & 495.63\\
\textit{cross} & 0.19 & 4.15 & 9.14\\
\textit{part} & 2.58 & 5.22 & 12.7\\
\textit{bot-eye} & 0.03 & 0.17 & 0.25\\
\textit{rt4-arm} & 0.13 & 0.14 & 0.21\\
\textit{button} & 0.1 & 0.11 & 0.15\\
\textit{ball} & 0.24 & 0.28 & 0.58\\
\textit{acorn} & 4.35 & 276.13 & 626.72\\
\textit{muffin} & 11.73 & 58.62 & 129.38\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\section{Conclusions}
\label{sec:conclusions}
We presented an algorithm for the computation of the kernel of a polyhedron, based on the extension to the 3D case of the geometric approach commonly adopted in two dimensions.
With respect to \cite{SorgenteKernel} we have optimized the algorithm in several ways: we now perform all the vertex comparisons in a single, preliminary step; we have introduced the use of exact geometric predicates and a random visiting strategy of the faces, which considerably improve the performance of the method over non star-shaped, complex objects.
The algorithm proved to be robust and reliable, as it successfully computed the kernel of every considered polyhedron.
The efficiency of our algorithm has been compared to that of our CGAL-based implementation of the algebraic approach to the problem.
From a theoretical point of view, the computational complexity evaluation of Section~\ref{subsec:computational_complexity} suggests that our method is in general quadratic, while the algebraic approach has a lower bound at $O(n\log n)$.
Nonetheless, we showed throughout Section~\ref{sec:tests} that in several circumstances our approach outperforms the algebraic one.
The geometric approach proved to be significantly faster than the algebraic one when dealing with models satisfying at least one of the following conditions:
\begin{enumerate}
\item the size of the model is small;
\item the model contains a significant number of co-planar faces;
\item the kernel is empty.
\end{enumerate}
Our method performs significantly better than the algebraic approach over polyhedra with a limited number of vertices and faces, as shown in Section~\ref{subsec:meshes}, making it particularly suitable for the analysis of volumetric tessellations with non-convex elements.
Indeed, we point out that our algorithm is specifically designed to be used with simple polyhedra, possibly composing a bigger and more complex 3D model, rather than with a complete model itself.
This behaviour is particularly evident with model \textit{vase} from Section~\ref{subsec:refinements}: as long as the size of the model remains reasonable our method is faster than CGAL, but beyond a certain bound the algebraic method becomes more efficient.
Again, models like \textit{super-ellipse} or \textit{flex} from Section~\ref{subsec:complex_models} have few or no co-planar faces and a significant kernel, but the size of these meshes is small and the geometric approach offers better performance.
According to Fig.~\ref{fig:time:thingi}, a bound on the number of vertices could possibly be set at around $10^3$.
When the size of the polyhedron increases, our method is still particularly efficient if the model has numerous co-planar faces, due for instance to the presence of flat regions on the surface.
This is a very common situation in models representing mechanical parts.
For instance, models \textit{star} and \textit{part} from Section~\ref{subsec:complex_models} present large flat regions despite having a significant size, and again the geometric approach is faster on these models.
Another scenario in which the geometric approach outperforms the algebraic one is that of non star-shaped objects.
The differences in this case are so evident that one could even imagine using our algorithm to determine, in a few iterations in shuffle mode, whether a model is star-shaped or not, without computing the actual kernel.
On the other hand, the algebraic approach is likely to remain preferable over domains which do not satisfy any of the above conditions: star-shaped objects with thousands of vertices and high surface curvature.
In conclusion, with this work we do not aim at completely replacing the algebraic approach to the kernel computation, but rather at providing an alternative which can be preferred in specific cases, such as the quality analysis of the elements of a 3D tessellation, in the same way as bubble sort is preferable to asymptotically optimal sorting algorithms when dealing with very small arrays.
As a future development, we plan to integrate in our algorithm the promising \textit{indirect predicates} introduced in \cite{attene2020indirect}.
Numerical problems remain a critical issue in the computation of geometric constructions like the kernel, independently of the approach adopted, and we believe indirect predicates could greatly help in enhancing the robustness of the algorithm.
Moreover, we plan to include this tool in a suite for the generation and analysis of tessellations of three-dimensional domains, aimed at PDE simulations.
The kernel of a polyhedron has a great impact on its geometrical quality, and the geometrical quality of the elements of a mesh determines the accuracy and the efficiency of a numerical method over it.
We are therefore already using this algorithm in works like \cite{sorgente2022role} to better understand the correlations between the shape of the elements and the performance of the numerical simulations, and to be able to adaptively generate, refine or fix a tessellation accordingly.
\section*{Acknowledgements}
We would like to thank Dr. M. Manzini for the valuable discussions and suggestions, and all the people from the IMATI institute involved in the CHANGE project.
Special thanks are also given to the anonymous reviewers for their comments and suggestions.
This work has been partially supported by the ERC Advanced Grant CHANGE, contract N.694515.
\bibliographystyle{plain}
The kernel of non-convex shapes can also be empty, as in the case of non simply-connected objects for which it is always empty. In the two-dimensional scenario, that is when the shape is a polygon, the standard way of computing the kernel is by intersecting appropriate half-planes generated from its edges. This problem has been tackled since the 70s, when \cite{ShamosHoey} presented an algorithm able to perform the kernel computation in $O(e\log e)$ operations, being $e$ the number of edges of a polygon, as the intersection of $e$ half-edges. After that, an optimal algorithm able to run in $O(n)$ operations over an $n-$sided polygon, has been proposed in \cite{LeePreparata}. Famous computational tools and libraries like \textit{Boost} \cite{BoostLibrary}, \textit{Geogram} \cite{levy2015geogram}, \textit{CGAL} \cite{fabri2009cgal}, or \textit{Libigl} \cite{jacobson2017libigl} currently implement routines to compute intersections between polygons and planes, which can be used to estimate the kernel. In the first attempts to solve the volumetric version of the problem, for example in \cite{PreparataShamos}, the natural approach has been to extend the 2D method (which we call \textit{geometric)} to the 3D case from a theoretical point of view, but it was soon dismissed as unattractive for computational reasons. It was replaced by a new approach (which we call \textit{algebraic)} which makes use of linear algebra and homogeneous coordinates, and that is the state of the art for computing 3D kernels currently implemented by libraries like CGAL. During years, the polygon kernel computation has become popular to address several problems based on simple polygon analysis, such as star-components decomposition and visibility algorithms that are of interest in robotics, surveillance, geometric modeling, computer vision and, recently, in the emerging field of additive manufacturing \cite{demir2018near}. Today, the geometric kernel of a polytope is a pivotal information for understanding the geometrical quality of an element in the context of finite elements analysis. While in the past years finite elements methods were only designed to work on convex elements like triangles/tetrahedra or quadrangles/hexahedra \cite{ciarlet2002finite}, recent and more complex methods like the Mimetic Finite Difference Method \cite{lipnikov2014mimetic}, the Virtual Elements Method \cite{beirao2013basic}, the Discontinuous Galerkin Mehod \cite{cockburn2012discontinuous} or the Hybrid High Order Method \cite{di2019hybrid} are able to deal with non convex polytopes. This enrichment of the class of admissible elements led researchers to further investigate the concept of the geometric quality of a polytope, and to define quality measures and metrics for the mesh elements, whether they are poligons \cite{attene2021benchmark,sorgente2022role} or polyhedra \cite{sorgente2021polyhedral}. In this setting, the geometric kernel is often associated with the concepts of \textit{shape regularity} and \textit{star-shapedness} of an element. For example, as analyzed in \cite{sorgente2021vem}, most of the error estimates regarding the Virtual Elements Method (but the same holds for other polytopal methods) are based on the theory of polynomial approximation in Sobolev spaces, assuming the star-shapedness of the elements \cite{Brenner-Scott:2008,dupont1980polynomial}. 
As a consequence there are a number of sufficient geometrical assumptions on the computational domain for the convergence of the method, which require an estimate of the kernel. When dealing with non-trivial meshes \cite{antonietti2022high, berrone2019parallel}, such quality measures/metrics/indicators require to compute the kernel of thousands of polytopes, each of them with a limited number of faces and vertices, in the shortest possible time. A preliminary algorithm for the computation of the kernel of a polyhedron using the geometric approach was introduced in \cite{SorgenteKernel}. There, we experimentally showed how this type of approach in practice can significantly outperform the algebraic one, for instance, when polyhedral elements have a limited number of faces and vertices or several faces are co-planar. In this paper we optimize the algorithm, for instance preliminary estimating the position of all vertices with respect to the planes induced by the faces, introducing the use of exact geometric predicates \cite{shewchuk1997adaptive} and a random strategy for visiting the faces of a polyhedron and identify more rapidly polyhedra whose kernel is empty. We also simplify the description of the algorithmic routines and we introduce new scientific results on a larger variety of models that confirm the validity of a geometry-based kernel algorithm. The paper is organized as follows. In Section~\ref{sec:preliminary} we introduce some notation and discuss the difference between the geometric and the algebraic approach. In Section~\ref{sec:kernel} we detail the algorithm for the construction of the kernel of a polyhedron. In Section~\ref{sec:tests} we exhibit some examples of computed kernels and analyze the performance of the algorithm with comparisons with an implementation of the algebraic approach and with its previous version \cite{SorgenteKernel}. In Section~\ref{sec:conclusions} we sum up pros and cons of the algorithm and draw some conclusions. \section{Preliminary concepts} \label{sec:preliminary} We introduce some notations and preliminary concepts that we will use in the rest of the paper. \subsection{Notations} \label{subsec:notations} We define a \textit{polyhedron} as a finite set of plane polygons such that every edge of a polygon is shared by exactly one other polygon and no subset of polygons has the same property \cite{PreparataShamos}. The vertices and the edges of the polygons are the \textit{vertices} and the \textit{edges} of the polyhedron; the polygons are the \textit{faces} of the polyhedron. In this work we only consider \textit{simple} polyhedra, which means that there is no pair of nonadjacent faces sharing a point. A polyhedron $P$ is said to be \textit{convex} if, given any two points $p_1$ and $p_2$ in $P$, the line segment connecting $p_1$ and $p_2$ is entirely contained in $P$. It can be shown that the intersection of convex polyhedra is a convex polyhedron \cite{PreparataShamos}. Two points $p_1$ and $p_2$ in $P$ are said to be \textit{visible} from each other if the segment $(p_1,p_2)$ does not intersect the boundary of $P$. The \textit{kernel} of $P$ is the set of points in the interior of $P$ from which all the points in $P$ are visible. The first obvious consideration is that the kernel of a polyhedron is a convex polyhedron. If $P$ is convex its kernel coincides with its interior, because any two points inside a convex polyhedron are visible from each other. 
A polyhedron may also not have a kernel at all; in this case we say that its kernel is \textit{empty}. Last, a polyhedron $P$ is called \textit{star-shaped} if there exists a sphere, with non-zero radius, completely contained in its kernel. A polyhedron is star-shaped if and only if its kernel is not empty, therefore star-shapedness can be thought as an indicator of the existence of a kernel. Star-shapedness is weaker than convexity, and it is often used in the literature as many theoretical results in the theory of polynomial approximation in Sobolev spaces rely on this condition \cite{dupont1980polynomial, Brenner-Scott:2008}. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{tent.png} \caption{A sequence of parametric polyhedra whose kernel is progressively shrinking. The kernel is the polyhedron delimited by the red edges.} \label{fig:tent} \end{figure} In order to give a visual example of these concepts, in Fig.~\ref{fig:tent} we present a parametric object shaped like a tent, with the parameter regulating the height of the ``entrance'' from the basis. This object is not convex, but in the first examples it is star-shaped and it has a proper kernel, delimited by the red edges. As the parameter increases, the set of points from which the whole polyhedron is visible becomes smaller, and so does the star-shapedness radius. The last example of Fig.~\ref{fig:tent} is not star-shaped anymore, i.e. the kernel is empty. \subsection{The algebraic approach to the kernel computation} As observed in Section~\ref{sec:intro}, the state of the art algorithm for the computation of the kernel of a polygon follows a geometric approach: the kernel is found as the intersection of half-planes originating from its edges. We use the term ``geometrical'' because the algorithm computes repeatedly a sequence of geometric intersections between polygons and planes. This idea was afterwards optimized until obtaining an algorithm able to run in $O(n)$ operations, which has been proven to be optimal \cite{LeePreparata}. When facing the 3D version of the problem, one natural way could be to extend the 2D algorithm, which is well studied and documented, to the higher dimension. The problem with the 3D case is that, whereas two convex polygons with respectively $n_1$ and $n_2$ vertices can be intersected in time $O(n)$, being $n=n_1+n_2$, two convex polyhedra with the same parameters are intersected in time $O(n\log n)$, thus the generalization of the two-dimensional instance would yield an $O(n\log^2 n)$ algorithm. This is in contrast with the result shown in \cite{PreparataShamos}, where a lower bound for the intersection of convex polyhedra is established at $O(n\log n)$. Therefore the geometric approach to the 3D problem was soon dismissed as unattractive, and alternative ways have been explored. A new algorithm was formulated, based on the so-called ``double duality trick'', which makes use of linear algebra and homogeneous coordinates. Thanks to this algebraic approach, described in \cite[Section 7.3.2]{PreparataShamos}, it is possible to compute the intersection of $n$ half-spaces in time $O(n\log n)$ \cite{PreparataShamos}. This algorithm can be implemented inside the framework of the CGAL library, although there is currently not an explicit routine for computing the kernel of a polyhedron and one has to connect the function for the intersection of half-spaces to the polyhedron data structure. 
The whole routine implies: \begin{enumerate} \item solving a linear problem to find a point in the interior of the polyhedron; \item translating the polyhedron so that the interior point is the origin; \item calculating the dual polyhedron, defined as the convex hull of the vertices dual to the original faces in regard to the unit sphere (i.e., halfspaces at distance $d$ from the origin are dual to vertices at distance $1/d$); \item calculating the resulting polyhedron, which is the dual of the dual polyhedron; \item translating the origin back to the interior point. \end{enumerate} While from a theoretical point of view the cited results are indubitable, we believe that in many practical situations the geometric approach could perform better than the algebraic one. This is due to the fact that the cost of solving a linear problem cannot fall below a certain bound, while the intersection of half-spaces can become extremely cheap in some circumstances. For instance if the number of faces and vertices of the polyhedron is low, this method could be more efficient than the algebraic one, and there may be other situations in which one could take advantage of a wise treatment of the geometrical operations. \subsection{Data structure} \label{subsec:data} We adopt the following data structure inherited by the \textit{cinolib} library \cite{livesu2019cinolib}, in which the code has been written: \begin{itemize} \item \textit{Points:} array of unordered 3D points. % \item \textit{Face:} array of unsigned integers associated to a \textit{Points} array, representing the indices of the vertices of a face ordered counter-clockwise. % \item \textit{Polyhedron:} struct composed by a field \textit{verts} of type \textit{Points} containing the vertices and a field \textit{faces} of type \textit{array $<$Face$>$} containing the faces of a polyhedron. % \item \textit{Plane:} class defining a plane in the Hessian form, composed by a 3D point $n$ indicating the unit normal of the plane (i.e. the $a,b,c$ coefficients of the plane equation) and a number $d$ indicating the distance of the plane from the origin (i.e. the $d$ coefficient of the plane equation). % The plane class also contains three additional points $p_1,p_2,p_3$ lying on the plane, useful for the Shewchuck exact predicates \cite{shewchuk1997adaptive}. % \item \textit{Sign:} array of labels (BELOW, ABOVE or INTER) used to store information on the position of the elements of a polyhedron (points, edges or faces) with respect to an unspecified plane. \end{itemize} Given a plane $p$, the elements of a polyhedron are classified as follows: \begin{itemize} \item A point $v$ is labelled as BELOW, ABOVE or INTER provided that the function \textit{orient3d}$(p.p_1,p.p_2,p.p_3,v)$ in the Shewchuck exact predicate library \cite{shewchuk1997adaptive} is negative, positive or zero, within a tolerance of $10^{-8}$; \item An edge $e$ is labelled as BELOW (resp. ABOVE) if both its endpoints are BELOW or INTER (resp. ABOVE or INTER), and as INTER if one point is ABOVE and the other one is BELOW. \item A face $f$ is labelled as BELOW if all of its points are BELOW, as ABOVE if none of its points is BELOW and as INTER otherwise. \end{itemize} The classification of points is computed at the top level of the algorithm with Shewchuck exact predicates, defining the symbol CINOLIB$\_$USES$\_$EXACT$\_$PREDICATES at compilation time. 
For edges and faces instead, we define a function \textit{classify(S)} which determines the classification of and edge or a face from an array $S$ of type \textit{Sign} containing the classification of its points. The introduction of the labels and the points evaluation at the beginning of the process are two main computational novelties with respect to the previous version of the algorithm \cite{SorgenteKernel}. \section{Polyhedron kernel algorithm} \label{sec:kernel} In this section, we illustrate our method for computing the kernel of a polyhedron with a geometric approach. It has a modular structure composed of four nested algorithms, each one calling the next one in its core part. It is modular in the sense that each algorithm can be entirely replaced by another one performing the same operation(s). This property is particularly useful for making comparisons: one could, for instance, use different strategies for computing the intersection between a polygon and a plane and simply replace Algorithm~\ref{alg:polygon-plane}, measuring the efficiency from time to time. \subsection{Polyhedron Kernel} \label{subsec:polyhedron_kernel} With Algorithm~\ref{alg:kernel} we tackle the general problem: given a polyhedron $P$, we want to find the polyhedron $K$ representing its kernel. In addition to $P$ we also need as input an array containing the outwards normals of its faces, as it is not always possible to determine the orientation of a face only from its vertices (for example with non-convex faces). We require the face normals explicitly, and not simply a boolean indicating the faces orientation, because these points will be used to define the planes containing the faces of $P$. We initialize $K$ with the axis aligned bounding box (AABB) of $P$, i.e. the box with the smallest volume within which all the vertices of $P$ lie, aligned with the axes of the coordinate system. Then we recursively ``slice'' $K$ with a number of planes, generating a sequence of convex polyhedra $K_i$, $i=1,\dots,\#$\textit{P.faces}, such that $K_i\subseteq K_{i-1}$. For each face $f$ of $P$ we define the plane $p$ containing its vertices and with normal vector given by the opposite of the face normal $N(f)$, that is to say, $p.n:=-N(f)$. We consider the plane together with the direction indicated by $p.n$, which is equivalent to considering the half-space originating in $p$ and containing its normal vector. Given this plane, in a \textit{Sign} array $S$ we store the classification of all the points in $K.verts$ according to their position with respect to $p$. For improving the efficiency, we can set the sign of all points belonging to $f$ to zero without evaluating their position. In general $p$ will separate $K$ into two polyhedra, and between those two we keep the one containing the vector $p.n$, which therefore points towards the interior of the element. This operation is performed by the \textit{Polyhedron-Plane Intersection} algorithm detailed in Section \ref{subsec:polygon_plane}, which replaces $K$ with the new polyhedron. The order in which we consider the faces is not relevant from a theoretical point of view, but turns out to have a huge impact on the performance. For instance, if we imagine to compute the kernel of a ``ring'', which is obviously empty, visiting the faces in the order they are stored may take a very long time, especially if the tessellation of the object is fine. This is because, generally, the faces of a tessellated model are numbered somehow coherently with their neighbors. 
For this reason, we optionally propose to visit the faces in random order, or \emph{shuffle} $P.faces$, and to return an empty polyhedron if after a ``slice'' $K$ has less than three faces. In this way, the empty kernel of a ring with thousands of faces could be detected in just three or four iterations. When this command is turned on, we say the algorithm is run in \textit{shuffle} mode. We point out that cutting a convex polyhedron with a plane will always generate two convex polyhedra, and since we start from the bounding box (which is convex), we are guaranteed that $K$ will always be a convex polyhedron. No matter how weird the initial element $P$ is, from this point on we will only be dealing with convex polyhedra and convex faces. Last, we could as well start with considering the polyhedron's convex hull instead, but it would be less efficient because the convex hull costs in general $O(n\log n)$ while the AABB is only $O(n)$, and we would still need to intersect the polyhedron with each of its faces. \begin{algorithm}[htbp] \caption{Polyhedron Kernel} \label{alg:kernel} \begin{algorithmic}[1] \Require Polyhedron $P$, Points $N$ (faces normals); \Ensure Polyhedron $K$ \State $K$ := AABB of $P$; \State \textit{[optional]} shuffle $P.faces$; \For{Face $f$ in $P.faces$} \State Plane $p$ := plane containing $f$ with normal $-N(f)$; \State Sign $S$ := orient3d$(p.p_1, p.p_2, p.p_3, K.verts)$; \State $K$ := Polyhedron-Plane Intersection($K$, $S$, $p$); \If{size($K.faces$) $<3$} return NULL; \EndIf \EndFor \State \Return $K$; \end{algorithmic} \end{algorithm} \subsection{Polyhedron-Plane Intersection} \label{subsec:polyhedron_plane} With the second algorithm we want to intersect a polyhedron $P$ with a plane $p$, given in $S$ the position of the vertices of $P$ with respect to $p$. The intersection will in general determine two polyhedra, and between these two we are interested in the one containing the normal vector of $p$ (conventionally called the one ``above'' the plane and indicated with $A$). This algorithm is inspired from \cite{ahn2008geometric}, where the authors define an algorithm for the intersection of a convex polyhedron with an half-space. \begin{figure}[htbp] \centering \begin{tabular}{cc} \includegraphics[width=.46\linewidth]{clipping.png} & \includegraphics[width=.46\linewidth]{capping.png}\\ (a) & (b) \end{tabular} \caption{Intersection of a polyhedron with a plane: (a) clipping and (b) capping of a cube.} \label{fig:polyhedron-plane} \end{figure} The first part of Algorithm~\ref{alg:polyhedron-plane} is called the ``clipping'' part (recalling the terminology from \cite{ahn2008geometric}) and consists in clipping each face of $P$ with the plane $p$, see Fig.~\ref{fig:polyhedron-plane}(a). It corresponds to the \textit{for} loop: we iterate on \textit{P.faces}, each time extracting from $S$ the labels $f_s$ of the vertices $f_v$ of the current face and using the \textit{classify} function. Faces classified as BELOW are discarded, ABOVE faces are added to $A$ together with their vertices, and INTER faces are split by the \textit{Polygon-Plane Intersection} algorithm. While we visit every face only once, the same does not hold for vertices, therefore we check if a vertex is already in $A.verts$ before adding it. This simple idea of checking in advance the faces classification resolves several implementation issues and in some cases significantly improves the efficiency of the algorithm. 
By doing this, we make sure that only the faces properly intersected by the plane are passed to Algorithm~\ref{alg:polygon-plane}, so that we do not need to implement all the particular cases of intersections in a single point or along an edge or of faces contained in the plane. In addition, for every face not passed to Algorithm~\ref{alg:polygon-plane} we have an efficiency improvement, and this happens frequently in models with many coplanar faces like the ones considered in Section~\ref{subsec:refinements}. If, at the end of this step, $A$ contains at least three INTER points, given that $A$ and all its faces are convex, these vertices will define a ``cap'' face of $A$ completely contained in $p$, see Fig.~\ref{fig:polyhedron-plane}(b). We can optimize the algorithm by storing in a Sign array the classification of the vertices in $A$, updating it with the sign of every vertex added in the switch loop. Note that in this case we do not need to use \textit{orient3d}: we already know the sign of the old vertices and the new vertices will obviously be of type INTER. In our data structure, the vertices of the faces are ordered counter-clockwise (CCW): in order to sort the points contained in \textit{capV} we project them onto a plane, drop one coordinate and apply the algorithm proposed in \cite{baeldung} for 2D points. Note that if the cap face was not convex it would make no sense to order its vertices, but the intersection between a plane and a convex polyhedron will always generate convex faces. Last, we need to check that this new face is not already present in $A$: for example if $p$ was tangent to $P$ along a face, this face could be added to $A$ both as an ABOVE face and as a cap face. If this is not the case, we add \textit{capF} to $A.faces$ but we do not need to add any vertex from \textit{capV}, as we can assume they are all already present in $A.verts$. \begin{algorithm}[htbp] \caption{Polyhedron-Plane Intersection} \label{alg:polyhedron-plane} \begin{algorithmic}[1] \Require Polyhedron $P$, Sign $S$, Plane $p$ \Ensure Polyhedron $A$ \For{Face $f$ in $P.faces$} \State Points $f_v:=$ vertices in $P.verts$ relative to $f$; \State Sign $f_s$ := $S_{|f_v}$ \Switch{classify$(f_s)$} \Case{BELOW} break; \EndCase \Case{ABOVE} \State $A.verts\leftarrow$ $f_v$, $A.faces\leftarrow f$; \EndCase \Case{INTER} \State (\textit{V},\textit{F}):=Polygon-Plane Intersection$(f_v,f,f_s,p)$; \State $A.verts\leftarrow$ \textit{V}, $A.faces\leftarrow$ \textit{F}; \EndCase \EndSwitch \EndFor \State Points \textit{capV} := vertices in $A.verts$ which are INTER; \If{size(\textit{capV}) $<3$} return $A$; \EndIf \State Face \textit{capF} := indices of \textit{capV} vertices ordered CCW; \If{\textit{capF} $\notin$ $A.faces$} $A.faces\leftarrow$ \textit{capF}; \EndIf \State \Return $A$; \end{algorithmic} \end{algorithm} \subsection{Polygon-Plane Intersection} \label{subsec:polygon_plane} Algorithm~\ref{alg:polygon-plane} describes the intersection of a polygon (representing a face of the polyhedron), defined by an array of 3D points \textit{polyV} and an array of indices \textit{polyF}, with a plane $p$. As before, we also require as input an array \textit{polyS} containing the position of the vertices of \textit{polyV} with respect to $p$. In analogy to Algorithm~\ref{alg:polyhedron-plane}, the intersection will in general determine two polygons and we are only interested in the one above the plane, see Fig.~\ref{fig:polygon-line-plane}(a), defined by points \textit{aboveV} and indexes \textit{aboveF}. 
We generically say that a vertex $v$ is added to \textit{above}, meaning that $v$ is added to \textit{aboveV} and its index $id_v$ is added to \textit{aboveF}. This time we iterate over the edges of \textit{polyF}, extract the signs $s_1, s_2$ of the edge endpoints and switch between the three possible classifications of the edge. In order to avoid duplicates, for each pair of consecutive points $v_1, v_2$, we only allow adding to \textit{above} the second point $v_2$ or the intersection point $v$, but never $v_1$. We are here taking advantage of the fact that all faces are oriented coherently. If the edge is of type BELOW we ignore it, unless $v_2$ is INTER (i.e. it lies exactly on the plane), in which case we add it to \textit{above}. In the case of ABOVE edges we add $v_2$ to \textit{above}. For edges of type INTER we perform the \textit{Line-Plane Intersection} algorithm and find a new point $v$. Its index $id_v$ will be equal to the maximum value in \textit{polyF} plus one, just to make sure that we are not using the index of an existing point. We always add $v$ to \textit{above}, and if $v_1$ is BELOW we also add $v_2$. As already noted in Section~\ref{subsec:polyhedron_plane}, treating the weak intersections (the BELOW and ABOVE cases) separately makes the code simpler and more efficient.
\begin{figure}[htbp] \centering \begin{tabular}{cc} \includegraphics[width=.35\linewidth]{polygon-plane.png} & \includegraphics[width=.35\linewidth]{line-plane.png}\\ (a) & (b) \end{tabular} \caption{(a) Intersection between a polygon and a plane, with the above part coloured in green. (b) Intersection between a line and a plane.} \label{fig:polygon-line-plane} \end{figure}
\begin{algorithm}[htbp] \caption{Polygon-Plane Intersection} \label{alg:polygon-plane} \begin{algorithmic}[1] \Require Points \textit{polyV}, Face \textit{polyF}, Sign \textit{polyS}, Plane $p$. \Ensure Points \textit{aboveV}, Face \textit{aboveF}. \For{$i=1$ : size(\textit{polyF})} \State $id_1:=$ \textit{polyF}$(i)$, $id_2:=$ \textit{polyF}$(i+1)$; \State $v_1:=$ \textit{polyV}$(id_1)$, $v_2:=$ \textit{polyV}$(id_2)$; \State $s_1:=$ \textit{polyS}$(id_1)$, $s_2:=$ \textit{polyS}$(id_2)$; \Switch{classify($s_1,s_2$)} \Case{BELOW} \If{$v_2$ is INTER} \State \textit{aboveV} $\leftarrow v_2$, \textit{aboveF} $\leftarrow id_2$; \EndIf \EndCase \Case{ABOVE} \State \textit{aboveV} $\leftarrow v_2$, \textit{aboveF} $\leftarrow id_2$; \EndCase \Case{INTER} \State $v:=$ Line-Plane Intersection$(v_1,v_2,p)$; \State $id_v:=$ max(\textit{polyF})+1; \State \textit{aboveV} $\leftarrow v$, \textit{aboveF} $\leftarrow id_v$; \If{$v_1$ is BELOW} \State \textit{aboveV} $\leftarrow v_2$, \textit{aboveF} $\leftarrow id_2$; \EndIf \EndCase \EndSwitch \EndFor \State \Return \textit{aboveV}, \textit{aboveF}; \end{algorithmic} \end{algorithm}
\subsection{Line-Plane Intersection} \label{subsec:line_plane} This last algorithm computes the intersection point between a line, given as a pair of vertices, and a plane. It is a very simple and well-known procedure, and we report it here only for completeness. The intersection vertex $v$ is defined as the linear combination of vertices $v_1$ and $v_2$, with a coefficient $t$ which may also fall outside the standard range $[0,1]$. The coefficient $t$ is found as the negative ratio of two scalar products involving the plane normal $n$ and a generic point $s$ on the plane, other than $v_1$ and $v_2$, see Fig.~\ref{fig:polygon-line-plane}(b).
The normal is $p.n$, and for the point $s$ we can use one of the three points on the plane, $p.p_1, p.p_2, p.p_3$. If the denominator $D$ vanishes, it means either that the line is contained in the plane (if $N=0$ as well) or that the line does not intersect the plane. We treat these exceptions as errors, because in Algorithm~\ref{alg:polygon-plane} we only call this algorithm after checking that the edge $(v_1,v_2)$ properly intersects the plane $p$.
\begin{algorithm}[htbp] \caption{Line-Plane Intersection} \label{alg:line-plane} \begin{algorithmic}[1] \Require vertices $v_1,v_2$, Plane $p$. \Ensure vertex $v$. \State $N:=(p.n)\cdot(v_1-p.p_1)$; \State $D:=(p.n)\cdot(v_2-v_1)$; \Assert{$D!=0$} \State $t:=-N/D$; \State \Return $v:=v_1+t\ (v_2-v_1)$; \end{algorithmic} \end{algorithm}
\subsection{Computational complexity} \label{subsec:computational_complexity} Taking advantage of the modular organisation of our algorithms, we can estimate the computational cost of each algorithm separately and then combine them into a single formula. Let us consider the case of an input polyhedron $P$ with $n_v$ vertices and $n_f$ faces. In Algorithm~\ref{alg:kernel} we start by computing the polyhedron's AABB, which is $O(n_v)$; then for each face we estimate the position of the vertices of $K$ with respect to a plane with \textit{orient3d}. Unfortunately, we have to re-compute the signs of all the $n_{kv}$ vertices of $K$ at every iteration, as the plane changes from one step to the next: this means $O(n_{kv} n_f)$ operations. At step zero we have $n_{kv}=8$, since $K$ is the bounding box; then, in the worst case, that is if the object is convex, it can grow up to $n_v$ because $K$ eventually coincides with $P$. If the object is not convex, $n_{kv}$ can remain significantly lower than $n_v$, which translates into a much lower computational cost. In both cases, the difference between $n_{kv}$ and $n_v$ is also related to the number of coplanar faces of the model, which get agglomerated into a single face in the kernel, with a consequent reduction of the number of vertices. If the kernel is empty, the algorithm stops whenever we end up with fewer than three faces, therefore we can replace $n_f$ with the number of iterations needed to detect the non star-shapedness. This number cannot be estimated precisely, but we can drastically reduce it with the shuffle mode. The good news is that once we have the sign of all the vertices with respect to all the planes we are essentially done: the computational cost of Algorithms~\ref{alg:polyhedron-plane} and~\ref{alg:polygon-plane} amounts to visiting arrays and copying parts of them into other arrays, and Algorithm~\ref{alg:line-plane} only consists of four operations. We point out that navigating and duplicating arrays has a negligible cost in this scenario, as the faces of a generic polyhedron rarely contain more than 10 vertices. The only relevant operation in Algorithm~\ref{alg:polyhedron-plane} is the sorting of the vertices of the cap face (when such a face exists). This is done with a QuickSort routine, which is on average $O(n\log n)$ for a cap face with $n$ vertices. Since $n$ is much smaller than $n_{kv}$ and the number of cap faces is always smaller than $n_f$, this cost is negligible compared to $O(n_{kv} n_f)$. In summary, we can set an upper bound on the computational cost of our algorithm at $O(n_v n_f)$, but both $n_v$ and $n_f$ can significantly decrease if the model is not convex or has coplanar faces, and even more so if it is not star-shaped.
In the next section we will show how, in practice, on small polyhedra or on objects with many co-planar faces, the geometric method requires a computational time smaller than that of the algebraic approach, and it remains competitive on many examples, even complex ones.
\section{Tests and discussions} \label{sec:tests} We test our method in different settings, comparing its performance with the results obtained using our implementation of the algebraic method in CGAL. The comparison between the current method, including the shuffle version, and its previous version from \cite{SorgenteKernel} is presented in Section~\ref{subsec:comparison}. Experiments have been performed on a MacBook Pro equipped with a 2.3 GHz Intel Core i5 processor with four CPUs and 16 GB of RAM. The source code is written in C++ and is available at \url{https://github.com/TommasoSorgente/polyhedron_kernel}, together with all datasets. In the following subsections we present several plots and tables; we collect here some remarks on the notation adopted. Regarding plots, we colour the CGAL computational time in blue and our computational time in red, both in logarithmic scale. On the $x$-axis, depending on the context, we report the number of elements in the mesh or the number of vertices of the single model. In the tables we first report the number of elements or vertices of the mesh. Since all the considered objects have genus zero and their surface is purely triangular, Euler's formula ($V - E + F = 2$, combined with $3F = 2E$ for a closed triangular surface) gives $F = 2V - 4$, so the number of faces is approximately equal to twice the number of vertices. Therefore we only indicate the number of vertices, but the number of faces is easily computable. Then we show the computational times (in seconds) and the ratio between the CGAL time and ours. Note that the ratios are computed from the original time values, while the tables report truncated times; therefore they do not exactly correspond to the quotient of the values in the previous columns.
\subsection{Polyhedral meshes} \label{subsec:meshes} \begin{figure*}[htbp] \centering \includegraphics[width=\linewidth]{meshes.png} \caption{Polyhedral meshes and time plots from datasets \textit{poly-parallel}, \textit{poly-poisson} and \textit{poly-random}, with non-tetrahedral elements highlighted in blue.} \label{fig:meshes} \end{figure*} First, we test our algorithm in the setting it was developed for, i.e. the computation of the kernels of the elements of a 3D tessellation. To do so, we used the datasets from \cite{sorgente2021polyhedral}, available for download at \url{https://github.com/TommasoSorgente/vem-indicator-3D-dataset}. The meshes contained in these datasets are typical examples of tessellations which can be found in numerical analysis for the approximation of a PDE. We focus our attention on the polyhedral datasets: \textit{poly-parallel}, \textit{poly-poisson} and \textit{poly-random}. Each of them contains five tessellations of the unit cube with decreasing mesh size, from 100 to 100K vertices. The resulting meshes contain between 100 and 600K elements, most of which are tetrahedra, while $20\%$ of them are generic polyhedra (in blue in Fig.~\ref{fig:meshes}) obtained as the union of two tetrahedra.
\begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{elements.png} \caption{Examples of non-convex elements found in polyhedral meshes from Section~\ref{subsec:meshes} and their respective kernels.} \label{fig:elements} \end{figure} Non-tetrahedral elements are generated by the agglomeration of two tetrahedral elements, therefore they may also be non-convex, see Fig.~\ref{fig:elements}.
\begin{table}[htbp] \caption{Computational times for polyhedral meshes.} \label{table:time:mesh} \centering \begin{tabular}{ccccc} \hline\noalign{\smallskip} dataset & $\#$elements & our & CGAL & ratio\\ \noalign{\smallskip}\hline\noalign{\smallskip} \textit{poly-parallel} & 130 & 0.04 & 0.21 & 4.89 \\ & 1647 & 0.17 & 1.79 & 10.69 \\ & 16200 & 1.68 & 19.4 & 11.55 \\ & 129600 & 13.94 & 142.36 & 10.21 \\ & 530842 & 53.47 & 588.43 & 11 \\ \noalign{\smallskip}\hline \textit{poly-poisson} & 140 & 0.04 & 0.3 & 8.05 \\ & 1876 & 0.29 & 2.54 & 8.91 \\ & 16188 & 2.64 & 24.79 & 9.38 \\ & 146283 & 24.24 & 212.23 & 8.75 \\ & 601393 & 86.77 & 770.66 & 8.88 \\ \noalign{\smallskip}\hline \textit{poly-random} & 147 & 0.03 & 0.19 & 6.8 \\ & 1883 & 0.28 & 2.62 & 9.41 \\ & 18289 & 2.91 & 27.11 & 9.33 \\ & 161512 & 26.51 & 228.08 & 8.6 \\ & 598699 & 80.15 & 735.55 & 9.18 \\ \noalign{\smallskip}\hline \end{tabular} \end{table}
In Table~\ref{table:time:mesh} we report, for each mesh of each dataset, the number of elements, the computational times for both methods and the ratio between the CGAL time and ours. Moreover, at the bottom of Fig.~\ref{fig:meshes} we plot the times against the number of elements in the mesh. Both methods visibly scale linearly with respect to the number of elements, since the kernel is computed separately and independently for each element. Our method performs 8 to 11 times faster than CGAL, which means approximately one order of magnitude. Since the elements of these meshes have either 4 faces, if they are tetrahedra, or 6 faces, if they are the union of two tetrahedra, computing their kernel geometrically turns out to be much faster than solving a linear problem. In this case we did not use the shuffle mode, as the number of faces was so small that the visiting order was not relevant.
\subsection{Refinements} \label{subsec:refinements} As a second setting for our tests, instead of increasing the number of elements we wanted to measure the asymptotic behaviour of the methods as the number of faces and vertices of a single element explodes. We selected two polyhedra from the dataset \textit{Thingi10K} \cite{zhou2016thingi10k}: the so-called \textit{spiral} (ThingiID: 60246) and \textit{vase} (ThingiID: 85580). These models are given in the form of a surface triangular mesh but we treat them as a single volumetric cell, analyzing the performance of both algorithms as we refine them. In Table~\ref{table:time:refinements} we report the computational times and the ratio for each refinement. \begin{figure}[th] \centering \includegraphics[width=.8\linewidth]{spiral.png} \caption{Original \textit{spiral} model and its first refinement, with identical kernels.} \label{fig:spiral} \end{figure} The \textit{spiral} model is refined through a midpoint strategy: each face is subdivided by connecting its barycenter to its vertices. As a consequence, the planes induced by its faces remain the same and the kernels of the refined models are all equal (Fig.~\ref{fig:spiral}).
On this example our method performs on average 5.77 times better than the algebraic method (see Table~\ref{table:time:refinements}), and the computational time scales at a constant rate (see the plot in Fig.~\ref{fig:spiral}). Our implementation takes advantage of the fact that Algorithm~\ref{alg:polyhedron-plane} recognises the many coplanar faces and always performs Algorithm~\ref{alg:polygon-plane} the same number of times, independently of the number of faces.
\begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{vase.png} \caption{Original \textit{vase} model and its first refinement: small perturbations in the faces lead to slightly different kernels.} \label{fig:vase} \end{figure}
The \textit{vase} model is more complex, as it presents a curved surface which generates many different planes defining the kernel. Moreover, we refined this model using Loop's algorithm, which generated faces lying on completely new planes. This explains the difference between the two kernels in Fig.~\ref{fig:vase}: the general shape is similar, but the more faces we add to our model the more faces we find on the resulting kernel. Our geometric method improves the performance of the algebraic one by a factor of around 2 in the first refinements, but in the last two meshes the complexity increases drastically and CGAL turns out to be faster (see Table~\ref{table:time:refinements}). In this case, an efficient treatment of the faces is not sufficient to hide the quadratic nature of the geometric approach. Even the shuffle mode did not particularly improve the performance, since the object is star-shaped.
\begin{table}[htbp] \caption{Computational times for the \textit{spiral} and \textit{vase} refinements.} \label{table:time:refinements} \centering \begin{tabular}{ccccc} \hline\noalign{\smallskip} mesh & $\#$vertices & our & CGAL & ratio \\ \noalign{\smallskip}\hline\noalign{\smallskip} \textit{spiral} & 64 & 0.004 & 0.01 & 3.07 \\ & 250 & 0.007 & 0.04 & 6.27 \\ & 994 & 0.02 & 0.14 & 6.56 \\ & 3970 & 0.08 & 0.43 & 5.38 \\ & 15874 & 0.32 & 1.77 & 5.54 \\ & 63490 & 1.05 & 8.24 & 7.86 \\ \hline \textit{vase} & 99 & 0.02 & 0.03 & 1.55 \\ & 390 & 0.04 & 0.12 & 2.56 \\ & 1554 & 0.31 & 0.92 & 2.94 \\ & 6210 & 3.24 & 6.1 & 1.88 \\ & 24261 & 47.49 & 37.24 & 0.78 \\ & 36988 & 196.7 & 56.75 & 0.29 \\ \noalign{\smallskip}\hline \end{tabular} \end{table}
\subsection{Complex models} \label{subsec:complex_models} Last, we try to compute the kernel of some more complex models, taken again from the dataset \textit{Thingi10K} and treated as single volumetric cells. Even if our method is designed for dealing with polyhedra of relatively small size, as we already saw in Section~\ref{subsec:refinements} our algorithms are still able to compute the kernel of objects with thousands of vertices and faces. We filtered the \textit{Thingi10K} dataset selecting only ``meaningful'' models: objects with one connected component, genus zero, Euler characteristic greater than zero, closed, not degenerate and of size smaller than 1MB. Note that, even after applying these filters, the majority of the models are not star-shaped, i.e. they have an empty kernel. We discarded a few models for computational and stability reasons, for instance because one of the two algorithms failed to process them. The final collection, which we will call the \textit{Thingi dataset}, contains exactly 1806 distinct volumetric models. \begin{figure*}[htbp] \centering \includegraphics[width=.9\linewidth]{time_thingi.png} \caption{Thingi dataset times.
From left to right: all Thingi dataset, models with empty kernel, models with non-empty kernel.} \label{fig:time:thingi} \end{figure*} In Fig.~\ref{fig:time:thingi} we show the distribution of the computational times over the whole Thingi dataset, with a particular focus on the difference between models with empty kernel and models with non-empty kernel. Globally, the overall cost of computing all kernels is 173 seconds with our method against 518 seconds with CGAL, i.e. a threefold improvement. When the kernel is empty our algorithm is always faster than CGAL: the main reason for this is the use of the shuffle mode, which makes it extremely cheap to recognise non star-shaped elements. When the model is star-shaped the distinction between the two methods is not so clear anymore, as the results mainly depend on the shape and size of the object.
\begin{figure*}[ht!] \centering \begin{tabular}{cccc} \includegraphics[width=.2\linewidth]{plus.png} & \includegraphics[width=.2\linewidth]{star.png} & \includegraphics[width=.2\linewidth]{flex.png} & \includegraphics[width=.2\linewidth]{cross.png} \\ \textit{plus} & \textit{star} & \textit{flex} & \textit{cross} \\ \vspace{0.3cm} \\ \includegraphics[width=.2\linewidth]{part.png} & \includegraphics[height=.2\linewidth]{superellipse.png} & \includegraphics[width=.2\linewidth]{boteye.png} & \includegraphics[width=.2\linewidth]{button.png} \\ \textit{part} & \textit{super-ellipse} & \textit{bot-eye} & \textit{button} \\ \vspace{0.3cm} \\ \includegraphics[height=.2\linewidth]{rt4arm.png} & \includegraphics[width=.2\linewidth]{ball.png} & \includegraphics[height=.2\linewidth]{acorn.png} & \includegraphics[width=.2\linewidth]{muffin.png} \\ \textit{rt4-arm} & \textit{ball} & \textit{acorn} & \textit{muffin} \\ \end{tabular} \caption{Examples of our kernel evaluation for complex models. In the top row, models on which the geometric method is more efficient; in the middle, models for which the performances are similar; in the bottom row, models on which the algebraic method is preferable.} \label{fig:complex} \end{figure*}
To further investigate this point, in Fig.~\ref{fig:complex} we present the kernel computation of 10 selected star-shaped examples from this dataset. In the top row we have models on which the geometric method is by far more efficient: \textit{plus} (ThingiID: 1120761), \textit{star} (ThingiID: 313883), \textit{flex} (ThingiID: 827640) and \textit{cross} (ThingiID: 313882). In the middle row, models for which the performances are similar: \textit{part} (ThingiID: 472063), \textit{super-ellipse} (ThingiID: 40172), \textit{bot-eye} (ThingiID: 37276) and \textit{button} (ThingiID: 1329185). Then in the bottom row we show models on which the algebraic method is preferable: \textit{rt4-arm} (ThingiID: 39353), \textit{ball} (ThingiID: 58238), \textit{acorn} (ThingiID: 815480), \textit{muffin} (ThingiID: 101636). The computational times, together with the ones relative to the whole dataset, are reported in Table~\ref{table:time:complex}. \begin{table}[th] \caption{Computational times for complex models.
For the \textit{Thingi} dataset, the first number indicates the number of models instead of the number of vertices.} \label{table:time:complex} \centering \begin{tabular}{ccccc} \hline\noalign{\smallskip} mesh & $\#$vertices & our-shuffle & CGAL & ratio \\ \noalign{\smallskip}\hline\noalign{\smallskip} \textit{plus} & 448 & 0.004 & 0.09 & 22.75\\ \textit{star} & 9633 & 0.4 & 5.15 & 12.93\\ \textit{flex} & 834 & 0.02 & 0.27 & 12.76\\ \textit{cross} & 3914 & 0.19 & 2.1 & 11.12\\ \textit{part} & 5382 & 2.58 & 6.94 & 2.69\\ \textit{super-ellipse} & 290 & 0.02 & 0.04 & 2.05\\ \textit{bot-eye} & 453 & 0.03 & 0.03 & 0.96\\ \textit{button} & 1227 & 0.1 & 0.08 & 0.75\\ \textit{rt4-arm} & 655 & 0.13 & 0.09 & 0.67\\ \textit{ball} & 660 & 0.24 & 0.04 & 0.15\\ \textit{acorn} & 4114 & 4.35 & 0.55 & 0.13\\ \textit{muffin} & 8972 & 11.73 & 0.54 & 0.04\\ \textit{\textbf{Thingi dataset}} & 1806 & 172.88 & 518.2 & 2.99\\ \noalign{\smallskip}\hline \end{tabular} \end{table}
Once again, we notice that the size of the model impacts the performance of our method. Looking at Fig.~\ref{fig:time:thingi} we can see how the models for which our method performs worse than CGAL are all in the right part of the plot, corresponding to models with a high number of vertices. At the same time, the number of vertices of the element is not sufficient, by itself, to explain the superiority of one method over the other. For example, models \textit{star} and \textit{flex} have very different sizes and times, but their ratio is quite similar; the same holds for models \textit{part} and \textit{super-ellipse}, or \textit{ball} and \textit{acorn}. The shape of the object also plays an important role: over models with numerous adjacent co-planar faces like \textit{plus}, \textit{star} (whose bottom is completely flat) and \textit{cross} our method is preferable even when the size grows. As already seen in Section~\ref{subsec:refinements}, the presence of coplanar faces significantly improves the performance of our method. Vice versa, over elements with significant curvature like \textit{rt4-arm}, \textit{acorn} or \textit{muffin}, the algebraic method performs similarly to or better than ours, even on relatively small models like \textit{bot-eye}. Over these models it is still possible to compute a correct kernel with the geometric approach, but the ratio between the CGAL time and ours is of the order of $10^{-1}$ or even $10^{-2}$.
\subsection{Comparison with the previous version} \label{subsec:comparison} With respect to its introduction in \cite{SorgenteKernel}, we believe that the algorithm is now easier to read and to understand, thanks to the introduction of labels for storing the position of vertices, edges and faces with respect to a plane. Another significant difference is that the evaluation of the position of the vertices is now computed once and for all, at the top level (in Algorithm~\ref{alg:kernel}), while in the previous version every algorithm contained some vertex evaluations: this further reduced the computational complexity. In addition, we switched from the inexact predicates (in \cite{SorgenteKernel} we used the equivalent of \textit{orient3d-fast}) to the exact \textit{orient3d}, which resulted in increased precision in the treatment of nearly-coplanar faces, at a small extra cost which was easily compensated for by the other improvements. This switch required a small modification of the \textit{Plane} class: in view of the use of \textit{orient3d}, for every plane we also store three points lying on it, as sketched below.
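The following C++ fragment is only an illustrative sketch of such a class (the type \texttt{vec3} and the member names are our assumptions, and \texttt{orient3d} refers to a Shewchuk-style exact predicate):
\begin{verbatim}
// Illustrative sketch, not the released implementation.
struct vec3 { double x, y, z; };

struct Plane {
    vec3 n;            // normal, used e.g. to orient the cap face
    vec3 p1, p2, p3;   // three non-collinear points lying on the plane,
                       // kept so that an exact predicate of the form
                       // orient3d(p1, p2, p3, q) can classify a query
                       // point q as above, below or on the plane.
};
\end{verbatim}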
Last, the introduction of the shuffle mode made the computation of the kernels of non star-shaped objects almost immediate, marking a huge difference with respect to the algebraic method and our previous implementation. In Table~\ref{table:comparison} we report the differences between the old version of the algorithm and the current version, in standard and in shuffle mode. Over the meshes from Section~\ref{subsec:meshes} there are almost no differences between the three versions. For refined models, there is an improvement by a factor of 2 in the computation of the \textit{vase} refinements (we report the sum of the times for all the refinements). With complex models we have the greatest differences; in these cases we can also appreciate the advantage brought by the shuffle mode, which was not significant in the other tests.
\begin{table}[htbp] \caption{Performance comparison between the current implementation and the one presented in \cite{SorgenteKernel}. Only models with significant differences are reported.} \label{table:comparison} \centering \begin{tabular}{cccc} \hline\noalign{\smallskip} mesh & our-shuffle & our & our\cite{SorgenteKernel} \\ \noalign{\smallskip}\hline\noalign{\smallskip} \textit{spiral} (sum) & 1.04 & 1.01 & 1.03\\ \textit{vase} (sum) & 245.61 & 194.91 & 495.63\\ \textit{cross} & 0.19 & 4.15 & 9.14\\ \textit{part} & 2.58 & 5.22 & 12.7\\ \textit{bot-eye} & 0.03 & 0.17 & 0.25\\ \textit{rt4-arm} & 0.13 & 0.14 & 0.21\\ \textit{button} & 0.1 & 0.11 & 0.15\\ \textit{ball} & 0.24 & 0.28 & 0.58\\ \textit{acorn} & 4.35 & 276.13 & 626.72\\ \textit{muffin} & 11.73 & 58.62 & 129.38\\ \noalign{\smallskip}\hline \end{tabular} \end{table}
\section{Conclusions} \label{sec:conclusions} We presented an algorithm for the computation of the kernel of a polyhedron, based on the extension to the 3D case of the geometric approach commonly adopted in two dimensions. With respect to \cite{SorgenteKernel} we have optimized the algorithm in several ways: we now perform all the vertex comparisons in a single, preliminary step; we have introduced the use of exact geometric predicates and a random visiting strategy of the faces, which considerably improve the performance of the method over non star-shaped, complex objects. The algorithm proved to be robust and reliable, as it successfully computed the kernel of every polyhedron considered. We compared the efficiency of our algorithm with that of the CGAL implementation of the algebraic approach to the problem. From a theoretical point of view, the computational complexity evaluation of Section~\ref{subsec:computational_complexity} suggests that our method is in general quadratic, while the algebraic approach has a lower bound of $n\log(n)$. Nonetheless, we showed throughout Section~\ref{sec:tests} that in several circumstances our approach outperforms the algebraic one. The geometric approach proved to be significantly faster than the algebraic one when dealing with models satisfying at least one of the following conditions: \begin{enumerate} \item the size of the model is small; \item the model contains a significant number of co-planar faces; \item the kernel is empty. \end{enumerate} Our method performs significantly better than the algebraic approach over polyhedra with a limited number of vertices and faces, as shown in Section~\ref{subsec:meshes}, making it particularly suitable for the analysis of volumetric tessellations with non-convex elements.
Indeed, we point out that our algorithm is specifically designed to be used with simple polyhedra, possibly composing a bigger and more complex 3D model, rather than with a complete model itself. This behaviour is particularly evident with model \textit{vase} from Section~\ref{subsec:refinements}: as long as the size of the model remains reasonable our method is faster than CGAL, while beyond a certain bound the algebraic method becomes more efficient. Again, models like \textit{super-ellipse} or \textit{flex} from Section~\ref{subsec:complex_models} have few or zero co-planar faces and a significant kernel, but the size of these meshes is small and the geometric approach offers better performance. According to Fig.~\ref{fig:time:thingi}, a bound on the number of vertices could possibly be set at around $10^3$. When the size of the polyhedron increases, our method is still particularly efficient if the model has numerous co-planar faces, due for instance to the presence of flat regions on the surface. This is a very common situation in models representing mechanical parts. For instance, models \textit{star} and \textit{part} from Section~\ref{subsec:complex_models} present large flat regions despite having a significant size, and again the geometric approach is faster on these models. Another scenario in which the geometric approach outperforms the algebraic one is with non star-shaped objects. The differences in this case are so evident that one could even imagine using our algorithm to determine, in a few iterations in shuffle mode, whether a model is actually star-shaped, even without computing the kernel itself. On the other hand, the algebraic approach is likely to remain preferable over domains which do not satisfy any of the above conditions: star-shaped objects with thousands of vertices and high surface curvature. In conclusion, with this work we do not aim at completely replacing the algebraic approach for the kernel computation, but rather at providing an alternative which can be preferred in specific cases, such as the quality analysis of the elements in a 3D tessellation, in the same way as bubble-sort is to be preferred to optimal sorting algorithms when dealing with very small arrays. As a future development, we plan to integrate into our algorithm the promising \textit{indirect predicates} introduced in \cite{attene2020indirect}. Numerical problems remain a critical issue in the computation of geometric constructions like the kernel, independently of the approach adopted, and we believe indirect predicates could enormously help in enhancing the robustness of the algorithm. Moreover, we plan to include this tool in a suite for the generation and analysis of tessellations of three-dimensional domains, aimed at PDE simulations. The kernel of a polyhedron has a great impact on its geometrical quality, and the geometrical quality of the elements of a mesh determines the accuracy and the efficiency of a numerical method over it. We are therefore already using this algorithm in works like \cite{sorgente2022role} to better understand the correlations between the shape of the elements and the performance of the numerical simulations, and to be able to adaptively generate, refine or fix a tessellation accordingly.
\section*{Acknowledgements} We would like to thank Dr. M. Manzini for the valuable discussions and suggestions, and all the people from the IMATI institute involved in the CHANGE project.
Special thanks are also given to the anonymous reviewers for their comments and suggestions. This work has been partially supported by the ERC Advanced Grant CHANGE contract N.694515. \bibliographystyle{plain}
\section{Introduction} The study of isospectral matrix polynomials lies at the crossroads of integrable systems and algebraic geometry. An $n \times n$ \textit{matrix polynomial} of degree $m$ is an expression of the form $P(\lambda)= A_0 + A_1\lambda + \dots + A_m \lambda^m$ where $A_0, \dots, A_m \in \mathrm{Mat}_n(\mathbb{C})$ are complex $n \times n$ matrices, and $\lambda$ is a complex variable. It is well known that there is a close relation between matrix polynomials and line bundles over plane curves. With a given matrix polynomial $P(\lambda)$, one can associate a plane algebraic curve $\{ (\lambda, \mu) \in \mathbb{C}^2 \mid \det(P(\lambda) - \mu \cdot \mathrm{Id}) = 0\}$, called the \textit{spectral curve}, and a line bundle over the spectral curve given by the eigenvectors of $P(\lambda)$. Conversely, given a plane algebraic curve $C$ and a line bundle $E \to C$ satisfying certain genericity assumptions, one can construct a matrix polynomial whose spectral curve is $C$ and whose eigenvector bundle is isomorphic to $E$.\par When the spectral curve is smooth, the relation between matrix polynomials and line bundles is described by the following classical result. Fix positive integers $m$ and $n$, and let $\modspace{m}{n}{C}$ be the moduli space of $n \times n$ degree $m$ matrix polynomials with spectral curve $C$, considered up to conjugation by constant matrices (for simplicity of notation, we omit the dependency of $\pazocal M_C$ on $m$ and $n$; note that the dimension $n$ can always be found from the equation of the spectral curve $C$). Then, if the curve $C$ is smooth and satisfies certain conditions at infinity, the moduli space $\modspace{m}{n}{C}$ is a smooth variety biholomorphic to the complement of the theta divisor in the Jacobian of~$C$. For hyperelliptic curves, this result is due to D.\,Mumford. In~\cite{Mumford}, Mumford used the space $\modspace{m}{n}{C}$ to give an explicit algebraic construction of the hyperelliptic Jacobian. For general smooth curves $C$, the space $\modspace{m}{n}{C}$ was studied by P.\,Van Moerbeke and D.\,Mumford \nolinebreak\cite{mvm} in connection with periodic difference operators, and by M.\,Adler and P.\,Van Moerbeke \nolinebreak\cite{adler}, A.G.\,Reyman and M.A.\,Semenov-Tian-Shansky~\cite{Reyman2}, and A.\,Beauville \nolinebreak\cite{beauville2} in the context of finite-dimensional integrable systems. It is worth mentioning that the technique used in these papers is to a large extent based on earlier works of S.P.\,Novikov's school on infinite-dimensional integrable systems, see, e.g., the review \cite{DKN}. \par In the present paper, we are interested in generalizing the relation between isospectral matrix polynomials and Jacobian varieties to singular curves. It is known that for certain singular curves the space $\modspace{m}{n}{C}$ is a variety isomorphic to the complement of the theta divisor in the compactification of the generalized Jacobian of $C$, see e.g. \cite{inoue}. However, in general, the space $\modspace{m}{n}{C}$ associated with a singular curve may even fail to be Hausdorff. For this reason, we replace the space $\modspace{m}{n}{C}$ with a slightly bigger space $\pazocal P^B_C$ defined as the set of $n \times n$ degree $m$ matrix polynomials whose spectral curve is a given curve $C$ and whose leading coefficient $A_m$ is equal to a fixed matrix $B$.
The latter space is, by definition, an affine algebraic variety for any curve $C$ and any matrix $B$ (by an algebraic variety we always mean an algebraic set, i.e. we allow varieties to be reducible). Note that the space $\modspace{m}{n}{C}$ can be recovered from $\pazocal P^{B}_C$ by taking the quotient of the latter with respect to the conjugation action of the centralizer of $B$, provided that this quotient is well-defined.\par For a smooth curve $C$, the set $\pazocal P^B_C$ was explicitly described by L.\,Gavrilov \cite{Gavrilov2}. Namely, he showed that if $C$ is smooth, then $\pazocal P^B_C$ can be naturally identified with an open dense subset in the generalized Jacobian of the curve obtained from $C$ by identifying points at infinity. We note that the latter Jacobian has a natural fiber bundle structure over the Jacobian of $C$, which agrees with the above description of the moduli space $\modspace{m}{n}{C}$ as the quotient of $\pazocal P^B_C$.\par In the present paper, we consider the case of an arbitrary nodal and possibly reducible curve~$C$. In this case, the isospectral set $\pazocal P^B_C$ turns out to have a rich combinatorial structure. This structure may be described in terms of orientations on the dual graph of $C$, or in terms of a certain convex polytope, the so-called \textit{graphical zonotope} \cite{postnikov} associated with the dual graph. Based on this combinatorial description, we make a conjecture on the relation of the set $\pazocal P^B_C$ to the canonical compactified Jacobian of $C$ described by V.\,Alexeev \cite{Alexeev}.\par The interest in varieties $\pazocal P^B_C$ associated with singular curves also comes from integrable systems. There is a natural algebraically integrable system on matrix polynomials of the form $ A_0 + A_1 \lambda + \dots + A_{m-1}\lambda^{m-1} + B\lambda^m$ whose fibers are precisely the sets $\pazocal P^B_C$, see~\cite{Reyman2, beauville2, Gavrilov2}. So, the results of the present paper can be regarded as a description of singular fibers for the integrable system on matrix polynomials. We also hope that these results will be useful on their own, in particular, for understanding the structure of compactified Jacobians.\par The main results of the paper are Theorems \ref{thm1} and \ref{thm3}. Theorem \ref{thm1} describes the variety $\pazocal P^B_C$ associated with a nodal curve $C$ as a disjoint union of smooth strata, each one being an open subset in a certain generalized Jacobian. Theorem \ref{thm3} describes the adjacency of these strata. Two other results are Theorem \ref{thm2}, which describes the local structure of the variety $\pazocal P^B_C$, and Theorem \ref{thm4}, which characterizes strata of $\pazocal P^B_C$ corresponding to irreducible and completely reducible matrix polynomials. Based on Theorem \ref{thm4}, we make a conjecture on the relation of the variety $\pazocal P^B_C$ to the canonical compactified Jacobian of $C$ (Conjecture \ref{conjCCJ}).\par The main emphasis of the present paper is on constructions and examples. Detailed proofs will be published elsewhere. \par \bigskip {\bf Acknowledgments.} This work was partially supported by the Dynasty Foundation Scholarship and an NSERC research grant.
\section{Statement of the problem}\label{mainProblem} Fix positive integers $m,n$, and let \begin{equation*}\pazocal P_{m,n} \mathrel{\mathop:}= \left\{ P(\lambda) = A_0 + A_1 \lambda + \dots + A_{m-1}\lambda^{m-1} + A_m\lambda^m \mid A_0, \dots, A_{m} \in \mathrm{Mat}_n(\Complex) \right\} \end{equation*} be the space of $n \times n$ degree $m$ matrix polynomials. For each $P \in \pazocal P_{m,n}$, let \begin{align*} C_P\mathrel{\mathop:}= \{ (\lambda, \mu) \in \mathbb{C}^2 \mid\det(P(\lambda)-\mu \cdot \mathrm{Id}) = 0\}\end{align*} be the zero set of its characteristic polynomial. The curve $C_P$ is called the \textit{spectral curve} associated with a matrix polynomial $P$. This curve can be regarded as the graph of the spectrum of $P(\lambda)$ as $\lambda$ varies. Further, fix a matrix $B \in \mathrm{Mat}_n(\Complex)$ with distinct eigenvalues, and let $$ \pazocal P^\lt_{m,n} = \{ P(\lambda) = A_0 + A_1 \lambda + \dots + A_{m-1}\lambda^{m-1} + A_m\lambda^m \in \pazocal P_{m,n} \mid A_m = B \} $$ be the space of matrix polynomials $P \in \pazocal P_{m,n}$ with leading coefficient $B$. The isospectral variety $\pazocal P^B_C$ corresponding to a plane curve $C$ is defined as \begin{align*} \pazocal P^B_C \mathrel{\mathop:}= \{P \in \pazocal P^\lt_{m,n} \mid C_P = C \}. \end{align*} The main problem of the present paper is the following: \begin{problem}\label{problem1} Describe the structure of the set $\pazocal P^B_C$ for an arbitrary nodal, possibly reducible, curve $C$. \end{problem} The solution of Problem \ref{problem1} is given by Theorems \ref{thm1}, \ref{thm2}, and \ref{thm3} below. We begin with several preliminary remarks. First, note that if the matrices $B$ and $B'$ are similar, then the corresponding isospectral varieties $\pazocal P^{B}_C$ and $\pazocal P^{B'}_C$ are naturally isomorphic. For this reason, we may confine ourselves to the case of a diagonal matrix $B$ (recall that the eigenvalues of $B$ are pairwise distinct).\par Further, note that the choice of $m$, $n$, and $B$ imposes restrictions on the curves which may arise as spectral curves. Indeed, let $C$ be a curve given by $Q(\lambda, \mu) = 0$. Assume that $\pazocal P^B_C $ is non-empty, and let $$P(\lambda) = A_0 + A_1 \lambda + \dots + A_{m-1}\lambda^{m-1} + B\lambda^m \in \pazocal P^B_C. $$ Introduce new variables $z,w$ such that $\lambda = 1/z$, and $\mu = w/z^m$. Then \begin{align} \begin{aligned}\label{specCond}\lim_{z \to 0}\left(Q \left({\frac{1}{z}}, \frac{w}{z^{m}}\right)z^{nm} \right) &= \lim_{z \to 0}\left(\det\left(P - \frac{w}{z^m}\cdot\mathrm{Id}\right)z^{mn}\right) \\ &= \lim_{z \to 0}\vphantom{\int}\det(B + zA_{m-1} + \dots + z^m A_0 - w\cdot\mathrm{Id}) = \det(B- w\cdot\mathrm{Id}). \end{aligned} \end{align} Geometrically, this condition means that the Newton polygon of the polynomial $Q(\lambda, \mu)$ lies inside the triangle with vertices $(0,0)$, $(0,n)$, $(mn, 0)$, and that the coefficients of the monomials lying on the hypotenuse of this triangle are equal to the corresponding coefficients of the characteristic polynomial of the matrix $B$ (see Figure~\ref{newton}). In what follows, we consider only those curves $C$ which satisfy this condition. Note that for smooth and nodal curves, the above condition turns out to be not only a necessary, but also a sufficient condition for a curve $C$ to be the spectral curve of a suitable matrix polynomial $P$.
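To illustrate condition \eqref{specCond} in the simplest case (the worked computation below is ours), take $m = 1$, $n = 2$, and $B$ diagonal with diagonal entries $b_1, b_2$. Writing $Q(\lambda, \mu) = \sum_{i + j \le 2} c_{ij}\lambda^i\mu^j$, we have
\begin{align*}
\lim_{z \to 0}\left(Q\left(\frac{1}{z}, \frac{w}{z}\right)z^{2}\right) = c_{02}w^2 + c_{11}w + c_{20},
\end{align*}
so condition \eqref{specCond} reads $c_{02} = 1$, $c_{11} = -(b_1 + b_2)$, $c_{20} = b_1 b_2$, i.e. the degree two part of $Q$ is $(\mu - b_1\lambda)(\mu - b_2\lambda)$. In particular, the closure of $C$ meets the line at infinity in the two directions $\mu = b_1\lambda$ and $\mu = b_2\lambda$.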
Also note that for matrix polynomials of degree one, the above condition simply means that the curve $C$ has degree $n$, and that its closure in \nolinebreak ${\mathbb{C}}\mathrm{P}^2$ intersects the line at infinity at specific points prescribed by the eigenvalues of the matrix $B$. \begin{figure}[t] \centerline{ \begin{tikzpicture}[thick,scale = 1.2] \draw [->] (0,0) --(0,3); \draw [->] (0,0) --(5,0); \draw (3,0) -- (0,1.5); \fill[opacity = 0.3] (0,0) -- (3,0) -- (0,1.5) -- cycle; \fill (0,0) circle (1.2pt); \fill (0,0.5) circle (1.2pt); \fill (0,1) circle (1.2pt); \fill (0,1.5) circle (1.2pt); \fill (0.5,0) circle (1.2pt); \fill (0.5,0.5) circle (1.2pt); \fill (0.5,1) circle (1.2pt); \fill (1,0) circle (1.2pt); \fill (1,0.5) circle (1.2pt); \fill (1,1) circle (1.2pt); \fill (1.5,0) circle (1.2pt); \fill (1.5,0.5) circle (1.2pt); \fill (2,0) circle (1.2pt); \fill (2,0.5) circle (1.2pt); \fill (2.5,0) circle (1.2pt); \fill (3,0) circle (1.2pt); \node () at (-0.2, 1.5) {$n$}; \node () at (3, -0.2) {$mn$}; \node () at (-0.2, 2.8) {$\mu$}; \node () at (4.8, -0.2) {$\lambda$}; \node (A) at (2.5,2) {\small \parbox{2.15cm}{Coefficients of $\det(B - w \cdot \mathrm{Id})$}}; \draw [->, dotted] (A) -- (0,1.5); \draw [->, dotted] (A) -- (1,1); \draw [->, dotted] (A) -- (2,0.5); \draw [->, dotted] (A) -- (3,0); \end{tikzpicture} } \caption{Newton polygon of a spectral curve}\label{newton} \end{figure} Finally, we note that there is a natural action of the group \begin{align*}G^{B} \mathrel{\mathop:}= \{ U \in \mathbb{P}\mathrm{GL}_n(\mathbb{C}) \mid U{B} = {B} U \}\end{align*} on $\pazocal P^B_C$ by conjugation. Provided that $\pazocal P^B_C$ is non-empty, the quotient $\pazocal P^B_C / G^B$ coincides with the moduli space $$\modspace{m}{n}{C} = \{P \in \pazocal P_{m,n} \mid C_P = C \} \,/\,\mathrm{GL}_n(\mathbb{C}),$$ where the group $\mathrm{GL}_n(\mathbb{C})$ acts by conjugation. Indeed, there is a natural map $ \pi \colon \pazocal P^B_C \to \modspace{m}{n}{C} $ that sends every $P \in \pazocal P^B_C$ to its conjugacy class. Obviously, fibers of this map are precisely $G^B$ orbits. Furthermore, since $ \pazocal P^B_C$ is non-empty, there \textit{exists} $P \in \modspace{m}{n}{C}$ such that its leading coefficient is conjugated to $B$. But this implies that this is so \textit{for any} $P \in \modspace{m}{n}{C}$, because the eigenvalues of the leading coefficient of a matrix polynomial are uniquely determined by the spectral curve. Therefore, the map $\pi$ is surjective, and we have $$\modspace{m}{n}{C} = \pazocal P^B_C / G^B. $$ \begin{example}\label{twoLines} The following example shows that if the curve $C$ is reducible, then the space $\modspace{m}{n}{C}$ is, generally speaking, not a variety. Let $m =1$, $n=2$, and let $C$ be the union of two straight lines $\mu = a_1 + b_1 \lambda$ and $\mu = a_2 + b_2 \lambda$. To describe the space $\modspace{m}{n}{C}$, we first describe the variety $\pazocal P^B_C$ for $B$ diagonal with diagonal entries $b_1, b_2$. 
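The ``simple explicit computation'' mentioned below can be recorded in one line (the parametrization of the constant term is ours). Writing $P(\lambda) = A_0 + B\lambda$ with $A_0 = \left(\begin{array}{cc} a & z_1 \\ z_2 & d \end{array}\right)$, we get
\begin{align*}
\det(P(\lambda) - \mu\cdot\mathrm{Id}) = (a + b_1\lambda - \mu)(d + b_2\lambda - \mu) - z_1 z_2,
\end{align*}
and, since $b_1 \neq b_2$, this coincides with $(a_1 + b_1\lambda - \mu)(a_2 + b_2\lambda - \mu)$ if and only if $a = a_1$, $d = a_2$, and $z_1 z_2 = 0$.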
A simple explicit computation shows that $\pazocal P^B_C$ is a disjoint union of three $G^B$ orbits $\mathrm{Orb}_1 \sqcup \mathrm{Orb}_2 \sqcup \mathrm{Orb}_3$ where \begin{align*} &\qquad\qquad\qquad\qquad \mathrm{Orb}_1 = \left\{ \left(\begin{array}{cc}a_1 + b_1 \lambda & 0 \\0 & a_2 + b_2 \lambda \end{array}\right) \right\}, \\ &\mathrm{Orb}_2 = \left\{ \left(\begin{array}{cc}a_1 + b_1\lambda &z \in \mathbb{C}^* \\0 & a_2 + b_2 \lambda \end{array}\right) \right\}, \quad \mathrm{Orb}_3 = \left\{ \left(\begin{array}{cc}a_1 + b_1 \lambda & 0 \\z \in \mathbb{C}^* & a_2 + b_2 \lambda\end{array}\right) \right\}. \end{align*} Note that the closure of the orbit $\mathrm{Orb}_2$, as well as the closure of the orbit $\mathrm{Orb}_3$, contains the orbit $\mathrm{Orb}_1$, therefore the quotient $\modspace{m}{n}{C} = \pazocal P^B_C / G^B $ is non-Hausdorff. \end{example} \section{Stratification of the isospectral variety}\label{genCons} Let $C$ be an arbitrary nodal, possibly reducible, curve. In this section, with each matrix polynomial $P \in \pazocal P^B_C$, we associate a graph $\Gamma_P$ and a divisor $D_P$ on $\Gamma_P$, i.e. an integer-valued function on vertices of $\Gamma_P$. This construction leads to a stratification of $\pazocal P^B_C$ with strata indexed by pairs (a graph $\Gamma$, a divisor $D$ on $\Gamma$) .\par \tikzstyle{vertex} = [coordinate] \begin{figure}[t] \centering \begin{tikzpicture}[thick] \draw (0,0) -- (1.5,2); \draw (2,0) -- (0.5,2); \draw (-0.2,0.5) -- (2.2,0.5); \draw[->, dashed] (1.8,1.2) -- (2.7,1.2); \node () at (3.5,1) { \begin{tikzpicture}[thick, rotate = 180] \draw (0,0) -- (0.5, 0.86); \fill (0,0) circle [radius=1.5pt]; \fill (1,0) circle [radius=1.5pt]; \fill (0.5,0.86) circle [radius=1.5pt]; \draw (0,0) -- (1,0); \draw (0.5, 0.86) -- (1,0); \end{tikzpicture} }; \node () at (7,1.2) {\includegraphics[scale = 1]{curve.pdf}}; \draw[->, dashed] (7.8,1.2) -- (8.7,1.2); \node () at (9.5,1) { \begin{tikzpicture}[thick] \fill (0,0) circle [radius=1.5pt]; \draw (0,0) .. controls +(-0.7,-0.7) and +(+0.7,-0.7) .. (0,0); \end{tikzpicture} }; \end{tikzpicture} \caption{Nodal curves and their dual graphs}\label{curveGraph} \end{figure} Recall that \textit{dual graph} $\Gamma_C$ of a nodal curve $C$ is defined as follows. The vertices of $\Gamma_C$ are irreducible components of $C$. The edges of $\Gamma_C$ are nodes. Two vertices are joined by an edge if there is a node joining the corresponding irreducible components. Nodes which lie in only one irreducible component correspond to loops in $\Gamma_C$ (see e.g. Figure~\ref{curveGraph}).\par Now we associate a subgraph $\Gamma_P \subset \Gamma_C$ with each matrix polynomial in $P^B_C$. Let $P \in \pazocal P^B_C$. Then, by definition of the spectral curve, for any point $ (\lambda, \mu) \in C$ we have $\dim \Ker(P(\lambda) - \mu \cdot \mathrm{Id}) > 0$. Moreover, if $ (\lambda, \mu)$ is a smooth point of $C$, then $\dim \Ker(P(\lambda) - \mu \cdot \mathrm{Id}) = 1$, i.e. the $\mu$-eigenspace of the matrix $P(\lambda)$ is one-dimensional (see e.g.~\cite{babelon2003introduction}, Chapter~5.2). If, on the contrary, $ (\lambda, \mu) \in C$ is a double point, then there are two possibilities: either $\dim \Ker(P(\lambda) - \mu \cdot \mathrm{Id}) = 1$, or $\dim \Ker(P(\lambda) - \mu \cdot \mathrm{Id}) = 2$ (cf. Section~2 of~\cite{adams1990}). 
Now, define $\Gamma_P \subset \Gamma_C$ as the graph obtained from the dual graph $\Gamma_C$ by removing all edges of $\Gamma_C$ corresponding to nodes $(\lambda, \mu) \in C$ such that $\dim \Ker(P(\lambda) - \mu \cdot \mathrm{Id}) = 2$. Note that the subgraph $\Gamma_P \subset \Gamma_C$ is \textit{generating}, i.e. it has the same set of vertices as the graph $\Gamma_C$ itself. \begin{example}\label{twoLines2} Let $m =1$, $n=2$, and let the curve $C$ be the union of two lines $C_1 = \{\mu = a_1 + b_1 \lambda\}$ and $C_2 = \{\mu = a_2 + b_2 \lambda\}$, as in Example \ref{twoLines}. Let also $B$ be a diagonal matrix with diagonal entries $b_1$, $b_2$. Then we have $\pazocal P^B_C = \mathrm{Orb}_1 \sqcup \mathrm{Orb}_2 \sqcup \mathrm{Orb}_3$ (see Example \ref{twoLines}). The only double point of the curve $C$ is $(\lambda_0, \mu_0) = C_1 \cap C_2$, and the dual graph $\Gamma_C$ is two vertices joined by an edge. For $P \in \mathrm{Orb}_1$, we have $\dim \Ker(P(\lambda_0) - \mu_0 \cdot \mathrm{Id}) = 2$, so the corresponding graph $\Gamma_P$ is two disjoint vertices. For $P \in \mathrm{Orb}_2$ or $P \in \mathrm{Orb}_3$, we have $\dim \Ker(P(\lambda_0) - \mu_0 \cdot \mathrm{Id}) = 1$, so $\Gamma_P = \Gamma_C$. \end{example} As can be seen from Example \ref{twoLines2}, different components of $\pazocal P^B_C$ may correspond to the same generating subgraph $\Gamma_P \subset \Gamma_C$. To distinguish between these components, with each matrix polynomial $P \in \pazocal P^B_C$ we associate an additional invariant, a divisor $D_P$ on the graph $\Gamma_P$. This divisor is defined as follows.\par Let $C_1, \dots, C_k$ be the irreducible components of $C$. Consider the normalization $X$ of $C$, i.e. the disjoint union of connected Riemann surfaces $X_1, \dots, X_k$, where $X_i$ is the normalization of $C_i$. Note that $X$ is equipped with two meromorphic functions $\lambda$ and $\mu$ coming from the embedding of the curve $C$ into $\mathbb{C}^2$. For all but finitely many points $u \in X$, we have $\dim \Ker(P(\lambda(u)) - \mu(u) \cdot \mathrm{Id}) = 1 $. This allows us to construct a densely defined map $\psi_P \colon X \to {\mathbb{C}}\mathrm{P}^{n-1}$ which sends $u$ to the one-dimensional subspace $ \Ker(P(\lambda(u)) - \mu(u) \cdot \mathrm{Id}) \subset \mathbb{C}^n$. Since $X$ is smooth, the map $\psi_P$ extends to a holomorphic map defined on the whole $X$. Now, consider a line bundle over $X$ defined by taking the line $\psi_P(u)$ for every point $u \in X$. Denote this line bundle by $E_P$, and let $E_P^*$ be the dual bundle. The total degree of the bundle $E_P^*$ can be found in the same way as in the smooth case (see e.g. \cite{babelon2003introduction}, Chapter 5.2): \begin{align} \label{totalDegreeFormula} \sum_{i=1}^k \deg E_P^*\mid_{X_i} \,= \sum_{i=1}^k \,(\mathrm{genus}(X_i) - 1) + |E(\Gamma_P)| + n \end{align} where $|E(\Gamma_P)|$ stands for the number of edges of $\Gamma_P$, or, equivalently, the number of nodes $(\lambda, \mu) \in C$ such that $\dim \Ker(P(\lambda) - \mu \cdot \mathrm{Id}) = 1$. For smooth curves, this formula reduces to the standard one \begin{align}\label{smoothDegree} \deg E_P^*= \mathrm{genus}(X) + n -1. \end{align} \begin{example}\label{twoLines3} The following example shows that while the {total} degree of the bundle $E_P^*$ can be computed in terms of the curve $C$ and the graph $\Gamma_P$, the degree of the restriction of $E_P^*$ to $X_i$ can be different for different matrix polynomials $P$ with the same spectral curve and the same graph. 
\par Let us again consider the two-lines spectral curve from Examples \ref{twoLines}, \ref{twoLines2}. The corresponding Riemann surface is $X = X_1 \sqcup X_2$ where $X_i$ is the closure of the line $C_i = \{\mu = a_i + b_i\lambda\}$ in ${\mathbb{C}}\mathrm{P}^2$. Take a matrix polynomial $P \in \mathrm{Orb}_2$. Then for any point $u = (\lambda, \mu) \in C_1$, we have $$ P - \mu \cdot \mathrm{Id} = \left(\begin{array}{cc}0 & z \\0 & (a_2 + b_2\lambda) - (a_1 + b_1\lambda)\end{array}\right), $$ so $ \Ker(P - \mu \cdot \mathrm{Id})$ is spanned by the vector $w = (1,0)$, which implies that the restriction of the bundle $E_P$ to $X_1$ is a trivial bundle, and $\deg E_P^*\mid_{X_1} = 0$. Further, for $u = (\lambda, \mu) \in C_2$, we have $$ P - \mu \cdot \mathrm{Id} = \left(\begin{array}{cc}(a_1 + b_1\lambda) - (a_2 + b_2\lambda)\ & z \\0 & 0\end{array}\right), $$ so $ \Ker(P - \mu \cdot \mathrm{Id})$ is spanned by the vector $w = (z, (a_2 + b_2\lambda) - (a_1 + b_1\lambda))$. The vector-function $w$ can be regarded as a meromorphic section of the bundle $E_P$ over $X_2$ with one pole of order one at infinity, so $\deg E_P\mid_{X_2} = -1$, and $\deg E_P^*\mid_{X_2} = 1$. Analogously, for $P \in \mathrm{Orb}_3$, we get $\deg E_P^*\mid_{X_1} = 1$, and $\deg E_P^*\mid_{X_2} = 0$. Note that in both cases $P \in \mathrm{Orb}_2$ and $P \in \mathrm{Orb}_3$, the total degree of the line bundle $ E_P^*$ is equal to one, as predicted by formula \eqref{totalDegreeFormula}. \end{example} Now, we use the numbers $\deg E_P^*\mid_{X_i} $ to define a divisor $D_P$ on the graph $\Gamma_P$. Take any vertex $v_i$ of $\Gamma_P$. By definition, this vertex corresponds to an irreducible component $C_i$ of $C$ or, equivalently, to a connected component $X_i \subset X$. Define \begin{align}\label{dpdef} D_P(v_i):= \deg E^*_P\mid_{X_i}\!-\, (\mathrm{genus}(X_i) + n_i - 1) \end{align} where $n_i$ is the number of poles of the function $\lambda$ on $X_i$ or, equivalently, the degree of the equation of the irreducible component $C_i$ in the variable $\mu$. Note that for smooth curves the right-hand side of \eqref{dpdef} is zero (cf. formula \eqref{smoothDegree}), so the divisor $D_P$ measures the deviation of the degree of the line bundle $E^*_P\mid_{X_i}$ from the degree that it would have in the smooth case. 
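As a quick check of formula \eqref{dpdef} in this example (the computation is ours and only combines Example~\ref{twoLines3} with the definition): each $X_i$ is a copy of ${\mathbb{C}}\mathrm{P}^1$, so $\mathrm{genus}(X_i) = 0$, and the function $\lambda$ has a single pole on each component, so $n_i = 1$; hence $D_P(v_i) = \deg E_P^*\mid_{X_i}$. This gives $D_P = v_2$ for $P \in \mathrm{Orb}_2$, $D_P = v_1$ for $P \in \mathrm{Orb}_3$, and $D_P = 0$ for $P \in \mathrm{Orb}_1$, since in the latter case both restrictions of $E_P$ are trivial.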
\begin{figure}[t] \centerline{ \begin{tikzpicture}[thick,scale = 1.2] \node (a) at (0,0) { \begin{tikzpicture}[scale = 1.2] \node () at (-4,0) {$\left(\begin{array}{cc}a_1 + b_1 \lambda & 0 \\0 & a_2 + b_2 \lambda \end{array}\right)$}; \fill (0,0) circle (1.2pt); \fill (1,0) circle (1.2pt); \draw [dashed] (0,0) -- (1,0); \node () at (-0.1,-0.2) {$0$}; \node () at (1.1,-0.2) {$0$}; \draw [->, dashed] (-1.75,0) -- (-0.75,0); \end{tikzpicture} }; \node (b) at (0,-1.5) { \begin{tikzpicture}[scale = 1.2] \node () at (-4,0) {$\left(\begin{array}{cc}a_1 + b_1 \lambda & * \\0 & a_2 + b_2 \lambda \end{array}\right)$}; \fill (0,0) circle (1.2pt); \fill (1,0) circle (1.2pt); \draw [] (0,0) -- (1,0); \node () at (-0.1,-0.2) {$0$}; \node () at (1.1,-0.2) {$1$}; \draw [->, dashed] (-1.75,0) -- (-0.75,0); \end{tikzpicture} }; \node (b) at (0,-3) { \begin{tikzpicture}[scale = 1.2] \node () at (-4,0) {$\left(\begin{array}{cc}a_1 + b_1 \lambda & 0 \\ * & a_2 + b_2 \lambda \end{array}\right)$}; \fill (0,0) circle (1.2pt); \fill (1,0) circle (1.2pt); \draw [] (0,0) -- (1,0); \node () at (-0.1,-0.2) {$1$}; \node () at (1.1,-0.2) {$0$}; \draw [->, dashed] (-1.75,0) -- (-0.75,0); \end{tikzpicture} }; \end{tikzpicture} } \caption{Graphs and divisors for matrix polynomials whose spectral curve is two straight lines}\label{diangle} \end{figure} \begin{example}\label{twoLines4} Figure \ref{diangle} depicts the graph $\Gamma_P$ and the divisor $D_P$ for each of the components $\mathrm{Orb}_1$, $\mathrm{Orb}_2$, and $\mathrm{Orb}_3$ of the variety $\pazocal P^B_C$ considered in Examples \ref{twoLines}, \ref{twoLines2}, \ref{twoLines3}. Here and in what follows, dashed edges are those which do not belong to the subgraph $\Gamma_P$. \end{example} Now, for any generating subgraph $\Gamma$ of the dual graph of $C$, and for any divisor $D$ on $\Gamma$, define the corresponding stratum \begin{align*} \pazocal S(\Gamma,D) := \{ P \in \pazocal P^B_C \mid \Gamma_P = \Gamma, D_P = D\}. \end{align*} \begin{example}\label{twoLines4_5} When $C$ is two lines (see Example \ref{twoLines} and Examples \ref{twoLines2} - \ref{twoLines4}), the variety $\pazocal P^B_C$ consists of three strata \begin{align*}\pazocal S\left( \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)},vertex/.style={anchor=base, circle,fill=black!25,minimum size=18pt,inner sep=2pt}] \fill (0,0) circle (1.2pt); \fill (1,0) circle (1.2pt); \draw [dashed] (0,0) -- (1,0); \node () at (-0.2,-0.) {$0$}; \node () at (1.2,-0.) {$0$}; \end{tikzpicture} \right) = \mathrm{Orb}_1, \quad \pazocal S\left( \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)},vertex/.style={anchor=base, circle,fill=black!25,minimum size=18pt,inner sep=2pt}] \fill (0,0) circle (1.2pt); \fill (1,0) circle (1.2pt); \draw [] (0,0) -- (1,0); \node () at (-0.2,-0.) {$0$}; \node () at (1.2,-0.) {$1$}; \end{tikzpicture} \right) = \mathrm{Orb}_2, \quad \pazocal S\left( \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)},vertex/.style={anchor=base, circle,fill=black!25,minimum size=18pt,inner sep=2pt}] \fill (0,0) circle (1.2pt); \fill (1,0) circle (1.2pt); \draw [] (0,0) -- (1,0); \node () at (-0.2,-0.) {$1$}; \node () at (1.2,-0.) {$0$}; \end{tikzpicture} \right) = \mathrm{Orb}_3. \end{align*} \end{example} Theorem \ref{thm1} below asserts that non-empty strata $\pazocal S(\Gamma ,D)$ are smooth and connected, and, moreover, can be described as open subsets in semiabelian varieties. 
This result generalizes the description of $\pazocal P^B_C $ for smooth curves $C$ found by Gavrilov \cite{Gavrilov2}. The rest of the paper is devoted to the description of those divisors $D$ for which $\pazocal S(\Gamma ,D)$ is non-empty. We also describe the local structure of the variety $\pazocal P^B_C $ in the neighborhood of each stratum and characterize those strata which are adjacent to each other. \section{Combinatorial interlude I: Indegree divisors and graphical zonotopes} In what follows, a \textit{graph} is an undirected multigraph possibly with loops. The notation $V(\Gamma)$ stands for the vertex set of a graph $\Gamma$, and $E(\Gamma)$ stands for the edge set. By $\mathrm{Div}(\Gamma)$ we denote the set of all divisors on $\Gamma$. Each divisor $D \in \mathrm{Div}(\Gamma)$ may be regarded either as a formal integral linear combination of vertices of $\Gamma$, or as a $\mathbb{Z}$-valued function on vertices. The \textit{degree} $|D|$ of a divisor $D$ is the sum of its values over all vertices. \par Let $\Gamma$ be a graph, and let $\pazocal O(\Gamma)$ be the set of all orientations on $\Gamma$. For each $\mathfrak{o} \in \pazocal O$, define a divisor \begin{align}\label{indegDiv}\mathrm{indeg}(\mathfrak o) :=\!\!\! \sum_{v\, \in\, V(\Gamma)} \!\mathrm{indeg}_{\mathfrak o}(v)v\end{align} where $\mathrm{indeg}_{\mathfrak o}(v)$ is the indegree of the vertex $v$ in the directed graph $(\Gamma, \mathfrak o)$, i.e. the number of edges pointing to $v$. \begin{definition}\label{balDef1} Let $\Gamma$ be a graph. We say that a divisor $D \in \mathrm{Div}(\Gamma)$ is an \textit{indegree divisor} if there exists an orientation $\mathfrak o \in \pazocal O(\Gamma)$ such that $D = \mathrm{indeg}(\mathfrak o)$. We denote the set of all indegree divisors by $\mathrm{InDeg}(\Gamma)$ \end{definition} \begin{figure}[b] \centerline{ \begin{tikzpicture}[thick,scale = 1.2] \node (a) at (0,0) { \begin{tikzpicture}[scale = 1.2] \draw [->-] (0,0) -- (0.5, 0.86); \draw [->-] (0,0) -- (1,0); \draw [->-] (0.5, 0.86) -- (1,0); \node () at (-0.1,-0.1) {$0$}; \node () at (1.1,-0.1) {$2$}; \node () at (0.5,1.02) {$1$}; \end{tikzpicture} }; \node (b) at (2,0) { \begin{tikzpicture}[scale = 1.2] \draw [->-] (0,0) -- (0.5, 0.86); \draw [->-] (0,0) -- (1,0); \draw [-<-] (0.5, 0.86) -- (1,0); \node () at (-0.1,-0.1) {$0$}; \node () at (1.1,-0.1) {$1$}; \node () at (0.5,1.02) {$2$}; \end{tikzpicture} }; \node (c) at (4,0) { \begin{tikzpicture}[scale = 1.2] \draw [->-] (0,0) -- (0.5, 0.86); \draw [-<-] (0,0) -- (1,0); \draw [-<-] (0.5, 0.86) -- (1,0); \node () at (-0.1,-0.1) {$1$}; \node () at (1.1,-0.1) {$0$}; \node () at (0.5,1.02) {$2$}; \end{tikzpicture} }; \node (d) at (6,0) { \begin{tikzpicture}[scale = 1.2] \draw [-<-] (0,0) -- (0.5, 0.86); \draw [-<-] (0,0) -- (1,0); \draw [-<-] (0.5, 0.86) -- (1,0); \node () at (-0.1,-0.1) {$2$}; \node () at (1.1,-0.1) {$0$}; \node () at (0.5,1.02) {$1$}; \end{tikzpicture} }; \node (e) at (8,0) { \begin{tikzpicture}[scale = 1.2] \draw [-<-] (0,0) -- (0.5, 0.86); \draw [-<-] (0,0) -- (1,0); \draw [->-] (0.5, 0.86) -- (1,0); \node () at (-0.1,-0.1) {$2$}; \node () at (1.1,-0.1) {$1$}; \node () at (0.5,1.02) {$0$}; \end{tikzpicture} }; \node (f) at (10,0) { \begin{tikzpicture}[scale = 1.2] \draw [-<-] (0,0) -- (0.5, 0.86); \draw [->-] (0,0) -- (1,0); \draw [->-] (0.5, 0.86) -- (1,0); \node () at (-0.1,-0.1) {$1$}; \node () at (1.1,-0.1) {$2$}; \node () at (0.5,1.02) {$0$}; \end{tikzpicture} }; \node (g) at (4,-2) { \begin{tikzpicture}[scale = 1.2] \draw [-<-] (0,0) -- 
(0.5, 0.86); \draw [->-] (0,0) -- (1,0); \draw [-<-] (0.5, 0.86) -- (1,0); \node () at (-0.1,-0.1) {$1$}; \node () at (1.1,-0.1) {$1$}; \node () at (0.5,1.02) {$1$}; \end{tikzpicture} }; \node (h) at (6,-2) { \begin{tikzpicture}[scale = 1.2] \draw [->-] (0,0) -- (0.5, 0.86); \draw [-<-] (0,0) -- (1,0); \draw [->-] (0.5, 0.86) -- (1,0); \node () at (-0.1,-0.1) {$1$}; \node () at (1.1,-0.1) {$1$}; \node () at (0.5,1.02) {$1$}; \end{tikzpicture} }; \end{tikzpicture} } \caption{Eight orientations and seven indegree divisors on a triangle}\label{triangle} \end{figure} \begin{example}\label{triangleEx1} Figure \ref{triangle} depicts $8$ possible orientations of a triangle and $7$ corresponding indegree divisors. \end{example} \begin{remark} Divisors satisfying Definition \ref{balDef1} are also known in literature as score vectors~\cite{KW} and ordered outdegree sequences \cite{stanley1980decompositions} (note that there is no difference between indegree and outdegree divisors, since $\mathrm{indeg}({\mathfrak o}) = \mathrm{outdeg}({-{\mathfrak o}})$ where $-{\mathfrak o}$ is the orientation obtained from $\mathfrak o$ by inverting the direction of every edge). In algebraic geometry, divisors satisfying Definition~\ref{balDef1} are known as \textit{normalized semistable multidegrees}. Such multidegrees were used by Beauville~\cite{Beauville} to describe the theta divisor for a nodal reducible curve. The term normalized semistable multidegree is due to Alexeev~\cite{Alexeev}. Let us also mention a recent paper~\cite{ABKS} where the authors consider so-called \textit{orientable} divisors. A divisor $D \in \mathrm{Div}(\Gamma)$ is called orientable if there exists an orientation $\mathfrak o \in \pazocal O(\Gamma)$ such that $ D = \sum_{v} (\mathrm{indeg}_{\mathfrak o}(v) - 1)v. $ Clearly, orientable divisors are in bijection with indegree divisors. \end{remark} The following result is well-known. \begin{proposition}\label{balDef2} A divisor $D$ on a graph $\Gamma$ is {an indegree divisor} if and only if the degree of $D$ is equal to the number of edges of $\Gamma$, and for every subgraph $\Gamma_1 \subset \Gamma$, the degree of the restriction of $D$ to $\Gamma_1$ is greater or equal than the number of edges of $\Gamma_1$. \end{proposition} For the proof of Proposition \ref{balDef2}, see e.g.~\cite{hakimi, gyarfas1976orient, Alexeev}. See also~\cite{backman} where it is shown that this result is equivalent to the well-known max-flow min-cut theorem in optimization theory.\par We will also need one more description of indegree divisors. To each vertex $v_i$ of $\Gamma$, we associate a formal variable $x_i = \exp({v_i})$, and consider the polynomial $$ B_\Gamma(x_1, \dots, x_n) := \prod_{[v_i, v_j]} (x_i + x_j) $$ where the product is taken over all edges of $\Gamma$. We have \begin{align}\label{BPolynomial} B_\Gamma =\prod_{[v_i, v_j]} (\exp({v_i})+ \exp({v_j}))=\!\!\! \!\!\! \sum_{D \,\in\, \mathrm{Div}(\Gamma)} \!\!\!\!\! \mathrm{mult}(\Gamma, D) \exp(D), \end{align} where $\mathrm{mult}(\Gamma, D)$ is a non-negative integer for every divisor $D$. The following proposition is straightforward: \begin{proposition}\label{balDef3} A divisor $D = \sum d_iv_i \in \mathrm{Div}(\Gamma)$ is an indegree divisor if and only if the polynomial $B_\Gamma$ contains the monomial $\exp(D) = x_1^{d_1}\cdots x_k^{d_k}, $ or, equivalently, if and only if $\mathrm{mult}(\Gamma, D) > 0$. 
Moreover, if $D$ is an indegree divisor, then the number $\mathrm{mult}(\Gamma, D)$ is equal to the number of orientations $\mathfrak o \in \pazocal O(\Gamma)$ such that $D = \mathrm{indeg}(\mathfrak o)$. \end{proposition} \begin{example}\label{triangleEx2} Let $\Gamma$ be a triangle. Then \begin{align}\label{triangleB} \begin{aligned} B_\Gamma&= (x_1 + x_2)(x_1 + x_3)(x_2 + x_3) = \\ &x_1^2 x_2 + x_1x_2^2 + x_1^2x_3 + x_1x_3^2 + x_2^2x_3 + x_2x_3^2 + 2x_1x_2x_3, \end{aligned} \end{align} which again shows that there are $7$ indegree divisors on $\Gamma$ (cf. Figure \ref{triangle}). \end{example} Now, consider the space $\mathrm{span}_\mathbb{R} \langle v_1, \dots, v_k \rangle = \mathrm{Div}(\Gamma) \otimes \mathbb{R}$ formally spanned over $\mathbb{R}$ by vertices of $\Gamma$. Note that each edge of $\Gamma$ may be viewed as a line segment in this space. The following definition is due to T.\,Zaslavsky, see \cite{postnikov}: \begin{definition} Let $\Gamma$ be a graph. Then the \textit{graphical zonotope} $Z_\Gamma$ is the Minkowski sum of edges $[v_i, v_j] \in \Gamma$ considered as line segments in the space $\mathrm{span}_\mathbb{R} \langle v_1, \dots, v_k \rangle$. \end{definition} \begin{statement}\label{balDef4} For any graph $\Gamma$, indegree divisors on $\Gamma$ are exactly lattice points in the corresponding graphical zonotope $Z_\Gamma$. \end{statement} \begin{proof} It is well known that the Newton polytope of a product is the Minkowski sum of Newton polytopes of factors. Therefore, the graphical zonotope $Z_\Gamma$ is the Newton polytope of the polynomial $B_\Gamma$. In other words, $Z_\Gamma$ is the convex hull of all indegree divisors. In particular, any indegree divisor is a lattice point in $Z_\Gamma$. Conversely, let $D$ be a lattice point in $Z_\Gamma$. Then $D$ lies in the convex hull of indegree divisors, and, as follows from Proposition \ref{balDef2}, it is itself an indegree divisor, q.e.d. \end{proof} Now we have four different descriptions of indegree divisors: in terms of orientations, in terms of linear inequalities, in terms of monomials in the polynomial $B_\Gamma$, and in terms of lattice points in the graphical zonotope $Z_\Gamma$. \begin{figure}[t] \centerline{ \begin{picture}(100,100) \put(50,50){ \begin{picture}(200,200) \put(-20, 33){\line(1,0){40}} \put(20, 33){\line(3,-5){20}} \put(20, -33){\line(3,5){20}} \put(-20, -33){\line(1,0){40}} \put(-20, 33){\line(-3,-5){20}} \put(-20, -33){\line(-3,5){20}} \put(39,0){\circle*{3}} \put(44,-2){$(2,0,1)$} \put(14,-45){$(2,1,0)$} \put(-46,-45){$(1,2,0)$} \put(-78,-2){$(0,2,1)$} \put(-46,40){$(0,1,2)$} \put(14,40){$(1,0,2)$} \put(-10,-10){$(1,1,1)$} \put(-39,0){\circle*{3}} \put(20,33){\circle*{3}} \put(-20,33){\circle*{3}} \put(20,-33){\circle*{3}} \put(-20,-33){\circle*{3}} \put(0,0){\circle*{3}} \end{picture} } \end{picture} } \caption{Lattice points in the permutohedron $P_3$}\label{permut3} \end{figure} \begin{example}\label{completeGraph} Let $\Gamma = K_n$ be the complete graph on $n$ vertices. Then $$ Z_\Gamma =\sum_{i < j} \,\mathrm{Newton}(x_i + x_j) = \sum_{i < j}\, \mathrm{Newton}(x_i - x_j) = \mathrm{Newton}(\mathrm {V}(x_1, \dots, x_n)) $$ where $\mathrm{Newton}(B)$ stands for the Newton polytope of a polynomial $B$, and $\mathrm {V}$ is the Vandermonde determinant $$ \mathrm {V}(x_1, \dots, x_n) = \prod_{i < j}\, (x_i - x_j) = \det(x_i^{j-1}) = \sum_{\sigma \in S_n} (-1)^\sigma x_1^{\sigma(0)} \dots x_n^{\sigma(n-1)}, $$ and $S_n$ denotes all permutations of $(0,1, \dots, n-1)$. 
So, the graphical zonotope for the complete graph $K_n$ is the convex hull of all permutations $\sigma \in S_n$. This polytope is known as the \textit{permutohedron} $P_n$. Figure \ref{permut3} depicts the permutohedron $P_3$ corresponding to the triangle $K_3$. Lattice points in this permutohedron are seven indegree divisors (cf. Example \ref{triangleEx1} and Example~\ref{triangleEx2}). \end{example} \begin{remark}\label{verticesOfZG} Another way to define the graphical zonotope $Z_\Gamma$ is to consider the convex hull of indegree divisors of all acyclic orientations. Moreover, indegree divisors of the form $\mathrm{indeg}(\mathfrak o)$ where $\mathfrak o$ is acyclic are precisely vertices of $Z_\Gamma$ (cf. \cite{Stanley}, Exercise 4.32). Note that if the graph $\Gamma$ has loops, then it does not admit acyclic orientations. In this case, vertices of the graphical zonotope $Z_\Gamma$ are indegree divisors of those orientations that have no oriented cycles except compositions of loops. \end{remark} \section{Description of strata}\label{secDOS} In Section \ref{genCons}, we constructed a stratification $ \pazocal P^B_C = \bigsqcup\,\pazocal S(\Gamma ,D) $ where $\Gamma$ is a generating subgraph of the dual graph $\Gamma_C$ of $C$, and $D$ is a divisor on $\Gamma$. Now, we describe those divisors $D$ for which the stratum $\pazocal S(\Gamma ,D)$ is non-empty.\par For a generating subgraph $\Gamma$ of the dual graph $\Gamma_C$, denote by ${C}_{\Gamma}$ the curve obtained from $C$ by resolving those nodes which correspond to edges $e \in \Gamma_C \setminus \Gamma$. Let also ${C}_{\Gamma} \, / \, \infty$ be the curve obtained from ${C}_{\Gamma}$ by identifying $n$ points at infinity. \begin{theorem}\label{thm1} Assume that $C$ is a nodal curve satisfying condition \eqref{specCond}. Let also $\Gamma$ be a generating subgraph of the dual graph $\Gamma_C$ of $C$, and let $D \in \mathrm{Div}(\Gamma)$. Then \begin{longenum} \item the stratum $\pazocal S(\Gamma ,D)$ is non-empty if and only if $D$ is an indegree divisor on $\Gamma$; \item each non-empty stratum $\pazocal S(\Gamma ,D)$ is a smooth irreducible quasi-affine variety biholomorphic to an open dense subset in the generalized Jacobian of the curve ${C}_{\Gamma} \, / \, \infty$. In particular, \begin{align*} \dim \pazocal S(\Gamma ,D)= \frac{mn(n-1)}{2} - |E(\Gamma_C)| + |E(\Gamma)|; \end{align*} \item for each non-empty stratum $\pazocal S(\Gamma ,D)$, the quotient $\pazocal S(\Gamma ,D) / G^B$ is biholomorphic to an open dense subset in the generalized Jacobian of the curve ${C}_{\Gamma}$. \end{longenum} \end{theorem} \begin{corollary} We have $$\dim \pazocal P^B_C = \dfrac{mn(n-1)}{2}. $$ \end{corollary} \begin{remark} Details on generalized Jacobians of singular curves can be found in \cite{rosenlicht, serre}. Here we recall that the generalized Jacobian $\mathrm{Jac}(C)$ of a nodal curve $C$ is an extension of the Jacobian of the normalization of $C$ with a commutative algebraic group $(\mathbb{C}^*)^k$, where $k$ is a non-negative integer. The same is true for a nodal curve with identified points at infinity, provided that these points are all distinct. In particular, if $C$ is a rational nodal curve, then $\mathrm{Jac}(C) \simeq (\mathbb{C}^*)^k$, and $\mathrm{Jac}(C /\infty ) \simeq (\mathbb{C}^*)^m$, where $m \geq k$. 
\end{remark} The proof of Theorem \ref{thm1} is based on careful analysis of the correspondence between matrix polynomials and line bundles described in Section \ref{genCons} and is similar to Gavrilov's proof in the smooth case \cite{Gavrilov2}. Details of the proof will be published elsewhere. \begin{example}\label{twoLines5} Let $C$ be two lines, as in Example \ref{twoLines} and Examples \ref{twoLines2} - \ref{twoLines4_5}. Then \begin{align*} \pazocal S( \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)},vertex/.style={anchor=base, circle,fill=black!25,minimum size=18pt,inner sep=2pt}] \fill (0,0) circle (1.2pt); \fill (1,0) circle (1.2pt); \draw [] (0,0) -- (1,0); \node () at (-0.2,-0) {$0$}; \node () at (1.2,-0) {$1$}; \end{tikzpicture}) \simeq \pazocal S( \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)},vertex/.style={anchor=base, circle,fill=black!25,minimum size=18pt,inner sep=2pt}] \fill (0,0) circle (1.2pt); \fill (1,0) circle (1.2pt); \draw [] (0,0) -- (1,0); \node () at (-0.2,-0.) {$1$}; \node () at (1.2,-0.) {$0$}; \end{tikzpicture}) \simeq \mathbb{C}^*,\quad \pazocal S( \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)},vertex/.style={anchor=base, circle,fill=black!25,minimum size=18pt,inner sep=2pt}] \fill (0,0) circle (1.2pt); \fill (1,0) circle (1.2pt); \draw [dashed] (0,0) -- (1,0); \node () at (-0.2,-0.) {$0$}; \node () at (1.2,-0.) {$0$}; \end{tikzpicture}) \simeq \mbox{a point}, \end{align*} which means that for all strata we have an isomorphism $\pazocal S(\Gamma ,D) \simeq \mathrm{Jac}(C_\Gamma / \infty)$, i.e. in this example the open dense subset from Theorem \ref{thm1} is the whole Jacobian. \end{example} \begin{figure}[t] \centerline{ \begin{tikzpicture} \node () at (-7,-2.4) { \begin{tikzpicture}[scale = 0.5] \node (A) at (0,0) { $ (0,1,2)$ }; \node (B) at (3.5,0) { $ (1,0,2) $ }; \node (C) at (5, -2.4) { $ (2,0,1) $ }; \node (D) at (3.5,-4.8) { $ (2,1,0) $ }; \node (E) at (0,-4.8) { $ (1,2,0) $ }; \node (F) at (-1.5,-2.4) { $ (0,2,1) $ }; \draw (A) -- (B) -- (C) -- (D) --(E) -- (F) -- (A); \end{tikzpicture} }; \draw [dashed, ->] (-4.3,-2.4) -- (-3.3,-2.4); \node (A) at (0,0) { $ \left(\begin{array}{ccc}\nu_1 & * & * \\0 & \nu_2 & * \\0 & 0 & \nu_3\end{array}\right)$ }; \node (B) at (3.5,0) { $ \left(\begin{array}{ccc} \nu_1 & 0 & * \\ * & \nu_2 & * \\ 0 & 0 & \nu_3 \end{array}\right) $ }; \node (C) at (5, -2.4) { $ \left(\begin{array}{ccc} \nu_1 & 0 & 0 \\ * & \nu_2 & * \\ * & 0 & \nu_3 \end{array}\right) $ }; \node (D) at (3.5,-4.8) { $ \left(\begin{array}{ccc} \nu_1 & 0 & 0 \\ * & \nu_2 & 0 \\ * & * & \nu_3 \end{array}\right) $ }; \node (E) at (0,-4.8) { $ \left(\begin{array}{ccc} \nu_1 & * & 0 \\ 0 & \nu_2 & 0 \\ * & * & \nu_3 \end{array}\right) $ }; \node (F) at (-1.5,-2.4) { $ \left(\begin{array}{ccc} \nu_1 & * & * \\ 0& \nu_2 & 0 \\ 0 & * & \nu_3 \end{array}\right) $ }; \draw (A) -- (B) -- (C) -- (D) --(E) -- (F) -- (A); \end{tikzpicture} } \caption{Strata corresponding to vertices of the permutohedron $P_3$.}\label{vertComp} \end{figure} \begin{example}\label{threeLines} Let $m =1$, $n=3$, and let $C$ be the union of three straight lines $C_i = \{\mu = a_i + b_i \lambda \}$ in general position. Let also $B= \mathrm{diag}(b_1, b_2, b_3)$. The dual graph $\Gamma_C$ is a triangle, so $|\mathrm{InDeg}(\Gamma_C)| = 7$ (see Figures \ref{triangle}, \ref{permut3}). Accordingly, the variety $ \pazocal P^B_C $ has seven strata of dimension three. 
Six of them correspond to vertices of the permutohedron $P_3$; these six strata are depicted in Figure \ref{vertComp}. The notation $\nu_i$ stands for $ a_i + b_i \lambda $, and the numbers denoted by stars are assumed to be non-zero. Note that there is a one-to-one correspondence between these strata and Borel subalgebras of $\mathfrak{gl}_3$ that contain the Cartan subalgebra of diagonal matrices. \par Now, let us describe the seventh three-dimensional stratum $\pazocal S(\Gamma_C, v_1 + v_2 +v_3)$ corresponding to the interior lattice point $(1,1,1)$ in the permutohedron $P_3$. Let $$(c_1,c_2,c_3) := (1,1,1) \times (b_1, b_2,b_3),\quad k := (c_1,c_2,c_3) \cdot (a_1, a_2, a_3)$$ where $\times$ is the cross product, and dot denotes the inner product. Then the stratum $\pazocal S(\Gamma_C, v_1 + v_2 +v_3)$ consists of matrix polynomials of the form $A + \lambda B$ where $a_{ii} = a_i$, off-diagonal entries of the matrix $A$ satisfy \begin{align*} a_{12}a_{21} = c_3 z,\quad a_{13}a_{31} = c_2 z, \quad a_{23}a_{32} = c_1 z, \quad a_{12}a_{23}a_{31} = w,\end{align*} and $(z,w) \neq (0,0)$ is any point lying in the affine part of the nodal cubic \begin{align}\label{ratCubic} w(kz - w) = c_1c_2c_3z^3. \end{align} \par \begin{figure}[t] \centerline{ \begin{tikzpicture}[thick,scale = 1.2] \node (a) at (0,0) { \begin{tikzpicture}[scale = 1.2] \draw [->-] (0,0) -- (0.5, 0.86); \draw [dashed] (0,0) -- (1,0); \draw [->-] (0.5, 0.86) -- (1,0); \node () at (-0.1,-0.1) {$0$}; \node () at (1.1,-0.1) {$1$}; \node () at (0.5,1.02) {$1$}; \end{tikzpicture} }; \node (b) at (2,0) { \begin{tikzpicture}[scale = 1.2] \draw [->-] (0,0) -- (0.5, 0.86); \draw [dashed] (0,0) -- (1,0); \draw [-<-] (0.5, 0.86) -- (1,0); \node () at (-0.1,-0.1) {$0$}; \node () at (1.1,-0.1) {$0$}; \node () at (0.5,1.02) {$2$}; \end{tikzpicture} }; \node (c) at (4,0) { \begin{tikzpicture}[scale = 1.2] \draw [-<-] (0,0) -- (0.5, 0.86); \draw [dashed] (0,0) -- (1,0); \draw [->-] (0.5, 0.86) -- (1,0); \node () at (-0.1,-0.1) {$1$}; \node () at (1.1,-0.1) {$1$}; \node () at (0.5,1.02) {$0$}; \end{tikzpicture} }; \node (d) at (6,0) { \begin{tikzpicture}[scale = 1.2] \draw [-<-] (0,0) -- (0.5, 0.86); \draw [dashed] (0,0) -- (1,0); \draw [-<-] (0.5, 0.86) -- (1,0); \node () at (-0.1,-0.1) {$1$}; \node () at (1.1,-0.1) {$0$}; \node () at (0.5,1.02) {$1$}; \end{tikzpicture} }; \end{tikzpicture} } \caption{Four indegree divisors on two sides of a triangle}\label{path} \end{figure} Further, let us describe two-dimensional strata. There are three generating subgraphs of $\Gamma_C$ with two edges, and each of them has four indegree divisors (see Figure \ref{path}), so $ \pazocal P^B_C$ has $3 \times 4 = 12$ two-dimensional strata. These strata look similarly to strata in Figure~\ref{vertComp}, but with two stars (i.e., two non-zero elements) instead of three. Stars can be placed at any two off-diagonal positions which are not symmetric with respect to the main diagonal; there are exactly $12$ ways to choose such two positions. Similarly, there are six one-dimensional strata that can be obtained by placing one star at any position, and one zero-dimensional stratum, the point $\mathrm{diag}(\nu_1, \nu_2, \nu_3)$. \end{example} \begin{remark} Note that the stratum $\pazocal S(\Gamma_C, v_1 + v_2 +v_3)$ in the previous example is isomorphic to $(\mathbb{C}^*)^2 $ times a thrice punctured sphere, which means that the open dense subset from Theorem~\ref{thm1} in this case is \textit{not} the whole Jacobian. 
For any other stratum of $ \pazocal P^B_C$, we have an isomorphism $\pazocal S(\Gamma ,D) \simeq \mathrm{Jac}(C_\Gamma / \infty)$, as in Example \ref{twoLines5}. \end{remark} \begin{remark} Note that matrix polynomials lying in opposite vertices of the hexagon in Figure~\ref{vertComp} are transposes of each other. More generally, if $B$ is a symmetric matrix, then there is an involution $\sigma$ on $\pazocal P^B_C$ given by $\sigma(P) = P^t$. This involution maps each stratum $\pazocal S(\Gamma,D)$ to another stratum $\pazocal S(\Gamma, D')$ and thus induces an involution $\tau \colon D \mapsto D'$ on the set $\mathrm{InDeg}(\Gamma)$. Explicitly, the involution $\tau$ can be defined by the formula $\tau(\mathrm{indeg}(\mathfrak o)) = \mathrm{indeg}(-{\mathfrak o})$ where $-{\mathfrak o}$ is the orientation inverse to $\mathfrak o$. In other words, we have $\tau(D) = \deg(\Gamma) - D$, where $\deg(\Gamma) := \sum_{v \in V(\Gamma)} \deg(v)v$ is the degree divisor of the graph $\Gamma$. Equivalently, the involution $\tau$ can be described as the central symmetry of the graphical zonotope $Z_\Gamma$.\end{remark} \section{Local structure and irreducible components} In this section, we describe the local structure of the variety $\pazocal P^B_C$ in the neighborhood of its smooth stratum $\pazocal S(\Gamma ,D)$. Let a \textit{node} be the germ at the origin of the complex-analytic variety $\{(z,w) \in \mathbb{C}^2 \mid zw = 0\}$. \begin{theorem}\label{thm2} Let $P \in \pazocal S(\Gamma ,D)$. Then, in the neighborhood of the point $P$, the isospectral variety $\pazocal P^B_C$ is locally isomorphic to the direct product of $\codim \pazocal S(\Gamma ,D) = |E(\Gamma_C)| - |E(\Gamma)|$ nodes\footnote{We use the notation $\codim \pazocal S(\Gamma ,D)$ for the codimension of the stratum $ \pazocal S(\Gamma ,D)$ in $\pazocal P^B_C$, i.e. $\codim \pazocal S(\Gamma ,D) := \dim \pazocal P^B_C - \dim \pazocal S(\Gamma ,D)$.} and a smooth disk of dimension $\dim \pazocal S(\Gamma ,D) = \dim \pazocal P^B_C - \left\lvert E(\Gamma_C)\right\rvert + |E(\Gamma)|.$ \end{theorem} \begin{example} Let $C$ be two lines, as in Examples \ref{twoLines} and \ref{twoLines2} - \ref{twoLines4_5}, and let $\emptyset \subset \Gamma_C$ be a generating subgraph with no edges, i.e. the disjoint union of vertices of $\Gamma_C$. The only indegree divisor on the graph $\emptyset$ is the zero divisor, and the corresponding stratum $\pazocal S(\emptyset ,0)$ is one point $\mathrm{diag}(a_1+b_1\lambda, a_2+b_2\lambda)$. Theorem \ref{thm2} implies that near this point $\pazocal P^J_{C}$ is locally a node. Two disks forming this node are closures of the strata $ \pazocal S( \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)},vertex/.style={anchor=base, circle,fill=black!25,minimum size=18pt,inner sep=2pt}] \fill (0,0) circle (1.2pt); \fill (1,0) circle (1.2pt); \draw [] (0,0) -- (1,0); \node () at (-0.2,-0) {$0$}; \node () at (1.2,-0) {$1$}; \end{tikzpicture})$ and $\pazocal S( \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)},vertex/.style={anchor=base, circle,fill=black!25,minimum size=18pt,inner sep=2pt}] \fill (0,0) circle (1.2pt); \fill (1,0) circle (1.2pt); \draw [] (0,0) -- (1,0); \node () at (-0.2,-0.) {$1$}; \node () at (1.2,-0.) {$0$}; \end{tikzpicture}) $. \end{example} \begin{example}\label{threeLinesMult0} Let $C$ be three lines, as in Example \ref{threeLines}. 
Then the stratum $\pazocal S(\emptyset ,0)$ is one point $\mathrm{diag}(\nu_1, \nu_2, \nu_3)$ where $\nu_i = a_i + b_i \lambda$. Theorem \ref{thm2} implies that near this point the isospectral variety $\pazocal P^J_{C}$ is locally a product of three nodes, i.e. a union of eight three-dimensional disks. Six of these disks are closures of strata depicted in Figure~\ref{vertComp}, and other two belong to the closure of the seventh stratum $\pazocal S(\Gamma_C, v_1 + v_2 + v_3)$ (note that this closure has a double point at the origin corresponding to a double point of the curve~\eqref{ratCubic}). \end{example} \begin{corollary}[of Theorem \ref{thm2}] Irreducible components of the variety $ \pazocal P^B_C$ are in one-to-one correspondence with indegree divisors on the dual graph $\Gamma_C$. \end{corollary} \begin{proof} According to Theorem \ref{thm2}, each stratum $\pazocal S(\Gamma, D)$ where $\Gamma$ is a proper subgraph of $\Gamma_C$ lies in the closure of a higher-dimensional stratum. Therefore, irreducible components of $ \pazocal P^B_C$ are closures of strata of the form $\pazocal S(\Gamma_C,D)$, as desired. \end{proof} \begin{example}\label{nLines} Let $m =1$, and let $C$ be the union of $n$ straight lines $\{\mu = a_i + b_i \lambda \}$ in general position. Let also $B= \mathrm{diag}(b_1, \dots, b_n)$. The dual graph $\Gamma_C$ is the complete graph $K_n$ on $n$ vertices. Therefore, irreducible components of $ \pazocal P^B_C$ are enumerated by lattice points in the permutohedron $P_n$ (see Example \ref{completeGraph}). It is easy to give an explicit description of components corresponding to vertices of the permutohedron: they generalize components depicted in Figure \ref{vertComp} and correspond to $n!$ Borel subalgebras of $\mathfrak{gl}_n$ containing diagonal matrices. Explicit description of other components for $n > 3$ is unknown. \par Also note that the number of lattice points in the permutohedron $P_n$ is known to be equal to the number of forests on $n$ labeled vertices. It is sequence A001858 in the online encyclopedia of integer sequences: $1,2,7,38, 291, \dots$. \end{example} \begin{remark} The idea of the proof of Theorem \ref{thm2} is to show that if $C$ is a nodal curve, then every $P \in \pazocal P^B_C$ is a non-degenerate singular point for the integrable system on matrix polynomials (for the definition of non-degenerate singular points, see, e.g.,~\cite{bolosh}, Definition 7). Then one applies J.\,Vey's normal form theorem \cite{Vey}. \end{remark} \section{Combinatorial interlude II: Multiplicities} \begin{definition}\label{multDef} Let $\Gamma$ be a graph, and let $D$ be an indegree divisor on $\Gamma$. The \textit{multiplicity} of $D$ is the number $\mathrm{mult}(\Gamma, D)$ entering formula \eqref{BPolynomial} for the polynomial $B_\Gamma$ or, equivalently, the number of orientations $\mathfrak o \in \pazocal O(\Gamma)$ such that $D = \mathrm{indeg}(\mathfrak o)$\footnote{Equivalence of these definitions follows from Proposition \ref{balDef3}.}. \end{definition} \begin{example} Let $\Gamma$ be a graph with $2$ vertices joined by $n$ edges. Then $ B_\Gamma = (x_1 + x_2)^n, $ so multiplicities are given by binomial coefficients. \end{example} \begin{example}\label{Schur} Let $\Gamma = K_n$ be a complete graph on $n$ vertices. Then $$ B_\Gamma(x_1, \dots, x_n) = \frac{\mathrm V(x_1^2, \dots, x_n^2)}{\mathrm V(x_1, \dots, x_n)}, $$ i.e. $B_\Gamma$ is the Schur polynomial corresponding to the partition $\lambda = (n-1, n-2, \dots,1, 0)$. 
Therefore, indegree divisors on $\Gamma$ are weights of the irreducible representation of $\mathfrak{gl}_n$ with highest weight $\lambda$, and the multiplicity of any indegree divisor is equal to the multiplicity of the corresponding weight. \end{example} \begin{remark}\label{circuits}\label{multOfVert} The multiplicity of an indegree divisor $D$ can also be defined as $1$ plus the number of possibly disconnected oriented circuits in the directed graph $(\Gamma, \mathfrak o)$ where $\mathfrak o$ is any orientation of $\Gamma$ such that $\mathrm{indeg}(\mathfrak o) = D$, and a \textit{circuit} is a cycle allowing repetitions of vertices but not edges \par Note that this definition of multiplicities in terms of oriented circuits implies that for a loopless graph $\mathrm{mult}(\Gamma, D) = 1$ if and only if $D$ is a vertex of the zonotope $Z_\Gamma$ (cf. Remark~\ref{verticesOfZG}). If the graph $\Gamma$ has $k$ loops, then for any $D \in \mathrm{InDeg}(\Gamma)$, we have $\mathrm{mult}(\Gamma, D) \geq 2^k$, and $\mathrm{mult}(\Gamma, D) = 2^k$ if and only if $D$ is a vertex of the zonotope $Z_\Gamma$. \end{remark} Now, let us define the multiplicity of an indegree divisor along an indegree divisor on a subgraph. Let $\Gamma_1 \subset \Gamma$ be a generating subgraph, and let $\pazocal O(\Gamma \setminus \Gamma_1)$ be the set of all possible orientations of those edges of $\Gamma$ which do not belong to $\Gamma_1$. Then, for each such partial orientation $\mathfrak o$, we can define a divisor $\mathrm{indeg}(\mathfrak o)$ by the same formula \eqref{indegDiv}: the value of $\mathrm{indeg}(\mathfrak o)$ at the vertex $v$ is the number of arrows pointing at $v$. Note that if $D_1 \in \mathrm{InDeg}(\Gamma_1)$, and $\mathfrak o \in \pazocal O(\Gamma \setminus \Gamma_1)$, then $D_1 + \mathrm{indeg}(\mathfrak o) \in \mathrm{InDeg}(\Gamma)$. \begin{definition}\label{relMult} The \textit{multiplicity} of $D \in \mathrm{InDeg}(\Gamma)$ along $D_1 \in \mathrm{InDeg}(\Gamma_1)$ is the number of orientations $\mathfrak o \in \pazocal O(\Gamma \setminus \Gamma_1)$ such that $ D - D_1 = \mathrm{indeg}(\mathfrak o)$. We denote the multiplicity of $D $ along $D_1 $ as $\mathrm{mult}(\Gamma, D \mid \Gamma_1, D_1)$. \end{definition} \begin{example}\label{mult2example} \begin{align*} \mathrm{mult}\left( \left.\begin{tikzpicture}[scale = 1.7, baseline={([yshift=-.5ex]current bounding box.center)},vertex/.style={anchor=base, circle,fill=black!25,minimum size=18pt,inner sep=2pt}] \draw [] (0,0) -- (0.5, 0.86); \draw [] (0,0) -- (0.5, 0.3); \draw (0.5, 0.3) -- (1,0); \draw (0.5, 0.3) -- (0.5, 0.86); \draw [] (0,0) -- (1,0); \draw [] (0.5, 0.86) -- (1,0); \node () at (-0.1,-0.) {$1$}; \node () at (1.1,-0.) {$1$}; \node () at (0.5,1.02) {$2$}; \node () at (0.6, 0.4) {$2$}; \end{tikzpicture}\right\rvert \begin{tikzpicture}[scale = 1.7, baseline={([yshift=-.5ex]current bounding box.center)},vertex/.style={anchor=base, circle,fill=black!25,minimum size=18pt,inner sep=2pt}] \draw [dashed] (0,0) -- (0.5, 0.86); \draw [dashed] (0,0) -- (0.5, 0.3); \draw [dashed] (0.5, 0.3) -- (1,0); \draw [dashed] (0.5, 0.3) -- (0.5, 0.86); \draw [] (0,0) -- (1,0); \draw [dashed] (0.5, 0.86) -- (1,0); \node () at (-0.1,-0.) {$0$}; \node () at (1.1,-0.) {$1$}; \node () at (0.5,1.02) {$0$}; \node () at (0.6, 0.4) {$0$}; \end{tikzpicture} \right) = 2. 
\end{align*} Corresponding partial orientations are \begin{align*} \mbox{} \begin{tikzpicture}[scale = 1.7, baseline={([yshift=-.5ex]current bounding box.center)},vertex/.style={anchor=base, circle,fill=black!25,minimum size=18pt,inner sep=2pt}] \draw [dashed, ->- ] (0,0) -- (0.5, 0.86); \draw [dashed, -<- ] (0,0) -- (0.5, 0.3); \draw [dashed, -<- ] (0.5, 0.3) -- (1,0); \draw [dashed, -<-] (0.5, 0.3) -- (0.5, 0.86); \draw [] (0,0) -- (1,0); \draw [dashed, -<-] (0.5, 0.86) -- (1,0); \node () at (-0.1,-0.1) {$0$}; \node () at (1.1,-0.1) {$1$}; \node () at (0.5,1.02) {$0$}; \node () at (0.6, 0.4) {$0$}; \end{tikzpicture} \mbox{ and } \begin{tikzpicture}[scale = 1.7, baseline={([yshift=-.5ex]current bounding box.center)},vertex/.style={anchor=base, circle,fill=black!25,minimum size=18pt,inner sep=2pt}] \draw [dashed, -<- ] (0,0) -- (0.5, 0.86); \draw [dashed, ->- ] (0,0) -- (0.5, 0.3); \draw [dashed, -<- ] (0.5, 0.3) -- (1,0); \draw [dashed, ->-] (0.5, 0.3) -- (0.5, 0.86); \draw [] (0,0) -- (1,0); \draw [dashed, -<-] (0.5, 0.86) -- (1,0); \node () at (-0.1,-0.1) {$0$}; \node () at (1.1,-0.1) {$1$}; \node () at (0.5,1.02) {$0$}; \node () at (0.6, 0.4) {$0$}; \end{tikzpicture}. \end{align*} \end{example} \begin{remark} Note that for $\Gamma_1 = \emptyset$ and $D_1 = 0$, where $\emptyset \subset \Gamma$ denotes a generating subgraph with no edges, Definition \ref{relMult} reduces to Definition \ref{multDef}, i.e. $ \mathrm{mult}(\Gamma, D \mid \emptyset, 0) = \mathrm{mult}(\Gamma, D). $ \end{remark} \begin{remark}\label{counter} Note that a necessary condition for the multiplicity of $D$ along $D_1$ to be nonzero is $D \geq D_1$. However, this condition is insufficient. For example, \begin{align*} \mathrm{mult}\left( \left.\begin{tikzpicture}[scale = 1.7, baseline={([yshift=-.5ex]current bounding box.center)},vertex/.style={anchor=base, circle,fill=black!25,minimum size=18pt,inner sep=2pt}] \draw [] (0,0) -- (0.5, 0.86); \draw [] (0,0) -- (0.5, 0.3); \draw (0.5, 0.3) -- (1,0); \draw (0.5, 0.3) -- (0.5, 0.86); \draw [] (0,0) -- (1,0); \draw [] (0.5, 0.86) -- (1,0); \node () at (-0.1,-0.1) {$2$}; \node () at (1.1,-0.1) {$2$}; \node () at (0.5,1.02) {$1$}; \node () at (0.6, 0.4) {$1$}; \end{tikzpicture}\right\rvert \begin{tikzpicture}[scale = 1.7, baseline={([yshift=-.5ex]current bounding box.center)},vertex/.style={anchor=base, circle,fill=black!25,minimum size=18pt,inner sep=2pt}] \draw [] (0,0) -- (0.5, 0.86); \draw [dashed] (0,0) -- (0.5, 0.3); \draw [dashed] (0.5, 0.3) -- (1,0); \draw [dashed] (0.5, 0.3) -- (0.5, 0.86); \draw [] (0,0) -- (1,0); \draw [] (0.5, 0.86) -- (1,0); \node () at (-0.1,-0.1) {$0$}; \node () at (1.1,-0.1) {$2$}; \node () at (0.5,1.02) {$1$}; \node () at (0.6, 0.4) {$0$}; \end{tikzpicture} \right) = 0. \end{align*} \end{remark} \begin{remark} Another way to define multiplicities is by introducing a poset structure on the set of indegree divisors on generating subgraphs. Let $\Gamma$ be a graph without loops. Consider the set $\mathrm{IN}(\Gamma)$ of pairs $(\Gamma', D)$ where $\Gamma' \subset \Gamma$ is a generating subgraph, and $D \in \mathrm{InDeg}(\Gamma')$ is an indegree divisor on $\Gamma'$. Then the set $\mathrm{IN}(\Gamma)$ has a natural poset structure. Namely, say that $(\Gamma_1, D_1) > (\Gamma_2, D_2)$ if $\Gamma_1 \supset \Gamma_2$, and there exists an orientation $\mathfrak o \in \pazocal O(\Gamma_1 \setminus \Gamma_2)$ such that $D_1 - D_2 = \mathrm{indeg}(\mathfrak o)$. Let $\pazocal H(\Gamma)$ be the Hasse diagram of the poset $\mathrm{IN}(\Gamma)$. 
Then multiplicities can be defined by the formula $$ \mathrm{mult}(\Gamma_1, D_1 \mid \Gamma_2, D_2) = \frac{\mbox{number of directed paths from $(\Gamma_2, D_2)$ to $(\Gamma_1, D_1)$ in } \pazocal H(\Gamma)}{(|E(\Gamma_1)| - |E(\Gamma_2)|)!}. $$ It is worth mentioning that a closely related poset $\overline{\pazocal{OP}_\Gamma}$ was considered in \cite{caporaso2010torelli}. The elements of $\overline{\pazocal{OP}_\Gamma}$ are pairs $(\Gamma', D')$ where $\Gamma' \subset \Gamma$ is a generating subgraph, and $D'$ is a \textit{completely reducible} indegree divisor on $\Gamma'$ (see Section \ref{secRID}). The ordering is defined in the same way as above, i.e. $\overline{\pazocal{OP}_\Gamma}$ is a subposet of the poset $\mathrm{IN}(\Gamma)$. In \cite{Amini}, it is proved that the poset $\overline{\pazocal{OP}_\Gamma}$ is isomorphic to the facet poset of the Voronoi cell associated with the graph $\Gamma$. \end{remark} \section{Adjacency of strata} Now, we would like to understand which pairs of strata $\pazocal S(\Gamma, D)$ are adjacent to each other. First note that an obvious necessary condition for the closure of the stratum $\pazocal S(\Gamma_1, D_1)$ to contain the stratum $\pazocal S(\Gamma_2, D_2)$ is $ \dim \pazocal S(\Gamma_1, D_1) > \dim \pazocal S(\Gamma_2, D_2)$, or, equivalently, $|E(\Gamma_1)| > |E(\Gamma_2)|$. Moreover, since for any double point $(\lambda, \mu) \in C$ the set $\{P\in \pazocal P^B_C \mid \dim \Ker(P(\lambda) - \mu \cdot \mathrm{Id}) = 2\}$ is closed in $\pazocal P^B_C$, we should in fact have $\Gamma_1 \supset \Gamma_2$. However, the latter condition still does not imply that the closure of $\pazocal S(\Gamma_1, D_1)$ contains $\pazocal S(\Gamma_2, D_2)$.\par To formulate a necessary and sufficient condition for a stratum to be contained in the closure of another stratum, let us first discuss in more detail the local structure of the variety $\pazocal P^B_C$ in the neighborhood of a stratum $\pazocal S(\Gamma_2, D_2)$. Theorem \ref{thm2} implies that locally $\pazocal P^B_C$ can be written as \begin{align*} (2\mathbb{C}^*\sqcup \mbox{a point})^p\times \mathbb{C}^q = \bigsqcup_{i=0}^p 2^i\binom{p}{i} (\mathbb{C}^*)^i \times \mathbb{C}^q, \end{align*} where $p = \codim \pazocal S(\Gamma_2 ,D_2)$, and $q = \dim \pazocal S(\Gamma_2 ,D_2)$. Thus, the neighborhood of the stratum $\pazocal S(\Gamma_2, D_2)$ in the variety $\pazocal P^B_C$ is locally a union of $3^p$ smooth strata whose dimensions vary from $q = \dim \pazocal S(\Gamma_2, D_2)$ to $p+q = \dim \pazocal P^B_C$. Each of these local strata is an open subset of some global stratum $\pazocal S(\Gamma_1, D_1)$. \begin{definition} The \textit{multiplicity} of the stratum $\pazocal S(\Gamma_1, D_1)$ along the stratum $\pazocal S(\Gamma_2, D_2)$ is the number of local strata $S$ in the above stratification of the neighborhood of $\pazocal S(\Gamma_2, D_2)$ such that $S \subset \pazocal S(\Gamma_1, D_1)$. \end{definition} Clearly, the multiplicity of $\pazocal S(\Gamma_1, D_1)$ along $\pazocal S(\Gamma_2, D_2)$ does not depend on the choice of a point $P \in \pazocal S(\Gamma_2, D_2)$. In particular, if this multiplicity is positive, then the stratum $\pazocal S(\Gamma_2, D_2)$ lies in the closure of $\pazocal S(\Gamma_1, D_1)$, and if the multiplicity is zero, then $\pazocal S(\Gamma_2, D_2)$ and the closure of $\pazocal S(\Gamma_1, D_1)$ do not intersect each other. 
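For small examples, the combinatorial multiplicities of Definitions \ref{multDef} and \ref{relMult} (which, by Theorem \ref{thm3} below, also compute the multiplicities of strata along strata) can be found by brute force over orientations. The following Python sketch is purely illustrative and plays no role in the constructions above; a graph is encoded as a list of edges on vertices $0,\dots,n-1$, and loops are allowed (cf. Figure \ref{triangle} and Example \ref{triangleEx2}).
\begin{verbatim}
# Brute-force multiplicities of indegree divisors (illustration only).
from itertools import product

def indegree_divisors(edges, n):
    """Map each indegree divisor (as a tuple) to its multiplicity."""
    counts = {}
    for choice in product((0, 1), repeat=len(edges)):
        indeg = [0] * n
        for (i, j), c in zip(edges, choice):
            indeg[j if c else i] += 1      # c selects the head of the edge
        key = tuple(indeg)
        counts[key] = counts.get(key, 0) + 1
    return counts

def mult_along(edges1, edges2, D1, D2, n):
    """mult(Gamma_1, D_1 | Gamma_2, D_2): orient only the edges of Gamma_1
    not in Gamma_2 and count orientations with indegree divisor D_1 - D_2."""
    rest = list(edges1)
    for e in edges2:
        rest.remove(e)
    target = tuple(a - b for a, b in zip(D1, D2))
    return indegree_divisors(rest, n).get(target, 0)

# Triangle: seven indegree divisors; (1,1,1) has multiplicity 2.
triangle = [(0, 1), (0, 2), (1, 2)]
print(len(indegree_divisors(triangle, 3)))                    # 7
print(mult_along(triangle, [], (1, 1, 1), (0, 0, 0), 3))      # 2
\end{verbatim}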
Also note that if the multiplicity of $\pazocal S(\Gamma_1, D_1)$ along $\pazocal S(\Gamma_2, D_2)$ is equal to one, then the closure of $\pazocal S(\Gamma_1, D_1)$ is smooth at points of $\pazocal S(\Gamma_2, D_2)$. Multiplicity bigger than one means that the closure of $\pazocal S(\Gamma_1, D_1)$ is singular along $\pazocal S(\Gamma_2, D_2)$. \par The following theorem gives a necessary and sufficient condition for a stratum to be contained in the closure of another stratum, and also allows us to compute multiplicities. \begin{theorem}\label{thm3} Let $\pazocal S(\Gamma_1, D_1)$ and $\pazocal S(\Gamma_2, D_2)$ be two strata of $\pazocal P^B_C$. Then \begin{longenum} \item $\pazocal S(\Gamma_2, D_2)$ lies in the closure of $\pazocal S(\Gamma_1, D_1)$ if and only if $\Gamma_2 \subset \Gamma_1$ and $ \mathrm{mult}(\Gamma_1, D_1 \mid \Gamma_2, D_2) > 0; $ \item if $\Gamma_2 \subset \Gamma_1$, then the multiplicity of the stratum $\pazocal S(\Gamma_1, D_1)$ along the stratum $\pazocal S(\Gamma_2, D_2)$ is equal to the multiplicity $\mathrm{mult}(\Gamma_1, D_1 \mid \Gamma_2, D_2)$. \end{longenum} \end{theorem} \begin{example} Let $C$ be three lines in general position, as in Examples \ref{threeLines} and \ref{threeLinesMult0}. Let also $\Gamma= \Gamma_C$, and let $D = v_1+v_2 +v_3 \in \mathrm{InDeg}(\Gamma)$. We have $\mathrm{mult}(\Gamma, D) = 2$ (see~\eqref{triangleB} and Figure~\ref{triangle}). Therefore, the stratum $\pazocal S(\Gamma, D) $ has multiplicity $2$ along the zero-dimensional stratum $\pazocal S(\emptyset,0) = \mathrm{diag}(a_i+b_i\lambda)$. In other words, $P = \mathrm{diag}(a_i+b_i\lambda)$ is a double point for the closure of $\pazocal S(\Gamma, D) $ (cf. Example~\ref{threeLinesMult0}). Moreover, for any $(\Gamma', D') \neq (\emptyset,0) $, we have $\mathrm{mult}(\Gamma, D \mid \Gamma', D') \leq 1$, which implies that the closure of $\pazocal S(\Gamma, D) $ is smooth at all points except $\mathrm{diag}(a_i+b_i\lambda)$. \end{example} \begin{example} More generally, let $C$ be a union of $n$ straight lines (see Examples~\ref{nLines} and \ref{Schur}). Then maximal-dimension strata $\pazocal S(\Gamma, D) $ of the isospectral variety $\pazocal P^B_C$ are indexed by weights of the representation of $\mathfrak{gl}_n$ with highest weight $\lambda = (n-1, n-2, \dots,1, 0)$, and the multiplicity of a stratum $\pazocal S(\Gamma, D) $ at the point $\mathrm{diag}(a_i+b_i\lambda)$ is equal to the multiplicity of the weight $D$. Also note that, in contrast to the case $n=3$, some of the strata $\pazocal S(\Gamma, D) $ may have multiplicities greater than $1$ along strata of positive dimension, see, e.g., Example \ref{mult2example}. \end{example} \begin{remark} Note that for any $(\Gamma_1, D_1)$ and $(\Gamma_2, D_2)$, one has $\mathrm{mult}(\Gamma_1, D_1 \mid \Gamma_2, D_2) \leq \mathrm{mult}(\Gamma_1, D_1)$. Therefore, the closure of a stratum $\pazocal S(\Gamma_1, D_1) $ is smooth if and only if $\mathrm{mult}(\Gamma_1, D_1) = 1$, or, equivalently, if $\Gamma_1$ has no loops, and $D_1$ is a vertex of the graphical zonotope $Z_{\Gamma_1}$ (see Remark \ref{multOfVert}). \end{remark} \begin{remark} Also note that conditions $\Gamma_2 \subset \Gamma_1$ and $D_2 \leq D_1$ are necessary but not sufficient for the stratum $\pazocal S(\Gamma_2, D_2)$ to belong to the closure of the stratum $\pazocal S(\Gamma_1, D_1)$, see, e.g., Example~\ref{counter}. \end{remark} \section{Combinatorial interlude III: Reducible indegree divisors}\label{secRID} Let $\Gamma$ be a graph. 
Recall that an orientation $\mathfrak o \in \pazocal O(\Gamma)$ is called \textit{strongly connected} if any two vertices of the directed graph $(\Gamma, \mathfrak o)$ can be joined by a directed path; an orientation $\mathfrak o \in \pazocal O(\Gamma)$ is called \textit{totally cyclic} if every edge of $(\Gamma, \mathfrak o)$ belongs to a directed cycle. It is well-known that if $\Gamma$ is connected, then an orientation $\mathfrak o \in \pazocal O(\Gamma)$ is totally cyclic if and only if it is strongly connected. On a disconnected graph $\Gamma$, an orientation $\mathfrak o \in \pazocal O(\Gamma)$ is totally cyclic if and only if its restriction to every connected component of $\Gamma$ is strongly connected. \begin{proposition}\label{irCond} Let $D \in \mathrm{InDeg}(\Gamma)$ be an indegree divisor. Then the following conditions are equivalent: \begin{longenum} \item there exists no proper non-empty subgraph $\Gamma' \subset \Gamma$ such that the restriction of $D$ to $\Gamma'$ is an indegree divisor; \item for every proper non-empty subgraph $\Gamma' \subset \Gamma$, the degree of the restriction of $D$ to $\Gamma'$ is strictly bigger than the number of edges of $\Gamma'$; \item there exists a strongly connected orientation $\mathfrak o \in \pazocal O(\Gamma)$ such that $D = \mathrm{indeg}(\mathfrak o)$; \item the graph $\Gamma$ is connected, and $D$ is an interior point\footnote{Note that if $Z_\Gamma$ is just one point, then we regard this point as an interior one.} of the graphical zonotope $ Z_\Gamma$. \end{longenum} \end{proposition} \begin{definition} Let $D \in \mathrm{InDeg}(\Gamma)$ be an indegree divisor. If $D$ satisfies conditions a)-d) listed in Proposition \ref{irCond}, then $D$ is called \textit{irreducible}. Otherwise, $D$ is called \textit{reducible}. \end{definition} \begin{proposition}\label{crCond} Let $D \in \mathrm{InDeg}(\Gamma)$ be an indegree divisor. Then the following conditions are equivalent: \begin{longenum} \item the restriction of $D$ to every connected component of $\Gamma$ is an irreducible divisor\footnote{Note that a restriction of an indegree divisor to a connected component is automatically an indegree divisor.}; \item there exists a totally cyclic orientation $\mathfrak o \in \pazocal O(\Gamma)$ such that $D = \mathrm{indeg}(\mathfrak o)$; \item $D$ is an interior point of the graphical zonotope $Z_\Gamma$. \end{longenum} \end{proposition} \begin{definition} Let $D \in \mathrm{InDeg}(\Gamma)$ be an indegree divisor. If $D$ satisfies conditions a)-c) listed in Proposition \ref{crCond}, then $D$ is called \textit{completely reducible}. \end{definition} \section{Completely reducible matrix polynomials and compactified Jacobians} \begin{definition} A matrix polynomial $P(\lambda)$ is called \textit{reducible} if there exists a proper non-empty subspace $W \subset \mathbb{C}^n$ which is an invariant under the action of the matrix $P(\lambda)$ for every $\lambda \in \mathbb{C}$. A matrix polynomial $P(\lambda)$ is called \textit{completely reducible} if every such invariant subspace $W \subset \mathbb{C}^n$ admits an invariant complement. \end{definition} Note that if $P(\lambda)$ is reducible, then so is its spectral curve. However, the converse is in general not true. For example, matrix polynomials which belong to the stratum $\pazocal S(\Gamma_C, v_1 + v_2 + v_3)$ in Example \ref{threeLines} are irreducible, though the corresponding spectral curve is a union of three lines. 
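The irreducibility of these matrix polynomials has a combinatorial counterpart: the divisor $v_1 + v_2 + v_3$ is an irreducible indegree divisor on the triangle (it is the interior lattice point of the permutohedron $P_3$). For small graphs, condition c) of Proposition \ref{irCond} can be checked directly by enumerating orientations; the Python sketch below is our own illustration and is not used in any proof.
\begin{verbatim}
# Brute-force check of condition c) of Proposition irCond (illustration only).
from itertools import product

def strongly_connected(arcs, n):
    """Naive reachability test for the directed graph with the given arcs."""
    adj = {v: [] for v in range(n)}
    for tail, head in arcs:
        adj[tail].append(head)
    for s in range(n):
        seen, stack = {s}, [s]
        while stack:
            v = stack.pop()
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        if len(seen) < n:
            return False
    return True

def is_irreducible(edges, n, D):
    """True iff some strongly connected orientation has indegree divisor D."""
    for choice in product((0, 1), repeat=len(edges)):
        arcs = [(i, j) if c else (j, i) for (i, j), c in zip(edges, choice)]
        indeg = [0] * n
        for _, head in arcs:
            indeg[head] += 1
        if tuple(indeg) == tuple(D) and strongly_connected(arcs, n):
            return True
    return False

triangle = [(0, 1), (0, 2), (1, 2)]
print(is_irreducible(triangle, 3, (1, 1, 1)))   # True: interior point of P_3
print(is_irreducible(triangle, 3, (0, 1, 2)))   # False: a vertex of P_3
\end{verbatim}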
It turns out that a necessary and sufficient condition for reducibility of $P$ can be given in terms of the associated graph $\Gamma_P$ and the divisor $D_P$: \begin{theorem}\label{thm4} Let $P \in \pazocal S(\Gamma, D)$ be a matrix polynomial. Then $P$ is reducible if and only if $D$ is a reducible divisor on $\Gamma$. Similarly, $P$ is completely reducible if and only if $D$ is completely reducible. \end{theorem} Let us introduce the set $$ (\pazocal P_C^B)_{\mathrm{cr}} := \{P(\lambda) \in \pazocal P_C^B \mid P(\lambda) \mbox{ completely reducible}\}.$$ By Theorem \ref{thm4}, we have \begin{align}\label{crStrat} (\pazocal P_C^B)_{\mathrm{cr}} = \bigsqcup\nolimits_{\Gamma, D} \pazocal S(\Gamma,D)\end{align} where $\Gamma$ is a generating subgraph of the dual graph $\Gamma_C$, and $D$ is a completely reducible indegree divisor on $\Gamma$. \begin{example} Let $C$ be three lines, as in Example \ref{threeLines}. Then $$ (\pazocal P_C^B)_{\mathrm{cr}} = \pazocal S\left(\begin{tikzpicture}[scale = 1, baseline={([yshift=-.5ex]current bounding box.center)},vertex/.style={anchor=base, circle,fill=black!25,minimum size=18pt,inner sep=2pt}] \draw (0,0) -- (0.5, 0.86); \draw (0,0) -- (1,0); \draw (0.5, 0.86) -- (1,0); \node () at (-0.15,-0.1) {$1$}; \node () at (1.15,-0.1) {$1$}; \node () at (0.5,1.1) {$1$}; \end{tikzpicture} \right) \,\sqcup\, \pazocal S\left( \begin{tikzpicture}[scale = 1, baseline={([yshift=-.5ex]current bounding box.center)},vertex/.style={anchor=base, circle,fill=black!25,minimum size=18pt,inner sep=2pt}] \draw [dashed] (0,0) -- (0.5, 0.86); \draw [dashed] (0,0) -- (1,0); \draw [dashed] (0.5, 0.86) -- (1,0); \node () at (-0.15,-0.1) {$0$}; \node () at (1.15,-0.1) {$0$}; \node () at (0.5,1.1) {$0$}; \end{tikzpicture}\right), $$ cf. the explicit description in Example \ref{threeLines}. \end{example} Now, we compare stratification \eqref{crStrat} with the stratification of the \textit{canonical compactified Jacobian} introduced by Alexeev. In \cite{Alexeev} Alexeev showed that if $C$ is a nodal, possibly reducible curve, then in degree $g-1$, where $g$ is the arithmetic genus of $C$, there exists a canonical way of compactifying the generalized Jacobian. This canonical compactified Jacobian has a stratification that can be written in our terms as \begin{align}\label{ccj} \overline{\mathrm{Jac}}_{g-1}(C) =\bigsqcup\nolimits_{\Gamma,D} \mathrm{Jac}(C_\Gamma) \end{align} where $\Gamma$ is a generating subgraph of the dual graph $\Gamma_C$, $D$ is a completely reducible indegree divisor on $\Gamma$, and, as above, $C_\Gamma$ is the curve obtained from $C$ by resolving all nodes which correspond to edges not in $\Gamma$. \par Thus, from the combinatorial point of view, stratifications \eqref{crStrat} and \eqref{ccj} coincide. However, the strata themselves are different: each stratum of \eqref{crStrat} is biholomorphic to an open subset in the Jacobian $\mathrm{Jac}(C_\Gamma \, / \, \infty)$, while strata of \eqref{ccj} are Jacobians $\mathrm{Jac}(C_\Gamma)$. To pass from $\mathrm{Jac}(C_\Gamma \, / \, \infty)$ to $\mathrm{Jac}(C_\Gamma)$, we should take the quotient of $\pazocal S(\Gamma,D)$ with respect to the conjugation action of the centralizer $G^B$ of $B$ (see Theorem \ref{thm1}). This observation leads to the following conjecture: \begin{conjecture}\label{conjCCJ} Assume that $C$ is a nodal curve satisfying condition \eqref{specCond}. 
Then the moduli space $$ (\pazocal P_C^B)_{\mathrm{cr}} / G^B = \{P \in \pazocal P_{m,n} \mid P \mbox{ is completely reducible}, C_P = C \} \,/\,\mathrm{GL}_n(\mathbb{C})$$ of \textit{completely reducible} $n \times n$ degree $m$ matrix polynomials with spectral curve~$C$, considered up to conjugation by constant matrices, is a variety isomorphic to an open dense subset in the canonical compactified Jacobian of $C$. \end{conjecture} \bibliographystyle{plain}
\section*{Appendix A - Discretization} We describe more in depth the discretization steps and give the implementation details of our algorithm (we skip trivial derivations for the sake of compactness). Detailed gradients for each term are given in Appendix B. \paragraph*{Mesh parametrization.} We denote by $S$ a triangle mesh of $n$ points, composed of triangles $S_j$ for $j=1,\dots,m$. In the following derivations, we will consider the classical triangle-based parametrization described by the charts $x_j : \mathbb{R}^2 \to \mathbb{R}^3$ \begin{equation} x_j(\alpha, \beta) = x_{j,1} + \alpha(x_{j,2}-x_{j,1}) + \beta(x_{j,3} - x_{j,1})\,, \end{equation} with $\alpha \in [0,1]$ and $\beta \in [0, 1-\alpha]$. With $x_{j,k} \in \mathbb{R}^3$ we denote the 3D coordinates of vertex $k \in \{1,2,3\}$ in triangle $S_j$. Each triangle $S_j$ is equipped with a discrete metric tensor with coefficients \begin{eqnarray} g_j = \left( \begin{array}{cc} E_j&F_j \\ F_j&G_j \end{array} \right)\,, \end{eqnarray} where $E_j = \| x_{j,2}-x_{j,1} \|^2$, $F_j = \langle x_{j,2}-x_{j,1} , x_{j,3}-x_{j,1} \rangle$, and $G_j = \| x_{j,3}-x_{j,1} \|^2$. The volume element for the $j$-th triangle is then given by $\sqrt{\det g_j} = \sqrt{E_jG_j-F_j^2}$. \paragraph*{Integral of a scalar function.} Scalar functions $f:S \to \mathbb{R}$ are assumed to behave linearly within each triangle. Hence, $f(x(\alpha,\beta))$ is a linear function of $(\alpha,\beta)$ and it is uniquely determined by its values at the vertices of the triangle. The integral of $f$ over $S_j$ is then simply given by: \begin{align} &\int_0^1 \int_0^{1-\alpha} f(\alpha,\beta) \sqrt{\det g_j} d\beta d\alpha\\ &= \int\int f(0,0) (1-\alpha-\beta) + f(1,0) \alpha + f(0,1)\beta ~\sqrt{\det g_j} d\beta d \alpha\nonumber\\ &= \frac{1}{6} (f(0,0) + f(1,0) + f(0,1))\sqrt{E_jG_j-F_j^2}\nonumber\\ &= \frac{1}{3} (f(0,0) + f(1,0) + f(0,1))\mathrm{area}(S_j) \label{eq:integral}\,, \end{align} where $f(0,0) = f(x_{j,1})$, $f(1,0) = f(x_{j,2})$, and $f(0,1) = f(x_{j,3})$. \paragraph*{Gradient of a scalar function.} For the intrinsic gradient of $f$ we get the classical expression in local coordinates: \begin{eqnarray} \nabla f =\left( \begin{array}{cc} \frac{\partial x_j}{\partial \alpha} & \frac{\partial x_j}{\partial \beta} \end{array} \right) \left( \begin{array}{cc} E_j&F_j \\ F_j&G_j \end{array} \right)^{-1} \left( \begin{array}{c} f_\alpha \\ f_\beta \end{array} \right)\,, \end{eqnarray} where we write $f_\alpha$ to denote the partial derivative $\frac{\partial f}{\partial \alpha} = f_{j,2} - f_{j,1}$ and similarly for $f_\beta$. The norm of the intrinsic gradient over triangle $S_j$ is then given by: \begin{align} &\|\nabla f\|\\ &= \sqrt{\langle\nabla f, \nabla f\rangle}\nonumber\\ &= \sqrt{ \left( \begin{array}{cc} f_\alpha & f_\beta \end{array} \right) \left( \begin{array}{cc} E_j&F_j \\ F_j&G_j \end{array} \right)^{-1} \left( \begin{array}{c} f_\alpha \\ f_\beta \end{array} \right) }\nonumber\\ &= \sqrt{ \left( \begin{array}{cc} f_\alpha & f_\beta \end{array} \right) \left( \begin{array}{cc} G_j&-F_j \\ -F_j&E_j \end{array} \right) \left( \begin{array}{c} f_\alpha \\ f_\beta \end{array} \right) } \frac{1}{\sqrt{\det g_j}}\nonumber\\ &= \sqrt{\frac{f_\alpha^2 G_j - 2 f_\alpha f_\beta F_j + f_\beta^2 E_j}{\det g_j} }\,. \end{align} Note that, since we take $f$ to be linear, the gradient $\nabla f$ is constant within each triangle. 
We can then integrate $\nabla f$ over $S_j$ as follows: \begin{align} &\int_{S_j} \| \nabla f(x)\| \sqrt{\det g_j} d\alpha d\beta\\ &= \int_{S_j} \sqrt{\frac{f_\alpha^2 G_j - 2 f_\alpha f_\beta F_j + f_\beta^2 E_j}{\det g_j} } \sqrt{\det g_j}d\alpha d\beta\nonumber\\ &= \int_{S_j} \sqrt{f_\alpha^2 G_j - 2 f_\alpha f_\beta F_j + f_\beta^2 E_j } ~d\alpha d\beta\nonumber\\ &= \frac{1}{2} \sqrt{f_\alpha^2 G_j - 2 f_\alpha f_\beta F_j + f_\beta^2 E_j }\label{eq:gradient}\,. \end{align} In the following, we write $\mathcal{N}$ and $\mathcal{M}$ to denote the partial and full shape respectively. Further, let $\{ \lambda_i^\mathcal{N} \}_{i=1,\dots,k}$ be the first $k$ eigenvalues of the Laplacian on $\mathcal{N}$, and similarly for $\{ \lambda_i^\mathcal{M} \}_{i=1,\dots,k}$. The functional map $\vct{C}$ has size $k \times k$. \paragraph*{Mumford-Shah functional ($\mu_2$-term).} Following Equations \eqref{eq:integral} and \eqref{eq:gradient}, we immediately obtain: \begin{align} &\int_S \xi(v) \| \nabla v \| dx\nonumber\\ &= \sum_{j=1}^m \int_{S_j} \xi(\alpha,\beta) \| \nabla v \| \sqrt{\det g_j} d\beta d\alpha\nonumber\\ &= \sum_{j=1}^m \sqrt{v_\alpha^2 G_j - 2 v_\alpha v_\beta F_j + v_\beta^2 E_j } \int_{S_j} \xi(\alpha,\beta) d\beta d\alpha\nonumber\\ &\approx \frac{1}{6} \sum_{j=1}^m \sqrt{v_\alpha^2 G_j - 2 v_\alpha v_\beta F_j + v_\beta^2 E_j } (\xi(0,0) + \xi(1,0) + \xi(0,1))\nonumber\,, \end{align} where $\xi(0,0) = \xi(v(x_{j,1}))$, $\xi(1,0) = \xi(v(x_{j,2}))$, and $\xi(0,1) = \xi(v(x_{j,3}))$. \paragraph*{Weight matrix ($\mu_3$-term).} Recall from Section \ref{sec:perturb} and Figure \ref{fig:spectra} that an estimate for the rank of $\vct{C}$ can be easily computed as \begin{equation}\label{eq:rank} r = \max \{ i ~|~ \lambda_i^\mathcal{N} < \max_j \lambda_j^\mathcal{M} \}\,. \end{equation} We use this information in order to construct the weight matrix $\vct{W}$, whose diagonal slope directly depends on $r$. To this end, we model $\vct{W}$ as a regular $k \times k$ grid in $\mathbb{R}^2$. The slanted diagonal of $\vct{W}$ is a line segment $\vct{\delta}(t) = \vct{p} + t \frac{\vct{n}}{\| \vct{n} \|}$ with $t \in \mathbb{R}$, where $\vct{p}=(1,1)\ensuremath{^\top}$ is the matrix origin, and $\vct{n} = (1, r/k)\ensuremath{^\top}$ is the line direction with slope $r/k$. The high-frequency spread in $\vct{C}$ is further accounted for by funnel-shaping $\vct{W}$ along the slanted diagonal. We arrive at the following expression for $\vct{W}$: \begin{equation}\label{eq:wij} w_{ij} = e^{-\sigma\sqrt{i^2 + j^2}} \| \frac{\vct{n}}{\| \vct{n} \|} \times ((i,j)\ensuremath{^\top} - \vct{p}) \| \,, \end{equation} where the second factor is the distance from the slanted diagonal $\vct{\delta}$, and $\sigma \in \mathbb{R}_+$ regulates the spread around $\vct{\delta}$. In our experiments we set $\sigma = 0.03$. \paragraph*{Orthogonality ($\mu_4 , \mu_5$-terms).} For practical reasons, we incorporate the off-diagonal and diagonal terms within one term with the single coefficient $\mu_{4,5}$. In addition, we rewrite the off-diagonal penalty using the following equivalent expression: \begin{equation} \sum_{i \neq j} ( \mathbf{C}^\mathrm{T} \mathbf{C} )^2_{ij} = \| \vct{C}\ensuremath{^\top}\vct{C} \|_\mathrm{F}^2 - \sum_i (\vct{C}\ensuremath{^\top}\vct{C})^2_{ii}\,. \end{equation} Vector $\vct{d} \in \mathbb{R}^k$ is constructed by setting the first $r$ elements (according to \eqref{eq:rank}) equal to 1, and the remaining $k-r$ elements equal to 0. 
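For concreteness, the rank estimate \eqref{eq:rank}, the weight matrix \eqref{eq:wij}, and the vector $\vct{d}$ can be assembled as in the following NumPy sketch. This is a minimal illustration of ours (not the reference implementation); it assumes the eigenvalues are given in increasing order and uses the same $1$-based indices $(i,j)$ as in \eqref{eq:wij}.
\begin{verbatim}
import numpy as np

def estimate_rank(evals_N, evals_M):
    # r = max{ i : lambda_i^N < max_j lambda_j^M }, Eq. (rank),
    # assuming evals_N is sorted in increasing order.
    return int(np.sum(np.asarray(evals_N) < np.max(evals_M)))

def weight_matrix(k, r, sigma=0.03):
    # W of Eq. (wij): distance to the slanted diagonal with slope r/k,
    # funnel-shaped by the factor exp(-sigma * sqrt(i^2 + j^2)).
    p = np.array([1.0, 1.0])              # matrix origin
    d = np.array([1.0, r / k])            # direction of the slanted diagonal
    d = d / np.linalg.norm(d)
    W = np.zeros((k, k))
    for i in range(1, k + 1):
        for j in range(1, k + 1):
            q = np.array([float(i), float(j)]) - p
            dist = abs(d[0] * q[1] - d[1] * q[0])   # 2D cross product
            W[i - 1, j - 1] = np.exp(-sigma * np.hypot(i, j)) * dist
    return W

def diag_target(k, r):
    # Vector d: first r entries equal to 1, remaining k - r entries equal to 0.
    return np.concatenate([np.ones(r), np.zeros(k - r)])
\end{verbatim}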
\section*{Appendix B - Gradients} We find local solutions to each optimization problem by the (nonlinear) conjugate gradient method. In this section we give the detailed gradient derivations of all terms involved in the optimization. In order to keep the derivations practical, we will model the function $v$ by its corresponding $n$-dimensional vector $\vct{v}$. Note that, depending on the optimization step, the gradients are computed with respect to either $\vct{v}$ or $\vct{C}$. \paragraph*{Data term (w.r.t. ${v}$).} Let $q$ denote the number of corresponding functions between the two shapes. Matrices $\mathbf{F}$ and $\mathbf{G}$ contain the column-stacked functions defined over $\mathcal{N}$ and $\mathcal{M}$ and have size $n\times q$. The respective projections onto the corresponding functional spaces $\bm{\Phi}$ and $\bm{\Psi}$ are stored in the $k\times q$ matrices $\mathbf{A}$ and $\mathbf{B}$ respectively. Let us write $\mathbf{H}_{ij}$ to identify the elements of the matrix $\vct{CA} - \vct{B} (\eta(v))$; we then have \begin{align} &\frac{\partial}{\partial v_p} \| \vct{\mathbf{C}\mathbf{A}} - \vct{\mathbf{B}}(\eta(v))\|_{2,1}\nonumber\\ &= \frac{\partial}{\partial v_p} \sum_{j=1}^q \left( \sum_{i=1}^k \mathbf{H}_{ij}^2 \right)^\frac{1}{2} \nonumber\\ &= \sum_{j=1}^q \left( \sum_{i=1}^k \mathbf{H}_{ij}^2 \right)^{-\frac{1}{2}} \sum_{i=1}^k \mathbf{H}_{ij} \frac{\partial}{\partial v_p} \mathbf{H}_{ij}\,. \label{eq:derdatatermvp} \end{align} Since $\mathbf{B} (\eta(v))_{ij} = \sum_{l=1}^n \bm{\Psi}^\mathrm{T}_{il} \eta(v_l) \mathbf{F}_{lj}$, we have: \begin{equation} \frac{\partial}{\partial v_p} \mathbf{H}_{ij} = \frac{\partial}{\partial v_p} [(\mathbf{C}\mathbf{A})_{ij} - \mathbf{B}_{ij}] = -\bm{\Psi}^\mathrm{T}_{ip} \mathbf{F}_{pj} \frac{\partial}{\partial v_p}\eta(v_p)\,. \end{equation} Finally: \begin{equation}\label{eq:gradetav} \frac{\partial}{\partial v_p}\eta(v_p) = 1-\tanh^2(2 v_p -1). \end{equation} \paragraph*{Area term ($\mu_1$-term w.r.t. ${v}$).} The derivative of the discretized area term is: \begin{align*} &\frac{\partial}{\partial v_p} \left( \sum_{i=1}^n (S_\mathcal{N})_i - \sum_{i=1}^n (S_\mathcal{M})_i \eta(v_i) \right)^2\\ &= -2 \left( \sum_{i=1}^n (S_\mathcal{N})_i - \sum_{i=1}^n (S_\mathcal{M})_i \eta(v_i) \right) (S_\mathcal{M})_p \frac{\partial}{\partial v_p}\eta(v_p) \end{align*} where $(S_\mathcal{M})_i$ and $(S_\mathcal{N})_i$ are the local area elements associated with the $i$-th vertex of meshes $\mathcal{M}$ and $\mathcal{N}$ respectively. For the derivative of $\eta(v_p)$ see equation \eqref{eq:gradetav}. \paragraph*{Mumford-Shah functional ($\mu_2$-term w.r.t. ${v}$).} Computing the gradient $\nabla_\vct{v} \int_S \xi(\vct{v}) \| \nabla \vct{v} \| dx$ involves computing partial derivatives of $\xi(\vct{v})$ with respect to $\vct{v}$. These are simply given by: \begin{align*} \frac{\partial}{\partial v_k} \xi (v_k) &= \frac{\partial}{\partial v_k} e^{-\frac{\tanh(2 v_k - 1)}{4 \sigma^2}}\\ &= -\frac{1 - \tanh^2(2 v_k -1)}{2 \sigma^2} e^{-\frac{\tanh(2 v_k - 1)}{4 \sigma^2}}\,. \end{align*} In the following derivations we set $D_j \equiv \sqrt{v_\alpha^2 G_j - 2 v_\alpha v_\beta F_j + v_\beta^2 E_j }$, and $D_j =0$ whenever $\nabla \vct{v} = \vct{0}$. 
The gradient of the Mumford-Shah functional is then composed of the partial derivatives: \begin{align*} &\frac{\partial}{\partial v_k} \int_S \xi(\vct{v}) \| \nabla \vct{v} \| dx\nonumber\\ &= \sum_{j=1}^m \frac{\partial}{\partial v_k} \int_{S_j} \xi(\vct{v}) \| \nabla \vct{v} \| \nonumber\\ &= \frac{1}{6} \sum_{j \in N(k)} \frac{\partial}{\partial v_k} \left[ D_j ( \xi(v_k) + \xi( v_{j,2} ) + \xi ( v_{j,3} ) ) \right]\nonumber\\ &= \frac{1}{6} \sum_{j \in N(k)} \left[ ( \xi(v_k) + \xi( v_{j,2} ) + \xi ( v_{j,3} ) ) \frac{\partial}{\partial v_k}D_j + D_j \frac{\partial}{\partial v_k} \xi(v_k) \right]\nonumber\\ &= \frac{1}{6} \sum_{j \in N(k)} \left[ ( \xi(v_k) + \xi( v_{j,2} ) + \xi ( v_{j,3} ) ) \frac{1}{2D_j} \frac{\partial}{\partial v_k} D_j^2 + D_j \frac{\partial}{\partial v_k} \xi(v_k) \right] \nonumber\\ &=\frac{1}{6} \sum_{j \in N(k)} \left[ ( \xi(v_k) + \xi( v_{j,2} ) + \xi ( v_{j,3} ) ) K_j + D_j \frac{\partial}{\partial v_k} \xi(v_k) \right] \nonumber \end{align*} where we write $K_j \equiv \frac{1}{D_j} ((v_k - v_{j,2})(G_j-F_j)+(v_k-v_{j,3})(E_j-F_j))$, and $j \in N(k)$ are the indices of the triangles containing the $k$-th vertex. Note that we slightly abuse notation by writing $v_k$, $v_{j,2}$, and $v_{j,3}$ to denote the three vertices of the $j$-th triangle, even though in general the ordering might be different depending on the triangle. \paragraph*{Data term (w.r.t. C).} The derivative of the data term with respect to $\mathbf{C}$ is similar to \eqref{eq:derdatatermvp}. The only difference is in the partial derivative: \begin{equation} \frac{\partial}{\partial \mathbf{C}_{pq}} \mathbf{H}_{ij} = \frac{\partial}{\partial \mathbf{C}_{pq}} \sum_{l=1}^k \mathbf{C}_{il} \mathbf{A}_{lj} = \begin{cases} \mathbf{A}_{qj} & \mbox{if } i = p \\ 0 & \mbox{otherwise.}\end{cases} \end{equation} \paragraph*{Weight matrix ($\mu_3$-term w.r.t. C).} This is simply given by: \begin{equation} \frac{\partial}{\partial \mathbf{C}_{pq}} \| \mathbf{C} \circ \vct{W} \|^2_\mathrm{F} = 2\,\mathbf{C}_{pq} (\vct{W}_{pq})^2\,. \end{equation} \paragraph*{Orthogonality ($\mu_{4,5}$-term w.r.t. C).} The gradient of the last term can finally be obtained as: \begin{equation*} \begin{aligned} &\frac{\partial}{\partial \mathbf{C}_{pq}} \left[ \| \mathbf{C}\ensuremath{^\top}\mathbf{C} \|_\mathrm{F}^2 - \sum_i (\vct{C}\ensuremath{^\top}\vct{C})^2_{ii}\ + \sum_i ((\vct{C}\ensuremath{^\top}\vct{C})_{ii} - d_i)^2 \right] \\& \; = 4(\mathbf{C} \mathbf{C}^T \mathbf{C})_{pq} + 2 \sum_{i} \left[ -\sum_k \mathbf{C}_{ki}^2 \frac{\partial}{\partial \mathbf{C}_{pq}} \sum_k \mathbf{C}_{ki}^2 +(\sum_k \mathbf{C}_{ki}^2 - d_i)\frac{\partial}{\partial \mathbf{C}_{pq}} \sum_k \mathbf{C}_{ki}^2 \right] \\& \; = 4[ (\mathbf{C} \mathbf{C}^T \mathbf{C})_{pq} - d_q \mathbf{C}_{pq}]\,. \end{aligned} \end{equation*} \section*{Appendix C - Perturbation Analysis} \newtheorem{thm}{Theorem} \begin{thm} Let $\vct{L}_\mathcal{N}+t\vct{P}_\mathcal{N} = \bm{\Phi}(t)\ensuremath{^\top} \bm{\Lambda}(t) \bm{\Phi}(t)$, where $\bm{\Lambda}(t) = \mathrm{diag}(\lambda_1(t), \hdots, \lambda_n(t))$ is a diagonal matrix of eigenvalues, and $\bm{\Phi}(t)$ are the corresponding eigenvectors. The derivative of the non-trivial eigenvalues is given by \begin{equation}\label{eq:eigd} \frac{d}{dt}\lambda_i = \sum_{v,w\in \partial \mathcal{N}} (\vct{P}_\mathcal{N})_{vw} {\phi}_{iv} {\phi}_{iw} = \bm{\phi}_i^\top \vct{P}_\mathcal{N} \bm{\phi}_i. 
\end{equation} \end{thm} {\em Proof:} Let $\vct{A}(t)$ be a symmetric real $n\times n$ matrix parametrized by $t \in T\subseteq\mathbb{R}$, with $\bm{\Phi}(t)$ and $\bm{\Lambda}(t)$ being the eigenvector and eigenvalue matrices, {\em i.e.}, for all $t \in T$ we have \begin{equation}\label{eq:eigen} \vct{A}(t) \bm{\Phi}(t) = \bm{\Phi}(t)\bm{\Lambda}(t) \end{equation} and $\bm{\Lambda}(t)$ is diagonal and $\bm{\Phi}(t)$ orthogonal. Following~\cite{NME:NME1620260202}, if all the eigenvalues are distinct, then we can compute the derivatives of the eigenvalues at $t=0$ as \begin{equation} \lambda^\prime_i = \bm{\phi}_i\ensuremath{^\top} \vct{A}^\prime \bm{\phi}_i \end{equation} where $\vct{A}^\prime$, the derivative of $\vct{A}(t)$, and the eigenvectors $\bm{\phi}_i$ are considered at $t=0$. In fact, differentiating (\ref{eq:eigen}), we obtain \begin{equation}\label{eq:diff} \vct{A}^\prime \bm{\Phi} + \vct{A} \bm{\Phi}^\prime = \bm{\Phi}^\prime \bm{\Lambda} + \bm{\Phi} \bm{\Lambda}^\prime\,. \end{equation} Left-multiplying both sides by $\bm{\Phi}\ensuremath{^\top}$, setting $\bm{\Phi}^\prime=\bm{\Phi}\vct{B}$ for a matrix $\vct{B}$ to be determined, and recalling that $\bm{\Phi}\ensuremath{^\top} \vct{A} \bm{\Phi}=\bm{\Lambda}$, we have \begin{equation}\label{eq:diff2} \bm{\Phi}\ensuremath{^\top} \vct{A}^\prime \bm{\Phi} + \bm{\Lambda}\vct{B} = \vct{B}\bm{\Lambda} + \bm{\Lambda}^\prime\,, \end{equation} from which \begin{equation} \operatorname{diag}(\bm{\Lambda}^\prime) = \operatorname{diag}(\bm{\Phi}\ensuremath{^\top} \vct{A}^\prime \bm{\Phi}) + \operatorname{diag}(\bm{\Lambda}\vct{B} - \vct{B}\bm{\Lambda}) = \operatorname{diag}(\bm{\Phi}\ensuremath{^\top} \vct{A}^\prime \bm{\Phi})\,. \end{equation} Going back to our case, we take the simplifying assumption that $\vct{L}_\mathcal{N}$ and $\vct{L}_{\overline{\mathcal{N}}}$ do not have repeated eigenvalues. Let \begin{eqnarray} \vct{L}_\mathcal{N} &=& \bm{\Phi}\ensuremath{^\top} \bm{\Lambda} \bm{\Phi}\\ \vct{L}_{\overline{\mathcal{N}}} &=& \overline{\bm{\Phi}}\ensuremath{^\top} \overline{\bm{\Lambda}} \overline{\bm{\Phi}} \end{eqnarray} be the spectral decompositions of $\vct{L}_\mathcal{N}$ and $\vct{L}_{\overline{\mathcal{N}}}$ respectively. According to the previous result, we can write the derivative of the eigenvalue $\lambda_i$ of $\vct{L}_\mathcal{N}$ (and, thus, of $\vct{L}(0)$) as: \begin{equation} \lambda_i^\prime = \bm{\phi}_i\ensuremath{^\top} \vct{P}_\mathcal{N} \bm{\phi}_i = \sum_{v,w\in \partial \mathcal{N}} (\vct{P}_\mathcal{N})_{vw} {\phi}_{iv} {\phi}_{iw} \,. \end{equation} \begin{thm} Assume that $\vct{L}_{{\mathcal{N}}}$ has distinct eigenvalues ($\lambda_i \neq \lambda_j$ for $i\neq j$), and furthermore, the non-zero eigenvalues are all distinct from the eigenvalues of $\vct{L}_{\overline{\mathcal{N}}}$ ($\lambda_i \neq \overline{\lambda}_j$ for all $i, j$). Let $\vct{L}_\mathcal{N}+t\vct{P}_\mathcal{N} = \bm{\Phi}(t)\ensuremath{^\top} \bm{\Lambda}(t) \bm{\Phi}(t)$, where $\bm{\Lambda}(t) = \mathrm{diag}(\lambda_1(t), \hdots, \lambda_n(t))$ is a diagonal matrix of eigenvalues, and $\bm{\Phi}(t)$ are the corresponding eigenvectors. Then, the derivative of the non-constant eigenvector is given by \begin{equation} \frac{d}{dt}\bm{\phi}_i = \sum_{ {j=1}\atop{j\neq i}}^{n} \frac{\bm{\phi}_i\ensuremath{^\top} \vct{P}_\mathcal{N} \bm{\phi}_j}{{\lambda}_i-{\lambda}_j} \bm{\phi}_j + \sum_{j=1}^{\overline{n}} \frac{\bm{\phi}_i\ensuremath{^\top} \vct{P}\; \overline{\bm{\phi}}_j}{{\lambda}_i-\overline{{\lambda}}_j} \overline{\bm{\phi}}_j\,.
\end{equation} \end{thm} {\em Proof:} Under the same assumptions as for the previous theorem, from (\ref{eq:diff2}) we have \begin{eqnarray} (\bm{\Phi}\ensuremath{^\top} \vct{A}^\prime \bm{\Phi})_{ij} + (\bm{\Lambda}\vct{B})_{ij} &=& (\vct{B}\bm{\Lambda})_{ij} + (\bm{\Lambda}^\prime)_{ij}\\ \bm{\phi}_i\ensuremath{^\top} \vct{A}^\prime \bm{\phi}_j + \lambda_i b_{ij} &=& b_{ij}\lambda_j + 0\,, \end{eqnarray} from which \begin{equation}\label{eq:mix} b_{ij} = \frac{\bm{\phi}_i\ensuremath{^\top} \vct{A}^\prime \bm{\phi}_j}{\lambda_j-\lambda_i}\,. \end{equation} Due to the orthogonality of $\bm{\Phi}(t)$ we have that $\vct{B}$ is skew-symmetric, and thus, $b_{ii}=0$. From the relation $\bm{\Phi}^\prime = \bm{\Phi}\vct{B}$ we obtain \begin{equation} \frac{d}{dt}\bm{\phi}_i = \sum_{j\neq i} b_{ji} \bm{\phi}_j = \sum_{j\neq i} \frac{\bm{\phi}_j\ensuremath{^\top} \vct{A}^\prime \bm{\phi}_i}{\lambda_i-\lambda_j} \bm{\phi}_j\,. \end{equation} Going back to our case, recall that the set of eigenvalues of $\vct{L}(0)$ is the union of the eigenvalues of $\vct{L}_\mathcal{N}$ and $\vct{L}_{\overline{\mathcal{N}}}$ and the corresponding eigenvectors are obtained from those of $\vct{L}_\mathcal{N}$ and $\vct{L}_{\overline{\mathcal{N}}}$ by padding with zeros on the missing parts. Denoting by $\lambda_i$ and $\bm{\phi}_i$ the eigenvalues and corresponding eigenvectors of $\vct{L}_\mathcal{N}$, and $\overline{{\lambda}}_j$ and $\overline{\bm{\phi}}_j$ the eigenvalues and corresponding eigenvectors of $\vct{L}_{\overline{\mathcal{N}}}$, we have \begin{equation} \frac{d}{dt}\bm{\phi}_i = \sum_{ {j=1}\atop{j\neq i}}^{n} \frac{\bm{\phi}_i\ensuremath{^\top} \vct{P}_\mathcal{N} \bm{\phi}_j}{{\lambda}_i-{\lambda}_j} \bm{\phi}_j + \sum_{j=1}^{\overline{n}} \frac{\bm{\phi}_i\ensuremath{^\top} \vct{P}\; \overline{\bm{\phi}}_j}{{\lambda}_i-\overline{{\lambda}}_j} \overline{\bm{\phi}}_j\,. \end{equation} \paragraph*{Boundary interaction strength.} We can measure the variation of the eigenbasis as a function of the boundary $\mathcal{B}$ splitting $\mathcal{M}$ into $\mathcal{N}$ and $\overline{\mathcal{N}}$ as \begin{eqnarray} \partial \bm{\Phi}(\mathcal{B}) &=& \sum_{i=1}^{n} \| \bm{\phi}_i^\prime \|^2_{\mathcal{N}} = \sum_{i=1}^{n} \left( \sum_{j=1\atop{j\neq i}}^{n} \frac{\bm{\phi}_i\ensuremath{^\top} \vct{D}_\mathcal{N} \bm{\phi}_j}{{\lambda}_i-{\lambda}_j} \right)^2. \end{eqnarray} Let us now consider the function: \begin{equation} f(v) = \sum_{{i,j=1}\atop{j\neq i}}^{n} \left(\frac{{\phi}_{iv}{\phi}_{jv}}{{\lambda}_i-{\lambda}_j}\right)^2\,. \end{equation} Assuming $\vct{D}_\mathcal{N}$ diagonal and with constant diagonal elements $k$, we have \begin{equation} k \int_{\mathcal{B}} f(v)\,dv \geq \partial \bm{\Phi}(\mathcal{B})\,, \end{equation} in fact: \begin{eqnarray} \partial \bm{\Phi}(\mathcal{B}) &\approx& k \sum_{i=1}^{n} \left( \sum_{j=1\atop{j\neq i}}^{n}\frac{\sum_{v\in \partial \mathcal{M}}{\phi}_{iv}\ {\phi}_{jv}}{{\lambda}_i-{\lambda}_j} \right)^2 \nonumber\\ &\leq& k \sum_{v\in \partial \mathcal{M}} \sum_{{i,j=1}\atop{j\neq i}}^{n} \left( \frac{{\phi}_{iv}\ {\phi}_{jv}}{{\lambda}_i-{\lambda}_j} \right)^2 = k \sum_{v\in \partial \mathcal{M}} f(v)\,. \end{eqnarray} \section{Background} \label{sec:bg} In this paper, we model shapes as compact connected \rev{2-manifolds} $\mathcal{M}$, possibly with boundary $\partial\mathcal{M}$.
Given two real scalar fields $f, g : {\mathcal{M} \rightarrow \mathbb{R}}$ on the manifold, we define the standard inner product $\langle f, g\rangle_{\mathcal{M}} = \int_{\mathcal{M}}f(x)g(x) dx$, where integration is done using the area element induced by the Riemannian metric. We denote by $L^2(\mathcal{M}) = \{ {f: \mathcal{M} \rightarrow\mathbb{R}} ~|~ \langle f, f\rangle_{\mathcal{M}} <\infty \}$ the space of square-integrable functions on $\mathcal{M}$. The {\em intrinsic gradient} $\nabla_{\mathcal{M}}f$ and the positive semi-definite {\em Laplace-Beltrami operator} $\Delta_{\mathcal{M}}f = -\mathrm{div}_{\mathcal{M}}( \nabla_{\mathcal{M}}f )$ generalize the notions of gradient and Laplacian to manifolds. The Laplace-Beltrami operator admits an eigen-decomposition \begin{eqnarray} \Delta_{\mathcal{M}} \phi_i(x) = \lambda_i \phi_i(x) & \,\,\,\,\,& x \in \mathrm{int}(\mathcal{M}) \\ \langle \nabla_{\mathcal{M}} \phi_i(x) , \hat{n}(x) \rangle = 0 &\,\,\,\,\,& x \in \partial\mathcal{M}, \label{eq:neumann} \end{eqnarray} with homogeneous Neumann boundary conditions~(\ref{eq:neumann}) if $\mathcal{M}$ has a boundary (here $\hat{n}$ denotes the normal vector to the boundary), where $0 = \lambda_1 <\lambda_2 \leq \hdots$ are eigenvalues and $\phi_1, \phi_2, \hdots$ are the corresponding eigenfunctions (or eigenvectors). The eigenfunctions form an orthonormal basis on $L^2(\mathcal{M})$, \emph{i.e.}, $\langle \phi_i, \phi_j\rangle_{\mathcal{M}} = \delta_{ij}$, generalizing the classical Fourier analysis: a function $f\in L^2(\mathcal{M})$ can be expanded into the {\em Fourier series} as \begin{eqnarray} \label{eq:fourier} f(x) &=& \sum_{i\geq 1} \langle f, \phi_i\rangle_{\mathcal{M}} \phi_i(x)\,. \end{eqnarray} \paragraph*{Functional correspondence.} Let us now be given two manifolds, $\mathcal{N}$ and $\mathcal{M}$. Ovsjanikov \emph{et al.} \shortcite{ovsjanikov12} proposed modeling {\em functional correspondence} between shapes as a linear operator $T: L^2(\mathcal{N}) \rightarrow L^2(\mathcal{M})$. One can easily see that classical vertex-wise correspondence is a particular setting where $T$ maps delta-functions to delta-functions. Assuming to be given two orthonormal bases $\{\phi_i\}_{i\geq 1}$ and $\{\psi_i\}_{i\geq 1}$ on $L^2(\mathcal{N})$ and $L^2(\mathcal{M})$ respectively, the functional correspondence can be expressed w.r.t. these bases as follows: \begin{eqnarray} \label{eq:funcorr1} Tf &=& T \sum_{i\geq 1} \langle f, \phi_i \rangle_{\mathcal{N}} \phi_i = \sum_{i\geq 1} \langle f, \phi_i \rangle_{\mathcal{N}} T\phi_i \nonumber\\\label{eq:cc} &=& \sum_{ij\geq 1} \langle f, \phi_i \rangle_{\mathcal{N}} \underbrace{\langle T\phi_i, \psi_j \rangle_{\mathcal{M}}}_{c_{ij}} \psi_j\,. \end{eqnarray} Thus, $T$ amounts to a linear transformation of the Fourier coefficients of $f$ from basis $\{\phi_i\}_{i\geq 1}$ to basis $\{\psi_i\}_{i\geq 1}$, which is captured by the coefficients $c_{ij}$. Truncating the Fourier series~(\ref{eq:funcorr1}) at the first $k$ coefficients, one obtains a rank-$k$ approximation of $T$, represented in the bases $\{\phi_i, \psi_i\}_{i\geq 1}$ as a $k\times k$ matrix $\mathbf{C} = (c_{ij})$. In order to compute $\mathbf{C}$, Ovsjanikov \emph{et al.} \shortcite{ovsjanikov12} assume to be given a set of $q$ corresponding functions $\{ f_1, \hdots, f_q \} \subseteq L^2(\mathcal{N})$ and $\{ g_1, \hdots, g_q\} \subseteq L^2(\mathcal{M})$.
Denoting by $a_{ij} = \langle f_j, \phi_i \rangle_{\mathcal{N}}$ and $b_{ij} = \langle g_j, \psi_i \rangle_{\mathcal{M}}$ the $k\times q$ matrices of the respective Fourier coefficients, functional correspondence boils down to the linear system \begin{eqnarray} \label{eq:cab} \mathbf{C} \mathbf{A} &=& \mathbf{B}\,. \end{eqnarray} If $q\geq k$, the system~(\ref{eq:cab}) is (over-)determined and is solved in the least squares sense to find $\mathbf{C}$. \paragraph*{Structure of C.} We note that the coefficients $\mathbf{C}$ depend on the choice of the bases. In particular, it is convenient to use the eigenfunctions of the Laplace-Beltrami operators of $\mathcal{N}$ and $\mathcal{M}$ as the bases $\{\phi_i, \psi_i\}_{i\geq 1}$; truncating the series at the first $k$ coefficients has the effect of `low-pass' filtering thus producing smooth correspondences. In the following, this will be our tacit basis choice. Furthermore, note that the system~(\ref{eq:cab}) has $qk$ equations and $k^2$ variables. However, in many situations the actual number of variables is significantly smaller, as $\mathbf{C}$ manifests a certain structure which can be taken advantage of. In particular, if $\mathcal{N}$ and $\mathcal{M}$ are isometric and have simple spectrum (\emph{i.e.}, the Laplace-Beltrami eigenvalues have no multiplicity), then $T \phi_i = \pm \psi_i$, or in other words, $c_{ij} = \pm \delta_{ij}$. In more realistic scenarios (approximately isometric shapes), the matrix $\mathbf{C}$ would manifest a funnel-shaped structure, with the majority of elements distant from the diagonal close to zero. \paragraph*{Discretization.} In the discrete setting, the manifold $\mathcal{N}$ is sampled at $n$ points $x_1, \hdots, x_n$ which are connected by edges $E = E_\mathrm{i} \cup E_\mathrm{b}$ and faces $F$, forming a manifold triangular mesh $(V,E,F)$. We denote by $E_\mathrm{i}$ and $E_\mathrm{b}$ the interior and boundary edges respectively. A function on the manifold is represented by an $n$-dimensional vector $\mathbf{f} = (f(x_1), \hdots, f(x_n))^\top$. The discretization of the Laplacian takes the form of an $n\times n$ sparse matrix $\mathbf{L} = -\mathbf{S}^{-1}\mathbf{W}$ using the classical cotangent formula \cite{macneal1949solution,duffin1959distributed,pinkall1993computing}, \begin{eqnarray} \label{eq:cotan} w_{ij} & = & \left\{ \begin{array}{lc} (\cot \alpha _{ij} + \cot \beta _{ij})/2 & ij \in E_\mathrm{i}; \\ (\cot \alpha _{ij})/2 & ij \in E_\mathrm{b}; \\ -\sum_{k\neq i} w_{ik} & i = j; \\ 0 & \mathrm{else}; \end{array} \right. \end{eqnarray} where $\mathbf{S} = \mathrm{diag}(s_1, \hdots, s_n)$, $s_i = \frac{1}{3} \sum_{jk: ijk \in F} s_{ijk}$ denotes the local area element at vertex $i$, $s_{ijk}$ denotes the area of triangle $ijk$, and $\alpha_{ij}, \beta_{ij}$ denote the angles $\angle ikj, \angle jhi$ of the triangles sharing the edge $ij$ (see Fig.~\ref{fig:cot_weights}). \begin{figure}[bh!] 
\centering \begin{overpic} [trim=0cm 0cm 0cm 0cm,clip,width=0.9\linewidth]{laplacian.pdf} \put(21,19){\footnotesize $i$} \put(20,47){\footnotesize $j$} \put(43,37){\footnotesize $k$} \put(1,36){\footnotesize $h$} \put(16.5,35.5){\footnotesize $w_{ij}$} \put(23.5,30){\footnotesize $\tfrac{1}{3}s_{ijk}$} \put(32.5,36.5){\footnotesize $\alpha_{ij}$} \put(9,35.5){\footnotesize $\beta_{ij}$} \put(81.5,36.5){\footnotesize $\alpha_{ij}$} \put(73.5,19){\footnotesize $i$} \put(74,45){\footnotesize $j$} \put(92.5,39.5){\footnotesize $k$} \end{overpic} \caption{Discretization of the Laplace-Beltrami operator on a triangular mesh for interior edges (green, left) and boundary edges (red, right). } \label{fig:cot_weights} \end{figure} The first $k$ eigenfunctions and eigenvalues of the Laplacian are computed by performing the generalized eigen-decomposition $\mathbf{W} \bm{\Phi} = \mathbf{S}\bm{\Phi}\bm{\Lambda}$, where $\bm{\Phi} = (\bm{\phi}_1, \hdots, \bm{\phi}_k)$ is an $n\times k$ matrix containing as columns the discretized eigenfunctions and $\bm{\Lambda} = \mathrm{diag}(\lambda_1, \hdots, \lambda_k)$ is the diagonal matrix of the corresponding eigenvalues. The computation of Fourier coefficients is performed by $\mathbf{a} = \bm{\Phi}^\top \mathbf{S} \mathbf{f}$. \section{Discussion and conclusions} \label{sec:conclusion} In this paper we tackled the problem of dense matching of deformable shapes under partiality transformations. We cast our formulation within the framework of functional maps, which we adapted and extended to deal with this more challenging scenario. Our approach is fully automatic and makes exclusive use of dense local features as a measure of similarity. Coupled with a robust prior on the functional correspondence derived from a perturbation analysis of the shape Laplacians, this allowed us to devise an effective optimization process with remarkable results on very challenging cases. In addition to our framework for partial functional correspondence we also introduced two new datasets comprising hundreds of shapes, which we hope will foster further research on this challenging problem. One of the main issues of our method concerns the existence of multiple optima, which is in turn related to the presence of non-trivial self-isometries on the considered manifolds. Since most natural shapes are endowed with intrinsic symmetries, one may leverage this knowledge in order to avoid inconsistent matchings. For example, including a smoothness prior on the correspondence might alleviate such imperfections and thus provide better-behaved solutions. Secondly, since the main focus of this paper is on tackling partiality rather than general deformations, our current formulation does not explicitly address the cases of topological changes and inter-class similarity (\emph{e.g.}, matching a man to a gorilla). However, the method can be easily extended to employ more robust descriptors such as pairwise features~\cite{rodola13,kaick13}, or to simultaneously optimize over ad-hoc functional bases on top of the correspondence. Finally, extending our approach to tackle entire shape collections, as opposed to individual pairs of shapes, represents a further exciting direction of research. \section{Experimental results} \label{sec:exp} \paragraph*{Datasets.} As base models, we use shapes from the TOSCA dataset~\cite{bbk08}, consisting of 76 nearly-isometric shapes subdivided into 8 classes. 
Each class comes with a ``null'' shape in a standard pose (extrinsically bilaterally symmetric), and ground-truth correspondences are provided for all shapes within the same class. In order to make the datasets more challenging and avoid compatible triangulations, all shapes were remeshed to 10K vertices by iterative pair contractions~\cite{garland97}. Then, missing parts were introduced in the following ways:\footnote{The datasets together with code for our method are available for download at {\small \url{http://vision.in.tum.de/data/datasets/partial}}. } {\em Regular cuts}. The null shape of each class was cut with a plane at 6 different orientations, including an exact cut along the symmetry plane. The six cuts were then transferred to the remaining poses using the ground-truth correspondence, resulting in 456 partial shapes in total. Some examples are shown in Fig.~\ref{fig:spectra} and \ref{fig:C}. {\em Irregular holes}. Given a shape and an ``area budget'' determining the fraction of area to keep (40\%, 70\%, and 90\%), we produced additional shapes by an erosion process applied to the surface. Specifically, seed holes were placed at 5, 25, and 50 farthest samples over the shape; the holes were then enlarged to meet the specified area budget. The total number of shapes produced this way was 684. Examples of this dataset are shown in Fig.~\ref{fig:horse_partiality} and \ref{fig:examples}. {\em Range images}. We simulated range images by taking orthographic projections of the original TOSCA shapes from different viewpoints. Each range image was produced via ray casting from a regular grid with a resolution of ${100\times150}$ pixels. Examples are shown in Fig. \ref{fig:examples}. {\em Point clouds}. Point clouds were generated by taking a subset of shapes from the first two datasets. Each partial shape was then resampled uniformly to 1000 farthest points, and the tessellation removed. See Fig. \ref{fig:examples} for examples. Where not specified otherwise, we use 120 random partial shapes for the first dataset and 80 for the second, equally distributed among the different classes. Each partial shape is then matched to the null shape of the corresponding class. \paragraph*{Error measure.} For the evaluation of the correspondence quality, we used the Princeton benchmark protocol \cite{DBLP:journals/tog/KimLF11} for point-wise maps. Assume that a correspondence algorithm produces a pair of points $(x,y) \in \mathcal{N} \times \mathcal{M}$, whereas the ground-truth correspondence is $(x,y^*)$. Then, the inaccuracy of the correspondence is measured as \begin{eqnarray} \epsilon(x) &=& \frac{d_\mathcal{M}(y,y^*)}{ \mathrm{area}(\mathcal{M})^{1/2} }, \label{eq:harderr} \end{eqnarray} and has units of normalized length on $\mathcal{M}$ (ideally, zero). Here $d_{\mathcal{M}}$ is the geodesic distance on $\mathcal{M}$. The value $\epsilon(x)$ is averaged over all shapes $\mathcal{N}$. We plot cumulative curves showing the percent of matches which have error smaller than a variable threshold. \paragraph*{Methods.} We compared the proposed method with (full) functional maps \cite{ovsjanikov12}, elastic net \cite{rodola13iccv}, and the voting method of \cite{sahilliouglu2014partial} using the code provided by the respective authors. 
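For concreteness, the evaluation protocol of Eq.~(\ref{eq:harderr}) can be sketched in a few lines of NumPy. The sketch below assumes a precomputed geodesic distance matrix on $\mathcal{M}$; all names (\texttt{geo\_dist}, \texttt{area\_M}, etc.) are illustrative and do not refer to the actual evaluation code:
\begin{verbatim}
import numpy as np

# matches[i], gt_matches[i]: predicted and ground-truth vertex
# on M for the i-th query point of N (illustrative names).
def correspondence_errors(matches, gt_matches, geo_dist, area_M):
    return geo_dist[matches, gt_matches] / np.sqrt(area_M)

def cumulative_curve(errors, thresholds):
    # fraction of matches with error below each threshold
    return np.array([np.mean(errors <= t) for t in thresholds])
\end{verbatim}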
\begin{figure}[t] \centering \setlength\figureheight{4cm} \setlength\figurewidth{0.85\columnwidth} \input{comparison.tikz} \caption{\label{fig:comparisons} Correspondence quality of different methods evaluated using the Princeton protocol on partial TOSCA shapes with regular cuts (solid) and irregular holes (dotted).} \end{figure} \paragraph*{Local descriptors.} \label{sec:data} Due to the particular nature of the problem, in all our experiments we only make use of dense, {\em local} descriptors as a data term. This is in contrast with the more common scenario in which full shapes are being matched -- thus allowing the use of more robust, globally-aware features such as landmark matches, repeatable surface regions, and various spectral quantities~\cite{ovsjanikov12}. In our experiments, we used the extrinsic SHOT~\cite{tombari10} descriptor, computed using 10 normal bins (352 dimensions in total). As opposed to~\cite{albarelli-pr15,pokrass13-nmtma}, which ignore points close to the boundary in order to avoid boundary effects, in our formulation we retain all shape points. \begin{figure}[t] \centering \setlength\figureheight{3cm} \setlength\figurewidth{0.85\columnwidth} \input{partiality.tikz} \begin{overpic} [trim=0cm 0cm 0cm 0cm,clip,width=\linewidth]{comparison_partiality_povray.pdf} \end{overpic} \caption{\label{fig:horse_partiality} Correspondence quality (in terms of mean geodesic error, in \% of diameter) obtained by different methods at increasing levels of partiality. Other methods show a significant performance drop with increasing partiality, while the performance of our method is nearly constant. } \end{figure} \subsection{Sensitivity analysis} We conducted a set of experiments aimed at evaluating the sensitivity of our approach to different parametrizations. In order to reduce overfitting we only used a subset of TOSCA (regular cuts), composed of the {\em cat} and {\em victoria} shape classes (20 pairs). \begin{figure}[b] \centering \input{number_eigenvectors.tikz} \caption{\label{fig:rank} Correspondence quality obtained on a subset of TOSCA at increasing rank (reported as labels on top of the curves). Note the opposite behavior of the baseline approach and our regularized partial matching.} \end{figure} \begin{figure*}[t] \centering \begin{overpic} [trim=0cm 0cm 0cm 0cm,clip,width=1\linewidth]{results_.png} \put(64,58){\footnotesize range maps} \end{overpic} \caption{\label{fig:examples}Examples of partial functional correspondence obtained with our method on meshes and point clouds from the proposed datasets. Notice how regions close to the boundary are still accurately matched despite the noisy descriptors.} \end{figure*} \noindent\textbf{Rank.} In the first experiment we study the change in accuracy as the rank of the functional map is increased; this corresponds to using an increasing number of basis functions for the two shapes being matched. For this experiment we compare with the baseline method of Ovsjanikov~\emph{et al.} \cite{ovsjanikov12} by using the same dense descriptors as ours. For fair comparisons, we did not impose map orthogonality or operator commutativity constraints~\cite{ovsjanikov12}, which obviously cannot be satisfied due to partiality. The results of this experiment are reported in Fig.~\ref{fig:rank}. As we can see from the plots, our method obtains more accurate solutions as the rank increases, while an opposite behavior is observed for the other method.
\noindent\textbf{Representation.} Our method is general enough to be applied to different shape representations, as long as a proper discretization of the Laplace operator is available. In Fig.~\ref{fig:examples} we show some qualitative examples of correspondences produced by our algorithm on simulated point clouds and depth maps. Here we use the method described in~\cite{belkin09} to construct a discrete Laplacian on the point clouds. This is traditionally considered a particularly challenging problem in robotics and vision applications, with few methods currently capable of giving satisfactory solutions without exploiting controlled conditions or domain-specific information (\emph{e.g.}, the knowledge that the shape being matched is that of a human). These are, to the best of our knowledge, the best results published so far for this class of problems. \subsection{Comparisons} We compared our method on the {\em cuts} and {\em holes} datasets (200 shape pairs in total); the results are shown in Fig.~\ref{fig:comparisons}. As an additional experiment, we ran comparisons against \cite{ovsjanikov12} across increasing amounts of partiality. The rationale behind this experiment is to show that, at little or no partiality, our approach converges to the one described in \cite{ovsjanikov12}, currently among the state of the art in non-rigid shape matching. However, as partiality increases so does the sensitivity of the latter method. Fig.~\ref{fig:horse_partiality} shows the results of this experiment. Parameters for our method were chosen on the basis of the sensitivity analysis. Specifically, we used $k=100$ eigenfunctions per shape, and set $\mu_1 = \mu_3 = 1$, $\mu_4 = \mu_5 = 10^3$, and $\mu_2 = 10^2$. The different orders of magnitude for the $\mu$ coefficients are due to the fact that the regularizing terms operate at different scales. We also experimented with other values, but in all our tests we did not observe significant changes in accuracy. Additional examples of partial matchings obtained with our method are shown in Fig.~\ref{fig:examples}. \section{Implementation} \label{sec:impl} We refer to Appendix A for the discretization of the regularization terms appearing in \eqref{eq:funcorr_reg1} and \eqref{eq:funcorr_reg2}. \paragraph*{Numerical optimization.} We implemented our matching framework in Matlab/C++ using the manifold optimization toolbox~\cite{manopt}. Each optimization step was performed by the method of nonlinear conjugate gradients. Detailed derivations of the involved gradients can be found in Appendix B. We initialize the alternating scheme by fixing $\vct{v}^\ast = \vct{1}$ (a vector of $m$ ones), ${\vct{C} = \vct{W}}$, and by optimizing over $\vct{C}$. In all our experiments we observed convergence in 3--5 outer iterations (around 5 minutes for a pair of shapes). \paragraph*{Refinement.} In order to account for noisy data, we also run a refinement step after each {\em C-step}. Specifically, assume $\vct{C}^\ast$ is a local optimum of problem \eqref{eq:funcorr_part_C}, \rev{and consider the term $\| \vct{C}^\ast \bm{\Phi}\ensuremath{^\top} - \bm{\Psi}\ensuremath{^\top} \bm{\Pi} \|_{F}$, where $\bm{\Pi}$ is a left-stochastic binary matrix assigning each column of $\bm{\Psi}\ensuremath{^\top}$ to the nearest column of $\vct{C}^\ast \bm{\Phi}\ensuremath{^\top}$; this is done by $n$ nearest-neighbor searches in $\mathbb{R}^k$, one per column.
Given the optimal $\bm{\Pi}^\ast$, we solve for the map $\vct{C}$ minimizing $\| \vct{C} \bm{\Phi}\ensuremath{^\top} - \bm{\Psi}\ensuremath{^\top} \bm{\Pi}^\ast \|_{F}$ plus the $\mu_4$, $\mu_5$ terms of Eq.~\eqref{eq:funcorr_reg2}. We alternate the $\vct{C}^\ast$ and $\bm{\Pi}^\ast$ steps until convergence. This refinement step can be seen as a generalization to partial maps of the ICP-like technique found in~\cite{ovsjanikov12}, and can be interpreted as an attempt to improve the alignment between the spectral embeddings of the two shapes. Further note that matrix $\bm{\Pi}^\ast$ encodes the point-wise correspondence between $\mathcal{N}$ and $\mathcal{M}$, which is used to evaluate the accuracy of our method.} \section{Introduction} The problem of shape correspondence is one of the most fundamental problems in computer graphics and geometry processing, with a plethora of applications ranging from texture mapping to animation \cite{bronstein2006generalized,DBLP:journals/cgf/KimLCF10,DBLP:journals/tog/KimLF11,van2011survey}. A particularly challenging setting is that of {\em non-rigid correspondence}, where the shapes in question are allowed to undergo deformations, which are typically assumed to be approximately isometric (such a model appears to be good for, \emph{e.g.}, human body poses). Even more challenging is {\em partial correspondence}, where one is shown only a subset of the shape and has to match it to a deformed full version thereof. Partial correspondence problems arise in numerous applications that involve real data acquisition by 3D sensors, which inevitably lead to missing parts due to occlusions or partial view. \paragraph*{Related work.} For rigid partial correspondence problems, arising, \emph{e.g.}, in 3D scan completion applications, many versions of regularized iterative closest point (ICP) approaches exist; see, for example, \cite{aiger20084,albarelli-pr15}. Attempts to extend these ideas to the non-rigid case in the form of non-rigid or piece-wise rigid ICP have been explored in recent years \cite{li08}. By nature of the ICP algorithm, these methods rely on the assumption that the given shapes can be placed in approximate rigid alignment to initiate the matching process. As a result, they tend to work well under small deformations (\emph{e.g.}, when matching neighboring frames of a sequence), but performance deteriorates quickly when this assumption does not hold. For the non-rigid setting, several metric approaches centered around the notion of minimum distortion correspondence \cite{bronstein2006generalized} have been proposed. Bronstein~\emph{et al.} \cite{bronstein2008not,bronstein2009partial} combine metric distortion minimization with optimization over matching parts, showing an algorithm that simultaneously seeks a correspondence and maximizes the {\em regularity} of corresponding parts in the given shapes. Rodol\`{a}~\emph{et al.} \cite{rodola12} subsequently relaxed the regularity requirement by allowing sparse correspondences, and later introduced a mechanism to explicitly control the degree of sparsity of the solution \cite{rodola13iccv}. Finally, in \cite{sahilliouglu2014partial} the authors proposed a voting-based formulation to match shape extremities, which are assumed to be preserved by the partiality transformation.
Being based on spectral features and metric preservation, the accuracy of the aforementioned methods suffers at high levels of partiality, where the computation of these quantities becomes unreliable due to boundary effects and meshing artifacts. Furthermore, these methods suffer from high computational complexity and generally provide only a {\em sparse correspondence}. Pokrass~\emph{et al.} \cite{pokrass13-nmtma} proposed a descriptor-based partial matching approach where the optimization over parts is done to maximize the matching of bags of local descriptors. The main drawback of this approach is that it only finds similar parts, without providing a correspondence between them. Windheuser~\emph{et al.} \cite{wind11} formulated the shape matching problem as one of seeking minimal surfaces in the product space of two given shapes; the formulation notably allows for a linear programming discretization and provides guaranteed continuous and orientation-preserving solutions. The method was shown to work well with partial shapes, but requires watertight surfaces as input (\emph{e.g.}, via hole filling). Brunton~\emph{et al.} \cite{Brunton201470} used alignment of tangent spaces for partial correspondence. In their method, a sparse set of correspondences is first computed by matching feature descriptors; the matches are then propagated in an isometric fashion so as to cover the largest possible regions on the two shapes. Since the quality of the final solution directly depends on the initial matches, the method is understood as a ``densification'' method to complement other sparse approaches. Other recent works include the design of robust descriptors for partial matching \cite{kaick13}. In the context of collections of shapes, partial correspondence has been considered in \cite{van2011prior,chen14icml,cosmo15}. All the aforementioned works are based on the notion of point-wise correspondence between shapes. Recently, Ovsjanikov~\emph{et al.} \cite{ovsjanikov12} proposed the {\em functional maps} framework, in which shape correspondence is modeled as a linear operator between spaces of functions on the shapes. The main advantage of functional maps is that finding correspondence boils down to a simple algebraic problem, as opposed to difficult combinatorial-type problems arising in, \emph{e.g.}, the computation of minimum-distortion maps. While several recent works showed that functional maps can be made resilient to missing parts or incomplete data \shortcite{DBLP:journals/tog/HuangWG14,kovnatsky15}, overall this framework is not suitable for dealing with partial correspondence. \paragraph*{Contribution.} In this paper, we propose an extension to the functional correspondence framework to allow dealing with partial correspondence. Specifically, we consider a scenario of matching a part of a deformed shape to some full model. Such scenarios are very common for instance in robotics applications, where one has to match an object acquired by means of a 3D scanner (and thus partially occluded) with a reference object known in advance. We use an explicit part model over which optimization is performed as in \cite{bronstein2008not,bronstein2009partial}, as well as a regularization on the spectral representation of the functional correspondence accounting for a special structure of the Laplacian eigenfunctions as a result of part removal. Theoretical study of this behavior based on perturbation analysis of Laplacian matrices is another contribution of our work. 
We show experimentally that the proposed approach allows dealing with very challenging partial correspondence settings; further, we introduce a new benchmark to evaluate deformable partial correspondence methods, consisting of hundreds of shapes and ground-truth information. The rest of the paper is organized as follows. In Section \ref{sec:bg}, we review the basic concepts in the spectral geometry and describe the functional correspondence approach. Section \ref{sec:perturb} studies the behavior of Laplacian eigenfunctions in the case of missing parts, motivating the regularizations used in the subsequent sections. Section \ref{sec:method} introduces our partial correspondence model, and Section \ref{sec:impl} describes its implementation details. Section \ref{sec:exp} presents experimental results, and finally, Section \ref{sec:conclusion} concludes the paper. \section*{Acknowledgments} The authors thank Matthias Vestner, Vladlen Koltun, Aneta Stevanovi\'{c}, Zorah L\"{a}hner, Maks Ovsjanikov, and Ron Kimmel for useful discussions. ER is supported by an Alexander von Humboldt Fellowship. MB is partially supported by the ERC Starting Grant No. 307047 (COMET). \bibliographystyle{eg-alpha-doi} \section{Partial functional maps} \label{sec:method} As stated before, throughout the paper we consider the setting where we are given a full model shape $\mathcal{M}$ and another query shape $\mathcal{N}$ that corresponds to an approximately isometrically deformed part $\mathcal{M}' \subset\mathcal{M}$. Following \cite{bronstein2008not}, we model the part $\mathcal{M}'$ by means of an indicator function $v:\mathcal{M} \rightarrow \{ 0,1\}$ such that $v(x)=1$ if $x\in \mathcal{M}'$ and zero otherwise. Assuming that $v$ is known, the {\em partial functional correspondence} between $\mathcal{N}$ and $\mathcal{M}$ can be expressed as $Tf = vg$, where $v$ can be regarded as a kind of mask, and anything outside the region where $v=1$ should be ignored. Expressed w.r.t. bases $\{\phi_i\}_{i\geq 1}$ and $\{\psi_i\}_{i\geq 1}$, the partial functional correspondence takes the form $\mathbf{C} \mathbf{A} = \mathbf{B}(v)$, where $\mathbf{B}(v)$ denotes a matrix of weighted inner products with elements given by $b_{ij}(v) = \int_{\mathcal{M}} v(x)\psi_i(x)g_j(x) dx$ (when $v(x)\equiv 1$, $\mathbf{B}$ is simply the matrix of Fourier coefficients defined in~\eqref{eq:funcorr1}). This brings us to the problem we are considering throughout this paper, involving optimization w.r.t. correspondence (encoded by the coefficients $\mathbf{C}$) and the part $v$, \begin{eqnarray} \label{eq:funcorr_part1} \min_{\mathbf{C}, v} \, \| \mathbf{C}\mathbf{A} - \mathbf{B}(\eta(v)) \|_{2,1} + \rho_{\mathrm{corr}}(\mathbf{C}) + \rho_{\mathrm{part}}(v)\,, \end{eqnarray} where $\eta(t) = \tfrac{1}{2}\left( \tanh(2t-1) +1\right)$ saturates the part indicator function between zero and one (see below). Here $\rho_{\mathrm{corr}}$ and $\rho_{\mathrm{part}}$ denote regularization terms for the correspondence and the part, respectively; these terms are explained below. We use the $L_{2,1}$ matrix norm (equal to the sum of $L_2$-norms of matrix columns) to handle possible outliers in the corresponding data, as such a norm promotes column-sparse matrices. A similar norm was adopted in \cite{DBLP:journals/tog/HuangWG14} to handle spurious maps in shape collections. 
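As an illustration of the data term in \eqref{eq:funcorr_part1}, the following NumPy sketch evaluates $\| \mathbf{C}\mathbf{A} - \mathbf{B}(\eta(v)) \|_{2,1}$, assembling $\mathbf{B}(\eta(v))$ with the discrete area-weighted inner product of Section~\ref{sec:bg}. It is a sketch only, with illustrative variable names, and is not the implementation described in Section~\ref{sec:impl}:
\begin{verbatim}
import numpy as np

# Psi: m x k eigenfunctions of M, S: m local area elements,
# G: m x q descriptor functions on M, v: m-dim part indicator.
def B_of_v(Psi, S, G, v):
    eta_v = 0.5 * (np.tanh(2.0 * v - 1.0) + 1.0)
    # b_ij(v) = sum_x S_x eta(v_x) Psi_xi G_xj
    return Psi.T @ (S[:, None] * eta_v[:, None] * G)

def data_term(C, A, B):
    H = C @ A - B
    # L_{2,1} norm: sum of the L2 norms of the columns
    return np.sum(np.linalg.norm(H, axis=0))
\end{verbatim}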
Note that in order to avoid a combinatorial optimization over binary-valued $v$, we use a continuous $v$ with values in the range $(-\infty, +\infty)$, saturated by the non-linearity $\eta$. This way, $\eta(v)$ becomes a soft membership function with values in the range $[0,1]$. \paragraph*{Part regularization.} Similarly to \cite{bronstein2008not,pokrass13-nmtma}, we try to find the part with area closest to that of the query and with shortest boundary. This can be expressed as \begin{eqnarray} \label{eq:funcorr_reg1} \rho_{\mathrm{part}}(v) &=& \mu_1\left( \mathrm{area}(\mathcal{N}) - \int_{\mathcal{M}} \eta(v) dx \right)^2 \\ &+& \mu_2 \int_{\mathcal{M}} \xi(v)\|\nabla_{\mathcal{M}}v \| dx\,, \nonumber \end{eqnarray} where $\xi(t) \approx \delta \left(\eta(t)-\tfrac{1}{2}\right)$ and the norm is on the tangent space. The $\mu_2$-term in~(\ref{eq:funcorr_reg1}) is an intrinsic version of the {\em Mumford-Shah functional} \cite{mumford1989optimal}, measuring the length of the boundary of a part represented by a (soft) membership function. This functional was used previously in image segmentation applications \cite{vese2002multiphase}. \paragraph*{Correspondence regularization.} For the correspondence, we use the penalty \begin{eqnarray} \label{eq:funcorr_reg2} \rho_{\mathrm{corr}}(\mathbf{C}) &=& \mu_3\|\mathbf{C} \circ \mathbf{W}\|_\mathrm{F}^2 + \mu_4 \sum_{i\neq j}(\mathbf{C}^\top\mathbf{C})_{ij}^2 \nonumber\\ &+& \mu_5 \sum_{i}((\mathbf{C}^\top\mathbf{C})_{ii} - d_i)^2\,, \end{eqnarray} where $\circ$ denotes Hadamard (element-wise) matrix product. The $\mu_3$-term models the special slanted-diagonal structure of $\mathbf{C}$ that we observe in partial matching problems (see Fig.~\ref{fig:C}); the theoretical motivation for this behavior was presented in Sec.~\ref{sec:perturb}. Here, $\mathbf{W}$ is a weight matrix with zeros along the slanted diagonal and large values outside (see Fig.~\ref{fig:C}; details on the computation of $\mathbf{W}$ are provided in Appendix A). The $\mu_4$-term promotes orthogonality of $\mathbf{C}$ by penalizing the off-diagonal elements of $\mathbf{C}^\top\mathbf{C}$. The reason is that for isometric shapes, the functional map is volume-preserving, and this is manifested in orthogonal $\mathbf{C}$ \cite{ovsjanikov12}. Note that differently from the classical case (\emph{i.e.}, full shapes), in our setting we can only require area preservation going in the direction from partial to complete model, as also expressed by the $\mu_1$-term in \eqref{eq:funcorr_reg1}. For this reason, we do not impose any restrictions on $\mathbf{C}\mathbf{C}\ensuremath{^\top}$ and we say that the matrix is {\em semi}-orthogonal. Finally, note that due to the low-rank nature of $\mathbf{C}$ we can not expect the product $\mathbf{C}\ensuremath{^\top}\mathbf{C}$ to be full rank. Indeed, we expect elements off the slanted diagonal of $\mathbf{C}$ to be close to zero and thus $\mathbf{C}\ensuremath{^\top}\mathbf{C}\approx\begin{pmatrix}\mathbf{I}&\mathbf{0}\\\mathbf{0}&\mathbf{0}\end{pmatrix}$. The $\mu_5$-term in \eqref{eq:funcorr_reg2} models this behavior, where vector $\mathbf{d} = (d_1, \hdots, d_k)$ determines how many singular values of $\mathbf{C}$ are non-zero (the estimation of $\mathbf{d}$ is straightforward, and described in Appendix A). \paragraph*{Remark.} The fact that matrix $\vct{C}$ is low-rank is a direct consequence of partiality. 
This can be understood by recalling from Eq.~\eqref{eq:funcorr1} that the (non-truncated) functional map representation amounts to an orthogonal change of basis; since in the standard basis the correspondence matrix is low-rank (as it contains zero-sum rows), this property is preserved by the change of basis. \noindent In Fig.~\ref{fig:C} we show an example of a ground-truth partial functional map $\vct{C}$, illustrating its main properties. \begin{figure}[t] \centering \begin{minipage}[b]{0.49\linewidth} \centering \begin{overpic} [trim=0cm 0cm 0cm 0cm,clip,width=1.0\linewidth]{cat0} \put(52.0,4){\footnotesize $\mathcal{M}$} \end{overpic} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \centering \begin{overpic} [trim=0cm 0cm 0cm 0cm,clip,width=0.57\linewidth]{cat1} \put(30.0,4){\footnotesize $\mathcal{N}$} \end{overpic} \end{minipage} \begin{minipage}{0.49\linewidth} \centering \vspace{2pt} \includegraphics[width=0.65\linewidth]{singular_values_left.pdf} $\mathbf{C}$ \end{minipage}\hspace{-10pt} \begin{minipage}{0.49\linewidth} \centering \vspace{2pt} \includegraphics[width=0.65\linewidth]{singular_values_right.pdf} $\mathbf{C}\ensuremath{^\top}\mathbf{C}$ \end{minipage} \begin{minipage}{0.49\linewidth} \centering \vspace{2pt} \includegraphics[width=0.65\linewidth]{W.pdf} $\mathbf{W}$ \end{minipage}\hspace{-10pt} \begin{minipage}{0.49\linewidth} \centering \vspace{2pt} \input{singularvalues.tikz} \end{minipage} \caption{\label{fig:C}\rev{A partial functional map $\vct{C}$ from $\mathcal{N}$ to $\mathcal{M}$ has a slanted-diagonal structure (second row, left). The low-rank nature of such a map is also manifested in its singular values (bottom right). If the map is volume-preserving, then its full-rank sub-matrix is orthogonal: observe how $\vct{C}\ensuremath{^\top}\vct{C}$ approximates the identity, with a trail of small values along the diagonal corresponding to the almost-zero block of $\vct{C}$.}} \end{figure} \begin{figure*}[t] \centering \begin{overpic} [trim=0cm 0cm 0cm 0cm,clip,width=0.71\linewidth]{pipeline.png} \put(28.5,0){\footnotesize Iteration 1} \put(52,0){\footnotesize 2} \put(73,0){\footnotesize 3} \put(93,0){\footnotesize 4} % \put(19.7,3.3){\includegraphics[width=0.06\linewidth]{C1.png}} \put(40.1,3.3){\includegraphics[width=0.06\linewidth]{C2.png}} \put(60.6,3.3){\includegraphics[width=0.06\linewidth]{C3.png}} \put(81.0,3.3){\includegraphics[width=0.06\linewidth]{C4.png}} \end{overpic} \hspace{5pt} \setlength\figureheight{2.3cm} \setlength\figurewidth{0.4\columnwidth} \input{energy.tikz} \caption{\label{fig:alternating}An example of the matching process operating on two shapes from TOSCA. The algorithm alternatingly optimizes over corresponding part (top row) and functional correspondence (bottom row). Corresponding points between full and partial shape are shown with the same color. This solution was obtained by using 30 eigenfunctions on both manifolds.} \end{figure*} \paragraph*{Alternating scheme.} To solve the optimization problem~(\ref{eq:funcorr_part1}), we perform an alternating optimization w.r.t. to $\mathbf{C}$ and $v$, repeating the following steps until convergence: {\em C-step: } Fix $v^\ast$, solve for correspondence $\mathbf{C}$ \begin{eqnarray} \label{eq:funcorr_part_C} \min_{\mathbf{C}} \, \| \mathbf{C}\mathbf{A} - \mathbf{B}(\eta(v^\ast)) \|_{2,1} + \rho_{\mathrm{corr}}(\mathbf{C})\,. 
\end{eqnarray} {\em V-step: } Fix $\mathbf{C}^\ast$, solve for part $v$ \begin{eqnarray} \label{eq:funcorr_part_v} \min_{v} \, \| \mathbf{C}^\ast\mathbf{A} - \mathbf{B}(\eta(v)) \|_{2,1} + \rho_{\mathrm{part}}(v)\,. \end{eqnarray} A practical example of the alternating scheme applied to a pair of shapes is shown in Fig.~\ref{fig:alternating}. \section{Laplacian eigenvectors and eigenvalues under partiality} \label{sec:perturb} When one of the two shapes has missing parts, the assumption of approximate isometry does not hold anymore and a direct application of the method of Ovsjanikov~\emph{et al.}~(\emph{i.e.}, solving system \eqref{eq:cab}) would not produce meaningful results. However, as we show in this section, the matrix $\vct{C}$ still exhibits a particular structure which can be exploited to drive the matching process. We assume to be given a full shape $\mathcal{M}$ and a part thereof $\mathcal{N}\subset \mathcal{M}$. We further denote by $\overline{\mathcal{N}} = \mathcal{M}\setminus \mathcal{N}$ the remaining vertices of $\mathcal{M}$. The manifolds $\mathcal{M}$ and $\mathcal{N}$ are discretized as triangular meshes with $m$ and $n$ vertices, respectively, and $\bar{n} = m-n$. The scenario we consider in this paper concerns the problem of matching an approximately isometric deformation of part $\mathcal{N}$ to the full shape $\mathcal{M}$ (part-to-whole matching). Our goal is to characterize the eigenvalues and eigenvectors of the Laplacian $\vct{L}_\mathcal{M}$ in terms of perturbations of the eigenvalues and eigenvectors of the Laplacians $\vct{L}_\mathcal{N}$ and $\vct{L}_{\overline{\mathcal{N}}}$~\cite{NME:NME1620260202}. We tacitly assume that homogeneous Neumann boundary conditions~(\ref{eq:neumann}) apply. \subsection{Block-diagonal case}\label{sec:blockd} For the simplicity of analysis, let us first consider a simplified scenario in which $\mathcal{N}$ and $\overline{\mathcal{N}}$ are {\em disconnected}, \emph{i.e.}, there exist no links between the respective boundaries $\partial \mathcal{N}$ and $\partial \overline{\mathcal{N}}$. W.l.o.g., we can assume that the vertices in $\mathcal{M}$ are ordered such that the vertices in $\mathcal{N}$ come before those in $\overline{\mathcal{N}}$. With this ordering, the $m\times m$ Laplacian matrix $\vct{L}_\mathcal{M}$ is block-diagonal, with an $n\times n$ block $\vct{L}_\mathcal{N}$ and an $\bar{n}\times \bar{n}$ block $\vct{L}_{\overline{\mathcal{N}}}$. The (sorted) eigenvalues of $\vct{L}_\mathcal{M}$ form a mixed sequence composed of the eigenvalues from $\vct{L}_\mathcal{N}$ and $\vct{L}_{\overline{\mathcal{N}}}$. Similarly, the eigenvectors of $\vct{L}_\mathcal{M}$ correspond to the eigenvectors of the two sub-matrices, zero-padded to the correct size (Fig.~\ref{fig:blocks}). \paragraph*{Structure of C under partiality.} Suppose we are given the first $k$ Laplace-Beltrami eigenvalues of the full shape $\mathcal{M}$ and of its part $\mathcal{N}$. Since the spectrum of $\mathbf{L}_\mathcal{M}$ is an interleaved sequence of the eigenvalues of $\mathbf{L}_\mathcal{N}$ and $\mathbf{L}_{\overline{\mathcal{N}}}$, only the first ${r < k}$ eigenvalues of $\mathbf{L}_\mathcal{N}$ will appear among the first $k$ eigenvalues of $\mathbf{L}_\mathcal{M}$. The remaining ${k-r}$ eigenvalues of $\mathbf{L}_\mathcal{N}$ will only appear further along the spectrum of $\mathbf{L}_\mathcal{M}$ (see Fig. \ref{fig:spectra} for an example where $k=50$ and $r=21$). 
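This interleaving suggests a simple way of estimating $r$ (and hence the slope $r/k$ of the slanted diagonal of $\mathbf{C}$) from the two truncated spectra; the rule is made explicit after Theorem~\ref{thm:evals} below. The following NumPy sketch, with illustrative names, is a direct transcription of this observation:
\begin{verbatim}
import numpy as np

# Count how many of the first k eigenvalues of the part N fall
# below the largest of the first k eigenvalues of the full M;
# this gives r and the slope r/k of the slanted diagonal of C.
def estimate_rank(evals_N, evals_M, k):
    lam_max_M = np.max(evals_M[:k])
    r = int(np.sum(evals_N[:k] < lam_max_M))
    return r, r / float(k)
\end{verbatim}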
The same argument holds for the associated eigenfunctions, as illustrated in Fig.~\ref{fig:eigenfunctions}: if $\bm{\phi}_i$ is an eigenvector of $\mathbf{L}_{\mathcal{N}}$, then $\mathbf{L}_{\mathcal{M}}$ also has an eigenvector $\bm{\psi}_j$ such that $\bm{\phi}_{i} = \mathbf{T} \bm{\psi}_j$, where $\mathbf{T} = (\mathbf{I}_{n\times n}, \, \mathbf{0})^\top$ and $i < j$. This analysis leads us to the following simple observation: the partial functional map between $\mathcal{N}$ and $\mathcal{M}$ is represented in the spectral domain by the matrix of inner products $c_{ij} = \langle \mathbf{T}\bm{\phi}_i, \bm{\psi}_j \rangle_{\mathcal{M}}$, which has a {\em slanted-diagonal} structure with a slope $r/k$ (see examples in Figs.~\ref{fig:teaser}, \ref{fig:eigenfunctions} where this structure is manifested approximately). Consequently, the last $k-r$ columns of matrix $\mathbf{C}$ are zero such that $r = \mathrm{rank}(\vct{C})$. The value $r$ can be estimated by simply comparing the spectra of the two shapes, as shown in Fig.~\ref{fig:spectra}. Note that this behavior is consistent with Weyl's asymptotic law \cite{weyl11}, according to which the Laplacian eigenvalues grow linearly, with rate inversely proportional to surface area. \begin{figure*}[t] \centering \begin{overpic} [trim=0cm 0cm 0cm 0cm,clip,width=1\linewidth]{areas2.png} \put(3.5,0.1){\footnotesize $\phi_2$} \put(12.8,0.1){\footnotesize $\phi_3$} \put(22.5,0.1){\footnotesize $\phi_4$} \put(32.3,0.1){\footnotesize $\phi_5$} \put(42.1,0.1){\footnotesize $\phi_6$} \put(51.8,0.1){\footnotesize $\phi_7$} \put(61.5,0.1){\footnotesize $\phi_8$} \put(71.0,0.1){\footnotesize $\phi_9$} \put(80.9,0.1){\footnotesize $\phi_{10}$} % \put(3.5,13){\footnotesize $\psi_2$} \put(12.8,13){\footnotesize $\psi_3$} \put(22.5,13){\footnotesize $\psi_4$} \put(32.3,13){\footnotesize $\psi_5$} \put(42.1,13){\footnotesize $\psi_6$} \put(51.8,13){\footnotesize $\psi_7$} \put(61.5,13){\footnotesize $\psi_8$} \put(71.0,13){\footnotesize $\psi_9$} \put(80.9,13){\footnotesize $\psi_{10}$} % \put(3.5,30){\footnotesize $\zeta_2$} \put(12.8,30){\footnotesize $\zeta_3$} \put(22.5,30){\footnotesize $\zeta_4$} \put(32.3,30){\footnotesize $\zeta_5$} \put(42.1,30){\footnotesize $\zeta_6$} \put(51.8,30){\footnotesize $\zeta_7$} \put(61.5,30){\footnotesize $\zeta_8$} \put(71.0,30){\footnotesize $\zeta_9$} \put(80.9,30){\footnotesize $\zeta_{10}$} % \put(0,35.5){\footnotesize $\mathcal{N}_1$} \put(0,26){\footnotesize $\mathcal{M}$} \put(0,8){\footnotesize $\mathcal{N}_2$} % \put(90.5,2){\footnotesize $\langle \psi_i , T \phi_j \rangle$} \put(90.5,21){\footnotesize $\langle \psi_i, T\zeta_j\rangle$} \end{overpic} \caption{\label{fig:eigenfunctions}First ten eigenfunctions of a full shape $\mathcal{M}$ and two parts $\mathcal{N}_1,\mathcal{N}_2\subset\mathcal{M}$ with different surface area. All eigenfunctions of the partial shapes have a corresponding eigenfunction $\psi_i$ on the full shape for some $i$; the correspondence between eigenfunctions follows from the correspondence between eigenvalues (see also Fig. \ref{fig:spectra}). This is reflected in functional maps with different diagonal slopes, where the slope depends on the area ratios of the two surfaces (by Weyl's law).} \end{figure*} \subsection{Perturbation analysis} We will now show that these properties still approximately hold when the Laplacian matrix $\mathbf{L}_\mathcal{M}$ is not perfectly block-diagonal, \emph{i.e.}, when $\mathcal{N}$ and $\overline{\mathcal{N}}$ are joined along their boundaries. 
Roughly speaking, the main observation is that in this case as well the matrix $\mathbf{C}$ has a slanted diagonal structure, where the diagonal angle depends on the relative area of the part, and the diagonal `sharpness' depends on the position and length of the cut. Here we assume w.l.o.g. that within $\mathcal{N}$ the boundary vertices $\partial \mathcal{N}$ are indexed at the end, while within $\overline{\mathcal{N}}$ the boundary vertices $\partial \overline{\mathcal{N}}$ are indexed at the beginning. Then, there is a boundary band $\mathcal{B}=\partial \mathcal{N} \cup \partial \overline{\mathcal{N}}$ such that only the entries of the Laplacians $\vct{L}_\mathcal{N}$ and $\vct{L}_{\overline{\mathcal{N}}}$ between vertices in $\mathcal{B}$ are affected by the cut (Fig.~\ref{fig:blocks2}). \begin{figure}[b] \centering \begin{overpic} [trim=0cm 0cm 0cm 0cm,clip,width=0.725\linewidth]{blocks2.pdf} \put(52,44){\footnotesize $\mathbf{L}_\mathcal{N}$} \put(89,5){\footnotesize $\mathbf{L}_{\overline{\mathcal{N}}}$ } \put(79.5,28){\footnotesize $t\mathbf{E}$} \put(67,16.5){\footnotesize $t\mathbf{E}^\top$} \put(20,38.5){\footnotesize $\mathcal{N}$} \put(20.25,27){\color{white}\footnotesize $\overline{\mathcal{N}}$} \end{overpic}\\ \caption{ \label{fig:blocks2} The matrix $\mathbf{L}(t)$ is obtained as a perturbation of the block-diagonal Laplacian in the boundary band (shown in green). } \end{figure} We define the parametric matrix \begin{equation} \vct{L}(t) = \left(\begin{array}{c | c} \vct{L}_\mathcal{N} & \mathbf{0}\\\hline \mathbf{0} & \vspacer{10pt}\vct{L}_{\overline{\mathcal{N}}} \end{array}\right) + t \left(\begin{array}{c | c} \vct{P}_\mathcal{N} & \vct{P} \\\hline \vct{P}^\top &\vspacer{10pt} \vct{P}_{\overline{\mathcal{N}}} \end{array}\right), \end{equation} where \begin{eqnarray*} \vct{P}_\mathcal{N} = \left(\begin{array}{c c} \mathbf{0} & \mathbf{0}\\ \mathbf{0} & \vct{D}_{\mathcal{N}} \end{array}\right), \,\,\, \vct{P}_{\overline{\mathcal{N}}} = \left(\begin{array}{c c} \vct{D}_{\overline{\mathcal{N}}} & \mathbf{0}\\ \mathbf{0} & \mathbf{0} \end{array}\right), \,\,\, \vct{P} = \left(\begin{array}{c c} \mathbf{0} & \mathbf{0}\\ \vct{E} & \mathbf{0} \end{array}\right) \end{eqnarray*} are matrices of size $n\times n$, $\bar{n}\times \bar{n}$, and $n\times \bar{n}$, respectively. Here $\vct{D}_\mathcal{N}$ and $\vct{D}_{\overline{\mathcal{N}}}$ represent the variations of the Laplacians $\vct{L}_\mathcal{N}$ and $\vct{L}_{\overline{\mathcal{N}}}$ {\em within} nodes in $\partial \mathcal{N}$ and $\partial \overline{\mathcal{N}}$ respectively, while $\vct{E}$ represents the variations {\em across} the boundary. The parameter $t$ is such that $\vct{L}(1)=\vct{L}_\mathcal{M}$, while for ${t=0}$ we get back to the disconnected case of Fig.~\ref{fig:blocks}. \rev{In what follows, we perform a differential analysis at ${t=0}$, analyzing the change in the eigenvalues and eigenvectors of $\vct{L}(t)$ as we interpolate from the disconnected seen ($\mathcal{N}$) and unseen ($\overline{\mathcal{N}}$) parts at $t=0$ to the full shape $\mathcal{M}$ at $t=1$.} Note that with the appropriate ordering of the vertices, the matrices $\vct{D}_\mathcal{N}$ and $\vct{D}_{\overline{\mathcal{N}}}$ will have a band-diagonal structure. In fact, a cut through an edge will affect the values of the discrete Laplacian matrix $\mathbf{L}$ only at the entries corresponding to the vertices at the extremities of the edge, and to the edges lying in the same triangle as the cut edge.
For example, looking at Fig.~\ref{fig:cot_weights}, a cut through edge $(i,j)$ will affect the diagonal entries $l_{ii}$ and $l_{jj}$ as well as the off-diagonal entries $l_{ih}$, $l_{ik}$, $l_{jh}$, and $l_{jk}$. Note also that the continuity of the cut implies that two of the four off-diagonal entries will be cut as well, leaving no more than two affected edges on any side of the cut. As a result, the entries of the Laplacian affected by the cut correspond to the nodes and edges in a path along the boundary of the cut. \rev{We further note that although here we consider the cotangent Laplacian for simplicity of analysis, similar results hold for Laplacians that are not strictly local, but locally dominant \cite{CLOPC_SODA_09}, \emph{i.e.}, most of their $L_2$ norm is due to the elements in a tight boundary layer.} \newtheorem{theorem}{Theorem} \begin{theorem}\label{thm:evals} Let $\vct{L}_\mathcal{N}+t\vct{P}_\mathcal{N} = \bm{\Phi}(t)\ensuremath{^\top} \bm{\Lambda}(t) \bm{\Phi}(t)$, where $\bm{\Lambda}(t) = \mathrm{diag}(\lambda_1(t), \hdots, \lambda_n(t))$ is a diagonal matrix of eigenvalues, and $\bm{\Phi}(t)$ are the corresponding eigenvectors. The derivative of the non-trivial eigenvalues is given by \begin{equation}\label{eq:eigdt} \frac{d}{dt}\lambda_i = \sum_{v,w\in \partial \mathcal{N}} (\vct{P}_\mathcal{N})_{vw} {\phi}_{iv} {\phi}_{iw} = \bm{\phi}_i^\top \vct{P}_\mathcal{N} \bm{\phi}_i. \end{equation} \end{theorem} {\em Proof:} See Appendix C. Theorem~\ref{thm:evals} establishes that the (first-order) change in the eigenvalues of the partial shape $\mathcal{N}$ only depends on the change in the Dirichlet energy of the corresponding eigenvectors along the boundary $\partial\mathcal{N}$ \rev{(recall that for eigenvector $\bm{\phi}_i$, the Dirichlet energy is defined as $\bm{\phi}_i\ensuremath{^\top} (\vct{L}_\mathcal{N}+t\vct{P}_\mathcal{N})\bm{\phi}_i = \bm{\phi}_i\ensuremath{^\top} \vct{L}_\mathcal{N} \bm{\phi}_i + t\bm{\phi}_i\ensuremath{^\top} \vct{P}_\mathcal{N} \bm{\phi}_i$).} This means that the eigenvalues are perturbed depending on the {\em length} and {\em position} of the cut. % \rev{Note that the estimate given in Eq.~\eqref{eq:eigdt} can not typically be computed directly, as this would assume knowledge of the correspondence between $\mathcal{N}$ and $\mathcal{M}$. However,} by virtue of this result, we can establish approximate correspondence between the eigenvalues \rev{$\lambda_i^\mathcal{N}$} of $\mathbf{L}_\mathcal{N}$ and a subset of the eigenvalues \rev{$\lambda_j^\mathcal{M}$} of $\mathbf{L}_\mathcal{M}$ (which are now not exactly equal as in the block-diagonal case). \rev{We do this in order to estimate the slope of $\vct{C}$. Specifically, we compute \begin{equation} r = \max \{ i ~|~ \lambda_i^\mathcal{N} < \max_{j=1}^k \lambda_j^\mathcal{M} \}\,; \end{equation} the slope of $\vct{C}$ can now be estimated as $r/k$, as explained in Section \ref{sec:blockd} (see also Fig.~\ref{fig:spectra} for a visual illustration of the estimation of $r$). } \begin{theorem}\label{thm:evecs} Assume that $\vct{L}_{{\mathcal{N}}}$ has distinct eigenvalues ($\lambda_i \neq \lambda_j$ for $i\neq j$), and furthermore, the non-zero eigenvalues are all distinct from the eigenvalues of $\vct{L}_{\overline{\mathcal{N}}}$ ($\lambda_i \neq \overline{\lambda}_j$ for all $i, j$). 
Let $\vct{L}_\mathcal{N}+t\vct{P}_\mathcal{N} = \bm{\Phi}(t)\ensuremath{^\top} \bm{\Lambda}(t) \bm{\Phi}(t)$, where $\bm{\Lambda}(t) = \mathrm{diag}(\lambda_1(t), \hdots, \lambda_n(t))$ is a diagonal matrix of eigenvalues, and $\bm{\Phi}(t)$ are the corresponding eigenvectors. Then, the derivative of the non-constant eigenvectors is given by
\begin{equation}
\frac{d}{dt}\bm{\phi}_i = \sum_{ {j=1}\atop{j\neq i}}^{n} \frac{\bm{\phi}_i\ensuremath{^\top} \vct{P}_\mathcal{N} \bm{\phi}_j}{{\lambda}_i-{\lambda}_j} \bm{\phi}_j + \sum_{j=1}^{\overline{n}} \frac{\bm{\phi}_i\ensuremath{^\top} \vct{P}\; \overline{\bm{\phi}}_j}{{\lambda}_i-\overline{{\lambda}}_j} \overline{\bm{\phi}}_j\,.
\label{eq:thrm2}
\end{equation}
\end{theorem}
{\em Proof:} See Appendix C.
\paragraph*{Remark.} If $\vct{L}_{\overline{\mathcal{N}}}$ shares some eigenvalues with $\vct{L}_{\mathcal{N}}$, the second sum in~(\ref{eq:thrm2}) would be slightly different~\cite{NME:NME1620260202}, but would still only have support over $\overline{\mathcal{N}}$.
\begin{figure}[t]
\centering
\setlength\figureheight{3.25cm}
\setlength\figurewidth{0.6\columnwidth}
\input{spectra.tikz}
\includegraphics[width=0.24\linewidth]{dogs.png}
\caption{\label{fig:spectra}Neumann spectra of a full shape and a part of it. The eigenvalues of the partial shape (in red) are approximately preserved under the partiality transformation (see Theorem~\ref{thm:evals}), and appear perturbed in the spectrum of the full shape (in blue). This simple observation allows us to estimate the diagonal slope of the functional map relating the two shapes; in this example, the slope is equal to $21/50$.}
\end{figure}
We conclude from Theorem~\ref{thm:evecs} that the perturbation associated with the partiality transformation gives rise to a mixing of eigenspaces. The second summation in~(\ref{eq:thrm2}) has support over $\overline{\mathcal{N}}$ and thus provides the completion of the eigenfunction on the missing part. The first summation in~(\ref{eq:thrm2}) is responsible for the modifications of the eigenvectors over the nodes in $\mathcal{N}$. Here the numerator has a term $\bm{\phi}_i\ensuremath{^\top} \vct{P}_\mathcal{N} \bm{\phi}_j$ which, since $\vct{D}_\mathcal{N}$ is band-diagonal and diagonally dominant, acts as a dot product of the eigenvectors over the boundary band. This points to a large mixing of eigenvectors with a strong co-presence near the boundary. In turn, the term ${\lambda}_i-{\lambda}_j$ in the denominator forces a strong mixing of eigenvectors corresponding to similar eigenvalues. This results in an amplification of the variation for higher eigenvalues, as eigenvalues tend to densify on the higher end of the spectrum, and explains the funnel-shaped spread of the matrix $\mathbf{C}$ visible at high frequencies (see Fig.~\ref{fig:C}). Similarly to the case of eigenvalues, the eigenvectors are also perturbed depending on the length and position of the cut. The variation of the eigenvectors due to the mixing within the partial shape can be reduced either by shortening the boundary of the cut, or by reducing the strength of the boundary interaction. The latter can be achieved by selecting a boundary along which eigenvectors with similar eigenvalues are either orthogonal, or both small. The {\em boundary interaction strength} can be quantified by considering the following function (we refer to Appendix C for a derivation):
\begin{equation}\label{eq:f}
f(v) = \sum_{{i,j=1}\atop{j\neq i}}^{n} \left(\frac{{\phi}_{iv}{\phi}_{jv}}{{\lambda}_i-{\lambda}_j}\right)^2\,.
\end{equation} Fig.~\ref{fig:cuts} shows an example of two different cuts with different interaction strengths, where the function $f$ is plotted on top of the cat model. The cuts plotted in the figure have the same length, but one cut goes along a symmetry axis of the shape and through low values of $f$, while the other goes through rather high values of $f$. This is manifested in the dispersion of the slanted diagonal structure of the matrix $\mathbf{C}$ (larger in the second case). \begin{figure}[t] \centering \begin{overpic} [trim=0cm 0cm 0cm 0cm,clip,width=1\linewidth]{nodal_points_3.pdf} \put(49,0){\input{f1.tikz}} \put(77,0){\input{f2.tikz}} \end{overpic} \caption{\label{fig:cuts}{\em Left}: A model is cut in two different ways (red and green curves) with cuts of same length. The off-diagonal dispersion depends mainly on the position of each cut. Function $f$ \eqref{eq:f} is plotted over the model. {\em Middle}: Ground-truth functional map between the complete model and the partial shape produced by the red cut (top), and values of $f$ along the cut (bottom). {\em Right}: Plots associated to the green cut.} \end{figure}
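Both quantities used in this analysis lend themselves to a direct numerical evaluation: the slope of $\vct{C}$ is estimated from the two Neumann spectra as $r/k$ (see Fig.~\ref{fig:spectra}), and the boundary interaction strength is given by Eq.~\eqref{eq:f}. The following Python/NumPy sketch illustrates one possible implementation; it is not part of the original pipeline, and the variable names (\texttt{evals\_part}, \texttt{evals\_full}, \texttt{evecs\_part}) are placeholders for the eigenvalues of the partial and full shapes and the eigenvectors of the partial shape, assumed to be sorted in increasing order of eigenvalue.
\begin{verbatim}
import numpy as np

def slope_estimate(evals_part, evals_full, k):
    # r = number of partial-shape eigenvalues lying below the largest
    # of the first k full-shape eigenvalues; the slope of C is r/k.
    thresh = np.max(evals_full[:k])
    r = int(np.sum(evals_part < thresh))
    return r / float(k)

def interaction_strength(evals_part, evecs_part):
    # f(v) = sum_{i != j} (phi_iv * phi_jv / (lambda_i - lambda_j))^2,
    # evaluated at every vertex v of the partial shape.
    lam = np.asarray(evals_part, dtype=float)      # shape (n,)
    phi = np.asarray(evecs_part, dtype=float)      # shape (num_vertices, n)
    denom = lam[:, None] - lam[None, :]            # lambda_i - lambda_j
    np.fill_diagonal(denom, np.inf)                # drop the i == j terms
    prod = phi[:, :, None] * phi[:, None, :]       # phi_iv * phi_jv per vertex
    return np.sum((prod / denom[None, :, :]) ** 2, axis=(1, 2))
\end{verbatim}
In practice the sums can be restricted to the first few tens of eigenpairs used in the functional map, since, as discussed above, the mixing is dominated by eigenvectors with nearby eigenvalues.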
\section{Introduction}
Let $G$ be a finite group and $k$ a field of characteristic $p >0$. Let $\operatorname{{\bf stmod}(\text{$kG$})}\nolimits$ be the stable category of finitely generated $kG$-modules modulo projective modules. It is a tensor triangulated category and its thick tensor ideal subcategories have been classified in terms of support varieties. Given a thick tensor ideal ${\mathcal{M}}$, a new category ${\mathcal{C}}$ is obtained by localizing $\operatorname{{\bf stmod}(\text{$kG$})}\nolimits$ at ${\mathcal{M}}$ by inverting any map whenever the third object in the triangle of that map is in ${\mathcal{M}}$. In such a category, the endomorphism ring of the trivial module $\operatorname{End}\nolimits_{{\mathcal{C}}}(k)$ is important because the endomorphism ring of every module is an algebra over it. In favorable cases its action on other modules can be used to define support varieties and other invariants. In the few cases where $\operatorname{End}\nolimits_{{\mathcal{C}}}(k)$ has been computed, it turned out to be some sort of homogeneous localization of the cohomology ring of the group (see \cite{R}, \cite{CW}, \cite{BG}). However, in all of those examples, the localization is with respect to a thick tensor ideal determined by a subvariety that is a hypersurface or union of hypersurfaces. It was clear that the technique used in those examples for determining $\operatorname{End}\nolimits_{{\mathcal{C}}}(k)$ does not work even when the determining subvariety is a point in ${\mathbb P}^2$. The point of this paper is to show that in such a case the structure of $\operatorname{End}\nolimits_{{\mathcal{C}}}(k)$ can be very different. In examples, we find that $\operatorname{End}\nolimits_{{\mathcal{C}}}(k)$ is a local $k$-algebra whose maximal ideal is infinitely generated and has square zero. We work in the setting that $G = H \times C$ is a finite group scheme where $H$ and $C$ are subgroup schemes defined over $k$ and the group algebra $kC$ is the group algebra of a cyclic group of order $p$. The thick tensor ideal subcategory is the collection of all finitely generated $kG$-modules whose variety is the image of the variety of $C$ under the restriction map. Thus a module in ${\mathcal{M}}$ is projective on restriction to $kH$, but not projective when restricted to $kC$. In this setting we prove that \[ \operatorname{End}\nolimits_{{\mathcal{C}}}(k) \cong \widehat{\operatorname{H}\nolimits}^{\leq 0}(H,k), \] the negative Tate cohomology ring of the subgroup scheme $H$. In the course of the proof we construct the idempotent modules associated to the subcategory ${\mathcal{M}}$. These are constructed directly from a $kH$-projective resolution of the trivial module for $H$, and it is this resolution that connects us to the Tate cohomology. In the case of group algebras, it has been shown that products in negative cohomology mostly vanish \cite{BC2} whenever the $p$-rank of the group is at least $2$. It is likely that the same holds for general group schemes whenever the Krull dimension of the cohomology ring of $H$ is at least $2$.
\section{Preliminaries} \label{sec:prelim}
In this section we establish some notation and recall some known results. For general reference on cohomology see \cite{CTVZ} or \cite{Cmods}. For basics on triangulated categories see \cite{Hap}. We follow the development in \cite{FP} for support varieties. Let $k$ be a field of characteristic $p > 0$, and let $G$ be a finite group scheme defined over $k$. Let $kG$ be its group algebra.
Let $\operatorname{{\bf mod}(\text{$kG$})}\nolimits$ denote the category of finitely generated $kG$-modules and $\operatorname{{\bf Mod}(\text{$kG$})}\nolimits$ the category of all $kG$-modules. Recall that $kG$ is a cocommutative Hopf algebra which means that for $M$ and $N$, $kG$-modules, $M \otimes_k N$ is also a $kG$-module with action given by the coalgebra map $kG \to kG \otimes kG$. If $G$ is a finite group, then $g(m\otimes n) = gm \otimes gn$ for $g \in G$, $m \in M$ and $n \in N$. By the symbol $\otimes$ we mean $\otimes_k$ unless otherwise indicated. In addition, $kG$ is self-injective so that projective $kG$-modules coincide with injective $kG$-modules. For $M$ a $kG$-module, $\Omega^{-1}(M)$ is the cokernel of an injective hull $M \hookrightarrow I$, for $I$ injective. Inductively, we let $\Omega^{-n}(M) = \Omega^{-1}(\Omega^{1-n}(M))$. On the positive side, $\Omega(M)$ is the kernel of a projective cover $P \twoheadrightarrow M$, for $P$ projective, and $\Omega^n(M) = \Omega(\Omega^{n-1}(M))$. The stable category $\operatorname{{\bf stmod}(\text{$kG$})}\nolimits$ of $kG$-modules modulo projectives has the same objects as $\operatorname{{\bf mod}(\text{$kG$})}\nolimits$, but the morphisms are given by the formula \[ \operatorname{\underline{Hom}}\nolimits_{kG}(M,N) \ = \ \operatorname{Hom}\nolimits_{\operatorname{{\bf stmod}(\text{$kG$})}\nolimits}(M,N) \ = \ \operatorname{Hom}\nolimits_{kG}(M,N)/\operatorname{PHom}\nolimits_{kG}(M,N) \] where $\operatorname{PHom}\nolimits_{kG}$ is the set of homomorphisms that factor through a projective module. The stable category is a tensor triangulated category. Triangles correspond to short exact sequences in $\operatorname{{\bf mod}(\text{$kG$})}\nolimits$. The translation functor is $\Omega^{-1}$. Let $\operatorname{{\bf StMod}(\text{$kG$})}\nolimits$ denote the stable category of all $kG$-modules. It has the same properties. The cohomology ring $\HH{*}{G}{k}$ is a finitely generated $k$-algebra, and every cohomology module $\operatorname{Ext}\nolimits^*_{kG}(M,N)$ is a finitely generated module over $\HH{*}{G}{k}$ for $M$ and $N$ in $\operatorname{{\bf mod}(\text{$kG$})}\nolimits$ \cite{FS}. Let ${V_G(k)}= \operatorname{Proj}\nolimits \HH{*}{G}{k}$ denote the projectivized prime ideal spectrum of $\HH{*}{G}{k}$. If $E$ is an elementary abelian $p$-group of rank $r$ (order $p^r$), then modulo its radical $\HH{*}{E}{k}/\operatorname{Rad}\nolimits(\HH{*}{E}{k}) \cong k[\zeta_1, \dots, \zeta_r]$ is a polynomial ring in $r$ variables. Thus when the field is algebraically closed, ${V_E(k)} \cong {\mathbb P}^{r-1}$ is projective $(r-1)$-space. We define the support variety of a $kG$-module by the method of $\pi$-points \cite{FP}. For finite groups this is essentially the same as the development in \cite{BCR2}. A $\pi$-point for $G$ is a flat map $\alpha_K: K[t]/(t^p) \to KG_K$, where $K$ is an extension of $k$, and $\alpha_K$ factors through the group algebra of some unipotent abelian subgroup scheme $C_K \subseteq G_K$. For $M$ a $kG$-module, let $\alpha_K^*(M_K)$ denote the restriction of $M_K = K \otimes M$ to a $K[t]/(t^p)$-module along $\alpha_K$. Two $\pi$-points $\alpha_K$ and $\beta_L$ are equivalent if for every finite dimensional $kG$-module $M$, $\alpha_K^*(M_K)$ is projective if and only if $\beta_L^*(M_L)$ is projective. We say that a $\pi$-point $\alpha_K$ specializes to $\beta_L$ if, for any finitely generated $kG$-module $M$, the projectivity of $\alpha^*_K(M_K)$ implies the projectivity of $\beta^*_L(M_L)$.
So two $\pi$-points are equivalent if each specializes to the other. Let ${\mathcal{V}}_G(k)$ denote the set of all equivalence classes of $\pi$-points. Then ${\mathcal{V}}_G(k)$ is a scheme and is isomorphic as a scheme to ${V_G(k)}$. Essentially, the class of a $\pi$-point $\alpha_K$ corresponds to the homogeneous prime ideal that is the kernel of the restriction map $\operatorname{H}\nolimits^*(G,K) \to \operatorname{H}\nolimits^*(K[t]/(t^p), K)/ \operatorname{Rad}\nolimits(\operatorname{H}\nolimits^*(K[t]/(t^p),K))$ along $\alpha_K$. Thus, since we extend the field $k$, a $\pi$-point may correspond to the generic point of a homogeneous irreducible subvariety of $\HH{*}{G}{k}$. The support variety ${\mathcal{V}}_G(M)$ of a $kG$-module $M$ is the set of all equivalence classes of $\pi$-points $\alpha_K$ such that $\alpha_K^*(M_K)$ is not projective. If $M$ is a finitely generated module then ${\mathcal{V}}_G(M)$ is a closed subvariety of ${\mathcal{V}}_G(k)$. Otherwise, it is just a subset. A subcategory of a tensor triangulated category is thick if it is closed under the taking of direct summands. It is a thick tensor ideal if, in addition, the tensor product of an object in the subcategory with any other object is again in the subcategory. There is a complete classification of the thick tensor ideals of $\operatorname{{\bf stmod}(\text{$kG$})}\nolimits$. Suppose that ${\mathcal{V}}$ is a subset of ${\mathcal{V}}_G(k)$ that is closed under specialization, meaning that if $\alpha_K$ specializes to $\beta_L$ and if $\alpha_K$ is in ${\mathcal{V}}$ then so is $\beta_L$. Let ${\mathcal{M}}_{{\mathcal{V}}}$ be the full subcategory of $\operatorname{{\bf stmod}(\text{$kG$})}\nolimits$ generated by all $kG$-modules $M$ such that ${\mathcal{V}}_G(M) \subseteq {\mathcal{V}}$. The properties of the support variety are sufficient to ensure that any ${\mathcal{M}}_{{\mathcal{V}}}$ is a thick tensor ideal in $\operatorname{{\bf stmod}(\text{$kG$})}\nolimits$.
\begin{thm} \cite{BCR, FP} \label{thm:bcr3}
Every thick tensor ideal in $\operatorname{{\bf stmod}(\text{$kG$})}\nolimits$ is equal to ${\mathcal{M}}_{{\mathcal{V}}}$ for some subset ${\mathcal{V}} \subseteq {\mathcal{V}}_G(k)$ which is closed under specialization.
\end{thm}
If ${\mathcal{M}}$ is a thick subcategory of a triangulated category ${\mathcal{C}}$, then the Verdier localization of ${\mathcal{C}}$ at ${\mathcal{M}}$ is the category whose objects are the same as those of ${\mathcal{C}}$ and whose morphisms are obtained by inverting a morphism if the third object in the triangle of that morphism is in the subcategory ${\mathcal{M}}$. Thus, a morphism from $L$ to $N$ in the localized category has the form \[ \xymatrix{ N \ar[r]^\theta & M & L \ar[l]_\gamma } \] where the third object in the triangle of the map $\theta$ is in ${\mathcal{M}}$. So in the localized category $\theta^{-1}\gamma$ is a morphism.
\section{Resolution modules} \label{sec:resol}
Assume that $k$ is a field of characteristic $p$, and consider a finite group scheme of the form $G = H \times C$ where $C$ is the group scheme of a cyclic group of order $p$ and $H$ is a finite group scheme defined over $k$. Let $z$ be a generator for $C$, and $Z = z-1$, so that $kC \cong k[Z]/(Z^p)$ and $kG \cong kH \otimes k[Z]/(Z^p)$. Suppose that \[ \xymatrix{ \dots \ar[r] &C_2 \ar[r]^{\partial} & C_1 \ar[r]^{\partial} & C_0 \ar[r] & 0 } \] is a complex of $kH$-modules, with all $C_i = \{0\}$ for $i <0$. We use the complex to define a sequence of $kG$-modules which we call resolution modules.
For any $n > 0$, let $M(C_*,n)$ be the $kG$-module whose restriction to $kH$ is the direct sum \[ C_0 \oplus C_1^{p-1} \oplus C_2 \oplus C_3^{p-1} \oplus \dots \oplus C_{2n-1}^{p-1}. \] For $i$ odd, define the action of $Z$ on $(m_1, \dots, m_{p-1}) \in C_i^{p-1}$ to be \[ Z(m_1, \dots, m_{p-1}) = (0, m_1, \dots, m_{p-2}) + \partial(m_{p-1}) \quad \in C_i^{p-1} \oplus C_{i-1} \] while for $i = 2j$ and $m \in C_i$, let \[ Zm = \begin{cases} 0 & \text{ if } m \in C_0 \text{ and} \\ (\partial(m), 0, \dots, 0) \in C_{2j-1}^{p-1} & \text{ if } m \in C_{2j}, \ j >0 . \end{cases} \] Note that the action of $Z$ commutes with that of $kH$, since the boundary maps $\partial$ are $kH$-homomorphisms. Moreover, $Z^pM = \{ 0\}$ because $C_*$ is a complex. Consequently, the relations define a $kG$-module. We have a nested sequence of modules \[ M(C_*,1) \subseteq M(C_*,2) \subseteq M(C_*,3) \subseteq \dots \subseteq M(C_*,\infty), \] where $M(C_*,\infty)$ is the limit. That is, $M(C_*,\infty)$ is the module whose restriction to $kH$ is $\oplus_{i \geq 0} (C_{2i} \oplus C_{2i+1}^{p-1})$ with the action by $Z$ defined as above. Note that if $C_*$ is a complex of finitely generated modules, then every $M(C_*,n)$ is finitely generated, though $M(C_*,\infty)$ may not be. The construction has several interesting properties.
\begin{lemma} \label{lem:resol1}
Suppose that $C_*$ and $D_*$ are chain complexes of $kH$-modules in nonnegative degrees. We have the following.
\begin{enumerate}
\item Any chain map $\sigma: C_* \to D_*$ in even degrees induces a homomorphism $M(\sigma): M(C_*,n) \to M(D_*,n)$ for every $n\geq 0$ and $n = \infty$.
\item For every $n\geq 0$ and for $n = \infty$, $M(C_* \oplus D_*,n) \cong M(C_*,n) \oplus M(D_*,n)$.
\item If $J$ is a subgroup scheme of $H$, then the restriction of $M(C_*, n)$ to $J \times C$ is isomorphic to $M((C_*)_{\downarrow kJ}, n)$ where $(C_*)_{\downarrow kJ}$ is the restriction of $C_*$ to a complex of $kJ$-modules.
\item If $C_*$ is an exact complex of projective modules, then in the stable module category $M(C_*,\infty)$ is zero.
\item If $C_*$ and $D_*$ are projective resolutions of the same module $N$, then in the stable category $M(C_*,\infty) \cong M(D_*,\infty)$.
\end{enumerate}
\end{lemma}
\begin{proof}
The first item is clear since any chain map commutes with the boundary map and hence the $kH$-map $M(\sigma):M(C_*,n) \to M(D_*,n)$, which is defined on the direct sum of the terms of the complex, is a $kG$-homomorphism. The proofs of Items 2 and 3 are straightforward. Suppose that $C_*$ is an exact complex of projective modules. Then $C_*$ is a direct sum of complexes having the form $0 \to D_{i+1} \to D_i \to 0$. For $n > i$, either $M(D_*,n) \cong D_{i+1}^{p-1} \oplus D_i$ or $M(D_*,n) \cong D_{i+1} \oplus D_i^{p-1}$ as $kH$-modules. Because $\partial$ maps $D_{i+1}$ isomorphically onto $D_i$ and $D_i$ is projective as a $kH$-module, we conclude that $M(D_*,n) \cong D_i \otimes k[Z]/(Z^p)$ is a projective module. Suppose that $C_*$ and $D_*$ are projective resolutions of the same module $N$. Then there are chain maps $\sigma: C_* \to D_*$ and $\tau: D_* \to C_*$ that lift the identity of $N$. That is, the compositions $\sigma\tau$ and $\tau\sigma$ are homotopic to the identity maps on $D_*$ and $C_*$, respectively. It follows that there are exact complexes $P_*$ and $Q_*$ of projective $kH$-modules such that $C_* \oplus P_* \cong D_* \oplus Q_*$. Thus part (5) follows from parts (2) and (4).
\end{proof}
From the above we see that there is a functor $\Gamma: \operatorname{{\bf stmod}}\nolimits(kH) \to \operatorname{{\bf StMod}(\text{$kG$})}\nolimits$ that takes a $kH$-module $N$ to $M(P_*,\infty)$ where $P_*$ is a $kH$-projective resolution of $N$. Moreover, it can be checked that this is a functor of triangulated categories, since for any map between objects in $\operatorname{{\bf stmod}(\text{$kH$})}\nolimits$, the mapping cone of the induced map on projective resolutions is a projective resolution of the third object in the triangle of that map. What is interesting is that if we assume that $p=2$ and adjust the Hopf algebra structure on $kG$, then $\Gamma$ is also a functor of tensor triangulated categories. That is, the normal coalgebra structure on the group algebra $kC$ is the diagonal which takes a group element $g$ to $g \otimes g$. Because $kG$ is a product of algebras $kG \cong kH \otimes k[Z]/(Z^2)$, there is another natural coalgebra map that is the given coalgebra map on $kH$ and takes $Z$ to $1 \otimes Z + Z \otimes 1$. This comes by regarding $k[Z]/(Z^2)$ as the restricted enveloping algebra of a one-dimensional restricted Lie algebra. Regarding it as a group algebra of a cyclic group of order 2, we would have that $Z \mapsto 1 \otimes Z + Z \otimes 1 + Z \otimes Z$. Now note that if $P_*$ and $Q_*$ are projective resolutions of $kH$-modules $L$ and $N$, then in $M(P_* \otimes Q_*, \infty)$ we have that $Z(p\otimes q) = p \otimes Zq + Zp \otimes q = p \otimes \partial(q) + \partial(p) \otimes q = \partial(p \otimes q)$. Thus we have, using the Lie coalgebra structure, that \[ \Gamma(L \otimes N) = M(P_* \otimes Q_*, \infty) \cong M(P_*, \infty) \otimes M(Q_*, \infty) = \Gamma(L) \otimes \Gamma(N). \] In addition, it implies that, with the Lie coalgebra structure, \[ \Gamma(k) \otimes \Gamma(k) \cong \Gamma(k \otimes k) = \Gamma(k) \] so that $\Gamma(k)$ is an idempotent module. In Section \ref{sec:cann}, we show that $\Gamma(k)$ is idempotent even without the change in the Hopf structure.
\section{An exact sequence} \label{sec:exact}
Assume that $k$ and $G = H \times C$ are as before. The purpose of this section is to construct a triangle in $\operatorname{{\bf StMod}(\text{$kG$})}\nolimits$ that in Section \ref{sec:cann} is shown to be the canonical triangle of idempotent modules associated to the thick subcategory of $\operatorname{{\bf stmod}(\text{$kG$})}\nolimits$ consisting of modules whose support varieties are in the image of ${\mathcal{V}}_C(k)$ in ${\mathcal{V}}_G(k)$. Let ${\mathsf k}$ denote the complex of $kH$-modules that has $k$ in degree $0$ and the zero module in all other degrees. Suppose that $P_*$ is a projective resolution of the trivial $kH$-module $k$. Let $C_*$ be the augmented complex in nonnegative degrees: \[ \xymatrix{ \dots \ar[r] & P_2 \ar[r]^{-\partial} & P_1 \ar[r]^{-\partial} & P_0 \ar[r]^{-\varepsilon} & k \ar[r] & 0 } \] with the augmentation and boundary maps negated. That is, we set $C_0 = k$ and $C_i = P_{i-1}$, for $i > 0$. Let $N(P_*, n)$ be the module whose restriction to $kH$ is the direct sum \[ k \oplus P_0^{p-1} \oplus P_1 \oplus P_2^{p-1} \oplus \dots \oplus P_{2n-1}. \] Multiplication by $Z$ annihilates the direct summand $k$.
For $m = (m_1, \dots, m_{p-1}) \in P_{2i}^{p-1}$, define \[ Zm = \begin{cases} - \varepsilon(m_{p-1}) + (0, m_1, \dots, m_{p-2}) \in k \oplus P_0^{p-1} & \text{ if } i = 0 \\ -\partial(m_{p-1}) + (0, m_1, \dots, m_{p-2}) \in P_{2i-1} \oplus P_{2i}^{p-1} & \text{ if } i > 0 \end{cases} \] For $m \in P_{2i-1}$, let $Zm = -(\partial(m), 0, \dots, 0) \in P_{2i-2}^{p-1}$. Thus, $N(P_*,n)$ looks like $M(C_*,n)$ except that it has an odd rather than an even number of $kH$-summands. In the limit, $N(P_*,\infty) \cong M(C_*, \infty)$. It is easy to see that Lemma \ref{lem:resol1} holds for this construction. The main result of this section is the construction of an exact sequence with the form given in the next proposition.
\begin{prop} \label{prop:canon-tri}
For the projective resolution $P_*$ as above and any $n>0$, including $n = \infty$, there is a projective $kG$-module $Q$ and an exact sequence \[ \xymatrix{ 0 \ar[r] & M(P_*,n) \ar[r]^{\theta} & k \oplus Q \ar[r]^{\mu} & N(P_*,n) \ar[r] & 0 } \] where the class of the map $\theta$ in $\operatorname{\underline{Hom}}\nolimits_{kG}(M(P_*,n),k)$ is the class of the map induced by the augmentation $\varepsilon:P_* \to k$ and the class of the map $\mu$ in $\operatorname{\underline{Hom}}\nolimits_{kG}(k, N(P_*,n))$ is the class of the map induced by the degree zero inclusion of ${\mathsf k}$ into the augmented projective resolution $(P_*, \varepsilon)$.
\end{prop}
\begin{proof}
For $i \geq 0$, let $Q_i = P_i \otimes k[Z]/(Z^p)$, which is a projective module over $kG = kH \otimes k[Z]/(Z^p)$. Let $Q = Q_0 \oplus \dots \oplus Q_{2n-1}$ or let $Q = Q_0 \oplus Q_1 \oplus \dots$ in the case that $n = \infty$. We define the maps $\theta$ and $\mu$ as follows. Let $\ell_Q$ and $\ell_N$ be generators for the $kH$-summands isomorphic to $k$ in $k \oplus Q$ and in $N(P_*,n)$, respectively. Then \[ \theta(m) = \varepsilon(m)\ell_Q \otimes 1 \quad \oplus \quad m \otimes Z^{p-1} \qquad \in k\oplus Q_{0} \quad \text{ for } m \in P_{0}. \] For $0 < i < n$, let \[ \theta(m) = \partial(m) \otimes 1 \quad \oplus \quad m \otimes Z^{p-1} \qquad \in Q_{2i-1} \oplus Q_{2i} \quad \text{ for } m \in P_{2i}. \] For $1 \leq i \leq n$ and $(m_1, \dots, m_{p-1}) \in P_{2i-1}^{p-1}$, let \[ \theta(m_1, \dots, m_{p-1}) = \sum_{j = 1}^{p-1} \partial(m_j) \otimes Z^{j-1} \ \oplus \ \sum_{j = 1}^{p-1} m_j \otimes Z^{j} \quad \in Q_{2i-2} \oplus Q_{2i-1}. \] The map $\mu$ is given by the following rules. First, let $\mu(\ell_Q) = \ell_N$. For an element $\sum_{j=0}^{p-1} m_j \otimes Z^j \in Q_i$, let \[ \mu(\sum_{j=0}^{p-1} m_j \otimes Z^j) = \begin{cases} -\varepsilon(m_{p-1}) \oplus (m_0, \dots, m_{p-2}) & \in k \oplus P_0^{p-1} \quad \text{ if } i=0 \\ -\partial(m_{p-1}) \oplus (m_0, \dots, m_{p-2}) & \in P_{i-1} \oplus P_i^{p-1} \quad \text{ if } i \text{ is even} \\ -(\partial m_1, \dots, \partial m_{p-1}) \oplus m_0 & \in P_{i-1}^{p-1} \oplus P_i \quad \text{ if } i \text{ is odd} \end{cases} \] It is easy to see that $\theta$ and $\mu$ are $kH$-homomorphisms. Hence to see that they are $kG$-homomorphisms, it is only necessary to show that the maps commute with the action of $Z$. We leave it also to the reader to check that $\mu\theta = 0$. Once this is done, the exactness of the sequence can be demonstrated by noting that there is an obvious filtration on the sequence itself such that the successive quotients have the form either $0 \to P_i \to k \oplus Q_i \to P_i^{p-1} \to 0$ or $0 \to P_i^{p-1} \to k \oplus Q_i \to P_i \to 0$ where the maps are induced by $\theta$ and $\mu$.
One can show that these quotient sequences are exact. Finally, the maps to and from the summand $k$ in the middle term of the sequence can be determined to be as asserted from the construction.
\end{proof}
\section{Idempotent Modules} \label{sec:idmod}
In this section, we review some information that we require on idempotent modules. We also give a brief description of a calculation of the endomorphism ring of the trivial module in the stable category localized at a thick subcategory defined by the subvariety of an ideal generated by a single element in $\HH{*}{G}{k}$ (see Theorem \ref{thm:bcr3}). This material is taken mostly from Rickard's paper \cite{R}. Variations on the theme, and other accounts, can be found in \cite{CW}, \cite{Cmods} and the last section of \cite{BG}. In general, associated to a thick tensor ideal ${\mathcal{M}}$ in $\operatorname{{\bf stmod}(\text{$kG$})}\nolimits$, for any $X$ in $\operatorname{{\bf StMod}(\text{$kG$})}\nolimits$, there is a distinguished triangle in $\operatorname{{\bf StMod}(\text{$kG$})}\nolimits$ having the form \begin{equation} \label{eq:cann} \xymatrix{ {\mathcal{E}}_{{\mathcal{M}}}(X) \ar[r]^{\quad \theta_X} & X \ar[r]^{\mu_X \quad} & {\mathcal{F}}_{{\mathcal{M}}}(X) \ar[r] & \Omega^{-1}({\mathcal{E}}_{{\mathcal{M}}}(X)) } \end{equation} and having certain universal properties \cite{R}. Let ${\mathcal{M}}^{\oplus}$ denote the closure of ${\mathcal{M}}$ in $\operatorname{{\bf StMod}(\text{$kG$})}\nolimits$ under arbitrary direct sums. The map $\theta_X$ is universal for maps from objects in ${\mathcal{M}}^\oplus$ to $X$, meaning that if $Y$ is in ${\mathcal{M}}^{\oplus}$ then any map $Y \to X$ factors through $\theta_X$. The map $\mu_X$ is universal for maps from $X$ to ${\mathcal{M}}$-local objects. An object $Y$ is ${\mathcal{M}}$-local if $\operatorname{\underline{Hom}}\nolimits_{kG}(M, Y) = \{0 \}$ for all $M$ in ${\mathcal{M}}$. The universal property says that for a module $Y$ that is ${\mathcal{M}}$-local, any map $X \to Y$ factors through $\mu_X$. For a module $X$, the canonical triangle for $X$ is the tensor product of $X$ with the canonical triangle for $k.$ The modules ${\mathcal{E}}_{{\mathcal{M}}}(k)$ and ${\mathcal{F}}_{{\mathcal{M}}}(k)$ are idempotent modules in that ${\mathcal{E}}_{{\mathcal{M}}}(k) \otimes {\mathcal{E}}_{{\mathcal{M}}}(k) \cong {\mathcal{E}}_{{\mathcal{M}}}(k)$ and ${\mathcal{F}}_{{\mathcal{M}}}(k) \otimes {\mathcal{F}}_{{\mathcal{M}}}(k) \cong {\mathcal{F}}_{{\mathcal{M}}}(k)$ in the stable category. In addition, the two are orthogonal, meaning that ${\mathcal{E}}_{{\mathcal{M}}}(k) \otimes {\mathcal{F}}_{{\mathcal{M}}}(k)$ is projective, {\it i. e.} zero in the stable category. It is helpful to know the support varieties of these modules. The following is well known, but we sketch a proof.
\begin{prop} \label{prop:varidem}
Suppose that ${\mathcal{V}}$ is a collection of subvarieties of ${\mathcal{V}}_G(k)$ that is closed under specialization. Let ${\mathcal{M}} = {\mathcal{M}}_{\mathcal{V}}$, the thick tensor ideal of all finitely generated $kG$-modules $M$ such that ${\mathcal{V}}_G(M)$ is in ${\mathcal{V}}$. Then ${\mathcal{V}}_G({\mathcal{E}}_{{\mathcal{M}}}(k)) = {\mathcal{V}}$ and ${\mathcal{V}}_G({\mathcal{F}}_{{\mathcal{M}}}(k)) = {\mathcal{V}}_G(k) \setminus {\mathcal{V}}$.
\end{prop}
\begin{proof}
The fact that ${\mathcal{E}}_{{\mathcal{M}}}(k) \otimes {\mathcal{F}}_{{\mathcal{M}}}(k)$ is projective implies that their support varieties are disjoint.
The fact that the trivial module is the third object in a triangle involving the two implies that ${\mathcal{V}}_G({\mathcal{E}}_{{\mathcal{M}}}(k)) \cup {\mathcal{V}}_G({\mathcal{F}}_{{\mathcal{M}}}(k)) = {\mathcal{V}}_G(k)$. If $V$ is a closed subvariety of ${\mathcal{V}}$, then the universal property says that the identity homomorphism of a finitely generated module $M$ with $V_G(M) = V$ factors through $M \otimes {\mathcal{E}}_{{\mathcal{M}}}(k)$. Thus $V \in {\mathcal{V}}_G({\mathcal{E}}_{{\mathcal{M}}}(k))$. On the other hand, if $V \in {\mathcal{V}}_G(k)$ is not in ${\mathcal{V}}$, then there is a module $M$ in $\operatorname{{\bf StMod}(\text{$kG$})}\nolimits$ whose support variety is the one (generic) point $V$. Then $M$ is ${\mathcal{M}}$-local, and the universal property implies that $M$ is a direct summand of $M \otimes {\mathcal{F}}_{{\mathcal{M}}}(k)$. So $V$ is not in ${\mathcal{V}}_G({\mathcal{E}}_{{\mathcal{M}}}(k))$.
\end{proof}
Choose a nonnilpotent element $\zeta \in \operatorname{H}\nolimits^n(G,k)$ for some $n > 0$ and let $V = V_G(\zeta)$ be the variety of the ideal generated by $\zeta$. Note that $\zeta$ is represented by a cocycle $\zeta: k \to \Omega^{-n}(k)$. For convenience, we denote the shifts of this map $\Omega^t(k) \to \Omega^{t-n}(k)$ also by $\zeta$. Let ${\mathcal{M}}_V$ be the thick tensor ideal consisting of all finitely generated $kG$-modules $M$ with $V_G(M) \subseteq V$. The cohomology of any module in ${\mathcal{M}}$ is annihilated by a power of $\zeta.$ As a consequence, for $X$ in $\operatorname{{\bf stmod}(\text{$kG$})}\nolimits$, any map $\tau:k \to X$, whose third object in its triangle is in ${\mathcal{M}}$, has the property that it is a factor of $\zeta^t: k \to \Omega^{-tn}(k)$ for some $t$ sufficiently large. That is, there is a map $\beta: X \to \Omega^{-tn}(k)$ such that $\zeta^t = \beta\tau$. Thus it can be shown \cite{R} that the module ${\mathcal{F}}_{{\mathcal{M}}}(k)$ can be taken to be the direct limit (to be precise, we take a homotopy colimit) of the system \[ \xymatrix{ k \ar[r]^{\zeta} & \Omega^{-n}(k) \ar[r]^{\zeta} & \Omega^{-2n}(k) \ar[r]^{\zeta} & \Omega^{-3n}(k) \ar[r]^{\zeta} & \dots } \] Likewise, ${\mathcal{E}}_{{\mathcal{M}}}(k)$ can be taken to be the homotopy colimit of the third objects in the triangles of the maps $\zeta^t: k \to \Omega^{-tn}(k)$. It is the third object in the triangle $k \to {\mathcal{F}}_{{\mathcal{M}}}(k)$. For $X$ in $\operatorname{{\bf StMod}(\text{$kG$})}\nolimits$, the canonical triangle \ref{eq:cann} involving $X$ is the tensor product of $X$ with this one. Also ${\mathcal{F}}_{{\mathcal{M}}}(X)$ is ${\mathcal{M}}$-local and the universal properties are satisfied. From all of this, it is routine to show the following (see \cite{R}).
\begin{prop} \label{prop:trivendo}
Let ${\mathcal{C}}$ be the Verdier localization of the category $\operatorname{{\bf stmod}(\text{$kG$})}\nolimits$ at the thick tensor ideal ${\mathcal{M}}_V$ for $V = V_G(\zeta)$ as above. Then the endomorphism ring $\operatorname{Hom}\nolimits_{{\mathcal{C}}}(k,k)$ of the trivial module $k$ is the degree zero part of the localized cohomology ring $\HH{*}{G}{k}[\zeta^{-1}]$.
\end{prop}
\begin{proof}
Recall that there is a natural identification $\operatorname{Hom}\nolimits(k, \Omega^{-m}(k)) \cong \operatorname{H}\nolimits^m(G,k)$ for any $m \geq 0$. Choose an element in $\operatorname{Hom}\nolimits_{{\mathcal{C}}}(k,k)$.
It must have the form $\tau^{-1}\gamma$ where $\gamma: k \to X$ and $\tau: k \to X$ has the property that the third object in the triangle of $\tau$ is in ${\mathcal{M}}_V$. Then as above, for some $t$ there exists $\beta: X \to \Omega^{-tn}(k)$ such that $\zeta^t = \beta\tau$. So we have a diagram \[ \xymatrix{ k \ar[r]^\tau & X \ar[d]^\beta & k \ar[l]_\gamma \\ &\Omega^{-tn}(k) } \] and $\tau^{-1}\gamma = (\beta\tau)^{-1}(\beta\gamma) = \zeta^{-t}\beta\gamma$ where $\beta\gamma$ is an element of $\operatorname{H}\nolimits^{tn}(G,k)$.
\end{proof}
We end this section with a straightforward calculation that is needed later.
\begin{prop} \label{prop:rank2}
Suppose that $G = \langle y,z \rangle$ is an elementary abelian group of order $p^2$, and $H = \langle y \rangle$. Let $k$ be a field of characteristic $p$. Let $P_*$ be a minimal $kH$-projective resolution of the trivial module $k$. Let $V$ be the variety corresponding to the point defined by the subgroup $\langle z \rangle$, and let ${\mathcal{M}}_V$ be the thick tensor ideal of finitely generated modules $M$ such that $V_G(M) \subseteq V$. Then ${\mathcal{E}}_{{\mathcal{M}}}(k) \cong M(P_*, \infty)$, ${\mathcal{F}}_{{\mathcal{M}}}(k) \cong N(P_*, \infty)$ and the canonical triangle of $k$ as in \ref{eq:cann} is the triangle as given in Proposition \ref{prop:canon-tri}, defined by the augmentation map $\varepsilon: P_* \to k$.
\end{prop}
\begin{proof}
First notice that in the minimal resolution $P_*$, every $P_i \cong kH$, a $p$-dimensional module with basis consisting of $1, Y, \dots, Y^{p-1}$ where $Y = y-1$. The map $\partial: P_{2i+1} \to P_{2i}$ takes $1$ to $Y$ and $\partial: P_{2i} \to P_{2i-1}$ takes $1$ to $Y^{p-1}.$ Let $Z = z-1$. Hence, $N(P_*,n)$ has a basis $u_0, \dots, u_{2n-1}$ where $u_{2i} = (1, 0, \dots, 0) \in P_{2i}^{p-1}$, and $u_{2i+1} = 1 \in P_{2i+1}$ for $i = 0, \dots, n-1$. Thus, we can see from the definition of $N(P_*,n)$ that \[ Yu_{2i}= -Zu_{2i+1} \ \text{ for } 0 \leq i \leq n-1 \text{ and } Y^{p-1}u_{2i-1} = -Z^{p-1}u_{2i} \ \text{ for } 1 \leq i \leq n-1. \] The map $\varepsilon: k \to N(P_*, n)$ induced by the augmentation $P_* \to k$ takes $1 \to Z^{p-1}u_0$. An injective $kH$-resolution of $k$ has the form \[ \xymatrix{ 0 \ar[r] & k \ar[r]^{\epsilon} & R_0 \ar[r] & R_1 \ar[r] & \dots } \] where every $R_i \cong kH$, $\epsilon(1) \in Y^{p-1}R_0$, and the boundary maps alternate between multiplication by $Y$ and by $Y^{p-1}$. Similarly, for $A = k[Z]/(Z^p)$ an $A$-injective resolution of $k$ has the form $0 \to k \to Q_0 \to Q_1 \dots$ where every $Q_i \cong A$ and the maps are as above with $Z$ substituted for $Y$. Thus, a $kG$-injective resolution of $k$ is the tensor product $Q_* \otimes R_*.$ Now, $\Omega^{-2n}(k)$ is the quotient \[ \Omega^{-2n}(k) = (Q_* \otimes R_*)_{2n-1}/\partial((Q_* \otimes R_*)_{2n-2}). \] This module is generated by the classes of the elements $v_i = 1 \otimes 1$ in $Q_i \otimes R_{2n-i-1}$ for $i= 0, \dots, 2n-1$. We have that for $1 \otimes 1 \in Q_{2i}\otimes R_{2(n-i)-2}$, \[ \partial(1 \otimes 1) = Z \otimes 1 \ + \ 1 \otimes Y \quad \in (Q_{2i+1}\otimes R_{2(n-i)-2}) \ \oplus \ (Q_{2i} \otimes R_{2(n-i)-1}), \] while for $1 \otimes 1$ in $Q_{2i-1} \otimes R_{2(n-i)-1}$ \[ \partial(1 \otimes 1) = Z^{p-1} \otimes 1 \ - \ 1 \otimes Y^{p-1} \quad \in (Q_{2i}\otimes R_{2(n-i)-1}) \ \oplus \ (Q_{2i-1} \otimes R_{2(n-i)}). \] Thus we have relations $Zv_{2i+1} = - Yv_{2i}$ and $Z^{p-1}v_{2i} = Y^{p-1}v_{2i-1}$.
These are (except for signs) the same relations as for $N(P_*,n)$, and hence $\Omega^{-2n}(k) \cong N(P_*,n)$. Next notice that, with the above identification, the map $k \to N(P_*,n)$ takes $1 \in k$ to the class of $Z^{p-1}v_0$ which is contained in $Q_0 \otimes R_{2n-1}$. In the case that $n=1$, this is the cohomology class of the inflation to $G$ of the polynomial generator $\zeta$ in degree 2 of $\HH{*}{H}{k}$. For $n>1$ it represents the class of $\zeta^n$. Thus we have a sequence of maps \[ \xymatrix{ k \ar[r]^\zeta & \Omega^{-2}(k) \ar[r]^\zeta & \Omega^{-4}(k) \ar[r]^\zeta & \Omega^{-6}(k) \ar[r] & \dots } \] By the construction of \cite{R} that is summarized at the beginning of this section, we have that the canonical triangle associated to ${\mathcal{M}}_V$ is the triangle of the map $k \to N(P_*,\infty)$. This proves the proposition.
\end{proof}
\section{Canonical triangles} \label{sec:cann}
Assume as before that $kG = kH \otimes kC$ where $kH$ and $kC$ are Hopf subalgebras and $kC \cong k[Z]/(Z^p)$. Let ${\mathcal{W}}$ be the point in ${\mathcal{V}}_G(k)$ which is the image of the restriction map $\operatorname{res}\nolimits_{G, C}^*: {\mathcal{V}}_C(k) \to {\mathcal{V}}_G(k)$. That is, ${\mathcal{W}}$ is the equivalence class of the inclusion $k[Z]/(Z^p) \to kG$ viewed as a $\pi$-point. Let ${\mathcal{M}} = {\mathcal{M}}_{\mathcal{W}}$ be the thick tensor ideal of $\operatorname{{\bf stmod}(\text{$kG$})}\nolimits$ consisting of all modules whose variety is in ${\mathcal{W}}$. Thus a module in ${\mathcal{M}}$ is projective on restriction to $kH$ and is not projective when restricted to $kC$. Let $P_*$ be a $kH$-projective resolution of the trivial $kH$-module $k$. As in the last section, let ${\mathcal{E}} = \Gamma(k) = M(P_*, \infty)$ and ${\mathcal{F}} = N(P_*,\infty)$. Let \[ \xymatrix{ {\mathcal{S}}: \qquad & {\mathcal{E}} \ar[r]^{\theta} & k \ar[r]^{\mu} & {\mathcal{F}} \ar[r] & \Omega^{-1}({\mathcal{E}}). } \] be the triangle of the exact sequence in Proposition \ref{prop:canon-tri}. For any $X \in \operatorname{{\bf StMod}(\text{$kG$})}\nolimits$, tensoring with ${\mathcal{S}}$ we obtain a triangle \[ \xymatrix{ X \otimes {\mathcal{S}}: & {\mathcal{E}}(X) \ar[r] & X \ar[r] & {\mathcal{F}}(X) \ar[r] & \Omega^{-1}({\mathcal{E}}(X)) } \] Our object is to show that the triangles $X \otimes {\mathcal{S}}$ satisfy certain universal properties. This allows us to calculate the endomorphisms in the Verdier localization at the subcategory ${\mathcal{M}}$. The first step is to establish the support varieties of the modules ${\mathcal{E}}$ and ${\mathcal{F}}$.
\begin{prop} \label{prop:varidem2}
The variety of ${\mathcal{E}}$ is ${\mathcal{V}}_G({\mathcal{E}}) = {\mathcal{W}}$, the set consisting of the single equivalence class of $\pi$-points as above. The variety of ${\mathcal{F}}$ is ${\mathcal{V}}_G(k) \setminus {\mathcal{W}}$.
\end{prop}
\begin{proof}
Suppose that $K$ is an extension of $k$ and let $\alpha_K:K[t]/(t^p) \to KG$ be a $\pi$-point. Let $\beta: k[t]/(t^p) \to KG$ be the $\pi$-point given by $\beta(t) = Z$. Our objective is to show that if $\alpha_K$ is not equivalent to $\beta$ then the class of $\alpha_K$ is not in the variety of ${\mathcal{E}}$. We know that the class of $\beta$ is in ${\mathcal{V}}_G({\mathcal{E}})$. So assume that $\alpha_K$ is not equivalent to $\beta$. Recall that ${\mathcal{V}}_G(k) \cong \operatorname{Proj}\nolimits \HH{*}{G}{k}$.
The equivalence sends the class of $\alpha_K$ to the variety of the kernel of the induced map from the cohomology of $G$ to that of $A_K = K[t]/(t^p)$. Recall that $\operatorname{H}\nolimits^*(A,K)/\operatorname{Rad}\nolimits(\operatorname{H}\nolimits^*(A,K)) \cong K[T]$, a polynomial ring, and $\operatorname{H}\nolimits^*(G, K) \cong \operatorname{H}\nolimits^*(H,K) \otimes \operatorname{H}\nolimits^*(C,K)$. Moreover, since $C$ is a cyclic group of order $p$, $\operatorname{H}\nolimits^*(C,K)/\operatorname{Rad}\nolimits(\operatorname{H}\nolimits^*(C,K)) \cong K[\zeta]$ is a polynomial ring in the degree two element $\zeta$. In particular, two $\pi$-points are in the same class if the varieties of the corresponding kernels are the same. For the $\pi$-point $\alpha_K$, let $\varphi_\alpha: \operatorname{H}\nolimits^*(G,K) \to \operatorname{H}\nolimits^*(A,K) \cong K[T]$ be the map induced by the restriction. Let $\varphi:\operatorname{H}\nolimits^*(H,K) \to K[T]$ be the composition of the inflation map with $\varphi_\alpha$. That is, we restrict $\varphi_\alpha$ to the subring $\operatorname{H}\nolimits^*(H,K) \otimes 1$ of $\operatorname{H}\nolimits^*(G,K)$, which we identify with $\operatorname{H}\nolimits^*(H,K)$. Notice that if the image of $\varphi$, which is a subring of $K[T]$, is only the field $K$, then $\alpha_K$ is equivalent to $\beta$. That is, in such a case, if $\eta \otimes \zeta^n \in \operatorname{H}\nolimits^*(H,K) \otimes \operatorname{H}\nolimits^*(C,K)$ and the degree of $\eta$ is greater than zero, then $\varphi_\alpha(\eta \otimes \zeta^n) = \varphi_\alpha(\eta \otimes 1) \varphi_\alpha(1 \otimes \zeta^n) = 0$. Thus $\alpha_K$ and $\beta$ correspond to the same element of $\operatorname{Proj}\nolimits \HH{*}{G}{k}$. Hence, we may assume that the kernel of $\varphi$ is a nonzero prime ideal in $\operatorname{H}\nolimits^*(H,K)$, which is not the ideal of all elements of positive degree. Let $\gamma_K: K[t]/(t^p) \to KH \otimes 1 \subseteq KG$ be a $\pi$-point corresponding to $\varphi$. Let $KE = K[u,v]/(u^p, v^p)$ and let $\mu:KE \to KG$ be defined by $\mu(u) = \gamma_K(t)$ and $\mu(v) = \beta_K(t) = 1 \otimes Z$. Note that $\mu(u)$ and $\mu(v)$ commute so that the given conditions on $\mu$ define a homomorphism. Moreover, $\mu$ is a flat embedding since $\gamma_K$ is a flat embedding and $\beta_K(K[t]/(t^p)) = 1 \otimes KC$. Note especially that the kernel of $\varphi_\alpha$ is contained in the kernel of $\mu^*: \operatorname{H}\nolimits^*(G,K) \to \operatorname{H}\nolimits^*(E,K)$. Thus there is some $\pi$-point $\hat{\alpha}_K: K[t]/(t^p) \to KE$ such that the composition $\mu\hat{\alpha}_K$ is equivalent to $\alpha_K$. We consider the restriction $M = \mu^*({\mathcal{E}})$ of ${\mathcal{E}}$ to a $KE$-module. The projective resolution $K \otimes P_*$, restricted along $\gamma_K$, is a projective resolution of $K$. Hence, we have that $M \cong M(K \otimes P_*, \infty)$. Thus, from Lemma \ref{lem:resol1}, we know the variety of $M$ and know also that the third object in the triangle of $\theta:M \to K$ is $N = N(K \otimes P_*, \infty)$. The variety of $M$ consists only of the class of the $\pi$-point $\hat{\beta}: K[t]/(t^p) \to KE$ given by $t \mapsto v$. It follows that $\hat{\alpha}_K$ is not in the variety of $M$ and that $\alpha_K$ is not in the variety of ${\mathcal{E}}$. Since the restriction to $KE$ takes triangles to triangles, we have also that $\alpha_K$ is in the variety of ${\mathcal{F}}$. This proves the proposition.
\end{proof}
We can now establish the universal properties.
\begin{thm} \label{thm:uni-resol}
Assume the hypotheses and notation of this section. For any $kG$-module $X$, the triangle $X \otimes {\mathcal{S}}$ is the canonical triangle as in \ref{eq:cann} for $X$ relative to ${\mathcal{M}}$. In particular, $1_X\otimes \theta$ is universal with respect to maps from objects in ${\mathcal{M}}^{\oplus}$ to $X$ and $1_X \otimes \mu$ is universal with respect to maps from $X$ to ${\mathcal{M}}$-local objects.
\end{thm}
\begin{proof}
First note that ${\mathcal{E}} \otimes {\mathcal{F}}$ is a projective module, zero in the stable category, because the intersection of the varieties of the two modules is empty. Thus tensoring ${\mathcal{S}}$ with ${\mathcal{E}}$ or ${\mathcal{F}}$, we see that ${\mathcal{E}}$ and ${\mathcal{F}}$ are idempotent modules. Suppose next that $M$ is in ${\mathcal{M}}$. Then \[ \operatorname{\underline{Hom}}\nolimits_{kG}(M, X \otimes {\mathcal{F}}) = \operatorname{\underline{Hom}}\nolimits_{kG}(k, M^*\otimes X \otimes {\mathcal{F}}) = \{0\}, \] since $M$ is finitely generated and the dual $M^*$ of $M$ and ${\mathcal{F}}$ have disjoint varieties. Thus, $X \otimes {\mathcal{F}}$ is ${\mathcal{M}}$-local, and by Lemma 5.2 of \cite{R}, $X \otimes {\mathcal{F}}$ is ${\mathcal{M}}^\oplus$-local. We claim that the map from $X$ to $X \otimes {\mathcal{F}}$ is the universal map from $X$ to ${\mathcal{M}}$-local objects. The reason is that if $N$ is ${\mathcal{M}}$-local then $\operatorname{\underline{Hom}}\nolimits_{kG}(X\otimes {\mathcal{E}}, N) = \{0\}$ and so $\operatorname{\underline{Hom}}\nolimits_{kG}(X \otimes {\mathcal{F}}, N) \cong \operatorname{\underline{Hom}}\nolimits_{kG}(X, N)$, the isomorphism being induced by the map $X \to X \otimes {\mathcal{F}}$. So any map from $X$ to $N$ factors through $1_X \otimes \mu$. Likewise, we can see that the map $X \otimes {\mathcal{E}} \to X$ is universal with respect to maps from an object $Y$ in ${\mathcal{M}}^\oplus$ to $X$. That is, if $Y$ is in ${\mathcal{M}}^\oplus$, then because $X \otimes {\mathcal{F}}$ is ${\mathcal{M}}^\oplus$-local, the canonical triangle for $X$ yields an isomorphism $\operatorname{\underline{Hom}}\nolimits_{kG}(Y, X \otimes {\mathcal{E}}) \cong \operatorname{\underline{Hom}}\nolimits_{kG}(Y,X)$. Thus any map from $Y$ to $X$ factors through $1_X \otimes \theta$. This proves the theorem.
\end{proof}
\section{The endomorphism ring of the trivial module} \label{sec:endo}
In this section we assume the hypotheses and notation of the previous section, Section \ref{sec:cann}. It is well known that the endomorphism ring of the trivial module in the localized category is associated to the structure of the module ${\mathcal{F}}$. This is where we use our development of the structure of ${\mathcal{F}}$. Let $G = H \times C$ as before. Let ${\mathcal{M}}$ denote the thick tensor ideal in $\operatorname{{\bf stmod}(\text{$kG$})}\nolimits$ consisting of all $kG$-modules whose variety is the point in $V_G(k)$ which is the image of the restriction map $\operatorname{res}\nolimits_{G, C}^*: V_C(k) \to V_G(k)$. Let ${\mathcal{C}}$ be the Verdier localization of $\operatorname{{\bf stmod}(\text{$kG$})}\nolimits$ at ${\mathcal{M}}$. We are interested in the ring $\operatorname{\underline{Hom}}\nolimits_{kG}(k,k) = \operatorname{Hom}\nolimits_{{\mathcal{C}}}(k,k)$.
Given a morphism $k \stackrel{\sigma}{\rightarrow} M \stackrel{\tau}{\leftarrow} k$ with the third object of the triangle of $\sigma$ in ${\mathcal{M}}$, we get a diagram \[ \xymatrix{ & U \ar[d]^\nu \\ {\mathcal{F}} \ar[r] & k \ar[r]^\mu \ar[d]^\sigma & {\mathcal{F}} \\ & M } \] with $U$ in ${\mathcal{M}}$. Because ${\mathcal{F}}$ is ${\mathcal{M}}$-local the composition $\mu\nu$ is the zero map. Thus, there is a map $\varphi: M \to {\mathcal{F}}$ such that $\varphi\sigma = \mu$. The morphism $\sigma^{-1}\tau$ is equal to one of the form $\mu^{-1}\alpha$ for $\alpha = \varphi\tau$. Thus every endomorphism of $k$ factors through ${\mathcal{F}}$. Note that we need not worry that ${\mathcal{F}}$ is infinitely generated. In the above, we could have replaced ${\mathcal{F}}$ with the submodule whose restriction to $kH$ is $N(P_*,n)_{\downarrow H} \cong k \oplus P_0^{p-1} \oplus \dots \oplus P_{2n-1}$ for $n$ sufficiently large. By the same argument as above, this is the third object in the triangle of the map $M(P_*,n) \to k$. The point of using ${\mathcal{F}}$ is that we have a context to compute compositions of morphisms. So suppose we have two endomorphisms of $k$ in ${\mathcal{C}}$, $\mu^{-1}\alpha$ and $\mu^{-1}\beta$. For the purposes of computing the product we may assume that there exist $m$ and $n$ such that $\alpha(k) \subseteq P_m$ and $\beta(k) \subseteq P_n$. That is, otherwise $\alpha$ and $\beta$ can be written as sums of such maps. The product is the map $\mu^{-1}\alpha^\prime \beta$ in the diagram \[ \xymatrix{ k \ar[dr]^\mu && k \ar[dl]^\alpha \ar[dr]^\mu && k \ar[dl]^\beta \\ & {\mathcal{F}} \ar[dr]^{Id} && {\mathcal{F}} \ar[dl]^{\alpha^\prime} \\ && {\mathcal{F}} } \] The construction of $\alpha^\prime$ is through a chain map on complexes \[ \xymatrix{ \dots \ar[r] & P_2 \ar[r] \ar[d] & P_1 \ar[r] \ar[d] & P_0 \ar[r]^{\varepsilon} \ar[d] & k \ar[r] \ar[d]^\alpha & 0 \\ \dots \ar[r] & P_{m+2} \ar[r]^\partial & P_{m+1} \ar[r]^\partial & P_m \ar[r]^{\partial} & P_{m-1} \ar[r] & \dots \ar[r] & P_0 \ar[r] & k \ar[r] & 0 } \] obtained by lifting the map $\alpha$. Then the chain map induces the map $\alpha^\prime: {\mathcal{F}} \to {\mathcal{F}}$ as in Lemma \ref{lem:resol1}. It is clear that the diagram commutes, {\it i. e.} $\alpha^\prime \mu = \alpha$. All of this sets up the proof of our main theorem.
\begin{thm} \label{thm:trivendo}
In the localized category ${\mathcal{C}}$ the endomorphism ring of $k$ is isomorphic to the negative Tate cohomology ring of $H$: \[ \operatorname{\underline{Hom}}\nolimits_{kG}(k,k) \cong \widehat{\operatorname{H}\nolimits}^{\leq 0}(H, k). \]
\end{thm}
\begin{proof}
Because the projective resolution $P_*$ is minimal, we know that $\operatorname{Hom}\nolimits_{kH}(k, P_n) \cong \widehat{\operatorname{H}\nolimits}^{-n-1}(H,k).$ The Tate cohomology group $\widehat{\operatorname{H}\nolimits}^{-n-1}(H,k)$ is also isomorphic to the group of homotopy classes of chain maps of degree $-n-1$ from the augmented complex $(P_*, \varepsilon)$ to itself as in the diagram. Indeed that diagram defines the correspondence. The product of two elements is defined (see \cite{BC2}) as the composition of the chain maps. Thus the element represented by $\alpha^\prime \beta$ is the product of the elements represented by $\alpha$ and $\beta$ in the Tate cohomology.
\end{proof}
There are a few cases in which we can be very specific about the structure of the endomorphism ring of the trivial module in these localized categories.
We end the section with two examples and a remark on the role of Hopf structures. The reader should recall that a group has periodic cohomology (in characteristic p) if and only if its Sylow $p$-subgroup is either cyclic or quaternion (with $p=2$).
\begin{prop} \label{prop:periodic}
Suppose that $H$ has periodic cohomology, meaning that the cohomology ring $\operatorname{H}\nolimits^*(H,k)$ has Krull dimension one. Then the endomorphism ring of $k$ is the degree zero part of a localization of the cohomology ring of $G$. Specifically, \[ \operatorname{\underline{Hom}}\nolimits_{kG}(k,k) \cong \operatorname{Hom}\nolimits_{{\mathcal{C}}}(k,k) \cong \sum_{n \geq 0} \operatorname{H}\nolimits^{nm}(G,k)\zeta^{-n} \] where $\zeta$ is a regular element of degree $m$ in $\HH{*}{H}{k}$.
\end{prop}
\begin{proof}
Because $G = H \times C_p$ is a direct product, $\HH{*}{G}{k} \cong \HH{*}{H}{k} \otimes \operatorname{H}\nolimits^*(C_p,k)$. If $\zeta \in \operatorname{H}\nolimits^m(H,k)$ is a regular element, the variety $V$ is $V = V_G(\zeta)$, where we identify $\zeta$ with $\zeta \otimes 1$. So the proposition follows from the discussion in Section \ref{sec:idmod}. We note that this result is compatible with Theorem \ref{thm:trivendo} since $\sum_{n \geq 0} \widehat{\operatorname{H}\nolimits}^{-n}(H,k) \cong \sum_{n \geq 0}\widehat{\operatorname{H}\nolimits}^{-n}(H,k)\gamma^n$ where $\gamma$ is a degree two generator for $\operatorname{H}\nolimits^*(C_p,k)$.
\end{proof}
On the other hand, suppose that $H$ has $p$-rank at least $2.$ Then we get a very different result.
\begin{prop} \label{prop:eleab}
Suppose that $H$ is an elementary abelian $p$-group of order at least $p^2$. Then $\operatorname{Hom}\nolimits_{{\mathcal{C}}}(k,k)$ is a local $k$-algebra whose radical is infinitely generated and has square zero.
\end{prop}
\begin{proof}
By \cite{BC2}, the product of any two elements in negative cohomology for $H$ is zero. So by Theorem \ref{thm:trivendo}, the radical of $\operatorname{Hom}\nolimits_{{\mathcal{C}}}(k,k)$, which consists of all elements in negative degrees, has square zero. At the same time, $\widehat{\operatorname{H}\nolimits}^{-n}(H,k)$ is Tate dual to $\operatorname{H}\nolimits^{n-1}(H,k)$ and hence its dimension grows as $n$ becomes large.
\end{proof}
\begin{rem} \label{rem:coalg}
The assumption throughout this article has been that $G = H \times C$ where $H$ and $C$ are subgroup schemes, {\it i. e.} that $kH$ and $kC$ are cocommutative sub-Hopf algebras of $kG$. Even though the coalgebra structure played a role in the proofs, it has no role in the statement of the main theorem. Neither the endomorphism ring $\operatorname{End}\nolimits_{{\mathcal{C}}}(k)$ nor the structure of the negative cohomology ring depends even on the existence of a coalgebra structure. Hence, if we have two algebras $kH$ and $kC$ on which we can create coalgebra maps of the right sort, then we come to the same result. For example, suppose that $G$ is an elementary abelian $p$-group and that $\alpha:k[t]/(t^p) \to kG$ is a $\pi$-point defined over $k$. Let $kC$ be the image of $\alpha$. Then $kC$ has a complement $kH$ such that $kH$ is the group algebra of an elementary abelian $p$-group of rank one less than that of $G$ and $kG \cong kH \otimes kC$. Let ${\mathcal{M}}$ denote the thick tensor ideal of modules whose support varieties are either empty or consist only of the class of $\alpha$, and let ${\mathcal{C}}$ be the localization of $\operatorname{{\bf stmod}(\text{$kG$})}\nolimits$ at ${\mathcal{M}}$.
Then the main theorem applies. \end{rem}
\section{Introduction}
Pre- and post-Lie algebras are two classes of non-associative algebras which have undergone extensive investigation because of the important role they play both in pure and applied mathematics. Pre-Lie algebras, also known in the literature under the name of left-symmetric and Vinberg algebras, were introduced in the mid-sixties, almost simultaneously, by Gerstenhaber, see \cite{Gersten}, and Vinberg, see \cite{Vinberg}, the first working on the theory of deformations of associative algebras and the second on the theory of convex cones. Since then, they have appeared unexpectedly in almost every area of modern mathematics, from differential geometry, \cite{medina, hessian, bandiera} to combinatorics \cite{BS, CL, CP}, from mathematical physics, see \cite{EFP}, to numerical analysis \cite{Brouder, HLW}, see \cite{burderev,Manchon,EFP1} for comprehensive reviews. Although post-Lie algebras were introduced much more recently by Vallette, see \cite{Val}, and independently by Lundervold and Munthe-Kaas \cite{MKL}, they have since been deeply studied, both from the point of view of pure mathematics, see for example \cite{bai-guo-ni, bdv, EFMM, MQS}, and of applied mathematics \cite{EFLMMK,MKSV,CEFMK}, see also \cite{KI,F-MK}. Recall that a post-Lie algebra is a pair $({\mathfrak h},\triangleright)$ consisting of a Lie algebra ${\mathfrak h}$, whose Lie bracket will be denoted by $[-,-]$, and a bilinear map $\triangleright:{\mathfrak h}\otimes{\mathfrak h}\rightarrow{\mathfrak h}$ called the post-Lie product, satisfying the following two properties:
\begin{enumPL}
\item\label{PL Property 1} $x\triangleright [y,z]=[x\triangleright y,z]+[y,x\triangleright z]$, and
\item\label{PL Property 2} $[x,y]\triangleright z={\rm a}_{\triangleright}(x,y,z)-{\rm a}_{\triangleright}(y,x,z)$, for all $x,y$ and $z$ in ${\mathfrak h}$.
\end{enumPL}
In the RHS of \ref{PL Property 2}
\begin{equation}
{\rm a}_\triangleright(x,y,z)=x\triangleright(y\triangleright z)-(x\triangleright y)\triangleright z,\,\forall x,y,z\in\mathfrak h
\label{eq:ass}
\end{equation}
denotes the \emph{associator} defined by the bilinear product $\triangleright$. A post-Lie algebra whose Lie bracket is trivial is a pre-Lie algebra. On the other hand, a post-Lie algebra $({\mathfrak h},\triangleright)$ gives rise to a Lie algebra $\overline{{\mathfrak h}}$ with the same underlying vector space as ${\mathfrak h}$ and whose Lie bracket $\llbracket-,-\rrbracket:{\mathfrak h}\otimes {\mathfrak h}\rightarrow {\mathfrak h}$ is defined by
\begin{equation}
\llbracket x,y\rrbracket=x\triangleright y-y\triangleright x+[x,y],\,\forall x,y\in{\mathfrak h}.
\label{eq:classpostLie}
\end{equation}
Moreover, there is an element $\upsilon \in\operatorname{Hom}_{\text{Lie}}(\overline{{\mathfrak h}},\operatorname{Der}({\mathfrak h}))$, defined by
\begin{equation}
\upsilon_x(y)=x\triangleright y,\,\forall x,y\in\overline{{\mathfrak h}}.
\end{equation}
The enveloping algebra $\mathcal U({\mathfrak h})$ of a post-Lie algebra was analyzed in depth in \cite{EFLMMK}, whose authors, extending the results of \cite{GO1,GO2}, showed that a suitable extension of $\triangleright$ together with the coalgebra structure of $\mathcal U({\mathfrak h})$ allows one to define a \emph{new} associative product $\ast\colon\thinspace \mathcal U({\mathfrak h})\otimes\mathcal U({\mathfrak h})\rightarrow\mathcal U({\mathfrak h})$, called the \emph{Grossman-Larson} product, compatible with the initial coalgebra structure and antipode.
In this way it was proven that $\mathcal U({\mathfrak h})$ can be endowed with a new Hopf algebra structure $\mathcal U_\ast({\mathfrak h})$, which turns out to be isomorphic to $\mathcal U(\overline{{\mathfrak h}})$. After a suitable completion of the Hopf algebras involved, the above-mentioned isomorphism defines an isomorphism between the (completed) Lie algebras $\overline{{\mathfrak h}}$ and ${\mathfrak h}$, whose inverse $\chi \colon\thinspace {\mathfrak h}\rightarrow\overline{{\mathfrak h}}$, called the \emph{post-Lie Magnus expansion}, abbreviated as pLMe hereafter, is one of the main concerns of the present note. The pLMe has two predecessors, the pre-Lie and the classical Magnus expansions, see \cite{BCOR}. The \emph{pre}-Lie Magnus expansion $\chi$ appeared at the beginning of the eighties in the work of Agrachev and Gamkrelidze, see \cite{AG}. However, it was dubbed as such only in \cite{Kur-Man}, where the classical Magnus expansion was extensively explored in the context of pre-Lie and dendriform algebras. Finally, in \cite{CP} a formula was presented expressing $\chi$ in terms of the so-called Grossman-Larson product, which reads \[ \chi(x)=\log_\ast(\exp(x)), \] see also \cite{BS}. On the other hand, the pLMe was introduced in \cite{EFLMMK} in connection with a particular class of iso-spectral flow equations. There it was shown that for every $x\in{\mathfrak h}$, $\chi_x(t):=\chi(tx)\in{\mathfrak h}[[t]]$ satisfies the following non-linear ODE \[ \dot{\chi}_x(t)=(d\exp_{\ast})^{-1}_{-\chi_x(t)}\big(\exp_{\ast}(-\chi_x (t))\triangleright x\big), \] and that the \emph{non-linear post-Lie differential equation} \[ {\dot x}(t)=-x(t)\triangleright x(t), \] for $x=x(t)\in{\mathfrak h}[[t]]$, with initial condition $x(0)=x_0\in{\mathfrak h}$, admits the solution \[ x(t)=\exp_\ast (-\chi_{x_0}(t))\triangleright x_0. \] In \cite{CEFO} the relevance of the pLMe in the theory of Lie group integrators was underlined, and in \cite{MQS} it was proven that on a post-Lie algebra, in analogy to what happens on every pre-Lie algebra, the pLMe provides an isomorphism between the group of \emph{formal flows} and the BCH-group defined on $\overline{{\mathfrak h}}$, generalizing the analogous well-known result proven in \cite{AG}, see also \cite{DSV,bandiera}. \\ The aim of this letter is twofold. In the first place, starting from the integration result presented in \cite{MQS}, we give a more Lie-theoretic interpretation of the pLMe, in terms of the so-called \emph{crossed morphisms} of Lie groups and Lie algebras, see for example \cite{Lue, B-N, Higert-Neeb, PSTZ}. More precisely, we first show that a post-Lie algebra structure on a Lie algebra ${\mathfrak h}$ is equivalent to the datum $(\operatorname{id},\upsilon)$, where $\operatorname{id} \colon\thinspace \overline{{\mathfrak h}}\rightarrow{\mathfrak h}$, the identity map, is a crossed morphism relative to $\upsilon\in\operatorname{Hom}_{\text{Lie}}(\overline{{\mathfrak h}},\operatorname{Der}({\mathfrak h}))$. Then we argue that the pLMe is the (inverse of a) crossed morphism between the corresponding (local) Lie groups $\overline{\mathcal H}$ and $\mathcal H$, obtained by \emph{integrating} $\operatorname{id}$. We would like to stress that while the local existence of the pLMe, at the level of the Lie groups $H$ and $\overline H$, is guaranteed by general Lie theory, its global existence is obstructed, see \cite{NeebLoc}, and its explicit expression seems, from this viewpoint, really difficult to obtain.
On the other hand, working formally at the level of the completed enveloping algebras of $\overline{{\mathfrak h}}$ and ${\mathfrak h}$, one can first prove the existence of the pLMe and then, using a \emph{formal} integration process, show that the inverse of the pLMe is a crossed morphism between the corresponding local Lie groups. This line of thought opens the door to a categorical interpretation of various approaches to post-Lie and pre-Lie algebras which one finds in the literature, see for example \cite{bai-guo-ni,bdv,MKL} and references therein. On the other hand, it makes clear the universal nature of the pLMe, calling for a (more systematic) method to compute the coefficients of this expansion. This goal is achieved in the second part of the paper, where such a method, based on the so-called \emph{tubings}, see \cite{CarrDevadoss}, is presented. \\\\ \emph{Relations with other works.} Post-Lie algebras have recently appeared as central objects in the study of the so-called \emph{$\mathcal{O}$-operators}, first introduced in \cite{Kup}, which are particular extensions of the classical $r$-matrices and play an important role in the theory of generalized Lax pair representations, introduced in \cite{Bordemann}. The notion of $\mathcal O$-operator was further extended in \cite{bai-guo-ni}, where the concepts of $\mathcal O$-operator of \emph{weight $\lambda$} and, respectively, of \emph{extended} $\mathcal O$-operator were introduced. It is in this framework that the relation between (generalized) Lax representations and post-Lie algebras crystallized. In particular, in \cite{bai-guo-ni} it was shown that the post-Lie algebra structures on a Lie algebra ${\mathfrak g}$ are in one-to-one correspondence with the pairs $((\upsilon,{\mathfrak h}),\mathcal O)$ where ${\mathfrak h}$ is a Lie algebra, $\upsilon\in\operatorname{Hom}_{\text{Lie}}({\mathfrak g},\operatorname{Der}({\mathfrak h}))$ and $\mathcal O:{\mathfrak h}\rightarrow{\mathfrak g}$ is an \emph{invertible} $\mathcal O$-operator of weight $1$, see Corollary 5.5 in \cite{bai-guo-ni}. This result should be compared with Proposition \ref{pro:equiv} of the present work, see also \ref{Re1} in Remark \ref{rem:relationwithothers}. Another instance where the notion of a post-Lie algebra arises naturally is the theory of the so-called simply transitive \emph{NIL-affine actions} of nilpotent Lie groups, see \cite{bdv}. In this reference it was shown that given $(G,N)$, a pair of connected and simply-connected nilpotent Lie groups, there exists a simply transitive NIL-affine action of $G$ on $N$ if and only if there exists a Lie algebra ${\mathfrak g}'\sim {\mathfrak g}$ such that the pair $({\mathfrak g}',\mathfrak{n})$ carries a structure of a post-Lie algebra, see Theorem 2.5 in \cite{bdv}. The proof of this result is based on the observation that a pair of Lie algebras $({\mathfrak g},\mathfrak n)$ carries a structure of a post-Lie algebra if and only if there is a faithful morphism of Lie algebras $\varrho:{\mathfrak g}\rightarrow\mathfrak n\ltimes\operatorname{Der}(\mathfrak n)$ of the form $\varrho(x)=(x,L(x))$ for all $x\in{\mathfrak g}$, see Proposition 2.11 in \cite{bdv} for the precise statement. This result should be compared with \ref{Re2} in Remark \ref{rem:relationwithothers} of the present work. \\ \emph{Plan of the present work}.
In Section \ref{sec: CM} the notion of crossed morphism for Lie groups and Lie algebras is recalled, and the relation between \emph{invertible} crossed morphisms of Lie algebras and post-Lie algebras is explained. This section closes with a brief discussion on \emph{local} Lie groups. \\ In Section \ref{sec: Univ. env. alg. and pLMe} it is shown that the datum of a crossed morphism between two Lie algebras yields a morphism between the associated universal enveloping algebras, which, when the crossed morphism is invertible, provides an isomorphism giving rise to the \emph{Grossman-Larson} product. After introducing a suitable \emph{integration functor}, the last part of this section is devoted to the analysis of the pLMe from the categorical viewpoint sketched above. \\ In Section \ref{sec: Computing pLMe} two combinatorial interpretations of the coefficients of the pLMe are given. Both interpretations are based on a notion of nested tubings. The first method is based on \emph{vertical} nested tubings and allows one to compute the coefficients associated to any forest recursively. The second method is based on \emph{horizontal} nested tubings and allows one to express these coefficients in closed form. This section is divided into six parts. The first four parts \ref{sec: Operad of PRT}--\ref{sec: univ env alg} are essentially a reminder; they serve to set up conventions and to introduce the combinatorics needed to handle the pLMe. More in detail, the first part sets up conventions and notations on planar trees and forests and introduces specific graftings of them. The second part is a brief reminder on the combinatorial operad $\ca{PSB}$, a model of the operad $\ca{P}ost\ca{L}ie$, which serves as a combinatorial basis to handle operations on the free post-Lie algebra (on one generator) and on its universal enveloping algebra. These last two algebras are the subject of the third and fourth parts. The fifth part is dedicated to the notions of vertical and horizontal nested tubings, which are the last essential ingredient to compute the pLMe. Finally, the last part is devoted to the computation of this expansion, first in terms of vertical, then in terms of horizontal nested tubings. \subsection{Conventions}\label{sec: convention operads} Throughout the paper $\mathbb K$ will denote a field of characteristic zero. The tensor product will be taken over $\mathbb K$. In particular, the tensor product of two $\mathbb K$--vector spaces $V$ and $W$ will be denoted by $V\otimes W$. All Lie groups considered will be connected and simply-connected. The category of post-Lie algebras and their morphisms will be denoted by $\cat{PostLie}$, while the category of pre-Lie algebras and their morphisms will be denoted by $\cat{PreLie}$. \section{Crossed morphisms}\label{sec: CM} In this section we will recall the concepts of post-Lie algebra and of crossed morphism, both for Lie algebras and for Lie groups, and we will comment on how these relate to each other. \subsection{Crossed morphisms of Lie algebras} \begin{defn} Let ${\mathfrak g}$ and ${\mathfrak h}$ be two Lie algebras, and let $\upsilon:{\mathfrak g}\rightarrow\operatorname{Der}_{\text{Lie}}({\mathfrak h})$ be a morphism of Lie algebras.
A \emph{crossed morphism relative to $\upsilon$} is a map $\phi\in\operatorname{Hom}_{\mathbb K}(\mathfrak g,\mathfrak h)$ that satisfies \begin{equation} \phi([x,y]_{\mathfrak g})=\upsilon_x(\phi(y))-\upsilon_y(\phi(x))+[\phi(x),\phi(y)]_{\mathfrak h},\,\forall x,y\in\mathfrak g.\label{eq:crosshom} \end{equation} The set of crossed morphisms from $\mathfrak g$ to $\mathfrak h$ relative to $\upsilon$ is denoted by $\operatorname{Cross}^\upsilon(\mathfrak g,\mathfrak h)$. The subset of \emph{invertible} crossed morphisms is denoted by $\operatorname{Cross}_{\text{inv}}^{\upsilon}({\mathfrak g},{\mathfrak h})$. \end{defn} \begin{example} If $\mathfrak h$ is abelian, \emph{i.e.} if $[-,-]_{\mathfrak h}\equiv 0$, then $\operatorname{Der}_{\text{Lie}}(\mathfrak h)=\operatorname{End}_{\mathbb K}(\mathfrak h)$. In this case $\phi$ is a crossed morphism of $\mathfrak g$ in $\mathfrak h$ relative to $\upsilon\in\operatorname{Hom}_{\text{Lie}}(\mathfrak g,\operatorname{End}_{\mathbb K}(\mathfrak h))$ if and only if \[ \phi([x,y]_{\mathfrak g})=\upsilon_x(\phi(y))-\upsilon_y(\phi(x)),\,\forall x,y\in\mathfrak g. \] \end{example} \begin{example} If $f\in\operatorname{Hom}_{\text{Lie}}(\mathfrak g,\mathfrak h)$ then $\upsilon_f:\mathfrak g\rightarrow\operatorname{Der}_{\text{Lie}}(\mathfrak h)$ defined by \[ \upsilon_{f}(x)(a)=[f(x),a]_{\mathfrak h},\,\forall a\in\mathfrak h, \] is a morphism of Lie algebras and $\phi\in\operatorname{Hom}_{\mathbb K}(\mathfrak g,\mathfrak h)$ belongs to $\operatorname{Cross}^{\upsilon_f}(\mathfrak g,\mathfrak h)$ if and only if $f+\phi\in\operatorname{Hom}_{\text{Lie}}(\mathfrak g,\mathfrak h)$. \end{example} \begin{example} If ${\mathfrak g}={\mathfrak h}$ and $\phi$ is the identity, then ${\mathfrak h}$ has another Lie algebra structure, given by \begin{equation}\label{eq: h bar } \llbracket x,y \rrbracket := \upsilon_{x}(y) - \upsilon_y (x) + [x,y]_{{\mathfrak h}} \text{ for all } x,y \in {\mathfrak h}. \end{equation} The resulting Lie algebra is denoted by $\overline{{\mathfrak h}} = ({\mathfrak h},\llbracket-,-\rrbracket)$. \end{example} \begin{defn} The category $\cat{CM}$ is as follows. The objects are the tuples $({\mathfrak g},{\mathfrak h},\upsilon, \phi)$ of two Lie algebras ${\mathfrak g}$ and ${\mathfrak h}$ and $(\upsilon,\phi)\in \operatorname{Hom}_{\text{Lie}}({\mathfrak g},\operatorname{Der}({\mathfrak h})) \times \operatorname{Cross}^{\upsilon}({\mathfrak g},{\mathfrak h})$. The morphisms between $({\mathfrak g},{\mathfrak h},\upsilon, \phi)$ and $({\mathfrak g}',{\mathfrak h}',\upsilon', \phi')$ are pairs $(f,g)\in\operatorname{Hom}_{\text{Lie}}({\mathfrak g},{\mathfrak g}')\times \operatorname{Hom}_{\text{Lie}}({\mathfrak h},{\mathfrak h}')$ such that \begin{enumM} \item\label{M1}$g\circ\phi=\phi'\circ f$ and \item\label{M2}$g(\upsilon_{x}(a))=\upsilon'_{f(x)}(g(a)) \text{ for all } x\in\mathfrak g \text{ and } a\in\mathfrak h$. \end{enumM} The subcategory $\cat{CM_{inv}}\subset \cat{CM}$ is the full subcategory whose objects are those tuples $({\mathfrak g},{\mathfrak h},\upsilon,\phi)$ such that $\phi$ is an invertible crossed morphism, \emph{i.e.} $\phi\in \operatorname{Cross}^{\upsilon}_{\text{inv}}({\mathfrak g},{\mathfrak h})$. The subcategory $\iota\colon\thinspace \cat{CM_{id}}\subset \cat{CM_{inv}}$ is the full subcategory whose objects are the tuples of the form $(\overline{{\mathfrak h}},{\mathfrak h},\upsilon, id)$.
\end{defn} Let $R\colon\thinspace \cat{CM_{inv}} \to \cat{CM_{id}}$ be the functor given by \begin{equation} R({\mathfrak g},{\mathfrak h},\upsilon, \phi)=(\overline{{\mathfrak h}},{\mathfrak h},\upsilon\circ \phi^{-1}, id)\quad\text{and}\quad R(f,g)=(g,g).\label{eq:functR} \end{equation} Note that in the first tuple, the Lie algebra $\overline{{\mathfrak h}}$ is determined by $({\mathfrak h},\upsilon\circ \phi^{-1})$, so that its Lie bracket is given by $\llbracket x,y\rrbracket=\upsilon_{\phi^{-1}(x)}(y)-\upsilon_{\phi^{-1}(y)}(x)+[x,y]_{{\mathfrak h}}$ for all $x,y\in{\mathfrak h}$, according to \eqref{eq: h bar }. \begin{prop}\label{pro:equivcat} The two categories $\cat{CM_{id}}$ and $\cat{CM_{inv}}$ are adjoint equivalent. \end{prop} \begin{proof} The inclusion functor is fully faithful and essentially surjective, which proves the equivalence. It remains to show that the functor $R$ is a right adjoint to $\iota$. To do this it is enough to check that the unit $\eta\colon\thinspace id \to R\circ\iota$ and counit $\epsilon\colon\thinspace \iota \circ R \to id$ transformations satisfy the triangle relations: $\iota \xrightarrow{\iota \eta} \iota R \iota \xrightarrow{\epsilon \iota} \iota$ and $R \xrightarrow{\eta R} R\iota R \xrightarrow{R \epsilon} R$ are identities. This is a straightforward verification. \end{proof} To a tuple $(\overline{{\mathfrak h}},{\mathfrak h},\upsilon,id) \in \cat{CM_{id}}$ one may associate the post-Lie algebra $({\mathfrak h},\triangleright)$ where \begin{equation}\label{eq: postlie from hh,v ,id} x\triangleright y := \upsilon_{x}(y) \text{ for all } x,y\in {\mathfrak h}. \end{equation} Indeed, \ref{PL Property 1} is clear since $\upsilon_x$ is a derivation of ${\mathfrak h}$, and \ref{PL Property 2} results from the fact that $\upsilon\colon\thinspace \overline{{\mathfrak h}}\to \operatorname{Der}({\mathfrak h})$ is a Lie morphism: for all $x,y$ and $z$ in ${\mathfrak h}$, one has \begin{multline*} [x,y]\triangleright z =\upsilon_{[x,y]}(z) =\upsilon_{\llbracket x,y \rrbracket - \upsilon_{x}(y) + \upsilon_{y}(x)}(z)\\ =\upsilon_{x}(\upsilon_{y}(z))-\upsilon_{y}(\upsilon_{x}(z))-\upsilon_{\upsilon_{x}(y)}(z)+\upsilon_{\upsilon_{y}(x)}(z), \end{multline*} which is precisely ${\rm a}_{\triangleright}(x,y,z)-{\rm a}_{\triangleright}(y,x,z)$. The following is straightforward. \begin{prop}\label{pro:equiv} The two categories $\cat{CM_{id}}$ and $\cat{PostLie}$ are isomorphic. \end{prop} \begin{rem}\label{ex:preLie} The tuples $(\mathfrak g,\mathfrak h,\upsilon,\phi)$ as in $\cat{CM_{inv}}$ where ${\mathfrak h}$ is an abelian Lie algebra form a full subcategory of $\cat{CM_{inv}}$, denoted, hereafter, by $\cat{CM_{pl}}$. The full subcategory of $\cat{CM_{pl}}$ whose objects are the tuples $(\overline{{\mathfrak h}},{\mathfrak h},\upsilon,\operatorname{id})$, denoted from now on by $\cat{CM_{pl,id}}$, which is adjoint equivalent to $\cat{CM_{pl}}$, is isomorphic to $\cat{PreLie}$, recovering the result of \cite{bai}, see also \cite{baus} and references therein. To be more explicit, it is worth noting that if $(\mathfrak g,\mathfrak h,\upsilon,\phi)$ is an object in $\cat{CM_{pl}}$, then \[ \phi([x,y]_{\mathfrak g})=\upsilon_x(\phi(y))-\upsilon_y(\phi(x)),\,\forall x,y\in{\mathfrak g}, \] i.e. $\phi$ is a \emph{bijective} $1$-cocycle (of the Chevalley-Eilenberg cohomology) of ${\mathfrak g}$ with values in ${\mathfrak h}$.
\end{rem} \begin{example}[\cite{B-N},\cite{MKL}] For a given Lie group $K$ whose Lie algebra is $\mathfrak k$, let ${\mathfrak g}=\mathfrak X(K)$ with its standard Lie bracket and ${\mathfrak h}=C^\infty(K,\mathfrak k)$ with the Lie bracket defined by $\llceil f,g\rrceil(k)=[f(k),g(k)]_\mathfrak k$, for all $f,g\in{\mathfrak h}$ and $k\in K$. Then the map $\upsilon$ defined by \begin{equation} \upsilon_X(f)(k):=(X_kf),\,\forall X\in{\mathfrak g},\;f\in{\mathfrak h},\label{eq:up} \end{equation} is a morphism of Lie algebras from ${\mathfrak g}$ to $\operatorname{Der}({\mathfrak h})$. Furthermore, recall that $\theta\in\Omega^1(K,\mathfrak k)$, defined via the left-translations $L_k$ by $\theta_k(v)=(L_{k^{-1}})_{\ast,k}(v)$ for all $k\in K$ and $v\in T_kK$, defines a parallelization of $TK{\simeq}K\times\mathfrak k$ via $v\stackrel{\theta}{\rightsquigarrow}(k,\theta_k(v))$, for all $k\in K$ and $v\in T_kK$. Composing this map with the projection $K\times\mathfrak k\rightarrow\mathfrak k$, one obtains the map $\phi\colon\thinspace{\mathfrak g}\rightarrow{\mathfrak h}$ given by \begin{equation} \phi(X)=i_X\theta,\,\forall X\in{\mathfrak g}.\label{eq:phi} \end{equation} Computing $i_{[X,Y]}\theta=\mathcal L_X(i_Y\theta)-i_Y(\mathcal L_X\theta)$, where $\mathcal L_X$ denotes the Lie derivative in the direction $X$, and recalling that $\theta$ satisfies the Maurer-Cartan equation, i.e. $d\theta+\frac{1}{2}[\theta,\theta]=0$, one obtains \[ i_{[X,Y]}\theta=i_X(di_Y\theta)-i_Y(di_X\theta)+\llceil i_X\theta,i_Y\theta\rrceil,\,\forall X,Y\in{\mathfrak g}, \] i.e. $\phi\in\operatorname{Cross}^\upsilon_{\text{inv}}({\mathfrak g},{\mathfrak h})$. The map $\phi$ is invertible, $C^\infty(K)$-linear, and such that $\phi(X_x)=x$ for all $x\in\mathfrak k$, where $X_x$ is the left-invariant vector field defined by the element $x\in\mathfrak k$. Applying the functor $R$ defined in \eqref{eq:functR}, one concludes that \begin{equation} f\triangleright g:=\phi^{-1}(f)g,\,\forall f,g\in{\mathfrak h},\label{eq:Kpl} \end{equation} makes $({\mathfrak h}, \llceil-,-\rrceil, \triangleright)$ into a post-Lie algebra. Moreover, one obtains the Lie algebra $\overline{\mathfrak h}$, whose underlying vector space is $C^\infty(K,\mathfrak k)$ and whose Lie bracket is \[ \llfloor f,g\rrfloor=\phi^{-1}(f)g-\phi^{-1}(g)f+\llceil f,g\rrceil,\,\forall f,g\in C^\infty(K,\mathfrak k). \] Pulling back \eqref{eq:Kpl} to ${\mathfrak g}$, one obtains $\blacktriangleright:{\mathfrak g}\otimes{\mathfrak g}\rightarrow{\mathfrak g}$, defined by \begin{equation} X\blacktriangleright Y=\phi^{-1}(\phi(X)\triangleright\phi(Y)),\,\forall X,Y\in{\mathfrak g}, \end{equation} which is a $C^\infty(K)$-linear product on $\mathfrak X(K)$ with respect to the first entry, such that \[ X\blacktriangleright(\xi Y)=X(\xi)Y+\xi\, X\blacktriangleright Y,\,\forall\xi\in C^\infty(K),\,X,Y\in\mathfrak X(K), \] which, together with \eqref{eq:Kpl}, implies that $X\blacktriangleright Y=0$ for all $X\in\mathfrak X(K)$ and all $Y$ left-invariant. In other words, $\blacktriangleright$ defines a flat linear connection on $TK$, whose flat sections are the left-invariant vector fields, and whose torsion is easily shown to be parallel since $T(X_x,X_y)=-X_{[x,y]_\mathfrak k}$, for all $x,y\in\mathfrak k$. \end{example} \begin{rem}\label{rem:relationwithothers} A couple of remarks are now in order.
\begin{enumerate} \item\label{Re1} Keeping the same notations introduced above, $r\in\operatorname{Hom}_{\mathbb{K}}({\mathfrak h},{\mathfrak g})$ is called an $\mathcal O$-operator of weight $\lambda\in\mathbb R$ if \begin{equation} [r(x),r(y)]_{{\mathfrak g}}=r(\upsilon_{r(x)}y-\upsilon_{r(y)}x+\lambda[x,y]_{\mathfrak h}),\,\forall x,y\in{\mathfrak h}.\label{eq:Ope} \end{equation} The tuples $({\mathfrak g},{\mathfrak h},\upsilon,r)$ where $r$ satisfies \eqref{eq:Ope} form a category $\cat{CM}_{\mathcal O,\lambda}$, whose morphisms between $({\mathfrak g},{\mathfrak h},\upsilon,r)$ and $({\mathfrak g}',{\mathfrak h}',\upsilon',r')$ are pairs $(f,g)\in\operatorname{Hom}_{\text{Lie}}({\mathfrak g},{\mathfrak g}')\times\operatorname{Hom}_{\text{Lie}}({\mathfrak h},{\mathfrak h}')$ satisfying \ref{M2} and the analogue of \ref{M1}, i.e. $f\circ r=r'\circ g$. The full subcategory of $\cat{CM}_{\mathcal O,\lambda=1}$ whose objects are the tuples $({\mathfrak g},{\mathfrak h},\upsilon,r)$ with $r$ invertible is isomorphic to $\cat{CM_{inv}}$ and hence, because of Proposition \ref{pro:equiv}, it is adjoint equivalent to $\cat{PostLie}$. In this way we recover the description of post-Lie algebras given in \cite{bai-guo-ni}. \item\label{Re2} Let $\cat{CM}_b$ be the category whose objects are the tuples $({\mathfrak g},{\mathfrak h},\varrho)$, where $\varrho\in\operatorname{Hom}_{\text{Lie}}({\mathfrak g},{\mathfrak h}\rtimes\operatorname{Der}({\mathfrak h}))$ and the Lie bracket in ${\mathfrak h}\rtimes\operatorname{Der}({\mathfrak h})$ is defined by the formula \[ \{(h_1,d_1),(h_2,d_2)\}=([h_1,h_2]_{{\mathfrak h}}+d_1(h_2)-d_2(h_1),[d_1,d_2]). \] Note that composing $\varrho:{\mathfrak g}\rightarrow{\mathfrak h}\rtimes\operatorname{Der}({\mathfrak h})$ with the canonical projections $\pi_2:{\mathfrak h}\rtimes\operatorname{Der}({\mathfrak h})\rightarrow\operatorname{Der}({\mathfrak h})$ and $\pi_1:{\mathfrak h}\rtimes\operatorname{Der}({\mathfrak h})\rightarrow{\mathfrak h}$, one gets $\upsilon_\varrho\in\operatorname{Hom}_{\text{Lie}}({\mathfrak g},\operatorname{Der}({\mathfrak h}))$ and, respectively, $\phi_\varrho\in\operatorname{Cross}^{\upsilon_\varrho}({\mathfrak g},{\mathfrak h})$. A morphism between two objects $({\mathfrak g},{\mathfrak h},\varrho)$ and $({\mathfrak g}',{\mathfrak h}',\varrho')$ in $\cat{CM}_b$ is a pair $(f,g)\in\operatorname{Hom}_{\text{Lie}}({\mathfrak g},{\mathfrak g}')\times\operatorname{Hom}_{\text{Lie}}({\mathfrak h},{\mathfrak h}')$ satisfying \ref{M1} and \ref{M2} with respect to the pairs $(\upsilon_\varrho,\phi_\varrho)$'s. The full subcategory $\cat{CM_{b,inv}}\subset\cat{CM}_b$ whose objects are the tuples $({\mathfrak g},{\mathfrak h},\varrho)$ where $\phi_\varrho$ is a bijective linear map is easily shown to be isomorphic to $\cat{CM_{inv}}$. Analogously, the full subcategory $\cat{CM_{b,id}}\subset\cat{CM_{b,inv}}$ whose objects are the tuples $({\mathfrak g},{\mathfrak h},\varrho)$ where ${\mathfrak g}$ and ${\mathfrak h}$ are defined on the same underlying vector space and $\phi_\varrho=\operatorname{id}$ turns out to be isomorphic to $\cat{CM_{id}}$. In this way one recovers the description of $\cat{PostLie}$ given in \cite{bdv}. \end{enumerate} \end{rem} \subsection{Crossed morphisms of Lie group type objects} In analogy to the Lie algebra case, one can define the notion of crossed morphism between two Lie groups. First, recall that for a Lie group $H$, $\operatorname{Aut}(H)$ denotes the group of automorphisms of $H$ which are diffeomorphisms of $H$, i.e.
$\phi\in\operatorname{Aut}(H)$ if and only if \begin{enumerate} \item[(i)] $\phi$ is an isomorphism of abstract groups, \item[(ii)] $\phi$ is a diffeomorphism. \end{enumerate} \begin{defn} Let $G$ and $H$ be two Lie groups and let $\Upsilon\colon\thinspace G\rightarrow\operatorname{Aut}(H)$ be a morphism of Lie groups. A \emph{crossed morphism relative to $\Upsilon$} is a smooth map $\Phi\colon\thinspace G\rightarrow H$ that satisfies \begin{equation} \Phi(gh)=\Phi(g)\Upsilon_g(\Phi(h)),\,\forall g,h\in G.\label{eq:crossG} \end{equation} \end{defn} By replacing, in the definition of $\cat{CM}$, the underlying category of Lie algebras with that of Lie groups, one obtains the following category. \begin{defn} The category $\cat{CMGp}$ is as follows. The objects are the tuples $(G,H,\Upsilon, \Phi)$ of two Lie groups $G$ and $H$ and $(\Upsilon,\Phi)\in \operatorname{Hom}_{\text{LieGp}}(G,\operatorname{Aut}(H)) \times \operatorname{Cross}^{\Upsilon}(G,H)$. The morphisms between $(G,H,\Upsilon, \Phi)$ and $(G',H',\Upsilon', \Phi')$ are pairs $(f,g)\in\operatorname{Hom}_{\text{LieGp}}(G,G')\times \operatorname{Hom}_{\text{LieGp}}(H,H')$ such that \begin{equation*} g\circ\Phi=\Phi'\circ f \quad \text{ and } \quad g(\Upsilon_{x}(a))=\Upsilon'_{f(x)}(g(a)) \text{ for all } x\in G \text{ and } a\in H. \end{equation*} \end{defn} In the same vein as before, one has subcategories $\cat{CMGp_{id}} \subset \cat{CMGp_{inv}} \subset \cat{CMGp}$, and an adjoint equivalence between $\cat{CMGp_{id}}$ and $\cat{CMGp_{inv}}$. Note that the projection functor \begin{equation*} P\colon\thinspace \cat{CMGp_{inv}}\to \cat{CMGp_{id}} \end{equation*} sends any tuple $(G,H,\Upsilon,\Phi)$ to $(\overline{H}, H, \Upsilon\circ \Phi^{-1},id)$, where the product of $\overline{H}=(H,\star)$ is given by \begin{equation}\label{eq: prod H bar} h_1\star h_2 = h_1\Upsilon_{\Phi^{-1}(h_1)}(h_2) \text{ for all } h_1,h_2\in H. \end{equation} The classical Lie functor gives rise to a functor \begin{equation*} T_e\colon\thinspace \cat{CMGp} \to \cat{CM} \end{equation*} that sends $(G,H,\Upsilon, \Phi)$ to $({\mathfrak g},{\mathfrak h},\Upsilon_{\ast,e_G}, \Phi_{\ast,e_G})$. It restricts to the subcategories of invertible crossed morphisms \begin{equation*} T_e\colon\thinspace \cat{CMGp_{inv}} \to \cat{CM_{inv}} \end{equation*} and also to $T_e\colon\thinspace \cat{CMGp_{id}} \to \cat{CM_{id}}$. The latter means that $T_e$ sends $(\overline{H},H,\Upsilon, id)$ to $(\overline{{\mathfrak h}},{\mathfrak h},\Upsilon_{\ast,e_G}, id)$, which makes notations consistent; to see this it is enough to verify that \eqref{eq: prod H bar}, with $\Phi=id$, gives rise to the Lie bracket of \eqref{eq: h bar } by differentiation. Moreover, $T_e$ commutes with the projections: \begin{prop} $R \circ T_e = T_e \circ P$. \end{prop} The previous constructions and remarks can be adapted almost verbatim to the case of \emph{local Lie groups}. Instead of recalling the formal definition of this structure, we simply recall that a local Lie group is a smooth manifold $M$ with a distinguished point $e$ and two operations $\mu$ and $\iota$ which are only partially defined, i.e. defined on a suitable neighborhood of $e$, and which satisfy the following compatibility conditions: $(1)$ $\mu(e,x)=x=\mu(x,e)$, $(2)$ $\mu(x,\iota(x))=e=\mu(\iota(x),x)$ and $(3)$ $\mu(\mu(x,y),z)=\mu(x,\mu(y,z))$, for all $x,y,z\in M$ \emph{sufficiently} close to $e\in M$.
To every local Lie group one can associate a Lie algebra whose underlying vector space is the tangent space at $e$ and whose Lie bracket is defined by restricting the canonical Lie bracket of $\mathfrak X(M)$ to the (say) left-invariant vector fields. It is worth observing that every Lie group $G$ is a local Lie group and that every neighborhood $U$ of the identity of a Lie group is a local Lie group, by restricting to $U$ both the multiplication and the inversion map defined on $G$. Another class of local Lie groups is obtained by looking at suitable neighborhoods of the $0$ element in a finite dimensional Lie algebra ${\mathfrak g}$. In this case the multiplication map is provided by the Baker-Campbell-Hausdorff series, i.e. $\mu(x,y)=\operatorname{BCH}_{\mathfrak g}(x,y)$ for all $x,y$, $e$ is the $0$ element and $\iota(x)=-x$ for all $x$. A neighborhood of $0$ is \emph{suitable} if on it the $\operatorname{BCH}$ series is convergent. This class of examples of local Lie groups will be the only one we will consider in this letter. In particular, any local Lie group defined by the Lie algebra ${\mathfrak g}$ and its $\operatorname{BCH}$ series will be called a \emph{$\operatorname{BCH}$-group} and it will be denoted simply by $\mathcal G$. In this case ${\mathfrak g}$ will be called the Lie algebra underlying $\mathcal G$. In spite of appearances, our choice to consider only $\operatorname{BCH}$-groups is not really a severe restriction. In fact, one can show that every local Lie group, if seen in coordinates, is a $\operatorname{BCH}$-group, see \cite{tao}. The categories introduced in the first part of this section can be defined by trading Lie groups for local Lie groups. More precisely, one can define $\cat{CMGp^{loc}}$, $\cat{CMGp^{loc}_{inv}}$ and, respectively, $\cat{CMGp^{loc}_{id}}$. All the comments made and properties discussed about the categories $\cat{CMGp}$, $\cat{CMGp_{inv}}$ and, respectively, $\cat{CMGp_{id}}$ carry over to their local versions. \section{Universal enveloping algebras and the post-Lie Magnus expansion} \label{sec: Univ. env. alg. and pLMe} Recall that to a post-Lie algebra $({\mathfrak h},\triangleright)$ one may associate two universal enveloping algebras: that of the Lie algebra $\overline{{\mathfrak h}}$ and that of the underlying Lie algebra ${\mathfrak h}$. The latter comes equipped with the \emph{Grossman-Larson product} $\ast$, which emerges from the post-Lie structure, making it a bialgebra. Both bialgebras are related by an isomorphism $\Theta\colon\thinspace \mathcal{U}(\overline{{\mathfrak h}}) \to (\mathcal{U}({\mathfrak h}),\ast)$ which turns out to be responsible for the existence of the \emph{pLMe} $\chi\colon\thinspace \ca{H} \to \ca{H}$. In this section a functor $\ca{U}\colon\thinspace \cat{CM} \to \cat{Pbialg}$ is defined that provides the above data $( \mathcal{U}(\overline{{\mathfrak h}}), \mathcal{U}({{\mathfrak h}}),\Theta)$ when restricted to $\cat{CM_{id}}$. Then an integration functor is defined, which gives rise to the pLMe. The following diagram gives an overview of the functors considered in the previous and present sections; the bottom line corresponds to the above discussion.
\begin{equation*} \begin{tikzpicture} [>=stealth,thick,draw=black!65, arrow/.style={->,shorten >=1pt}, point/.style={coordinate}, pointille/.style={draw=red, top color=white, bottom color=red},scale=0.7, photon/.style={decorate,decoration={snake,post length=1mm}}] \matrix[row sep=8mm,column sep=8mm,ampersand replacement=\&] { \node (-10) {$\cat{CMGp}$};\& \node (-11){$\cat{CM}$} ;\& \node (-12){$\cat{Pbialg}$} ;\& \node (-13){} ;\\ \node (00) {$\cat{CMGp_{inv}}$};\& \node (01){$\cat{CM_{inv}}$} ;\& \node (02){$\cat{Pbialg_{inv}}$} ;\& \node (03){$\cat{CMGp^{loc}}$} ;\\ \node (10) {$\cat{CMGp_{id}}$}; \& \node (11){$\cat{CM_{id}}$} ;\& \node (12){} ;\& \node (13){} ;\\ \node (20) {$\cat{Gp}$}; \& \node (21){$\cat{PostLie}$} ;\& \node (22){$\cat{Bialg}$} ;\& \node (23){$\cat{Gp^{loc}}$} ;\\ }; \path (-10) edge[above,->] node {$T_e$} (-11) (-11) edge[above,->] node (Uu) {$\ca{U}$} (-12) (00) edge[above,->] node {$T_e$} (01) (01) edge[above,->] node (Uu) {$\ca{U}$} (02) (02) edge[above,->,photon] node {$\text{Int}$} (03) (10) edge[above,->] node {$T_e$} (11) (11) edge[below,->] node {$\ca{U}_{|\iota}$} (02) (20) edge[above,->] node {$T_e$} (21) (21) edge[above,->,out=45,in=135] node (*U) {$\ast\circ \ca{U}$} (22) (21) edge[below,->,out=-45,in=-135] node (Uo[]) {$\ca{U}\circ \llbracket-,-\rrbracket$} (22) (22) edge[above,->,photon] node (bch1) {$\text{Int}$} (23) (00) edge[left,left hook->] node {} (-10) (01) edge[left,left hook->] node {} (-11) (02) edge[left,left hook->] node {} (-12) (10) edge[left,left hook->,out=125,in=-125] node {$s$} (00) (00) edge[left,->] node {} (10) (01) edge[left,->] node {$R$} (11) (11) edge[left,left hook->,out=135,in=-135] node {$\iota$} (01) (11) edge[left,->] node {$\cong$} (21) (Uo[]) edge[right,double,shorten <=8pt,shorten >=8pt,-implies] node {$\Theta$} (*U) (11) edge[left,double,shorten <=8pt,shorten >=8pt,-implies] node {$\Psi$} (Uu) ; \end{tikzpicture} \end{equation*} \begin{notation} The universal enveloping algebra of a post-Lie algebra $({\mathfrak h},\triangleright)$ is the universal enveloping algebra of the underlying Lie algebra ${\mathfrak h}$. It is a bialgebra when endowed with the shuffle coproduct $\Delta_{sh}\colon\thinspace \ca{U}({\mathfrak h}) \to \ca{U}({\mathfrak h})^{\otimes 2}$. It may be useful to consider Sweedler's notation (without sum): $\Delta_{sh}(X)= X_{(1)}\otimes X_{(2)}$ for all $X\in \ca{U}({\mathfrak h})$. \end{notation} \begin{defn} The category $\cat{Pbialg}$ is as follows. The objects are tuples $(A,B,\theta)$ where $A$ and $B$ are bialgebras, $B$ is an $A$--module and $\theta\colon\thinspace A \to B$ is a morphism of $A$--modules and coalgebras. The morphisms are pairs $(f,g)\in \operatorname{Hom}_{bialg}(A,A')\times \operatorname{Hom}_{Modcoalg}(B,B')$ such that $g\circ \theta = \theta' \circ f$. That $g$ belongs to $\operatorname{Hom}_{Modcoalg}(B,B')$ means that it is a morphism of coalgebras and that $g(a\cdot b)=f(a)\cdot g(b)$ for all $a\in A$ and $b\in B$. Let $\cat{Pbialg_{inv}}$ be the subcategory of $\cat{Pbialg}$ of those tuples $(A,B,\theta)$ such that $\theta$ is an isomorphism. \end{defn} \begin{rem}\label{rmk: PBinv bialg} If $(A,B,\theta)$ belongs to $\cat{Pbialg_{inv}}$ then $B$ has another bialgebra structure, given by $b\ast b' := \theta^{-1}(b)\cdot b$ for all $b,b'\in B$. Moreover, $\theta\colon\thinspace A \to (B,\ast)$ is an isomorphism of bialgebras. 
\end{rem} Let \begin{equation*} \ca{U}\colon\thinspace \cat{CM} \to \cat{Pbialg} \end{equation*} be the functor that associates to each tuple $({\mathfrak g},{\mathfrak h},\upsilon,\phi)$ the following tuple $(\ca{U}({\mathfrak g}), (\ca{U}({\mathfrak h}),M),\Theta)$. The following constructions of the action $M\colon\thinspace \ca{U}({\mathfrak g}) \to \operatorname{End}_{\mathbb K} (\ca{U}({\mathfrak h}))$ and of the morphism $\Theta$ are a straightforward generalization of \cite[Section 5]{MQS}; the main steps are given here. Since $\upsilon$ takes values in $\operatorname{Der}_{\text{Lie}}({\mathfrak h})$, it can be extended to take values in the derivations of the algebra $\ca{U}({\mathfrak h})$. Keeping the same notation for this extension, this means that $\upsilon_x(XY)=X\upsilon_x(Y) + \upsilon_x(X)Y$ for each $x\in {\mathfrak g}$ and $X,Y\in \ca{U}({\mathfrak h})$. Let $\sigma^\phi\colon\thinspace\mathfrak g\rightarrow\operatorname{End}_{\mathbb K}(\mathcal U(\mathfrak h))$ be the linear map defined by \begin{equation*} \sigma^\phi(x)(X)=\phi(x)\cdot X \text{ for all } x\in{\mathfrak g} \text{ and } X\in\mathcal U(\mathfrak h), \end{equation*} and let $M_{(\upsilon,\phi)}:\mathfrak g\rightarrow\operatorname{End}_\mathbb K(\mathcal U(\mathfrak h))$ be the linear map defined by \begin{equation} M_{(\upsilon,\phi)}(x)=\upsilon_x+\sigma^\phi_x, \text{ for all } x\in\mathfrak g.\label{eq:eqM} \end{equation} The following lemma shows that $M_{(\upsilon,\phi)}$ extends to a morphism of associative algebras $M_{(\upsilon,\phi)}\colon\thinspace \ca{U}({\mathfrak g}) \to \operatorname{End}_{\mathbb K} (\ca{U}({\mathfrak h}))$, providing the action map. \begin{lem} For all $x,y\in\mathfrak g$, one has \begin{equation} M_{(\upsilon,\phi)}([x,y]_{{\mathfrak g}})=[M_{(\upsilon,\phi)}(x),M_{(\upsilon,\phi)}(y)]. \label{eq:comm} \end{equation} In other words, $\mathcal U(\mathfrak h)$ carries a structure of a $(\mathfrak g,[-,-]_\mathfrak g)$--module defined by $M_{(\upsilon,\phi)}\colon\thinspace {\mathfrak g}\rightarrow \operatorname{End}_\mathbb K(\ca{U}({\mathfrak h}))$. \end{lem} \begin{proof} For every $x,y\in\mathfrak g$ and $a\in\mathfrak h$, it suffices to compare $M_{(\upsilon,\phi)}([x,y]_\mathfrak g)(a)$ with $[M_{(\upsilon,\phi)}(x),M_{(\upsilon,\phi)}(y)](a)$, recalling that $\phi$ satisfies \eqref{eq:crosshom} and $\upsilon\in\operatorname{Hom}_{\text{Lie}}(\mathfrak g,\operatorname{Der}(\mathfrak h))$. \end{proof} The map $\Theta= \Theta_{(\upsilon,\phi)}\colon\thinspace \mathcal U(\mathfrak g)\rightarrow\mathcal U(\mathfrak h)$ is defined on every monomial $X\in\mathcal U(\mathfrak g)$ by \begin{equation} \Theta_{(\upsilon,\phi)}(X)=M_{(\upsilon,\phi)}(X)(1)\label{eq:psi} \end{equation} and is extended to all of $\mathcal U(\mathfrak g)$ by linearity. It is a morphism of coalgebras as well as of left $\mathcal U(\mathfrak g)$--modules; see \cite{KLM} and also \cite[Proposition 28]{MQS}. To see that $\ca{U}$ is indeed a functor, it remains to show the following. \begin{lem} Let $(f,g)\colon\thinspace ({\mathfrak g},{\mathfrak h},\upsilon,\phi)\to ({\mathfrak g}',{\mathfrak h}',\upsilon',\phi')$. For all $A\in \ca{U}({\mathfrak g})$ and $B\in \ca{U}({\mathfrak h})$, one has \begin{equation*} \ca{U}(g)(M_{A}(B)) =M'_{\ca{U}(f)(A)}(\ca{U}(g)(B)). \end{equation*} In particular, one has $\ca{U}(g) \circ \Theta = \Theta' \circ \ca{U}(f)$.
\end{lem} \begin{proof} By linearity, it is enough to show the result for two monomials $A=a_1\cdots a_m\in \ca{U}({\mathfrak g})$ and $B=b_1\cdots b_n\in \ca{U}({\mathfrak h})$. The proof is by induction on $m$. For $m=1$, one has \begin{align*} \ca{U}(g)(M_{a}(B)) &= \ca{U}(g) (\upsilon_{a}(B)+ \phi(a)B) \\ &= \ca{U}(g) (\sum_{1\leq i \leq n} b_1\cdots b_{i-1} \upsilon_{a}(b_i)b_{i+1}\cdots b_n + \phi(a)B) \\ &= \sum_{1\leq i \leq n} g(b_1) \cdots g(b_{i-1}) \upsilon'_{f(a)}(g(b_i))g(b_{i+1})\cdots g(b_n) + \phi'(f(a))\ca{U}(g)(B) \\ &= \upsilon'_{f(a)}(\ca{U}(g)(B)) + \phi'(f(a))\ca{U}(g)(B) \\ &= M'_{\ca{U}(f)(a)}(\ca{U}(g)(B)). \end{align*} Let $m\geq 2$. Remark that $\upsilon_{a_1}(M_{a_2}(\cdots (M_{a_m}(B))\cdots))$ can be written as a sum of terms of the form $C_1\upsilon_{a_1}(C_2)C_3$, where each $C_i\in \ca{U}({\mathfrak h})$ is a monomial of the following form. Writing $C_i$ as $c_{k_1}\cdots c_{k_i}$, each term $c_r$ is of the form $\phi(a_s)$, or $\upsilon_{a_{j_1}}( \upsilon_{a_{j_2}} (\cdots \upsilon_{a_{j_s}}(\phi(a_{j_{s+1}}))\cdots ))$, or $\upsilon_{a_{j_1}}( \upsilon_{a_{j_2}} (\cdots \upsilon_{a_{j_s}}(B)\cdots ))$ for some indices $\{j_1,...,j_{s+1}\} \subset \{1,...,m\}$. Consequently, one has \begin{equation*} \ca{U}(g) \big( \upsilon_{a_1}(M_{a_2}(\cdots (M_{a_m}(B))\cdots)) \big) = \upsilon'_{f(a_1)}(M'_{f(a_2)}(\cdots (M'_{f(a_m)}(\ca{U}(g)(B)))\cdots)). \end{equation*} Therefore, one has \begin{multline*} \ca{U}(g) (M_{a_1\cdots a_m}(B)) =\ca{U}(g) \Big(\upsilon_{a_1}(M_{a_2\cdots a_m}(B)) + \phi(a_1)M_{a_2\cdots a_m}(B)\Big) = M'_{\ca{U}(f)(a_1\cdots a_m)}(\ca{U}(g)(B)). \end{multline*} \end{proof} If $\phi$ is invertible, then so is $\Theta$; see \cite[Theorem 29]{MQS}. Therefore, the functor $\ca{U}$ restricts to a functor \begin{equation*} \ca{U}\colon\thinspace \cat{CM_{inv}} \to \cat{Pbialg_{inv}}. \end{equation*} Since $(\iota,R)$ is an adjoint equivalence, the counit provides a natural isomorphism $\Psi=\ca{U}\epsilon \colon\thinspace R\ca{U}_{|\iota} \to \ca{U}$. In particular one has \begin{equation}\label{eq: Theta phi} \Theta_{(\upsilon\circ \phi^{-1},id)} = \Theta_{(\upsilon,\phi)} \circ \ca{U}(\phi^{-1}). \end{equation} \begin{rem}\label{rmk: GL product and D-alg} By Remark \ref{rmk: PBinv bialg}, the morphism $\Theta_{(\upsilon\circ \phi^{-1},id)}$ is a morphism of bialgebras, so one recovers the initial viewpoint of \cite{KLM}, see also \cite{GO1,GO2}. In particular, the resulting $\ast$ product on $\ca{U}({\mathfrak h})$ is the \emph{Grossman-Larson} product; it can be constructed as follows. The post-Lie product on ${\mathfrak h}$, defined via \eqref{eq: postlie from hh,v ,id}, can be extended to a map $\triangleright\colon\thinspace \ca{U}({\mathfrak h})^{\otimes 2} \to \ca{U}({\mathfrak h})$ with the following properties. For all $X,Y$ and $Z$ in $\mathcal{U}({\mathfrak h})$ and $x$ and $y$ in ${\mathfrak h}$, one has: \begin{enumD} \item\label{D bial item1} $1\triangleright X=X$ and $X\triangleright 1=0$; \item\label{D bial item4} $X\triangleright(Y\cdot Z)=(X_{(1)}\triangleright Y)\cdot(X_{(2)}\triangleright Z)$; and \item\label{D bial item5} $(x\cdot X)\triangleright y=x\triangleright (X\triangleright y)-(x\triangleright X)\triangleright y$. \end{enumD} The resulting structure is known as a \emph{$D$--bialgebra} structure $(\mathcal{U}({\mathfrak h}),\Delta_{sh},\triangleright)$; see \cite{MQS}.
The Grossman-Larson product is given by \begin{equation} \begin{split} \ast \colon\thinspace \mathcal{U}({\mathfrak h})^{\otimes 2} &\to \mathcal{U}({\mathfrak h}) \\ X\otimes Y & \mapsto X_{(1)}(X_{(2)}\triangleright Y). \end{split} \end{equation} \end{rem} \subsection{Integration of post-Lie algebras} Let $\CMinvfin$ denote the subcategory of $\cat{CM_{inv}}$ of those tuples where the Lie algebras are \emph{finite dimensional}, and let $\PBcomp$ be the category obtained from $\cat{Pbialg_{inv}}$ by requiring the bialgebras to be \emph{complete}. The functor $\ca{U}\colon\thinspace \CMinvfin \to \cat{Pbialg_{inv}}$ induces, after completion, a functor $\widehat{\ca{U}}\colon\thinspace \CMinvfin \to \PBcomp$. Its image forms a category $\ImU$: the objects of $\ImU$ are tuples of the form $(\widehat{\ca{U}}({\mathfrak g}),(\widehat{\ca{U}}({\mathfrak h}),M),\Theta)= \widehat{\ca{U}}({\mathfrak g},{\mathfrak h},\upsilon,\phi)$, and morphisms are of the form $\widehat{\ca{U}}(f,g)$ for morphisms $(f,g)$ in $\CMinvfin$. In what follows we define an integration functor \begin{equation*} \text{Int} \colon\thinspace \ImU \to \cat{CMGp^{loc}}. \end{equation*} To any tuple $(\widehat{\ca{U}}({\mathfrak g}),(\widehat{\ca{U}}({\mathfrak h}),M),\Theta)$ in $\ImU$ one may associate the following tuple $(\ca{G}, \ca{H},\varUpsilon, \varPhi)$, where $\ca{G}$ and $\ca{H}$ denote the $\operatorname{BCH}$-groups defined by ${\mathfrak g}$ and ${\mathfrak h}$, respectively. The map $\varUpsilon\colon\thinspace \ca{G} \to \operatorname{Aut}(\ca{H})$ is given by \begin{equation*} \varUpsilon= \operatorname{Exp}(\upsilon) \end{equation*} where $\upsilon_x(y):= M_x(y) - \Theta(x)\cdot y$ for all $x\in {\mathfrak g}$ and $y\in {\mathfrak h}$. \begin{lem} $\varUpsilon$ is a morphism of local groups. \end{lem} \begin{proof} Note that if $d\in\operatorname{Der}({\mathfrak g})$, then $\operatorname{Exp}(d)\in\operatorname{Aut}({\mathfrak g})\subset\operatorname{Aut}(\ca{G})$. Furthermore, if ${\mathfrak g}$ and ${\mathfrak h}$ are two (finite dimensional) Lie algebras and $\upsilon\in\operatorname{Hom}_{\text{Lie}}({\mathfrak g},\operatorname{Der}({\mathfrak h}))$, then one has \begin{equation*} \operatorname{Exp}(\upsilon_x)\operatorname{Exp}(\upsilon_y)=\operatorname{Exp}(\operatorname{BCH}_{\operatorname{End}(\mathfrak h)}(\upsilon_x,\upsilon_y))=\operatorname{Exp}(\upsilon_{\operatorname{BCH}_{\mathfrak g}(x,y)}),\,\forall x,y\in\mathfrak g, \end{equation*} which gives the result. \end{proof} The map $\varPhi\colon\thinspace \ca{G} \to \ca{H}$ is defined by \begin{equation*} \varPhi = \varPhi_{(\upsilon,\phi)} = \log_{{\mathfrak h}} \circ \Theta_{(\upsilon,\phi)} \circ \exp_{{\mathfrak g}}. \end{equation*} \begin{lem} $\varPhi$ is a crossed morphism of local groups. \end{lem} \begin{proof} It is a direct consequence of Theorem \ref{thm:intpostLie} stated hereafter. Indeed, formula \eqref{eq:bch1} can be written as \begin{equation*}\label{eq:bch2} \varPhi(\operatorname{BCH}_{{\mathfrak g}}(x,y))=\operatorname{BCH}_{{\mathfrak h}}(\varPhi(x),\operatorname{Exp} (\upsilon_{x}) \varPhi(y)). \end{equation*} \end{proof} The integration functor $\text{Int}$ is given by $\text{Int} (\widehat{\ca{U}}({\mathfrak g}),(\widehat{\ca{U}}({\mathfrak h}),M),\Theta) = (\ca{G}, \ca{H},\varUpsilon, \varPhi)$. A direct verification shows that it is indeed a functor. \subsection{The post-Lie Magnus expansion} Observe that since $\phi$ is invertible, so is $\varPhi_{(\upsilon, \phi)}$.
Its inverse $\chi_{(\upsilon, \phi)} \colon\thinspace \ca{H} \rightarrow \ca{G}$ is therefore given by \begin{equation}\label{eq:chi} \chi_{(\upsilon , \phi )} = \log_{{\mathfrak g}}\circ (\Theta_{(\upsilon, \phi)})^{-1} \circ \exp_{{\mathfrak h}}. \end{equation} \begin{defn} The map $\chi_{(\upsilon, \phi)}$ is called the \emph{post-Lie Magnus expansion associated to} $({\mathfrak g},{\mathfrak h},\upsilon,\phi) \in \cat{CM_{inv}}$. \end{defn} In analogy to \cite[Proposition 39]{MQS}, one can prove the following result. \begin{thm}\label{thm:intpostLie} For all $a,b\in {\mathfrak h}$ one has \begin{equation}\label{eq:bch1} \operatorname{BCH}_{{\mathfrak g}}(\chi_{(\upsilon, \phi)}(a), \chi_{(\upsilon, \phi)}(b)) =\chi_{(\upsilon, \phi)}\big(\operatorname{BCH}_{{\mathfrak h}} \big(a,\operatorname{Exp}(\upsilon_{\chi_{(\upsilon, \phi)}(a)})b\big)\big). \end{equation} \end{thm} The proof of this result is based on the following two preliminary lemmas. First recall that, by Remark \ref{rmk: PBinv bialg}, the bialgebra $\ca{U}({\mathfrak h})$ can be endowed with another product $\ast$. Also recall that, by \eqref{eq: postlie from hh,v ,id}, the map $\triangleright\colon\thinspace a\otimes b\mapsto \upsilon_{\phi^{-1}(a)}(b)$ defines a post-Lie product on ${\mathfrak h}$. Let $\sharp \colon\thinspace {\mathfrak h}\times {\mathfrak h}\rightarrow {\mathfrak h}$ be defined by \begin{equation*}\label{eq:compositio1} a\sharp b=\log_{\mathfrak h}(\exp_{\mathfrak h}(a)\ast\exp_{\mathfrak h}(b)) \text{ for all } a,b\in {\mathfrak h}. \end{equation*} \begin{lem} \label{eq:composition2} For all $a,b\in {\mathfrak h}$ one has \begin{equation*} a\sharp b=\operatorname{BCH}_{\mathfrak h}(a,\exp_{\mathfrak h}(a) \triangleright b). \end{equation*} \end{lem} \begin{proof} This result was proven in \cite{F-MK} and that proof extends without modification to this context. \end{proof} \begin{lem} For all $a,b\in\mathfrak h$ one has \begin{equation} \exp_\cdot(a)\triangleright b =\operatorname{Exp}(\upsilon_{\chi_{(\upsilon, \phi)}(a)})b,\label{eq:identity} \end{equation} where the right hand side of the previous formula reads as \[ b+\upsilon_{\chi_{(\upsilon, \phi)}(a)}(b)+\frac{1}{2}\upsilon_{\chi_{(\upsilon, \phi)}(a)}\big(\upsilon_{\chi_{(\upsilon, \phi)}(a)}(b)\big)+\frac{1}{3!}\upsilon_{\chi_{(\upsilon, \phi)}(a)}\big(\upsilon_{\chi_{(\upsilon, \phi)}(a)}(\upsilon_{\chi_{(\upsilon, \phi)}(a)}(b))\big)+\cdots \] \end{lem} \begin{proof} Recall that the extension of the post-Lie product to $\mathcal U(\mathfrak h)$ endows the latter with the structure of a $D$-bialgebra, see Definition 19 in \cite{MQS}. In this case one has \begin{enumerate} \item[(i)] $a\ast A=a\cdot A+a\triangleright A$, for all $a\in {\mathfrak h}$, see Formula $(4.20)$ p. 570 in \cite{MQS}; \item[(ii)] $(a\cdot A)\triangleright a'=a\triangleright (A\triangleright a')-(a\triangleright A)\triangleright a'$, see Formula $(\rm{D}.5)$ p. 566 in \cite{MQS}, \end{enumerate} for all $a,a'\in {\mathfrak h}$ and $A\in\mathcal U({\mathfrak h})$. Plugging $(ii)$ into $(i)$ one obtains \[ (a\ast A)\triangleright a'=a\triangleright (A\triangleright a'). \] The proof of the statement can now be obtained by a simple induction on the length of the monomials in the RHS of \eqref{eq:identity}, applying $(i)$ and $(ii)$ recalled above. \end{proof} After these observations, the proof of Theorem \ref{thm:intpostLie} is formally identical to the one presented in \cite{MQS} and for this reason it is not presented again.
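As an illustration of Lemma \ref{eq:composition2}, one can expand both sides to low order: since $\exp_{\mathfrak h}(a)\triangleright b=b+a\triangleright b+\cdots$ by \ref{D bial item1}, a direct substitution into the $\operatorname{BCH}_{\mathfrak h}$ series gives
\[
a\sharp b=a+b+a\triangleright b+\tfrac{1}{2}[a,b]_{\mathfrak h}+\cdots,
\]
where the dots collect terms of total degree at least three in $a$ and $b$. In particular, $\sharp$ differs from $\operatorname{BCH}_{\mathfrak h}$ already at second order, through the post-Lie term $a\triangleright b$.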
\\ The functoriality of the construction of the pLMe implies some relations between the different pLMe's that reflect relations between the source objects. More precisely, one has the following relations. Let $X=({\mathfrak g},{\mathfrak h},\upsilon,\phi)$ be an object of $\CMinvfin$. Recall the equation \eqref{eq: Theta phi} which, by applying $\text{Int}$, gives $\varPhi_{(\upsilon\circ \phi^{-1},id)} = \varPhi_{(\upsilon,\phi)} \circ \text{Int} (\widehat{\ca{U}}(\phi^{-1}))$. In other words, one has \begin{equation*} \chi_{(\upsilon,\phi)} = \phi^{-1} \circ \chi_{(\upsilon\circ \phi^{-1},id)}. \end{equation*} Let $(f,g)\colon\thinspace ({\mathfrak g},{\mathfrak h},\upsilon,\phi) \to ({\mathfrak g}',{\mathfrak h}',\upsilon',\phi')$ be a morphism in $\CMinvfin$. By applying $\text{Int} \circ \widehat{\ca{U}}\circ R$, one obtains \begin{equation*} \varPhi_{(\upsilon \circ \phi^{-1},id)} \circ \text{Int}(\widehat{\ca{U}}(g) ) = \text{Int}( \widehat{\ca{U}}(g)) \circ \varPhi_{(\upsilon' \circ (\phi')^{-1},id)}. \end{equation*} In particular, if $(f,g)$ is an isomorphism one has \begin{equation*} \chi_{(\upsilon' \circ (\phi')^{-1},id)} = g^{-1} \circ \chi_{(\upsilon \circ \phi^{-1},id)} \circ g. \end{equation*} \section{Computing the Post-Lie Magnus expansion}\label{sec: Computing pLMe} In this section, two combinatorial interpretations of the coefficients of the pLMe associated to any forest are given. Both interpretations are based on a notion of nested tubings. The first method is concerned with \emph{vertical} nested tubings and allows one to compute the coefficients associated to any forest recursively. The second method is concerned with \emph{horizontal} nested tubings and allows one to express these coefficients in closed form. \\ \subsection{Planar trees and forests}\label{sec: Operad of PRT} \begin{defn} A \emph{planar rooted tree} is an isomorphism class of contractible graphs, embedded in the plane, and endowed with a distinguished vertex, called the \emph{root}, to which is attached an adjacent half-edge, called the \emph{root-edge} of the planar tree. \end{defn} For a planar rooted tree $T$, we let $V(T)$ be the set of all its vertices. On it, we consider two orders: \begin{itemize} \item The \emph{level partial} order $\prec$, defined by orienting the edges of $T$ towards the root, except the root-edge. For two vertices $u$ and $v$ of $ V(T)$, we write $v \prec u$ if there is a string of oriented edges from $v$ to $u$. In particular, the root is maximal for this partial order. \item The \emph{canonical linear} order $<$: starting from the root-edge of $T$, we run along $T$ in the clockwise direction, passing through each edge once per direction. The order in which we meet the vertices for the first time gives the order $<$. In particular, the root is the minimal element for $<$. \end{itemize} Pictorially, our trees are drawn with the root at the bottom, and the order on the set of incoming edges of a vertex is given by the clockwise direction, i.e. from left to right. From now on, when there is no ambiguity, planar rooted trees are simply called trees. \begin{example} For any two trees $R$ and $S$, let $C(\bullet; R,S)$ be the corolla with an unlabeled vertex $v$ of arity $2$ as root; the roots of $R$ and $S$ are the input edges of $v$, in this order. \end{example} \begin{defn} Let $T$ be a tree and $v$ a vertex of it. Consider a small disc centered at $v$. The outgoing and incoming edges of $v$ cut the disc into connected components.
If $v$ has at least one incoming edge, the \emph{left side of $v$} is the connected component delimited by the outgoing edge and the first incoming edge of $v$. Otherwise, its \emph{left side} is the unique connected component of the cut disc. \end{defn} \begin{example} A vertex $v$ and its left side (the darkest gray region): \begin{equation*} \pgfdeclarelayer{background} \pgfsetlayers{background,main} \begin{tikzpicture} [ level distance=0.45cm, level 2/.style={sibling distance=0.6cm}, sibling distance=0.6cm] \node (1) [my circle] {} [grow=up] { child {node (3) [my circle,label={[label distance=-.03cm]0:\tiny{$v$}},label={[label distance=.25cm]180:\tiny{left side}}] {} child {node (5) [my circle] {}} child {node (6) [my circle] {}} } }; \draw [-] (0,-.25) -- (0,0) ; \begin{pgfonlayer}{background} \tkzMarkAngle[fill=gray,size=0.31cm,opacity=.8](6,3,1) \tkzMarkAngle[fill=gray,size=0.31cm,opacity=.25,draw opacity=.3](5,3,6) \tkzMarkAngle[fill=gray,size=0.31cm,opacity=.25,draw opacity=.3](1,3,5) \end{pgfonlayer} \end{tikzpicture} \end{equation*} \end{example} \begin{defn} A \emph{forest} is a (non-commutative) word of trees. For $n\geq 1$, the forest consisting of $n$ copies of the tree with one vertex is denoted by $\bullet^{\times n}$ and is called \emph{horizontal}. \end{defn} Trees and forests can be grafted at vertices, as follows. \begin{notation}\label{notation: operations graft} \begin{enumerate} \item For any two trees $R$ and $T$ we let $R \triangleright_{v} T$ be the tree obtained by grafting the root-edge of $R$ at the vertex $v$, on its left side. \item\label{item: operations grafting/concat} Let $n\geq 1$ and $n_0+ n_1+...+n_k=n$ be a partition of $n$ such that $n_i\geq 1$ for $1\leq i \leq k$ and $n_{0}\geq 0$. Let $F$ be a forest and let $v_1,...,v_k$ be $k$ vertices of $F$. For any forest $E$ of $n$ trees, let $E \ltimes_{v_1,...,v_k}^{n_0, n_1,...,n_k} F$ be the forest obtained from $F$ by grafting the first $n_1$ roots of $E$ to the left-side of $v_1$, the next $n_2$ roots of $E$ to the left-side of $v_2$, and so on until $n_k$; the last $n_{0}$ trees of $E$ are concatenated to the left of the so-obtained forest. In particular, \begin{itemize} \item for $k=0$, the operation $\ltimes_{\emptyset}^{n}$ is the concatenation operation that we simply denote by $\times$; \item for $k=1$ and $n_{0}=0$, the operation $\ltimes_{v}^{0,n}$ is the grafting of all the roots to a single vertex $v$ that we simply denote by $\triangleright_v$; \item for $n_{0} = 0$, we write $ \ltimes_{v_1,...,v_k}^{0,n_1,...,n_k} $ as $\triangleright_{v_1,...,v_k}^{n_1,...,n_k}$.
\end{itemize} \end{enumerate} \end{notation} For instance, one has \begin{equation*} \begin{tikzpicture} [ level distance=0.4cm, level 2/.style={sibling distance=0.6cm}, sibling distance=0.6cm,baseline=2.5ex,level 1/.style={level distance=0.35cm},baseline=2.5ex] \node [] {} [grow=up] { child {node [my circle] {} child {node [my circle] {} } } }; \end{tikzpicture} \triangleright_{v} \begin{tikzpicture} [ level distance=0.4cm, level 2/.style={sibling distance=0.6cm}, sibling distance=0.6cm,baseline=2.5ex,level 1/.style={level distance=0.35cm},baseline=2.5ex] \node [] {} [grow=up] { child {node [my circle] {} child {node [my circle,label=right:\small{$v$}] {} child {node [my circle] {} } } } } ; \end{tikzpicture} = \begin{tikzpicture} [ level distance=0.4cm, level 2/.style={sibling distance=0.6cm}, sibling distance=0.6cm,baseline=2.5ex,level 1/.style={level distance=0.35cm},baseline=2.5ex] \node [] {} [grow=up] { child {node [my circle] {} child {node [my circle] {} child {node [my circle] {} } child {node [my circle] {} child {node [my circle] {} }} } } }; \end{tikzpicture} \text{ and }~ \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.4cm }, level 2/.style={level distance=0.2cm,sibling distance=.6cm }, level 3/.style={sibling distance=0.6cm,level distance=0.35cm}, sibling distance=0.6cm,baseline=1.5ex] \node [] {} [grow=up] { child { edge from parent[draw=none] child {node [my circle] (A) {} child {node [my circle] (B) {} } }} child { edge from parent[draw=none] child { {node [my circle,black] (l1) {} } }} }; \end{tikzpicture} \triangleright_v \begin{tikzpicture} [ level distance=0.4cm, level 2/.style={sibling distance=0.6cm}, sibling distance=0.6cm,baseline=2.5ex,level 1/.style={level distance=0.35cm},baseline=2.5ex] \node [] {} [grow=up] { child {node [my circle] {} child {node [my circle,label=right:\small{$v$}] {} } } }; \end{tikzpicture} = \begin{tikzpicture} [ level distance=0.4cm, level 2/.style={sibling distance=0.56cm}, sibling distance=0.6cm,baseline=2.5ex,level 1/.style={level distance=0.35cm},baseline=2.5ex] \node [] {} [grow=up] { child {node [my circle] {} child {node [my circle] {} child {node [my circle] {} child {node [my circle] {} } } child {node [my circle] {} } } } }; \end{tikzpicture} \text{ and } \left( \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.4cm }, level 2/.style={level distance=0.2cm,sibling distance=.6cm }, level 3/.style={sibling distance=0.4cm,level distance=0.35cm}, sibling distance=0.6cm,baseline=1.5ex] \node [] {} [grow'=up] { child { edge from parent[draw=none] child {node [my circle] (A) {} child {node [my circle] (B) {} } } } child { edge from parent[draw=none] child { {node [my circle] (T2) {}} } } child { edge from parent[draw=none] child { {node [my circle] (T3) {}} } } child { edge from parent[draw=none] child {node [my circle] (T4lev0) {} child {node [my circle] (T4lev1) {}} child {node [my circle] (T4lev12) {}}} } }; \end{tikzpicture} \right) \ltimes_{v_1,v_2}^{1,2,1} \left( \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.2cm,sibling distance=.6cm }, level 3/.style={sibling distance=0.6cm,level distance=0.35cm}, sibling distance=0.6cm,baseline=1.5ex] \node [] {} [grow'=up] { child { edge from parent[draw=none] child {node [my circle,label=left:\scriptsize{$v_1$}] (A) {} child {node [my circle] (B) {} } } } child { edge from parent[draw=none] child { {node [my circle,label=above:\scriptsize{$v_2$}] (T2) {}} } } child { edge from 
parent[draw=none] child { {node [my circle] (T3) {}} } } }; \end{tikzpicture} \right) = \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.65cm }, level 2/.style={level distance=0.2cm,sibling distance=.6cm }, level 3/.style={sibling distance=0.35cm,level distance=0.35cm}, sibling distance=0.6cm,baseline=1.5ex] \node [] {} [grow'=up] { child { edge from parent[draw=none] child {node [my circle] (A) {} child {node [my circle] (B) {} } } } child { edge from parent[draw=none] child { {node [my circle] (T1) {} child {node [my circle] (T1lev11) {} } child {node [my circle] (T1lev12) {} } child {node [my circle] (T1lev13) {} }}} } child { edge from parent[draw=none] child { {node [my circle] (T2) {} child {node [my circle] (T2lev1) {} child {node [my circle] (T2lev21) {} } child {node [my circle] (T2lev22) {} }}}} } child { edge from parent[draw=none] child { {node [my circle] (T3) {}}} } }; \end{tikzpicture}. \end{equation*} In the second case one has $k=1$, $n_0=0$ and $n_1=2$; in the last case one has $k=2$, $n_0=1$, $n_1=2$ and $n_3=1$. \\ We also will be led to consider trees with labelings, or more in general, with partial labelings. \begin{defn} Let $T$ be a tree and let $U$ be a subset of $V(T)$. A \emph{$U$--label} of $T$ is a bijection $l\colon\thinspace U \rightarrow \{1,...,n\}$. A tree $T$ equipped with a $U$--label is called \emph{partially labeled}. \end{defn} \begin{example} Examples of partially labeled trees: \\ \begin{equation*} \begin{tikzpicture} [baseline, my circle/.style={draw, fill, circle, minimum size=3pt, inner sep=0pt}, level 2/.style={sibling distance=0.5cm,level distance=0.35cm}, sibling distance=0.5cm, level 1/.style={level distance=0.35cm}] \node {} [grow=up] {child{node [my circle,label=left:\tiny{$3$}] {} child {node (A) [my circle,label=left:\tiny{$2$}] {} } child {node [my circle,label=left:\tiny{$1$}] {}} }}; \end{tikzpicture}~~ \begin{tikzpicture} [baseline, my circle/.style={draw, fill, circle, minimum size=3pt, inner sep=0pt}, level 2/.style={sibling distance=0.5cm,level distance=0.35cm}, sibling distance=0.5cm, level 1/.style={level distance=0.35cm}] \node {} [grow=up] {child{node [my circle] {} child {node (A) [my circle,label=left:\tiny{$1$}] {} child {node [my circle,label=left:\tiny{$2$}] {}}} child {node [my circle,label=left:\tiny{$3$}] {} } }}; \end{tikzpicture}~~ \begin{tikzpicture} [baseline, my circle/.style={draw, fill, circle, minimum size=3pt, inner sep=0pt}, level 2/.style={sibling distance=0.5cm,level distance=0.35cm}, sibling distance=0.5cm, level 1/.style={level distance=0.35cm}] \node {} [grow=up] {child{node [my circle,label=left:\tiny{$3$}] {} child {node (A) [my circle, label=right:\tiny{$1$}] {} child {node [my circle,label=right:\tiny{$2$}] {}}} child {node [my circle] {} } }}; \end{tikzpicture} \end{equation*} \end{example} \subsection{Definition of $\ca{PSB}$} In this section is reminded the minimal material about the operad $\ca{PSB}$; we refer to \cite{MQS} for completeness. \\ For $n\geq 1$, let $\mathcal L(n)$ be the $\mathbb{K}$--vector space generated by the fully labeled trees with $n$ vertices. For each $n\geq 2$ let $\mathcal{W} (n)$ be the $\mathbb{K}$--vector space generated by trees $T$ with partial labeling $l\colon\thinspace U\to \{1...,n\}$ that satisfy: \begin{enumerate} \item[(a)] the root of $T$ is unlabeled; \item[(b)] if a vertex of $T$ is unlabeled, then so is its $\prec$--successor; \item[(c)] each unlabeled vertex of $T$ has exactly two incoming edges. 
\end{enumerate} Let \begin{equation*} \ca{LW}(1):=\ca{L}(1) \text{ and } \ca{LW}(n):=\mathcal L(n)\oplus\mathcal W(n) \text{ for } n\geq 2. \end{equation*} In \cite{MQS}, a structure of operad was provided on the collection $\{\ca{LW}(n)\}_n$. One may therefore consider the following ideal $\mathcal{I}\subset \mathcal{LW}$ generated by \[\Biggl \{ \begin{tikzpicture} [baseline, my circle/.style={draw, fill, circle, minimum size=3pt, inner sep=0pt}, level 2/.style={sibling distance=0.5cm,level distance=0.4cm}, sibling distance=0.5cm, level 1/.style={level distance=0.4cm}] \node [my circle] {} [grow=up] {child {node (A) [my circle,label=left:\tiny{$2$}] {}} child {node (B) [my circle,label=left:\tiny{$1$}] {}} }; \draw [-] (0,-.25) -- (0,0) ; \end{tikzpicture} - \begin{tikzpicture} [baseline, my circle/.style={draw, fill, circle, minimum size=3pt, inner sep=0pt}, level 2/.style={sibling distance=0.5cm,level distance=0.4cm}, sibling distance=0.5cm, level 1/.style={level distance=0.4cm}] \node [my circle] {} [grow=up] {child {node (A) [my circle,label=left:\tiny{$1$}] {} } child {node [my circle,label=left:\tiny{$2$}] {}} }; \draw [-] (0,-.25) -- (0,0) ; \end{tikzpicture} , \begin{tikzpicture} [baseline, my circle/.style={draw, fill, circle, minimum size=3pt, inner sep=0pt}, level 2/.style={sibling distance=0.5cm,level distance=0.4cm}, sibling distance=0.5cm, level 1/.style={level distance=0.4cm}] \node [my circle] {} [grow=up] { child {node (A) [my circle] {} child {node [my circle,label=left:\tiny{$3$}] {}} child {node [my circle,label=left:\tiny{$2$}] {} } } child {node [my circle,label=left:\tiny{$1$}] {} } }; \draw [-] (0,-.25) -- (0,0) ; \end{tikzpicture} - \begin{tikzpicture} [baseline, my circle/.style={draw, fill, circle, minimum size=3pt, inner sep=0pt}, level 2/.style={sibling distance=0.5cm,level distance=0.4cm}, sibling distance=0.5cm, level 1/.style={level distance=0.4cm}] \node [my circle] {} [grow=up] {child {node (A) [my circle,label=right:\tiny{$3$}] {}} child {node [my circle] {} child {node [my circle,label=left:\tiny{$2$}] {}} child {node [my circle,label=left:\tiny{$1$}] {} } } }; \draw [-] (0,-.25) -- (0,0) ; \end{tikzpicture} - \begin{tikzpicture} [baseline, my circle/.style={draw, fill, circle, minimum size=3pt, inner sep=0pt}, level 2/.style={sibling distance=0.5cm,level distance=0.4cm}, sibling distance=0.5cm, level 1/.style={level distance=0.4cm}] \node [my circle] {} [grow=up] {child {node (A) [my circle] {} child {node [my circle,label=left:\tiny{$3$}] {}} child {node [my circle,label=left:\tiny{$1$}] {} } } child {node [my circle,label=left:\tiny{$2$}] {} } }; \draw [-] (0,-.25) -- (0,0) ; \end{tikzpicture} \Biggr \}. \] For each $n\geq1,$ we let $\ca{PSB}(n):= \mathcal{LW}(n)/ \mathcal{I}(n)$. \begin{thm}\label{th: iso}{\cite{MQS}} The collection $\{\ca{PSB}(n)\}_n$ is endowed with a structure of symmetric operad which makes it isomorphic to the operad $\mathcal{P}ost\mathcal{L}ie$. \end{thm} Let us make this operadic structure explicit for any two trees $T\in \ca{PSB}(m)$ and $R\in \ca{PSB}(n)$ that are fully labeled. Let $v$ be the vertex of $T$ that is labeled by $i$; let $k$ be the number of its incoming edges. For a map $\phi\colon\thinspace \{1,...,k\} \to V(R)$, let $T\circ_i^{\phi} R$ to be the tree obtained by substituting the vertex labeled by $i$ by the tree $R$, and then grafting the incoming edges of $i$ to the labeled vertices of $R$ following the map $\phi$. 
The grafting is required to be performed in such a way that it respects the natural order of each fiber of $\phi$. This means that if $\phi(v)^{-1}=\{i_1<i_2<...<i_s\}\subset \{1<...<k\}$, then, in the resulting tree, the incoming edge resulting from the grafting of $i_r$--th incoming edge is the $r$--th incoming edge of $v$. The labeling of $T\circ_i^{\phi} R$ is given by classical re-indexation. The partial composition of $T$ and $R$ at $i$ is: \begin{equation}\label{eq: explicit partial compo partial planar tree} T\circ_i R = \sum_{\phi} T \circ_i^{\phi} R, \end{equation} where $\phi$ runs through the set of maps from $\{1,...,k\}$ to $V(R)$. For instance, one has \begin{equation*} \begin{tikzpicture} [baseline, my circle/.style={draw, fill, circle, minimum size=3pt, inner sep=0pt}, level distance=0.4cm, level 2/.style={sibling distance=0.6cm}, sibling distance=0.6cm,baseline=1.5ex] \node [my circle,label=left:\tiny{$1$}] {} [grow=up] { child {node [my circle,label=left:\tiny{$3$}] {}} child {node [my circle,label=left:\tiny{$2$}] {}} }; \draw [-] (0,-.25) -- (0,0) ; \end{tikzpicture} \circ_1 \begin{tikzpicture} [baseline, my circle/.style={draw, fill, circle, minimum size=3pt, inner sep=0pt}, level distance=0.4cm, level 2/.style={sibling distance=0.6cm}, sibling distance=0.6cm,baseline=1.5ex] \node [my circle,label=left:\tiny{$1$}] {} [grow=up] { child {node [my circle,label=left:\tiny{$2$}] {}} }; \draw [-] (0,-.25) -- (0,0) ; \end{tikzpicture} = \begin{tikzpicture} [baseline, my circle/.style={draw, fill, circle, minimum size=3pt, inner sep=0pt}, level distance=0.4cm, level 2/.style={sibling distance=0.6cm}, sibling distance=0.6cm,baseline=1.5ex] \node [my circle,label=left:\tiny{$1$}] {} [grow=up] { child {node [my circle,label=left:\tiny{$2$}] {}} child {node [my circle,label=left:\tiny{$4$}] {}} child {node [my circle,label=left:\tiny{$3$}] {}} }; \draw [-] (0,-.25) -- (0,0) ; \end{tikzpicture} + \begin{tikzpicture} [baseline, my circle/.style={draw, fill, circle, minimum size=3pt, inner sep=0pt}, level distance=0.4cm, level 2/.style={sibling distance=0.6cm}, sibling distance=0.6cm,baseline=1.5ex] \node [my circle,label=left:\tiny{$1$}] {} [grow=up] { child {node [my circle,label=left:\tiny{$2$}] {} child {node [my circle,label=left:\tiny{$4$}] {}} } child {node [my circle,label=left:\tiny{$3$}] {} } }; \draw [-] (0,-.25) -- (0,0) ; \end{tikzpicture} + \begin{tikzpicture} [baseline, my circle/.style={draw, fill, circle, minimum size=3pt, inner sep=0pt}, level distance=0.4cm, level 2/.style={sibling distance=0.6cm}, sibling distance=0.6cm,baseline=1.5ex] \node [my circle,label=left:\tiny{$1$}] {} [grow=up] { child {node [my circle,label=left:\tiny{$2$}] {} child {node [my circle,label=left:\tiny{$3$}] {}} } child {node [my circle,label=left:\tiny{$4$}] {} } }; \draw [-] (0,-.25) -- (0,0) ; \end{tikzpicture} + \begin{tikzpicture} [baseline, my circle/.style={draw, fill, circle, minimum size=3pt, inner sep=0pt}, level distance=0.4cm, level 2/.style={sibling distance=0.6cm}, sibling distance=0.6cm,baseline=1.5ex] \node [my circle,label=left:\tiny{$1$}] {} [grow=up] { child {node [my circle,label=left:\tiny{$2$}] {} child {node [my circle,label=left:\tiny{$4$}] {}} child {node [my circle,label=left:\tiny{$3$}] {}} } }; \draw [-] (0,-.25) -- (0,0) ; \end{tikzpicture} . \end{equation*} \subsection{The free post-Lie algebra}\label{rem: free postlie} Given an operad $\mathcal{O}$ and a vector space $V$, we denote by $\mathcal{O}(V)$ the free $\mathcal{O}$--algebra generated by $V$. 
It is explicitly given by $ \mathcal{O}(V)= \bigoplus_{n\geq 0} \mathcal{O}(n)\otimes_{\mathbb{S}_n} V^{\otimes n}$. By Theorem \ref{th: iso}, we know that $\ca{PSB}(\mathbb{K})$ is the free post-Lie algebra on $\mathbb{K}$, which is the vector space generated by trees of $\ca{PSB}$, with a unique label. In other words, if we let $\mathbb{K}=\mathbb{K}<\bsq>$ for a generator $\bsq$, then $\ca{PSB}(\mathbb{K})$ is generated by the set \begin{equation*}\label{eq: set gen of free postlie} \mathcal{G}= \bigg\{ \begin{tikzpicture} [ level distance=0.5cm, level 2/.style={sibling distance=0.6cm}, sibling distance=0.6cm,baseline=2.5ex,level 1/.style={level distance=0.4cm}] \node [] {} [grow=up] { child {node [sq] {} } }; \end{tikzpicture} , \begin{tikzpicture} [ level distance=0.5cm, level 2/.style={sibling distance=0.6cm}, sibling distance=0.6cm,baseline=2.5ex,level 1/.style={level distance=0.4cm}] \node [] {} [grow=up] { child {node [sq] {} child {node [sq] {} } } }; \end{tikzpicture} , \begin{tikzpicture} [ level distance=0.5cm, level 2/.style={sibling distance=0.6cm}, sibling distance=0.6cm,baseline=2.5ex,level 1/.style={level distance=0.4cm}] \node [] {} [grow=up] { child {node [sq] {} child {node [sq] {} } child {node [sq] {} } } }; \end{tikzpicture} , \begin{tikzpicture} [ level distance=0.5cm, level 2/.style={sibling distance=0.6cm}, sibling distance=0.6cm,baseline=2.5ex,level 1/.style={level distance=0.4cm}] \node [] {} [grow=up] { child {node [sq] {} child {node [sq] {} child {node [sq] {} }} } }; \end{tikzpicture} , \begin{tikzpicture} [ level distance=0.5cm, level 2/.style={sibling distance=0.6cm}, sibling distance=0.6cm,baseline=2.5ex,level 1/.style={level distance=0.4cm}] \node [] {} [grow=up] { child {node [sq] {} child {node [sq] {} } child {node [sq] {} child {node [sq] {} } } } }; \end{tikzpicture} , \begin{tikzpicture} [ level distance=0.5cm, level 2/.style={sibling distance=0.6cm}, sibling distance=0.6cm,baseline=2.5ex,level 1/.style={level distance=0.4cm}] \node [] {} [grow=up] { child {node [sq] {} child {node [sq] {} child {node [sq] {} } } child {node [sq] {} } } }; \end{tikzpicture} , \begin{tikzpicture} [ level distance=0.5cm, level 2/.style={sibling distance=0.6cm}, sibling distance=0.6cm,baseline=2.5ex,level 1/.style={level distance=0.4cm}] \node [] {} [grow=up] { child {node [my circle bis] {} child {node [sq] {} } child {node [sq] {} child {node [sq] {} } } } }; \end{tikzpicture} , \begin{tikzpicture} [ level distance=0.5cm, level 2/.style={sibling distance=0.6cm}, sibling distance=0.6cm,baseline=2.5ex,level 1/.style={level distance=0.4cm}] \node [] {} [grow=up] { child {node [sq] {} child {node [sq] {} } child {node [sq] {} } child {node [sq] {} } } }; \end{tikzpicture} , \begin{tikzpicture} [ level distance=0.5cm, level 2/.style={sibling distance=0.6cm}, sibling distance=0.6cm,baseline=2.5ex,level 1/.style={level distance=0.4cm}] \node [] {} [grow=up] { child {node [sq] {} child {node [sq] {} child {node [sq] {} child {node [sq] {} }}} } }; \end{tikzpicture} ,\dots \bigg\}. \end{equation*} Let us distinguish the subset $\ca{G}_{\bullet}$ of those classes of trees that have at least one round-shape vertex (\emph{i.e.} the generating set of the Lie elements); let $\ca{G}_{\bsqsmall}$ be its complementary. The operadic structure of $\ca{PSB}$ provides both the Lie and the post-Lie product of any two elements. Explicitly, the Lie product of two generators $R$ and $S$ is the class of the tree $C(\bullet; R,S)$; their post-Lie product $R\triangleright S$ is as follows. 
\begin{defn} For a tree $T$ in $\ca{G}$, let $V_{\bullet}(T)$ and $V_{\bsqsmall}(T)$ be the sets of round-shape and square-shape vertices of $T$, respectively. \end{defn} Suppose $R$ is a tree in $\ca{G}\setminus \ca{G}_{\bullet}$ and $S \in \ca{G}$. The post-Lie product of $R$ and $S$ is given by \begin{equation} \label{eq: grafting sq tree to any tree} R\triangleright S = \sum_{v \in V_{\bsqsmall}(S)} R\triangleright_v S. \end{equation} Suppose that $R$ is in $\ca{G}_{\bullet}$. Recall from \cite[Section 3.3.1]{MQS} that transpositions act on each round-shape vertex of $R$ by switching its two outputs, and that each tuple of transpositions $\sigma \in \mathbb{S}_2^{\times |V_{\bullet}(R)|}$ provides a tree $R_{\sigma}$ by performing such action vertex-wise. Recall also that $R$ (and also $R_{\sigma}$) can be \emph{contracted} into a tree $Con(R)$ with only one round-shape vertex (with possibly more than two outputs, so such a tree does not necessarily belong to $\ca{G}$); it is obtained by contracting all the edges between round-shape vertices. Given two vertices $v_1$ and $v_2$ of a tree $T$, we let $Con_{(v_1,v_2)}(T)$ be the tree obtained from $T$ by contracting the edge between $v_1$ and $v_2$; the resulting vertex inherits the shape of $v_2$. One has \begin{equation}\label{eq: grafting lie to any tree} R\triangleright S = \sum_{v \in V_{\bsqsmall}(S)} \sum_{\sigma\in \mathbb{S}_2^{\times |V_{\bullet}(R)|}} \epsilon(\sigma) Con_{(r,v)}(Con(R_{\sigma}) \triangleright_v S), \end{equation} where $r$ is the root-vertex of $R$ and the sign $\epsilon(\sigma)$ is the product $sgn(\sigma_1)\cdots sgn(\sigma_k)$ for $\sigma=(\sigma_1,...,\sigma_k)$. Let us interpret $Con_{(r,v)}(Con(R_{\sigma}) \triangleright_v S)$ in terms of grafting of forests: if $R$ has $k$ round-shape vertices, it corresponds to a $k$--bracketing of trees $T_1,...,T_k$ in $\ca{G}_{\bsqsmall}$, so that one has \begin{equation}\label{eq: relation contraction and grafting forests} Con_{(r,v)}(Con(R_{\sigma}) \triangleright_v S) = (T_{\sigma(1)}T_{\sigma(2)}\cdots T_{\sigma(k)}) \triangleright_v S. \end{equation} \subsection{The universal enveloping algebra of the free post-Lie algebra}\label{sec: univ env alg} \begin{defn} Let $n,k\geq 1$ and let $q_1+...+q_k=n$ be a partition of $n$ by non-negative integers $q_i\geq 0$. A \emph{$(q_1,...,q_k)$--shuffle} is a partition of $\{1<\cdots <n\}$ by $k$ ordered sets of cardinality $q_i$ for each $1\leq i\leq k$. \end{defn} The number of $(q_1,...,q_k)$--shuffles is $\operatorname{sh}_{q_1,...,q_k}:=\frac{(q_1+...+q_k)!}{q_1!\cdots q_k!}$. \\ Recall that the universal enveloping algebra of a post-Lie algebra (and in fact of any Lie algebra) is equipped with the shuffle coproduct $\Delta_{sh}\colon\thinspace \mathcal{U}({\mathfrak g}) \to \mathcal{U}({\mathfrak g})^{\otimes 2}$ that makes it a bialgebra with respect to its classical product. Lie elements are primitive for $\Delta_{sh}$, that is, one has $\Delta_{sh}(l) = l\otimes 1 + 1\otimes l$ for all $l\in {\mathfrak g}$. Therefore, for any Lie element $l$, any $i\geq 1$ and any $k\geq 2$, one has \begin{equation}\label{eq: shuffle copro on lie} \Delta_{sh}^{(k)} (l^{i}) = \sum_{i_1+...+i_k=i} \operatorname{sh}_{i_1,...,i_k} l^{i_1} \otimes \cdots \otimes l^{i_k}. \end{equation} Recall from Remark \ref{rmk: GL product and D-alg} that the Grossman-Larson product on $(\mathcal{U}({\mathfrak g}),\Delta_{sh})$ is given by $\ast\colon\thinspace X\otimes Y \mapsto X_{(1)}(X_{(2)}\triangleright Y)$ for all $X,Y\in \ca{U}({\mathfrak g})$.
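For instance (as one checks directly from the primitivity of $l$), \eqref{eq: shuffle copro on lie} with $k=2$ and $i=2$ reads $\Delta_{sh}(l^{2})= l^{2}\otimes 1 + 2\, l\otimes l + 1\otimes l^{2}$, and the Grossman-Larson product of $x\in {\mathfrak g}$ with any $Y\in \mathcal{U}({\mathfrak g})$ reduces to $x\ast Y = x\,Y + x\triangleright Y$; this is spelled out in Lemma \ref{lem: grafting of Lie element} below.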
Recall also that here $\triangleright \colon\thinspace \mathcal{U}({\mathfrak g})^{\otimes 2} \to \mathcal{U}({\mathfrak g})$ is the extension of the post-Lie product and it satisfies the properties \ref{D bial item1}, \ref{D bial item4} and \ref{D bial item5}. The left-side extension of the post-Lie product was identified in \cite{MQS} as \emph{post-symmetric braces}, which are operations encoded by the corollas in $\ca{PSB}$. This means that, for $X=x_1\cdots x_n \in \mathcal{U}({\mathfrak g})$ and $y\in {\mathfrak g}$, one has \begin{equation}\label{eq: braces} X\triangleright y = \begin{tikzpicture} [ level distance=0.5cm, level 2/.style={sibling distance=0.6cm}, sibling distance=0.6cm,baseline=2.5ex,level 1/.style={level distance=0.4cm}] \node [] {} [grow'=up] { child {node [my circle, label=right:\small{$n+1$}] {} child {node [my circle,label=above:\small{$1$}] {}} child {node [my circle,label=above:\small{$2$}] {}} child {node [label=above:\small{$\dots$}] {}} child {node [my circle,label=above:\small{$n$}] {}} } }; \end{tikzpicture} \left( x_1 \otimes \cdots \otimes x_n \otimes y \right). \end{equation} The rest of this section is dedicated to the universal enveloping algebra of the free post-Lie algebra $\ca{PSB}(\mathbb{K})$. Remark that, since the product of the free associative algebra on $\ca{PSB}(\mathbb{K})$ is given by concatenation of trees, its underlying vector space is generated by the forests of $\mathcal{G}$. The universal enveloping algebra $(\mathcal{U}(\ca{PSB} (\mathbb{K})), \ast)$ is the vector space generated by the forests on $\mathcal{G}$, modded out by the ideal generated by $RS-SR- C(\bullet; R,S)$ for every $R$ and $S$ in $\mathcal{G}$. For later use, let us investigate the product $E\ast F$ for a few particular forests $E$ and $F$. \begin{lem}\label{lem: grafting of Lie element} For any Lie element $l$ and any forests $F$, one has $l \ast F = l F + l\triangleright F $. \end{lem} \begin{proof} Recall that Lie elements are primitive elements for the shuffle coproduct. We conclude by observing that $l\ast F :=l_{(1)}\cdot (l_{(2)}\triangleright F)$. \end{proof} \begin{lem}\label{lem: calc 2 F=T} For $n\geq 1$ and $T$ a tree, one has \begin{equation*} \bsq^{\times n} \triangleright T = \sum_{1\leq k \leq |T|} \sum_{ \substack{n_1+...+n_k=n,~ n_i>0 \\ \{v_1,...,v_k\},~ v_i\in T, v_i\neq v_j} } \operatorname{sh}_{n_1,...,n_k}~ \bsq^{\times n} \triangleright_{v_1,...,v_k}^{n_1,...,n_k} T. \end{equation*} \end{lem} \begin{proof} Recall from \eqref{eq: braces} that $\bsq^{\times n} \triangleright T$ is given by \begin{equation*} \Big( \Big( \cdots \Big( \begin{tikzpicture} [ level distance=0.5cm, level 2/.style={sibling distance=0.6cm}, sibling distance=0.6cm,baseline=2.5ex,level 1/.style={level distance=0.4cm}] \node [] {} [grow'=up] { child {node [my circle, label=right:\small{$n+1$}] {} child {node [my circle,label=above:\small{$1$}] {}} child {node [my circle,label=above:\small{$2$}] {}} child {node [label=above:\small{$\dots$}] {}} child {node [my circle,label=above:\small{$n$}] {}} } }; \end{tikzpicture} \circ_{n+1} T \Big) \circ_{1} \bsq \Big) \circ_2 \cdots \Big) \circ_n \bsq. \end{equation*} From the operadic structure of $\ca{PSB}$, see \eqref{eq: explicit partial compo partial planar tree}, we know that this is the sum of $\bsq^{\times n} \triangleright_{v_1,...,v_k}^{n_1,...,n_k} T$ over all distinct vertices $v_1,...,v_k$ and all the maps $\phi\colon\thinspace \{1,...,n\}\to \{1,...,k\}$ such that $|\phi(i)^{-1}|=n_i$ for $1\leq i\leq k$. 
\end{proof} \begin{lem}\label{lem: calc 1 F=TA...Tk} Let $F=T_1\cdots T_k$ be a forest of $k$ trees and let $n\geq 1$. One has \begin{equation*} \bsq^{\times n} \ast F = \sum_{j_0+...+j_{k}=n, ~j_i\geq 0} \operatorname{sh}_{j_0,...,j_{k}} \bsq^{\times j_0} (\bsq^{\times j_1} \triangleright T_1 ) \cdots (\bsq^{\times j_k} \triangleright T_k). \end{equation*} \end{lem} \begin{proof} By \eqref{eq: shuffle copro on lie}, one has \begin{equation*} \bsq^{\times n} \ast F = \sum_{j_0+j_1=n, ~j_i\geq 0} \operatorname{sh}_{j_0,j_1} \bsq^{\times j_0}( \bsq^{\times j_1} \triangleright F), \end{equation*} and in turn, by \ref{D bial item4} and \eqref{eq: shuffle copro on lie}, one has \begin{equation*} \bsq^{\times i} \triangleright F = \bsq^{\times i} \triangleright (T_1\cdots T_k) = \sum_{i_1+...+i_k=i} \operatorname{sh}_{i_1,...,i_k} (\bsq^{\times i_1} \triangleright T_1 ) \cdots (\bsq^{\times i_k} \triangleright T_k ). \end{equation*} \end{proof} \subsection{Nested tubings} In this subsection we present two notions of nested tubings of forests: the \emph{vertical} nested tubings and the \emph{horizontal} ones. The term \emph{tubing} is borrowed from \cite{CarrDevadoss}, though the present definition differs from the original one. \\ Recall the level partial order $\prec$ and the canonical linear order $<$ of Section \ref{sec: Operad of PRT}, given for trees. For a vertex $v$ of a tree $T$, let $\mathfrak{b}_v \subset V(T)$ be the subset of the $\prec$--predecessors of $v$; it inherits the order $<$. The set of roots $\text{Root}(F)$ of a forest $F$ has a \emph{horizontal} order $<_h$ that is increasing as one goes from left to right: for a forest $ST$, one has $v<_h w$ for $v$ the root of $S$ and $w$ the root of $T$. Recall that $\mathcal{G}_{\bsqsmall}$ is the subset of $\mathcal{G}$ of those trees that have only square-shape vertices $\bsq$. Let $\ca{F}or$ be the set of the forests on $\mathcal{G}_{\bsqsmall}$; it admits a decomposition into the subsets $\ca{F}or_n$ of those forests that have exactly $n$ vertices. Let $\ca{F}or_n'$ be the set $\ca{F}or_n \setminus \{\bsq^{\times n}\}$, where $\bsq^{\times n}$ denotes the {horizontal} forest of $n$ trees. \begin{defn} A \emph{higher set} of a poset $(\mathcal{P},<)$ is a subset of $\mathcal{P}$ that contains the $<$--successors of each of its elements. \end{defn} \begin{defn}\label{de: tube} A \emph{tube} of a tree $T$ is a connected higher set $t$ of $(V(T),\prec)$ such that, for each $v\in t$, the set $t\cap \mathfrak{b}_v$ is a higher set of $(\mathfrak{b}_v,<)$. \end{defn} \begin{defn} A \emph{tube} of a forest $F\in \ca{F}or$ is a subset of $V(F)$ such that its intersection with $(\text{Root}(F),<_h)$ is a higher set and such that it intersects each tree of $F$ in a (possibly empty) tube. \end{defn} \begin{rem} A tube of a forest $F$ can be identified with a subforest of $F$; we will often use this identification implicitly. \end{rem} \begin{defn}\label{de: pre tubing} A \emph{nested tubing} of $F\in \ca{F}or$ is a collection of non-empty tubes of $F$ that are pairwise nested and such that: \begin{enumerate} \item it contains at least two tubes; \label{item: at least 2 tubes} \item it contains the maximal tube (the tube that is the whole set of the vertices of the forest). \label{item: max tube} \end{enumerate} For a nested tubing $t=\{t_i\}_{i\in I}$ of $F$, the \emph{boundary} of the tube $t_i$ is $\partial t_i= t_i \setminus \bigcup_{t_j\varsubsetneq t_i} t_j$.
\end{defn} \subsubsection{Vertical nested tubings} \begin{defn}\label{de: tubing} A \emph{vertical} nested tubing is a nested tubing such that: \begin{enumerate} \item\label{item: horizontal tubes} the boundary of each tube is \emph{not} a horizontal forest of more than one tree; and, \item\label{item: forest conneccted to one vertex} if the boundary of a tube is a forest, then either all the roots of this forest are connected to a single vertex of a sub tube, or none of the roots are connected to any sub tube. \end{enumerate} \end{defn} \begin{defn} For $F$ in $\ca{F}or'$, let $Tub(F)$ be the set of its vertical nested tubings. \end{defn} \begin{example}\label{ex: tubings and not} a) is a tube, b) is a higher set that is not a tube (condition on $\mathfrak{b}_v$ unsatisfied), c), d) and e) are not vertical nested tubings (condition \ref{item: horizontal tubes} is not satisfied; in addition for d), condition \ref{item: forest conneccted to one vertex} is not satisfied either). The three last examples f), g) and h) are vertical nested tubings. \begin{equation*} \pgfdeclarelayer{foreground} \pgfdeclarelayer{fforeground} \pgfdeclarelayer{background} \pgfdeclarelayer{bbackground} \pgfsetlayers{bbackground,background,main,foreground,fforeground} a) \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=1.0cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.6cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \begin{pgfonlayer}{fforeground} \node [] {} [grow=up] { child { edge from parent[draw=none] child {node [sq] (A) {} child {node [sq] (B) {} } child {node [sq] (C) {}} }} }; \end{pgfonlayer} \begin{pgfonlayer}{foreground} \draw [line width=10pt,opacity=1,black!50,line cap=round,rounded corners] (A.center) -- (B.center); \end{pgfonlayer} \end{tikzpicture} ~~~b) \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=1.0cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.6cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \begin{pgfonlayer}{fforeground} \node [] {} [grow=up] { child { edge from parent[draw=none] child {node [sq] (A) {} child {node [sq] (B) {} } child {node [sq] (C) {}} }} }; \end{pgfonlayer} \begin{pgfonlayer}{foreground} \draw [line width=10pt,opacity=1,black!50,line cap=round,rounded corners] (A.center) -- (C.center); \end{pgfonlayer} \end{tikzpicture} ~~~c) \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=1.0cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.6cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \begin{pgfonlayer}{fforeground} \node [] {} [grow=up] { child { edge from parent[draw=none] child {node [sq] (A) {} child {node [sq] (B) {} } child {node [sq] (C) {}} }} }; \end{pgfonlayer} \begin{pgfonlayer}{foreground} \draw [fill=black!50, draw=black!50, line cap=round,rounded corners] (A) circle (0.2); \end{pgfonlayer} \begin{pgfonlayer}{bbackground} \draw [line width=20pt,opacity=1,draw=black!25,fill=black!25,line cap=round,rounded corners] (A.center) -- (B.center) -- (C.center) --cycle; \end{pgfonlayer} \end{tikzpicture} ~~~d) \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.6cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \begin{pgfonlayer}{fforeground} \node [] {} 
[grow=up] { child { edge from parent[draw=none] child {node [sq] (A) {} child {node [sq] (B) {} } }} child { edge from parent[draw=none] child { {node [sq,black] (l1) {} } }} }; \end{pgfonlayer} \begin{pgfonlayer}{foreground} \draw [fill=black!50, draw=black!50, line cap=round,rounded corners] (A) circle (0.18); \end{pgfonlayer} \begin{pgfonlayer}{bbackground} \draw [line width=17pt,opacity=1,draw=black!25,fill=black!25,line cap=round,rounded corners] (B.center) -- (A.center)--(l1.center) --cycle; \end{pgfonlayer} \end{tikzpicture} ~~~e) \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.6cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \begin{pgfonlayer}{fforeground} \node [] {} [grow=up] { child { edge from parent[draw=none] child {node [sq] (A) {} child {node [sq] (B) {} } }} child { edge from parent[draw=none] child { {node [sq,black] (l1) {} } }} }; \end{pgfonlayer} \begin{pgfonlayer}{main} \draw [line width=10pt,black!50, line cap=round,rounded corners] (l1.center) -- (A.center); \end{pgfonlayer} \begin{pgfonlayer}{bbackground} \draw [line width=17pt,opacity=1,draw=black!25,fill=black!25,line cap=round,rounded corners] (A.center) -- (l1.center)--(B.center) --cycle; \end{pgfonlayer} \end{tikzpicture} ~~~f) \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.6cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \begin{pgfonlayer}{fforeground} \node [] {} [grow=up] { child { edge from parent[draw=none] child {node [sq] (A) {} child {node [sq] (B) {} } }} child { edge from parent[draw=none] child { {node [sq,black] (l1) {} } }} }; \end{pgfonlayer} \begin{pgfonlayer}{foreground} \draw [fill=black!10, draw=black!10, line cap=round,rounded corners] (A) circle (0.15); \end{pgfonlayer} \begin{pgfonlayer}{main} \draw [line width=12pt,black!50, line cap=round,rounded corners] (l1.center) -- (A.center); \end{pgfonlayer} \begin{pgfonlayer}{bbackground} \draw [line width=17pt,opacity=1,draw=black!25,fill=black!25,line cap=round,rounded corners] (B.center) -- (A.center)--(l1.center) --cycle; \end{pgfonlayer} \end{tikzpicture} ~~~g) \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.6cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \begin{pgfonlayer}{fforeground} \node [] {} [grow=up] { child { edge from parent[draw=none] child {node [sq] (A) {} child {node [sq] (B) {} } }} child { edge from parent[draw=none] child { {node [sq,black] (l1) {} } }} }; \end{pgfonlayer} \begin{pgfonlayer}{foreground} \draw [fill=black!10, draw=black!10, line cap=round,rounded corners] (A) circle (0.15); \end{pgfonlayer} \begin{pgfonlayer}{main} \draw [line width=12pt,black!50, line cap=round,rounded corners] (B.center) -- (A.center); \end{pgfonlayer} \begin{pgfonlayer}{bbackground} \draw [line width=17pt,opacity=1,draw=black!25,fill=black!25,line cap=round,rounded corners] (B.center) -- (A.center)--(l1.center) --cycle; \end{pgfonlayer} \end{tikzpicture} ~~~h) \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.6cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] 
\begin{pgfonlayer}{fforeground} \node [] {} [grow=up] { child { edge from parent[draw=none] child {node [sq] (r) {} } } child { edge from parent[draw=none] child {node [sq] (m) {} } } child { edge from parent[draw=none] child { {node [sq,black] (l1) {} child {node [sq] (l2) {} } } }} }; \end{pgfonlayer} \begin{pgfonlayer}{foreground} \draw [fill=black!50, draw=black!50, line cap=round,rounded corners] (r) circle (0.15); \end{pgfonlayer} \begin{pgfonlayer}{bbackground} \draw [line width=17pt,opacity=1,draw=black!25,fill=black!25,line cap=round,rounded corners] (r.center) --(l2.center) -- (l1.center)--(r.center); \end{pgfonlayer} \end{tikzpicture} \end{equation*} \end{example} \newcommand{\treeTT}{ \begin{pgfonlayer}{fforeground} \node [] {} [grow=up] { child { edge from parent[draw=none] child {node [sq] (RA) {} child {node [sq] (RB) {} } }} child { edge from parent[draw=none] child {node [sq] (LA) {} child {node [sq] (LB) {} } }} }; \end{pgfonlayer} \begin{pgfonlayer}{bbackground} \draw [line width=20pt,opacity=1,draw=black!25,fill=black!25,line cap=round,rounded corners] (LB.center) -- (LA.center)--(RA.center) --(RB.center) --cycle; \end{pgfonlayer} } \newcommand{\treetwothree}{ \begin{pgfonlayer}{fforeground} \node [] {} [grow'=up] { child {edge from parent[draw=none] child {node [sq] (l1) {} child {node [sq] (l2) {} child {node [sq] (l31) {} } child {node [sq] (l32) {} } child {node [sq] (l33) {} } }}} }; \end{pgfonlayer} \begin{pgfonlayer}{bbackground} \draw [line width=22pt,opacity=1,draw=black!25,fill=black!25,line cap=round,rounded corners] (l31.center) --(l1.center) -- (l33.center) -- (l32.center) -- (l31.center) --(l1.center)--cycle; \end{pgfonlayer} } \begin{example} Here below are all the vertical nested tubings of \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.5cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \node [] {} [grow=up] { child { edge from parent[draw=none] child {node [sq] (A) {} child {node [sq] (B) {} } }} child { edge from parent[draw=none] child {node [sq] (C) {} child {node [sq] (D) {} } }} }; \end{tikzpicture}. 
% \begin{equation*} \pgfdeclarelayer{foreground} \pgfdeclarelayer{fforeground} \pgfdeclarelayer{background} \pgfdeclarelayer{bbackground} \pgfsetlayers{bbackground,background,main,foreground,fforeground} \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.5cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \treeTT \begin{pgfonlayer}{foreground} \end{pgfonlayer} \begin{pgfonlayer}{main} \draw [line width=12pt,black!65, line cap=round,rounded corners] (RB.center) -- (RA.center); \end{pgfonlayer} \end{tikzpicture} ~ \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.5cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \treeTT \begin{pgfonlayer}{main} \draw [line width=12pt,black!65, line cap=round,rounded corners] (RB.center) -- (RA.center) --(LA.center); \end{pgfonlayer} \end{tikzpicture} ~ \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.5cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \treeTT \begin{pgfonlayer}{main} \draw [line width=12pt,black!65, line cap=round,rounded corners] (LB.center) -- (LA.center) --(RA.center); \end{pgfonlayer} \end{tikzpicture} ~ \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.5cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \treeTT \begin{pgfonlayer}{main} \draw [fill=black!40, draw=black!40, line cap=round,rounded corners] (RA) circle (0.12); \end{pgfonlayer} \begin{pgfonlayer}{background} \draw [line width=14pt,black!65, line cap=round,rounded corners] (RB.center) -- (RA.center); \end{pgfonlayer} \end{tikzpicture} ~ \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.5cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \treeTT \begin{pgfonlayer}{main} \draw [line width=9pt,black!40, line cap=round,rounded corners] (RB.center) -- (RA.center) ; \end{pgfonlayer} \begin{pgfonlayer}{background} \draw [line width=14pt,black!65, line cap=round,rounded corners] (RB.center) -- (RA.center) --(LA.center); \end{pgfonlayer} \end{tikzpicture} ~ \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.5cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \treeTT \begin{pgfonlayer}{main} \draw [fill=black!40, draw=black!40, line cap=round,rounded corners] (RA) circle (0.12); \end{pgfonlayer} \begin{pgfonlayer}{background} \draw [line width=14pt,black!65, line cap=round,rounded corners] (LB.center) -- (LA.center) --(RA.center); \end{pgfonlayer} \end{tikzpicture} ~ \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.5cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \treeTT \begin{pgfonlayer}{foreground} \draw [fill=black!10, draw=black!10, line cap=round,rounded corners] (RA) circle (0.12); 
\end{pgfonlayer} \begin{pgfonlayer}{main} \draw [line width=9pt,black!40, line cap=round,rounded corners] (RB.center) -- (RA.center) ; \end{pgfonlayer} \begin{pgfonlayer}{background} \draw [line width=14pt,black!65, line cap=round,rounded corners] (RB.center) -- (RA.center) --(LA.center); \end{pgfonlayer} \end{tikzpicture} ~ \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.5cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \treeTT \begin{pgfonlayer}{foreground} \draw [fill=black!10, draw=black!10, line cap=round,rounded corners] (RA) circle (0.11); \end{pgfonlayer} \begin{pgfonlayer}{main} \draw [line width=9pt,black!40, line cap=round,rounded corners] (LA.center) --(RA.center); \end{pgfonlayer} \begin{pgfonlayer}{background} \draw [line width=14pt,black!65, line cap=round,rounded corners] (RB.center) -- (RA.center) --(LA.center); \end{pgfonlayer} \end{tikzpicture} ~ \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.5cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \treeTT \begin{pgfonlayer}{foreground} \draw [fill=black!10, draw=black!10, line cap=round,rounded corners] (RA) circle (0.11); \end{pgfonlayer} \begin{pgfonlayer}{main} \draw [line width=9pt,black!40, line cap=round,rounded corners] (LA.center) --(RA.center); \end{pgfonlayer} \begin{pgfonlayer}{background} \draw [line width=14pt,black!65, line cap=round,rounded corners] (LB.center) -- (LA.center) --(RA.center); \end{pgfonlayer} \end{tikzpicture} \end{equation*} \end{example} \subsubsection{Horizontal nested tubings} \begin{defn} A \emph{horizontal} nested tubing of $F\in Forest$ is a nested tubing of $F$ such that the boundary of each tube is a horizontal forest. \end{defn} We let $hTub(F)$ denote the set of horizontal nested tubings of $F$. For any $p_1+...+p_k = N$ such that $p_i>0$, and $F\in \ca{F}or'_N$, we let $hTub(F)_{p_1,...p_k}$ be the subset of $hTub(F)$ of those horizontal nested tubings $t=t_1\supset t_2 \supset \cdots \supset t_k$ such that $|\partial t_i|=p_i$ for each $1\leq i \leq k$. One has \begin{equation}\label{eq: decompo tubings} hTub(F) = \bigsqcup_{p_1+...+p_k=N, p_i>0} hTub_{p_1,...,p_k}(F). \end{equation} \begin{example} In Example \ref{ex: tubings and not}, a) is a tube whose boundary is not a horizontal forest, c) to g) are horizontal nested tubings, and h) is not horizontal. \end{example} \begin{example}\label{example: horizontal tubings} Here below are all the horizontal nested tubings of \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.5cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \node [] {} [grow=up] { child { edge from parent[draw=none] child {node [sq] (A) {} child {node [sq] (B) {} } }} child { edge from parent[draw=none] child {node [sq] (C) {} child {node [sq] (D) {} } }} }; \end{tikzpicture}. 
% \begin{equation*} \pgfdeclarelayer{foreground} \pgfdeclarelayer{fforeground} \pgfdeclarelayer{background} \pgfdeclarelayer{bbackground} \pgfsetlayers{bbackground,background,main,foreground,fforeground} \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.5cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \treeTT \begin{pgfonlayer}{main} \draw [line width=12pt,black!65, line cap=round,rounded corners] (RA.center) --(LA.center); \end{pgfonlayer} \end{tikzpicture} ~ \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.5cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \treeTT \begin{pgfonlayer}{foreground} \draw [fill=black!40, draw=black!40, line cap=round,rounded corners] (RA) circle (0.15); \end{pgfonlayer} \begin{pgfonlayer}{background} \draw [line width=14pt,black!65, line cap=round,rounded corners] (RA.center) --(LA.center); \end{pgfonlayer} \end{tikzpicture} ~ \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.5cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \treeTT \begin{pgfonlayer}{main} \draw [fill=black!40, draw=black!40, line cap=round,rounded corners] (RA) circle (0.15); \end{pgfonlayer} \begin{pgfonlayer}{background} \draw [line width=14pt,black!65, line cap=round,rounded corners] (RB.center) -- (RA.center) --(LA.center); \end{pgfonlayer} \end{tikzpicture} ~ \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.5cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \treeTT \begin{pgfonlayer}{main} \draw [line width=9pt,black!40, line cap=round,rounded corners] (LA.center) --(RA.center); \end{pgfonlayer} \begin{pgfonlayer}{background} \draw [line width=14pt,black!65, line cap=round,rounded corners] (RB.center) -- (RA.center) --(LA.center); \end{pgfonlayer} \end{tikzpicture} ~ \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.5cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \treeTT \begin{pgfonlayer}{main} \draw [line width=9pt,black!40, line cap=round,rounded corners] (LA.center) --(RA.center); \end{pgfonlayer} \begin{pgfonlayer}{background} \draw [line width=14pt,black!65, line cap=round,rounded corners] (LB.center) -- (LA.center) --(RA.center); \end{pgfonlayer} \end{tikzpicture} ~ \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.5cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \treeTT \begin{pgfonlayer}{foreground} \draw [fill=black!10, draw=black!10, line cap=round,rounded corners] (RA) circle (0.12); \end{pgfonlayer} \begin{pgfonlayer}{main} \draw [line width=9pt,black!40, line cap=round,rounded corners] (RB.center) -- (RA.center) ; \end{pgfonlayer} \begin{pgfonlayer}{background} \draw [line width=14pt,black!65, line cap=round,rounded corners] (RB.center) -- (RA.center) --(LA.center); \end{pgfonlayer} \end{tikzpicture} ~ 
\begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.5cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \treeTT \begin{pgfonlayer}{foreground} \draw [fill=black!10, draw=black!10, line cap=round,rounded corners] (RA) circle (0.11); \end{pgfonlayer} \begin{pgfonlayer}{main} \draw [line width=9pt,black!40, line cap=round,rounded corners] (LA.center) --(RA.center); \end{pgfonlayer} \begin{pgfonlayer}{background} \draw [line width=14pt,black!65, line cap=round,rounded corners] (RB.center) -- (RA.center) --(LA.center); \end{pgfonlayer} \end{tikzpicture} ~ \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.5cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \treeTT \begin{pgfonlayer}{foreground} \draw [fill=black!10, draw=black!10, line cap=round,rounded corners] (RA) circle (0.11); \end{pgfonlayer} \begin{pgfonlayer}{main} \draw [line width=9pt,black!40, line cap=round,rounded corners] (LA.center) --(RA.center); \end{pgfonlayer} \begin{pgfonlayer}{background} \draw [line width=14pt,black!65, line cap=round,rounded corners] (LB.center) -- (LA.center) --(RA.center); \end{pgfonlayer} \end{tikzpicture} \end{equation*} \end{example} \subsection{Post-Lie Magnus expansion in terms of nested tubings} For a post-Lie algebra ${\mathfrak g}$, the pLMe of $x\in {\mathfrak g}$ is the element $\chi(x)$ that satisfies the equation \begin{equation*} \exp_{\cdot}(x)=\exp_{\ast}\big(\chi(x)\big). \end{equation*} In particular, in $\hat{\mathcal{U}}_{\ast}(\ca{PSB}(\mathbb{K}))$, it is a sum over all forests in $\ca{F}or$: \begin{equation*} \chi(\bsq)= \sum_{F\in \ca{F}or} c_F F. \end{equation*} We propose to compute the coefficients $c_F$ for any forest $F$ by two methods. \subsubsection{Post-Lie Magnus expansion via vertical nested tubings} As shown in \cite[Equation (81)]{KI}, the pLMe can be expressed as a sum $\chi= \sum_{n\geq 1} \chi_n$, where $\chi_1(x)=x$ and \begin{equation*} \chi_n(x)= \frac{x^n}{n!} - \sum_{\stackrel{k\geq 2, p_i>0}{p_1+...+p_k=n}} \frac{1}{k!} \chi_{p_1}(x)\ast \cdots \ast \chi_{p_k}(x) ~\text{ for all } x\in {\mathfrak g}. \end{equation*} We will describe the maps $\chi_n\colon\thinspace \ca{PSB} (\mathbb{K}) \to \ca{PSB} (\mathbb{K})\subset \hat{\mathcal{U}}_{\ast}(\ca{PSB}(\mathbb{K}))$ for the free post-Lie algebra on $\mathbb{K}=\mathbb{K}<\bsq>$. Note that $\chi_n(\bsq)$ is a homogeneous Lie polynomial of degree $n$. In particular, in $\hat{\mathcal{U}}_{\ast}(\ca{PSB}(\mathbb{K}))$ it is a sum over all forests in $\ca{F}or$ with $n$ vertices: \begin{equation*} \chi_n(\bsq)= \sum_{F\in \ca{F}or_n} c_F F. \end{equation*} \begin{defn} For $n,k\geq 2$, a partition $p_1+...+p_k=n$ into strictly positive integers and $F$ in $\ca{F}or'_n$, let $\mathfrak{D}(F)_{p_1,...,p_k}$ be the set of all the possible expressions \begin{equation}\label{eq: graft and conc} F= F_1 \ltimes_1 (F_2 \ltimes_2 (\cdots (F_{k-2} \ltimes_{k-2} (F_{k-1} \ltimes_{k-1} F_k)) \cdots )), \end{equation} where $F_i$ runs through the forests with $p_i$ vertices that are not horizontal, and each $\ltimes_i$ is either the concatenation or a one-vertex grafting operation $\triangleright_v$ for some vertex $v$.
\end{defn} \begin{lem}\label{lem: decompo} There is a bijection between $\mathfrak{D}(F)_{p_1,...,p_k}$ and the set of all the vertical nested tubings $t=t_1\supset t_2 \supset \cdots \supset t_k$ of $F$ such that $|\partial t_i|=p_i$ for each $1\leq i \leq k$. \end{lem} \begin{proof} Since concatenation and grafting do not remove vertices nor edges, the decomposition \eqref{eq: graft and conc} provides an embedding of $F_1,...,F_k$ into $F$, which we claim, can be represented by a vertical nested tubing. Explicitly, the tube $t_k$ is $F_k$ seen in $F$ and is the most right sided subforest forest; the tube $t_{k-1}$ is sub forest $F_{k-1} \ltimes_{k-1} F_k$ of $F$ and it contains $F_k$, etc. For example, one has \begin{equation*} \begin{tikzpicture} [ level distance=0.5cm, level 2/.style={sibling distance=0.6cm}, sibling distance=0.6cm,baseline=2.5ex,level 1/.style={level distance=0.4cm}] \node [] {} [grow=up] { child {node [sq] {} } }; \end{tikzpicture} \triangleright_{a} \left( \begin{tikzpicture} [ level distance=0.5cm, level 2/.style={sibling distance=0.6cm}, sibling distance=0.6cm,baseline=2.5ex,level 1/.style={level distance=0.4cm}] \node [] {} [grow=up] { child {node [sq] {} child {node [sq] {} } } }; \end{tikzpicture} \triangleright_{b} \left( \begin{tikzpicture} [ level distance=0.5cm, level 2/.style={sibling distance=0.6cm}, sibling distance=0.6cm,baseline=2.5ex,level 1/.style={level distance=0.4cm}] \node [] {} [grow=up] { child {node [sq,label=left:\tiny{$b$}] {} } }; \end{tikzpicture} \times \left( \begin{tikzpicture} [ level distance=0.5cm, level 2/.style={sibling distance=0.6cm}, sibling distance=0.6cm,baseline=2.5ex,level 1/.style={level distance=0.4cm}] \node [] {} [grow=up] { child {node [sq,label=left:\tiny{$a$}] {} child {node [sq] {} } } }; \end{tikzpicture} \right)\right)\right) \longleftrightarrow \pgfdeclarelayer{foreground} \pgfdeclarelayer{fforeground} \pgfdeclarelayer{background} \pgfdeclarelayer{bbackground} \pgfsetlayers{bbackground,background,main,foreground,fforeground} \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.8cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.6cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \begin{pgfonlayer}{fforeground} \node [] {} [grow=up] { child { edge from parent[draw=none] child {node [sq] (A) {} child {node [sq] (B) {} } child {node [sq] (C) {}} }} child { edge from parent[draw=none] child { {node [sq,black] (l1) {} child [black] {node [sq] (l2) {} child {node [sq] (l3) {}} } } }} }; \end{pgfonlayer} \begin{pgfonlayer}{foreground} \draw [line width=7pt,opacity=1,black!10,line cap=round,rounded corners] (A.center) -- (B.center); \end{pgfonlayer} \begin{pgfonlayer}{main} \draw [line width=12pt,black!40, line cap=round,rounded corners] (l1.center) -- (A.center) -- (B.center); \end{pgfonlayer} \begin{pgfonlayer}{background} \draw [line width=19pt,black!65,line cap=round,rounded corners] (l3.center) -- (l2.center) -- (l1.center) --(A.east) -- (B.center) ; \end{pgfonlayer} \begin{pgfonlayer}{bbackground} \draw [line width=26pt,opacity=1,black!25,line cap=round,rounded corners] (l1.center) -- (l2.center) -- (l3.center) -- (B.center) -- (A.center) --cycle; \end{pgfonlayer} \end{tikzpicture}. 
\end{equation*} This assignment is well-defined: \begin{itemize} \item grafting is on the left side of a vertex; this is the condition on $\mathfrak{b}_v$ in Definition \ref{de: tube}; \item one-vertex grafting of forests corresponds to the condition \eqref{item: forest conneccted to one vertex} of Definition \ref{de: tubing}; \item concatenations with right-most parentheses correspond to the higher set condition for the order $<_h$. \end{itemize} Let us show that this assignment is surjective. Firstly, the above discussion shows that the tubings do not encode any operations other than concatenations and one-vertex graftings with right-most parentheses. Secondly, note that condition \eqref{item: max tube} of Definition \ref{de: tubing} ensures that the whole forest is decomposed. Moreover, since the tubes $t_i$ are such that $|\partial t_i| = p_i$, their boundaries $\partial t_i$ correspond to subforests of $F$ that are in $\ca{F}or_{p_i}$. Finally, condition \eqref{item: horizontal tubes} of Definition \ref{de: tubing} corresponds to the absence of the forests $\bsq^{\times p}$ with $p\geq 2$. \end{proof} \begin{prop} For $F\in \ca{F}or'_n$, one has $c_F= \sum_{t\in Tub(F)} c_t$, where $c_t=\frac{-1}{|t|!} \prod_{t' \in t} c_{\partial t'}$ and $c_{\hspace{1pt}\bsq}=1$. \end{prop} \begin{proof} As $\chi_1(\bsq)=\bsq$, the first coefficient $c_{\hspace{1pt}\bsq}$, which is the coefficient of the unique tubing of $\bsq$, is $1$. Let $n\geq 2$ and $F$ be a forest in $\ca{F}or_n$ that is not $\bsq^{\times n}$. To compute $c_F$, let us remark that $F$ appears in $- \frac{1}{k!} \chi_{p_1}(\bsq)\ast \cdots \ast \chi_{p_k}(\bsq)$ for some partitions $p_1+...+p_k=n$ with $k\geq 2$ and $p_i>0$. For each of these partitions, $F$ is obtained by shuffle concatenations and/or graftings of $k$ forests, say $F_1,...,F_k$, that belong to $\chi_{p_1}(\bsq)$,..., $\chi_{p_k}(\bsq)$ respectively (shuffles arise from \ref{D bial item4}). In fact, there are several operations of these types that we can exclude. Indeed, since for all $p$ the element $\chi_p(\bsq)$ is a Lie polynomial, each forest $F_i$ appearing in $\chi_{p_i}(\bsq)$ is either a tree or belongs to a commutator. Therefore, thanks to \eqref{eq: grafting sq tree to any tree} and \eqref{eq: grafting lie to any tree}-\eqref{eq: relation contraction and grafting forests} and to Lemma \ref{lem: grafting of Lie element}, when considering the product $F_i\ast F_{i+1}$ it is enough to consider the concatenation $F_iF_{i+1}$ and the grafting $F_i\triangleright_v F_{i+1}$ for each vertex $v$ of $F_{i+1}$. Moreover, since $\ast$ is associative, we can restrict ourselves to applying the concatenation and the one-vertex grafting operations with the right-most parentheses. In other words, it is enough to consider all the expressions of the form \eqref{eq: graft and conc} which, thanks to Lemma \ref{lem: decompo}, correspond to vertical nested tubings. For each vertical nested tubing $t=t_1\supset t_2 \supset \cdots \supset t_k$ of $F$ such that $|\partial t_i|=p_i$ for each $1\leq i \leq k$, we let $c_t=\frac{-1}{k!}c_{F_1}c_{F_2}\cdots c_{F_k}$. By summing over all the possible tubings, one obtains $c_F = \sum_{t\in Tub(F)} c_t$; note that condition \eqref{item: at least 2 tubes} ensures that tubings encode non-trivial decompositions. In addition, note that each $c_{F_i}$ itself is given by $c_{\partial t_i}$, which gives the result.
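As an illustrative check (not needed for the argument), consider the two-vertex tree $\bsq\triangleright\bsq$: its only vertical nested tubing consists of the maximal tube together with the tube reduced to the root, so the formula gives the coefficient $\frac{-1}{2!}\, c_{\hspace{1pt}\bsq}\, c_{\hspace{1pt}\bsq} = -\frac{1}{2}$, in agreement with the direct computation $\chi_2(\bsq)= \tfrac{1}{2}\,\bsq^{\times 2}-\tfrac{1}{2}\,\bsq\ast\bsq = -\tfrac{1}{2}\,\bsq\triangleright\bsq$, where the last equality uses Lemma \ref{lem: grafting of Lie element}.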
Note that Lemma \ref{lem: decompo} holds for forests $F_i$ in $\ca{F}or'_{p_i}$, some of which may not be in $\chi_{p_i}(\bsq)$, that is, there are forests $F_i$ such that $c_{F_i}=0$. This is not an issue since if $c_{F_i}=0$ for some $i$, then $c_t=0$. \end{proof} \begin{rem} The last condition \eqref{item: horizontal tubes} of Definition \ref{de: tubing} may be removed, provided that one modifies Lemma \ref{lem: decompo} accordingly. Indeed, since $c_{\bsqsmall^{\times n}}=0$ for $n\geq 2$, this does not affect the result. \end{rem} \subsubsection{Post-Lie Magnus expansion via horizontal nested tubings} This section is devoted to the computation of the pLMe using \emph{horizontal} nested tubings. While the previous method is recursive, the present method allows one to compute the coefficient $c_F$ of any forest $F\in \ca{F}or'_N$ for $N\geq 2$ in a closed form. We will use the following form of $\chi$: \begin{equation}\label{eq: chi = log exp} \chi(x) = \log_*(\exp_{\cdot}(x)) = \sum_{k\geq 1} \frac{(-1)^{k-1}}{k} \left(\sum_{j\geq 1} \frac{x^j}{j!}\right)^{* k}. \end{equation} In particular, we will be led to investigate elements of the form \begin{equation}\label{eq: p& ast ...pk} \bsq^{\times p_1} \ast ( \cdots \ast (\bsq^{\times p_{k-1}} \ast \bsq^{\times p_k})\cdots ), \end{equation} for partitions $p_1+...+p_k = N$ with $p_i>0$. \begin{defn} For $N,k\geq 2$, a partition $p_1+...+p_k=N$ into strictly positive integers and $F$ in $\ca{F}or'_N$, let $h\mathfrak{D}(F)_{p_1,...,p_k}$ be the set of all the possible expressions of $F$ of the form \begin{equation}\label{eq: express mixed oper} F = \bsq^{\times p_1} \ltimes_1 (\bsq^{\times p_2} \ltimes_2 (\cdots (\bsq^{\times p_{k-2}} \ltimes_{k-2} (\bsq^{\times p_{k-1}} \ltimes_{k-1} \bsq^{\times p_k})) \cdots )), \end{equation} in which each $\ltimes_i$ is an operation of the form $\ltimes_{v_1,...,v_k}^{n_0, n_1,...,n_k}$ as introduced in Notation \ref{notation: operations graft}, item \eqref{item: operations grafting/concat}. \end{defn} \begin{lem}\label{lem: surjective map decompo to htubings} For each $N\geq 2$, each $p_1+...+p_k=N$ with $p_i>0$ and each $F\in \ca{F}or'_N$, there is a bijection between $h\mathfrak{D}(F)_{p_1,...,p_k}$ and $hTub_{p_1,...,p_k}(F)$. \end{lem} \begin{proof} Since the operations $\ltimes_{v_1,...,v_k}^{n_0,n_1,...,n_k}$ do not remove vertices nor edges, the decomposition \eqref{eq: express mixed oper} provides an embedding of $\bsq^{\times p_1},..., \bsq^{\times p_k}$ into $F$, which, we claim, can be represented by a horizontal nested tubing. Explicitly, the tube $t_k$ is $\bsq^{\times p_k}$ seen in $F$ as the right-most subforest; the tube $t_{k-1}$ is the subforest $\bsq^{\times p_{k-1}} \ltimes_{k-1} \bsq^{\times p_k}$ that contains $\bsq^{\times p_k}$ and $\bsq^{\times p_{k-1}}$, etc.
For example, one has \begin{equation}\label{eq: corresp decop tubing 2 } \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.65cm,level distance=0.55cm}, sibling distance=0.6cm,baseline=2.5ex] \node [] {} [grow'=up] { child {edge from parent[draw=none] child {node [sq] (A) {}} } child {edge from parent[draw=none] child {node [sq] (B) {} } } child {edge from parent[draw=none] child {node [sq] (B) {} } } child {edge from parent[draw=none] child {node [sq] (B) {} } } child {edge from parent[draw=none] child {node [sq] (B) {} } } }; \end{tikzpicture} \ltimes_{v_1,v_2}^{2,2,1} \left( \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.65cm,level distance=0.55cm}, sibling distance=0.6cm,baseline=2.5ex] \node [] {} [grow'=up] { child {edge from parent[draw=none] child {node [sq,label=above:\small{$v_2$}] (A) {}} } child {edge from parent[draw=none] child {node [sq] (B) {} } } }; \end{tikzpicture} \triangleright_{v_1}^2 \left( \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.65cm,level distance=0.55cm}, sibling distance=0.6cm,baseline=2.5ex] \node [] {} [grow'=up] { child {edge from parent[draw=none] child {node [sq,label=above:\small{$v_1$}] (A) {}} } child {edge from parent[draw=none] child {node [sq] (B) {} } } }; \end{tikzpicture} \right) \right) \mapsto \pgfdeclarelayer{foreground} \pgfdeclarelayer{fforeground} \pgfdeclarelayer{background} \pgfdeclarelayer{bbackground} \pgfsetlayers{bbackground,background,main,foreground,fforeground} \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=1.4cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.65cm,level distance=0.55cm}, sibling distance=0.6cm,baseline=2.5ex] \begin{pgfonlayer}{fforeground} \node [] {} [grow'=up] { child {edge from parent[draw=none] child {node [sq] (A) {}} } child {edge from parent[draw=none] child {node [sq] (B) {} } } child {edge from parent[draw=none] child {{node [sq,black] (T1lev0) {} child {node [sq] (T1lev11) {}} child {node [sq] (T1lev12) {}} child {node [sq] (T1lev13) {} child {node [sq] (T1lev23) {}} } child {node [sq] (T1lev14) {}} } }} child {edge from parent[draw=none] child {node [sq] (T2) {} } } }; \end{pgfonlayer} \begin{pgfonlayer}{foreground} \draw [line width=10pt,opacity=1,black!10,line cap=round,rounded corners] (T1lev0.center) -- (T2.center); \end{pgfonlayer} \begin{pgfonlayer}{background} \draw [line width=19pt,black!65,line cap=round,rounded corners] (T2.center) --(T1lev0.center) --(T1lev13.center) -- (T1lev14.center) --cycle; \end{pgfonlayer} \begin{pgfonlayer}{bbackground} \draw [line width=27pt,opacity=1,black!20,line cap=round,rounded corners] (A.center) -- (B.center) -- (T1lev0.center) --(T2.center) -- (T1lev14.center) --(T1lev23.center) --(T1lev11.center) --(A.center) ; \end{pgfonlayer} \end{tikzpicture}. \end{equation} Note that by construction the boundary of each tube is a horizontal forest, the maximal tube is included, and because decompositions are not trivial there are at least $2$ tubes. 
Moreover, \begin{itemize} \item since grafting is on the left side of a vertex, the tubes satisfy the condition on $\mathfrak{b}_v$ in Definition \ref{de: tube}; \item since operations are performed with rightmost parentheses, the tubes satisfy the higher set condition for the order $<_h$. \end{itemize} Therefore, the map is well-defined. The bijectivity can be shown by considering the inverse map, which is as follows. Each pair of tubes $(t_i,t_{i-1})$ determines an operation of the form $\ltimes_{v_1,...,v_k}^{n_0, n_1,...,n_k}$: the integer $n_0$ is the number of roots of $\partial t_i$ that are not attached to any vertex of $t_{i-1}$ (they are the left-most roots of $\partial t_i$ because of the higher set condition for $<_h$); the integer $n_1$ is the number of next roots (from left to right) of $\partial t_i$ that are attached to the same vertex, which is $v_1$, etc. \end{proof} \begin{notation}\label{notation th2} Let $F$ be a forest and $t$ a horizontal nested tubing of $F$. For each non-minimal tube $s$ of $t$ we let $s'\subset s$ be its predecessor in $t$; so, $\partial s = s\setminus s'$. Recall that $s'$ is a forest, say of trees $T_1,..., T_{lg(s')}$. \begin{itemize} \item We let $j(s)_0\geq 0$ be the number of roots of $F$ in $\partial s$, that is the number of vertices that are not attached to any vertex of $s'$. \item For $1\leq a \leq lg(s')$, we let $j(s)_a$ be the number of vertices of $\partial s$ that are attached to vertices of $T_a$. \item For each tree $T_a$ of $s'$ and each vertex $v$ of $T_a$, we let $f_{T_a}^s(v)$ be the cardinality of its fiber in $\partial s$, that is the cardinality of the set $\mathfrak{b}_v\cap \partial s$ of vertices in $\partial s$ that are attached to $v$. \item We let $k(T_a)\geq 0$ be the number of vertices $v$ of $T_a$ such that $f_{T_a}^s(v)\neq 0$ and we let $v_1,...,v_{k(T_a)}$ be the collection of such vertices. \end{itemize} \end{notation} \begin{example} Consider the horizontal nested tubing of \eqref{eq: corresp decop tubing 2 }; let $s$ be the maximal tube. The boundary $\partial s$ is a forest of five trees; the first two trees (\emph{i.e.}, the left-most ones) are not attached to $s'$, and the next three trees are attached to the same tree of $s'$. Therefore one has $j(s)_0=2$, $j(s)_1=3$ and $j(s)_2=0$. For the first tree $T_1$ of $s'$ (the corolla with 3 vertices, the root $v_r$, the left-most vertex $v_1$ and the other one $v_2$), one has $f^s_{T_1}(v_r)=2$, $f^s_{T_1}(v_1)=1$ and $f^s_{T_1}(v_2)=0$. \end{example} For $F \in \ca{F}or'_N$ and $t\in hTub_{p_1,...,p_k}(F)$, we let $A(t)$ be the number of times the expression that corresponds to $t$ via Lemma \ref{lem: surjective map decompo to htubings} appears in $\bsq^{\times p_1} \ast ( \cdots \ast (\bsq^{\times p_{k-1}} \ast \bsq^{\times p_k})\cdots )$. \begin{lem}\label{lem: decompo appears A times} \begin{equation*} A(t) = \prod_{s\in t,~ s \text{ not minimal}} \operatorname{sh}_{j(s)_0,...,j(s)_{lg(s')}} \prod_{1\leq a \leq lg(s')} \operatorname{sh}_{ f^s_{T_a}(v_1),...,f^s_{T_{a}}(v_{k(T_a)}) }. \end{equation*} \end{lem} \begin{proof} The proof is by induction. We let $t= t_k\supset t_{k-1} \supset \cdots \supset t_1$. Consider $t_2\supset t_1$ as a horizontal nested tubing of the forest $t_2$. Recall that $lg(t_i)$ is the number of trees in the tube $t_i$. One has $lg(t_1)= p_1$ and we write $T_1\cdots T_{p_1}$ for the decomposition of $t_1$ into trees.
Note that $t_2\supset t_1$ corresponds to $\bsq^{\times p_{2}} \ltimes_{v_{i_1},...,v_{i_r}}^{j(t_2)_0,...,j(t_2)_{p_1}} \bsq^{\times p_{1}}$ for some subset $\{i_1,...,i_r\}$ of $\{1,...,p_1\}$, where $v_i$ is the unique vertex of $T_i$. By Lemma \ref{lem: calc 1 F=TA...Tk} and Lemma \ref{lem: calc 2 F=T}, the expression $\bsq^{\times p_{2}} \ltimes_{v_{i_1},...,v_{i_r}}^{j(t_2)_0,...,j(t_2)_{p_1}} \bsq^{\times p_{1}}$ appears $\operatorname{sh}_{ j(t_2)_0,...,j(t_2)_{p_1} } \prod_{1\leq a \leq p_1} \operatorname{sh}_{ f^s_{T_a}(v_1),...,f^s_{T_{a}}(v_{k(T_a)}) }$ times. Suppose the statement is true for the tubing $t_j\supset t_{j-1} \supset \cdots \supset t_1$ of $t_j$, for any $2 \leq j < k$. Let $s= t_k$ and $s'=t_{k-1}$. Let $T_1\cdots T_{lg(s')}$ be the decomposition of $s'$ into trees. By Lemma \ref{lem: calc 1 F=TA...Tk} and Lemma \ref{lem: calc 2 F=T}, the expression $\bsq^{\times p_{k}} \ltimes_{v_{i_1},...,v_{i_r}}^{j(s)_0,...,j(s)_{lg(s')}} s'$ appears \begin{equation*} \operatorname{sh}_{j(s)_0,...,j(s)_{lg(s')}} \prod_{1\leq a \leq lg(s')} \operatorname{sh}_{ f^s_{T_a}(v_1),...,f^s_{T_{a}}(v_{k(T_a)}) } \end{equation*} times, multiplied by the number of times the sub-expression that corresponds to $t_{k-1}\supset t_{k-2} \supset \cdots \supset t_1$ appears. Hence the result. \end{proof} \begin{thm}\label{th: decompo2} With Notation \ref{notation th2}, for each $F$ in $\ca{F}or_N'$ with $N\geq 2$, the coefficient $c_F$ is \begin{equation*} \sum_{t\in hTub(F)} \frac{(-1)^{|t|-1}}{|t|} \prod_{s\in t} \frac{1}{|\partial s|!} \operatorname{sh}_{j(s)_0,...,j(s)_{lg(s')}} \prod_{1\leq a \leq lg(s')} \operatorname{sh}_{ f^s_{T_a}(v_1),...,f^s_{T_{a}}(v_{k(T_a)}) }, \end{equation*} where $\operatorname{sh}_{j(s)_0,...,j(s)_{lg(s')}} \prod_{1\leq a \leq lg(s')} \operatorname{sh}_{ f^s_{T_a}(v_1),...,f^s_{T_{a}}(v_{k(T_a)}) }:=1$ when $s$ is minimal. \end{thm} \begin{proof} Let $F$ be in $\ca{F}or_N'$ for $N\geq 2$. By using equation \eqref{eq: chi = log exp}, one can write $\chi(\bsq)$ as \begin{equation*} \sum_{N\geq 1} \sum_{p_1+...+p_k = N, ~p_i>0} \frac{(-1)^{k-1}}{k} \frac{1}{p_1!p_2!\cdots p_k!} \bsq^{\times p_1} \ast ( \cdots \ast (\bsq^{\times p_{k-1}} \ast \bsq^{\times p_k})\cdots ). \end{equation*} If we let $D_{p_1,...,p_k}$ denote the number of times the forest $F$ appears in $ \bsq^{\times p_1} \ast ( \cdots \ast (\bsq^{\times p_{k-1}} \ast \bsq^{\times p_k})\cdots )$, then one has $c_F= \sum_{p_1+...+p_k = N, ~p_i>0} \frac{(-1)^{k-1}}{k} \frac{1}{p_1!p_2!\cdots p_k!} D_{p_1,...,p_k}$. Let us compute $D_{p_1,...,p_k}$. Consider a decomposition of $F$ of the form \eqref{eq: express mixed oper}; by Lemma \ref{lem: surjective map decompo to htubings} this amounts to considering $t\in hTub_{p_1,...,p_k}(F)$. Such a decomposition appears exactly $A(t)$ times in $\bsq^{\times p_1} \ast ( \cdots \ast (\bsq^{\times p_{k-1}} \ast \bsq^{\times p_k})\cdots )$. Therefore, one has $D_{p_1,...,p_k}= \sum_{t\in hTub_{p_1,...,p_k}(F)} A(t)$, which, by Lemma \ref{lem: decompo appears A times}, gives \begin{equation*} D_{p_1,...,p_k}= \sum_{t\in hTub_{p_1,...,p_k}(F)} \prod_{s\in t} \operatorname{sh}_{j(s)_0,...,j(s)_{lg(s')}} \prod_{1\leq a \leq lg(s')} \operatorname{sh}_{ f^s_{T_a}(v_1),...,f^s_{T_{a}}(v_{k(T_a)}) }. \end{equation*} Finally, using the decomposition \eqref{eq: decompo tubings} of $hTub(F)$ one obtains the result.
\end{proof} \begin{example} Here is presented the computation of $c_F$ for $F=$ \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.6cm,level distance=0.4cm}, sibling distance=0.6cm,baseline=2.5ex] \node [] {} [grow'=up] { child {edge from parent[draw=none] child {node [sq] (l1) {} child {node [sq] (l2) {} child {node [sq] (l31) {} } child {node [sq] (l32) {} } child {node [sq] (l33) {} } }}} }; \end{tikzpicture}. Let us list all the possible horizontal nested tubings, which are of the form $(p_k,p_{k-1},...,p_1)$ for $1\leq k\leq 5$. There are only four possibilities, which corresponds to $(1,1,1,1,1)$, $(1,2,1,1)$, $(2,1,1,1)$ and $(3,1,1)$: \begin{equation*} \pgfdeclarelayer{foreground} \pgfdeclarelayer{fforeground} \pgfdeclarelayer{fbforeground} \pgfdeclarelayer{background} \pgfdeclarelayer{bbackground} \pgfsetlayers{bbackground,background,main,foreground,fbforeground,fforeground} \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.6cm,level distance=0.4cm}, sibling distance=0.6cm,baseline=2.5ex] \treetwothree \begin{pgfonlayer}{bbackground} \draw [line width=22pt,opacity=1,draw=black!25,fill=black!25,line cap=round,rounded corners] (l31.center) --(l1.center) -- (l33.center) -- (l32.center) -- (l31.center) --(l1.center)--cycle; \end{pgfonlayer} \begin{pgfonlayer}{background} \draw [line width=18pt,opacity=1,draw=black!65,fill=black!65,line cap=round,rounded corners] (l1.center) -- (l33.center) -- (l32.center) --(l1.center)--cycle; \end{pgfonlayer} \begin{pgfonlayer}{main} \draw [line width=12pt,opacity=1,draw=black!15,fill=black!15,line cap=round,rounded corners] (l1.center) -- (l2.center) --(l33.center) ; \end{pgfonlayer} \begin{pgfonlayer}{foreground} \draw [line width=8pt,opacity=1,draw=black!45,fill=black!45,line cap=round,rounded corners] (l1.center) -- (l2.center) ; \end{pgfonlayer} \begin{pgfonlayer}{fbforeground} \draw [fill=black!10, draw=black!10, line cap=round,rounded corners] (l1) circle (0.11); \end{pgfonlayer} \end{tikzpicture} ~~ \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.6cm,level distance=0.4cm}, sibling distance=0.6cm,baseline=2.5ex] \treetwothree \begin{pgfonlayer}{background} \draw [line width=18pt,opacity=1,draw=black!65,fill=black!65,line cap=round,rounded corners] (l1.center) -- (l33.center) -- (l32.center) --(l1.center)--cycle; \end{pgfonlayer} \begin{pgfonlayer}{foreground} \draw [line width=10pt,opacity=1,draw=black!45,fill=black!45,line cap=round,rounded corners] (l1.center) -- (l2.center) ; \end{pgfonlayer} \begin{pgfonlayer}{fbforeground} \draw [fill=black!10, draw=black!10, line cap=round,rounded corners] (l1) circle (0.12); \end{pgfonlayer} \end{tikzpicture} ~~ \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.6cm,level distance=0.4cm}, sibling distance=0.6cm,baseline=2.5ex] \treetwothree \begin{pgfonlayer}{main} \draw [line width=13.5pt,opacity=1,draw=black!65,fill=black!65,line cap=round,rounded corners] (l1.center) -- (l2.center) --(l33.center); \end{pgfonlayer} \begin{pgfonlayer}{foreground} \draw [line 
width=9pt,opacity=1,draw=black!45,fill=black!45,line cap=round,rounded corners] (l1.center) -- (l2.center) ; \end{pgfonlayer} \begin{pgfonlayer}{fbforeground} \draw [fill=black!10, draw=black!10, line cap=round,rounded corners] (l1) circle (0.11); \end{pgfonlayer} \end{tikzpicture} ~~ \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.6cm,level distance=0.4cm}, sibling distance=0.6cm,baseline=2.5ex] \treetwothree \begin{pgfonlayer}{foreground} \draw [line width=12pt,opacity=1,draw=black!65,fill=black!65,line cap=round,rounded corners] (l1.center) -- (l2.center) ; \end{pgfonlayer} \begin{pgfonlayer}{fbforeground} \draw [fill=black!15, draw=black!15, line cap=round,rounded corners] (l1) circle (0.12); \end{pgfonlayer} \end{tikzpicture} \end{equation*} In all those cases there are no shuffles involved because all tubes are trees and there is only one vertex which has a non trivial fiber. One obtains \begin{equation*} c_F= \frac{1}{5} (1\times 1 \times 1 \times 1\times 1) - \frac{1}{4} (1\times 1 \times \frac{1}{2!} \times 1 + 1\times 1 \times 1 \times \frac{1}{2!}) + \frac{1}{3} (1\times 1 \times \frac{1}{3!}) = \frac{1}{180}. \end{equation*} \end{example} \begin{example} Here is the computation of $c_F$ for $F=$ \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.5cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=2.5ex] \node [] {} [grow=up] { child { edge from parent[draw=none] child {node [sq] (A) {} child {node [sq] (B) {} } }} child { edge from parent[draw=none] child {node [sq] (C) {} child {node [sq] (D) {} } }} }; \end{tikzpicture}. The horizontal nested tubings are listed in Example \ref{example: horizontal tubings} and correspond to $(2,2)$, $(2,1,1)$, $(1,2,1)$, $(1,1,2)$ (two possible horizontal tubings), and $(1,1,1,1)$ (three possibilities). In the first tubing, for $s$ the maximal tube, its predecessor $s'$ has two trees $T_1$ and $T_2$; one has $j(s)_0 = 0$ (there is no unattached vertices in $s$), $j(s)_1=1$ and $j(s)_2=1$; and, $f^s_{T_1}(v)= 1$ and $f^s_{T_2}(v_1)=1$. Therefore $\operatorname{sh}_{j(s)_0,...,j(s)_{lg(s')}} \prod_{1\leq a \leq lg(s')} \operatorname{sh}_{ f^s_{T_a}(v_1),...,f^s_{T_{a}}(v_{k(T_a)}) } = \operatorname{sh}_{1,1} \times \operatorname{sh}_{1} \times \operatorname{sh}_{1}= 2$. Doing this for each tube and tubing, one obtains, \begin{equation*} c_F= -\frac{1}{2} (\frac{1}{2!}\times\frac{1}{2!}\times 2) + \frac{1}{3} (\frac{1}{2!}\times 2 + \frac{1}{2!}\times 2 + \frac{1}{2!} +\frac{1}{2!}) -\frac{1}{4} (1 + 1 + 1) = 0. \end{equation*} Of course, since $\chi(\bsq)$ is a Lie element, we already knew that $c_F=0$. 
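The arithmetic in these two computations can be checked mechanically; the short Python script below (an illustrative sanity check only, which simply re-enters the tubing contributions listed above using the \texttt{fractions} module) reproduces the values $1/180$ and $0$.

\begin{verbatim}
from fractions import Fraction as F

# First example: tubings (1,1,1,1,1), (1,2,1,1), (2,1,1,1), (3,1,1);
# the prefactors 1/5, -1/4, 1/3 are (-1)^{|t|-1}/|t| from Theorem (decompo2).
c_first = (F(1, 5) * 1
           - F(1, 4) * (F(1, 2) + F(1, 2))
           + F(1, 3) * F(1, 6))
print(c_first)   # 1/180

# Second example: tubings (2,2), (2,1,1), (1,2,1), (1,1,2) x2, (1,1,1,1) x3.
c_second = (- F(1, 2) * (F(1, 2) * F(1, 2) * 2)
            + F(1, 3) * (F(1, 2) * 2 + F(1, 2) * 2 + F(1, 2) + F(1, 2))
            - F(1, 4) * 3)
print(c_second)  # 0
\end{verbatim}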
\end{example} To end this part we give the first four terms of $\chi(\bsq)$: \begin{align*} \chi_1(\bsq) &= \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.6cm,level distance=0.5cm}, sibling distance=0.6cm,baseline=1.5ex] \node [] {} [grow=up] {child {edge from parent[draw=none] child {node [sq] (A) {} }} }; \end{tikzpicture} , ~~ \chi_2(\bsq) = -\frac{1}{2} \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.6cm,level distance=0.4cm}, sibling distance=0.6cm,baseline=2.5ex] \node [] {} [grow=up] {child {edge from parent[draw=none] child {node [sq] (A) {} child {node [sq] (B) {} } }} }; \end{tikzpicture} ,~~ \chi_3(\bsq) = \frac{1}{3} \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.6cm,level distance=0.4cm}, sibling distance=0.6cm,baseline=2.5ex] \node [] {} [grow=up] {child {edge from parent[draw=none] child {node [sq] (A) {} child {node [sq] (B) {} child {node [sq] {}} } }} }; \end{tikzpicture} + \frac{1}{12} \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.6cm,level distance=0.4cm}, sibling distance=0.6cm,baseline=2.5ex] \node [] {} [grow=up] { child { edge from parent[draw=none] child {node [sq] (A) {} child {node [sq] (B) {} } child {node [sq] (C) {}} }} }; \end{tikzpicture} + \frac{1}{12} \left( \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.6cm,level distance=0.4cm}, sibling distance=0.6cm,baseline=2.5ex] \node [] {} [grow=up] {child {edge from parent[draw=none] child {node [sq] (A) {} }} child {edge from parent[draw=none] child {{node [sq] (l1) {} child {node [sq] {}} } } } }; \end{tikzpicture} - \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.6cm,level distance=0.4cm}, sibling distance=0.6cm,baseline=2.5ex] \node [] {} [grow=up] {child {edge from parent[draw=none] child {node [sq] (A) {} child {node [sq] (B) {} } }} child {edge from parent[draw=none] child { {node [sq,black] (l1) {} } }} }; \end{tikzpicture} \right) \text{ and } \\ \chi_4(\bsq) &= -\frac{1}{4} \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.6cm,level distance=0.4cm}, sibling distance=0.6cm,baseline=2.5ex] \node [] {} [grow=up] {child {edge from parent[draw=none] child {node [sq] (A) {} child {node [sq] (B) {} child {node [sq] {} child {node [sq] {}} } } }} }; \end{tikzpicture} - \frac{1}{12} \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.6cm,level distance=0.4cm}, sibling distance=0.6cm,baseline=2.5ex] \node [] {} [grow=up] { child {edge from parent[draw=none] child {node [sq] {} child {node [sq] (A) {} child {node [sq] {} } child {node [sq] {} } } } } }; \end{tikzpicture} - \frac{1}{12} 
\begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.6cm,level distance=0.4cm}, sibling distance=0.6cm,baseline=2.5ex] \node [] {} [grow=up] { child { edge from parent[draw=none] child {node [sq] (A) {} child {node [sq] (B) {} } child {node [sq] (C) {} child {node [sq] {}} } }} }; \end{tikzpicture} + \frac{1}{24} \left( \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.6cm,level distance=0.4cm}, sibling distance=0.6cm,baseline=2.5ex] \node [] {} [grow=up] {child {edge from parent[draw=none] child {node [sq] (A) {} child {node [sq] (B) {} } child {node [sq] {}} } } child {edge from parent[draw=none] child { {node [sq,black] (l1) {} } }} }; \end{tikzpicture} - \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.6cm,level distance=0.4cm}, sibling distance=0.6cm,baseline=2.5ex] \node [] {} [grow=up] {child {edge from parent[draw=none] child {node [sq] (A) {} }} child {edge from parent[draw=none] child {{node [sq] (l1) {} child {node [sq] {} } child {node [sq] {}} } } } }; \end{tikzpicture} \right) + \frac{1}{12} \left( \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.6cm,level distance=0.4cm}, sibling distance=0.6cm,baseline=2.5ex] \node [] {} [grow=up] {child {edge from parent[draw=none] child {node [sq] (A) {} child {node [sq] (B) {} child {node [sq] {} } } }} child {edge from parent[draw=none] child { {node [sq,black] (l1) {} } }} }; \end{tikzpicture} - \begin{tikzpicture} [level 1/.style={level distance=0cm,sibling distance=.5cm }, level 2/.style={level distance=0.3cm,sibling distance=1.0cm }, level 3/.style={sibling distance=0.6cm,level distance=0.4cm}, sibling distance=0.6cm,baseline=2.5ex] \node [] {} [grow=up] {child {edge from parent[draw=none] child {node [sq] (A) {} }} child {edge from parent[draw=none] child {{node [sq] (l1) {} child {node [sq] {} child {node [sq] {}} } } } } }; \end{tikzpicture} \right). \end{align*} \section*{Acknowledgments} The second author was supported by grant ``\#2018/19603-0, S\~ao Paulo Research Foundation (FAPESP)''. \bibliographystyle{abbrv}
\section{Introduction} In the 1970s, Lauterbur, Mansfield and Maudsley showed that magnetic field gradients, together with the magnetic resonance technique, could be used to produce images \cite{Lauterbur,Mansfield1,Mansfield2,Mansfield3}. Magnetic resonance imaging (MRI) has been a very useful technique in medical science ever since. In high-resolution nuclear magnetic resonance (NMR) spectroscopy, one of the first notable applications of the magnetic field gradient was presented by Stejskal and Tanner. They demonstrated how gradients can be used to determine the diffusion coefficient of liquids \cite{Stejskal}. As the equipment used to produce the gradients improved over time \cite{equip1,equip2,equip3,equip4,equip5,equip6}, new applications emerged \cite{seq1,seq2,seq3,seq4,seq5,seq6,seq7,seq8,seq9}. Recently, gradients were used in quantum thermodynamics experiments to implement measurement protocols \cite{szi,ter1} and to prepare thermal states \cite{ter2,ter3}. In NMR quantum computing, the coherence pathway selection technique \cite{livrole}, which uses gradients, is routinely employed to transform the thermal state of a group of nuclear spins into a pseudo pure state \cite{livro}. Magnetic field gradients were also used to simulate noise \cite{oti,dco}. Given the numerous applications, a good theoretical description and methods to simulate the dynamics of the nuclear spins under the influence of magnetic field gradients are becoming increasingly essential to design new experiments. Nowadays, a few methods to simulate the dynamics of spins under the influence of magnetic field gradients are known, and some software and algorithms are already available \cite{rgrad0,rgrad1,rgrad2,rgrad3,rgrad4,rgrad5,rgrad6,rgrad7,rgrad8}. Among these methods, here we will be interested in the one presented by Allard \textit{et al.} \cite{rgrad0}. In this method, space is discretized into several divisions, and in each division we have a spin whose oscillation frequency is slightly modified by the magnetic field gradient. Thus, when applying a magnetic field gradient, the final state of the system will be given by the average of the final states of the spins in each division. This method has the advantage of completely describing the final state of the system, but it can be slow when used to study the dynamics of molecules composed of many nuclear spins. However, as we will demonstrate here, we can accelerate this method using approximations and/or specific configurations that allow us to perform the simulation with a small number of divisions. Here, we show that with an optimal number of divisions, it is possible to simulate quickly and with high precision sequences composed of several radio-frequency pulses, free evolutions and gradients. We also show that, together with optimization algorithms \cite{livrooti}, a fast simulation of the dynamics of spins under the influence of gradients can be used to optimize non-unitary evolutions, which are essential for implementing quantum channels or preparing specific states \cite{livronc}. We carried out experiments, implementing two quantum channels, to demonstrate that our simulations describe the dynamics of the system with good accuracy. Finally, we also use our results to optimize and prepare experimentally a pseudo pure state with a better signal to noise ratio. During the optimization of the pseudo pure state, we saw a trend pointing to a limit for the maximum improvement in the signal to noise ratio.
This paper is organized as follows: in section \ref{sec:Theory} we review the NMR theoretical background. Then, we describe the gradient discretizations in time and space in section \ref{sec:simulation}. We study time discretization in section \ref{sec:TimeDisc} and space discretization in section \ref{sec:SpaceDisc} to provide a guideline for choosing efficient discretization values. Finally, in section \ref{sec:experiment}, we use the techniques developed to optimize a sequence and test it experimentally. \section{NMR theory}\label{sec:Theory} Here, we are going to consider that our sample is an isotropic liquid, but our results can be generalized to other types of samples. For the samples that we used in our experimental tests, we can study the natural dynamics of the system using the following Hamiltonian: \begin{equation}\label{eq:h0} \begin{split} \ \mathcal{H}_{0} = \sum_{k}\frac{\hbar(\omega_{k}-\omega_{R})\sigma_{z_{k}}}{2} + \sum_{k \neq n}\frac{\pi\hbar J_{kn}\sigma_{z_{k}}\sigma_{z_{n}}}{4}, \end{split} \end{equation} where $\omega_{k}$ and $\sigma_{\beta_{k}}$ are, respectively, the angular oscillation frequency and the Pauli matrix $\beta$ of the $k$-th nuclear spin, $\hbar$ is the Planck constant divided by $2\pi$, $\omega_{R}$ is the angular frequency of the rotating frame \cite{livrole} and $J_{kn}$ is the scalar coupling constant of the spins $k$ and $n$. If we apply a magnetic field gradient whose magnitude increases linearly along the $z$ direction, the Hamiltonian of the nuclear spins of a molecule at position $z$ at time $t$ will be given by: \begin{equation}\label{eq:hg} \begin{split} \ \mathcal{H}_{\texttt{g}}(t,z) = \sum_{k}\frac{ \hbar \gamma_{k} \texttt{g}(t) z\sigma_{z_{k}}}{2}, \end{split} \end{equation} where $\gamma_{k}$ is the gyromagnetic ratio of the $k$-th nuclear spin and $\texttt{g}(t)$ is the magnitude of the magnetic field gradient at time $t$. In this work, we consider the case where the duration of the applied gradient is short enough that we can obtain a good description of the system dynamics without including the diffusion or relaxation processes. The nuclear spin states are controlled by radio-frequency pulses applied in the $xy$ plane with an angular frequency $\omega_{R}$. The Hamiltonian that describes the interactions of the spins with a pulse in the rotating frame is given by: \begin{equation}\label{eq:hc} \begin{split} \ \mathcal{H}_{c}(t) = \hbar \Omega(t) \sum_{k=1}^{s}\frac{\cos[ \phi(t) ] \sigma_{x_{k}} + \sin[ \phi(t) ] \sigma_{y_{k}}}{2}, \end{split} \end{equation} where $\Omega(t)$ and $\phi(t)$ are the amplitude and phase modulations of the pulse, respectively. Considering the interactions described above, the total Hamiltonian of the nuclear spins of a molecule at position $z$ at time $t$ is given by: \begin{equation}\label{eq:ht} \begin{split} \ \mathcal{H}_{T}(t,z) = \mathcal{H}_{0} + \mathcal{H}_{\texttt{g}}(t,z) + \mathcal{H}_{c}(t), \end{split} \end{equation} and the evolution of this system under the action of $\mathcal{H}_{T}(t,z)$ will produce the following unitary: \begin{equation}\label{eq:uht} \begin{split} \ U_{\mathcal{H}_{T}}(z) = \mathcal{T} \left[ \exp\left(-\frac{i}{\hbar} \int \mathcal{H}_{T}(t,z) dt \right) \right], \end{split} \end{equation} where $\mathcal{T}$ represents the Dyson time-ordering operator.
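As an illustration of how these operators can be assembled numerically, the following minimal Python sketch builds eqs. \eqref{eq:h0} and \eqref{eq:hg} for a hypothetical two-spin homonuclear system; it is not part of the method itself, and the offsets, coupling and gyromagnetic ratio are placeholder values.

\begin{verbatim}
import numpy as np

sz = np.diag([1.0, -1.0])      # Pauli z
I2 = np.eye(2)

def op(single, k, Q):
    """Embed a single-spin operator on spin k (0-indexed) of a Q-spin register."""
    mats = [I2] * Q
    mats[k] = single
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

hbar = 1.0                                   # units with hbar = 1
Q = 2                                        # hypothetical two-spin system
omega = 2*np.pi*np.array([100.0, 250.0])     # placeholder offsets (rad/s)
omega_R = 0.0
J = np.array([[0.0, 10.0], [10.0, 0.0]])     # placeholder couplings (Hz)
gamma = 2*np.pi*10.7e6                       # placeholder gyromagnetic ratio

def H0():
    """Free-evolution Hamiltonian of eq. (h0)."""
    H = sum(hbar*(omega[k]-omega_R)/2 * op(sz, k, Q) for k in range(Q))
    for k in range(Q):
        for n in range(Q):
            if k != n:
                H = H + np.pi*hbar*J[k, n]/4 * op(sz, k, Q) @ op(sz, n, Q)
    return H

def Hg(g_t, z):
    """Gradient Hamiltonian of eq. (hg) for amplitude g(t)=g_t at position z."""
    return sum(hbar*gamma*g_t*z/2 * op(sz, k, Q) for k in range(Q))
\end{verbatim}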
\section{Gradient simulation} \label{sec:simulation} In our simulation, we consider that the molecules are uniformly distributed along the $z$ axis, and that they are diluted so that we can disregard inter-molecular interactions. A molecule at position $z$ will evolve under the Hamiltonian $\mathcal{H}_{T}(t,z)$, eq.~\eqref{eq:ht}, and after a time $t$, the state of the nuclear spins of this molecule will be given by: \begin{equation}\label{eq:estado} \begin{split} \ \rho(t,z) = U_{\mathcal{H}_{T}}(z)\rho (0,z)U^{\dagger}_{\mathcal{H}_{T}}(z), \end{split} \end{equation} where $\rho (0,z)$ represents the initial state of these spins. The state of the whole sample will be given by: \begin{equation}\label{eq:estadot1} \begin{split} \ \rho_{S}(t) = \dfrac{\int \rho(t,z) dz}{\int dz}. \end{split} \end{equation} We need to perform two types of discretizations to simulate the dynamics of the system. One in time, to calculate $U_{\mathcal{H}_{T}}(z)$, and another in space, to obtain the value of $\rho_{S}(t)$. It is worth mentioning that our goal is not to present an exact method, but one where an approximate simulation of the system dynamics can be obtained quickly. To provide a realistic estimate of these approximations, we use two types of molecules for our simulation: the $^{13}\textrm{C}$-labeled transcrotonic acid and the per-$^{13}\textrm{C}$-labeled (1S,4S,5S)-7,7-dichloro-6-oxo-2-thiabicyclo[3.2.0]heptane-4-carboxylic acid. The $^{13}\textrm{C}$ nuclei are spin-$1/2$, and thus the molecules can be used to physically represent systems of 4 and 7 qubits, respectively. The values of the resonance frequencies and the scalar coupling constants of the $^{13}\textrm{C}$ nuclear spins of the two molecules are shown in fig. \ref{fig:molecula4q}(a-b). \begin{figure}[h]% \includegraphics[width=8.5cm]{molecula4q} \centering \caption{ Sample information for (a) the $^{13}\textrm{C}$-labeled transcrotonic acid molecule (4 qubit system) and (b) the per-$^{13}\textrm{C}$-labeled (1S,4S,5S)-7,7-dichloro-6-oxo-2-thiabicyclo[3.2.0]heptane-4-carboxylic acid molecule (7 qubit system) - The off-diagonal terms in the table are the $J$ coupling constants of the $^{13}\textrm{C}$ nuclear spins of the molecules, while the diagonal contains the chemical shifts of each nuclear spin. All values in the table are in Hz.}% \label{fig:molecula4q}% \end{figure} We use the fidelity \cite{livronc} as a measure of the distance between the final states obtained with and without the use of approximations in the simulation. When performing simulations, we considered that the molecules are uniformly distributed in the $z$-direction and that, when the field gradient is applied, each molecule has a slightly different resonance frequency given by its physical location. Due to hardware restrictions, some NMR equipment requires delays of a few $\mu$s before and after the application of a field gradient; in our simulations this delay is $200~\mu$s. \subsection{Time discretization}\label{sec:TimeDisc} We discretize the total time of evolution $\tau$ into $m$ intervals of duration $\delta t$. The value of $\delta t$ must be small enough to allow us to consider that $\mathcal{H}_{T}(\delta t,z)$ is approximately constant at each of the $m$ time intervals.
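Before writing this product explicitly, the overall bookkeeping of the two discretizations can be sketched as follows (an illustrative sketch only: \texttt{H\_total(t, z)} stands for eq. \eqref{eq:ht} evaluated on the grid, \texttt{rho0(z)} returns the initial state at position $z$, diffusion and relaxation are ignored, and the grid sizes are arbitrary). The inner loop is the time-ordered product discussed next, and the outer loop is the spatial average of eq. \eqref{eq:estadot1}.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def propagate_sample(H_total, rho0, tau, m, L, N, hbar=1.0):
    """Average the piecewise-constant evolution of eq. (estado) over a z-grid,
    approximating the integral of eq. (estadot1) by a finite sum."""
    dt = tau / m
    dz = L / N
    dim = H_total(0.0, 0.0).shape[0]
    rho_S = np.zeros((dim, dim), dtype=complex)
    for k in range(1, N + 1):
        z = k * dz
        U = np.eye(dim, dtype=complex)
        for j in range(1, m + 1):
            # H_T is taken as constant over each interval of duration dt
            U = expm(-1j * H_total(j * dt, z) * dt / hbar) @ U
        rho_S += U @ rho0(z) @ U.conj().T
    return rho_S / N
\end{verbatim}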
Then, the value of $U_{\mathcal{H}_{T}}(z)$ can be calculated by: \begin{equation}\label{eq:uhtpro} \begin{split} \ U_{\mathcal{H}_{T}}(z) = U_{m}(z)U_{m-1}(z)U_{m-2}(z) \cdots U_{2}(z)U_{1}(z), \end{split} \end{equation} with \begin{equation}\label{eq:uhtprok} \begin{split} \ U_{k}(z) = \exp\left\lbrace-\frac{i}{\hbar} \mathcal{H}_{T}(k \delta t,z) \delta t \right\rbrace . \end{split} \end{equation} One of the most time-consuming computational operations in this simulation is the computation of the exponential of the matrix present in eq. \eqref{eq:uhtprok}. Therefore, we must adopt strategies to calculate the value of $U_{k}(z)$ efficiently. The strategy used will depend on whether radio-frequency pulses are applied during the implementation of the magnetic field gradient. If we consider that pulses are not applied together with the gradient, the Hamiltonian of the system during the gradient will always be diagonal in the $\sigma_{z}$ basis. Thus, we do not need to calculate the exponential of matrices, because $U_{k}(z)$ can be determined by calculating the exponential of the diagonal elements of $-i \mathcal{H}_{T}(k \delta t,z) \delta t /\hbar$. Since $U_k$ and $U_{\mathcal{H}_{T}}(z)$ are diagonal, we can determine the $j^{th}$ diagonal element of $U_{\mathcal{H}_{T}}(z)$ by multiplying all the $j^{th}$ diagonal elements of the $m$ matrices $U_k$. By doing this, we do not need to perform the matrix multiplications from eq. \eqref{eq:uhtpro}. In the special case where the amplitude of the gradient does not depend on time and we do not apply pulses during the gradient, the total Hamiltonian, $\mathcal{H}_{T}$, will be independent of time too. If the gradient is applied for a time $\tau$, the evolution is given by a simplified equation: \begin{equation}\label{eq:uhtsemt} \begin{split} \ U_{\mathcal{H}_{T}}(z) = \exp\left\lbrace-i \mathcal{H}_{T}(z) \tau /\hbar \right\rbrace. \end{split} \end{equation} Although this case has several restrictions, it is widely used in experiments of quantum computing, quantum information and thermodynamics. In the case where pulses are applied together with the gradient, the system's Hamiltonian is not always diagonal. However, for systems composed only of spins $1/2$ (qubits), we can avoid the matrix exponentiation if we use the approximation presented by Bhole and Jones \cite{apro} with a slight modification to include the magnetic field gradient. This approximation requires a small discretization in time, $\delta t$, to calculate the value of $U_{k}(z)$ with a good precision. According to this approximation, for a system composed of $Q$ qubits, we can write \begin{equation}\label{eq:uhtprokaproximado} \begin{split} \ U_{k}(z) \approx W_{k}^{+}(z)H_{Q} e^{-i\Omega (k\delta t)\varsigma \delta t} H_{Q}W_{k}^{-}(z), \end{split} \end{equation} where $H_{Q}$ is the tensor product of $Q$ Hadamard gates and $W_{k}^{\pm}(z)=e^{-i [\mathcal{H}_{0} +\mathcal{H}_{g}(k \delta t,z) \pm 2\phi (k\delta t)\varsigma /\delta t ] \delta t/2}$, with $\varsigma = \sum_{l=1}^{Q}\sigma_{z_{l}}/2$. Since the matrix $\varsigma $ is diagonal, the value of $U_{k}(z)$ can be determined without a need of matrix exponentiation. \begin{figure}[h]% \includegraphics[width=8.5cm]{seqgr} \centering \caption{ Sequence and shape of the amplitude of the magnetic field gradient - (a) sequence used to analyse the error due to the approximation used in eq. \eqref{eq:uhtprokaproximado}. (b) Shape of the amplitude of the magnetic field gradient. 
The blue line is the graphical representation of the function $\texttt{g}_{1}(t)$, and the red one is the representation of the function $\texttt{g}_{2}(t)$, with $\texttt{g}_{2}(t) = \sin[\pi t/(0.2+ \tau)]$ for $ 0.2 \leq t \leq 0.2+\tau $.}% \label{fig:seqgr}% \end{figure} In order to study the errors due to the approximation presented in eq. \eqref{eq:uhtprokaproximado}, we start with a random initial state, apply a unitary operator and a field gradient simultaneously, and calculate the final state using the approximation from eq. \eqref{eq:uhtprokaproximado}. We compare this final state with the final state when we do not use the approximation, for different values of $\delta t$. A graphical representation of this scheme is shown in fig. \ref{fig:seqgr}(a). In our simulations, we start with 1024 different random initial states. Then, we apply to each of these states a field gradient, whose amplitude is modulated by one of the two shapes $\texttt{g}_{k}(t)$ shown in fig. \ref{fig:seqgr}(b), and one of the rotations $R_{x}^{all}(\pi /2)$, $R_{x}^{odd}(\pi /2)$ and $R_{x}^{odd}(\pi)$, where $R_{x}^{\alpha}(\theta)$ is a rotation by an angle $\theta $ around the $x$ axis applied to the nuclear spins $\alpha$. This results in six simulations for each random initial state. \begin{figure}[h]% \includegraphics[width=8.5cm]{tab1} \centering \caption{ Error due to the approximation used in eq. \eqref{eq:uhtprokaproximado} - The error was estimated for the 4 and 7 qubit systems, considering different pulses, shapes for the gradient and 1024 random initial states. The worst-case infidelity ($1-\mathrm{worst(fidelity)}$) is shown for different discretizations, gradient shapes and rotations. }% \label{fig:tab1}% \end{figure} The pulses used to apply the rotations were optimized using the method developed by Peterson \textit{et al.} \cite{johnp}. For simulating the gradient with and without the approximation from eq. \eqref{eq:uhtprokaproximado}, we used an ensemble with $N = 10^4$ to minimize errors due to space discretization. The length of the sample, $L$, was considered to be $5$ cm. The pulse and the field gradient are applied simultaneously and have a duration of $\tau = 500$ $\mu$s. We performed simulations to estimate the error for four different values of the time discretization, $ \delta t = \left \{5,2,1,0.5\right \}$ $\mu$s. After these simulations, we compared the fidelity between the final states obtained with and without the approximation of eq. \eqref{eq:uhtprokaproximado}. In fig. \ref{fig:tab1}, we report the worst fidelity obtained among the 1024 initial states for the 4 and 7 qubit systems, as well as the ratio of simulation times with and without eq. \eqref{eq:uhtprokaproximado} for different values of $ \delta t $. As we can see in fig. \ref{fig:tab1}, the fidelity does not vary significantly when the shape of the gradient or the rotation is changed. However, there can be a large variation when $\delta t$ is changed. This gives us a way to choose the minimum value of $\delta t$ that satisfies a desired precision. For example, if our goal is to perform a simulation with fidelity $0.99999$ for a $4$ qubit system, we use $\delta t=$ 1 $\mu$s in eq. \eqref{eq:uhtprokaproximado} and obtain the result faster than not using the approximation. For this case, the simulation (with $\delta t=$ 1 $\mu$s and using eq.
\eqref{eq:uhtprokaproximado}) will be faster by a factor of 6.98, 3.49 or 1.396 when compared with the time of the simulation without the approximation using $\delta t=$ 1 $\mu$s, $\delta t=$ 2 $\mu$s or $\delta t=$ 5 $\mu$s, respectively. In fig. \ref{fig:tab1}, we can see that the precision also depends strongly on the system used. Thus, if the system used is different from the two considered here, fig. \ref{fig:tab1} must be reconstructed for this new system. Once the values are characterized for a new system, they can be used for different experiments. \subsection{Space discretization}\label{sec:SpaceDisc} When we discretize the space, the integral in eq. \eqref{eq:estadot1} is replaced by a sum, and the ensemble is divided into $N$ divisions, each comprising the same number of molecules. Then, the state of the whole sample will be given by the following sum: \begin{equation}\label{eq:estadot} \begin{split} \ \rho_{S}(t) = \dfrac{\sum_{k=1}^{N} \rho(t,k \delta z)}{N}, \end{split} \end{equation} where $\delta z$ is the size of the discretization of the space. Generally, NMR samples are prepared in cylindrical tubes that are filled with liquid up to a height $L$, so we have $\delta z = L/N$. Our goal is to estimate the smallest number of divisions, $N$, that allows us to simulate quickly and with high precision the dynamics of the system when we apply magnetic field gradients. Here, we will consider that pulses are not applied simultaneously with the magnetic field gradient. This will facilitate our analysis and will allow us to avoid the errors due to the approximation presented in eq. \eqref{eq:uhtprokaproximado}. \subsubsection{Density matrix} A density matrix can be written as a summation of individual terms: \begin{equation}\label{eq:basis} \rho = \sum_{v,w\in [1,2^{Q}]}a_{vw}\outpr{b_v}{b_w}, \end{equation} where $b_v$ is the binary number ($v-1$) of length $Q$, with $v \in \{1,2,...,2^{Q}\}$. For example, if $Q=2$, then $b_1 = 00, b_2 = 01, b_3 = 10 $ and $b_4 = 11$. \subsubsection{Order of coherence} Coherence terms in a density matrix correspond to transitions between different states, and the order of coherence is defined by the change in the spin angular momentum quantum number, $m_l$. $\ket{0},\ket{1}$ are the eigenstates of $\sigma_z$ with eigenvalues $+1$,$-1$ and angular momentum quantum numbers $m_l = +\frac{1}{2}$ and $-\frac{1}{2}$, respectively. Then, $\outpr{00}{10}$ and $\outpr{01}{11}$ have coherence order $\Delta m_l = -1$, while $\outpr{11}{00}$ has coherence order $\Delta m_l = 2$. For $Q$ qubits, the coherence order can vary from -$Q$ to $Q$. A term of the form $\outpr{b_v}{b_w}$ will have coherence order: \begin{equation}\label{eq:coh} c_{vw}=\frac{1}{2}\sum_{k=1}^{Q} [(-1)^{b_{w}^{k}}-(-1)^{b_{v}^{k}}], \end{equation} where $b_{v}^{k}$ is the $k^{th}$ digit of the binary number $b_v$. For example, if $b_v = 01$, we will have $b_{v}^{1}=0$ and $b_{v}^{2}=1$. \subsubsection{Evolution during a gradient} Since $\mathcal{H}_0$ and $\mathcal{H}_{\texttt{g}}(t,z)$ commute, we can write the evolution of the nuclear spins at position $z$ as: \begin{eqnarray} U_{\mathcal{H}_{T}}(z) = U_0 \cdot U_{\texttt{g}}(z) = U_{\texttt{g}}(z)\cdot U_0, \end{eqnarray} where $U_0$ and $U_{\texttt{g}}(z)$ are the evolutions under $\mathcal{H}_0$ and $\mathcal{H}_{\texttt{g}}(t,z)$, respectively. Here, we consider that the gradient is time independent, $\texttt{g}_{k}(t)=\texttt{g}$, and the system is homonuclear, \textit{i.e.}, $\gamma_k = \gamma$; a short sketch of the coherence-order bookkeeping of eq. \eqref{eq:coh} is given below.
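The following few lines (illustrative only; representing the basis labels as strings is an implementation choice, not part of the original text) tabulate the coherence order $c_{vw}$ of eq. \eqref{eq:coh}.

\begin{verbatim}
def coherence_order(bv: str, bw: str) -> int:
    """Coherence order of |b_v><b_w|, eq. (coh), from the binary labels b_v, b_w."""
    assert len(bv) == len(bw)
    return sum((-1) ** int(w) - (-1) ** int(v) for v, w in zip(bv, bw)) // 2

# Examples quoted in the text:
# coherence_order('00', '10') == -1
# coherence_order('01', '11') == -1
# coherence_order('11', '00') ==  2
\end{verbatim}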
However, our analysis can be extended to time-dependent gradient amplitudes and heteronuclear systems. The evolution of a term of the form $\outpr{b_v}{b_w}$ will result in: \begin{eqnarray}\label{eq:evo} U_{\mathcal{H}_{T}}(z)\outpr{b_v}{b_w}&& U_{\mathcal{H}_{T}}(z)^{\dagger} \nonumber \\ &&= A_{vw} U_{\texttt{g}}(z)\outpr{b_v}{b_w} U_{\texttt{g}}(z)^{\dagger}, \end{eqnarray} where $A_{vw}$ is the constant produced by the application of $U_0$ to $\outpr{b_v}{b_w}$. The exact value of $A_{vw}$ can be easily calculated for small systems. After the evolution under the gradient (see appendix \ref{ap:a1}), and using eq. (\ref{eq:coh}), the total evolution is given by: \begin{eqnarray} U_{\mathcal{H}_{T}}(z)\outpr{b_v}{b_w}&& U_{\mathcal{H}_{T}}(z)^{\dagger} \nonumber \\ &&= A_{vw}\expc\left(-i\gamma \texttt{g} z t c_{vw}\right) \outpr{b_v}{b_w}. \label{eq:grevo1} \end{eqnarray} Using eq. (\ref{eq:estado}), eq. (\ref{eq:basis}) and eq. (\ref{eq:grevo1}), the density matrix of the spins at position $z$ at time $t$ is \begin{equation}\label{eq:dde} \begin{split} \ \rho(t,z) = \sum_{v,w\in [1,2^{Q}]}a_{vw} A_{vw}\expc\left(-i\gamma \texttt{g} z t c_{vw}\right) \outpr{b_v}{b_w}. \end{split} \end{equation} For infinitely many divisions of the sample of length $L$, hereafter referred to as the continuous case, we can use eq. (\ref{eq:estadot1}) to describe the state of the whole sample as (appendix \ref{ap:a1}): \begin{widetext} \begin{eqnarray} \rho_{S}(t)= \sum_{v,w\in [1,2^{Q}]} a_{vw} A_{vw}\Big(\sinc(\gamma \texttt{g} L t c_{vw}) - i \sinc(\gamma \texttt{g} L t c_{vw}/2)\sin(\gamma \texttt{g} L t c_{vw}/2) \Big)\outpr{b_v}{b_w}. \label{eq:continous2} \end{eqnarray} \end{widetext} When dividing the sample into a finite number of divisions, hereafter referred to as the discrete case, using eq. (\ref{eq:estadot}) we obtain (appendix \ref{ap:a1}), \begin{widetext} \begin{eqnarray} \rho_{S}(t)= \sum_{v,w\in [1,2^{Q}]}a_{vw} A_{vw}\left(\frac{\cos(\frac{\gamma \texttt{g}L t c_{vw}}{2}\frac{N}{N-1})\sin(\frac{\gamma \texttt{g} Lt c_{vw}}{2})}{N\sin(\frac{\gamma \texttt{g} Lt c_{vw}}{2}\frac{1}{N-1})} + i \frac{\sin(\frac{\gamma \texttt{g} Lt c_{vw}}{2}\frac{N}{N-1})\sin(\frac{\gamma \texttt{g}L t c_{vw}}{2})}{N\sin(\frac{\gamma \texttt{g}L t c_{vw}}{2}\frac{1}{N-1})} \right) \outpr{b_v}{b_w}. \label{eq:discrete} \end{eqnarray} \end{widetext} For large $N$, $ \frac{N}{N-1} \approx 1 $ and $ \sin(\frac{\gamma \texttt{g}L t c_{vw}}{2}\frac{1}{N-1}) \approx \frac{\gamma \texttt{g}L t c_{vw}}{2}\frac{1}{N-1} $. Hence, \begin{widetext} \begin{eqnarray} \frac{\cos(\frac{\gamma \texttt{g}L t c_{vw}}{2}\frac{N}{N-1})\sin(\frac{\gamma \texttt{g} Lt c_{vw}}{2})}{N\sin(\frac{\gamma \texttt{g} Lt c_{vw}}{2}\frac{1}{N-1})} \approx \frac{\cos(\frac{\gamma \texttt{g}L t c_{vw}}{2})\sin(\frac{\gamma \texttt{g} Lt c_{vw}}{2})}{\frac{\gamma \texttt{g} Lt c_{vw}}{2}} = \frac{\sin(\gamma \texttt{g} Lt c_{vw})}{\gamma \texttt{g} Lt c_{vw}} = \sinc(\gamma \texttt{g} Lt c_{vw}), \label{eq:real} \end{eqnarray} and \begin{eqnarray} \frac{\sin(\frac{\gamma \texttt{g} Lt c_{vw}}{2}\frac{N}{N-1})\sin(\frac{\gamma \texttt{g}L t c_{vw}}{2})}{N\sin(\frac{\gamma \texttt{g}L t c_{vw}}{2}\frac{1}{N-1})} \approx \frac{\sin(\frac{\gamma \texttt{g}L t c_{vw}}{2})\sin(\frac{\gamma \texttt{g} Lt c_{vw}}{2})}{\frac{\gamma \texttt{g} Lt c_{vw}}{2}} = \sinc(\frac{\gamma \texttt{g}L t c_{vw}}{2})\sin(\frac{\gamma \texttt{g} Lt c_{vw}}{2}), \label{eq:imaginary} \end{eqnarray} \end{widetext} This verifies that eq. \eqref{eq:discrete} converges to eq. \eqref{eq:continous2} for large values of $N$.
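This convergence can also be checked numerically. The short sketch below is illustrative only: it averages the phase factor of eq. \eqref{eq:dde} for a single coherence over one possible uniform $z$-grid and compares the result with the coefficient of eq. \eqref{eq:continous2}; the grid convention used in appendix \ref{ap:a1} may differ. For $\gamma \texttt{g} L t = 2\pi$ (the value used in the discussion that follows) the discrete average of the non-zero coherence orders already vanishes for these small $N$, while for a generic value the difference decreases as $N$ grows.

\begin{verbatim}
import numpy as np

def discrete_factor(x, c, N):
    """Average of the phase factor of eq. (dde) over N equally spaced positions.

    x stands for gamma*g*L*t, c is the coherence order; the grid z_j = j*L/N
    (j = 0,...,N-1) is one possible choice of discretization."""
    z = np.arange(N) / N                      # z/L
    return np.mean(np.exp(-1j * x * c * z))

def continuous_factor(x, c):
    """Coefficient of eq. (continous2): sinc(xc) - i sinc(xc/2) sin(xc/2), sinc(y)=sin(y)/y."""
    return np.sinc(x * c / np.pi) - 1j * np.sinc(x * c / (2 * np.pi)) * np.sin(x * c / 2)

for x in (2 * np.pi, 1.0):                    # gamma*g*L*t: special and generic values
    for c in (1, 2, 3, 4):                    # coherence orders
        for N in (6, 14, 1000):
            err = abs(discrete_factor(x, c, N) - continuous_factor(x, c))
            print(f"x={x:.3f} c={c} N={N} |discrete-continuous|={err:.2e}")
\end{verbatim}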
Generally, gradients are used to suppress all the coherences except the zero-order ones, which are unaffected. To choose the minimum number of divisions $N$ for a simulation, we must guarantee that eq. (\ref{eq:discrete}) and eq. (\ref{eq:continous2}) produce close final states. For $\gamma \texttt{g} L = 2 \pi $ kHz and $\tau = 1$ ms, all the coherences (except zero order) vanish according to eq. (\ref{eq:continous2}). In this case, it can be seen that for $N>Q+1$, where $Q$ is the number of qubits, all the coherences vanish according to eq. (\ref{eq:discrete}). Since $-Q\leq c_{vw} \leq Q$, for any $N \leq Q+1 $, there will always be one coherence for which the denominator of eq. (\ref{eq:discrete}) goes to zero. Thus, for specific configurations, we can simulate the gradient with high precision using a small value of $N$. We plot the evolution of the coefficients of various coherences with time for the aforementioned value of $\gamma \texttt{g} L$ in fig. \ref{fig:SimCoh}. At time $\tau = 1$ ms, we see that all the coherences are zero irrespective of the number of divisions. \begin{figure}% \includegraphics[width=0.47\textwidth]{Figure_2.pdf} \centering \caption{Plots of the evolution of the real (left column) and imaginary (right column) coefficients of coherence orders $1$ to $4$ (top to bottom). We compare the continuous case with infinitely many divisions (red) with $N = 6$ and $14$ divisions. It is evident that all the coefficients go to zero at time $1$ ms, marked by a black dot.}% \label{fig:SimCoh}% \end{figure} \begin{figure}[h]% \includegraphics[width=8.5cm]{seqgr2} \centering \caption{Sequence composed of $\Gamma$ repetitions of random unitary evolutions and magnetic field gradients with $\tau = 1$ ms. The magnetic field gradient is the same for all repetitions.}% \label{fig:seqgr2}% \end{figure} \begin{figure}[h]% \includegraphics[width=8.5cm]{fide7} \centering \caption{Number of divisions needed to simulate the sequence shown in fig. \ref{fig:seqgr2} and obtain a fidelity greater than 0.99999 for different values of $\Gamma$ - The red and blue dots represent the results for the 4 and 7 qubit systems, respectively. The lines are fits of the function $N(\Gamma,Q) = a\Gamma^{b} -Q$, where $Q$ is the number of qubits in the system. For the 4 qubit system (gray line), we obtained $ a = 8.5 \pm 0.2$ and $ b = 0.464 \pm 0.008$. For the 7 qubit system (green line), we obtained $ a = 12.03 \pm 0.04$ and $ b = 0.4486 \pm 0.0008$.}% \label{fig:ffid}% \end{figure} Now, we verify how $N$ increases when we simulate a sequence composed of several pulses and gradients. For this, we simulated the sequence shown in fig. \ref{fig:seqgr2}, composed of $\Gamma$ random unitary evolutions and magnetic field gradients, fixing $\gamma \tau L \int_{0.2}^{\tau+0.2}\texttt{g}_{1}(t)dt = 2\pi$ and varying the value of $\delta z$. We used the initial states $\left | 0000 \right \rangle$ and $\left | 0000000 \right \rangle$ for the simulations with the 4 and 7 qubit systems. The sequence from fig. \ref{fig:seqgr2} was simulated 64 times for each value of $\Gamma$ with a different set of random unitaries, and in each simulation the value of $\delta z$ was optimized to obtain a fidelity greater than 0.99999 using the smallest number of molecules. After this optimization, we determined the highest value of $N$ obtained for different values of $\Gamma$. The results for the 4 and 7 qubit systems are presented in fig.
\ref{fig:ffid}. We used these results to fit the function $N(\Gamma,Q) = a\Gamma^{b} -Q$. The values of $a$, $b$ and the fitted curves for the 4 and 7 qubit systems are shown in fig. \ref{fig:ffid}. As the values of $a$ and $b$ are small, the value of $N$ will not increase too fast when $\Gamma$ increases. Thus, we can still perform a fast simulation with good precision considering an ensemble composed of a few molecules, if we choose the value of $\delta z$ well. Furthermore, we can use the function $N(\Gamma,Q)$ to estimate the value of $N$ needed to simulate a sequence composed of $\Gamma$ unitaries and gradients. \section{Experiment}\label{sec:experiment} Here, we report some experimental tests showing that our simulations with a small $N$ agree with the experimental data. The experiments were performed using a Bruker Avance III $700$ MHz NMR spectrometer with a sample containing $^{13}\textrm{C}$-labelled transcrotonic acid dissolved in acetone, the 4 qubit nuclear spin system from fig. \ref{fig:molecula4q}(a), at a room temperature of 298 K. \subsection{Implementing quantum channels} In the first experimental tests, we implemented the two quantum channels shown in fig. \ref{fig:seqtomo}(a-b). The experiments were performed for two different shapes of the gradient, $\texttt{g}_{1}(t)$ or $\texttt{g}_{2}(t)$ from fig. \ref{fig:seqgr}(b), with different maximum amplitudes and durations of the gradient. The pulses used to implement the unitaries were optimized using the technique developed by Peterson \textit{et al.} \cite{johnp}. Since, in quantum state tomography (QST), the number of measurements increases exponentially with the size of the system, we performed the QST on the two-qubit subsystem $C_{1}$ and $C_{2}$ \cite{livronc,tomografia}. By doing so, we reduced the number of measurements for QST and were able to get information about coherence terms of order $0$, $1$, and $2$, and test whether our simulations can describe the experiments well. \begin{figure}[h]% \hspace*{-0.4cm} \includegraphics[width=6.7cm]{seqtomo.jpg} \centering \caption{Sequences used in the experiments to test whether our simulations agree with the experimental data - The two sequences are used to create coherences. Then, the gradient is applied for a time $\tau$ and the state of $C_{1}$ and $C_{2}$ is determined using quantum state tomography. $U(\Delta t)$ represents a free evolution for a time $\Delta t = 3.459$ ms under the action of $\mathcal{H}_{0}$.}% \label{fig:seqtomo}% \end{figure} In fig. \ref{fig:restomo}, we report the fidelity between the experimental state and the simulated one (obtained using an ensemble with $N =6$) for different types of tests. The fidelity between the simulations with large and small values of $N$ is at least 0.99999, when the value of the space discretization ($\delta z$) is optimized. Our best experimental fidelity is around 0.99. We have slightly worse fidelity experimentally because, in the experiments, there are other effects that influence the dynamics of the system, and they are not included in our simulations. The main contributions to these errors come from: the diffusion process, inhomogeneity of the magnetic field that can cause some extra gradients \cite{livrole}, gradients of temperature in the sample \cite{atemp}, the optimized pulses, and the field gradients not being implemented correctly.
Even with these errors, the fidelity obtained is good enough to allow us to use the method presented in this article, together with an optimization algorithm, to find sequences (composed of pulses, field gradients and free evolutions) that implement a specific non-unitary dynamics, which can be used to prepare a specific state or implement a quantum channel \cite{livronc}. \begin{figure}[h]% \includegraphics[width=8.5cm]{restomo} \centering \caption{ Experimental fidelity - $Amp = $ max$[\texttt{g}_{k}(t)]$/$G$, where $G$ is the maximum amplitude of the gradient that the experimental equipment can produce. For our set-up, we have $\gamma_{k} L G = 48.6 \pm 0.2$ kHz.}% \label{fig:restomo}% \end{figure} \subsection{Optimization of a PPS sequence} One of the applications of the gradient in quantum computation is the preparation of pseudo pure states. By combining the results of this work with those presented in \cite{johnp} and \cite{oti}, with some modifications, we were able to obtain sequences to prepare pseudo pure states for 4 and 7 qubit systems. These sequences produce a pseudo pure state using multiple scans, which corresponds to performing multiple experiments (scans), each with a similar or different pulse sequence; the result is the average over all the scans. We set the limit of one magnetic field gradient per scan. Our sequences have a small number of pulses that implement rotations, and produce pseudo pure states with a better signal to noise ratio. In order to study the increase in the signal to noise ratio, we compared the thermal state spectrum with the pseudo pure state spectrum using the same number of scans. This means that, when comparing the signal of the pseudo pure state with the thermal state, we are not taking into account the signal increase resulting from several scans. In our simulations with the 4 qubit system, we obtained sequences that can double the value of the signal to noise ratio compared to the thermal state. In the 7 qubit system, the improvement is, approximately, 3.5 times. By performing simulations with other homonuclear systems with nuclear spins $1/2$, we note a trend in the maximum increase in the signal to noise ratio: for a system with $Q$ homonuclear spins, the signal to noise ratio can be increased, approximately, by a factor of $Q/2$. In our algorithm, the times of the free evolutions and the angles and phases of the rotations of the circuit presented in fig. \ref{fig:otimi} are optimized to minimize the value of the function: \begin{equation}\label{eq:fide} \begin{split} \ \mathcal{F} = [1-\texttt{Fidelity}(\rho_{f},\rho_{pps})](1-\epsilon) + \epsilon \left \| Q/2 -M \right \|, \end{split} \end{equation} where $\rho_{pps}$ is the theoretical pseudo pure state, $\rho_{f}$ is the final state obtained after the simulation of the circuit presented in fig. \ref{fig:otimi}, $M$ is the element of $\rho_{f}$ that has the highest absolute value and $\epsilon$ is a number, in the interval $[0,0.5]$, that can be used to prioritize, in the optimization, either the fidelity or the improvement in the signal to noise ratio of the pseudo pure state. In our optimization, we used the shape $\texttt{g}_{1}(t)$ for the gradient, with $\tau = 1$ ms.
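For concreteness, a minimal sketch of this cost function is given below (illustrative only: the state-fidelity convention, the default value of $\epsilon$ and the helper names are assumptions, and the optimizer that minimizes the cost is not shown).

\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

def state_fidelity(rho, sigma):
    """Fidelity Tr sqrt( sqrt(rho) sigma sqrt(rho) ); square it for the squared convention."""
    s = sqrtm(rho)
    return float(np.real(np.trace(sqrtm(s @ sigma @ s))))

def pps_cost(rho_f, rho_pps, Q, eps=0.1):
    """Cost of eq. (fide): (1 - Fidelity)(1 - eps) + eps*|Q/2 - M|,
    with M the entry of rho_f of largest absolute value."""
    M = np.max(np.abs(rho_f))
    return (1.0 - state_fidelity(rho_f, rho_pps)) * (1.0 - eps) + eps * abs(Q / 2.0 - M)
\end{verbatim}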
\begin{figure}[h]% \includegraphics[width=8.5cm]{otimi} \centering \caption{Sequence of rotations, free evolutions and gradient used in the optimization to prepare a pseudo pure state - The index $s$ represents the set of angles and times used in the scan $s$, and $U(\Delta t)$ represents a free evolution for a time $\Delta t$ under the action of $\mathcal{H}_{0}$. $R_{\phi}^{s}(\theta)$ is a rotation by an angle $\theta$ around the axis $\zeta = \cos(\phi)\hat{x} + \sin(\phi)\hat{y}$. }% \label{fig:otimi}% \end{figure} \begin{figure}[h]% \includegraphics[width=7.5cm]{ppsinte} \centering \caption{ Improvement in the signal to noise ratio using our optimal sequence to prepare the pseudo pure state with the 4-qubit system. }% \label{fig:ppsinte}% \end{figure} As a final test of our algorithm, we optimized the angles and times of the sequence from fig. \ref{fig:otimi} to prepare a pseudo pure state for the 4-qubit system. Instead of searching for a sequence that gives an improvement of 2 in the signal to noise ratio, we prioritized the length of the sequence and the fidelity. To make the sequence short, we used 2 scans, each with a different set of angles and free evolution times. Then, in theory, the improvement in the signal to noise ratio is 1.902, the theoretical fidelity is higher than 0.9999, the duration of the sequence is approximately $27$ ms, and for each scan we fix $\eta = 5$ in the circuit shown in fig. \ref{fig:otimi}. The improvement in the signal to noise ratio measured experimentally is presented in fig. \ref{fig:ppsinte}. The fidelity of the pseudo pure state prepared experimentally is 0.998. \section{Conclusions} Pulsed field gradients (PFG) are important for different areas of science. In physics, they are essential to prepare certain states, perform measurements, measure the diffusion coefficient of a sample, and implement non-unitary dynamics (quantum channels). We studied how to efficiently simulate PFGs using discretization in time and space. Utilizing the recent techniques developed by Bhole and Jones \cite{apro}, we provide a guideline for the size of the time discretization, depending upon the fidelity required in the experiment. We show how to efficiently discretize the space into a very small number of divisions and still be able to simulate the PFGs with high precision. We show that for a system of $Q$ qubits, when applying a single gradient, the minimum number of divisions needed in space is $Q+2$. For sequences composed of multiple gradients and unitary evolutions, we showed with our simulations that the number of slices varies as $a\Gamma^b - Q$, where $\Gamma$ is the number of repetitions of gradients and unitary evolutions implemented, and $a$ and $b$ are small numbers that depend upon the system being studied. As the number of slices is small, and does not increase too fast when $\Gamma$ increases, we can simulate the sequences quickly and with high precision when the value of the space discretization is optimized. We performed two types of experiments which utilize the techniques developed above. First, we implemented two quantum channels and determined part of the state of the system to compare with the theoretical prediction. For all states determined experimentally, the fidelity was higher than 0.98. In the second experiment, we showed that the method presented here to simulate the gradient can be used together with an optimization algorithm to find an optimized sequence to prepare a pseudo pure state.
We describe how to perform the optimization to obtain a sequence that prepares a pseudo pure state with a better signal to noise ratio than any other procedure we are aware of. We were able to see a trend in our simulations showing that $Q/2$ is the maximum improvement of the signal to noise ratio for a homonuclear system of $Q$ qubits. Finally, we implemented an optimized sequence to prepare the pseudo pure state in the four-qubit system, and measured, experimentally, an improvement higher than 1.8 in the signal to noise ratio. The fidelity of the experimental state is higher than 0.99. Thus, in addition to demonstrating that a fast simulation of the dynamics of an NMR sample state under the influence of field gradients is possible, we compared our fast simulation results with the experimental data, and we showed how a fast simulation can be applied to optimize sequences. The results of this study were already used to design a quantum Szilard engine, which uses information about the state of a system to fully convert heat into work \cite{szi}. We believe that our study can help in designing optimal NMR sequences, composed of PFGs, employed in different areas of science. \begin{acknowledgments} We acknowledge financial support from the Ministry of Innovation, Science and Economic Development (Canada), the Government of Ontario, CIFAR, and Mike and Ophelia Lazaridis. \end{acknowledgments}
\section{Introduction} The Kazhdan--Lusztig polynomial of a matroid was introduced by Elias, Proudfoot, and Wakefield in 2016 \cite{firstklpolys}; we recall the definition here. Throughout, let $M$ be a matroid, $F$ be a flat of the matroid $M$, $\operatorname{rk}$ be the rank function on $M$, and $\chi_M$ be the characteristic polynomial of $M$. We write $M^F$ (respectively $M_F$) for the localization (respectively contraction) of $M$ at $F$. Then, the Kazhdan-Lusztig polynomial of $M$, denoted $P_M(t)$, is determined by the following conditions: \begin{enumerate} \item If $\operatorname{rk} M=0$, then $P_M(t)=1$. \item If $\operatorname{rk} M>0$, then $\deg P_M(t)<{1\over 2} \operatorname{rk} M$. \item $\displaystyle t^{\operatorname{rk} M}P_M(t^{-1})=\sum_{F \text{ a flat}} \chi_{M^F}(t)P_{M_F}(t)$. \end{enumerate} Since their introduction, these polynomials have attracted active research. This is mostly due to their (conjecturally) nice properties, such as non-negativity of the coefficients and real-rootedness (see \cite{firstklpolys,thag,klum,fanwheelwhirl,equithag}). There has also been much effort put into finding relations between these polynomials or generalizations thereof (see \cite{deletion,kls,pfia}). However, these polynomials have been explicitly calculated only for very special classes of matroids (for instance, see \cite{stirling,klum,klpolys,fanwheelwhirl,qniform}), and many of the known formulas leave much room for improvement. In particular, as of now, there is no enlightening interpretation for such coefficients. In this paper, we provide a combinatorial formula for the Kazhdan-Lusztig polynomials of sparse paving matroids. We then use this formula to deduce the non-negativity of the coefficients of these polynomials. The class of sparse paving matroids is known to enjoy properties such as being dual-closed and minor-closed. However, what draws research interest to these matroids is a conjecture given by Mayhew, Newman, Welsh, and Whittle \cite{sparsealmostall}. Based on a prediction of Crapo and Rota \cite{cr}, they conjecture that sparse paving matroids will eventually predominate in any asymptotic enumeration of matroids. That is, \[\lim_{n\to \infty}{s_n\over m_n}=1,\] where $s_n$ is the number of sparse paving matroids on $n$ elements and $m_n$ is the number of matroids on $n$ elements. In pursuit of this conjecture, Pendavingh and van der Pol \cite{sparselog} have shown that \[\lim_{n\to\infty} {\log s_n \over \log m_n}=1.\] That is, so far, what we know is that logarithmically almost all matroids are sparse paving matroids. Hence, the fact that we are able to prove non-negativity of the coefficients for such matroids lends support to the conjecture that all matroids have Kazhdan-Lusztig polynomials with non-negative coefficients. There are several known characterizations of sparse paving matroids. Let $M$ be a matroid of rank $d$ so that the ground set has $m+d$ elements. Let $\mathcal B$ be the set of bases for $M$, so in particular $\mathcal B\subseteq {[m+d]\choose d}$. Set $\mathcal{ CH}:={[m+d]\choose d} \setminus \mathcal B$. Then $M$ is sparse paving if any (and hence all) of the following hold. \begin{enumerate} \item $\mathcal{ CH}$ is the set of circuit-hyperplanes for $M$. \item For distinct $C,C'\in \mathcal{ CH}$, we have $|C\triangle C'|\geq 4$, where $C\triangle C':=(C\setminus C') \cup (C'\setminus C)$ is the \textit{symmetric difference.} \item Every nonspanning circuit is a hyperplane.
\item $M$ and its dual $M^*$ are both paving; that is, their circuits have cardinality at least $d$. \end{enumerate} To this end, we let $\smd$ be the sparse paving matroid of rank $d$ with ground set $[m+d]$ so that $\mathcal{ CH}$ is the set of circuit-hyperplanes. The last thing we need to define before stating our main result is the object that will allow us to write our combinatorial formula for the coefficients of the polynomials. Define $\operatorname{Skyt}(a,i,b)$ to be the set of fillings of the following shape so that the rows and columns strictly increase, with entries in $[a+b+2i-2]$. \begin{center} \begin{figure}[h] \begin{tikzpicture}[scale=0.3, line width=1pt] \draw (-1,0) grid (0,6); \draw[decoration={brace,raise=7pt},decorate] (-1,0) -- node[left=7pt] {$a$} (-1,6); \draw (0,4) grid (4,6); \draw[decoration={brace,mirror, raise=4pt},decorate] (0.1,4) -- node[below=7pt] {$i$} (5,4); \draw (4,4) grid (5,9); \draw[decoration={brace,mirror, raise=5pt},decorate] (5,4) -- node[right=7pt] {$b$} (5,9); \end{tikzpicture} \caption{The left-most column has height $a$, followed by $i-1$ columns of height 2, followed by the right-most column of height $b$.} \end{figure} \end{center} We define a related object which we denote $\operatorname{\overline{Skyt}}(i,b)$, the subset of $\operatorname{Skyt}(2,i,b)$ where the value 1 appears at the top of the left-most column. We set $\operatorname{skyt}(a,i,b):=\#\operatorname{Skyt}(a,i,b)$ and $\operatorname{\overline{skyt}}(i,b):=\#\operatorname{\overline{Skyt}}(i,b)$. There are some conventions for special values of $a$, $i$ and $b$, but we leave these for Section \ref{sec:SKYT}. We are now ready to state our main result. \begin{theorem}\label{thm:main} Let $\coeff$ be the $i$-th coefficient of the Kazhdan-Lusztig polynomial of the sparse paving matroid $\smd$. Then \[{\coeff}=\operatorname{skyt}(m+1,i,d-2i+1)-|\mathcal{ CH}|\cdot \operatorname{\overline{skyt}}(i,d-2i+1).\] Moreover, this formula is always non-negative. \end{theorem} What is truly remarkable about this formula is that it is not affected by how the elements of $\mathcal{ CH}$ relate to one another. Keep in mind that $\mathcal{ CH}$ could be \textit{any} set of elements whose pairwise symmetric differences have size at least 4. Given a fixed $m$, $d$, and $i$, the value of the coefficient is invariant under the choice of $\mathcal{ CH}$ so long as $|\mathcal{ CH}|$ remains the same. When $\mathcal{ CH}$ is a disjoint collection, we have already shown in \cite[Proposition 2]{rhoremoved} that the formula in Theorem \ref{thm:main} has a manifestly positive interpretation. Consider the subset of $\operatorname{Skyt}(m+1,i,d-2i+1)$ satisfying at least one of the following three conditions. \begin{itemize} \item the top entry of the right-most column is 1; or \item the bottom entry of the right-most column is greater than $d+|\mathcal{ CH}|$; or \item the third entry (from the top) of the left-most column is less than $d+1$. \end{itemize} Then the size of this subset agrees with the formula we give in Theorem \ref{thm:main}. In the special case where $\mathcal{ CH}=\emptyset$, the second condition becomes tautological, as the bottom of the right-most column is guaranteed to be at least $d+1$ for any tableau. So when $\mathcal{ CH}=\emptyset$, we get the entire size of $\operatorname{Skyt}(m+1,i,d-2i+1)$ as our coefficient, as Theorem \ref{thm:main} indicates. Also in this case we have $\smd=U_{m,d}$, the uniform matroid of rank $d$ on $m+d$ elements.
\footnote{The first (and only known) manifestly positive integral interpretation for uniform matroids was given in \cite[Remark 3.4]{klpolysequiv}, which requires possibly many Young diagrams.} In light of this, we have proven the following conjecture in the case of sparse paving matroids. \begin{conjecture} Let $M$ be a matroid of rank $d$ on $m+d$ elements, and let $c^i$ be the $i$-th coefficient for $P_M(t)$. Then \[c^i\leq c^i_{m,d}(\emptyset).\] That is, among all matroids with rank $d$ and ground set size $m+d$, the Kazhdan-Lusztig polynomial for $U_{m,d}$ has the largest coefficients. \end{conjecture} \noindent This conjecture was posed by Katie Gedeon. It has no written source, but was communicated to us by Nicholas Proudfoot. It is also interesting to note that when $\mathcal{ CH}$ is a disjoint collection, $\smd$ can be seen to be representable. This in turn gives a combinatorial formula for the intersection cohomology Poincar\'{e} polynomial of the corresponding reciprocal plane over a finite field, thanks to \cite{firstklpolys}. In general, though, almost all sparse paving matroids are not representable. This is due in large part to Nelson \cite{almostallnonrep}, who showed that asymptotically almost all matroids are not representable. In particular, his work implies that the logarithmic growth of representable matroids is bounded by a polynomial. Meanwhile, the logarithmic growth of matroids in general is known to be at least exponential, and so the same must be true for sparse paving matroids. One final interesting thing to note about our formula is that if $m+1=2$ or $d-2i+1=2$, then $\operatorname{skyt}(m+1,i,d-2i+1)$ becomes equal to a well-known number, namely the number of polygon dissections \cite{S}. Hence, when $m+1=d-2i+1=2$, it becomes a Catalan number.\footnote{This connection to polygon dissections was already mentioned in several places, namely in Remark 1.3 in \cite{intcohom} and Remark 5.3 of \cite{klpolys}, but with the discovery of our combinatorial object, this fact follows directly from \cite{S}.} It should be remarked that in \cite{chowring}, Braden, Huh, Matherne, Proudfoot, and Wang say that their forthcoming paper will prove the non-negativity of the coefficients of the Kazhdan-Lusztig polynomials of all matroids. However, their approach may not develop a directly computable formula for these coefficients. On the other hand, we have formulas for the tableaux appearing in Theorem \ref{thm:main}, which means one can use our formulas to directly find what these coefficients are in the case of sparse paving matroids. This paper proceeds as follows. In section \ref{sec:SKYT}, we further discuss the elements of $\operatorname{Skyt}(a,i,b)$ and $\operatorname{\overline{Skyt}}(i,b)$. We also bring up some important conventions and useful identities for $\operatorname{skyt}(a,i,b)$ and $\operatorname{\overline{skyt}}(i,b)$. In section \ref{sec:flats_contr_local_charpoly}, we discuss flats, localizations, contractions, and the characteristic polynomials for $\smd$. In section \ref{sec:KL for sparse paving}, we verify the formula for the Kazhdan-Lusztig polynomial of $\smd$ given in Theorem \ref{thm:main}. We then give some useful upper bounds on $|\mathcal{ CH}|$ in section \ref{sec:bounds}. We use these bounds to prove the non-negativity part of Theorem \ref{thm:main}, which we do in section \ref{sec:positivity}. We end the paper with some integral identities we use throughout the paper in section \ref{sec:integral_identities}.
\textit{Acknowledgments:} The authors would like to thank Nicholas Proudfoot and Jacob Matherne for their helpful comments and feedback. \section{Skew Young Tableaux}\label{sec:SKYT} Consider the following shape. \begin{center} \begin{figure}[h] \begin{tikzpicture}[scale=0.4, line width=1pt] \draw (-1,0) grid (0,6); \draw[decoration={brace,raise=7pt},decorate] (-1,0) -- node[left=7pt] {$a$} (-1,6); \draw (0,4) grid (4,6); \draw[decoration={brace,mirror, raise=4pt},decorate] (0.1,4) -- node[below=7pt] {$i$} (5,4); \draw (4,4) grid (5,9); \draw[decoration={brace,mirror, raise=5pt},decorate] (5,4) -- node[right=7pt] {$b$} (5,9); \end{tikzpicture} \caption{The left-most column has height $a$, followed by $i-1$ columns of height 2, followed by the right-most column of height $b$.} \end{figure} \end{center} A \textit{legal filling }of the above shape involves placing each number from $\{1,2,\dots, a+2i+b-2\}$ into the squares such that the values in the columns and rows strictly increase going down and right, respectively. Note that this is the same restriction on the entries of a standard young tableau, but the above shape does not fit the description of the typical young tableau. We refer to a legal filling of the above shape as a \textit{skew young tableau}, and denote $\operatorname{Skyt}(a,i,b)$ as the set of such legal fillings, and denote $\operatorname{skyt}(a,i,b):=\#\operatorname{Skyt}(a,i,b)$. For our tableaux to be defined, we need $a,b\geq 2$ and $i\geq 1$, but our formula in Theorem \ref{thm:main} may be used for other non-negative values of $a$, $b$, and $i$. Hence, there are some conventions we have set for the few exceptional values that can occur so that our formula still works. \begin{itemize} \item If $i=0$, then $\operatorname{skyt}(a,i,b)=1$. \item If $i>0$ and at least one of $a$ or $b$ is less than 2, then $\operatorname{skyt}(a,i,b)=0$. \end{itemize} We also define a related collection of objects, which we denote $\operatorname{\overline{Skyt}}(i,b)$. This set is the subset of $\operatorname{Skyt}(2,i,b)$ so that 1 is always the entry at the top of the left-most column. The size of $\operatorname{\overline{Skyt}}(i,b)$ is denoted $\operatorname{\overline{skyt}}(i,b)$. By convention, $\operatorname{\overline{skyt}}(i,b)=0$ if $i=0$. In \cite[Lemma 1]{rhoremoved}, we prove the following result. \begin{lemma}\label{lem:skyt_count} \[\operatorname{skyt}(a,i,b)={1\over i!(a-2)!(a+i-1)}\sum_{k=0}^{b-2} (-1)^k{a+b+2i-2 \choose b-2-k} {(a+2i+k)!(k+1)\over (a+i+k)(i+k+1)!}, \] \end{lemma} Using the proof of this result, it is not difficult to achieve the following identity by setting $a=2$ and replacing the $a+2i+b-2$ in Lemma \ref{lem:skyt_count} with $a+2i+b-3$. \begin{lemma}\label{lem:bsyt_count} \[\operatorname{\overline{skyt}}(i,b)={1\over (i+1)!}\sum_{k=0}^{b-2} (-1)^k{b+2i-1 \choose b-2-k} {(2i+k+2)!(k+1)\over (i+k+2)!}, \] \end{lemma} One can achieve two formulas for $\operatorname{skyt}(a,i,b)$ and $\operatorname{\overline{skyt}}(i,b)$ that avoids alternating sums. We will need a few integral identities to produce these formulas. These identities can be found in section \ref{sec:integral_identities}, but are referenced as they are needed in the proofs that follow. Throughout, $(x)^{(n)}$ is the rising factorial $(x)(x+1)\cdots(x+n-1)$ for integers $x$ and $n$. We start with the formula for $\operatorname{skyt}(a,i,b)$. 
\begin{lemma}\label{lem:manifposskyt} \[\operatorname{skyt}(a,i,b)={a+i-2\choose i}{a+b+2i-2\choose b+i-1}\sum_{k=0}^{b-2}{{b+i-k-3\choose i-1}\over {a+i+k\choose k+1}}\] \end{lemma} \begin{proof} One can rewrite Lemma \ref{lem:skyt_count} as \begin{align} \operatorname{skyt}(a,i,b)&={(a+b+2i-2)!\over i!(a-2)!(a+i-1)(b-2)!}\sum_{k=0}^{b-2} (-1)^k {b-2\choose k}{1\over (a+i+k)(k+2)^{(i)}}.\label{eq:skytalt} \end{align} We can recover this sum for $\operatorname{skyt}(a,i,b)$ by applications of integrals to a polynomial. Let \[\displaystyle f(x,y)={(a+b+2i-2)!xy^{a+i-1}(1-xy)^{b-2}\over i!(a-2)!(a+i-1)(b-2)!}.\] Our integrals are broken up into three parts. \begin{enumerate} \item[(a)] First find $g(x)$, where $g(x):=\displaystyle\int_0^1 \ f(x,y) \ dy$; then \item[(b)] find $\displaystyle h_{i-1}(x_{i-1}):=\int_0^{x_{i-1}}h_{i-2}(x_{i-2})\ dx_{i-2}$, where $\displaystyle h_1(x_1):=\int_0^{x_1} g(x_0) \ dx_0$ and $x_0,x_1,\dots, x_{i-1}$ are $i$ variables; then \item[(c)] solve $\displaystyle\int_0^1 h_{i-1}(x_{i-1})\ dx_{i-1}$. \end{enumerate} It is not difficult to show that, if $(1-xy)^{b-2}$ is written using the binomial expansion, part (c) will give the equation for $\operatorname{skyt}(a,i,b)$ found in equation \eqref{eq:skytalt} above. To get the statement of Lemma \ref{lem:manifposskyt}, we apply these three steps to $f(x,y)$ directly as written. First, we use Corollary \ref{cor:int_0^1_y^a(1-xy)^b} to do part (a). \[g(x):=\int_0^1f(x,y)\ dy={(a+b+2i-2)!(a+i-1)!\over i!(a-2)!(a+i-1)}\sum_{k=0}^{b-2}{(1-x)^{b-k-2}x^{k+1}\over (a+i+k)!(b-k-2)!}.\] To complete parts (b) and (c) we apply Proposition \ref{prop:int_i_times} to get \[\int_0^1 h_{i-1}(x_{i-1}) \ dx_{i-1} = {(a+b+2i-2)!(a+i-1)!\over i!(a-2)!(a+i-1)(i-1)!(b+i-1)!}\sum_{k=0}^{b-2}{(b+i-k-3)!(k+1)!\over (a+i+k)!(b-k-2)!}.\] This gets us a manifestly positive sum, and all that is left to get our desired result is to perform some algebraic manipulations. One can combine the terms $(b+i-k-3)!$, $(b-k-2)!$, and $(i-1)!$ to give $\displaystyle {b+i-k-3 \choose i-1}$. Then combine $(a+i-1)!$, $(k+1)!$, and $(a+i+k)!$ to get $\displaystyle {a+i+k\choose k+1}$. Then scaling by ${(a+i-2)!\over (a+i-2)!} $ allows us to group the remaining factors into binomial coefficients, giving \[{a+i-2\choose i}{a+b+2i-2\choose b+i-1}\sum_{k=0}^{b-2}{ {b+i-k-3 \choose i-1}\over {a+i+k\choose k+1}}. \qedhere\] \end{proof} \begin{rem} \label{rem:rewriting} \leavevmode \begin{enumerate} \item While having a manifestly positive formula for $\operatorname{skyt}(a,i,b)$ is nice, it is unfortunate that, in general, the terms of the sum in Lemma \ref{lem:manifposskyt} are not necessarily integers, even if one scales them by ${a+i-2\choose i}$ and ${a+b+2i-2 \choose b+i-1}$. \item It will be useful to rewrite Lemma \ref{lem:manifposskyt} using a common denominator. We can do this by rewriting the binomials in the sum using the falling factorial $(x)_{(n)}:=x(x-1)\cdots(x-n+1)$. Rewriting the sum gives \begin{align*} &\sum_{k=0}^{b-2}{{b+i-k-3\choose i-1}\over {a+i+k\choose k+1}}\\ &=\sum_{k=0}^{b-2}{(b+i-k-3)_{b-k-2}(k+1)!\over (b-k-2)! (a+i+k)_{(k+1)}}\\ &={1\over (b-2)! (a+i+b-2)_{(b-1)}}\sum_{k=0}^{b-2}(b+i-k-3)_{b-k-2}(k+1)!(b-2)_{(k)}(a+i+b-2)_{(b-k-2)} \end{align*} We will find this version useful later, though it is not as concise as the original formula. \end{enumerate} \end{rem} Using similar methods, we can find a formula for $\operatorname{\overline{skyt}}(i,b)$ which not only avoids an alternating sum, but is in fact a single term.
\begin{lemma}\label{lem:manifposbarskyt} \[\operatorname{\overline{skyt}}(i,b)={2(b+2i-1)!\over (i+1)!(i-1)!(b-2)!(b+i)(b+i-2)}\] \end{lemma} \begin{proof} One can rewrite Lemma \ref{lem:manifposbarskyt} as \begin{align}\label{eq:alt_bskyt} \operatorname{\overline{skyt}}(i,b)={(b+2i-1)!\over (i+1)!(b-2)!}\sum_{k=0}^{b-2}(-1)^k{b-2\choose k} {(2i+k+2)\over (k+2)^{(i+1)}}. \end{align} We can recover this sum for $\operatorname{\overline{skyt}}(a,i,b)$ by applications of a derivative and integrals to a polynomial. Let \[\displaystyle f(x,y)={(b+2i-1)!xy^{2i+2}(1-xy)^{b-2}\over (i+1)!(b-2)!}.\] We break up our plan for applications of a derivative and integrals into three parts. \begin{enumerate} \item[(a)] First solve $g(x):=\displaystyle\left. {d\over dy} f(x,y) \ \right|_{y=1}$; then \item[(b)] find $\displaystyle h_{i}(x_{i}):=\int_0^{x_{i}}h_{i-1}(x_{i-1})\ dx_{i-1}$, where $\displaystyle h_1(x_1):=\int_0^1 g(x_0) \ dx_0$ and $x_0,x_1,\dots, x_i$ are $i+1$ variables; then finally \item[(c)] find $\displaystyle\int_0^1 \ h_i(x_i) \ dx_i$. \end{enumerate} If one writes $(1-xy)^{b-2}$ using the binomial expansion, part (c) outputs the equation for $\operatorname{\overline{skyt}}$ found in equation \eqref{eq:alt_bskyt} above. We claim that leaving $f(x,y)$ as written and then applying these three steps lead to the statement of Lemma \ref{lem:manifposbarskyt}. First, for part (a) observe that \[g(x)=\left.{d\over dy} f(x,y)\right|_{y=1}={2(i+1)(b+2i-1)!\over (i+1)!(b-2)!}x(1-x)^{b-2}-{(b-2)(b+2i-1)!\over (i+1)!(b-2)!}x^2(1-x)^{b-3}.\] We do parts (b) and (c) simultaneously due to Proposition \ref{prop:int_i_times}. This gives \begin{align*} \int_0^1 h_i(x_i)\ dx_i&={2(i+1)(b+2i-1)!\over (i+1)!(b-2)!}{(b-2+i)!\over i!(b+i)!}-{(b-2)(b+2i-1)!\over (i+1)!(b-2)!}{2(b-3+i)!\over i!(b+i)!}\\ &={2(b+2i-1)!(b+i-3)![(i+1)(b-2+i)-(b-2)]\over i!(i+1)!(b+i)!(b-2)!}\\ &={2i(b+2i-1)!(b+i-3)!(b+i-1)\over i!(i+1)!(b+i)!(b-2)!}\\ &={2(b+2i-1)!\over (i+1)!(i-1)!(b-2)!(b+i)(b+i-2)} \qedhere \end{align*} \end{proof} \section{Flats, Contractions, Localizations, and Characteristic Polynomials for $\smd$}\label{sec:flats_contr_local_charpoly}\leavevmode Throughout, let $F$ be a flat, that is, a set which is maximal with respect to its rank. For a matroid $M$, recall that $M^F$ (respectively, $M_F$) denotes the localization (respectively, contraction) of $M$ at $F$. By $M^F$, we mean the matroid with ground set $F$, whose independent sets are those subsets of $F$ that are also independent in $M$. By $M_F$, we mean the matroid with ground set $M\setminus F$, whose independent sets are those subsets whose union with a basis for $F$ is independent in $M$. First, we discuss the flats of $\smd$. It is an elementary exercise to verify the following. \begin{prop} The flats of $\smd$ are \begin{enumerate} \item the sets of cardinality at most $d-2$; \item the sets of cardinality $d-1$ not contained in any element of $\mathcal{ CH}$; \item the elements of $\mathcal{ CH}$; \item $[m+d]$. \end{enumerate} \end{prop} With this, we can now discuss the localizations and contractions of $\smd$. First, recall the localizations and contractions of $U_{m,d}$, the uniform matroid of rank $d$ with groundset $[m+d]$. \[(U_{m,d})^F=\begin{cases} U_{m,d} & F=[m+d]\\ U_{0,|F|} & F\neq [m+d] \end{cases},\] and \[(U_{m,d})_F=\begin{cases} U_{0,0} & F=[m+d]\\ U_{m,d-|F|} & F\neq [m+d] \end{cases}.\] The corresponding equations for $\smd$ can also be described in a similar manner. 
In what follows, if $F$ is a flat, then we define $\mathcal{ CH}(F):=\{C\setminus F: C\in \mathcal{ CH} \text{ such that }F\subseteq C\}$. It is worth noting that if $\mathcal{ CH}$ is the set of circuit-hyperplanes for a sparse paving matroid, then so is $\mathcal{ CH}(F)$, so long as $F$ is strictly contained in some circuit-hyperplane. One way to check this is by verifying that $\mathcal{ CH}(F)$ satisfies the condition that any pair has symmetric difference at least 4. \begin{prop}\label{prop:rest_local_orum} \[\smd^F=\begin{cases} \smd & F=[m+d]\\ U_{1,d-1} & F\in \mathcal{ CH} \\ U_{0,|F|} & \textit{otherwise} \end{cases}\] and \[\smd_F=\begin{cases} \smd & F=\emptyset\\ U_{m-1,1} & F\in \mathcal{ CH}\\ \smdtemp{m}{d-|F|}{\mathcal{ CH}(F)} & \emptyset\subsetneq F\subsetneq C, \text{ for some $C\in \mathcal{ CH}$}\\ (U_{m,d})_F & \textit{otherwise.} \end{cases}\] \end{prop} \begin{proof} For the localization, the only new case necessary to mention in comparison to the uniform case is for $F\in \mathcal{ CH}$; the other cases follow from the uniform case. The localization of this matroid at $F$ treats $F$ as the ground set, with independent sets being those that are independent in $\smd$. We know every \textit{proper} subset of $F$ is independent, giving $ U_{1,d-1}$. Now for the contraction. If we have $F\nsubseteq C$ for all $C\in \mathcal{ CH}$, then the structure of $\smd_F$ is exactly that of $(U_{m,d})_F$. For the case where $F\in \mathcal{ CH}$, we want the subsets of $S:=[m+d]\setminus F$ such that their union with a basis for $F$ is independent in $\smd$. The bases for $F$ are the elements of ${F\choose d-1}$. Note that if $B\in {[m+d]\choose d}$ satisfies $|B\triangle F|=2$, then $B$ is independent in $\smd$. This means the desired subsets of $S$ are the empty set and every singleton of $S$. This gives a matroid isomorphic to $U_{m-1,1}$. Finally, when $\emptyset \subsetneq F\subsetneq C$, for some $C\in \mathcal{ CH}$, note that $F$ is independent, and hence a basis for itself. Thus, the independent sets for $\smd_F$ are the subsets $X$ of $[m+d]\setminus F$ so that $X\cup F$ is independent in $\smd$. That is, $|X|\leq d-|F|$. When $|X|<d-|F|$, $|X\cup F|<d$ and every subset of $[m+d]$ of size smaller than $d$ is independent. When $|X|=d-|F|$, $X\cup F$ is a basis for $\smd$ if and only if $X\cup F\neq C$, for any $C\in \mathcal{ CH}$, which is true if and only if $X\notin \mathcal{ CH}(F)$. That is, we get a matroid isomorphic to $\smdtemp{m}{d-|F|}{\mathcal{ CH}(F)}$. \end{proof} With these in mind, we can now compute the characteristic polynomials of all localizations of $\smd$. However, by Proposition \ref{prop:rest_local_orum}, we equivalently just need to find the characteristic polynomials of $U_{m,d}$ and $\smd$. First, recall that for a matroid $M$, the characteristic polynomial is given by \[\chi_M(t)=\sum_{F\in L(M)} \mu_{L(M)}(\hat{\textbf{0}},F)t^{\operatorname{rk} M -\operatorname{rk} F},\] where $L(M)$ is the lattice of flats for the matroid $M$. In the case when $M=U_{m,d}$, $\chi_M(t)$ is well understood. \[\chi_{U_{m,d}}(t)=(-1)^d{m+d-1\choose d-1}+\sum_{i=0}^{d-1} (-1)^i{m+d\choose i}t^{d -i}.\] Parts of this also arise in $\chi_{\smd}$. \begin{prop}\label{prop:charpoly} Let $c=|\mathcal{ CH}|$.
\[\chi_{\smd}(t)=(-1)^d{m+d-1\choose d-1}-(-1)^dc+t(-1)^{d-1}\left({m+d\choose d-1}-c\right)+\sum_{i=0}^{d-2} (-1)^i{m+d\choose i}t^{d -i}.\] \end{prop} \noindent It is noteworthy that this characteristic polynomial is the same for all choices of $\mathcal{ CH}$ that have the same size. This is due entirely to the symmetric difference condition on $\mathcal{ CH}$, as we will utilize in the proof. \begin{proof}[Proof of Proposition \ref{prop:charpoly}.] For convenience, we omit subscripts for $\chi$ and $\mu$, since throughout we work in $\smd$. The terms of degree at least 2 follows from the uniform matroid case since in $\smd$, every set of size at most $d-2$ is still flat, since every set of size $d-1$ is independent. The term of degree one comes from summing $\mu(\hat{\textbf{0}}, F)$ for flats $F$ of rank $d-1$. Recall that these flats are the elements of $\mathcal{ CH}$ and all elements of ${[m+d]\choose d-1}$ not contained in any member of $\mathcal{ CH}$. When $F$ is one of the latter described flats, it follows from the uniform case that $\mu(\hat{\textbf{0}}, F)=(-1)^{d-1}$. Note that the number of such flats is ${m+d\choose d-1}-c{d\choose d-1}$, since the symmetric difference condition on $\mathcal{ CH}$ implies that $|C_i\cap C_j|\leq d-2$ for all $C_i,C_j\in \mathcal{ CH}$. That is to say that no set of size $d-1$ is contained in two elements of $\mathcal{ CH}$. Otherwise, if $C\in \mathcal{ CH}$, \begin{align*} \mu(\hat{\textbf{0}},C)&=-\sum_{\hat{\textbf{0}}\leq F<C}\mu(\hat{\textbf{0}},F)\\ &=-\sum_{i=0}^{d-2}(-1)^i{m+d\choose i}\\ &=(-1)^d+d(-1)^{d-1}. \end{align*} Thus the coefficient linear term for $\chi$ is given by \begin{align*} c(-1)^d+c d(-1)^{d-1}+(-1)^{d-1}\left({m+d\choose d-1}-c{d\choose d-1}\right)=(-1)^{d-1}{m+d\choose d-1}-c(-1)^{d-1}. \end{align*} For the constant term, it is equivalent to negate the sum over $\mu(\hat{\textbf{0}}, F)$ for all flats $F\neq [m+d]$. This gives \begin{align*} -\sum_{i=0}^{d-2} (-1)^i{m+d\choose i}-(-1)^{d-1}{m+d\choose d-1}-c(-1)^d&=-\sum_{i=0}^{d-1} (-1)^i{m+d\choose i}-c(-1)^d\\ &=(-1)^d{m+d-1\choose d-1}-c(-1)^d. \qedhere \end{align*} \end{proof} It will be helpful to restate this proposition in the following way for when we prove Theorem \ref{thm:main}. \begin{prop}\label{prop:charpoly_restated} (Proposition \ref{prop:charpoly} restated.) \[ [t^i]\chi_{\smd} = \begin{cases} (-1)^d{m+d-1\choose d-1}-c(-1)^d & i=0\\ (-1)^{d-1}{m+d\choose d-1}-c(-1)^{d-1} & i=1\\ (-1)^{d-i}{m+d\choose d-i} & 2\leq i\leq d \end{cases}\] \end{prop} \section{The Kazhdan-Lusztig Polynomials for Sparse Paving Matroids}\label{sec:KL for sparse paving} This section is dedicated to justifying the combinatorial formula given in Theorem \ref{thm:main}. We restate this part here for convenience, as its own Theorem. \begin{theorem}\label{thm:combformula} Let $\coeff$ be the $i$-th coefficient for the Kazhdan-Lusztig polynomial for the sparse paving matroid $\smd$. Then \[{\coeff}=\operatorname{skyt}(m+1,i,d-2i+1)-|\mathcal{ CH}|\cdot \operatorname{\overline{skyt}}(i,d-2i+1).\] \end{theorem} \begin{rem}\label{rem:conventions} For some values of $m$, $d$, and $i$, we need to use our conventions set in place for $\operatorname{skyt}(a,i,b)$ and $\operatorname{\overline{skyt}}(a,i,b)$ in section \ref{sec:SKYT} for our formula to truly work. \begin{itemize} \item \cite[Proposition 2.11]{firstklpolys} shows that the degree 0 term always has coefficient 1. That is, when $i=0$, our formula must always return 1. 
\item When $d=0$ we are forced to have $P_{\smd}(t)=1$. \item When $0<d<3$, the degree requirement on Kazhdan-Lusztig polynomials forces $P_{\smd}(t)$ to have degree 0. Namely, in this case, we have $P_{\smd}(t)=1$, again by \cite[Proposition 2.11]{firstklpolys}. \item When $m=0$, note that $\mathcal{ CH}$ is forced to be empty and $\smd$ becomes $U_{0,d}$. It is shown in \cite[Proposition 2.7]{firstklpolys} that $P_{M_1\oplus M_2}(t)=P_{M_1}(t)P_{M_2}(t)$ for matroids $M_1$ and $M_2$. With this, one can verify that $P_{U_{0,d}}(t)=1$ by seeing that $P_{U_{0,1}}(t)=1$ based on the $d<3$ discussion above. \end{itemize} In all cases, our conventions guarantee we get the right values. Besides these cases, our conventions are not needed for our formula, and we are guaranteed that $\smd$ has more interesting structure than that of the Boolean lattice. The following technical result will be crucial in demonstrating why the formula given in Theorem \ref{thm:combformula} only depends on $|\mathcal{ CH}|$, and not on the relationship between the elements of $\mathcal{ CH}$. \begin{lemma}\label{lem:simplification} Let $c,i\in \mathbb N\cup\{0\}$. For $I\subseteq [c]$, let $x_I$ be a variable. Let $g(k)$ and $h(k)$ be functions varying in $k$. Then \[-\sum_{\substack{J\subseteq [c] \\ |J|\geq 2 }} (-1)^{|J|} x_J\sum_{k=0}^i g(k)=\sum_{\substack{\emptyset\subsetneq I\subseteq [c] }}\sum_{\substack{I\subseteq J\subseteq [c] \\ |J|\geq 2 }} (-1)^{|J|-|I|} x_J\sum_{k=0}^i (\ g(k)-|I|h(k)\ ).\] \end{lemma} \begin{proof} We show that the term with $x_J$ on both sides of the statement of the lemma is the same for every $J\subseteq [c]$, where $|J|\geq 2$. We start with the coefficient of $x_J$ on the right side. We note that the terms with $x_J$ appear for each $I$ that is contained in $J$, where $|I|\geq 1$. Hence, the term with $x_J$ on the right hand side of the statement of the lemma is \begin{align*} &\sum_{\ell=1}^{|J|} (-1)^{|J|-\ell}x_J{|J| \choose \ell} \sum_{k=0}^i (\ g(k)-\ell h(k)\ )\\ &=x_J(-1)^{|J|}\sum_{\ell=1}^{|J|} {|J| \choose \ell}(-1)^{\ell}\sum_{k=0}^i (\ g(k)-\ell h(k)\ )\\ &=x_J(-1)^{|J|}\left(\sum_{k=0}^i g(k)\sum_{\ell=1}^{|J|} (-1)^{\ell}{|J| \choose \ell}-\sum_{k=0}^i h(k) \sum_{\ell=1}^{|J|} (-1)^{\ell}\ell{|J|\choose \ell} \right)\\ &=x_J(-1)^{|J|}\left(-\sum_{k=0}^i g(k)\right), \end{align*} since we know in general we have the identities $\displaystyle \sum_{\ell=0}^n (-1)^\ell{n\choose \ell}=0$ for $n\geq 1$ and $\displaystyle\sum_{\ell=0}^n(-1)^\ell {n\choose \ell}\ell=0$ for $n\geq 2$. Note that on the left hand side, $x_J$ appears exactly once, and the corresponding term is $\displaystyle-x_J(-1)^{|J|}\sum_{k=0}^i g(k)$, which agrees with the expression above. \end{proof} We now prove the desired formula for $\coeff$. \begin{proof}[Proof of Theorem \ref{thm:combformula}.] Let $M:=\smd$, and set $c:=|\mathcal{ CH}|$. Recall that the definition of the Kazhdan-Lusztig polynomial is that it satisfies the following recurrence, \[\displaystyle t^{\operatorname{rk} M}P_M(t^{-1})=\sum_{F \text{ a flat}} \chi_{M^F}(t)P_{M_F}(t),\] which may be rewritten as \[\displaystyle t^{\operatorname{rk} M}P_M(t^{-1})-P_M(t)=\sum_{F \text{ a non-empty flat}} \chi_{M^F}(t)P_{M_F}(t).\] Recall that $\deg P_M(t)<{1\over 2} d$, and so the power of each monomial in $t^{d}P_M(t^{-1})$ is strictly larger than ${1\over 2}d$.
Hence, our goal is to show that for $0\leq i< {1\over 2} d$ we have \begin{align}\label{eq:step1} -\operatorname{skyt}(m+1,i,d-2i+1)+c\cdot \operatorname{\overline{skyt}}(i,d-2i+1)=[t^i]\sum_{F \text{ a non-empty flat}} \chi_{M^F}(t)P_{M_F}(t). \end{align} Using our work from Proposition \ref{prop:rest_local_orum}, and consolidating common factors involving the various flats in $\mathcal{ CH}$, we can rewrite the right of equation \eqref{eq:step1} to be \begin{align}\label{eq:step2} [t^i]\chi_{\smd}+c[t^i]\chi_{U_{1,d-1}}P_{U_{m-1,1}}+\sum_{\substack{\emptyset\subsetneq F\subsetneq C\\\text{For some $C\in \mathcal{ CH}$}}}[t^i]\chi_{U_{0,|F|}}P_{\smdtemp{m}{d-|F|}{\mathcal{ CH}(F)}}+\sum_{\substack{\emptyset\subsetneq F\subsetneq [m+d]\\ F\nsubseteq C\ \forall C\in \mathcal{ CH}}}[t^i]\chi_{U_{0,|F|}}P_{{U_{m,d-|F|}}}, \end{align} where the first term corresponds to the case where $F=[m+d]$, and the second where $F\in \mathcal{ CH}$. By Proposition \ref{prop:charpoly_restated}, we are required to break this up into three case: $i=0$, $i=1$, and $2\leq i<d/2$ if we are to write this out explicitly. Note we can write everything explicitly except $P_{{\smdtemp{m}{d-|F|}{\mathcal{ CH}(F)}}}$. Hence, we proceed by induction on the matroid rank $d$, noting that $d>d-|F|$ since for the corresponding summand $F$ is never empty. We now define some notation in order to rewrite the summations appearing in \eqref{eq:step2}. Let $I\subseteq [c]$ and $C_i\in \mathcal{ CH}$. We define $\displaystyle C_I:=\bigcap_{i\in I}C_i$ and denote $c_I:=|C_I|$. By convention, $C_\emptyset=[m+d]$. Recall that $\mathcal{ CH}(F):=\{C\setminus F: C\in \mathcal{ CH} \text{ such that } F\subseteq C\}$. Let $j$ be an integer and define the following sum indexed by $J$: \[\Phi_j(I):=\sum_{I\subseteq J \subseteq [c]}(-1)^{|J|-|I|}{c_J\choose j}.\] If $j$ is selected appropriately, $\Phi_j(I)$ counts the number of flats of rank $j$ contained in $C_I$, but not in any $C_J$ so that $C_J\subseteq C_I$. Hence, $F$ is a flat counted by $\Phi_j(I)$ if and only if $\mathcal{ CH}(F)=\{C_i\setminus F: i\in I\}$. What we will leverage from this is that $|\mathcal{ CH}(F)|=|I|$. We can now rewrite equation \eqref{eq:step2}. We use the Kronecker delta function $\delta(i,j)=\begin{cases} 1& i=j\\ 0& i\neq j\\\end{cases}$ to combine the cases for $i=1$ and $2\leq i<d/2$. \begin{enumerate} \item[$i=0$:] \begin{align*} &(-1)^d{m+d-1\choose d-1}-c(-1)^{d}+c(-1)^{d-1}{d-1\choose d-2}\\ &+\sum_{j=1}^{d-2}\sum_{\emptyset\subsetneq I \subseteq [c]}\Phi_j(I)(-1)^j(\operatorname{skyt}(m+1,0,d-j+1)-|I|\cdot\operatorname{\overline{skyt}}(0,d-j+1))\\ &+\sum_{j=1}^{d-1}\Phi_j(\emptyset)(-1)^j\operatorname{skyt}(m+1,0,d-j+1) \end{align*} \item[$i>0$:]\begin{align*} &(-1)^{d-i}{m+d-1\choose d-i}-c(-1)^{d-1}\delta(i,1)+c(-1)^{d-1-i}{d\choose d-1-i}\\ &+\sum_{j=1}^{d-2}\sum_{\emptyset\subsetneq I \subseteq [c]}\Phi_j(I)\sum_{k=0}^i(-1)^{j-i+k}{j\choose j-i+k}(\operatorname{skyt}(m+1,k,d-j-2k+1)-|I|\operatorname{\overline{skyt}}(k,d-j-2k+1))\\ &+\sum_{j=1}^{d-1}\Phi_j(\emptyset)\sum_{k=0}^i(-1)^{j-i+k}{j\choose j-i+k}\operatorname{skyt}(m+1,k,d-j-2k+1) \end{align*} \end{enumerate} In both cases, the sum running from $j=1$ to $j=d-2$ is the summand in equation \eqref{eq:step2} over $\emptyset\subsetneq F\subsetneq C$ for $C\in \mathcal{ CH}$, since the flats contained in $C$ have size at most $d-2$. 
The other sum running from $j=1$ to $j=d-1$ corresponds to the summand in equation \eqref{eq:step2} over $\emptyset\subsetneq F\subsetneq [m+d]$ such that $F\nsubseteq C$ for all $C\in \mathcal{ CH}$. To simplify things further, first, note that \[\displaystyle\Phi_{d-1}(\emptyset)={m+d\choose d-1}-c{d\choose d-1}.\] By construction, $\Phi_{d-1}(\emptyset)$ counts the rank $d-1$ flats contained in no element of $\mathcal{ CH}$. Recall that the only rank $d-1$ flats are those not contained in any circuit-hyperplane. Next, note that many terms from the two sums running over $j$ in both the $i=0$ and $i>0$ case will cancel as a result of Lemma \ref{lem:simplification}. Fix $j\leq d-2$ and suppose $J\subseteq [c]$. Set $\bullet$ $\displaystyle x_J:={c_J\choose j}$, $\bullet$ $\displaystyle g(k):=(-1)^{j-i+k}{j \choose j-i+k}\operatorname{skyt}(m+1,k,d-j-2k+1)$, and $\bullet$ $\displaystyle h(k):=(-1)^{j-i+k}{j\choose j-i+k}\operatorname{\overline{skyt}}(k,d-j-2k+1)$. \noindent This allows us to rewrite our two cases in the following way. \begin{enumerate} \item[$i=0$:] \begin{align*} &(-1)^d{m+d-1\choose d-1}-c(-1)^{d}+c(-1)^{d-1}{d-1\choose d-2}\\ &+\sum_{j=1}^{d-2}\sum_{\emptyset\subsetneq I \subseteq [c]} \sum_{I\subseteq J \subseteq [c]}(-1)^{|J|-|I|}x_J(\ g(0)-|I|h(0)\ )\\ &+\sum_{j=1}^{d-1}\sum_{\emptyset\subseteq J \subseteq [c]}(-1)^{|J|}x_J g(0) \end{align*} \item[$i>0$:]\begin{align*} &(-1)^{d-i}{m+d\choose d-i}-c(-1)^{d-1}\delta(i,1)+c(-1)^{d-1-i}{d\choose d-1-i}\\ &+\sum_{j=1}^{d-2}\sum_{\emptyset\subsetneq I \subseteq [c]}\sum_{I\subseteq J \subseteq [c]}(-1)^{|J|-|I|}x_J\sum_{k=0}^ig(k)-|I|h(k)\\ &+\sum_{j=1}^{d-1}\sum_{\emptyset\subseteq J \subseteq [c]}(-1)^{|J|}x_J\sum_{k=0}^ig(k) \end{align*} \end{enumerate} The following argument works for both the $i=0$ and $i>0$ case, so we speak of both simultaneously as if they were one. Let $A$ correspond to the sum indexed by $j$ where $j$ is at most $d-2$. Likewise define $B$ to be the sum indexed by $j$ where $j$ is at most $d-1$. By Lemma \ref{lem:simplification}, the terms where $|J|\geq 2$ in $A$ will cancel all terms where $|J|\geq 2$ in $B$. What remains in $A$ are the terms where $|J|=1$, that is, the terms where $J=I$ and $|I|=1$. There are $c$ such terms, each contributing ${d\choose j}$, as the members of $\mathcal{ CH}$ have cardinality $d$. For $B$, when $j\leq d-2$, the only terms that remain are those where $|J|$ equals 0 or 1. This gives $c+1$ terms: one contributing ${m+d\choose j}$, and $c$ terms contributing $-{d\choose j}$. Combining this with our identity for $\Phi_{d-1}(\emptyset)$ given above, we get the following simplification. \begin{enumerate} \item[$i=0$:] \begin{align*} &(-1)^d{m+d-1\choose d-1}-c(-1)^{d}+c(-1)^{d-1}{d-1\choose d-2}\\ &+c\sum_{j=1}^{d-2}{d\choose j}(-1)^j(\operatorname{skyt}(m+1,0,d-j+1)-\operatorname{\overline{skyt}}(0,d-j+1))\\ &+\sum_{j=1}^{d-1}\left({m+d\choose j}-c{d\choose j}\right)(-1)^j\operatorname{skyt}(m+1,0,d-j+1) \end{align*} \item[$i>0$:]\begin{align*} &(-1)^{d-i}{m+d\choose d-i}-c(-1)^{d-1}\delta(i,1)+c(-1)^{d-1-i}{d\choose d-1-i}\\ &+c\sum_{j=1}^{d-2}{d\choose j}\sum_{k=0}^i(-1)^{j-i+k}{j\choose j-i+k}(\operatorname{skyt}(m+1,k,d-j-2k+1)-\operatorname{\overline{skyt}}(k,d-j-2k+1))\\ &+\sum_{j=1}^{d-1}\left({m+d\choose j}-c{d\choose j}\right)\sum_{k=0}^i(-1)^{j-i+k}{j\choose j-i+k}\operatorname{skyt}(m+1,k,d-j-2k+1). \end{align*} \end{enumerate} We now point out that remarkably, this formula no longer depends on the structure of $\mathcal{ CH}$, only the size. 
Hence, the proof proceeds as in the case of Theorem 3 in \cite{rhoremoved}. \end{proof} \section{Bounds on $|\mathcal{ CH}|$} \label{sec:bounds} Our proof of the non-negativity part of Theorem \ref{thm:main} will be purely computational. Hence, since $|\mathcal{ CH}|$ is a part of our formula, having bounds on this value will be useful. We will give two particularly important bounds. The first bound is given as follows. \begin{theorem}\label{thm:codingbound} \[|\mathcal{ CH}|\leq {1\over m+1}{m+d\choose d}.\] \end{theorem} This can be recovered in multiple settings. One can find an outline of a matroid theory argument in \cite[Lemma 2.7]{sparsebasispaper}. However, this bound also happens to be a standard coding theory result. Recall that for $\smd$, the set of circuit-hyperplanes $\mathcal{ CH}$ is a subset of ${[m+d]\choose d}$ in which any pair has symmetric difference at least 4. One could equivalently describe such a set as a binary constant-weight code with Hamming distance 4. In this context, the bound in Theorem \ref{thm:codingbound} gives a bound on the size of a code with these conditions, as shown in \cite[Theorem 12]{codingbounds}. In fact, \cite{codingbounds} proves a more general bound accounting for any lower bound on the symmetric difference, not just 4. It is also worth noting that the proofs for this bound given in \cite{codingbounds} and \cite{sparsebasispaper} are in fact different, even when both are phrased in the language of matroid theory. While this bound will prove useful, there will be times where it will not be sufficient for our purposes. Unlike the prior bound, we found no literature to support the bound that follows. \begin{theorem}\label{thm:jamiesbound} \[|\mathcal{ CH}|\leq {2\over m+d+2}{m+d\choose d}.\] \end{theorem} \begin{rem} These two bounds have an interesting relationship. First, observe that \[ {1\over m+1}{m+d\choose d}> {2\over m+d+2}{m+d\choose d} \text{ if and only if } d>m.\] A take-away here is that both bounds are necessary to get a good bound for $|\mathcal{ CH}|$. Excitingly, when $m=d$, not only do these bounds agree, but they equal the $m$th Catalan number $C_m$, where \[C_m={1\over m+1}{2m\choose m}.\] \end{rem} To prove Theorem \ref{thm:jamiesbound}, we will utilize a graph theory technique known as \textit{discharging}. First, though, it is necessary to make clear the connection between sparse paving matroids and graphs. Let $J(n,d)$ be the graph with vertex set ${[n]\choose d}$, where vertices are adjacent if and only if their symmetric difference has size 2. This graph is best known as the \textit{Johnson graph}. The symmetric difference condition on $\mathcal{ CH}$ implies that $\mathcal{ CH}$ is an independent set in $J(m+d,d)$, that is, a set of vertices with no edges between them. So finding an upper bound on $|\mathcal{ CH}|$ is equivalent to bounding the size of an independent set in $J(m+d,d)$. We give some final graph theory conventions before providing the proof of Theorem \ref{thm:jamiesbound}. Let $A$ and $B$ be vertices in $J(n,d)$. To indicate that $A$ and $B$ are \textit{adjacent} we write $A\sim B$. When an edge has vertex $A$ as an endpoint, we say that edge is \textit{incident} to $A$. By $N(A)$ we mean the induced graph on the vertices adjacent to $A$ in $J(n,d)$. That is, $N(A)$ is the subgraph of $J(n,d)$ where for all vertices $B,C\in N(A)$, we have $B\sim C$ in $N(A)$ if and only if $B \sim C$ in $J(n,d)$.
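Because the bound of Theorem \ref{thm:jamiesbound} can be checked by hand only for the smallest Johnson graphs, we include a small brute-force sketch that compares the maximum size of an independent set in $J(n,d)$ against $\frac{2}{n+2}\binom{n}{d}$ for tiny parameters. The code below is only an illustrative check we wrote for this purpose (it is not part of the proof), and it is feasible only for very small $n$.
\begin{verbatim}
import itertools
from math import comb

def johnson_graph(n, d):
    # Vertices are the d-subsets of [n]; edges join pairs whose
    # symmetric difference has size 2.
    V = [frozenset(c) for c in itertools.combinations(range(n), d)]
    E = {(i, j) for i in range(len(V)) for j in range(i + 1, len(V))
         if len(V[i] ^ V[j]) == 2}
    return V, E

def max_independent_set_size(V, E):
    # Brute force over all vertex subsets; only feasible for very small graphs.
    best = 0
    for mask in range(1 << len(V)):
        chosen = [i for i in range(len(V)) if (mask >> i) & 1]
        if all((i, j) not in E for i, j in itertools.combinations(chosen, 2)):
            best = max(best, len(chosen))
    return best

for n, d in [(4, 2), (5, 2), (6, 2)]:
    V, E = johnson_graph(n, d)
    alpha = max_independent_set_size(V, E)
    print(n, d, alpha, 2 * comb(n, d) / (n + 2))  # alpha never exceeds the bound
\end{verbatim}
For instance, $J(4,2)$ is the octahedron: its largest independent set has size 2, which matches the bound $\frac{2}{6}\binom{4}{2}=2$ exactly.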
\textit{Proof of Theorem \ref{thm:jamiesbound}.} { Let $I \subseteq \binom{[n]}d$ be an independent set of vertices in $J(n,d)$. We will describe an assignment of weights to edges of $J(n,d)$ based on $I$. Start with a weight of 0 on all edges of $J(n,d)$. If $A\in I$ we add a weight of $1$ to each edge incident with $A$. Furthermore, $A$ adds a weight of $1/2$ to all edges in $N(A)$. Note that there are $d(n-d)$ vertices of $N(A)$ since every neighbor $B$ of $A$ is specified uniquely by $B = (A \wo \set{a_B}) \cup \set{x_B}$ where $a_B \in A$ and $x_B\in A^{c}$. Two vertices $B,C\in N(A)$ are adjacent iff $a_B=a_C$ or $x_B=x_C$. This implies that the graph induced on $N(A)$ is regular of degree $d-1+(n-d-1)=n-2$. Thus $A$ assigns a total weight of \[ w = d(n-d) + \frac12 \cdot \frac{ d(n-d)(n-2)}2 = d(n-d)\parens[\Big]{1 + \frac{n-2}4} \] to edges of the graph. We will now show that no edge of $J(n,d)$ receives a total weight of more than $1$ from this assignment. First, note that no edge is incident with two elements of $I$, for they would be adjacent. Similarly, if an edge is incident with $A\in I$ it cannot also be an edge in $N(A')$ for any $A'\in I$ for then we would have $A \sim A'$, a contradiction. Thus it only remains to prove that if $AB$ is an edge then there exist at most two elements $A'$ of $I$ that have $A,B\in N(A')$. Let us consider what common neighbors of $A$ and $B$ look like. We know that $C=A\cap B$ has size $d-1$ and for some $x,y\in [n]$ we have $A = C\cup\set{x}$ and $B = C\cup \set{y}$. Consider now $A' \in N(A)\cap N(B)$. If $C\subseteq A'$ then $A' = C\cup {z}$ for some $z\neq x,y$ in $C^{c}$. We call such common neighbors \emph{type $1$}. Now if a neighbor $A'$ of $A$ is not of type $1$ then it has the form $(C\wo \set{c}) \cup \set{x,z}$ for some $c\in C$ and $z\not\in A$. But the only way such a set can also be a neighbor of $B$ is to have $z=y$. Thus all other common neighbors of $A$ and $B$ are \emph{type $2$} common neighbors: those of the form $(C\wo \set{c}) \cup\set{x,y}$. Now we simply note that the type $1$ common neighbors of $A$ and $B$ are all pairwise adjacent to one-another in $J(n,d)$, as are the type $2$ common neighbors. That means at most one type 1 neighbor and at most one type 2 neighbor may be in $I$. Thus the edge $AB$ receives a weight of $1/2$ from at most one type $1$ common neighbor, and weight $1/2$ from at most one type $2$ common neighbor, for a total weight of at most $1$. Now we simply compute as follows. Each member of the independent set $I$ assigns total weight $w$ to the edges of $J(n,d)$, and each edge of $J(n,d)$ receives total weight at most $1$ from the elements of $I$, so \begin{align*} \abs{I}\, w = \abs{I}\, d(n-d)\parens[\Big]{1 + \frac{n-2}4} &\le \binom{n}d \frac{d(n-d)}2 = e(J(n,d)), \\ \shortintertext{thus} \abs{I} \,\parens[\Big]{1 + \frac{n-2}4} &\le \binom{n}d \frac12 \\ \abs{I} \,(n+2) &\le 2 \binom{n}d \\ \abs{I} &\le \frac{2}{n+2} \binom{n}d. \end{align*} \hfill $\square$ } \section{Non-Negativity for Sparse Paving Matroids}\label{sec:positivity} With the formula for Theorem \ref{thm:main} proven, we now move to showing that this formula is always non-negative. When $\mathcal{ CH}$ is a disjoint family, this formula has a manifestly positive interpretation, as stated in the introduction of this paper. More details can be found in \cite{rhoremoved}. Otherwise, for more general cases of sparse paving matroids, we are not yet able to give a manifestly non-negative expression. 
Instead, we show directly that our formula from Theorem \ref{thm:main} is non-negative by relying on the bounds given in section \ref{sec:bounds} for $|\mathcal{ CH}|$, our formulas for $\operatorname{skyt}(a,i,b)$ and $\operatorname{\overline{skyt}}(i,b)$ given in section \ref{sec:SKYT}, and some standard algebra and calculus tools. The details of this proof are rather technical and require a few cases, so the proof serves more as an outline, leaving most of the work to separate lemmas and propositions. Throughout the proofs of this section, we use the falling factorial $(x)_{(n)}:=x(x-1)\cdots(x-n+1)$. We will also regularly use the fact that $\deg P_M(t)<{1\over 2}\operatorname{rk} M$. That is, if $d$ is the rank of a matroid $M$, and $i$ is the power of some term in the Kazhdan-Lusztig polynomial $P_M(t)$, then we must have $i<d/2$. \begin{theorem} Let $\coeff$ be the $i$-th Kazhdan-Lusztig coefficient for a sparse paving matroid $\smd$. Then \[\coeff\geq 0.\] \end{theorem} \begin{proof} We are able to take care of most of the cases simultaneously. Since \[\coeff=\operatorname{skyt}(m+1,i,d-2i+1)-|\mathcal{ CH}|\cdot \operatorname{\overline{skyt}}(i,d-2i+1)\] by Theorem \ref{thm:combformula} and \(|\mathcal{ CH}|\leq {2\over m+d+2}{m+d\choose d}\) by Theorem \ref{thm:jamiesbound}, we have \[\coeff\geq \operatorname{skyt}(m+1,i,d-2i+1)-{2\over m+d+2}{m+d\choose d}\cdot \operatorname{\overline{skyt}}(i,d-2i+1).\] Then by Lemma \ref{lem_pos_i_m_arb}, this expression is non-negative for $i\geq 3$, $m\geq 3$, and for all possible $d$, that is, for $d>2i$. This leaves a small number of more specific cases, which need to be addressed independently. We first note that the cases for $m=0$ and $i=0$ are taken care of by Remark \ref{rem:conventions}. When $m=1$, notice that any pair of elements of ${[m+d]\choose d}$ has symmetric difference 2, and so $|\mathcal{ CH}|\leq 1$. In this case our desired result is immediate since, by definition, we may view $\operatorname{\overline{Skyt}}(i,d-2i+1)$ as a subset of $\operatorname{Skyt}(2,i,d-2i+1)$. When $m=2$, it is necessary to find a better bound on the size of $\mathcal{ CH}$. It is not too much work to show that $|\mathcal{ CH}|\leq {d+2\over 2}$ by using the symmetric difference condition on $\mathcal{ CH}$. It is easier to work with the complements of the elements in $\mathcal{ CH}$, which are elements of ${[d+2]\choose 2}$. Then it is equivalent in this case to count the size of the largest disjoint family in ${[d+2]\choose 2}$. So in the case of $m=2$ we have \[ \coeff\geq \operatorname{skyt}(m+1,i,d-2i+1)-{d+2\over 2}\cdot \operatorname{\overline{skyt}}(i,d-2i+1),\] and so to prove our desired result in this case we need only prove \[\operatorname{skyt}(m+1,i,d-2i+1)-{d+2\over 2}\cdot \operatorname{\overline{skyt}}(i,d-2i+1)\geq 0.\] We do this for $i\geq 1$, leaving the details to Lemma \ref{lem_m_2}. Now we move on to the remaining values of $i$, noting we need only show them for $m\geq 3$. When $i=1$, one can get the following closed formula for $\operatorname{skyt}(m+1,i,d-2i+1)$. We get \[\operatorname{skyt}(m+1,1,d-1)={m+d\choose d-1}-m-d\] by Proposition \ref{prop_closedform_ieq1}. Also, note that $\operatorname{\overline{skyt}}(1,d-1)=d-1$, which can be seen by using Lemma \ref{lem:manifposbarskyt}, or by simply observing that only the numbers in $\{2,3,4,\dots,d\}$ may appear below the position containing 1 in an element of $\operatorname{\overline{Skyt}}(1,d-1)$. It is also important to note that when $i=1$, $d\geq 3$.
Then to get our desired result in this case, we can combine Theorem \ref{thm:combformula} and Theorem \ref{thm:codingbound} and instead show that \[{m+d\choose d-1}-m-d-{1\over m+1}{m+d\choose d}(d-1)\geq 0.\] Lemma \ref{lem_pos_i1} is able to show this for $d\geq 3$ when $m\geq 4$, but only for $d\geq 4$ when $m=3$. This leaves the case when $m=3$ and $d=3$ to be done explicitly. Note that \[\operatorname{skyt}(4,1,2)=9\] and \[\operatorname{\overline{skyt}}(1,2)=2,\] which can be easily verified by any of our formulas from section \ref{sec:SKYT}, or by hand. Then non-negativity follows from the fact that in the special case of $m=d=3$, we can guarantee $|\mathcal{ CH}|\leq 4$, which one can verify via a constructive argument. When $i=2$, we can use a strategy similar to the one we used for the $i\geq 3$ and $m\geq 3$ case described in Lemma \ref{lem_pos_i_m_arb}. However, a bit more work is involved here, and so we leave the details of this final case to Lemma \ref{lem_pos_i2}. \end{proof} \begin{rem} In the case of $m=d=3$, it is worth noting that finding the bound $|\mathcal{ CH}|\leq 4$ was necessary. Both bounds for $|\mathcal{ CH}|$ given by Theorem \ref{thm:codingbound} and Theorem \ref{thm:jamiesbound} give $|\mathcal{ CH}|\leq 5$, and $9-5\cdot 2=-1$. So in this special case, we needed to get a better bound on $|\mathcal{ CH}|$ than what either of our two bounds could provide. \end{rem} \begin{lemma}\label{lem_pos_i_m_arb} Let $i$ and $m$ both be at least 3. Then \[ \operatorname{skyt}(m+1,i,d-2i+1)-{2\over m+d+2}{m+d\choose d}\operatorname{\overline{skyt}}(i,d-2i+1) \geq 0.\] \end{lemma} \begin{proof} One can rewrite the sum in Lemma \ref{lem:manifposskyt} using Remark \ref{rem:rewriting}. After doing this, letting $a=m+1$ and $b=d-2i+1$, the $k=0$ term in the formula for $\operatorname{skyt}(m+1,i,d-2i+1)$ is \begin{align*} A:&={m+i-1\choose i}{m+d \choose d-i}{(d-i-2)_{(d-2i-1)}(m+d-i)_{(d-2i-1)}\over (d-2i-1)!(m+d-i)_{(d-2i)}}\\ &={(m+i-1)!(m+d)!(d-i-2)_{(d-2i-1)}(m+d-i)_{(d-2i-1)}\over i!(m-1)!(d-i)!(m+i)!(d-2i-1)!(m+d-i)_{(d-2i)}}\\ &={(m+d)!(d-i-2)_{(d-2i-1)}\over i!(m-1)!(d-i)!(m+i)(d-2i-1)!(m+i+1)}. \end{align*} Utilizing Lemma \ref{lem:manifposbarskyt}, we have \begin{align*} B:&={2\over m+d+2}{m+d\choose d}\operatorname{\overline{skyt}}(i,d-2i+1)\\ &={4(m+d)!\over m!(m+d+2)(i+1)!(i-1)!(d-2i-1)!(d-i+1)(d-i-1)}. \end{align*} Note that \[\operatorname{skyt}(m+1,i,d-2i+1)-{2\over m+d+2}{m+d\choose d}\operatorname{\overline{skyt}}(i,d-2i+1)\geq A-B,\] so it suffices to show $A-B\geq 0$. Recall that $i< d/2$. Put another way, this says that $d-i>i>i-1$. Hence, we may combine $A-B$ in the following way. \begin{align*} A-B&=A{m(i+1)(m+d+2)(d-i+1)(d-i-1)\over m(i+1)(m+d+2)(d-i+1)(d-i-1)}-B{(d-i)_{(d-2i+1)}(m+i)(m+i+1)\over (d-i)_{(d-2i+1)}(m+i)(m+i+1)}\\ &={(m+d)!p(m,i,d)\over m!(m+d+2)(i+1)!(d-i)!(m+i)(m+i+1)(d-i+1)(d-i-1)(d-2i-1)!} \end{align*} where \[p(m,i,d)=(d-i-2)_{(d-2i-1)}m(i+1)(m+d+2)(d-i+1)(d-i-1)-4(d-i)_{(d-2i+1)}(m+i)(m+i+1).\] Hence, it suffices to show that $p(m,i,d)\geq 0$. We can, in fact, reduce the problem further by simplifying $p(m,i,d)$. Observe that \[p(m,i,d)=(d-i-1)_{d-2i}[m(i+1)(m+d+2)(d-i+1)-4(m+i)(m+i+1)(d-i)],\] so it now suffices to show \[q(m,i,d):=m(i+1)(m+d+2)(d-i+1)-4(m+i)(m+i+1)(d-i)\geq 0.\] We show this for $m,i\geq 3$ by viewing $q$ as a function of $m$. The desired result follows from the following three claims for $q$ as a function of $m$.
\begin{enumerate} \item $q$ is quadratic and concave up; \item the critical point of $q$ is negative; and \item $q(m,i,d)\geq 0$ for $m=3$. \end{enumerate} Showing these are elementary exercises in algebra and calculus, so we just highlight the important parts. For claim (1), note that the coefficient of $m^2$ in $q(m,i,d)$ is $(i+1)(d-i+1)-4(d-i)$, and that we assume $d>2i$ and $i\geq 3$. Since $i+1\geq 4$, we have $(i+1)(d-i+1)\geq 4(d-i+1)>4(d-i)$, so this coefficient is positive. For claim (2), it suffices to show the coefficient of $m$ in $q(m,i,d)$ is positive. This coefficient is \[(i+1)(d+2)(d-i+1)-4(i+1)(d-i)-4i(d-i).\] Using the fact that $d>2i$, one can show this is an increasing function in $d$ and is non-negative when $d=2i$. For claim (3), it suffices to show $q(3,i,d)$ is an increasing function in $d$ and that $q(3,i,2i)$ is non-negative. This works out similarly to claim (2). \end{proof}
\begin{lemma}\label{lem_m_2} Let $i\geq 1$ and $m=2$. Then \[ \operatorname{skyt}(m+1,i,d-2i+1)-{d+2\over 2}\operatorname{\overline{skyt}}(i,d-2i+1) \geq 0.\] \end{lemma}
\begin{proof} As in Lemma \ref{lem_pos_i_m_arb}, keeping in mind that $m=2$, set \begin{align*} A:={(d+2)!(d-i-2)_{(d-2i-1)}\over i!(d-i)!(i+2)(d-2i-1)!(i+3)} \end{align*} and \begin{align*} B:&={d+2\over 2}\operatorname{\overline{skyt}}(i,d-2i+1)\\ &={d!(d+2)\over (i+1)!(i-1)!(d-2i-1)!(d-i+1)(d-i-1)}. \end{align*} It follows from the proof of Lemma \ref{lem_pos_i_m_arb} that $\operatorname{skyt}(m+1,i,d-2i+1)\geq A$ for $m=2$, and so the desired result follows if we show $A-B\geq 0$. Observe that \begin{align*} A-B&=A{(i+1)(d-i+1)(d-i-1)\over (i+1)(d-i+1)(d-i-1)}-B{(i+2)(i+3)(d-i)_{(d-2i+1)}\over (i+2)(i+3)(d-i)_{(d-2i+1)}}\\ &={ d!(d+2)p(i,d)\over (i+3)!(d-i)!(d-2i-1)! (d-i+1)(d-i-1)}, \end{align*} where \[p(i,d):=(d-i-2)_{(d-2i-1)}(d+1)(i+1)(d-i+1)(d-i-1)-(i+2)(i+3)(d-i)_{(d-2i+1)}.\] Hence, it suffices to show that $p(i,d)$ is non-negative. One can factor $p(i,d)$ to reduce the problem further: \[p(i,d)=(d-i-1)_{(d-2i)}[(d+1)(i+1)(d-i+1)-(i+2)(i+3)(d-i)],\] and so it suffices to show that \[q(i,d):=(d+1)(i+1)(d-i+1)-(i+2)(i+3)(d-i)\] is non-negative. Since in the context of Kazhdan-Lusztig polynomials we have $d>2i$, we may set $d=2i+j$ for $j\geq 1$. Then $q(i,2i+j)$ is quadratic in $j$ and we have the following values of $[j^\ell]q(i,2i+j)$: \begin{align*} [j^2]q(i,2i+j)&=i+1\\ [j^1]q(i,2i+j)&=2i^2-4\\ \text{Remaining terms}&\text{: }i^3-2i+1\\ \end{align*} When $i\geq 2$, all three values are individually positive. If $i=1$, then \[q(1,j+2)=2j^2-2j,\] which is non-negative for all $j\geq 1$, giving our desired result. \end{proof}
\begin{prop}\label{prop_closedform_ieq1} \[\operatorname{skyt}(m+1,1,d-1)={m+d\choose d-1}-m-d.\] \end{prop}
\begin{proof} Note that if $\alpha\in \operatorname{Skyt}(m+1,1,d-1)$, it is made up of two ``tails'', one of length $m+1$ extending down, and the other of length $d-1$ extending up, so that the two tails overlap in exactly two positions. See the figure below for a schematic of $\alpha$, with some entries labeled. \begin{center} \begin{tikzpicture}[scale=.4] \draw (0,1) grid (-1,-4); \draw[decoration={brace,raise=7pt},decorate] (-1,-4) -- node[left=7pt] {$m+1$} (-1,1); \draw (0,-1) grid (1,4); \draw[decoration={brace,raise=7pt},decorate] (1,4) -- node[right=7pt] {$d-1$} (1,-1); \node[] (x1) at (-.5,.5) {$w$}; \node[] (x1) at (-.5,-.5) {$x$}; \node[] (x1) at (.5,.5) {$y$}; \node[] (x1) at (.5,-.5) {$z$}; \end{tikzpicture} \end{center} Note that there are $m+d$ positions in these tableaux, and we require that $w<y$ and $x<z$.
Now, pick an element $\displaystyle S\in {[m+d]\choose d-1}$. The number of elements of $\operatorname{Skyt}(m+1,1,d-1)$ equals the number of $S$ that appear as the right tail of an element of $\operatorname{Skyt}(m+1,1,d-1)$, as the entries of one tail determine the entries of the other. It is easiest to count the complement, that is, the $S$ that will \textit{not} appear as the right tail in an element of $\operatorname{Skyt}(m+1,1,d-1)$. These are the $S$ that force $w>y$, $x>z$, or both. We leave it to the reader to verify that the complement has size $m+d$. \end{proof}
\begin{lemma}\label{lem_pos_i1} We have \[{m+d\choose d-1}-m-d-{1\over m+1}{m+d\choose d}(d-1)\geq 0\] for $d\geq 3$ when $m\geq 4$, and $d\geq 4$ when $m=3$. \end{lemma}
\begin{proof} We start by rewriting our expression of interest. \begin{align*} {m+d\choose d-1}-m-d-{1\over m+1}{m+d\choose d}(d-1)&={d\over m+1}{m+d\choose d}-m-d-{1\over m+1}{m+d\choose d}(d-1)\\ &={1\over m+1}{m+d\choose d} ({d}-(d-1))-m-d\\ &={1\over m+1} {m+d\choose d} -m-d\\ &={(m+d)_{(d-1)}\over d!}-m-d\\ &=(m+d)\left({(m+d-1)_{(d-2)}\over d!}-1\right). \end{align*} Hence, if \[f(m,d):={(m+d-1)_{(d-2)}\over d!},\] it suffices to show $f(m,d)\geq 1$. As a function in $m$, $f(m,d)$ is increasing. Also, \[f(4,d)={(d+3)_{(d-2)}\over d!}={(d+3)!\over 5!d!}={1\over 20}{d+3\choose 3}.\] See that $f(4,d)$ is increasing in $d$ and also $f(4,3)=1$. So when $m\geq 4$, we have our desired result for $d\geq 3$. When $m=3$, observe we have \[f(3,d)={(d+2)_{(d-2)}\over d!}={(d+2)!\over 4!d!}={1\over 12}{d+2\choose 2}.\] See that $f(3,d)$ is increasing in $d$, and $f(3,4)={15\over 12}$. \end{proof}
\begin{lemma}\label{lem_pos_i2} If $m\geq 3$, we have \[c_{m,d}^2(\mathcal{ CH})\geq 0.\] \end{lemma}
\begin{proof} It will be important to remember that since $i=2$, we have $d\geq 5$ by the degree requirement on Kazhdan-Lusztig polynomials. To show our desired result, we will need two separate cases. First suppose $m\geq d$. Note that we then already have $m\geq 3$, since $d\geq 5$. As in Lemma \ref{lem_pos_i_m_arb}, accounting for the fact that in this case $i=2$, let \begin{align*} A:&={(m+d)!(d-4)_{(d-5)}\over 2(m-1)!(d-2)!(m+2)(d-5)!(m+3)}\\ &={(m+d)!(d-4)!\over 2(m-1)!(d-2)!(m+2)(d-5)!(m+3)}\\ &={(m+d)!(d-4)m(m+1)\over 2(m+3)!(d-2)!}. \end{align*} Also similarly to Lemma \ref{lem_pos_i_m_arb}, but using the bound from Theorem \ref{thm:codingbound} for $|\mathcal{ CH}|$, let \[B:={1\over m+1}{m+d\choose d}{2\cdot d!\over 6(d-5)!(d-1)(d-3)}={(m+d)!(d-2)(d-4)\over 3(m+1)!(d-1)!}.\] A combination of Theorem \ref{thm:combformula}, Theorem \ref{thm:codingbound}, and the proof of Lemma \ref{lem_pos_i_m_arb} implies that \[c_{m,d}^2(\mathcal{ CH})\geq A-B,\] and so we show $A-B\geq 0$ when $m\geq d$. Notice that \[A-B={(m+d)!(d-4)f(m,d) \over 6(m+3)!(d-1)!},\] where \[f(m,d):=3m(m+1)(d-1)-2(d-2)(m+2)(m+3).\] Hence, it suffices to show that $f(m,d)\geq 0$ to show that $A-B\geq 0$. Since we are assuming $m\geq d$, we set $m=d+j$, for $j\geq 0$. Then $f(d+j,d)$ is quadratic in $j$ and we have \begin{align*} [j^2] f(d+j,d)&=d + 1\\ [j] f(d+j,d)&=2d^2 - 5d + 17\\ \text{Remaining terms of }f(d+j,d)&:d^3 - 6d^2 + 5d + 24 \end{align*} Each of these is positive when $d=5$. In fact, the $[j^2]$ term is clearly positive when $d\geq 5$. The $[j]$ term is increasing for $d\geq {5\over 2}$. For the remaining terms, note that the derivative is $3d^2-12d+5$, which increases so long as $d\geq 2$, and is already positive at $d=5$.
This means that the derivative remains positive for $d\geq 5$, and so the original function remains increasing. Hence, this shows that $c_{m,d}^2(\mathcal{ CH})\geq 0$ so long as $m\geq d$. Now we show the same result holds when $d\geq m$. To do this, we reuse $A$ as above, and redefine $B$ using our bound from Theorem \ref{thm:jamiesbound}. \[B:={2\over m+d+2}{m+d\choose d}{2\cdot d!\over 6(d-5)!(d-1)(d-3)}={2(m+d)!(d-2)(d-4)\over 3(m+d+2)m!(d-1)!}.\] For similar reasons as before, $c_{m,d}^2(\mathcal{ CH})\geq 0$ if $A-B\geq 0$. Note that \[A-B={(m+d)!(d-4)(m+1)g(m,d) \over 6(m+d+2)(m+3)!(d-1)!},\] where \[g(m,d):=3m(m+d+2)(d-1)-4(d-2)(m+2)(m+3).\] Observe that $g$ is a concave up quadratic function in $d$. If one expands the function, its vertex can be seen to occur at \[d={m^2+17m+24\over 6m}.\] However, note that this value is at most $m$ so long as $m\geq 5$, since \[{m^2+17m+24\over 6m}\leq m \text{ if and only if }-5m^2+17m+24\leq 0.\] Hence, this says that $g(m,d)$ is increasing in $d$ when $d\geq m\geq 5$. Also, when $m=3$ the vertex for $g$ is at approximately $d=4.67$ and when $m=4$ the vertex for $g$ is at $d=4.5$. We know that $d\geq 5$ regardless of its relation to $m$, so we have in fact shown that $g$ is increasing in $d$ for any $m\geq 3$ when $d\geq m$. Moreover, one can verify \[g(m,m)=2(m^3 - 6m^2 + 5m + 24)\geq 0\] so long as $m\geq 5$. Also, note that $g(3,5)=0$ and $g(4,5)=24$. Hence $g(m,d)$ is always non-negative for $d\geq m$ when $m\geq 3$. \end{proof}
\section{Integral Identities}\label{sec:integral_identities}
\begin{prop}\cite[Identity 2.110.8]{gradshteyn} Let $a,b$ be positive integers. Then \[\int y^a(1-xy)^b \ dy=a!b!\sum_{k=0}^b{(1-xy)^{b-k}y^{a+k+1}x^k\over (a+k+1)!(b-k)!}.\] \end{prop}
\begin{cor}\label{cor:int_0^1_y^a(1-xy)^b} Let $a,b$ be positive integers. Then \[\int_0^1 y^a(1-xy)^b \ dy=a!b!\sum_{k=0}^b{(1-x)^{b-k}x^k\over (a+k+1)!(b-k)!}.\] \end{cor}
\begin{cor}\label{cor:int_0^yx^a(1-x)^b} Let $a,b$ be positive integers. Then \[\int_0^y x^a(1-x)^b\ dx=a!b!\sum_{k=0}^{b} {(1-y)^{b-k}y^{a+k+1}\over (a+k+1)!(b-k)!}.\] \end{cor}
\begin{prop}\label{prop:int_i_times} Let $x_0,x_1,\dots, x_i$ be a list of $i+1$ variables. Set $h_1(x_1)=\displaystyle\int_0^{x_1} \ x_0^a(1-x_0)^b \ dx_0$, and for $i>1$ define $h_i(x_i)=\displaystyle\int_0^{x_i} \ h_{i-1}(x_{i-1}) \ dx_{i-1}$. Then \[\displaystyle\int_0^1\ h_i(x_i)\ dx_i ={a!(b+i)!\over i!(a+b+i+1)! }.\] \end{prop}
\begin{proof} Using Corollary \ref{cor:int_0^yx^a(1-x)^b} $i$ times, we get the following expression for $h_i(x_i)$: \begin{align}\label{eq_sum} h_i(x_i)=a!b!\sum_{k_1=0}^{b}\sum_{k_2=0}^{b-k_1}\ \sum_{k_3=0}^{b-k_1-k_2}\cdots \ \sum_{k_i=0}^{b-\sigma} {x_i^{a+\sigma+k_i+i}(1-x_i)^{b-\sigma-k_i}\over (a+\sigma+k_i+i)!(b-\sigma-k_i)!} \end{align} where $\sigma=k_1+k_2+\cdots +k_{i-1}$. Noting that \[\int_0^1 x_i^{a+\sigma+k_i+i}(1-x_i)^{b-\sigma-k_i}\ dx_i={(a+\sigma+k_i+i)!(b-\sigma-k_i)!\over (a+b+i+1)!},\] we may use \eqref{eq_sum} to write \begin{align*} \int_0^1 h_i(x_i)\ dx_i&=a!b!\sum_{k_1=0}^{b}\ \ \sum_{k_2=0}^{b-k_1}\ \ \sum_{k_3=0}^{b-k_1-k_2}\ \cdots \ \sum_{k_i=0}^{b-\sigma} {(a+\sigma+k_i+i)!(b-\sigma-k_i)!\over (a+\sigma+k_i+i)!(b-\sigma-k_i)!(a+b+i+1)!}\\ &={a!b!\over (a+b+i+1)!}\sum_{k_1=0}^{b}\ \ \sum_{k_2=0}^{b-k_1}\ \ \sum_{k_3=0}^{b-k_1-k_2}\ \cdots \ \sum_{k_i=0}^{b-\sigma} 1, \end{align*} which simplifies using Proposition \ref{prop_combident} to \[{a!b!\over (a+b+i+1)!}{b+i\choose i}={a!(b+i)!\over i!(a+b+i+1)!}.\]
\end{proof}
\begin{prop}\label{prop_combident} \[\sum_{k_1=0}^{b}\ \sum_{k_2=0}^{b-k_1}\ \sum_{k_3=0}^{b-k_1-k_2}\cdots \sum_{k_i=0}^{b-\sigma}1={b+i\choose i},\] where $\sigma=k_1+k_2+\cdots +k_{i-1}$. \end{prop}
\begin{proof} It is helpful to first reindex the summations so that they start at 1 instead of 0. Then the identity follows from counting the set below in two ways. \[\bigcup_{x_1\in [b+1]}\ \bigcup_{x_2\in [b+2]\setminus [x_1]}\ \bigcup_{x_3\in [b+3]\setminus [x_2]}\cdots \bigcup_{x_i\in [b+i]\setminus [x_{i-1}]}\{x_1,x_2,\dots, x_i\}.\] \end{proof}
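For small parameters, the counting identity in Proposition \ref{prop_combident} is also easy to confirm by direct computation. The short Python script below (an illustrative numerical check only, not part of any proof; the helper name \texttt{nested\_sum} is ours) evaluates the nested sum by brute force and compares it against ${b+i\choose i}$.
\begin{verbatim}
# Illustrative check of the identity in Proposition prop_combident (not part of the proof).
from math import comb

def nested_sum(b, i):
    # Evaluate sum_{k_1=0}^{b} sum_{k_2=0}^{b-k_1} ... sum_{k_i=0}^{b-sigma} 1 recursively,
    # where sigma = k_1 + ... + k_{i-1}.
    def rec(remaining, depth):
        if depth == i:
            return 1
        return sum(rec(remaining - k, depth + 1) for k in range(remaining + 1))
    return rec(b, 0)

for b in range(6):
    for i in range(1, 5):
        assert nested_sum(b, i) == comb(b + i, i)
\end{verbatim}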
\section{Introduction} \label{introduction} Linear-response time dependent density functional theory (TDDFT)\cite{runge1984density,casida1995time,dreuw2005single} is very widely used to model electronic excited states of chemical species. TDDFT is an appealing approach as it is computationally inexpensive ($O(N^{3-4})$ scaling vs number of basis functions $N$), nearly black-box and able to simultaneously compute a large number of excited states. However, the lack of explicit orbital relaxation renders it unsuitable for describing excitations that involve substantial reorganization of electron density, such as charge transfer\cite{dreuw2005single,peach2008excitation} or Rydberg excited states\cite{casida1998molecular,tozer2000determination}. Excitation of core electrons in particular involves a substantial relaxation of the core-hole (and an accompanying reorganization of valence electron density), which leads to substantial errors in excitation energies predicted by TDDFT with standard functionals. It is consequently not unusual to blue-shift TDDFT core-level spectra by $\sim10$ eV for alignment with experiment\cite{besley2010time,wenzel2014calculating,attar2017femtosecond,bhattacherjee2018photoinduced,chantzis2018ab,lestrange2015calibration} (though the qualitative nature of transitions is typically reasonably predicted). Some specialized short-range corrected functionals specifically trained to predict core-level spectra\cite{besley2009time} tend to fare better\cite{besley2020density,besley2011theoretical,robinson2010modelling,fogarty2018experimental,buckley2011theoretical,ljubic2016experimental,ljubic2018characterisation}, but the very strong sensitivity of TDDFT excitation energies to delocalization error\cite{perdew1982density,dreuw2005single} is troubling (as even small perturbations could have disproportionate impact on relative peak positions). In contrast, linear-response based wave function theories like equation of motion coupled cluster singles and doubles (EOM-CCSD)\cite{sekino1984linear,stanton1993equation,krylov2008equation,peng2015energy,coriani2015communication,coriani2016molecular,carbone2019analysis,vidal2019new} tend to systematically overestimate core-excitation energies\cite{coriani2012coupled,frati2019coupled,tsuru2019time,peng2015energy} due to lack of explicit orbital relaxation, often necessitating empirical redshifting by 1-2 eV for alignment with experiment\cite{frati2019coupled,tsuru2019time,peng2015energy,coriani2015communication}. Encouragingly however, use of different core-valence separation\cite{cederbaum1980many} (CVS) schemes has been observed to reduce the magnitude of the shift required\cite{vidal2019new,lopez2020equation,vidal2020dyson}. A flavor of second order extended algebraic diagrammatic construction (ADC(2)-x\cite{wormit2014investigating}, specifically CVS-ADC(2)-x\cite{wenzel2014calculating}) is also often employed to calculate core-level spectra\cite{wenzel2014calculating,norman2018simulating,list2020probing,wenzel2016physical}. The accuracy of CVS-ADC(2)-x however owes a great deal to fortuitous cancellation between various sources of error\cite{wenzel2014calculating,wenzel2015analysis}, and performance actually worsens when third order ADC is employed\cite{wenzel2015analysis}.
At any rate, the higher computational cost of these wave function theories ($O(N^6)$ for both EOM-CCSD and ADC(2)-x) and their slower basis set convergence render them impractical for large molecular systems or extended materials, relative to computationally inexpensive DFT approaches. Nonetheless, development of lower-scaling approximations to these wave function based methods is expected to broaden their applicability considerably\cite{myhre2016near,peng2015energy}. In contrast to these linear-response based protocols, state-specific orbital optimized (OO) methods have been much more successful at accurate prediction of core-level spectra even within the DFT paradigm\cite{besley2009self,derricotte2015simulation,michelitsch2019efficient,hait2020highly,ehlert2020psixas}. The main difficulty with these methods is the potential for `variational collapse' of the target excited state down to the ground state or another excited state, as it is challenging to optimize excited state orbitals (by virtue of excited states typically being saddle points of the energy). The maximum overlap method (MOM)\cite{gilbert2008self,barca2018simple} was developed to address this problem for repeated Fock matrix diagonalization based methods like DIIS\cite{pulay1980convergence}, though convergence failures and variational collapse (via slow drifting of orbitals) are not always prevented\cite{mewes2014molecular,hait2020excited}. More recently, some of us have proposed a square gradient minimization (SGM)\cite{hait2020excited} based direct minimization approach that appears to be robust against both modes of MOM failure. SGM has been employed in conjunction with the spin-pure restricted open-shell Kohn-Sham (ROKS) method\cite{filatov1999spin,kowalczyk2013excitation} to predict highly accurate ($<0.5$ eV error) core-level spectra of closed-shell molecules\cite{hait2020highly} at local DFT cost (using the modern SCAN\cite{SCAN} functional). It is also worth noting that there exist linear-response methods that incorporate partial OO character through relaxed core-ionized states, like Static Exchange (STEX)\cite{aagren1997direct} or Non-orthogonal Configuration Interaction Singles (NOCIS)\cite{oosterbaan2018non,oosterbaan2019non,oosterbaan2020generalized}, though such treatments are wave function based and $\sim1$ eV error remains common due to lack of dynamic correlation. Stable open-shell molecules are fairly uncommon in nature and there is consequently a scarcity of static experimental spectra for such species. However, open-shell systems are omnipresent in chemical dynamics experiments (either as fragments or excited states of closed-shell molecules) where transient X-ray absorption spectroscopy (XAS) is often employed\cite{chergui2017photoinduced,bhattacherjee2018ultrafast,schnorr2019tracing,yang2018electron}. It is consequently useful to have cheap and reliable theoretical techniques capable of modeling core-level spectra of such species. The highly accurate ROKS method is however not applicable to most open-shell systems, as it is explicitly designed for singlet states with one broken electron pair. In fact, open-shell systems pose additional challenges for many of the methods described above, as a spin-pure treatment of excited states necessitates inclusion of some double excitations\cite{maurice1996nature,oosterbaan2019non,oosterbaan2020generalized} even for states that conventionally appear to be single excitations breaking one electron pair.
This is not too difficult for wave function approaches, as shown by the extended CIS (XCIS\cite{maurice1996nature}) and open-shell NOCIS\cite{oosterbaan2019non,oosterbaan2020generalized} methods. However, it is not at all straightforward to achieve this within TDDFT, which has no route for describing double excitations within the widely used adiabatic approximation\cite{maitra2004double,levine2006conical,dreuw2005single}. It is tempting to believe that missing such configurations would not be particularly significant if the unpaired electrons interact only weakly, but the failure of TDDFT in describing excited state single bond dissociations despite the unrestricted reference state being reasonable\cite{hait2019beyond} indicates some cause for caution. In this work, we apply OO excited state DFT in conjunction with SGM to study single core-excitations of open-shell systems. This entails investigation of excitations to both singly occupied levels (which can be well described by single determinants, in principle) and completely unoccupied levels (which result in intrinsically multiconfigurational states). We present a scheme for recoupling multiple configurations to obtain an approximate doublet state for the latter class of excitations and demonstrate the utility of this protocol by considering the C K-edge spectra of the allyl radical, the O K-edge of CO$^+$ and the N K-edge of NO$_2$. We also discuss general principles for reliably using these techniques to predict core-excitation spectra. Overall, we demonstrate that highly accurate DFT results can be obtained via orbital optimization with the modern local SCAN functional at low computational cost, similar to behavior observed for closed-shell systems. Low error can also be achieved via the cam-B3LYP, TPSS and $\omega$B97X-D3 functionals (albeit at a somewhat higher asymptotic cost for the hybrid functionals).
\section{Theory} \subsection{Single configurational states}\label{enas} Excitations from the core to singly occupied molecular orbitals (SOMOs) of open-shell systems result in states representable via a single Slater determinant, as there is no change in the number of unpaired electrons. The simplest approach for modeling such states is $\Delta$ Self-Consistent Field ($\Delta$SCF)\cite{ziegler1977calculation,gilbert2008self,kowalczyk2011assessment,besley2009self}, where the non-aufbau solution to the Hartree-Fock\cite{szabo2012modern} or Kohn-Sham\cite{kohn1965self} DFT equations is converged via an excited state solver like SGM or MOM. The resulting excited KS determinant would not necessarily be exactly orthogonal to the ground state determinant, but this is generally of little concern: KS determinants are fictitious entities useful for finding densities, and there is thus no requirement that ground and excited state determinants be orthogonal. Nonetheless, a significant ($>0.1$, for example) squared overlap between the ground and excited state configurations would be concerning, but we have not observed such occurrences in our investigations and do not believe them to be likely without at least partial variational collapse of the core-hole. The principal dilemma for such states is choosing between spin-restricted and unrestricted orbitals for $\Delta$SCF. Unrestricted orbitals are typically more suitable for DFT studies on open-shell systems, though some functionals are known to yield atypically unphysical behavior in certain limits away from equilibrium\cite{hait2019wellbehaved}.
On the other hand, restricted open-shell (RO) orbitals artificially enforce a spin-symmetry that does not exist in radicals. As will be shown later (in Table \ref{tab:uvsro}), use of unrestricted orbitals appears to systematically lower the core-excitation energies (via extra stabilization of the core-excited state relative to the ground state). The best functionals for predicting spectra of closed-shell species yield lower errors for radicals when unrestricted orbitals are employed, and we thus recommend the use of unrestricted orbitals over RO orbitals for radicals. RO orbitals however should be employed for closed-shell systems (via ROKS or related methods)\cite{hait2020highly}, on account of the existence of spin-symmetry in such species.
\subsection{Multiconfigurational states}\label{multi} Multiconfigurational DFT is a difficult challenge even outside the specific problems of TDDFT with double excitations, as the Kohn-Sham (KS) exchange-correlation energy is defined for a single determinant reference. KS-DFT target states therefore should be single determinants, and directly recoupling them via configuration interaction (CI) would result in double counting of some electron-electron interactions through both the functional and the CI off-diagonal terms. This is quite undesirable, making modeling such states fairly challenging. One very reasonable solution is to note that single determinants with both $\alpha$ and $\beta$ unpaired electrons are mixtures of different spin-states, and the highest spin-state within that ensemble can be well approximated by a single determinant by merely making all unpaired spins point in the same direction. Approximate spin-projection (AP)\cite{yamaguchi1988spin} can consequently be applied to remove this high spin contribution from a spin impure mixed determinant. This approach should be sufficient when there are only two eigenstates that significantly contribute to the mixed configuration, as is the case for single excitations out of closed-shell molecules (where only the singlet and triplet states contribute). ROKS in fact utilizes this very feature to ensure spin-purity. ROKS employs a mixed configuration that has one unpaired $\alpha$ spin and one unpaired $\beta$ spin (which has energy $E_M$) and a triplet configuration that has both unpaired spins as $\alpha$ (which has energy $E_T$). The use of RO orbitals forces the mixed configuration to be exactly halfway between singlet and triplet, indicating $E_M=\dfrac{E_S+E_T}{2}$, where $E_S$ is the true singlet energy. ROKS consequently optimizes the purified singlet energy $E_S=2E_M-E_T$. Things are however substantially more challenging for doublet states. A mixed configuration with two unpaired $\alpha$ electrons and one unpaired $\beta$ electron is a mixture of three states---two doublets and a quartet. The quartet contribution can be easily removed using an AP protocol similar to ROKS, but disentangling the two doublet energies is nontrivial. Looking at the pure wave function based CI approach however offers some hints as to how to proceed. If we consider restricted open-shell configurations with three unpaired electrons occupying three spin-restricted orbitals (labeled $1,2$ and $3$, respectively), eight possible configurations exist. Spin-inversion symmetry in the absence of magnetic fields however indicates that only four provide unique information: \begin{enumerate} \item $\ket{Q}=\ket{\uparrow\uparrow\uparrow}$: All three spins are $\alpha$. This is the pure quartet with energy $E_Q$.
\item $\ket{M_1}=\ket{\downarrow\uparrow\uparrow}$: Only the spin at orbital $1$ is $\beta$. This is a mixed configuration with energy $E_{M_1}=E_Q+K_{12}+K_{13}$, where $K_{pq}$ is the exchange interaction $\braket{pq}{qp}$ between an electron in orbital $p$ and another in orbital $q$. The inversion of the spin in orbital $1$ relative to the quartet leads to a loss of exchange stabilization between this orbital and the other two, leading to the energy going up by $K_{12}+K_{13}$. \item $\ket{M_2}=\ket{\uparrow\downarrow\uparrow}$: Only the spin at orbital $2$ is $\beta$. Consequently $E_{M_2}=E_Q+K_{12}+K_{23}$. \item $\ket{M_3}=\ket{\uparrow\uparrow\downarrow}$: Only the spin at orbital $3$ is $\beta$. Consequently $E_{M_3}=E_Q+K_{13}+K_{23}$. \end{enumerate} Having the single determinant energies $E_Q,E_{M_1},E_{M_2},E_{M_3}$ is sufficient to uniquely solve for the exchange interactions $K_{pq}$, with $K_{12}=\dfrac{E_{M_1}+E_{M_2}-E_Q-E_{M_3}}{2}$ etc. This is quite useful, as the off-diagonal CI coupling elements are $\bra{M_i}H\ket{M_j}=-K_{ij}$ from Slater-Condon rules for double excitations\cite{szabo2012modern}. This indicates that knowledge of the single determinant energies is sufficient for solving the CI problem. With this, we find the eigenvalues of $H$ within the subspace spanned by $\ket{M_{1,2,3}}$ to be: \begin{align} E_1 &= E_Q\\ E_2 &= \dfrac{1}{2}\left(E_{M_1}+E_{M_2}+E_{M_3}-E_{Q}- \sqrt{2\left[\left(E_{M_1}-E_{M_2}\right)^2+\left(E_{M_2}-E_{M_3}\right)^2+\left(E_{M_3}-E_{M_1}\right)^2\right]}\right)\label{d1}\\ E_3 &= \dfrac{1}{2}\left(E_{M_1}+E_{M_2}+E_{M_3}-E_{Q}+ \sqrt{2\left[\left(E_{M_1}-E_{M_2}\right)^2+\left(E_{M_2}-E_{M_3}\right)^2+\left(E_{M_3}-E_{M_1}\right)^2\right]}\right)\label{d2} \end{align} The first eigenvalue corresponds to the quartet within the $M_S=\dfrac{1}{2}$ subspace (which is a linear combination of all three configurations with equal weights). The other two correspond to the energies of the two possible doublet states. We propose that the same approach be employed for recoupling DFT configurations, with the KS energies of the configurations $\ket{M_{1,2,3}}$ used in place of the HF ones of the wave function theory approach. The risk of double counting should be greatly reduced, as the effective off-diagonal elements are found directly from the KS energies rather than from Slater-Condon rules. Indeed, the off-diagonal elements should no longer be viewed as exchange interactions but rather as effective spin-spin coupling elements. The entire approach is basically equivalent to solving for the eigenstates of the effective Heisenberg-like Hamiltonian $H^\prime=-2J_{12}\vec{S}_1\cdot\vec{S}_2-2J_{13}\vec{S}_1\cdot\vec{S}_3-2J_{23}\vec{S}_2\cdot\vec{S}_3$ for three interacting spins, where the couplings $J_{ij}$ are obtained from DFT (and are equivalent to the exchange interactions $K_{ij}$ if HF is used as the functional). Such approaches have been used within broken-symmetry DFT to calculate spin coupling constants of transition metal species to reasonable accuracy\cite{yamaguchi1979singlet,noodleman1981valence,noodleman1985models,sinnecker2004calculating,lovell2001femo,adams1997density,mouesca1995density,witzke2020bimetallic}, and it is hoped that similar behavior will transfer over. Furthermore, equivalent logic for the case of two unpaired spins yields ROKS, which is known to be quite accurate for singlet states with one broken electron pair\cite{kowalczyk2013excitation,hait2016prediction,hait2020excited}.
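To illustrate the recoupling concretely, note that Eqns \ref{d1}-\ref{d2} are equivalent to diagonalizing the $3\times 3$ Hamiltonian in the $\{\ket{M_1},\ket{M_2},\ket{M_3}\}$ basis, with off-diagonal elements $-K_{ij}$ extracted from the four determinant energies as described above. The following Python sketch demonstrates this equivalence; the input energies are arbitrary placeholder values (not results from this work), and the function name \texttt{recoupled\_doublets} is ours.
\begin{verbatim}
# Sketch: recoupled doublet energies from the quartet and mixed determinant energies.
# The numerical inputs below are arbitrary placeholders, not data from this work.
import numpy as np

def recoupled_doublets(E_Q, E_M1, E_M2, E_M3):
    # Return (E_2, E_3) of Eqns (d1)-(d2)
    s = E_M1 + E_M2 + E_M3 - E_Q
    spread = np.sqrt(2.0 * ((E_M1 - E_M2)**2 + (E_M2 - E_M3)**2 + (E_M3 - E_M1)**2))
    return 0.5 * (s - spread), 0.5 * (s + spread)

E_Q, E_M1, E_M2, E_M3 = 0.0, 3.0, 4.0, 5.0  # placeholder energies
# Effective couplings from the determinant energies (exchange integrals if HF is the functional)
K12 = 0.5 * (E_M1 + E_M2 - E_Q - E_M3)
K13 = 0.5 * (E_M1 + E_M3 - E_Q - E_M2)
K23 = 0.5 * (E_M2 + E_M3 - E_Q - E_M1)
H = np.array([[E_M1, -K12, -K13],
              [-K12, E_M2, -K23],
              [-K13, -K23, E_M3]])
E2, E3 = recoupled_doublets(E_Q, E_M1, E_M2, E_M3)
# The eigenvalues of H are the quartet energy E_Q and the two doublet energies E_2, E_3
assert np.allclose(np.linalg.eigvalsh(H), sorted([E_Q, E2, E3]))
\end{verbatim}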
These known instances of successful behavior encourage us to believe that this protocol is worthwhile to explore. We also note that Eqns \ref{d1}-\ref{d2} were reported in Ref \onlinecite{kowalczyk2011assessment} without an explicit description of the derivation, but these have not been actually applied to core-level spectroscopy (or any excited state problem) to the best of our knowledge. Having obtained $E_{2,3}$ as spin-purified energies, we next seek to determine how to obtain the optimal orbitals. It is tempting to directly optimize $E_{2,3}$ in a manner analogous to ROKS, but we have elected not to do so at present. This optimization is nontrivial due to the nonlinear nature of the energy expression (vs the simpler form for ROKS). In addition, the derived equation is only precisely true for restricted open-shell orbitals, while Sec \ref{enas} seems to suggest unrestricted orbitals are optimal. We therefore look to AP-$\Delta$SCF\cite{ziegler1977calculation,kowalczyk2011assessment} for singlet excited states for inspiration, where the mixed determinant and triplet determinants are individually optimized (resulting in two sets of orbitals) and the singlet energy is simply computed as $2E_M-E_T$ from the individually optimized energies, instead of optimizing a single set of orbitals as in ROKS. The resulting energies however are often not dramatically different from ROKS\cite{kowalczyk2013excitation}, and so we follow a similar protocol here in order to determine whether this route for recoupling mixed determinants has sufficient utility to justify optimizing a single set of unrestricted orbitals for computing the doublet energies. We consequently optimize $\ket{Q}$ and $\ket{M_{1,2,3}}$ individually and compute $E_{2,3}$ from those optimized energies. One rather inconvenient detail is that individually optimized $\ket{M_{1,2,3}}$ configurations would thus not be strictly orthogonal to each other due to slight differences in the orbitals. However we do not consider any non-orthogonality derived terms arising from mixed configurations, as the KS determinants are fictitious constructs. On a more practical note, we ensure low overlap by providing restricted open-shell quartet orbitals as the initial guess for SGM optimization of the mixed determinants. The initial guesses are thus orthogonal, and orbital relaxation to the closest stationary point (which SGM is supposed to achieve) in unrestricted space should not lead to significant non-orthogonality for cases where this model of three unpaired electrons is a good approximation. Further details about initial guesses are enumerated in Sec \ref{sec:recs}.
\subsection{Transition Dipole Moments}\label{tmu} The magnitude of the transition dipole moment between the ground and excited states is essential for computing oscillator strengths (and thus relative intensities in computed spectra). The fictitious nature of the KS determinant (which represents a wave function of noninteracting electrons subjected to a fictitious potential) is a significant obstacle here, as it implies there is no rigorous route for computing transition dipole moments. However, treating the KS determinants as real wave functions might be a reasonable approximation for computing this quantity, in the hope that the KS determinants (or superpositions thereof) would have a reasonably large overlap with the true wave functions to make this exercise worthwhile.
Indeed, spectra computed via this route show fairly good agreement with experiment (as can be seen from previous work\cite{hait2020highly} by some of us, for instance). Such a protocol can (and should) account for nonorthogonality between ground and mixed determinants, as it is fairly simple to compute NOCI dipole matrix elements\cite{thom2009hartree}. There are some additional factors to consider for the recoupled multiconfigurational states. The wave function inspired approach indicates that transition dipole moments should be computed via a linear combination of the transition dipole moments of individual determinants, as weighted by their coefficients in the eigenvectors corresponding to $E_{2,3}$. The effect of non-orthogonality between mixed determinants $\ket{M_{1,2,3}}$ on eigenvector coefficients is neglected here, both because such terms are relatively small (the mixed determinants have fairly low overlap with each other) and because it is not straightforward to calculate these effects. The decision to not consider this form of nonorthogonality does not appear to have any significant deleterious impact, as shown by the spectra presented later. The other important factor to consider is that the analysis in Sec \ref{multi} found off-diagonal coupling elements directly from the energies $E_{M_{1,2,3}}$ and thus did not account for phases of $\ket{M_{1,2,3}}$. These phases however are critical for estimating transition dipole moments, and thus must be obtained somehow. A protocol for estimating these phases via the formally ``quartet'' state is supplied in the Appendix.
\section{Results and Discussion} \subsection{Excitations to the SOMO} The relative scarcity of experimental XAS data for radicals leaves us with a fairly small dataset of 17 excitations for assessing the performance of single determinant $\Delta$SCF. The precise statistical values here are thus less reliable than those obtained in Ref \onlinecite{hait2020highly} from 40 excitations out of closed-shell molecules, but general qualitative trends can be drawn even from this restricted amount of data. The experimental excitation energies for all the C K-edge excitations (save allyl and CO$^+$) were measured by some of us, via radicals obtained from the photodissociation of the corresponding iodide\cite{yang2018electron,yang2018}. These values should have an uncertainty of $\pm 0.1$ eV, although vibrational excitations induced by photodissociation could shift the values somewhat. However, the resulting excitation energy for CH$_3$ agrees well with vibrationally resolved spectra obtained from radicals generated from flash pyrolysis\cite{alagia2007probing}. Furthermore (as can be seen from Table \ref{tab:somodata}), the experimental shifts between the C K-edge of the allyl radical (obtained by authors of Ref \onlinecite{alagia2013soft} on cold radicals generated via flash pyrolysis) and other C K-edges are very well reproduced by theoretical methods, suggesting that any vibrational excitation induced effect was small overall. A full Franck-Condon analysis could prove useful in quantifying any such effect, but was not pursued at present. We only consider a relatively small number of density functionals, as the combination of large experimental uncertainty (typically 0.1 eV) and the limited number of data points would make precise rankings of many functionals meaningless. We think it is more useful to investigate the performance of some representative functionals and see if they are sufficiently accurate to justify wider use.
We therefore consider the following functionals from various rungs of Jacob's ladder\cite{perdew1982density}: \begin{enumerate} \item Rung 1 (local spin-density approximation/LSDA\cite{Slater,VWN,PW92}): Not considered due to very large errors found in Ref \onlinecite{hait2020highly}. \item Rung 2 (generalized gradient approximation/GGA): BLYP\cite{b88,lyp}, PBE\cite{PBE}. \item Rung 3 (meta-GGA): TPSS\cite{tpss}, SCAN\cite{SCAN}. \item Rung 4 (hybrids): B3LYP\cite{b3lyp}, PBE0\cite{pbe0} (global hybrids); cam-B3LYP\cite{camb3lyp}, $\omega$B97X-D3\cite{wB97XD3}, $\omega$B97X-V\cite{wb97xv} (range separated hybrids). \item Rung 5 (double hybrids): Not considered due to significant computational expense. \end{enumerate} The Hartree-Fock (HF) wave function method is also considered, in order to determine the impact of neglecting correlation entirely. The choice of functionals here was guided by both a desire to compare against closed-shell results reported earlier\cite{hait2020highly} and a desire to examine the behavior of classic, minimally parameterized functionals like B3LYP. \begin{table}[htb!] \begin{tabular}{llllllllllll} Radical & Expt. & BLYP & PBE & TPSS & SCAN & B3LYP & PBE0 & cam-B3LYP & $\omega$B97X-D3 & $\omega$B97X-V & HF \\ CH$_3$ & 281.4\cite{yang2018,alagia2007probing} & 281.6 & 280.8 & 281.8 & 281.8 & 281.7 & 281.2 & 281.6 & 281.8 & 281.9 & 282.8 \\ CH$_3$CH$_2$ & 281.7\cite{yang2018} & 282.0 & 281.3 & 282.1 & 282.2 & 282.1 & 281.6 & 282.0 & 282.2 & 282.3 & 283.0 \\ (CH$_3$)$_2$CH & 282.2\cite{yang2018} & 282.3 & 281.6 & 282.4 & 282.5 & 282.4 & 282.0 & 282.3 & 282.5 & 282.6 & 283.3 \\ (CH$_3$)$_3$C & 282.6\cite{yang2018} & 282.6 & 281.9 & 282.6 & 282.8 & 282.7 & 282.3 & 282.6 & 282.8 & 282.9 & 283.5 \\ Allyl & 282.0\cite{alagia2013soft} & 282.3 & 281.5 & 282.4 & 282.5 & 282.4 & 281.9 & 282.3 & 282.5 & 282.6 & 283.5 \\ CO$^+$ & 282.0\cite{couto2020carbon} & 282.2 & 281.3 & 282.2 & 282.3 & 282.3 & 281.8 & 282.2 & 282.4 & 282.5 & 283.3 \\ CH$_2$Br & 282.6\cite{yang2018} & 282.8 & 282.0 & 282.9 & 282.9 & 282.8 & 282.4 & 282.7 & 282.9 & 283.1 & 283.8 \\ CH$_2$Cl & 282.8\cite{yang2018electron} & 283.0 & 282.2 & 283.1 & 283.2 & 283.0 & 282.6 & 282.9 & 283.1 & 283.3 & 284.0 \\ NH$_2$ & 394.3\cite{parent2009irradiation} & 394.5 & 393.6 & 394.6 & 394.7 & 394.5 & 394.1 & 394.5 & 394.7 & 394.8 & 395.7 \\ N$_2^+$ & 394.3\cite{lindblad2020x} & 394.4 & 393.5 & 394.3 & 394.3 & 394.2 & 393.6 & 394.0 & 394.2 & 394.3 & 394.4 \\ NH$_3^+$ & 395.2\cite{bari2019inner} & 395.0 & 394.2 & 395.2 & 395.3 & 395.1 & 394.6 & 395.0 & 395.2 & 395.4 & 396.4 \\ NO$_2$ & 401.0\cite{zhang1990inner} & 401.0 & 400.2 & 401.0 & 401.2 & 401.0 & 400.6 & 401.1 & 401.3 & 401.5 & 402.1 \\ OH & 525.8\cite{stranges2002high} & 525.8 & 524.9 & 526.0 & 526.0 & 525.8 & 525.3 & 525.8 & 525.9 & 526.1 & 527.0 \\ HO$_2$ & 528.6\cite{lacombe2006radical} & 528.5 & 527.6 & 528.6 & 528.5 & 528.3 & 527.8 & 528.3 & 528.4 & 528.6 & 528.9 \\ NO$_2$ & 530.3\cite{zhang1990inner} & 530.5 & 529.7 & 530.5 & 530.5 & 530.2 & 529.7 & 530.1 & 530.2 & 530.4 & 529.9 \\ O$_2$ & 530.8\cite{coreno1999vibrationally} & 530.8 & 530.0 & 530.8 & 530.8 & 530.6 & 530.1 & 530.6 & 530.8 & 531.0 & 530.6 \\ CO$^+$ & 528.5\cite{couto2020carbon} & 528.3 & 527.5 & 528.4 & 528.5 & 528.0 & 527.6 & 528.0 & 528.1 & 528.2 & 528.3 \\ & & & & & & & & & & & \\ & RMSE & 0.2 & 0.7 & 0.2 & 0.3 & 0.3 & 0.5 & 0.2 & 0.3 & 0.4 & 1.1 \\ & ME & 0.1 & -0.7 & 0.2 & 0.2 & 0.0 & -0.4 & 0.0 & 0.2 & 0.3 & 0.9 \\ & MAX & 0.3 & 1.0 & 0.4 & 0.5 & 0.5 & 0.9 & 0.5 & 0.5 & 0.6 & 1.5 
\end{tabular} \caption{$\Delta$SCF/aug-cc-pCVTZ\cite{dunning1989gaussian,kendall1992electron,woon1995gaussian} core to SOMO excitation energies (in eV) for open-shell species, as predicted by various functionals. Unrestricted orbitals were used for both the ground and excited states. Root mean squared error (RMSE), mean error (ME) and maximum absolute error (MAX) are also reported.} \label{tab:somodata} \end{table} Table \ref{tab:somodata} presents the excitation energies calculated using the chosen approaches (using spin-unrestricted orbitals), along with statistical measures of error like the root mean squared error (RMSE). None of the density functional methods deviate from experiment by more than 1 eV, which is in sharp contrast to the typical behavior of TDDFT with the same functionals\cite{hait2020highly}. Even HF has only $< 2$ eV error despite the complete absence of correlation. We specifically observe that the BLYP, TPSS, SCAN, B3LYP, cam-B3LYP and $\omega$B97X-D3 functionals yield 0.3 eV or lower RMSE, and do not deviate by more than 0.5 eV from the experimental reference values. The good performance by local functionals like BLYP, TPSS and SCAN is quite impressive, as these functionals are much more computationally efficient than hybrids. Of the trio, only the performance of SCAN has been characterized for closed-shell systems\cite{hait2020highly}, where it was also found to be similarly accurate. We consequently focus on the performance of SCAN in later sections of this work, as good performance in both the closed and open-shell limits is critical for prediction of transient X-ray absorption spectroscopy. However, we believe that good performance can be obtained from many functionals considered in this work (as partially demonstrated in the Supporting Information). Interestingly, PBE and PBE0 perform surprisingly poorly, especially relative to BLYP and B3LYP, respectively. Table \ref{tab:somodata} furthermore shows that the small errors for many functionals are mostly systematic, which appears to suggest that the change in excitation energy between two species (between methyl and tert-butyl, for instance) would be reproduced fairly accurately by most functionals. This is also in principle true for TDDFT, although the massive ($\sim 10$ eV) errors in the individual excitation energies mean that even a relatively small variation in absolute error could have significant impact on relative peak positions (made more likely by the very high sensitivity of TDDFT results to delocalization error\cite{perdew1982density,dreuw2005single}). Most functionals (including SCAN) appear to systematically overestimate energies, while PBE and PBE0 systematically underestimate (which might be the reason for their poor overall performance). Inclusion of relativistic effects\cite{takahashi2017relativistic} (which systematically increase excitation energies by binding core electrons more tightly) would therefore degrade the performance of many functionals, while improving the performance of PBE and PBE0. The atom-specific relativistic corrections for C, N and O are however quite small\cite{takahashi2017relativistic} (0.1-0.3 eV) and therefore are often neglected in studies (such as by the SRC functionals trained for TDDFT spectra prediction\cite{besley2009time}, which have these effects implicitly baked into what is fundamentally a nonrelativistic theory).
The impact of incorporating these corrections on the errors of various models is provided in the supporting information, which shows that the RMSE of functionals (other than PBE and PBE0) goes up by $0.1$ eV at most, suggesting that this is not a major issue in practice. We also note that HF systematically overestimates excitation energies by $\sim 1$ eV due to missing correlation, which indicates that simple models for dynamical correlation (such as perturbative approaches\cite{szabo2012modern,cremer2011moller}) might be adequate for substantially lowering error, albeit at higher computational cost than DFT. HF however has a strong propensity to spuriously spin-contaminate Slater determinants, and the performance of perturbative corrections to HF references could consequently be greatly degraded\cite{gill1988does,cremer2011moller}. \begin{table}[htb!] \begin{tabular}{llllll} Radical & Experiment & RO-SCAN & USCAN & RO-PBE0 & UPBE0 \\ CH$_3$ & 281.4 & 281.9 & 281.8 & 281.3 & 281.2 \\ CH$_3$CH$_2$ & 281.7 & 282.3 & 282.2 & 281.7 & 281.6 \\ (CH$_3$)$_2$CH & 282.2 & 282.6 & 282.5 & 282.1 & 282.0 \\ (CH$_3$)$_3$C & 282.6 & 282.9 & 282.8 & 282.4 & 282.3 \\ Allyl & 282.0 & 282.5 & 282.5 & 281.9 & 281.9 \\ CO$^+$ & 282.0 & 282.5 & 282.3 & 281.9 & 281.8 \\ CH$_2$Br & 282.6 & 283.0 & 282.9 & 282.5 & 282.4 \\ CH$_2$Cl & 282.8 & 283.3 & 283.2 & 282.7 & 282.6 \\ NH$_2$ & 394.3 & 394.8 & 394.7 & 394.2 & 394.1 \\ N$_2^+$ & 394.3 & 394.5 & 394.3 & 393.8 & 393.6 \\ NH$_3^+$ & 395.2 & 395.4 & 395.3 & 394.7 & 394.6 \\ NO$_2$ & 401.0 & 401.4 & 401.2 & 400.7 & 400.6 \\ OH & 525.8 & 526.2 & 526.0 & 525.4 & 525.3 \\ HO$_2$ & 528.5 & 528.6 & 528.5 & 527.7 & 527.6 \\ NO$_2$ & 528.6 & 528.7 & 528.5 & 527.9 & 527.8 \\ O$_2$ & 530.3 & 530.7 & 530.5 & 529.8 & 529.7 \\ CO$^+$ & 530.8 & 531.0 & 530.8 & 530.2 & 530.1 \\ & & & & & \\ & RMSE & 0.4 & 0.3 & 0.4 & 0.5 \\ & ME & 0.4 & 0.2 & -0.3 & -0.4 \\ & MAX & 0.6 & 0.5 & 0.8 & 0.9 \end{tabular} \caption{Comparison of $\Delta$SCF/aug-cc-pCVTZ core to SOMO excitation energies (in eV) for restricted open-shell (RO) and unrestricted (U) orbitals with SCAN and PBE0. Results for other functionals are provided in the supporting information.} \label{tab:uvsro} \end{table} We also consider whether there is any benefit to using restricted open-shell orbitals over unrestricted orbitals. Table \ref{tab:uvsro} indicates that use of unrestricted orbitals systematically lowers excitation energies by $\sim 0.1$ eV relative to restricted open-shell results. This consequently indicates that use of RO orbitals instead of U would degrade the performance of most of the studied functionals (as they systematically overestimate with U orbitals) and improve the behavior of PBE and PBE0. Indeed, Table \ref{tab:uvsro} shows that both RO-PBE0 and RO-SCAN have the same RMSE of 0.4 eV. This potentially argues that RO-PBE0 is perhaps preferable to USCAN, as the small relativistic corrections further improve the RO-PBE0 RMSE to 0.2 eV (while degrading USCAN's RMSE to 0.4 eV, as shown in the Supporting Information). However, we believe that SCAN with unrestricted orbitals is still the preferred route, even aside from greater asymptotic computational efficiency. Open-shell systems often arise in transient absorption experiments starting from closed-shell species, and so it is important to use an approach that is effective at predicting the spectra for both types of systems.
PBE0 is perceptibly inferior to SCAN when it comes to closed-shell systems\cite{hait2020highly} (irrespective of inclusion of relativistic effects), and the two are fairly close in predictive ability for open-shell systems, making SCAN with unrestricted orbitals the preferred choice. We also note that a comparison between aug-cc-pCVTZ and aug-cc-pCVQZ results shows that a small part ($\sim 0.1$ eV) of the systematic overestimation predicted by SCAN for Table \ref{tab:somodata} values stems from basis set incompleteness (as shown by a comparison in the Supporting Information), similar to the behavior of closed-shell species\cite{hait2020highly}. \begin{figure}[htb!] \includegraphics[width=0.5\textwidth]{nh2p.pdf} \caption{Comparison of experimental N K-edge spectrum of NH$_2^+$ (obtained from Ref \onlinecite{bari2019inner}) with those computed with SCAN/aug-cc-pCVTZ. A Voigt profile with a Gaussian standard deviation of 0.2 eV and Lorentzian $\gamma=0.121$ eV was utilized for broadening the computed spectra. Bars are supplied to denote the location of the predicted excitation energies. The singlet and triplet spectra have been normalized by the same factor for a fair comparison.} \label{fig:nh2p} \end{figure} \subsubsection{The case of NH$_2^+$} \begin{table}[htb!] \begin{tabular}{lllll} Method & & Triplet & & Singlet (ROKS) \\ & Low & High & Average & \\ Experiment & & & 396.4\cite{bari2019inner} & \\ BLYP & 395.7 & 396.0 & 395.8 & 396.1 \\ PBE & 394.9 & 395.2 & 395.0 & 395.3 \\ TPSS & 396.0 & 396.3 & 396.2 & 396.2 \\ SCAN & 395.7 & 396.0 & 395.9 & 396.2 \\ B3LYP & 395.7 & 396.0 & 395.9 & 396.1 \\ PBE0 & 395.3 & 395.6 & 395.4 & 395.7 \\ cam-B3LYP & 395.7 & 396.0 & 395.9 & 396.1 \\ $\omega$B97X-D3 & 396.1 & 396.4 & 396.2 & 396.2 \\ $\omega$B97X-V & 396.3 & 396.6 & 396.4 & 396.3 \end{tabular} \caption{Comparison of $\Delta$SCF/aug-cc-pCVTZ core to SOMO excitation energies (in eV) for $^3B_1$ NH$_2^+$ with various functionals. The lowest core excitation energy for the $^1A_1$ singlet is also reported.} \label{tab:nh2p} \end{table} The spectrum of NH$_2^+$ has been experimentally characterized\cite{bari2019inner}, but was not considered in Table \ref{tab:somodata} as the two possible excitations to singly occupied levels are unresolved experimentally (assuming the radical cation is in the $^3B_1$ ground state). We used $\Delta$SCF to compute the two transitions separately, and report them in Table \ref{tab:nh2p}. These transitions have nearly the same oscillator strength and thus their average should roughly correspond to the experimental peak. The ROKS results for the lowest lying singlet $^1A_1$ excited state are also reported, in case this state contributes to the experimental spectrum as well. Fig \ref{fig:nh2p} presents the representative case of the SCAN functional, with other methods yielding similar figures. The computed average triplet excitation energies in Table \ref{tab:nh2p} agree fairly well with experiment, especially for good performers like SCAN, B3LYP or $\omega$B97X-D3. However, the values are somewhat red-shifted, in stark contrast to the general behavior seen in Table \ref{tab:somodata}. One possible explanation for this would be a blue-shifting of the experimental spectrum due to the presence of singlet NH$_2^+$, since this state absorbs fairly strongly (roughly twice the oscillator strength of the individual triplet transitions) at slightly higher energies than the triplet, pushing the overall center of the band to higher energies (as hinted at by Fig \ref{fig:nh2p}).
However, the computed triplet excitation average and the experimental maximum are not too far from each other (0.5 eV for SCAN), so it is not entirely impossible for DFT error to be the sole reason behind the discrepancies. $\omega$B97X-V for instance gives quite good agreement with experiment, without needing to invoke the singlet state. \subsection{Spectrum of the Allyl Radical} \begin{figure}[htb!] \begin{minipage}{0.48\textwidth} \centering \includegraphics[width=\linewidth]{allylrecouple.pdf} \subcaption{Recoupled configurations/SCAN} \label{fig:allylrec} \end{minipage} \begin{minipage}{0.48\textwidth} \centering \includegraphics[width=\linewidth]{allylmixed.pdf} \subcaption{Mixed Configurations/SCAN} \label{fig:allylmix} \end{minipage} \begin{minipage}{0.48\textwidth} \centering \includegraphics[width=\linewidth]{allyltddft.pdf} \subcaption{TDDFT/SRC2-R1} \label{fig:allyltddft} \end{minipage} \begin{minipage}{0.48\textwidth} \centering \includegraphics[width=\linewidth]{allylccsd.pdf} \subcaption{fc-CVS-EOM-CCSD\cite{vidal2019new}} \label{fig:allylccsd} \end{minipage} \caption{Comparison of experimental C K-edge spectrum of the allyl radical (obtained from Ref \onlinecite{alagia2013soft}) with those computed with DFT/aug-cc-pCVTZ and fc-CVS-EOM-CCSD/aug-cc-pCVTZ. The SRC2-R1 functional was employed for the TDDFT spectrum, while SCAN was utilized for both the recoupled and mixed configuration approaches. A Voigt profile with a Gaussian standard deviation of 0.1 eV and Lorentzian $\gamma=0.121$ eV was utilized for broadening the computed spectra. Bars are supplied to denote the location of the predicted excitation energies.} \label{fig:allyl} \end{figure} Having explored the utility of $\Delta$SCF in predicting excitation energies to the SOMO, we next seek to investigate the utility of the theory described in Secs \ref{multi} and \ref{tmu} at predicting the full core-excitation spectrum. The recoupling approach described therein is expected to be most effective for excitations to unoccupied valence orbitals, as then all three unpaired spins (in the core, SOMO and valence excited levels) will be interacting strongly. The scarcity of experimental spectra to compare against is again a problem, and restricts us to only a few data points. Fortunately, the allyl radical has an experimentally characterized spectrum\cite{alagia2013soft} that is dominated by excitations to the unoccupied $\pi^*$ LUMO orbital, making it an excellent example for determining the utility of our recoupling approach, relative to simply using mixed configurations alone. Fig \ref{fig:allyl} compares the performance of the orbital optimized methods in reproducing the C K-edge spectrum of the allyl radical. The performance of fc-CVS-EOM-CCSD and TDDFT with the specialized short-range corrected SRC2-R1\cite{besley2009time} functional is also considered. All three DFT methods are reasonable at predicting the lowest energy allowed excitation (from the terminal C atoms to the SOMO, the corresponding transition from the central C atom being symmetry forbidden), though all systematically overestimate by approximately 0.5 eV, resulting in the computed peak aligning with the vibrational fine structure of the experimental band. 
This is potentially indicative of some multireference character of this excited state, though it is difficult to draw firm conclusions from density functional data alone (especially since it is possible to get better agreement via a functional that systematically underestimates 1s$\to$SOMO excitation energies, like PBE0). It is however worth noting that fc-CVS-EOM-CCSD is spot on for this excitation, without any need for empirical translation of the spectrum (as can be seen from Fig \ref{fig:allylccsd}). Fig \ref{fig:allyltddft} lays bare the failure of TDDFT at predicting excitations to the LUMO, as the peak positions are completely off. This is not a peculiarity of the SRC2-R1 functional but rather a failure of the TDDFT family of methods, as translated TDDFT spectra from other functionals yield a similarly poor picture (as shown in the Supporting Information). Fig \ref{fig:allylccsd} also shows that fc-CVS-EOM-CCSD is unable to yield a qualitatively better spectrum than TDDFT, further highlighting the inadequacies of linear-response methods for this system. It is somewhat interesting that the inclusion of double excitations in fc-CVS-EOM-CCSD did not lead to any significant improvement over TDDFT (which is restricted to single excitations alone). The qualitative failure of both linear-response methods is likely a consequence of both spin-contamination and lack of orbital relaxation. Explicit inclusion of triple excitations should ameliorate both issues but the significant computational expense of full EOM-CCSDT would dramatically constrain practical use. The SCAN based orbital optimized approaches fare better, with both spin-contaminated mixed determinants and the recoupling approach yielding roughly qualitatively correct behavior. However, Fig \ref{fig:allylmix} shows that the mixed determinant approach fails to accurately predict the energy of the higher energy central C to LUMO transition, underestimating it by an eV. This substantially damages the quality of the predicted spectrum, by making this peak appear in an area where none are present experimentally. \begin{table}[htb!] \begin{tabular}{lllllll} Bright Transitions & Experiment & MCSCF & Recoupled & Mixed & TDDFT &EOM-CCSD\\ & & & SCAN & SCAN & SRC2-R1&\\ C$_T\to$SOMO & 282.0 & 281.9 & 282.5 & 282.5 & 282.5 & 282.0\\ C$_C\to$LUMO & 285.3 & 285.7 & 285.2 & 285.1 & 284.8 & 284.4\\ C$_T\to$LUMO & 285.7 & 285.9 & 285.8 & 285.7 & 287.0 & 287.3\\ C$_C\to$LUMO & 287.5 & 288.3 & 287.5 & 286.5 & 286.8& 286.9 \end{tabular} \caption{Comparison of experimentally observed excitation energies (in eV) in the allyl core absorption spectrum with theoretical methods. The experimental values and MCSCF numbers were obtained from Ref \onlinecite{alagia2013soft}. C$_T$ stands for terminal carbon, while C$_C$ is central carbon.} \label{tab:allyl} \end{table} The recoupling approach shifts this peak to the appropriate location and predicts a spectrum in excellent agreement with experiment (as can be seen from Fig \ref{fig:allylrec}). Indeed, Table \ref{tab:allyl} shows that the peaks predicted by recoupled SCAN agree better with experiment than the MCSCF calculations reported in Ref \onlinecite{alagia2013soft} (though not too much should be inferred from this single data point). This good performance is not unique to SCAN alone, as several other functionals yield similar spectra in both the recoupled and mixed regimes (as shown in the Supporting Information).
Specifically, we find that recoupled cam-B3LYP, PBE0 and TPSS give good predictions for the 1s$\to$ LUMO portion of the spectrum, while BLYP and PBE yield rather poor performance even after recoupling. SCAN and cam-B3LYP appear to give the best performance, while some of the higher energy peaks with PBE0 and TPSS are somewhat redshifted with respect to the experimental spectrum. This supports our decision of selecting SCAN as the principal functional for the manuscript, despite BLYP and TPSS having the same computational scaling and slightly lower RMSE for excitations to SOMO (as shown in Table \ref{tab:somodata}). The poor qualitative performance by BLYP and PBE also serves as a potential warning against attempting to use GGAs for prediction of core spectra, despite BLYP's excellent behavior for excitations to SOMO. Ultimately, the recoupling scheme cannot correct for deficient physics in the mixed configuration energies and a poor choice of functional could lead to poor results. Nonetheless, it is encouraging to see that all `advanced' functionals (Rungs 3 and 4) tested yield a reasonable spectrum after recoupling. Overall, this example seems to suggest that orbital optimized approaches have an edge over TDDFT/EOM-CCSD when it comes to predicting core-excitation spectra of radicals. Furthermore, recoupling spin-contaminated mixed configurations to yield approximate doublets appears to not degrade performance and leads to some improvements. The overall accuracy of recoupled SCAN at predicting the spectrum of allyl certainly appears to hint at the efficacy of using this approach for XAS studies of large carbon based polyradical systems, such as ones that might arise in soot formation during combustion\cite{johansson2018resonance}. \subsection{O K-edge spectrum of CO$^+$} \begin{figure}[htb!] \begin{minipage}{0.48\textwidth} \centering \includegraphics[width=\linewidth]{cop_recoupled.pdf} \subcaption{Recoupled configurations/SCAN} \label{fig:coprec} \end{minipage} \begin{minipage}{0.48\textwidth} \centering \includegraphics[width=\linewidth]{cop_mixed.pdf} \subcaption{Mixed Configurations/SCAN} \label{fig:copmix} \end{minipage} \begin{minipage}{0.48\textwidth} \centering \includegraphics[width=\linewidth]{cop_src.pdf} \subcaption{TDDFT/SRC2-R1} \label{fig:coptddft} \end{minipage} \begin{minipage}{0.48\textwidth} \centering \includegraphics[width=\linewidth]{cop_ccsd.pdf} \subcaption{fc-CVS-EOM-CCSD\cite{vidal2019new}} \label{fig:copccsd} \end{minipage} \caption{Comparison of experimental O K-edge spectrum of CO$^+$ (obtained from Ref \onlinecite{couto2020carbon}) with those computed with DFT/aug-cc-pCVTZ and translated fc-CVS-EOM-CCSD/aug-cc-pCVTZ. A Voigt profile with a Gaussian standard deviation of 0.1 eV and Lorentzian $\gamma=0.121$ eV was utilized for broadening the computed spectra. Bars are supplied to denote the location of the predicted excitation energies.} \label{fig:cop} \end{figure} We next consider the rather challenging case of the CO$^+$ radical cation, whose experimental spectrum has been characterized very recently\cite{couto2020carbon}. We focus on the O K-edge as the two doublet states corresponding to the 1s$\to$LUMO excitation are experimentally well resolved, unlike the C K-edge (where vibrational fine structure of the lower energy excitation overlaps with the higher energy one). Fig \ref{fig:cop} presents the orbital optimized SCAN spectrum (both recoupled and mixed), along with those from translated TDDFT and fc-CVS-EOM-CCSD. 
There are three peaks in all cases: the 1s$\to$ SOMO excitation (lowest in energy) and the two doublets arising from 1s$\to$LUMO excitations. We observe that the linear-response approaches yield a fairly poor picture. Both TD-SRC2-R1 (Fig \ref{fig:coptddft}) and EOM-CCSD (Fig \ref{fig:copccsd}) need to be redshifted by $\sim$ 2 eV to align the 1s $\to$ SOMO peak with experiment (vs the orbital optimized DFT spectra, which need no such translation). The translated spectra are nonetheless greatly compressed relative to experiment, and the relative intensities of the two 1s$\to$ LUMO peaks are incorrect. This is not merely a consequence of spin-contamination, as Fig \ref{fig:copmix} shows that SCAN using mixed configurations does better at reproducing the overall shape of the spectrum, despite having quartet contamination as well. Lack of orbital relaxation thus appears to be the critical factor that compromises the performance of TDDFT and EOM-CCSD for this system. Fig \ref{fig:copmix} however also shows that SCAN with mixed configurations has too small a spacing between the two 1s$\to$LUMO doublets (the two highest energy peaks). Fig \ref{fig:coprec} demonstrates that recoupling fixes this problem (and correctly reduces the intensity of the highest energy peak), yielding a spectrum that is in decent agreement with experiment. The spacing between the two highest energy peaks remains somewhat small (2.8 eV) vs experiment ($\sim$3.4 eV, though the unresolved broadness of the experimental second peak makes this hard to pinpoint). Other DFT functionals similarly underestimate this splitting (to varying extents), while reproducing the general shape of the spectrum (as can be seen from the Supporting Information). Nonetheless, it is undeniable that the spectrum quality is greatly improved by recoupling. We also note that the NOCIS method\cite{oosterbaan2018non,oosterbaan2019non,oosterbaan2020generalized} (which performs linear-response atop orbitals relaxed for the core-ionized state and is spin-pure in a manner analogous to XCIS\cite{maurice1996nature}) yields spectra in excellent agreement with experiment (as shown in the Supporting Information), further demonstrating the utility of orbital relaxation and configuration recoupling, in an unambiguous, wave function based manner. At any rate, the qualitative failure of TDDFT and EOM-CCSD seems to argue for the use of methods with explicit orbital relaxation and configuration recoupling (like the scheme presented here or NOCIS) for the computation of core-level spectra of open-shell systems, irrespective of whether the computed spectra are translated or not. \subsection{N K-edge spectrum of NO$_2$} \begin{figure}[htb!] \begin{minipage}{0.48\textwidth} \centering \includegraphics[width=\linewidth]{no2valence.pdf} \subcaption{Valence excitations.} \label{fig:nkval} \end{minipage} \begin{minipage}{0.48\textwidth} \centering \includegraphics[width=\linewidth]{no2rydberg.pdf} \subcaption{Rydberg excitations.} \label{fig:nkryd} \end{minipage} \caption{Comparison of the experimental N K-edge spectrum of NO$_2$ (obtained from Ref \onlinecite{zhang1990inner}) with those computed with DFT/d-aug-cc-pCVTZ\cite{dunning1989gaussian,kendall1992electron,woon1995gaussian,woon1994gaussian} for both the valence (left) and Rydberg (right) regimes. The actual intensities of the Rydberg states are roughly an order of magnitude lower than those of the valence states, but have been magnified for easier comparison. 
The SRC2-R1 functional was employed for the TDDFT spectrum, while SCAN was utilized for both the recoupled and mixed configuration approaches. A Voigt profile with a Gaussian standard deviation of 0.1 eV and Lorentzian $\gamma=0.121$ eV was utilized for broadening the computed spectra.} \label{fig:no2nkedge} \end{figure} NO$_2$ is another rare open-shell system with a known experimental high resolution core-level spectrum\cite{zhang1990inner}, by virtue of being quite stable for a radical. It is isoelectronic with allyl, although the SOMO is not a $\pi^*$ orbital (but is rather a $\sigma$ orbital mostly localized on N). The spectrum is nonetheless dominated by the transitions to the SOMO and the $\pi^*$ LUMO levels. However, some Rydberg states have also been characterized, indicating that it could serve as an example to demonstrate whether our approach is balanced at predicting both valence and Rydberg excitations simultaneously. Fig \ref{fig:no2nkedge} compares the experimental spectrum at the N K-edge with those predicted via DFT (employing the doubly augmented d-aug-cc-pCVTZ basis to properly converge Rydberg states). The valence regime spectrum in Fig \ref{fig:nkval} shows that all methods get the qualitative form right, though the 1s to $\pi^*$ LUMO transition is somewhat redshifted by all methods. The success of TDDFT here stands in contrast to the failure observed for the valence regime of the allyl radical, although the different symmetry of the SOMO ($\sigma$ vs $\pi^*$) may contribute to this. Recoupled SCAN performs better than mixed configuration SCAN for the second excitation by removing the quartet contribution to the energy. This blueshifts the 402.3 eV excitation energy predicted by the mixed configuration approach to 402.9 eV, which is much closer to the experimentally observed peak at 403.3 eV. This disagreement is not particularly small (and is in the opposite direction to the systematic overestimation exhibited by SCAN for excitations to the SOMO), but the recoupled DFT method gives best agreement with experiment. The Rydberg regime depicted in Fig \ref{fig:nkryd} however shows somewhat surprising behavior. It was tempting to believe that the weak coupling between the excited electron and the other unpaired electrons would lead to good performance by all methods. However, TDDFT absolutely fails to reproduce the spectrum in this regime, significantly blueshifting the experimental peak at 408.9 eV to 410.0 eV. On the other hand, the mixed configurations are quartet contaminated, and are thus slightly redshifted from their optimal location. Our recoupling protocol eliminates this problem, giving excellent agreement with experiment. It is also worth noting that the recoupled approach appears to predict the shape of the curve better than individual mixed configurations, indicating that the protocol described in Sec \ref{tmu} was reasonably effective. This is however ultimately only one data point, and comparison against more high resolution experimental spectra would be useful in validating our observation. We therefore hope that spectra of more open-shell species in the Rydberg regime will be available in the near future. We note that high energy spectra for N$_2^+$ and CO$^+$ have been very recently reported\cite{lindblad2020x,couto2020carbon}, but the Rydberg region appears to also contain a large number of doubly excited states with significant multiconfigurational character (involving more than three orbitals), that DFT based methods are unlikely to successfully model. 
This is less likely to be the case for neutral species. \section{Recommendations for successful calculations}\label{sec:recs} The proposed protocol for recoupling mixed configurations appears to yield improved agreement with experiment, relative to simply using the two individual mixed configurations that correspond to single excitations. Nonetheless, it entails individual optimization of four configurations per excitation ($\ket{Q,M_{1,2,3}}$) to get two doublet state energies. We therefore recommend the following protocol for ensuring maximum consistency between these configurations and minimizing computational cost. \begin{enumerate} \item Optimize unrestricted KS ground state orbitals. \item Use these orbitals as initial guesses to optimize RO orbitals for the ground state. \item Using the RO ground state orbitals as the initial guess, optimize the RO orbitals for the core-ionized state via SGM. This decouples the relaxation of the core-hole from the rest of the computations. \item Using the RO core-ionized orbitals as the initial guess, optimize RO orbitals corresponding to the desired quartet state with SGM. The core-ionized orbitals can thus be computed only once, and repeatedly utilized for multiple excitations. Furthermore, the unoccupied orbitals for the core-ionized state are much more representative of the optimized orbitals for the excited electron than canonical ground state orbitals. \item Using the RO core-excited quartet orbitals as initial guesses, find the unrestricted orbitals for the quartet $\ket{Q}$ and mixed configurations $\ket{M_{1,2,3}}$ with SGM. \end{enumerate} A schematic orchestration of these steps is sketched at the end of this article. Steps 1-3 also apply for excitations to the SOMO level, followed by use of the RO core-ionized orbitals to initialize the excited state optimization for the core to SOMO excited configuration. They also apply for computation of core-excitations in closed-shell species via ROKS. We believe that the RO energies themselves are not particularly useful for radicals, but the RO orbitals act as useful intermediates to prevent the alpha and beta spatial orbitals from differing prior to the last optimization step (step 5). The RO orbital space is in fact much more tightly constrained, and SGM is faster at those optimizations in practice. Difficult convergence cases in general could also be addressed by converging to the same state with a different (ideally, cheaper) functional and using the resulting orbitals as initial guesses. Three additional points regarding orbital optimized core-excitation calculations in general (for both closed and open-shell systems) are worth noting as well. \begin{enumerate} \item Use of a localized core-hole is absolutely critical for systems where there are symmetry equivalent atoms (like the terminal carbons of allyl). Delocalized core-holes lead to substantial underestimation of the energy, driven by delocalization error\cite{perdew1982density,hait2018delocalization}, as shown in Ref \onlinecite{hait2020highly}. Localization of core orbitals can be achieved via explicit localization, or via weak electric fields that break symmetry. The mixed basis strategy described in the next point also leads to symmetry breaking that localizes the core orbitals. \item It is absolutely essential to use at least a triple zeta level basis with split core functions (like cc-pCVTZ) at the local site of the core-excitation. The core-hole would otherwise not be able to adequately relax, and energies would be systematically overestimated\cite{hait2020highly}. 
However, a smaller basis can be used for all other atoms, with cc-pVDZ being adequate in our experience\cite{hait2020highly} (though even smaller bases could potentially be fine). This mixed basis strategy helps bring down the computational cost considerably as well, as the overall computation cost is comparable to a double zeta basis DFT ground state calculation per iteration, though excited state orbital optimization does often require many more iterations than ground state computations. \item Many core-excited states possess significant Rydberg character. A good description of these states necessitates the presence of diffuse functions in the basis, and even double augmentation is sometimes necessary (such as the NO$_2$ spectrum presented in Fig \ref{fig:no2nkedge}, where singly augmented aug-cc-pCVTZ blueshifts the Rydberg peaks in Fig \ref{fig:nkryd} by 0.2 eV). This is easily the most onerous basis set requirement for such calculations but is functionally unavoidable for any electronic structure method seeking a correct description of Rydberg states. \end{enumerate} \section{Conclusion} We have investigated orbital optimized density functional approaches to studying core-excitation spectra of open-shell systems, by employing the SGM approach for averting variational collapse. Lack of gas-phase experimental data proves to be a hindrance for assessing the performance of these methods, but existing data shows encouraging behavior. We firstly find that several density functionals like SCAN, TPSS, BLYP, B3LYP, cam-B3LYP and $\omega$B97X-D3 can be employed to predict excitation energies corresponding to 1s to SOMO transitions in radicals, to RMSE at or below 0.3 eV. The 1s$\to$ SOMO transitions are however not very challenging excitations as they do not result in a change in the total number of unpaired electrons and thus can be well approximated by single Slater determinants. Higher excitations entail breaking of electron pairs and thus are natively multiconfigurational. These states therefore cannot be described by single determinants, although somewhat reliable results can at times be obtained from symmetry broken mixed determinants in the limit of weak coupling between unpaired spins (analogous to how unrestricted HF/DFT being effective for single bond dissociations in closed-shell species). For more general accuracy, we present a CI inspired approach for self-consistently recoupling these single determinant mixed configurations with unpaired spins to yield approximately spin-pure results corresponding to multiconfigurational doublet states. The performance of this approach is compared against that of using unrecoupled mixed determinants alone and TDDFT/fc-CVS-EOM-CCSD for the core-level spectra of the allyl radical and CO$^+$ at the O K-edge. The N K-edge spectrum for NO$_2$ is also studied with both orbital optimized DFT and TDDFT. We find that the recoupling scheme leads to no degradation of performance and in fact consistently improves upon results obtained by merely using single mixed determinants (significantly so for the O K-edge of CO$^+$). It is nonetheless worth appreciating that unrecoupled determinants often yield fairly reasonable answers by themselves, especially relative to TDDFT/EOM-CCSD for the allyl radical and the O K-edge of CO$^+$. Our work therefore shows promise in using orbital optimized DFT approaches for predicting core-level spectra of radicals, where high accuracy can be obtained even from local functionals like SCAN, at low computational cost. 
Available evidence also appears to argue for recoupling mixed configurations, although this is roughly computationally twice as expensive (as four configurations need to be optimized as opposed to only two). The O K-edge of CO$^+$ also seems to suggest that our recoupling scheme somewhat underestimates doublet-doublet splitting in the strong coupling limit. More experimental spectra for open-shell systems (involving transitions to unoccupied valence orbitals) would however be immensely useful in fully characterizing the limitations of the recoupling approach. We will consequently continue to attempt to validate this approach via comparison to experiment as new data arises. In future, we will also seek to develop approaches that optimize a single set of unrestricted orbitals for recoupling mixed configurations vs separately optimizing all four relevant states. This should reduce the computational cost of such calculations substantially, and enhance their utility. It would also be useful to generalize the recoupling approach to higher spin states like triplets, where there are more spins to recouple and a correspondingly larger number of coupling constants. Work along these directions is presently in progress. \section{Computational Methods} All calculations were performed with the Q-Chem 5.3 \cite{QCHEM4} package. Local exchange-correlation integrals were calculated over a radial grid with 99 points and an angular Lebedev grid with 590 points. Experimental geometries (from the NIST database\cite{johnson2015nist}) were used whenever possible, with MP2\cite{cremer2011moller}/cc-pVTZ\cite{dunning1989gaussian} optimized geometries being employed in their absence. The plots labeled `mixed' only used the two mixed configurations corresponding to single excitations from the ground state, as the third configuration is technically a double excitation that would not usually be considered due to formally zero (and in practice, typically small) oscillator strength. All TDDFT calculations employed the Tamm-Dancoff Approximation\cite{dreuw2005single,tamm1991relativistic,dancoff1950non,hirata1999time}. \section{Acknowledgements} D.H., K.J.O. and M.H.-G. were supported by Director, Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231, through the Atomic, Molecular, and Optical Sciences Program of the Chemical Sciences Division of Lawrence Berkeley National Laboratory. E.A.H., Z.Y. and S.R.L were funded by the Gas Phase Chemical Physics program, which operates under the same DOE-OS-BES contract number. We also thank Marta L. Vidal for helpful discussions regarding EOM-CCSD calculations. \section{Supporting Information} \noindent Additional spectra for the allyl radical and CO$^+$ (PDF) \newline Raw data (XLXS) \newline Molecular geometries (ZIP) \section{Data Availability} The data that supports the findings of this study are available within the article and its supplementary material.
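As an illustration of the protocol described in Sec \ref{sec:recs}, the following minimal sketch (in Python) orchestrates the five optimization steps for a single core excitation of a radical. The driver \texttt{run\_scf} below is a hypothetical stand-in for whatever electronic structure interface is available; it is not an actual Q-Chem input or API, and only the ordering of the jobs and the reuse of converged orbitals as initial guesses is meant to be illustrative.

\begin{verbatim}
from typing import Dict, List, Optional


def run_scf(label: str, guess: Optional[str] = None, **settings) -> Dict:
    """Hypothetical driver for one orbital-optimized SCF or SGM calculation."""
    print(f"running {label}: guess={guess}, settings={settings}")
    return {"label": label, "orbitals": label}  # stand-in for converged orbitals


# Step 1: unrestricted KS ground state.
gs_uks = run_scf("ground_UKS", unrestricted=True)
# Step 2: restricted open-shell (RO) ground state, seeded by step 1.
gs_ro = run_scf("ground_RO", guess=gs_uks["orbitals"], restricted_open=True)
# Step 3: RO core-ionized state via SGM (computed once, reused for all excitations).
core_ion_ro = run_scf("core_ionized_RO", guess=gs_ro["orbitals"], method="SGM")
# Step 4: RO quartet orbitals for the target core-excited configuration.
quartet_ro = run_scf("quartet_RO", guess=core_ion_ro["orbitals"], method="SGM")
# Step 5: unrestricted quartet and the three mixed configurations.
configs: List[Dict] = [
    run_scf(name, guess=quartet_ro["orbitals"], method="SGM", unrestricted=True)
    for name in ("Q", "M1", "M2", "M3")
]
\end{verbatim}

The recoupled doublet energies would then be obtained from these four configuration energies as described in Sec \ref{tmu}.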
\begin{document} \begin{titlepage} \rule{0ex}{0ex} \vfil \begin{center} {\bf \Large Thermodynamics of $T \bar T$, $J \bar T$, $T \bar J$ deformed \\ conformal field theories } \vfil Soumangsu Chakraborty$^1$, Akikazu Hashimoto$^2$ \vfil {}$^1$ Department of Theoretical Physics,\\ Tata Institute for Fundamental Research, Mumbai 400005, India {}$^2$ Department of Physics, University of Wisconsin, Madison, WI 53706, USA \vfil \end{center} \begin{abstract} We compute the Hagedorn temperature of $\mu T \bar T + \varepsilon_+ J \bar T + \varepsilon_-T \bar J$ deformed CFT using the universal kernel formula for the thermal partition function. We find a closed analytic expression for the free energy and the Hagedorn temperature as a function of $\mu$, $\varepsilon_+$, and $\varepsilon_-$ for the case of a compact scalar boson by taking the large volume limit. We also compute the Hagedorn temperature for the single trace deformed $AdS_3 \times S^1 \times T^3 \times S^3$ using holographic methods. We identify black hole configurations whose thermodynamics matches the functional dependence on $(\mu, \varepsilon_+, \varepsilon_-)$ of the double trace deformed compact scalars. \end{abstract} \vspace{0.5in} \end{titlepage} \section{Introduction} Recently, $\mu T \bar T$ and related deformations of quantum field theories in 1+1 dimensions have been a subject of great interest. Here, $T \bar T$ is a composite operator built in terms of the stress tensor, and $\mu$ is its coefficient whose dimension is length squared. Such a deformation is non-renormalizable. Despite that, it was demonstrated in \cite{Smirnov:2016lqw} that the spectrum of the deformed theory living on a cylinder can be determined if the spectrum of the undeformed theory is given. After the deformation, the system appears to exhibit features of non-locality. One clear indication of this is the Hagedorn spectrum in the UV. $T \bar T$ deformation therefore appears to be a way to extend the usual notion of quantum field theories in a controlled setting. A robust observable one can compute for this class of theories is the thermal partition function as a function of inverse temperature $\beta$ and the radius $R$. Since the spectrum is known, the partition function is determined unambiguously. One can also think of this observable as a vacuum amplitude for the Euclidean theory living on a torus whose periods are $\beta$ and $2 \pi R$. Through the parameters $\beta$ and $R$, one can probe the various scales of the system. 
To keep the discussion simple, it is convenient to restrict our attention to the case where the undeformed theory is a conformal field theory. Then, the deformation introduces a single scale $\mu$, which we can probe using $R$ and $\beta$. The torus partition function has many interesting properties and has been computed by many authors. First, a flow equation for the partition function was derived in \cite{Cardy:2018sdv}, and a solution to this flow equation was computed using the JT gravity formulation of $T \bar T$ deformation in \cite{Dubovsky:2018bmo}. In a parallel development, the authors of \cite{Aharony:2018bad,Datta:2018thy} highlighted the power of modular properties in computing the partition function. In a theory with a $U(1)$ global symmetry, with holomorphic and anti-holomorphic currents $J$ and $\bar{J}$ respectively, there are also deformations by $\varepsilon_+ J \bar T$ and $\varepsilon_- T \bar J$ which also turn out to be integrable \cite{Guica:2017lia,Chakraborty:2018vja}. The partition function also exhibits modular properties \cite{Aharony:2018ics}. These modular properties are suggestive of the connection between $T \bar T$ and related deformations and the world-sheet sigma model of string theory.\footnote{The connection between $T \bar T$ deformation and the Nambu-Goto action was first derived in \cite{Cavaglia:2016oda}. See also \cite{Bonelli:2018kik}.} In order to utilize this relation, the authors of \cite{Chakraborty:2019mdf} integrated the deformation of the world-sheet sigma model by an exactly marginal world-sheet operator whose space-time interpretation is the deformation by irrelevant operators whose coefficients are $(\mu,\varepsilon_+,\varepsilon_-)$. Strictly speaking, the system being considered was that of a $1+1$ dimensional CFT realized holographically as $AdS_3 \times {\cal M}$ in type IIB string theory, deformed by an integrable single trace $T \bar T$ deformation \cite{Giveon:2017nie}. However, the spectrum of states on the long strings sitting in the weakly coupled region can be interpreted as experiencing the double trace deformation $\mu T \bar T + \varepsilon_+ J \bar T + \varepsilon_- T \bar J$. In this way, and by invoking universality, the authors of \cite{Chakraborty:2019mdf} were able to compute the spectrum of states for the world-sheet sigma model whose target space is some generic ${\cal M} = S^1 \times {\cal M}'$. The resulting spectrum is in agreement with \cite{LeFloch:2019rut,Frolov:2019xzi}, who followed the approach of \cite{Smirnov:2016lqw,Guica:2017lia} more closely. More recently, a direct computation of the world-sheet torus amplitude for the sigma model on $T^2 \times S^1 \times {\cal M}'$, restricted to the sector where the world-sheet wraps the $T^2$ exactly once, was presented as a compact integral kernel (\ref{Zdef}) acting on the partition function of the undeformed theory in \cite{Hashimoto:2019wct}. This expression made the modular properties manifest. It also reproduced the spectrum obtained previously in \cite{Chakraborty:2019mdf,LeFloch:2019rut,Frolov:2019xzi}. In this article, we will explore the application of the kernel formula, derived in \cite{Hashimoto:2019wct}, to a single free compact boson CFT. More concretely, we will compute how the Hagedorn temperature depends on $\mu$, $\varepsilon_+$, and $\varepsilon_-$. This analysis enables us to identify points which are special in the $(\mu, \varepsilon_+, \varepsilon_-)$ parameter space. 
To facilitate the analysis, we found it convenient to analyze the thermodynamics in the large volume limit $R \rightarrow \infty$ keeping $\beta$, $\mu$, $\varepsilon_+$, and $\varepsilon_-$ fixed. Although the thermodynamic quantities do simplify dramatically, some critical information also gets lost in the limit. We will elaborate on the implication of this issue in the following sections. We will also compute the Hagedorn temperature of the holographically realized single trace deformed systems \cite{Giveon:2017nie,Chakraborty:2018vja,Chakraborty:2019mdf} and comment on their features. \section{Review of the kernel formula and universality} In this section, we will review the master formula for computing the partition function of generic $T \bar T$, $J \bar T$, $T \bar J$ deformed CFT derived in \cite{Hashimoto:2019wct}. It is stated compactly as \begin{equation} Z_{def}(\zeta, \bar \zeta, \lambda, \epsilon_+, \epsilon_-)= \int_{{\cal H}} d^2 \tau \int_{{\cal C}} d \chi d \bar \chi \ I(\zeta, \bar \zeta, h, \epsilon_+ \epsilon_-, \tau, \bar \tau, \chi, \bar \chi) Z_{inv}(\tau, \bar \tau, \chi, \bar \chi) \label{Zdef}~, \end{equation} with \begin{equation} I = {1 \over 4 \epsilon_+ \epsilon_- \tau_2^3} \exp\left[ -{\pi \chi \bar \chi \over2 h\epsilon_+ \epsilon_- \tau_2} - {\pi \chi \over 2 \epsilon_+ \tau_2} (\bar \tau - \bar \zeta) + {\pi \bar \chi \over 2 \epsilon_- \tau_2} (\tau - \zeta) \right]\ , \label{I}\end{equation} where we introduced dimensionless deformation parameters \begin{equation} \lambda = {\mu \over R^2} , \qquad \epsilon_+ = {\varepsilon_+ \over R}, \qquad \epsilon_- = {\varepsilon_- \over R}\ . \end{equation} The quantity $Z_{inv}$ is defined as \begin{equation} Z_{inv}(\chi, \bar \chi) = e^{\kappa\pi (\chi - \bar \chi)^2/2 \tau_2} Z_{cft}(\chi, \bar \chi) \label{Zinv}\end{equation} where $\kappa$ is the level of the $U(1)$ current which we set to 1 for the case of single compact scalar which we will consider as our main example, and \begin{equation} Z_{cft}(\tau,\bar \tau, \chi, \bar \chi) = \sum_i e^{- 2 \pi \tau_2 E_i + 2 \pi i \tau_1 p_i + 2 \pi i \chi p_{Li} - 2 \pi i \bar \chi p_{Ri} } \label{Boltzmann}\end{equation} is the Boltzmann sum for a CFT with charges weighted by chemical potential parameters $\chi$ and $\bar \chi$. These formulas are taken from (4.10), (4.11), (A.7), and (A.13) of \cite{Hashimoto:2019wct}. The formula (\ref{Zdef}) was derived by manipulating the sigma model for long strings in the single trace deformed theory of \cite{Chakraborty:2019mdf} where the $U(1)$ isometry was some compact $U(1)$. Since the effects of $T \bar T$, $J \bar T$, and $T \bar J$ deformation acts universally, we can regard (\ref{Zdef}), (\ref{I}), and (\ref{Zinv}) to also be universal. The fact that $Z_{def}$ is invariant under \begin{equation} \zeta \rightarrow {a \zeta + b \over c \zeta + d}, \qquad \lambda \rightarrow {\lambda \over |c \zeta + d|^2}, \qquad \epsilon_+ \rightarrow {\epsilon_+ \over c \bar \zeta + d}, \qquad \epsilon_- \rightarrow {\epsilon_- \over c \zeta + d} \label{zetamod} \end{equation} was also explained in \cite{Hashimoto:2019wct}. The parameters $h$ and $\lambda$ are related according to \begin{equation} h^{-1} = \lambda + a \epsilon_+ \epsilon_- \end{equation} where $a$ is some constant which contributes to $I$ as a factor of $e^{- \pi a \chi \bar \chi / 2 \tau_2}$ which can be absorbed into $Z_{inv}$ without affecting the modular properties. 
This freedom is a manifestation of changing the contact term between two $U(1)$ current operators explained in footnote 2 of \cite{Hashimoto:2019wct} and in \cite{Kutasov:1988xb}. The goal of this article is to apply the formulas (\ref{Zdef}), (\ref{I}), and (\ref{Zinv}) to extract some physical features. Specifically, we will compute the Hagedorn temperature and the free energy for the CFT of a free compact scalar field. These quantities simplify significantly in the infinite volume limit and can be presented in closed form. We will comment on physical features which can be inferred from these results. \section{Thermodynamics and the infinite volume limit} In this section, we will apply the universal formulas (\ref{Zdef}) and (\ref{I}) to analyze the thermodynamics of the deformed theory. The dimensionful parameters are the deformation parameters $(\mu, \varepsilon_+, \varepsilon_-)$, the inverse temperature $\beta$, and the radius $R$. We can probe the $(\mu, \varepsilon_+, \varepsilon_-)$ dependence of the free energy and related thermodynamic quantities as a function of the temperature. From this point of view, the finite size effect when $\beta \sim R$ is not interesting. We can therefore scale $R$ out of the problem by sending it to infinity, keeping the other dimensionful parameters finite. This will substantially simplify the expression for the free energy and related quantities.\footnote{Thermodynamics of $T \bar T$ deformed CFT at next to leading order in the large volume limit can be found in \cite{Barbon:2020amo}.} We also need to address the dependence on $\zeta_1$, which is related to the momentum along the spatial circle. It turns out that a convenient choice is to integrate $\zeta_1$ in the range $0 \le \zeta_1 \le 1$, which amounts to considering the thermodynamics in the zero momentum sector \begin{equation} Z^0_{def}(\zeta_2) = \int_0^1 d \zeta_1 \ Z_{def}(\zeta, \bar \zeta) \ . \end{equation} One can then show that \begin{equation} Z^0_{def}(\zeta_2, \lambda, \epsilon_+, \epsilon_-) = \int_{{\cal H}} d^2 \tau \int_{{\cal C}} d \chi d \bar \chi \ I(\zeta, \bar \zeta, \lambda, \epsilon_+ \epsilon_-, \tau, \bar \tau, \chi, \bar \chi) Z^0_{inv}(\tau_2, \chi, \bar \chi) ~,\label{Zno1}\end{equation} where \begin{equation} Z^0_{inv}(\tau_2, \chi, \bar \chi) = \int_0^1 d \tau_1 \ Z_{inv}(\tau, \bar \tau, \chi, \bar \chi) \ . \end{equation} Even though the right hand side of (\ref{Zno1}) appears to depend on $\zeta_1$, it only appears in the combination $\tau_1 - \zeta_1$, and this is the only dependence on $\tau_1$ and $\zeta_1$. Therefore the dependence on $\zeta_1$ in (\ref{Zno1}) disappears upon integrating over $\tau_1$. Let us now describe how one extracts the large $R$ limit. When $\epsilon_\pm=0$, \begin{equation} h = \lambda^{-1} = {R^2 \over \mu} \end{equation} and so the large $R$ limit corresponds to large $h$. We can isolate the large $h$ scaling behavior by introducing the rescaling \begin{eqnarray} \tau_i &=& {1 \over \sqrt{h}} t_i ~,\nonumber \\ \zeta_i & = & {1 \over \sqrt{h}} z_i ~,\\ \epsilon_\pm & = & {e_\pm \over \sqrt{h}} ~,\nonumber \end{eqnarray} so that \begin{equation} I \sim \exp\left[ -{\pi \sqrt{h} ((t_1 - z_1)^2 + (t_2 - z_2)^2) \over 2 t_2} - {\pi \sqrt{h}\over 2 (e_+ e_-) t_2}(\bar \chi +e_- (\bar t- \bar z))(\chi - e_+ (t - z))\right] \ . \end{equation} We see then that in the large $h$ limit, the kernel is localized so that one can use the saddle point approximation. 
If one is only interested in the leading large $h$ behavior, one can also ignore the factor outside the exponential in $I$. We also see that the saddle point is located at small values of $\tau_2$, of order $h^{-1/2}$. Working in the regime \begin{equation} h^{-1} \sim \epsilon_+^2 \sim \epsilon_-^2 \ll 1 \end{equation} is therefore a natural scaling limit for exploring the dependence of the Hagedorn temperature on these variables. \subsection{Hagedorn temperature for the pure $T \bar T$ deformation} Let us begin by analyzing the simple case where $\epsilon_+$ and $\epsilon_-$ are set to zero. Then, the $\chi$ integral is localized at $\chi=0$, so $Z_{inv}=Z_{cft}$. We can take a generic CFT with central charge $c$ as the undeformed theory. This is a trivial case where the answer is known from previous works, but it provides a concrete template which we can use as a guide in considering more complicated cases. The partition function in the small $\tau$ limit is given by Cardy's formula \begin{equation} Z_{cft}[\tau, \bar \tau] = \exp\left[ {i\pi c \over 12 \tau} - {i\pi c \over 12 \bar \tau}\right] \ . \end{equation} We need to integrate over $\tau_1$ in order to isolate the zero momentum sector. Fortunately, in our scaling limit, this integral is localized at $\tau_1=0$. So we can consider \begin{equation} Z^0_{cft} = Z^0_{inv} = \exp\left[{\pi c \over 6 \tau_2} \right] \end{equation} to be our starting point. The remaining $\tau_1$ integral is a trivial Gaussian integral. All that remains then is to find the saddle point for $\tau_2$ for the action \begin{equation} -{\pi h (\tau_2 - \zeta_2)^2 \over 2 \tau_2} + {\pi c \over 6 \tau_2} \ . \end{equation} This leads to \begin{equation} \log Z_{def}^0(\zeta_2) = {\pi c \over 3 \zeta_H^2} \left( \zeta_2 - \sqrt{\zeta_2^2 - \zeta_H^2}\right)~, \end{equation} where \begin{equation} \zeta_H = \sqrt{{c \over 3 h}} \end{equation} is the branch point in the $\zeta_2$ dependence of the partition function. This expression contains all the information about the thermodynamic potentials and the equation of state up to standard Maxwell relations. It is straightforward to Legendre transform the partition function to obtain the equation of state (thermal entropy) \begin{equation} S = 2 \pi \sqrt{ {c R {\cal E} \over 3} + \zeta_H^2 R^2 {\cal E}^2} ~,\end{equation} from which we read off the Hagedorn temperature \begin{equation} \beta_H = 2\pi R \zeta_H = 2 \pi \sqrt{c \over 3} \sqrt{R^2 \over h} = 2 \pi \sqrt{c \mu \over 3}~, \label{betaHtt} \end{equation} which is finite in the scaling limit. A useful comparison is to look at the energy of the lowest energy state, \begin{equation} R {\cal E}_0 = \sqrt{{1 \over 4 \lambda^2} + {R E_0 \over \lambda} + (R P_0)^2 } -{1 \over 2 \lambda},\end{equation} which for \begin{equation} R E_0 = -{c \over 12}, \qquad R P_0 = 0 \ , \end{equation} leads to a branching behavior at \begin{equation} {1 \over \lambda} = {c \over 3} \end{equation} or \begin{equation} 2 \pi R = 2 \pi \sqrt{c\mu \over 3} \ . \end{equation} This is a reflection of the fact that infinite volume at finite Euclidean time coordinate with periodicity $\beta$ is geometrically equivalent to finite volume of period $2 \pi R$ with infinitely extended time coordinate. The Hagedorn behavior from the first point of view corresponds to the appearance of a tachyon from the second point of view. This point was also emphasized in \cite{Aharony:2018bad} and will continue to hold for the more general cases we will be considering below. 
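As a quick check of the saddle point evaluation above, extremizing the exponent \begin{equation} f(\tau_2) = -{\pi h (\tau_2 - \zeta_2)^2 \over 2 \tau_2} + {\pi c \over 6 \tau_2} \end{equation} gives \begin{equation} f'(\tau_2) = -{\pi h (\tau_2^2 - \zeta_2^2) \over 2 \tau_2^2} - {\pi c \over 6 \tau_2^2} = 0 \qquad \Longrightarrow \qquad \tau_2^* = \sqrt{\zeta_2^2 - {c \over 3 h}} = \sqrt{\zeta_2^2 - \zeta_H^2} ~, \end{equation} and substituting back yields \begin{equation} f(\tau_2^*) = \pi h \left( \zeta_2 - \tau_2^* \right) = {\pi c \over 3 \zeta_H^2} \left( \zeta_2 - \sqrt{\zeta_2^2 - \zeta_H^2} \right) ~, \end{equation} in agreement with the expression for $\log Z_{def}^0(\zeta_2)$ quoted above. The saddle point moves off the real axis precisely when $\zeta_2 < \zeta_H$, which is the Hagedorn branch point. 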
\subsection{Compact Scalars \label{sec:kernel}} Now that the template for analyzing the thermodynamics of pure $T \bar T$ deformation is established, it is straightforward to extend the analysis to the case of compact scalars. Let us consider the case of a single compact scalar. What we need is the large $R$ limit of $Z_{inv}$. We know that $Z_{cft}$ for this case is \begin{equation} Z_{cft}(\tau,\bar \tau, \chi, \bar \chi)= |\eta(\tau)|^{-2} Z_{\rm zeromode} ~,\end{equation} where, using (A.9) of \cite{Hashimoto:2019wct}, we have \begin{equation} Z_{\rm zero mode} = \sum_{n,w} \exp \left[-2 \pi \tau_2 \left({n^2 \over r^2} + {w^2 r^2 \over 4} \right) + 2 \pi i \tau_1 n w+ 2 \pi i \chi \left({n \over r} + {w r \over 2}\right) - 2 \pi i \bar \chi \left({n \over r} - {w r \over 2}\right) \right] \ . \label{charges} \end{equation} Taking the infinite volume limit is equivalent to taking $\tau_2$ to be small. In this limit, the sum over $n$ and $w$ can be approximated by an integral, and we find \begin{equation} Z_{cft}(\tau, \bar \tau, \chi, \bar \chi) = {1 \over \tau_2^2} e^{{\pi \over \tau_2} \left( {1 \over 6} - \chi^2 - \bar \chi^2 \right)} \label{Zchi}\end{equation} and \begin{equation} Z_{inv}(\tau, \bar \tau, \chi, \bar \chi) =\exp\left[ {\pi (\chi - \bar \chi)^2 \over 2 \tau_2} \right] Z_{cft} (\tau, \bar \tau, \chi, \bar \chi) \ . \end{equation} All that remains to be done is to integrate out $\chi$, $\bar \chi$, $\tau_1$, and $\tau_2$ of (\ref{Zdef}) in saddle point approximation. This only requires some algebraic manipulations, and we find \begin{equation} \log Z_{def}^0(\zeta_2,\epsilon_+, \epsilon_-) = {\pi \over 3 \zeta_H^2} \left( \zeta_2 - \sqrt{\zeta_2^2 - \zeta_H^2}\right) ~,\label{ZzH}\end{equation} where \begin{equation} \zeta_H = \sqrt{{\lambda \over 3} + {1 \over 3} (\epsilon_+ - \epsilon_-)^2} \end{equation} and we are setting $h^{-1} = \lambda - 4 \epsilon_+ \epsilon_-$ following \cite{Chakraborty:2019mdf,Hashimoto:2019wct}. The functional form of the partition function (\ref{ZzH}) in terms of $\zeta_2$ and $\zeta_H$ is identical to the pure $T \bar T$ deformation. This was somewhat unexpected. The dependence on $\epsilon_+$ and $\epsilon_-$ only appear in the combination which enters in $\zeta_H$. The conclusion then is that the inverse Hagedorn temperature is given by \begin{equation} \beta_H = 2 \pi \sqrt{{1 \over 3} \left(\mu +(\varepsilon_+ - \varepsilon_-)^2\right)} \ . \label{result1}\end{equation} One can further verify the validity of (\ref{result1}) by exchanging the $x_1 \leftrightarrow(-x_2)$ flip and looking at the mass of the ground state. This exchange of $x_1$ and $(-x_2)$ is essentially the modular transformation $\zeta \rightarrow -1/\zeta$. Assuming that $\zeta=i$ is purely imaginary, They will transform as \begin{equation} \epsilon_+ \leftrightarrow i \epsilon_+~, \qquad \epsilon_- \leftrightarrow -i \epsilon_- \ . 
\label{exchange}\end{equation} The energy of the ground state can be read off from (4.14) of \cite{Hashimoto:2019wct} and is \begin{equation} ER = {1 \over 2 A} \left(-B - \sqrt{B^2 - 4 A C}\right), \label{ERdef}\end{equation} with \begin{eqnarray} A & = & - {1 \over R^2}(\mu - (\varepsilon_+ + \varepsilon_-)^2), \cr B & = & -1, \cr C & = & -{c \over 12}, \label{ABC} \end{eqnarray} which has a branch point when \begin{equation} 2 \pi R = 2 \pi \sqrt{ {c \over 3}(\mu - (\varepsilon_+ + \varepsilon_-)^2)} \end{equation} where, despite the appearance, the right hand side is independent of $R$, and agrees with (\ref{result1}) upon applying the transformation (\ref{exchange}). Since the Hagedorn behavior should correspond to the appearance of a tachyon under the $x_1 \leftrightarrow (- x_2)$ flip, this is a non-trivial check on the validity of (\ref{result1}). One might wonder which states are contributing to the Hagedorn density (\ref{result1}). One way to address this question is to explore the spectrum associated with individual $(n,w)$ sectors of (\ref{charges}) separately. The flow equation (4.13) of \cite{Hashimoto:2019wct} for zero momentum relates the undeformed and deformed energies by \begin{equation} RE = (1 + 2 \epsilon_+ q_L - 2 \epsilon_- q_R) R {\cal E} + (\lambda - (\epsilon_+ + \epsilon_-)^2) R^2 {\cal E}^2, \end{equation} where \begin{equation} q_L = {n \over r} + {w r \over 2} , \qquad q_R = {n \over r} - {w r \over 2} \ . \end{equation} The undeformed CFT in a fixed $(q_L,q_R)$ charge sector has entropy \begin{equation} S^{q_L,q_R} (E) = 2 \pi \sqrt{ {1 \over 3}R E - {1 \over 6} (q_L^2 + q_R^2)} \ . \end{equation} One can therefore write \begin{equation} S^{q_L,q_R}({\cal E}) = 2 \pi \sqrt{ {1 \over 3}\left( \rule{0ex}{2.5ex}(1 + 2 \epsilon_+ q_L - 2 \epsilon_- q_R) R {\cal E} + (\lambda - (\epsilon_+ + \epsilon_-)^2) R^2 {\cal E}^2 \right) - {1 \over 6} (q_L^2 + q_R^2)} \ .\label{entmc} \end{equation} At energy ${\cal E}$, the dominant contribution to the entropy comes from the sector (obtained by extremizing (\ref{entmc}) with respect to $q_L$ and $q_R$) \begin{equation} q_L = 2 \varepsilon_+ {\cal E}, \qquad q_R = - 2 \varepsilon_- {\cal E} \label{domcharge}\end{equation} for which \begin{equation} S^{q_L,q_R}({\cal E}) = 2 \pi \sqrt{{1 \over 3} \left( R{\cal E} + (\mu + (\varepsilon_+ - \varepsilon_-)^2) {\cal E}^2\right) } \label{SqLqR}\end{equation} and we can read off the Hagedorn density (\ref{result1}) from the coefficient of the ${\cal E}^2$ term inside the square root. It is interesting to note that for fixed $(q_L,q_R)$, the Hagedorn density read off from (\ref{entmc}) \begin{equation} \beta_H^{q_L,q_R} = 2 \pi \sqrt{{1 \over 3} (\mu - (\varepsilon_+ +\varepsilon_-)^2)} \label{betaA} \end{equation} is a different quantity from (\ref{result1}). One way to describe the situation is that the grand canonical and fixed charge ensembles lead to different Hagedorn densities. It is also notable that if the condition \begin{equation} - A = \lambda - (\epsilon_+ + \epsilon_-)^2 > 0 \label{Acond} \end{equation} is not satisfied, fixed charge ensembles are ill defined. The quantity $A$ appeared previously in (\ref{ABC}), and its sign indicates that the energies of infinitely many states become complex. This can also be seen from the fact that $ \beta_H^{q_L,q_R}$ is not real when (\ref{Acond}) is not satisfied. A closely related fact noted in \cite{Hashimoto:2019wct} is that the integral over $\tau_2$ in the kernel formula (\ref{Zdef}) is unbounded when (\ref{Acond}) is not satisfied. The grand canonical ensemble must also be ill defined since it is a sum over charge sectors. 
Even though (\ref{ZzH}) does not show any pathology when $(-A)$ flip sign, it and (\ref{result1}) should be considered valid only when (\ref{Acond}) is satisfied. \section{Thermodynamics of holographic single trace deformed system \label{sec:sugra}} In this section, we will examine the thermodynamics of single trace $T \bar T$, $J \bar T$, $T \bar J$ deformed CFT constructed holographically. The prototype of this construction is \cite{Giveon:2017nie,Israel:2003ry} where a background corresponding to a scaling limit of NS5-F1 system was considered. This geometry behaves in core region as $AdS_3\times T^4 \times S^3$, whereas in the asymptotic region it asymptotes to a linear dilaton geometry $R^2 \times R^\phi \times T^4 \times S^3$. In \cite{Giveon:2017nie}, this background was derived by integrating the deformation of world-sheet sigma model by an operator of the form \begin{equation} \lambda \int d^2 \sigma \ J^-\bar{J}^- , \end{equation} where $J^-$ and $\bar{J}^-$ are respectively the left and right-moving null $SL(2,\mathbb{R})$ currents. Here, $\lambda$ is the coefficient of the deformation operator of the world-sheet sigma model. The point is that this deformation corresponds to the deformation of the target space theory which is a holographic dual of a deformed CFT. So we start with $AdS_3 \times S^3 \times T^4$ constructed by taking the near horizon limit of a stack of NS5 and F1. This background in the supergravity language can be written in the form \begin{equation} {ds^2 \over \alpha'} = h d \gamma d \bar \gamma + d \phi^2 + dy^2 + ds^2_{T^3} + k ds^2_{S^3} ~,\label{ttbackground} \end{equation} where we are working in Minkowski signature \begin{equation} \gamma = \gamma_1 + \gamma_0, \qquad \bar \gamma = \gamma_1 - \gamma_0 \ . \end{equation} We have also identified one of the coordinates of $T^4$ as $y$. A comment is in order that $h(\phi)$ \begin{equation} h(\phi)^{-1} = {\alpha' \over R^2} + e^{-2 \phi} = \lambda + e^{-2 \phi} \ \label{hrel} \end{equation} in this context is a field with non-trivial profile along the $\phi$ direction. This is in contrast to the fact that $h$ was treated as a parameter in the previous sections. One way to think about this is the fact that the sigma model treatment in the earlier section was for a long string sitting in the weakly coupled region as was the case in \cite{Chakraborty:2019mdf,Hashimoto:2019wct}. There are other fields such as the dilaton and the form fields which we omit here for brevity but can be found in \cite{Giveon:2017nie}. For the pure single trace $T \bar T$ deformation, the deformation parameter $\lambda$ turns out to be equal to $\alpha'/R^2$. 
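For orientation, note that (\ref{hrel}) makes the interpolating behavior described above explicit. Deep in the core region, $\phi \rightarrow -\infty$, one has $h(\phi) \simeq e^{2 \phi}$, and the $(\gamma, \bar \gamma, \phi)$ part of (\ref{ttbackground}) reduces to the Poincar\'e $AdS_3$ form \begin{equation} {ds^2 \over \alpha'} \simeq e^{2 \phi} \, d \gamma \, d \bar \gamma + d \phi^2 ~, \end{equation} whereas in the asymptotic region, $\phi \rightarrow \infty$, $h(\phi)$ approaches the constant $1/\lambda$ and this part of the metric flattens out, which is the linear dilaton regime referred to above. 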
A useful observation to make at this point is that the string theory background found in \cite{Chakraborty:2019mdf} can be reconstructed starting with (\ref{ttbackground}) and performing the following operations: \begin{enumerate} \item Twist the $(\gamma, \bar \gamma, y)$ coordinates by \begin{equation} \gamma \rightarrow \gamma + 2 \epsilon_+ y, \qquad \bar \gamma \rightarrow \bar \gamma+ 2 \epsilon_- y \label{twist1} \end{equation} \item Shift \begin{equation} h^{-1} \rightarrow h^{-1} - 4 \epsilon_+ \epsilon_- ~.\label{hshift} \end{equation} \end{enumerate} Somewhat remarkably, one can reconstruct the same background starting with (\ref{ttbackground}) and acting with the following sequence of operations: \begin{enumerate} \item T-dualize from $y$ to $\tilde y$ \item Perform a twist \begin{equation} \gamma \rightarrow \gamma - 2 \epsilon_+ \tilde y, \qquad \bar \gamma \rightarrow \bar \gamma+ 2 \epsilon_- \tilde y \label{twist2} \end{equation} \item T-dualize back from $\tilde y$ to $y$, \end{enumerate} as well as \begin{enumerate} \item T-dualize on $\gamma_1$ and $\gamma_2$ \item Twist according to \begin{equation} y \rightarrow y - \epsilon_+ \tilde \gamma + \epsilon_- \bar {\tilde \gamma} , \label{epemtwist} \end{equation} \item T-dualize back to $\gamma_1$ and $\gamma_2$ \end{enumerate} as was noted in \cite{Apolo:2018qpq,Apolo:2019yfj}. In the end, one arrives at \begin{equation} {ds^2 \over \alpha'} = {1 \over h(\phi)^{-1} - 4 \epsilon_+ \epsilon_-} (d \gamma + 2 \epsilon_+ dy)(d \bar \gamma+ 2 \epsilon_- dy) + d \phi^2 + dy^2 + ds^2_{T^3} + k ds^2_{S^3}~, \label{ttbackground2} \end{equation} with $h(\phi)$ given by (\ref{hrel}). Dimensionally reducing along $y$, $T^3$, and $S^3$ will bring this background to match the form presented in (4.8) of \cite{Chakraborty:2019mdf}. Let us first recall the analysis of the thermodynamic behavior for this system with $\epsilon_+ = \epsilon_- = 0$. The finite temperature generalization of (\ref{ttbackground}) can be constructed from the non-extremal five dimensional black hole solution which can be read off from (2.44) of \cite{Maldacena:1996ky} up to S-duality and some convention mapping. The thermodynamics of this system can then be read off from the properties of the horizon and the analysis of the conical singularity of the Euclidean solution. The intermediate steps of this analysis are somewhat cumbersome, but we will be brief here as the analysis is standard and was essentially carried out in \cite{Giveon:2017nie,Apolo:2019zai}. The essential step is to solve for $r_0$ from (2.50) of \cite{Maldacena:1996ky} \begin{equation} T(r_0) = {1 \over 2 \pi r_0 \cosh \alpha \cosh \gamma \cosh \sigma} \end{equation} and substitute into (2.49) of \cite{Maldacena:1996ky} \begin{equation} S(r_0) = {2 \pi R V r_0^3 \over g^2 \alpha'^4} \cosh \alpha \cosh \gamma \cosh \sigma \end{equation} subject to the charges given in (2.45) of \cite{Maldacena:1996ky} \begin{equation} Q_1 = {V r_0^2 \over 2 g \alpha'^3 } \sinh 2 \alpha, \qquad Q_5 = {r_0^2 \over 2 g \alpha'} \sinh 2 \gamma, \qquad N = {R^2 V r_0^2 \over 2 g^2 \alpha'^4} \sinh 2 \sigma \ . \end{equation} For our purposes we set $N=0$ for convenience, and $Q_5$ is taken to be asymptotically large to decouple the asymptotically flat region and make the spacetime asymptote to a linear dilaton geometry. We have reinstated the factors of $\alpha'$ which were set to 1 in \cite{Maldacena:1996ky}. 
Applying S-duality to the resulting expression gives \begin{equation} S(T) \sim { Q_1 Q_5 R T \over \sqrt{1 - \alpha' Q_5 T^2} } = { Q_1 Q_5 R T \over \sqrt{1 - \lambda Q_5 R^2 T^2} }~,\label{sugraST}\end{equation} where in the last equality, we used the fact that $\alpha' = \lambda R^2$ from (\ref{hrel}). From (\ref{sugraST}), we read off the expected Cardy behavior for small $T$, and the Hagedorn behavior at \begin{equation} T_H \sim {1 \over Q_5^{1/2} \lambda^{1/2} R} ~. \label{sugraTH} \end{equation} In (\ref{sugraST}) and (\ref{sugraTH}), $Q_1$ and $Q_5$ are respectively the F1 string and NS5 brane charges that make up the background. The $\sim$ is used to indicate that we have dropped factors of order one such as $2$ and $\pi$. These results are in agreement with the results of \cite{Giveon:2017nie} and is qualitatively in agreement with what we found in (\ref{betaHtt}) up to factors of $c$ and $Q_5$. One of course does not expect exact agreement as double trace and single trace $T \bar T$ deformations are physically distinct \cite{Giveon:2017nie}. The next logical step is to include the effects of the $\epsilon_\pm$ deformation. The zero temperature and finite temperature solutions both have the same isometries along which we twist and T-dualize. So, the finite temperature solution can be generated readily by applying the same solution generating transformations, similar in spirit to the solution generating technique utilized in \cite{Gimon:2003xk}. One remarkable feature noted in \cite{Gimon:2003xk} is that T-dualities and coordinate twists like (\ref{twist1}), (\ref{twist2}), and (\ref{epemtwist}) do not modify the horizon temperature and its area. One way to see this is to note that T-dualities acting on a two torus keeps the area in Einstein frame invariant as can be seen from (4.2.23) of \cite{Giveon:1994fu}. Similarly, coordinate twists simply deforms the complex structure of the torus without changing its area. There is however one important subtlety. We noted that there are three duality chains (\ref{twist1}), (\ref{twist2}), and (\ref{epemtwist}) which leads to the same background (\ref{ttbackground2}) in the zero temperature case. However, it is easy to convince oneself that these chains lead to different backgrounds when applied to the finite temperature solution. While it is somewhat cumbersome to carry out this exercise for Maldacena's three charge black hole \cite{Maldacena:1996ky}, one can verify this fact easily by considering a non-extremal fundamental string extended on $x_0$ and $x_1$ and smeared on $y$ direction, and then applying (\ref{twist1}), (\ref{twist2}), and (\ref{epemtwist}). This implies that we have a large set of finite temperature solutions one can construct by applying all three twists in some combination. So at this point, we have discovered a large number of black hole solutions which seems puzzling at first since one expects the black hole solution to be unique once the boundary conditions are fixed. We need to provide a physical interpretation for each of these solutions. Upon closer examination, it appears that although these geometries do approach the zero temperature solution in the large radius region, they behave differently in the subleading asymptotic behavior and can be distinguished by imposing boundary conditions at infinity. Related discussions on boundary conditions for scalar and vector fields in anti de Sitter space can be found in \cite{Klebanov:1999tb,Marolf:2006nd}. 
We can in fact see that the behavior of the $g_{\mu y}$ and $B_{\mu y}$ fields plays an important role, which is related to the charges and chemical potential of the $U(1)$ global symmetry. We found the following two constructions to be especially interesting. For the first case, we begin by parametrizing \begin{equation} \epsilon_+ = \epsilon_0 + \epsilon_1, \qquad \epsilon_- = -\epsilon_0 + \epsilon_1 \ . \end{equation} Now, consider applying the transformation (\ref{twist2}) with $(\epsilon_+, \epsilon_-) = (\epsilon_0,-\epsilon_0)$, followed by (\ref{twist1}) with $(\epsilon_+,\epsilon_-) = (\epsilon_1,\epsilon_1)$. Applying the transformation (\ref{twist1}) involves an explicit shift (\ref{hshift}) in $\lambda$. This is so that the factor in front of $(d \gamma + 2 \epsilon_+ dy)( d \bar \gamma + 2 \epsilon_- dy)$ in (\ref{ttbackground2}) takes the form \begin{equation} {1 \over h(\phi)^{-1}-4 \epsilon_+ \epsilon_-} \ . \end{equation} Since the application of (\ref{twist2}) with $(\epsilon_0,-\epsilon_0)$ already brought this factor into the form \begin{equation} {1 \over h(\phi)^{-1}+4 \epsilon_0^2} \ , \end{equation} the additional shift needed in applying the transformation (\ref{twist1}) is \begin{equation} \lambda \rightarrow \lambda - 4 \epsilon_+ \epsilon_- - 4 \epsilon_0^2 = \lambda - (\epsilon_+ + \epsilon_-)^2 \ , \label{shift1}\end{equation} instead of (\ref{hshift}). Because we shifted $\lambda$ according to (\ref{shift1}), the Hagedorn temperature also changes and we find \begin{equation} \beta_H \sim Q_5^{1/2} (\lambda- (\epsilon_+ + \epsilon_-)^2)^{1/2} R\ . \label{ans1} \end{equation} As a second case, consider first applying (\ref{twist2}) with $(\epsilon_+, \epsilon_-) = (\epsilon_1,\epsilon_1)$, followed by (\ref{twist1}) with $(\epsilon_+,\epsilon_-) = (\epsilon_0,-\epsilon_0)$, followed by (\ref{twist1}) with $(\epsilon_+,\epsilon_-) = (\epsilon_1,\epsilon_1)$. This time, the shift in $\lambda$ that is needed is \begin{equation} \lambda \rightarrow \lambda - 4 \epsilon_+ \epsilon_- + 4 \epsilon_1^2 = \lambda + (\epsilon_+ - \epsilon_-)^2 \label{shift2}\end{equation} from which we infer that \begin{equation} \beta_H \sim Q_5^{1/2} (\lambda+ (\epsilon_+ - \epsilon_-)^2)^{1/2} R \ . \label{ans2} \end{equation} Rather remarkably, we have succeeded in reproducing the $(\lambda,\epsilon_+,\epsilon_-)$ dependence that we found earlier for the fixed charge (\ref{betaA}) and sum over charge (\ref{result1}) ensembles. This suggests that (\ref{ans1}) and (\ref{ans2}) correspond to the fixed charge and sum over charge ensembles, respectively. We can further justify this identification as follows. In the construction of (\ref{ans1}), the twists and dualities generate mixing between the $\gamma_1$ and $y$ coordinates in the form of $g_{1y}$ and $B_{1y}$, which are not directly tied to the energy of the black hole, whereas (\ref{ans2}) generates a mixing between the $\gamma_0$ and $y$ coordinates in the form of $g_{0y}$ and $B_{0y}$, which causes expectation values of the charge to be generated as the energy of the black hole is increased, mimicking the behavior of (\ref{domcharge}). We should however stress that there is some element of a posteriori reasoning at work here. We have not systematically analyzed the asymptotic behavior of the solutions associated with (\ref{ans1}) and (\ref{ans2}) to establish conclusively that they correspond to fixed charge and sum over charge ensembles. 
At this point, we are merely asserting the fact that explicit procedures exist to construct holographic backgrounds with Hagedorn scales (\ref{ans1}) and (\ref{ans2}) matching the $(\mu,\varepsilon_+, \varepsilon_-)$ dependence of (\ref{betaA}) and (\ref{result1}). This fact alone is interesting. It would of course be more interesting to systematically map out the boundary conditions for $g_{\mu y}$, $B_{\mu y}$, and other fields in this asymptotically linear dilaton background with a twist along the lines of \cite{Klebanov:1999tb,Marolf:2006nd}. The analysis is complicated in part because the twist and the finite temperature effects destroys much of the symmetries to keep the supergravity solutions manageable. Perhaps there is an efficient way to approach this issue, but for now we are leaving this analysis for future work. Let us make several additional comments before finishing this section. \begin{enumerate} \item In \cite{Chakraborty:2019mdf}, it was observed that in the limit $\lambda - 4 \epsilon_+ \epsilon_- \rightarrow 0$, the geometry (\ref{ttbackground2}) appears to interpolate between $AdS_3$ in the IR to $AdS_2 \times S^1$ when dimensionally reduced along $y$, $T^3$, and $S^3$. From the full ten dimensional perspective, however, it becomes clear that the period of $y$ cycle is going as $1/\sqrt{1-4 h \epsilon_+ \epsilon_-}$ and is becoming large in the large $\phi$ region. Since the period of $y$ cycle is large, one shouldn't dimensionally reduce there. From the oxidized $AdS_3 \times S^1$ perspective, the effect of $\epsilon_\pm$ deformation is merely a twist whose geometric effect is to modify the periodicity conditions. So it seems that the $AdS_2 \times S^1$ geometry does not capture the effective physics of this holographic background. \item In \cite{Chakraborty:2019mdf}, (\ref{Acond}) was interpretable also as the condition for the absence of closed time-like curve. Note, however, that for $\lambda>0$ and $\epsilon_\pm$ taking real values, (\ref{Acond}) is stronger than the bound $\lambda - 4 \epsilon_+ \epsilon_- >0$ except at the point $\epsilon_+ = \epsilon_-$ where they coincide. \item The operation used to construct the supergravity background for the holographic single trace deformed system is identical to the Melvin twist operation which was used in constructing models known as the Dipole theory which is closely related to non-commutative field theories. In that story, the starting point was a near horizon limit of a D-brane \cite{Hashimoto:1999ut,Bergman:2000cw,Ganor:2007qh,Song:2011sr}. Since the construction in this section starts from the NS5 branes and F1 strings, these twists cannot be interpreted exactly as the Dipole field theory, but it is clear that it is in the broad category of non-local field theories related to the Dipole theories via U-duality. \item The fact that thermodynamics of Melvin twisted supergravity background is insensitive to the twist follows essentially from the fact that T-dualities and twists acting on a spatial two torus keeps its area in Einstein frame invariant, as was seen in numerous examples \cite{Hashimoto:1999ut,Maldacena:1999mh,Gimon:2003xk,Ganor:2007qh}. Since the area of this torus was directly proportional to the area of the horizon, the thermodynamic relations turned out to be insensitive to the twist. One exception to this pattern of behavior can be found in the T-s-T interpretation of the pure $T \bar T$ deformation discussed in \cite{Apolo:2019zai}. 
Upon closer inspection, one sees that the T-s-T transformation discussed in section 3.2 of \cite{Apolo:2019zai} involves twisting a torus that involves both spatial and temporal directions. (Strictly speaking, the authors of \cite{Apolo:2019zai} twisted and T-dualized along a light-like direction, but this can be viewed as a space-like T-duality and a time-like twist that is infinitely boosted.) One can in fact think of the T-s-T of \cite{Apolo:2019zai} as starting with a stack of NS5-F1 and applying the solution generating transformation of \cite{Russo:1996if}. Since the area of the torus being twisted is not proportional to that of the horizon, there is no reason for the thermodynamics to be unaffected by that twist. \end{enumerate} \section{Discussions} In this article, we explored various tests and applications of the universal formulas (\ref{Zdef}) and (\ref{I}) for computing the partition function of $T \bar T$, $J \bar T$, $T \bar J$ deformed conformal field theories. One main result is the explicit expressions (\ref{ZzH}) and (\ref{result1}) for the free energy and the Hagedorn temperature of the deformed compact scalar theory. These quantities were expressible in a compact analytic form by taking the infinite volume limit. These results were shown to be consistent with the expectations from $x_1 \leftrightarrow (-x_2)$ exchange. We also accounted for the microscopic origin of states with Hagedorn density (\ref{result1}) as arising from the charge sectors (\ref{domcharge}) in the grand canonical ensemble. We also examined the thermodynamics of single trace $T \bar T$, $J \bar T$, $T \bar J$ deformed $AdS_3 \times S^1 \times T^3 \times S^3$ \cite{Chakraborty:2019mdf} using holographic techniques. We argued that the effects of the $J \bar T$ and $T \bar J$ deformations can be realized as a chain of duality and twist transformations starting from the background of the pure $T \bar T$ deformed system \cite{Giveon:2017nie}. In fact, we explained how three different duality chains lead to the same background. In order to study the thermodynamics, one can construct the finite temperature version of these backgrounds by starting with the finite temperature version of the pure $T \bar T$ deformation \cite{Giveon:2017nie} and applying the same set of twist operations. For the finite temperature background, the three duality chains lead to slightly different backgrounds. We identified specific sequences of dualities and twists which we interpreted as giving rise to the fixed charge and sum over charge ensembles, and found explicit expressions for the Hagedorn temperatures, (\ref{ans1}) and (\ref{ans2}), which are in agreement with the dependence on the deformation parameters $(\lambda, \epsilon_+, \epsilon_-)$ that we found for the free compact boson in (\ref{betaA}) and (\ref{result1}). What we provide in this paper can be thought of as some set of data characterizing the $T \bar T$, $J \bar T$, $T \bar J$ deformed CFT in a handful of examples. It is the case nonetheless that the modular properties provided important consistency checks in carrying out these computations and interpreting the results. One can further view the modular properties as being inherited from the sigma model perspective which went into the derivation of (\ref{Zdef}) and (\ref{I}) in \cite{Hashimoto:2019wct}. One can in fact think of the duality and twist operations as moving in the $SO(3,3)/SO(3) \times SO(3)$ moduli space of the sigma model on $T^2 \times S^1 \times {\cal M}'$.
The $T\bar T$, $J \bar T$, and $T \bar J$ deformations correspond to a subspace in this moduli space, and it would be interesting to fully map out the $d^2$-dimensional space $SO(d,d) /SO(d)\times SO(d)$ which arises when there are $n=d-2$ $U(1)$ isometries \cite{Araujo:2018rho}. For $d=3$, the nine parameters appear to correspond to $\lambda$, $\zeta_1$, $\zeta_2$, $\epsilon_+$, $\epsilon_-$, $\chi_1$, $\chi_2$, $r$, and $b$, where $b$ is the NSNS B-field along the $T^2$ and $\chi_1$ and $\chi_2$ are chemical potentials for the deformed theory. Some discussion of these chemical potentials can be found in Appendix B of \cite{Hashimoto:2019wct}. So far, we have been unsuccessful in finding a parameterization of this nine-dimensional space which makes the modular transformations and the physical interpretations simultaneously simple. The perspective based on sigma models and their moduli space should nonetheless be useful for organizing the integrable deformations of these $1+1$ dimensional field theories. \section*{Acknowledgements} We would like to thank A. Giveon, D. Kutasov and A. Mishra for helpful discussions. The work of SC is supported by the Infosys Endowment for the study of the Quantum Structure of Spacetime. AH thanks IFT-UEPSP for hospitality where part of this work was done.
\section{Introduction}\label{s-intro} If you have tried to set up a picnic table or used a stepladder, then you are certainly aware of the wobbly table problem. How can you make this object stop wobbling (without using extra tools)? Well, if you have a square or even rectangular table, then you are in luck. It turns out that rotating the table about some axis will suffice to eventually stabilise it. Unfortunately, many offices are now full of tables which have a trapezoidal shape. What now? The arguments for square or rectangular tables hinge crucially on the symmetry of the object. This note will hopefully convince you that you should have no fear of buying tables which are less symmetric. Here is a more detailed description of this problem, which was originally made public in \cite{Ga1} and \cite{Ga2}. You have some table (or other object) with 4 legs and you find yourself on some terrain. Find conditions on the legs and quite unrestrictive conditions on the terrain, so that turning the table around some axis can make it stop wobbling. Note that the table may not be level; it will just stop wobbling. The original arguments of \cite{Ga1} and \cite{Ga2} cover square tables on a continuous terrain, but are presented in an abstract setting which overlooks the rigidity of the problem. A more realistic setup is covered in \cite{BLPR} and \cite{Mar}. The terrain needs to be Lipschitz (with a Lipschitz constant bounded above). One of the very first works relating to this problem is \cite{Dys}; the reader is directed to \cite{BLPR} for an extensive historical overview. The main result of this note is to discuss an extension to cover cyclic quadrilaterals (a conjecture raised in \cite{Mar}). The main narrative of the proof is presented in \S{}2. However, much like the first presentations of this problem in \cite{Ga1} and \cite{Ga2}, this narrative overlooks some technicalities. A closer look at the problem is then discussed in \S{}3. The complete hypotheses are as follows:\\ $-$ \; the quadrilateral formed by the ends of the legs is cyclic, \\ $-$ \; its diagonals are of equal length, \\ $-$ \; the angle (from the centre of the excircle) which supports the diagonals is rational,\\ $-$ \; the angle which brings one diagonal onto the other is also rational.\\ Then the table can be stabilised by rotating. (As in \cite{Mar} or \cite{BLPR}, the terrain needs to be Lipschitz.) A typical example of a non-rectangular table satisfying the conditions of the theorem which the reader might have in his office has the following shape: take a regular hexagon and cut it along a line joining two opposite vertices. The resulting quadrilateral is a symmetric trapezoid (with 3 equal sides). \section{Main steps of the proof} \subsection{Preliminaries} Let us start with some simple results, reductions, assumptions and notations: \begin{itemize} \item First off, it seems the very least to demand that your table be stable if the terrain is perfectly flat. Hence the ends of the legs must be on the same plane $P$. The ends of the legs will be denoted $A$, $B$, $C$ and $D$. \item The axis around which you rotate should be perpendicular to the plane $P$ (you don't want to turn your table upside down!). \item Also the legs should all run along the same circle (\emph{i.e.} the quadrilateral $ABCD$ is cyclic), otherwise stabilisation by rotation is not possible (the terrain could be high along the circle described by one leg and low along the circle described by another leg). This is a necessary condition raised in \cite{Mar}.
$Z$ will denote the centre of this circle. \item This note stays in a fairly idealised setup. Physical problems, such as ``the terrain is going through the [legs of the] table'', ``the table will turn over since its centre of mass is ill-placed'' or ``the legs have a thickness'', will be ignored. \item The whole problem boils down to the four feet of the table. The actual shape of the table is not really important. \item The table starts in the air. There is some basic position corresponding to the angle $0°$. This means that, in this basic position, the leg $A$ makes an angle of $0°$ as seen from the excentre $Z$. The angles $\theta_B$, $\theta_C$ and $\theta_D$ are the angles which the other legs make when in this basic position. (For convenience $\theta_A =0$.) \item As you turn your table, all the angles of the legs are changed by the same amount. (The table is rigid after all.) \end{itemize} \subsection{Putting the table down} The next step is to look at possible touchdowns for your table. As you rotate the table, consider the following way to put it down. First, you could put the legs $A$ and $C$ on the ground (and completely ignore where $B$ and $D$ are). If $B$ and $D$ are above the terrain, then the table wobbles; if not, then this was not completely legal, but it is still useful to think about it (as some sort of negative wobbling). \cite{BLPR} discusses the touchdown much more carefully. Next, you could do the same thing with $B$ and $D$ touching down first. Note that these touchdowns should happen on the same curve (some distorted circle), otherwise there is no hope to conclude. Assume for now that this is the case; see \cite{BLPR} and \S{}\ref{stouchdown} for a correct description of this process. Consider $X$ to be the intersection point of the diagonals $AC$ and $BD$. If we make a touchdown with $AC$, the coordinates of $X$ do not change when the table wobbles. Likewise with $BD$. This gives us two height functions: $h_{AC}(\theta)$ is the height ($z$-coordinate) of $X$ after an $AC$-touchdown, where $\theta$ is an angle of rotation (of the table). And likewise for $h_{BD}$. \begin{lem}\label{lelem} If the table wobbles then $h_{BD}(\theta) \neq h_{AC}(\theta)$. \end{lem} \begin{proof} If the table wobbles then one of the pairs $BD$ or $AC$ can be pushed further down. As a consequence, the height of the centre will differ. \end{proof} The coordinates of $X$ are a convex combination of the coordinates of the legs. More precisely, it is a small exercise with vectors that if $\tau = \frac{ \srl{CX}}{ \srl{CA}}\in ]0,1[$ and $\mu = \frac{ \srl{DX}}{ \srl{DB}}\in ]0,1[$, then \[ \xvc{OX} = \tau \xvc{OA} + (1-\tau) \xvc{OC} = \mu \xvc{OB} + (1-\mu) \xvc{OD} \] In particular, this holds for the height coordinate: \[ h_{AC}(\theta) = \tau h_A(\theta) + (1-\tau) h_C(\theta) \qquad \text{and} \qquad h_{BD}(\theta) = \mu h_B(\theta) + (1-\mu) h_D(\theta) \] where $h_E(\theta)$ gives the height of (the end of) leg $E$ after a touchdown following a rotation [of the table] by the angle $\theta$ and $E \in \{A,B,C,D\}$. Since every leg can be brought to the position of any other leg by a rotation, the functions $h_A$, $h_B$, $h_C$ and $h_D$ are all identical up to a translation (see \S{}\ref{stouchdown} and \S{}\ref{stouchdown2} for details). \subsection{Table turning} \begin{teo}\label{leteo} Assume the $h_E$ are continuous; then there is an angle $\theta$ so that the table does not wobble. \end{teo} \begin{proof} Since $h_E$ is measurable, let $\int h_E = H$ (the average height).
By assumption $h_E$ is continuous, hence so are $h_{BD}$ and $h_{AC}$. Let $h_\Delta = h_{BD} - h_{AC}$. Assume the table cannot be stabilised; then, by Lemma \ref{lelem}, $h_\Delta$ is never 0. Since $h_\Delta$ is continuous, there is an $\eps>0$ so that \[ \text{either } \qquad \forall \theta,\; h_{\Delta}(\theta) > \eps \qquad \text{or } \qquad \forall \theta,\; h_{\Delta}(\theta) < -\eps \] Without loss of generality, we may assume the first holds. Let $\theta_0$ be an irrational angle. Then \[ \forall N, \quad \frac{1}{N} \sum_{i=1}^N h_\Delta(i \cdot \theta_0) > \eps \] On the other hand \[ \frac{1}{N} \sum_{i=1}^N h_\Delta(i \cdot \theta_0) =\frac{1}{N} \sum_{i=1}^N \bigg( \mu h_B(i \cdot \theta_0) + (1-\mu) h_D(i \cdot \theta_0) - \tau h_A(i \cdot \theta_0) - (1-\tau) h_C(i \cdot \theta_0) \bigg) \] By the equidistribution theorem or the ergodic theorem (see, among many possibilities \cite[Appendix 1]{AA}, \cite[Section 23.10]{HW}, \cite[Exercise 2.2.12]{Nav} and \cite{Z}) \[ \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^N h_\Delta(i \cdot \theta_0) = \int h_\Delta = \mu \int h_B + (1-\mu) \int h_D - \tau \int h_A - (1-\tau) \int h_C \] But the right-hand side is just \[ \mu H + (1-\mu)H - \tau H - (1-\tau)H = H - H = 0. \] Hence \[ 0 = \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^N h_\Delta(i \cdot \theta_0) \geq \eps > 0, \] a contradiction. \end{proof} \section{A closer look}\label{slook} \subsection{Equal hovering position}\label{stouchdown} A tool from \cite[\S{}3]{BLPR} is to look only at the touchdown with respect to $AC$ and then consider how far the vertices $B$ and $D$ are in the air. This is done so that both vertices $B$ and $D$ are equally far from the ground and is called the ``equal hovering position''. Let us sketch how to apply Theorem \ref{leteo} in this context. Consider the average (over all angles) of $h_{AC}$, $\int h_{AC}$. Since the average of $h_{BD}$ is equal to that of $h_{AC}$, the equal hovering position is on average 0. But the equal hovering position is continuously defined; hence, if the average is $0$, it must be 0 somewhere. When the equal hovering position is 0, then the table does not wobble. However there is a hidden hypothesis in this argument. Namely, if the two diagonals do not have the same length, then it could be that $h_{BD}$ takes on different values than $h_{AC}$. Variations in the steepness of the terrain could make the pair of vertices $B$ and $D$ linger a different amount of time in different regions. This leads to the following \begin{assu}\label{ass-len} The lengths of the diagonals of $ABCD$ are equal. \end{assu} \subsection{Further assumptions}\label{stouchdown2} As mentioned before, \cite{BLPR} contains a detailed description of how to realise the touchdown of two opposite vertices ($AC$ or $BD$). The focus of this section is to point out what the proof of Theorem \ref{leteo} requires. The various height functions $h_A$, $h_B$, $h_C$ and $h_D$ need: \begin{enumerate}\renewcommand{\theenumi}{\bfseries (H\arabic{enumi})} \item to be well-defined \item to be continuous \item to differ only by a translation, \emph{i.e.} $h_E(\theta) = h_A(\theta + \theta_E)$ \end{enumerate} Before looking closer at this, let's point the problem out: while letting the table down, you will need to turn the table about other axes. This means there is \emph{a priori} an uncertainty in how you put the table down. As mentioned in \cite{Mar}, these constructions are highly non-unique. For example, this plays an implicit role in Lemma \ref{lelem}.
Indeed, one needs to assume that, for any given angle $\theta$, the touchdowns according to $AC$ and $BD$ are done in one go. So you cannot let the table down and see where $AC$ ends up, and then let the table down and see where $BD$ ends up. You need to let the table down, see which of the two pairs comes down first and then go on to the second. In short, for the proof of Lemma \ref{lelem}, a transversality condition is probably necessary, namely that the $z$-coordinate of $X$ is decreasing while letting the table down. This means that for a given rotation angle $\theta$ the touchdown of both diagonals must be taken into account simultaneously. Making sure this is possible most certainly requires three additional assumptions. The first of these assumptions is that, for transversality, the terrain needs to be a $C^1$-function with some upper bound on the first derivative. Lipschitz continuity (with an upper bound on the Lipschitz constant) is however sufficient; see \cite{BLPR} and \cite{Mar} for details. The second assumption comes from the fact that the height functions $h_E$ should only depend on the angle (and on $\theta_E$). The touchdown $h_{BD}(0)$ needs to be defined simultaneously with the touchdown $h_{AC}(0)$. But then, $h_{AC}(\theta_B)$, $h_{AC}(\theta_C)$ and $h_{AC}(\theta_D)$ are also defined (since the leg $A$ will land where the legs $B$, $C$ and $D$ did). Repeating this process, one sees that all angles $k_1 \theta_B + k_2 \theta_C + k_3 \theta_D$ (where $k_1,k_2$ and $k_3 \in \zz_{\geq 0}$) need to be defined at once. In order to avoid an infinite number of choices, this leads to the following two assumptions: \begin{assu}\label{ass-rot2} The rotation bringing one diagonal on the other diagonal is rational. \end{assu} \begin{assu}\label{ass-rot} The rotation bringing one end of a diagonal on the other end is rational (\ie the angle supporting the diagonals from the excircle is rational). \end{assu} Lastly, note that Assumption \ref{ass-len} is particularly important for condition (H3) mentioned above. Note that Assumptions \ref{ass-len} and \ref{ass-rot} are both automatically verified in the case of a rectangle (having diagonals of equal length is a characteristic feature of rectangles among parallelograms; the rotation is 180\textdegree{} since the excentre lies on the diagonal). Assumption \ref{ass-rot2} might however not hold. \begin{rmk} One might be able to get rid of the rationality assumptions. A standard way to do so would be to consider a sequence of tables with rational angles which tend to the desired (irrational) table. Since everything is happening over a compact part of $\rr^2$, the stabilising positions of the rational tables will converge to (at least) one position. This position should stabilise the (irrational) table. \end{rmk} \subsection{Invariant measures}\label{sinvmeas} Note that the proof relied on a very specific invariant measure: the uniform measure, which is invariant under translations. Perhaps one way to get rid of Assumption \ref{ass-len} would be to consider a sequence $\theta_i$ which equidistributes with respect to some well-chosen measure, rather than just taking this sequence of angles to be multiples of an irrational angle ($i \cdot \theta_0$ in the proof). More concretely, let $h_B(\theta) = h_A(\theta + f_B(\theta))$, $h_C(\theta) = h_A(\theta + f_C(\theta))$ and $h_D(\theta) = h_A(\theta + f_D(\theta))$. The functions $x \mapsto x + f_E(x)$ (where $E \in \{B,C,D\}$) generate a monoid (of circle maps).
If the action of this monoid is amenable, then there is an invariant measure $\mu$ (see \cite{Gr} for generic background on amenability and \cite[\S{}2.3 and \S{}3.2]{Nav} for more specific details in the case of groups acting on the circle). Since the convex hull of Dirac masses is weak$^*$ dense in the space of means, one can then choose the sequence of angles $\theta_i$ so that $\tfrac{1}{N}\sum_{i=1}^N \delta_{\theta_i}$ tends (weak$^*$) to $\mu$. Consequently, the assumptions presented in \S{}\ref{stouchdown} and \S{}\ref{stouchdown2} can perhaps be relaxed. A simple way of satisfying the amenability assumption is to assume that the monoid is contained in the monoid given by some rational rotations (e.g. coming from Assumptions \ref{ass-rot} and \ref{ass-rot2}) as well as some map (which may not be a rational rotation). Since the monoid is then a finite extension of an amenable one (the single map which is not a rational rotation yields a monoid isomorphic to $\mathbb{N}$), it is amenable as well. The author would like to thank Yves de Cornulier \cite{Cor} for pointing out that \cite{Nav} essentially shows there are no other examples where this monoid is amenable (besides the given example where two of the maps are rational).
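To see the baseline averaging mechanism of Theorem \ref{leteo} at work numerically, here is a minimal sketch (in Python; the height profile $h_A$, the leg angles and the coefficients below are hypothetical placeholders rather than data coming from an actual table): it builds $h_\Delta$ from translates of a single periodic profile, as in (H3), and checks that its averages along multiples of an irrational angle tend to $0$.
\begin{verbatim}
import numpy as np

# h_A is a hypothetical periodic "height after touchdown" profile;
# h_B, h_C, h_D are its translates, as in condition (H3).
def h_A(theta):
    return 1.0 + 0.3*np.sin(theta) + 0.2*np.cos(3*theta) + 0.1*np.sin(7*theta + 1.0)

theta_B, theta_C, theta_D = 1.2, 2.9, 4.4   # hypothetical leg angles (radians)
t_coef, m_coef = 0.45, 0.60                 # the coefficients tau and mu of the text

def h_Delta(theta):
    hB, hC, hD = h_A(theta + theta_B), h_A(theta + theta_C), h_A(theta + theta_D)
    return m_coef*hB + (1 - m_coef)*hD - t_coef*h_A(theta) - (1 - t_coef)*hC

theta0 = np.pi*(np.sqrt(5) - 1)             # an irrational rotation angle
for N in (10**2, 10**4, 10**6):
    i = np.arange(1, N + 1)
    # Birkhoff average along the orbit; it tends to the mean of h_Delta, i.e. 0.
    print(N, np.mean(h_Delta(i*theta0)))
\end{verbatim}
Replacing these uniform averages by averages against an invariant measure of the monoid above is precisely the relaxation suggested in this subsection.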
\section{Introduction} Consider a task whose completion requires the execution of a certain underlying process. What is the effect of restart -- i.e. resetting the underlying process while it is running -- on the task's completion time? The answer to this question depends on the completion-time statistics. For example, if the underlying process is deterministic then the completion time is fixed, and hence: restart will always prolong completion. However, if the underlying process is stochastic then the completion time is a random variable, and matters become intricate \cite{FPUR}-\cite{review}: statistically, while restart can impede completion, it can also expedite completion. The fact that restart can affect completion times -- and in some cases significantly so -- has a host of important practical applications. Examples include: randomized computer algorithms \cite{CS1}-\cite{CS3}, e.g. simulated annealing \cite{CS4}; first-passage times of random motions \cite{RM1}-\cite{RM10}, e.g. Brownian motion \cite{Diff1}-\cite{Diff9}; target-search by agents \cite{Search1}-\cite{Search8}, e.g. animals foraging for food \cite{Foraging1,Foraging2}; and chemical reactions at the molecular level \cite{CR1,CR2}, e.g. enzymatic catalysis \cite{MM1}-\cite{MM4}. The ``tasks'' in the above examples, as well as restart in these examples, are diverse. Indeed, a simulated-annealing program is reset by adding a line of code, while the enzymatic conversion of molecule A to molecule B is inherently subject to resetting as enzymes continuously bind and unbind their substrates. In all of the above-mentioned examples, it is vitally important to determine when restart will impede or expedite completion times. To determine the effect of restart, studies have by and large focused on average behavior: comparing mean completion time with restart vs. mean completion time without restart. In general, a given restart protocol uses a stochastic timer to schedule the durations between its consecutive resetting epochs. Restart protocols with deterministic (i.e. fixed) timers -- termed, in short, \emph{sharp restart} -- were found to be central due to the following key result \cite{FPUR,CheSok,Diff5,Search5}: if a given restart protocol impedes/expedites mean completion -- then there exists a sharp-restart protocol that impedes/expedites mean completion at least as much. Average-behavior analysis provides researchers with criteria that determine when restart will impede or expedite mean completion. In particular, highly general and potent criteria are available for Poissonian restart (where the stochastic timers are exponentially-distributed) \cite{FPUR,Search8,Foraging2,MM1,MM2,LTRT}, and for sharp restart \cite{MPSR1,MPSR2}. The drawback of average-behavior analysis is that it provides no insight regarding tail-behavior, i.e. the occurrence likelihood of extremely large completion times. To date -- with regard to restart -- researchers do not have at their disposal `extreme criteria' that are analogous to the existing `mean criteria'. The difference between average-behavior and tail-behavior is profound. A system whose design is based on average-behavior analysis will perform well in `usual times', yet it may very well collapse when hit by an extreme event -- a so-called `Black Swan' \cite{BS1}-\cite{BS5}. Financial crashes, extreme weather phenomena, extreme geological phenomena, and pandemics are vivid examples of `Black Swans'.
To design a given system to withstand extreme events, a tail-behavior analysis is an absolute must. Addressing restart, and setting the goal of bridging the knowledge gap between means and extremes, this paper presents a comprehensive tail-behavior analysis of sharp restart. Using the notion of hazard rates, the analysis establishes potent `tail results' for sharp restart: a set of hazard-rate criteria that determine when restart will impede or expedite extreme completion times. The results are general on the one hand, and are highly applicable on the other hand. The paper is organized as follows. After describing sharp restart as an algorithm that maps random inputs to random outputs (section 2), statistical formulations of the input-to-output map are presented (section 3), and the map's fixed points are explored (section 4). Then, the effect of sharp restart on inputs with monotone increasing and monotone decreasing hazard rates is investigated (section 5), and the asymptotic effect of sharp restart on general inputs is further investigated (section 6). Thereafter, the general asymptotic results are discussed in detail (section 7), and the paper concludes with a summary of its key results (section 8). \section{Sharp restart} We consider a general task with completion time $T$, a positive-valued random variable. To this task we apply restart with a deterministic timer $\tau $, a positive parameter. Specifically, we operate according to the following three-step \emph{sharp-restart algorithm}. Step I: initiate simultaneously the task and the timer. Step II: if the task is accomplished up to the timer's expiration -- i.e. if $T\leq \tau$ -- then stop upon completion. Step III: if the task is not accomplished up to the timer's expiration -- i.e. if $T>\tau$ -- then, as the timer expires, go back to Step I. The sharp-restart algorithm generates an iterative process of independent and statistically identical task-completion trials. This process halts during its first successful trial, and we denote by $T_{R}$ its halting time. Namely, $T_{R}$ is the overall time it takes -- when the sharp-restart algorithm is applied -- to complete the task. The algorithm is a non-linear mapping whose \emph{input} is the random variable $T$, whose \emph{output} is the random variable $T_{R}$, and whose (single) parameter is the deterministic timer $\tau $. Stochastically, the \emph{input-to-output map} $T\mapsto T_{R}$ is described as follows: \begin{equation} T_{R}=\left\{ \begin{array}{lll} T & & if\ T\leq \tau ,\\ & \ & \\ \tau +T_{R}^{\prime } & & if\ T>\tau , \end{array} \right. \label{21} \end{equation} where $T_{R}^{\prime }$ is a copy of the random variable $T_{R}$ that is independent of the random variable $T$. The top line on the right-hand side of Eq. (\ref{21}) corresponds to the Step-II scenario of the sharp-restart algorithm, and the bottom line corresponds to the Step-III scenario. Indeed, if the Step-III scenario occurs then, as the timer expires, the task-completion process is restarted anew; the random variable $T_{R}^{\prime }$ is the halting time of the restarted process. Henceforth, we set the sharp-restart algorithm to initiate at time $t=0$, and thus the process of task-completion trials takes place over the non-negative time axis $t\geq 0$. Throughout this paper we use the following periodic parameterization of the time axis: $t=\tau n+u$, where $n=0,1,2,\cdots $, and where $0\leq u<\tau $.
In this parameterization the timer $\tau $ is the underpinning period; $n=\lfloor t/\tau \rfloor $ is the floor of $t/\tau $; and $u=t-\tau n$ is the remainder of $t$ after its division by $\tau $. With regard to the process of task-completion trials, the periodic parameterization $t=\tau n+u$ has the following interpretation. If the halting time $T_{R}$ is realized at time epoch $t$, i.e. if $T_{R}=t$, then: $n$ is the number of unsuccessful trials; and $u$ is the time epoch, within the first successful trial, at which the task-completion process halted. \section{Statistical formulations} There are alternative ways of characterizing the input's and output's statistical distributions. In this section we employ three such ways -- survival functions, density functions, and hazard functions -- to statistically formulate the input-to-output map $T\mapsto T_{R}$. Hazard functions, also known as \textquotedblleft hazard rates\textquotedblright and \textquotedblleft failure rates\textquotedblright, are widely applied in reliability engineering \cite{BP}-\cite{Dhi}. As we shall see, hazard functions will turn out to be remarkably useful in the tail-behavior analysis of the sharp-restart algorithm. Consider the input's and output's survival functions: $\bar{F}\left(t\right) =\Pr \left( T>t\right)$ and $\bar{F}_{R}\left( t\right) =\Pr \left(T_{R}>t\right)$; these terms manifest, respectively, the probabilities that the input $T$ and the output $T_{R}$ are not realized by time $t$. From a survival-function perspective, the input-to-output map $T\mapsto T_{R}$ is manifested by \begin{equation} \bar{F}_{R}\left( \tau n+u\right) =\bar{F}\left( \tau \right) ^{n}\bar{F}\left( u\right) . \label{31} \end{equation} The derivation of Eq. (\ref{31}) is explained as follows. The output $T_{R}$ is not realized by time $t=\tau n+u$ if and only if two events occur. Event $A$: the first $n$ task-completion trials are unsuccessful. Event $B$: the task is not completed during the first $u$ time units of the task-completion trial $n+1$. The probability that a task-completion trial fails is $\Pr \left( T>\tau \right) =\bar{F}\left( \tau \right) $, and the probability of event $B$ is $\Pr \left( T>u\right) =\bar{F}\left( u\right) $. As the task-completion trials are independent of each other, the probability of the event $A$ is $\bar{F}\left( \tau \right) ^{n}$, and the probability of the event $A\cap B$ is $\bar{F}\left( \tau \right) ^{n}\cdot \bar{F}\left( u\right) $. Hence, Eq. (\ref{31}) is obtained. The input's and output's density functions are the negative derivatives of their survival functions: $f\left( t\right) =-\bar{F}^{\prime }\left(t\right)$ and $f_{R}\left( t\right) =-\bar{F}_{R}^{\prime }\left( t\right)$; these terms manifest, respectively, the likelihoods that the input $T$ and the output $T_{R}$ be realized at time $t$. Differentiating Eq. (\ref{31}) with respect to the variable $u$ yields the following density-function formulation of the input-to-output map $T\mapsto T_{R}$: \begin{equation} f_{R}\left( \tau n+u\right) =\bar{F}\left( \tau \right) ^{n}f\left( u\right) .
\label{32} \end{equation} The input's and output's hazard functions are the ratios of their density functions to their survival functions: $H\left( t\right) = f\left( t\right)/\bar{F}\left( t\right)$ and $H_{R}\left( t\right) = f_{R}\left( t\right)/\bar{F}_{R}\left(t\right)$.\footnote{Alternatively, the input's and output's hazard functions are the negative logarithmic derivatives of their survival functions: $H\left( t\right) = -\{\ln[\bar{F}(t)]\}'$ and $H_{R}\left( t\right) = -\{\ln[\bar{F}_R(t)]\}'$.} The terms $H\left( t\right)$ and $H_{R}\left( t\right)$ manifest, respectively, the likelihoods that the input $T$ and the output $T_{R}$ be realized at time $t$ -- provided the information that $T$ and $T_{R}$ were not realized up to time $t$. Dividing the sides of Eq. (\ref{32}) by the corresponding sides of Eq. (\ref{31}) yields the following hazard-function formulation of the input-to-output map $T\mapsto T_{R}$: \begin{equation} H_{R}\left( \tau n+u\right) =H\left( u\right) . \label{41} \end{equation} Eqs. (\ref{31})-(\ref{41}) provide different -- yet equivalent -- statistical formulations of the input-to-output map $T\mapsto T_{R}$. Indeed, in terms of their hazard functions, the input's and output's survival functions are given by $\bar{F}\left( t\right) =\exp \{-\int_{0}^{t}H(s)ds\}$ and $\bar{F}_{R}\left( t\right) =\exp \{-\int_{0}^{t}H_{R}(s)ds\}$ \cite{BP}-\cite{Dhi}. The hazard functions offer, via Eq. (\ref{41}), a most compact and neat formulation of this map. From a hazard-function perspective, the sharp-restart algorithm is described as follows: it takes the input's hazard function over the temporal interval $ 0 \leq t< \tau $, and it generates from this segment -- via periodic repetition -- the output's hazard function (see Fig. 1). \begin{figure}[t] \centering \includegraphics[width=8cm]{Figure1.pdf} \caption{Illustration of Eq. (\ref{41}), the hazard-function formulation of the input-to-output map $T\mapsto T_{R}$. Eq. (\ref{41}) is demonstrated via the example of a type-III Pareto input. The Pareto distributions, which comprise four types, are the principal models of statistical power-laws in science and engineering \cite{Par}-\cite{Arn}. The type-III Pareto input is characterized by the survival function $\bar{F}\left( t\right)=1/(1+t^{p})$, as well as by the hazard function $H(t)=pt^{p-1}/(1+t^{p})$, where $p$ is a positive parameter. Here, for the Pareto parameter $p=2$, we plot the input's hazard function in dashed black. Also, for the timer parameter $\tau=4$, we plot the output's hazard function in solid blue. Note that the solid blue curve is produced by taking the temporal segment $0\leq t<4 $ of the dashed black curve, and by repeating it periodically.} \label{Hazard_fig} \end{figure} \section{Fixed points} The \emph{fixed points} of the input-to-output map $T\mapsto T_{R}$ are inputs that are statistically invariant to the action of this map. Namely, inputs $T$ such that $T_{R}=T$, the equality being in law. We now set the focus on these fixed points. From a hazard-function perspective the fixed points are characterized by $H_R(t)=H(t)$ ($t\geq 0$). Consequently, using Eq. (\ref{41}), an input $T$ is a fixed point of the input-to-output map $T\mapsto T_{R}$ if and only if: \begin{equation} H\left( \tau n+u\right)=H\left( u\right) , \label{42} \end{equation} where $ n=0,1,2,\cdots $, and where $0\leq u<\tau $. There are two types of fixed points: \emph{specific} and \emph{general}. A specific fixed point is with respect to a specific timer $\tau $.
For a specific timer $\tau $ it is evident from Eq. (\ref{42}) that: the \emph{specific fixed points} of the input-to-output map $T\mapsto T_{R}$ are inputs that are characterized by periodic hazard functions with period $\tau $. A general fixed point is with respect to all timers $\tau $ \emph{simultaneously}. Eq. (\ref{42}) holds for all timers $% \tau $ simultaneously if and only if the hazard function is constant. In turn, constant hazard functions characterize Exponentially-distributed inputs \cite{BB}. Indeed, for a positive parameter $\lambda $ we have: $% H(t)=\lambda $ ($t\geq 0$) if and only if $\bar{F}\left( t\right) =\exp \left( -\lambda t\right) $ ($t\geq 0$). Hence, we assert that: the \emph{% general fixed points} of the input-to-output map $T\mapsto T_{R}$ are Exponentially-distributed inputs. Exponentially-distributed inputs are characterized by the \emph{memoryless property} \cite{BB}: $Pr(T>t+s|T>t)=Pr(T>s)$, for all $t \geq 0$ and $s\geq0$. It is evident from the memoryless property that applying the sharp-restart algorithm to Exponentially-distributed inputs will have no effect whatsoever on task-completion. Thus, the fact that Exponentially-distributed inputs are general fixed points of the input-to-output map $T\mapsto T_{R}$ follows also from the memoryless property. \section{\label{5} Stochastic dominance} Reliability engineering distinguishes two important classes of inputs \cite{BP}-\cite{Fin}: \emph{increasing failure rate} (IFR), and \emph{decreasing failure rate} (DFR). The IFR and DFR classes constitute, respectively, all inputs whose hazard functions are monotone increasing and monotone decreasing. In this section we examine the effect of the input-to-output map $T\mapsto T_{R}$ on these classes of inputs. The IFR class manifests the following statistical behavior: the longer we wait for an input $T$ to be realized -- the greater the likelihood that it will soon be realized. The lifespans of aging systems -- e.g. cars, planes, machines, and our own adult bodies -- are considered IFR. Namely, in aging systems the likelihood of system-failure grows as the age of the system grows. The DFR class manifests a statistical behavior that is antithetical to that of the IFR class. Specifically, for the DFR class: the longer we wait for an input $T$ to be realized -- the smaller the likelihood that it will soon be realized. The lifespans of technologies -- e.g. the English alphabet, the Gregorian calendar, the wheel, and the cutlery we use -- are considered DFR \cite{Tal}-\cite{Lind}. Indeed, the longer we have been using a technology, the more likely it is that we will keep on using it. \begin{figure}[t] \centering \includegraphics[width=8cm]{Figure2.pdf} \caption{Illustration of Eq. (\ref{41}) -- the hazard-function formulation of the input-to-output map $T\mapsto T_{R}$ -- in the case of IFR inputs. The IFR case is demonstrated via the example of a Gompertz input. The Gompertz distribution serves as a principal statistical model, in demography and in actuary, for adults' lifespans \cite{Gom}-\cite{PHG}; this distribution is generated by accelerating-change processes \cite{AccCha}, and is intimately related to Moore's law \cite{MooClo}. The Gompertz input is characterized by the survival function $\bar{F}\left( t\right) =\exp \left\{ -p\left[ \exp \left(t\right) -1\right] \right\} $, as well as by the monotone increasing hazard function $H\left( t\right) =p\exp \left( t\right) $, where $p$ is a positive parameter. 
Here, for the Gompertz parameter $p=2$, we plot the input's hazard function in dashed black. Also, for the timer parameter $\tau=4$, we plot the output's hazard function in solid blue. Note that, over the temporal ray $4<t<\infty$, the solid blue curve is strictly below the dashed black curve.} \label{Hazard_fig} \end{figure} For general inputs Eq. (\ref{41}) implies that $H_{R}\left( t\right) = H\left( t\right) $ for all $t \leq \tau $. For IFR and DFR inputs Eq. (\ref{41}) further yields the following pair of observations. If the input is IFR then $H_{R}\left( t\right) <H\left( t\right) $ for all $t>\tau $ (see Fig. 2). And, if the input is DFR then $H_{R}\left( t\right) >H\left( t\right) $ for all $t>\tau $ (see Fig. 3). As noted above, in terms of their hazard functions, the input's and output's survival functions are given by $\bar{F}\left( t\right) =\exp \{-\int_{0}^{t}H(s)ds\}$ and $\bar{F}_{R}\left( t\right) =\exp \{-\int_{0}^{t}H_{R}(s)ds\}$. Also, in terms of their survival functions, the input's and output's means are given by $\mathbf{E}\left[ T% \right] =\int_{0}^{\infty }\bar{F}\left( t\right) dt$ and $\mathbf{E}\left[ T_{R}\right] =\int_{0}^{\infty }\bar{F}_{R}\left( t\right) dt$. These survival-function formulae and mean formulae, combined together with the above IFR and DFR observations, yield the following pair of IFR and DFR results. \begin{figure}[t] \centering \includegraphics[width=8cm]{Figure3.pdf} \caption{Illustration of Eq. (\ref{41}) -- the hazard-function formulation of the input-to-output map $T\mapsto T_{R}$ -- in the case of DFR inputs. The DFR case is demonstrated via the example of a type-II Pareto input. As noted above, the Pareto distributions are the principal models of statistical power-laws in science and engineering \cite{Par}-\cite{Arn}. The type-II Pareto input is characterized by the survival function $\bar{F} \left( t\right) =1/(1+t)^{p}$, as well as by the monotone decreasing hazard function $H\left( t\right) =p/(1+t)$, where $p$ is a positive parameter. Here, for the Pareto parameter $p=2$, we plot the input's hazard function in dashed black. Also, for the timer parameter $\tau=4$, we plot the output's hazard function in solid blue. Note that, over the temporal ray $4<t<\infty$, the solid blue curve is strictly above the dashed black curve.} \label{Hazard_fig} \end{figure} \begin{enumerate} \item[$\bullet $] If the input is IFR then the output's survival function is larger than the input's survival function \begin{equation} \bar{F}_{R}\left( t\right) >\bar{F}\left( t\right) \label{51} \end{equation}% for all $t> \tau $; consequently, the output's mean is larger than the input's mean, $\mathbf{E}\left[ T_{R}\right] >\mathbf{E}\left[ T\right] $. \item[$\bullet $] If the input is DFR then the output's survival function is smaller than the input's survival function \begin{equation} \bar{F}_{R}\left( t\right) <\bar{F}\left( t\right) \label{52} \end{equation}% for all $t> \tau $; consequently, the output's mean is smaller than the input's mean, $\mathbf{E}\left[ T_{R}\right] <\mathbf{E}\left[ T\right] $. \end{enumerate} From a survival-function perspective, as well as from a mean perspective, these results assert that: the sharp-restart algorithm impedes task-completion in the case of IFR inputs, and expedites task-completion in the case of DFR inputs. The IFR and DFR results hold valid for all timers $% \tau $ simultaneously. Eqs. 
(\ref{51})-(\ref{52}) manifest \emph{stochastic dominance} \cite{SDN}-\cite{Levy}: that of the output $T_{R}$ over the input $T$ [Eq. (\ref{51})], and that of the input $T$ over the output $T_{R}$ [Eq. (\ref{52})]. \section{\label{6} Asymptotic stochastic dominance} The IFR and DFR results of the previous section enable an immediate determination of the impeding/expediting effect of sharp restart on task-completion. However, these results come with a caveat: they are not always applicable. Indeed, while many inputs are IFR (e.g. the Gompertz input of Fig. 2), and while many other inputs are DFR (e.g. the type-II Pareto input of Fig. 3), there are also many inputs that are neither IFR nor DFR (e.g. the type-III Pareto input\footnote{The hazard function of the type-III Pareto input, in the parameter range $p>1$, has a unimodal shape.} of Fig. 1). Can we, by modifying the setting underpinning the IFR and DFR results, obtain results that are applicable to \emph{all} inputs? The answer, as we shall argue and establish in this section, is affirmative. The IFR and DFR results of section \ref{5} focus on the input's and output's survival functions, $\bar{F}\left( t\right) $ and $\bar{F}_{R}\left( t\right) $, over the temporal ray $ \tau < t < \infty $. We now shift the focus from the temporal ray $ \tau < t < \infty $ to the temporal limit $t\rightarrow \infty $. Specifically, we now set the focus on the \emph{asymptotic tail-behavior}, relative to each other, of the input's and output's survival functions. To that end we use two `end terms' of the input's hazard function: zero-end and infinity-end. The zero-end term is the average \begin{equation} \bar{H}\left(\tau \right) =\frac{1}{\tau }\int_{0}^{\tau }H\left( t\right) dt \end{equation} of the input's hazard function over the temporal interval $ 0 \leq t < \tau $. The infinity-end term is the limit \begin{equation} H\left( \infty \right) =\lim_{t\rightarrow \infty }H\left( t\right) \end{equation} of the input's hazard function at infinity; we assume that this limit exists in the wide sense, i.e. $0\leq H\left( \infty \right) \leq \infty $. On the one hand, the survival-function formula $\bar{F}\left( t\right) =\exp \{-\int_{0}^{t}H(s)ds\}$ implies that the limit $H\left( \infty \right) $ affects the asymptotic tail-behavior of the input's survival function $\bar{F}\left(t\right)$. On the other hand, the survival-function formula $\bar{F}_{R}\left( t\right) =\exp \{-\int_{0}^{t}H_{R}(s)ds\}$ together with Eq. (\ref{41}) imply that the average $\bar{H}\left( \tau \right)$ affects the asymptotic tail-behavior of the output's survival function $ \bar{F}_{R}\left( t\right)$. In turn, we find that the relative asymptotic tail-behavior of the input's and output's survival functions is determined by the difference between the limit $H\left( \infty \right) $ and the average $\bar{H}\left( \tau \right)$ as follows: \begin{equation} \lim_{t\rightarrow \infty }\frac{1}{t}\ln \left[ \frac{\bar{F}_{R}\left( t\right) }{\bar{F}\left( t\right) }\right] =H\left( \infty \right) -\bar{H}\left( \tau \right) . \label{60} \end{equation} The proof of Eq. (\ref{60}) is detailed in the Methods. When $ H\left( \infty \right) <\infty $, an alternative way of formulating Eq. (\ref{60}) is: \begin{equation} \bar{F}_{R}\left( t\right)/\bar{F}\left( t\right) =\exp\left\{t[H\left( \infty \right) -\bar{H}\left( \tau \right) + \delta(t)]\right\}, \label{exp_approx} \end{equation} where $\delta(t)$ is a temporal function that vanishes at infinity, $\lim_{t\to\infty} \delta(t)=0$.
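As a quick numerical illustration of Eq. (\ref{60}) -- a minimal sketch in Python, not part of the analysis itself -- one can evaluate both sides for the type-III Pareto input of Fig. 1 (for which $H\left( \infty \right) =0$), working with logarithms of the survival functions to avoid numerical underflow:
\begin{verbatim}
import numpy as np

# Type-III Pareto input of Fig. 1: F_bar(t) = 1/(1+t^p), H(t) = p t^(p-1)/(1+t^p),
# here with p = 2 and timer tau = 4, so that H(infinity) = 0.
p, tau = 2.0, 4.0

def log_F_bar(t):                 # log of the input's survival function
    return -np.log(1.0 + t**p)

def log_F_bar_R(t):               # log of the output's survival function, via Eq. (31)
    n, u = divmod(t, tau)
    return n*log_F_bar(tau) + log_F_bar(u)

def H(t):                         # the input's hazard function
    return p*t**(p - 1)/(1.0 + t**p)

ts = np.linspace(0.0, tau, 100001)
H_bar = np.trapz(H(ts), ts)/tau   # average hazard over [0, tau]
rhs = 0.0 - H_bar                 # right-hand side of Eq. (60), with H(infinity) = 0

t_big = 1.0e4                     # left-hand side, evaluated at a large finite time
lhs = (log_F_bar_R(t_big) - log_F_bar(t_big))/t_big

print(rhs, lhs)                   # the two values agree ever more closely as t_big grows
\end{verbatim}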
As explained in the Methods, Eq. (\ref{60}) yields the following pair of asymptotic results. \begin{enumerate} \item[$\bullet $] If $\bar{H}\left( \tau \right) <H\left( \infty \right) $ then the output's survival function decays infinitely slower than the input's survival function:% \begin{equation} \lim_{t\rightarrow \infty }\frac{\bar{F}_{R}\left( t\right) }{\bar{F}\left( t\right) }=\infty . \label{61} \end{equation} \item[$\bullet $] If $\bar{H}\left( \tau \right) >H\left( \infty \right) $ then the output's survival function decays infinitely faster than the input's survival function:% \begin{equation} \lim_{t\rightarrow \infty }\frac{\bar{F}_{R}\left( t\right) }{\bar{F}\left( t\right) }=0. \label{62} \end{equation} \end{enumerate} From an asymptotic tail-behavior perspective these results assert when sharp restart dramatically impedes task-completion, and when it dramatically expedites task-completion. Eq. (\ref{61}) and Eq. (\ref{62}) are, respectively, the ``asymptotic stochastic dominance'' counterparts of Eq. (\ref{51}) and Eq. (\ref{52}). Last, we note that the asymptotic results of Eq. (\ref{61}) and Eq. (\ref{62}) are in full accord, respectively, with the IFR and DFR results of section \ref{5}. Indeed, if the input is IFR then its hazard function is monotone increasing, hence $\bar{H}\left( \tau \right) < H\left( \infty \right) $ for all timers $\tau $, and thus we conclude that: the asymptotic result of Eq. (\ref{61}) holds for all timers $\tau $ simultaneously. Similarly, if the input is DFR then its hazard function is monotone decreasing, hence $\bar{H}\left( \tau \right) > H\left( \infty \right) $ for all timers $\tau $, and thus we conclude that: the asymptotic result of Eq. (\ref{62}) holds for all timers $\tau $ simultaneously. \section{\label{7} Discussion} We now turn to discuss, in detail, the implications of the asymptotic stochastic-dominance results that were presented in the previous section. \subsection{\label{hl}The hazard limit} Evidently, the hazard-function's limit $H\left( \infty \right)$ plays a key role in the asymptotic results of section \ref{6}. There are three possible scenarios for this limit: zero, positive, and infinite. The `boundary scenarios' straightforwardly yield the following pair of `boundary corollaries'. \begin{enumerate} \item[$\bullet $] If $H\left( \infty \right) =\infty $ then the asymptotic result of Eq. (\ref{61}) holds for all timers $\tau $ simultaneously. \item[$\bullet $] If $H\left(\infty \right) =0$ then the asymptotic result of Eq. (\ref{62}) holds for all timers $\tau $ simultaneously.\footnote{Here we assume that the input's density function is positive-valued over the positive half-line: $f\left( t\right) >0$ for all $t>0$. In general, the scenario $H\left( \infty \right) =0$ implies that Eq. (\ref{62}) holds for all timers $\tau >t_{\ast }$ simultaneously, where $t_{\ast }=\inf \{t\geq 0$ $|$ $\bar{F}\left( t\right) <1\}$ is the lower bound of the input's admissible values.} \end{enumerate} The positive scenario, $0<H\left( \infty \right) <\infty $, is more intricate. In this scenario the asymptotic results of section \ref{6} need not apply simultaneously to all timers $\tau $. Namely (see Fig. 4): for some timer parameters we may have $\bar{H} \left( \tau \right) <H\left( \infty \right) $, yielding Eq. (\ref{61}); and for other timer parameters we may have $\bar{H}\left( \tau \right) >H\left( \infty \right) $, yielding Eq. (\ref{62}). 
Additional remarks regarding the intricacy of the positive scenario are detailed in the Methods. Last, we note that the limit $H\left( \infty \right)$ can be formulated also in terms of the negative logarithmic derivative of the input's density function: $G\left( t\right) =-f^{\prime }\left( t\right) /f\left( t\right) $. Indeed, assume that the input's density function vanishes at infinity, $\lim_{t\rightarrow \infty}f\left( t\right) =0$. Then, L'Hospital's rule implies that: $H\left( \infty \right) =G\left( \infty \right) $, where $G\left( \infty \right) =\lim_{t\rightarrow \infty }G\left( t\right) $. \begin{figure}[t] \centering \includegraphics[width=8cm]{Figure4.pdf} \caption{An example of the positive scenario $0<H\left( \infty \right) <\infty $, and an illustration of the optimization results. The example we use here is an input with hazard function $H(t)=(2t+t^2)/(1+t^2)$. We plot this hazard function -- whose limit is $ H\left( \infty \right) =1$ -- in dashed black. Also, we plot the corresponding average function, $\bar{H}\left(t \right) =\frac{1}{t }\int_{0}^{t }H\left( s \right) ds $, in solid orange. With regard to subsection \ref{hl}, note that: the solid orange curve has values that are smaller than the level $ H\left( \infty \right) =1$, as well as values that are larger than this level. With regard to subsection \ref{opti}, note that: the maximum of the solid orange curve is attained at the time point at which this curve intersects the dashed black curve; and that at this time point the dashed black curve is decreasing.} \label{positive_scenario} \end{figure} \subsection{Fast restart} In this subsection we address the case of `fast restart', i.e.: the application of the sharp-restart algorithm with small timers $\tau \ll1 $. To that end we note that L'Hospital's rule yields the following limit: \begin{equation} \lim_{\tau \rightarrow 0}\bar{H}\left( \tau \right) =H\left( 0\right) . \label{71} \end{equation} As the average $\bar{H}\left( \tau \right) $ is a continuous function of the timer parameter $\tau $, Eq. (\ref{71}) yields the following pair of `fast-restart corollaries'. \begin{enumerate} \item[$\bullet $] If $H\left( 0\right) <H\left( \infty \right) $ then there exist sufficiently small timers $\tau $ for which the asymptotic result of Eq. (\ref{61}) holds. \item[$\bullet $] If $H\left( 0\right) >H\left( \infty \right) $ then there exist sufficiently small timers $\tau $ for which the asymptotic result of Eq. (\ref{62}) holds. \end{enumerate} Note that, at zero, the value of the input's hazard function coincides with the value of the input's density function: $H\left( 0\right) =f\left( 0\right) $. This follows from the fact that, as the input $T$ is positive-valued, the value of its survival function at zero is one, $\bar{F}\left( 0\right) =1$. \subsection{Slow restart} Considering the positive scenario, $0<H\left( \infty \right) <\infty $, in this subsection we address the case of `slow restart', i.e.: the application of the sharp-restart algorithm with large timers $\tau \gg1 $. To that end we use the following limit-result: \begin{equation} \lim_{\tau \rightarrow \infty }\tau \left[ \bar{H}\left( \tau \right) -H\left( \infty \right) \right] =\int_{0}^{\infty }\left[ H\left( t\right) -H\left( \infty \right) \right] dt. \label{72} \end{equation} The derivation of Eq. (\ref{72}) is detailed in the Methods. As the average $\bar{H}\left( \tau \right) $ is a continuous function of the timer parameter $\tau $, Eq.
(\ref{72}) yields the following pair of `slow-restart corollaries'; in these corollaries $I$ denotes the integral appearing on the right-hand side of Eq. (\ref{72}). \begin{enumerate} \item[$\bullet $] If $I<0$ then there exist sufficiently large timers $\tau $ for which the asymptotic result of Eq. (\ref{61}) holds. \item[$\bullet $] If $I>0$ then there exist sufficiently large timers $\tau $ for which the asymptotic result of Eq. (\ref{62}) holds. \end{enumerate} \subsection{Existence} Considering the positive scenario, $0<H\left( \infty \right) <\infty $, in this subsection we investigate the very existence of timer parameters that either impede or expedite task-completion. To that end we use the following result: \begin{equation} \int_{0}^{\infty }\left[ \bar{H}\left( \tau \right) -H\left( \infty \right) \right] f_{1}\left( \tau \right) d\tau =\frac{1}{\mu }-H\left( \infty \right) , \label{73} \end{equation} where $\mu =\mathbf{E}\left[ T\right] $ is the input's mean, and where $f_{1}\left( \tau \right) =\frac{1}{\mu }\tau f\left( \tau \right) $. The proof of Eq. (\ref{73}) is detailed in the Methods.\footnote{In the proof we also show that if the limit $H\left( \infty \right) $ is positive then so is the input's mean $\mu $.} As the term $f_{1}\left( \tau \right) $ is non-negative valued, Eq. (\ref{73}) yields the following pair of `existence corollaries'. \begin{enumerate} \item[$\bullet $] If $H\left( \infty \right) >\frac{1}{\mu }$ then there exist timers $\tau $ for which the asymptotic result of Eq. (\ref{61}) holds. \item[$\bullet $] If $H\left( \infty \right) <\frac{1}{\mu }$ then there exist timers $\tau $ for which the asymptotic result of Eq. (\ref{62}) holds. \end{enumerate} \subsection{\label{opti}Optimization} Excluding the boundary scenario $H\left( \infty \right)=\infty $, in this subsection we address the optimization of the right-hand side of Eq. (\ref{60}). Specifically, Eq. (\ref{60}) yields the following pair of optimization observations. If impeding task-completion is a goal, then one would seek to minimize the average $\bar{H}\left( \tau \right) $. And, if expediting task-completion is a goal, then one would seek to maximize the average $\bar{H}\left( \tau \right) $. The local minima and the local maxima of the average $\bar{H}\left( \tau \right) $, as a function of the timer parameter $\tau $, are attained at its critical points: timers $\tau _{c}$ at which the average's derivative vanishes, $\bar{H}^{\prime }\left( \tau _{c}\right) =0$. A calculation detailed in the Methods implies that the average's derivative is given by $\bar{H}^{\prime }(\tau)=\frac{1}{\tau }\left[ H\left( \tau \right) -\bar{H}\left( \tau \right) \right]$. Consequently, we obtain that the critical points $\tau _{c}$ are the points at which the average $\bar{H}\left( \tau \right) $ intersects the input's hazard function: $\bar{H}^{\prime }\left( \tau _{c}\right) =0\Leftrightarrow \bar{H} \left( \tau _{c}\right) =H\left( \tau _{c}\right)$ (see Fig. 4). A calculation detailed in the Methods implies that, at its critical points, the second derivative of the average is given by $\bar{H}^{\prime \prime }(\tau_{c})=\frac{1}{\tau_{c} }H^{\prime }(\tau_{c})$. Thus, for a given critical point $\tau _{c}$, we obtain the following pair of optimization conclusions. If the input's hazard function is increasing at the critical point, $H^{\prime}\left( \tau _{c}\right) >0$, then this critical point yields a local minimum of the average $\bar{H}\left( \tau \right) $.
Analogously, if the input's hazard function is decreasing at the critical point, $H^{\prime}\left( \tau _{c}\right) <0$, then this critical point yields a local maximum of the average $\bar{H}\left( \tau \right) $ (see Fig. 4). Last, we note that the optimization conclusions can be formulated also in terms of the negative logarithmic derivative of the input's density function: $G\left( t\right) =-f^{\prime }\left( t\right) /f\left( t\right) $. Indeed, a calculation detailed in the Methods implies that the derivative of the input's hazard function admits the representation ${H}^{\prime }(t)=H(t)[H(t)-G(t)]$. Consequently -- assuming that the input's hazard function is positive at the critical point $\tau _{c}$ -- the aforementioned optimization conclusions admit the following formulations. Minimization conclusion: if $H(\tau _{c})>G(\tau _{c})$ then the critical point yields a local minimum of the average $\bar{H}\left( \tau \right) $. Maximization conclusion: if $H(\tau _{c})<G(\tau _{c})$ then the critical point yields a local maximum of the average $\bar{H}\left( \tau \right) $. \section{Conclusion} A central issue, in the context of the sharp-restart algorithm, is determining if this algorithm impedes or expedites task-completion. To date, this issue was investigated mainly via the average-behavior perspective: determining if the output's mean is larger than the input's mean, $\mathbf{E}\left[ T_{R}\right] >\mathbf{E}\left[ T\right] $; or if the output's mean is smaller than the input's mean, $\mathbf{E}\left[ T_{R}\right] <\mathbf{E}\left[ T\right] $. Evidently, the average-behavior perspective provides no insight regarding the occurrence likelihood of extremely large completion times. Using hazard rates, this paper shifted from the average-behavior perspective to a tail-behavior perspective. Firstly, a compact and neat hazard-rate formulation of the input-to-output map $T\mapsto T_{R}$ was presented, Eq. (\ref{41}). Secondly, using Eq. (\ref{41}), tail-dominance results -- for the classes of IFR and DFR inputs -- were established. Specifically, if an input is IFR then the output's survival function is larger than that of the input: $\bar{F}_{R}\left( t\right) > \bar{F}\left( t\right) $, over the ray $ \tau < t <\infty $. And, if an input is DFR then the output's survival function is smaller than that of the input: $\bar{F}_{R}\left( t\right) < \bar{F}\left( t\right) $, over the ray $ \tau < t <\infty $. These tail-dominance results were shown to induce corresponding mean results. Thirdly, focusing on the temporal limit $t\rightarrow \infty $, asymptotic tail-dominance results -- for all inputs -- were established. Specifically, general and explicit hazard-rate criteria asserted when the output's survival function decays infinitely slower than the input's survival function: $\lim_{t\rightarrow \infty }\bar{F}_{R}\left( t\right) /\bar{F}\left( t\right) =\infty$. And, the hazard-rate criteria also asserted when the output's survival function decays infinitely faster than the input's survival function: $\lim_{t\rightarrow \infty }\bar{F}_{R}\left( t\right) /\bar{F}\left( t\right) =0 $. The asymptotic tail-dominance results, as well as various corollaries of these results, are summarized in Tables I and II. 
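The input-to-output map of Eq. (\ref{21}) also lends itself to direct Monte Carlo checks of these tail-dominance results. The following is a minimal sketch (in Python; the type-II Pareto input of Fig. 3 with $p=2$, the timer $\tau=4$, and the thresholds below are merely illustrative choices): being DFR, this input should exhibit the tail thinning of Eq. (\ref{52}).
\begin{verbatim}
import numpy as np

# Monte Carlo sketch of the sharp-restart algorithm for a DFR input:
# type-II Pareto, survival F_bar(t) = 1/(1+t)^p, sampled by inverse transform.
rng = np.random.default_rng(0)
p, tau, N = 2.0, 4.0, 10**6

def draw_inputs(size):
    u = 1.0 - rng.random(size)          # uniform on (0, 1]
    return u**(-1.0/p) - 1.0            # type-II Pareto completion times

T = draw_inputs(N)                      # the inputs T

# Apply the sharp-restart algorithm of Eq. (21): halt on the first trial <= tau,
# each failed trial contributing tau to the halting time T_R.
T_R = np.zeros(N)
pending = np.arange(N)
trial = T.copy()
while pending.size:
    done = trial[pending] <= tau
    T_R[pending[done]] += trial[pending[done]]
    T_R[pending[~done]] += tau
    pending = pending[~done]
    trial[pending] = draw_inputs(pending.size)   # restart anew (Step III)

# Empirical tails: for this DFR input the output's tail is thinner (cf. Eq. (52)).
for t in (5.0, 10.0, 20.0):
    print(t, (T > t).mean(), (T_R > t).mean())
\end{verbatim}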
\newpage \begin{center} {\Large Table I} \bigskip \begin{tabular}{||l||l||l||} \hline\hline $% \begin{array}{c} \ \\ \textbf{Timer} \\ \ \end{array}% $ & $% \begin{array}{c} \ \lim_{t\rightarrow \infty }\frac{\bar{F}_{R}\left( t\right) }{\bar{F}\left( t\right) }=0 \ \end{array}% $ & $% \begin{array}{c} \ \\ \lim_{t\rightarrow \infty }\frac{\bar{F}_{R}\left( t\right) }{\bar{F}\left( t\right) }=\infty \\ \ \end{array}% $ \\ \hline\hline $% \begin{array}{c} \ \\ Specific \\ \ \end{array}% $ & $\frac{1}{\tau }\int_{0}^{\tau }H\left( t\right) dt>H\left( \infty \right) $ & $\frac{1}{\tau }\int_{0}^{\tau }H\left( t\right) dt<H\left( \infty \right) $ \\ \hline\hline $% \begin{array}{c} \ \\ All \\ \ \end{array}% $ & $H\left( t\right) $ decreasing & $H\left( t\right) $ increasing \\ \hline\hline $% \begin{array}{c} \ \\ All \\ \ \end{array}% $ & $H\left( \infty \right) =0$ & $H\left( \infty \right) =\infty $ \\ \hline\hline \end{tabular} \end{center} \textbf{Table I}: Summary of key asymptotic results. The table specifies, in terms of the input's hazard function $H(t)$, criteria leading to the limits $\lim_{t\rightarrow \infty }\bar{F}_{R}\left(t\right) /\bar{F}\left( t\right) =0$ and $\lim_{t\rightarrow \infty }\bar{F}_{R}\left( t\right) /\bar{F}\left( t\right) =\infty $. The criteria appearing in the first row apply to any specific (i.e. fixed) timer $\tau $. The criteria appearing in the second and third rows apply to all timers, $0< \tau < \infty$, simultaneously. See section \ref{6} for the details of these criteria. \bigskip \begin{center} {\Large Table II} \bigskip \begin{tabular}{||l||l||l||} \hline\hline $% \begin{array}{c} \ \\ \textbf{Timer} \\ \ \end{array}% $ & $% \begin{array}{c} \ \\ \lim_{t\rightarrow \infty }\frac{\bar{F}_{R}\left( t\right) }{\bar{F}\left( t\right) }=0 \\ \ \end{array}% $ & $% \begin{array}{c} \ \\ \lim_{t\rightarrow \infty }\frac{\bar{F}_{R}\left( t\right) }{\bar{F}\left( t\right) }=\infty \\ \ \end{array}% $ \\ \hline\hline $% \begin{array}{c} \ \\ General \\ \ \end{array}% $ & $ \frac{1}{\mu }> H\left( \infty \right)$ & $ \frac{1}{\mu } < H\left( \infty \right) $ \\ \hline\hline $% \begin{array}{c} \ \\ Small \\ \ \end{array}% $ & $H\left( 0\right) >H\left( \infty \right) $ & $H\left( 0\right) <H\left( \infty \right) $ \\ \hline\hline $% \begin{array}{c} \ \\ Large \\ \ \end{array}% $ & $\int_{0}^{\infty }\left[ H\left( t\right) -H\left( \infty \right) % \right] dt>0$ & $\int_{0}^{\infty }\left[ H\left( t\right) -H\left( \infty \right) \right] dt<0$ \\ \hline\hline \end{tabular} \end{center} \textbf{Table II}: Summary of key existence results for the scenario $0<H\left( \infty \right) <\infty $. The table specifies, in terms of the input's hazard function $H(t)$, criteria that determine the very existence of timers $\tau $ for which the limits $\lim_{t\rightarrow \infty }\bar{F}_{R}\left(t\right) /\bar{F}\left( t\right) =0$ and $\lim_{t\rightarrow \infty }\bar{F}_{R}\left( t\right) /\bar{F}\left( t\right) =\infty $ hold. First row: general timers, $0< \tau < \infty $. Second row: sufficiently small timers, $\tau \ll 1 $. Third row: sufficiently large timers, $\tau \gg 1 $. See section \ref{7} for the details of these criteria. \section{Methods} \subsection{Proof of Eq. (\protect\ref{60})} Consider the ratio \begin{equation} \rho \left( t\right) =\frac{\bar{F}_{R}\left( t\right) }{\bar{F}\left( t\right) } \label{A200} \end{equation}% at the time $t=n\tau +u$ where: $n$ is a non-negative integer, and $u$ is a fixed value in the range $0\leq u<\tau $. 
As the input's and output's survival functions are monotone decreasing we have% \begin{equation} \bar{F}\left[ \left( n+1\right) \tau \right] \leq \bar{F}\left( n\tau +u\right) \leq \bar{F}\left( n\tau \right) , \label{A201} \end{equation}% and \begin{equation} \bar{F}_{R}\left[ \left( n+1\right) \tau \right] \leq \bar{F}_{R}\left( n\tau +u\right) \leq \bar{F}_{R}\left( n\tau \right) . \label{A202} \end{equation}% In turn, Eqs. (\ref{A201})-(\ref{A202}) imply that% \begin{equation} \frac{\bar{F}_{R}\left[ \left( n+1\right) \tau \right] }{\bar{F}\left( n\tau \right) }\leq \rho \left( n\tau +u\right) \leq \frac{\bar{F}_{R}\left( n\tau \right) }{\bar{F}\left[ \left( n+1\right) \tau \right] } . \label{A203} \end{equation} In terms of the input's hazard function, the input's survival function is given by $\bar{F}\left( t\right) =\exp \{-\int_{0}^{t}H(s)ds\}$. Hence, using Eq. (\ref{31}): for the lower bound appearing on the left-hand side of \eref{A203} we have \begin{equation} \frac{\bar{F}_{R}\left[ \left( n+1\right) \tau \right] }{\bar{F}\left( n\tau \right) }=\frac{\bar{F}\left( \tau \right) ^{n+1}}{\bar{F}\left( n\tau \right) }=\frac{\exp \left[ -\left( n+1\right) \int_{0}^{\tau }H\left( s\right) ds\right] }{\exp \left[ -\int_{0}^{n\tau }H\left( s\right) ds\right] } , \label{A211} \end{equation}% and for the upper bound appearing on the right-hand side of \eref{A203} we have \begin{equation} \frac{\bar{F}_{R}\left( n\tau \right) }{\bar{F}\left[ \left( n+1\right) \tau % \right] }=\frac{\bar{F}\left( \tau \right) ^{n}}{\bar{F}\left[ \left( n+1\right) \tau \right] }=\frac{\exp \left[ -n\int_{0}^{\tau }H\left( s\right) ds\right] }{\exp \left[ -\int_{0}^{\left( n+1\right) \tau }H\left( s\right) ds\right] }. \label{A212} \end{equation}% In turn, Eq. (\ref{A203}) and Eqs. (\ref{A211})-(\ref{A212}) imply that% \begin{equation} \left. \begin{array}{l} \int_{0}^{n\tau }H\left( s\right) ds-\left( n+1\right) \int_{0}^{\tau }H\left( s\right) ds \\ \ \\ \leq \ln \left[ \rho \left( n\tau +u\right) \right] \\ \ \\ \leq \int_{0}^{\left( n+1\right) \tau }H\left( s\right) ds-n\int_{0}^{\tau }H\left( s\right) ds. \end{array}% \right. \label{A213} \end{equation} Introduce the average \begin{equation} \bar{H}\left( l\right) =\frac{1}{l}\int_{0}^{l}H\left( s\right) ds \label{A221} \end{equation}% ($l>0$). In terms of this average we can re-write the left-hand side and the right-hand side of Eq. (\ref{A213}) as follows:% \begin{equation} \left. \begin{array}{l} \int_{0}^{n\tau }H\left( s\right) ds-\left( n+1\right) \int_{0}^{\tau }H\left( s\right) ds \\ \ \\ =n\tau \left[ \frac{1}{n\tau }\int_{0}^{n\tau }H\left( s\right) ds-\frac{n+1% }{n}\frac{1}{\tau }\int_{0}^{\tau }H\left( s\right) ds\right] \\ \ \\ =n\tau \left[ \bar{H}\left( n\tau \right) -\frac{n+1}{n}\bar{H}\left( \tau \right) \right] , \end{array}% \right. \label{A222} \end{equation}% and% \begin{equation} \left. \begin{array}{l} \int_{0}^{\left( n+1\right) \tau }H\left( s\right) ds-n\int_{0}^{\tau }H\left( s\right) ds \\ \ \\ =\left( n+1\right) \tau \left[ \frac{1}{\left( n+1\right) \tau }% \int_{0}^{\left( n+1\right) \tau }H\left( s\right) ds-\frac{n}{n+1}\frac{1}{% \tau }\int_{0}^{\tau }H\left( s\right) ds\right] \\ \ \\ =\left( n+1\right) \tau \left\{ \bar{H}\left[ (n+1)\tau \right] -\frac{n}{n+1% }\bar{H}\left( \tau \right) \right\} .% \end{array}% \right. 
\label{A223} \end{equation} L'Hospital's rule implies that% \begin{equation} \lim_{l\rightarrow \infty }\bar{H}\left( l\right) =\lim_{l\rightarrow \infty }\frac{1}{l}\int_{0}^{l}H\left( s\right) ds=\lim_{l\rightarrow \infty }H\left( l\right) =H\left( \infty \right) . \label{A231} \end{equation}% Eq. (\ref{A231}) implies that% \begin{equation} \lim_{n\rightarrow \infty }\left[ \bar{H}\left( n\tau \right) -\frac{n+1}{n}% \bar{H}\left( \tau \right) \right] =H\left( \infty \right) -\bar{H}\left( \tau \right) , \label{A232} \end{equation}% and that% \begin{equation} \lim_{n\rightarrow \infty }\left\{ \bar{H}\left[ (n+1)\tau \right] -\frac{n}{% n+1}\bar{H}\left( \tau \right) \right\} =H\left( \infty \right) -\bar{H}% \left( \tau \right) . \label{A233} \end{equation} Eq. (\ref{A213}) and Eqs. (\ref{A222})-(\ref{A223}) imply that \begin{equation} \left. \begin{array}{l} \frac{n\tau }{n\tau +u}\left[ \bar{H}\left( n\tau \right) -\frac{n+1}{n}\bar{% H}\left( \tau \right) \right] \\ \ \\ \leq \frac{\ln \left[ \rho \left( n\tau +u\right) \right] }{n\tau +u} \\ \ \\ \leq \frac{\left( n+1\right) \tau }{n\tau +u}\left\{ \bar{H}\left[ (n+1)\tau % \right] -\frac{n}{n+1}\bar{H}\left( \tau \right) \right\} . \end{array}% \right. \label{A234} \end{equation}% Taking the limit $n\rightarrow \infty $ in Eq. (\ref{A234}), while using the limits of Eqs. (\ref{A232})-(\ref{A233}), yields% \begin{equation} H\left( \infty \right) -\bar{H}\left( \tau \right) \leq \lim_{n\rightarrow \infty }\frac{\ln \left[ \rho \left( n\tau +u\right) \right] }{n\tau +u}\leq H\left( \infty \right) -\bar{H}\left( \tau \right) . \label{A235} \end{equation}% As Eq. (\ref{A235}) holds for any fixed value $u$ (in the range $0\leq u<\tau $) it proves Eq. (\ref{60}):% \begin{equation} \lim_{t\rightarrow \infty }\frac{1}{t}\ln \left[ \rho \left( t\right) \right] =H\left( \infty \right) -\bar{H}\left( \tau \right) . \label{A236} \end{equation} \subsection{Proofs of Eqs. (\protect\ref{61}) and (\protect\ref{62})} The proof of Eq. (\protect\ref{60}) yielded Eq. (\ref{A235}). As we shall now argue, Eq. (\ref{A235}) leads to Eqs. (\protect\ref{61}) and (\protect\ref{62}). If $\bar{H}\left( \tau \right) <H\left( \infty \right) $ then Eq. (\ref{A235}% ) implies that% \begin{equation} \infty \leq \lim_{n\rightarrow \infty }\ln \left[ \rho \left( n\tau +u\right) \right] \leq \infty , \label{A243} \end{equation}% and hence \begin{equation} \lim_{n\rightarrow \infty }\rho \left( n\tau +u\right) =\infty . \label{A244} \end{equation}% As Eq. (\ref{A244}) holds for any fixed value $u$ (in the range $0\leq u<\tau $) it proves Eq. (\ref{61}):% \begin{equation} \lim_{t\rightarrow \infty }\rho \left( t\right) =\infty . \label{A245} \end{equation} If $\bar{H}\left( \tau \right) >H\left( \infty \right) $ then Eq. (\ref{A235}% ) implies that% \begin{equation} -\infty \leq \lim_{n\rightarrow \infty }\ln \left[ \rho \left( n\tau +u\right) \right] \leq -\infty , \label{A253} \end{equation}% and hence \begin{equation} \lim_{n\rightarrow \infty }\rho \left( n\tau +u\right) =0 . \label{A254} \end{equation}% As Eq. (\ref{A254}) holds for any fixed value $u$ (in the range $0\leq u<\tau $) it proves Eq. (\ref{62}):% \begin{equation} \lim_{t\rightarrow \infty }\rho \left( t\right) =0. \label{A255} \end{equation} \subsection{The scenario $0<H\left( \infty \right) <\infty $} Consider the scenario $0<H\left( \infty \right) <\infty $.
In this scenario one may intuitively assume that the input's survival function is asymptotically exponential: \begin{equation} \bar{F}\left( t\right) \approx \exp [-H\left(\infty \right) t] , \label{76} \end{equation}% where the asymptotic equivalence is in the limit $t\rightarrow \infty $. However -- as we shall now argue -- Eq. (\ref{76}) does \emph{not} hold in general. Using the representation of the input's survival function in terms of its hazard function, $\bar{F}\left( t\right) =\exp \{-\int_{0}^{t}H(s)ds\}$, we have: \begin{equation} \left. \begin{array}{l} \lim_{t\rightarrow \infty }\left\{ \exp \left[ H\left( \infty \right) t% \right] \cdot \bar{F}\left( t\right) \right\} \\ \ \\ =\lim_{t\rightarrow \infty }\left\{ \exp \left[ \int_{0}^{t}H\left( \infty \right) ds\right] \cdot \exp \left[ -\int_{0}^{t}H(s)ds\right] \right\} \\ \ \\ =\lim_{t\rightarrow \infty }\exp \left\{ -\int_{0}^{t}\left[ H\left( s\right) -H\left( \infty \right) \right] ds\right\} \\ \ \\ =\exp \left\{ -\lim_{t\rightarrow \infty }\int_{0}^{t}\left[ H\left( s\right) -H\left( \infty \right) \right] ds\right\} \\ \ \\ =\exp \left\{ -\int_{0}^{\infty }\left[ H\left( s\right) -H\left( \infty \right) \right] ds\right\} .% \end{array}% \right. \label{A270} \end{equation}% Consequently, denoting by $I=\int_{0}^{\infty }\left[ H\left( s\right) -H\left( \infty \right) \right] ds$ the integral appearing on the bottom line of Eq. (\ref{A270}), we assert that: Eq. (\ref{76}) holds if and only if the integral $I$ is convergent, $-\infty <I<\infty $. As an illustrative example consider an input $T$ whose statistical distribution is governed by the survival function $\bar{F}\left( t\right) =\left( 1+t\right) ^{1-p}\exp (-t)$ ($t\geq 0$), where $p$ is a positive parameter. In turn, the input's hazard function is $H\left( t\right) =(p+t)/(1+t)$ ($t\geq 0$), and hence $H\left( \infty \right) =1$. For all parameter values $p\neq 1$ the input's survival function is \emph{not} asymptotically exponential, and indeed: $I=-\infty $ when $p<1$, and $I=\infty $ when $p>1$. On the other hand, for $p=1$ the input's survival function is exponential, and we have $I=0$. \subsection{Proofs of Eqs. (\ref{72}) and (\ref{73})} Considering the scenario $0<H\left( \infty \right) <\infty $, note that% \begin{equation} \left. \begin{array}{l} \tau \left[ \bar{H}\left( \tau \right) -H\left( \infty \right) \right] =\tau % \left[ \frac{1}{\tau }\int_{0}^{\tau }H\left( s\right) ds\right] -\tau H\left( \infty \right) \\ \ \\ =\int_{0}^{\tau }H\left( s\right) ds-\int_{0}^{\tau }H\left( \infty \right) ds=\int_{0}^{\tau }\left[ H\left( s\right) -H\left( \infty \right) \right] ds. \end{array}% \right. \label{A271} \end{equation}% In turn, taking the limit $\tau \rightarrow \infty $ in Eq. (\ref{A271}) yields Eq. (\ref{72}):% \begin{equation} \lim_{\tau \rightarrow \infty }\tau \left[ \bar{H}\left( \tau \right) -H\left( \infty \right) \right] =\int_{0}^{\infty }\left[ H\left( s\right) -H\left( \infty \right) \right] ds. \label{A272} \end{equation} We now move to the proof of Eq. (\ref{73}). Set $K\left( t\right) $ to be the running integral of the input's hazard function, i.e.% \begin{equation} K\left( t\right) =\int_{0}^{t}H\left( s\right) ds \label{A273} \end{equation}% ($t\geq 0$). 
As the input's survival function $\bar{F}\left( t\right) $ decreases monotonically from $\bar{F}\left( 0\right) =1$ to $% \lim_{t\rightarrow \infty }\bar{F}\left( t\right) =0$, and as $\bar{F}\left( t\right) =\exp \left[ -K\left( t\right) \right] $, we obtain that: the function $K\left( t\right) $ increases monotonically from $K\left( 0\right) =0$ to $\lim_{t\rightarrow \infty }K\left( t\right) =\infty $. Assume that there exists a positive level $l_{\ast }$ above which the input's density function is positive-valued: $f\left( t\right) >0$ for all $t>l_{\ast }$. Note that, over the ray $\left( l_{\ast },\infty \right) $, the function $K\left( t\right) $ has an inverse function $K^{-1}\left( \cdot \right) $. Set an arbitrary level $l>l_{\ast }$. In terms of the input's survival function the input's mean $\mu =\mathbf{E}\left[ T\right] $ admits the representation% \begin{equation} \mu =\int_{0}^{\infty }\bar{F}\left( t\right) dt=\int_{0}^{l}\bar{F}\left( t\right) dt+\int_{l}^{\infty }\bar{F}\left( t\right) dt. \label{A274} \end{equation}% Using the fact that $\bar{F}\left( t\right) =\exp \left[ -K\left( t\right) % \right] $, and the change-of-variables $u=K\left( t\right) $, we have% \begin{equation} \left. \begin{array}{l} \int_{l}^{\infty }\bar{F}\left( t\right) dt=\int_{l}^{\infty }\exp \left[ -K\left( t\right) \right] dt \\ \ \\ =\int_{K(l)}^{\infty }\exp \left( -u\right) \frac{1}{H\left[ K^{-1}\left( u\right) \right] }du. \end{array}% \right. \label{A275} \end{equation}% Also, note that \begin{equation} \lim_{u\rightarrow \infty }H\left[ K^{-1}\left( u\right) \right] =\lim_{t\rightarrow \infty }H\left( t\right) =H\left( \infty \right). \label{A276} \end{equation}% If $0<H\left( \infty \right) <\infty $ then Eq. (\ref{A276}) implies that the integral appearing on the right-hand side of Eq. (\ref{A275}) is convergent. Consequently, we obtain the following implication: $0<H\left( \infty \right) <\infty \Rightarrow 0<\mu <\infty$. Considering the scenario $0<H\left( \infty \right) <\infty $, introduce the function \begin{equation} f_{1}\left( t\right) =\frac{1}{\mu }tf\left( t\right) \label{A260} \end{equation}% ($t\geq 0$). Note that $f_{1}\left( t\right) $ is a density function: it is non-negative, $f_{1}\left( t\right) \geq 0$; and it is normalized, $% \int_{0}^{\infty }f_{1}\left( t\right) dt=1$. In turn, note that \begin{equation} \left. \begin{array}{l} \int_{0}^{\infty }\left[ \bar{H}\left( \tau \right) -H\left( \infty \right) % \right] f_{1}\left( \tau \right) d\tau \\ \ \\ =\int_{0}^{\infty }\bar{H}\left( \tau \right) f_{1}\left( \tau \right) d\tau -\int_{0}^{\infty }H\left( \infty \right) f_{1}\left( \tau \right) d\tau \\ \ \\ =\frac{1}{\mu }\int_{0}^{\infty }\bar{H}\left( \tau \right) \left[ \tau f\left( \tau \right) \right] d\tau -H\left( \infty \right) . \end{array}% \right. \label{A261} \end{equation}% Using the definition of the average $\bar{H}\left( \tau \right) $, as well as the definitions of the input's hazard function and survival function, we have% \begin{equation} \left. 
\begin{array}{l} \int_{0}^{\infty }\bar{H}\left( \tau \right) \left[ \tau f\left( \tau \right) \right] d\tau =\int_{0}^{\infty }\left[ \frac{1}{\tau }% \int_{0}^{\tau }H\left( s\right) ds\right] \left[ \tau f\left( \tau \right) % \right] d\tau \\ \ \\ =\int_{0}^{\infty }\left[ \int_{0}^{\tau }H\left( s\right) ds\right] f\left( \tau \right) d\tau =\int_{0}^{\infty }H\left( s\right) \left[ \int_{s}^{\infty }f\left( \tau \right) d\tau \right] ds \\ \ \\ =\int_{0}^{\infty }H\left( s\right) \bar{F}\left( s\right) ds=\int_{0}^{\infty }\frac{f\left( s\right) }{\bar{F}\left( s\right) }\bar{F}% \left( s\right) ds \\ \ \\ =\int_{0}^{\infty }f\left( s\right) ds=1. \end{array}% \right. \label{A262} \end{equation}% Substituting Eq. (\ref{A262}) into the right-hand side of Eq. (\ref{A261}) yields Eq. (\ref{73}):% \begin{equation} \int_{0}^{\infty }\left[ \bar{H}\left( \tau \right) -H\left( \infty \right) % \right] f_{1}\left( \tau \right) d\tau =\frac{1}{\mu }-H\left( \infty \right) . \label{A263} \end{equation} \subsection{Optimization calculations} Evidently, the average $\bar{H}\left( \tau \right) =\frac{1}{\tau }\int_{0}^{\tau }H\left( t\right)$ is a function of the timer $0<\tau<\infty$. Differentiating the average with respect to the timer yields \begin{equation} \left. \begin{array}{l} \bar{H}^{\prime }\left( \tau \right) =\frac{H\left( \tau \right) \tau -\int_{0}^{\tau }H\left( t\right) dt}{\tau ^{2}} \\ \ \\ =\frac{1}{\tau }\left[ H\left( \tau \right) -\bar{H}\left( \tau \right) % \right] . \end{array}% \right. \label{A301} \end{equation} Differentiating Eq. (\ref{A301}) with respect to the timer further yields \begin{equation} \left. \begin{array}{l} \bar{H}^{\prime \prime }\left( \tau \right) =\frac{\left[ H^{\prime }\left( \tau \right) -\bar{H}^{\prime }\left( \tau \right) \right] \tau -\left[ H\left( \tau \right) -\bar{H}\left( \tau \right) \right] }{\tau ^{2}} \\ \ \\ =\frac{1}{\tau }\left[ H^{\prime }\left( \tau \right) -2\bar{H}^{\prime }\left( \tau \right) \right] . \end{array}% \right. \label{A303} \end{equation}% Hence, at critical timers -- $\tau_{c}$ that satisfy $\bar{H}^{\prime}(\tau_{c})=0$ -- we have: \begin{equation} \bar{H}^{\prime \prime }(\tau_{c})=\frac{1}{\tau_{c} }H^{\prime }(\tau_{c}). \end{equation} Last, consider the negative logarithmic derivative of the input's density function, $G\left( t\right) =-f^{\prime }\left( t\right) /f\left( t\right) $. Using the function $G(t)$, the derivative of the input's hazard function admits the following representation: \begin{equation} \left. \begin{array}{l} H^{\prime }\left( t\right) =\left[ \frac{f\left( t\right) }{\bar{F}\left( t\right) }\right] ^{\prime } \\ \ \\ =\frac{f^{\prime }\left( t\right) \bar{F}\left( t\right) -f\left( t\right) % \left[ -f\left( t\right) \right] }{\bar{F}\left( t\right) ^{2}} \\ \ \\ =\frac{f\left( t\right) }{\bar{F}\left( t\right) }\frac{f^{\prime }\left( t\right) }{f\left( t\right) }+\left[ \frac{f\left( t\right) }{\bar{F}\left( t\right) }\right] ^{2} \\ \ \\ =H\left( t\right) \left[ -G\left( t\right) \right] +H\left( t\right) ^{2} \\ \ \\ =H\left( t\right) \left[ H\left( t\right) -G\left( t\right) \right]. \end{array}% \right. \label{A305} \end{equation} \newpage \textbf{Acknowledgments}. The authors thanks Ofek Lauber Bonomo for help with preprations of figures in this paper. 
Shlomi Reuveni acknowledges support from the Azrieli Foundation, from the Raymond and Beverly Sackler Center for Computational Molecular and Materials Science at Tel Aviv University, and from the Israel Science Foundation (grant No. 394/19).
\section{Introduction} \IEEEPARstart{I}{n} power flow studies, the linearized dc power flow (DCPF) model offers compelling computational advantages over the nonlinear ac power flow (ACPF) model based on the Newton-Raphson (NR) method, which requires iterative solutions. However, the DCPF model results become more inaccurate in the cases where its assumptions no longer hold true, e.g. with high \textit{R/X} ratios, large phase angles and heavy or light loads present \cite{dcpf}. Additionally, it can be difficult to find the initial conditions for NR based ACPF to converge \cite{nr}. This paper presents a framework that produces initial conditions that reduce the ACPF iterations and solution times, and is generalizable to grid topologies of different sizes. We use feed-forward artificial neural networks, specifically, one dimensional convolutional neural networks (1D CNNs) to achieve these goals. Feed-forward neural networks with nonlinear activation functions can be used to approximate any continuous functions \cite{cnn}. CNNs, in particular, can capture the local features of interest more effectively than other feed-forward neural networks such as Multilayer Perceptrons (MLPs), as proven in many computer vision applications \cite{imagenet}. In the context of this paper, an example of such a local feature is the small voltage angle difference between neighboring buses. Additionally, we choose the 1D CNNs since the signals in the buses we are interested in (real and reactive power, voltage magnitude and phase) can be represented as vectors. For 1D CNNs trained on DCPF results as input data and corresponding ACPF results as ground truth values, our goal is to produce bus voltage values that, when used as initial conditions to run NR ACPF, result in lower solution iterations and time compared to cold-start (also known as ``flat start") conditions, i.e., $1.0\angle 0.0^{\circ}$ for all load (PQ) bus voltages \cite{overbye}, or warm-start conditions such as the ones generated by DCPF or past solutions. Our proposed method considers only the fluctuations of PQ bus demands, i.e., we vary the real and reactive power demand levels at each load bus and solve for the voltage magnitude and phase at each bus, for a specific set of load bus demands. Fluctuations in the generator (PV) buses, e.g. real power injection variations from wind generation or changes in bus voltage magnitudes are not included in our data, but can be incorporated relatively easily in future studies. In Section \ref{s2}, we review some related studies and provide a high-level description of the proposed model. We then present the data generation process, CNN training and the hot-start ACPF procedure in detail in Sections \ref{s3} and \ref{s4}. Finally, in Section \ref{s5}, we present some results based on the IEEE 118-bus and the \textsc{Pegase} 2869-bus systems \cite{pegase1, pegase2} available from \textsc{Matpower} \cite{matpower}. We conclude by giving a short summary and pointing out some limitations of our proposed method, as well as potential directions for future studies on this topic. \\ \section{Background and Proposed Method}\label{s2} \subsection{Related Work}\label{s2-a} There have been attempts at either improving the DCPF results or directly predicting ACPF results using artificial neural networks, specifically MLPs, in the past \cite{annpf, saudi, small}. 
However, they suffer from one or more of the following deficiencies: insufficient dataset size, poorly justified MLP input feature selection which could potentially lead to numerical instability during training, arbitrary and/or unclear performance criteria, and small system sizes where full ACPF can be easily and efficiently computed. Aside from these, there have also been studies on solving or reducing the optimal power flow problem using artificial neural networks \cite{opf-gnn, opf-meta, deepOpf}. Compared to the MLPs that we trained on the same dataset (formatted differently from as shown in Section \ref{s3-c}, for implementation purposes), 1D CNNs are capable of producing results with $\Delta\mathcal{L}$ (see Equation \ref{delL}) that are almost 10 times smaller than those produced by MLPs. Generally speaking, although 1D CNNs take longer to train, they often have fewer model parameters, thus, lower memory requirement than MLPs to achieve the same or better results. This fact can be a major advantage for extremely large systems. \subsection{Proposed Method}\label{s2-b} Our proposed method is as follows. Suppose that for a specific system with $L$ buses, we have $N$ different load conditions which can be solved by warm-start ACPF. We need to find the respective load bus voltage magnitude and phase values that meet the mismatch tolerance for these load conditions. Let $\mathcal{N}$ represent this set of $N$ load conditions. First, we take a subset of $\mathcal{N}$, denoted $\mathcal{W}$ for ``warm-start," and let the remaining $\mathcal{N \setminus W}$ be $\mathcal{H}$ for ``hot-start." Let $T = \vert \mathcal{W} \vert$, i.e., the number of load conditions in $\mathcal{W}$. Next, we compute the DCPF results for all load conditions in $\mathcal{N}$, and compute ACPF results with DCPF results as initial conditions (i.e., warm-start) for all load conditions in $\mathcal{W}$. We then use the DCPF and ACPF results corresponding to the load conditions in $\mathcal{W}$, as input data and output targets to train the 1D CNNs. Finally, for load conditions in $\mathcal{H}$, for which we only have the DCPF results, we produce the hot-start conditions for them by passing their DCPF results and corresponding load conditions into the trained 1D CNNs, and compute the ACPF results for these load conditions with the hot-start conditions. This process is shown in Figure \ref{flowchart}. Once the 1D CNNs are trained, any new load conditions from the same system but are not in $\mathcal{N}$, would follow the path that the load conditions in $\mathcal{H}$ take in Figure \ref{flowchart} --- first compute the DCPF results, then use the trained 1D CNNs to generate the initial conditions to compute the ACPF results. \begin{figure}[H] \centering \includegraphics[trim={0cm 1cm 17cm .75cm},clip,scale=0.575,center]{flowchart.pdf} \caption{Proposed method} \label{flowchart} \end{figure} The reason to treat the CNN-predicted values as hot-start conditions for ACPF instead of as finished products is so that we have a fair comparison between our method and the ACPF model with warm-start conditions, since both, if they converge successfully, will have a final mismatch within the same tolerance level. Clearly, since ACPF is computationally expensive, and neural network training with more data takes more time (empirically, quasi-linearly on a single GPU), we would like to find the minimum $T$ that provides reasonably good hot-start conditions for load conditions in $\mathcal{H}$ on the chosen CNN architecture. 
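To make the flow of Figure \ref{flowchart} concrete, the following minimal Python sketch traces the same steps with stand-in functions in place of \textsc{Matpower}'s \texttt{rundcpf}/\texttt{runpf} and of the trained 1D CNNs; the stand-in functions, the array shapes, and the value of $T$ are placeholders for illustration only, not part of our implementation.
\begin{verbatim}
import numpy as np

# Stand-ins for the real components (MATPOWER solvers and the trained 1D CNNs).
rng = np.random.default_rng(0)

def run_dcpf(load):                 # placeholder for rundcpf
    return np.zeros_like(load)

def run_acpf(load, init):           # placeholder for runpf started from `init`
    return init + 0.01 * load

def train_models(dc_results, ac_results):   # placeholder for training the V / Theta models
    return lambda dc: dc                     # identity "model", illustration only

N, L, T = 1000, 118, 300
loads = rng.normal(size=(N, L))                      # N load conditions on L buses
dc = np.stack([run_dcpf(l) for l in loads])          # DCPF for every condition in N

# Warm-start ACPF only for the T conditions in W; these become the training targets.
ac_W = np.stack([run_acpf(l, d) for l, d in zip(loads[:T], dc[:T])])
model = train_models(dc[:T], ac_W)

# For the remaining conditions in H, predict hot-start conditions, then run ACPF from them.
hot_init = np.stack([model(d) for d in dc[T:]])
ac_H = np.stack([run_acpf(l, i) for l, i in zip(loads[T:], hot_init)])
\end{verbatim}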
We also point out that in this study, we are investigating the feasibility of this approach by evaluating its effectiveness on the IEEE 118-bus and, more realistically, the larger \textsc{Pegase} 2869-bus systems; we are not focused on devising the best possible CNN architecture, which will be discussed in Section \ref{s4}. \\ \section{Data Generation}\label{s3} \subsection{Load Fluctuation Modeling}\label{s3-a} As discussed in Section \ref{s2-b}, the first step to create our dataset is to generate load demand fluctuations for all PQ buses in the system. For the $i$-th PQ bus in a \textsc{Matpower} case, we extract the default real power demand value, $P_i$, as the mean real power demand, and compute the corresponding standard deviation $\sigma_{i}$ as follows (from \cite{pqvar}): \begin{equation}\label{eq1} \sigma_{i} = 5.44130 + 0.17459\sqrt{|P_i|}+0.001673{|P_i|} \end{equation} Next, we generate $N$ samples from the Gaussian distribution $\mathcal{N}(P_i,\, \sigma^2_{i})$ to create the real power demand fluctuations $[P^{(1)}_i,\, P^{(2)}_i,\, ...,\, P^{(N)}_i]$ for the $i$-th bus if it is a PQ bus; otherwise we keep the default value, i.e., $P^{(k)}_i = P_i$ for $1 \leq k \leq N$. We generate the demand realizations for each bus independently and with a fixed random seed for reproducibility. We can now represent our real power demand fluctuations for all $L$ buses in a given system as a matrix: \[ \mathbf{P} = \begin{bmatrix} P^{(1)}_1 & \dots & P^{(N)}_1 \\ \vdots & \ddots & \vdots \\ P^{(1)}_L & \dots & P^{(N)}_L \end{bmatrix} \] To solve the power flow problem, the known values for each load bus are real power P and reactive power Q. Therefore, we now need to generate the \textbf{Q} matrix to similarly represent the fluctuations in reactive power. First, we generate p.f., a vector containing $N$ samples of lagging power factor, with a Gaussian distribution $\mathcal{N}(\mu=1.0,\, \sigma=0.05)$ truncated between $[0.7, 1.0]$. The choices of $\mu$ and $\sigma$ are based on the distributions of power factor values for all PQ buses in the \textsc{Matpower} cases. We choose the truncation lower bound of 0.7 because a utility would step in and fix power factors lower than that (e.g., by penalizing businesses to discourage low power factors) to avoid loss \cite{pf, lowpf}. We then calculate each entry of the \textbf{Q} matrix as follows: \begin{equation}\label{eq2} Q^{(k)}_i = P^{(k)}_i \cdot \tan (\arccos(\text{p.f.}^{(k)})) \\ \end{equation} Similar to the $\mathbf{P}$ matrix, we keep the default value for $Q^{(k)}_i$, $1 \leq k \leq N$, if the $i$-th bus is not a PQ bus. \subsection{Power Flow Computation}\label{s3-b} As discussed in Section \ref{s2}, we first run DCPFs based on the P, Q values in $\mathcal{N}$ and warm-start ACPF for $\mathcal{W}$, then use the DCPF and ACPF results corresponding to P, Q values in $\mathcal{W}$ to train a CNN. For benchmarking purposes, we also run warm-start ACPF for P, Q values in $\mathcal{H}$ and present the warm-start performance in Section \ref{s5}. Once a model is trained, we can then follow Section \ref{s2-b} and only run hot-start ACPF for the load levels in $\mathcal{H}$ or any new load conditions. We use \textsc{Matpower}'s \texttt{rundcpf} and \texttt{runpf} functions to perform the dc and ac power flow computations with the mismatch tolerance for \texttt{runpf} set at $10^{-3}$ per unit.
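As a concrete illustration of the load-fluctuation model of Eqs. (\ref{eq1})-(\ref{eq2}), a minimal NumPy sketch is given below; the default bus demands, the number of buses, and the sample count are made-up values, since in our setup they come from the \textsc{Matpower} case files.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
P_default = np.array([51.0, 20.0, 39.0, 0.0])   # illustrative default real power demands (MW)
Q_default = np.array([27.0, 9.0, 10.0, 0.0])    # illustrative default reactive demands (MVAr)
is_pq = np.array([True, True, True, False])     # last bus is not a PQ bus
N = 10000

# Eq. (1): standard deviation of the real power demand at each bus.
sigma = 5.44130 + 0.17459 * np.sqrt(np.abs(P_default)) + 0.001673 * np.abs(P_default)

# Real power fluctuations: Gaussian around the default value, PQ buses only.
P = np.where(is_pq[:, None],
             rng.normal(P_default[:, None], sigma[:, None], size=(len(P_default), N)),
             P_default[:, None])

# Lagging power factors: N(1.0, 0.05) truncated to [0.7, 1.0] via rejection sampling.
pf = rng.normal(1.0, 0.05, size=N)
out = (pf < 0.7) | (pf > 1.0)
while out.any():
    pf[out] = rng.normal(1.0, 0.05, size=out.sum())
    out = (pf < 0.7) | (pf > 1.0)

# Eq. (2): reactive power fluctuations; non-PQ buses keep their default values.
Q = np.where(is_pq[:, None], P * np.tan(np.arccos(pf)), Q_default[:, None])
\end{verbatim}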
For the $k$-th execution of \texttt{rundcpf} and \texttt{runpf}, we replace the default $P_i$, $Q_i$ values of the $i$-th bus with $P^{(k)}_i$ and $Q^{(k)}_i$ from the $k$-th column of $\mathbf{P}$ and \textbf{Q}. We do not change any other values. We then collect the solved voltage magnitude and phase values --- ${V_{DC}}^{(k)}_{i} $ (which will always be 1.0) and ${\Theta_{DC}}^{(k)}_{i}$ (in radians) from the DCPF results, along with ${V_{AC}}^{(k)}_{i}$ and ${\Theta_{AC}}^{(k)}_{i}$ (in radians) from the ACPF results. The reason to extract voltage phase values in radians is that large negative phase values in degrees will easily cause exploding loss when passed through the ELU activation function (Equation \ref{ELU}). We collect the DCPF solution time, $t_{DC}$ as a vector of length $T$ to calculate the average hot-start ACPF time as described in Section \ref{s5}. We also collect the ACPF solution time and iterations corresponding to the load levels in $\mathcal{H}$, $t_{AC, warm}$, $n_{AC, warm}$, as vectors with length $(N-T)$, which are used to compare with the hot-start ACPF performances. Finally, since ${\Theta_{DC}}^{(k)}_{i}$, ${\Theta_{AC}}^{(k)}_{i}$ are in radians, we would like to ensure all the input data are in a similar range, so that the model parameter updates are input unit agnostic. Therefore, we perform the following data processing steps. We subtract 1.0, the nominal voltage from all ${V_{DC}}^{(k)}_{i}$ and ${V_{AC}}^{(k)}_{i} $, so that they have near 0 mean. We do not normalize the input and target voltage magnitude and phase values to be within a certain range (e.g., in computer vision applications, we could normalize the input data values to be in $[0,1]$) since the lower and upper bound of the ground truth, which are required for the normalization are not known \textit{a priori}. We also compute the $\mathbf{P_d}$ and $\mathbf{Q_d}$ matrices with entries ${P_d}^{(k)}_{i} = P^{(k)}_i - P_i$ and ${Q_d}^{(k)}_{i} = Q^{(k)}_i - Q_i$ in per unit, respectively. We construct the following matrices with the same dimensions as $\mathbf{P}$ and \textbf{Q}, for $1 \leq i \leq L$, and $1 \leq k \leq N$: \[ \mathbf{V_{DC}} = \begin{bmatrix} {V_{DC}}^{(k)}_{i} \\ \end{bmatrix} \hspace{4em} \mathbf{\Theta_{DC}} = \begin{bmatrix} {\Theta_{DC}}^{(k)}_{i} \end{bmatrix} \] \[ \mathbf{V_{AC}} = \begin{bmatrix} {V_{AC}}^{(k)}_{i} \\ \end{bmatrix} \hspace{4em} \mathbf{\Theta_{AC}} = \begin{bmatrix} {\Theta_{AC}}^{(k)}_{i} \end{bmatrix} \] Note that in Section \ref{s2-b}, we assumed none of the load conditions in $\mathcal{N}$ causes the non-convergence of ACPF. Therefore, $\mathbf{V_{AC}}, \mathbf{\Theta_{AC}}$ contain the same number of samples, $N$, as $\mathbf{V_{DC}}, \mathbf{\Theta_{DC}}$. However, this assumption is not realistic, since not all load conditions are guaranteed to converge with warm-start conditions. Therefore, if the $k$-th ACPF execution fails to converge and it is a load condition in $\mathcal{W}$, we add it to a set $\mathcal{F}$ that contains all load conditions that fail to converge. We stop our data generation process once $N$ samples successfully converges. If $\mathcal{F} \neq \emptyset$, we also note the successful ACPF convergence rate. For the bus voltage magnitudes and phase values generated by the 1D CNNs, we shift the voltage magnitude values back up by 1.0 and convert the phase values back to be in degrees as required by \textsc{Matpower}, before performing the ACPF computations with these as the initial conditions. 
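As a small illustration of the preprocessing just described, the sketch below assembles the four per-bus input channels for a single load condition $k$; all numerical values, and the 100 MVA base used to put $\mathbf{P_d}$ and $\mathbf{Q_d}$ in per unit, are illustrative assumptions rather than values taken from the case files.
\begin{verbatim}
import numpy as np

base_mva = 100.0                                      # assumed system base for per-unit values
V_dc = np.ones(4)                                     # DCPF magnitudes (always 1.0 p.u.)
Th_dc = np.deg2rad(np.array([0.0, -1.2, -2.5, 0.8]))  # DCPF angles, kept in radians
P_k = np.array([55.3, 21.4, 37.8, 0.0])               # sampled demands for condition k (MW)
Q_k = np.array([29.1, 9.6, 9.4, 0.0])                 # sampled reactive demands (MVAr)
P_def = np.array([51.0, 20.0, 39.0, 0.0])             # default demands (MW)
Q_def = np.array([27.0, 9.0, 10.0, 0.0])              # default reactive demands (MVAr)

x_k = np.stack([V_dc - 1.0,                 # voltage magnitudes offset by the 1.0 p.u. nominal
                Th_dc,                      # angles in radians
                (P_k - P_def) / base_mva,   # P_d in per unit
                (Q_k - Q_def) / base_mva],  # Q_d in per unit
               axis=0)                      # four channels by L buses; reshaped later for Flux
\end{verbatim}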
\subsection{Dataset Format}\label{s3-c} We now form the dataset for training. The input to the 1D CNNs, $\mathbf{X}$, containing the offset DCPF bus voltage values, $\mathbf{P_d}$ and $\mathbf{Q_d}$ is a tensor with dimension $L \times 4 \times N$ and the format shown in Figure \ref{X}. Specifically, the $k$-th sample in $\mathbf{X}$, $1 \leq k \leq N$, is shown in Figure \ref{Xk}. \begin{figure}[h] \centering \begin{tikzpicture}[every node/.style={anchor=north east,fill=white,minimum width=4em,minimum height=3em}] \matrix (mA) [draw,matrix of math nodes] {\mathbf{Q_{d}} \\}; \matrix (mB) [draw,matrix of math nodes] at ($(mA.south west)+(0.15, 0.6)$) {\mathbf{P_{d}}\\}; \matrix (mC) [draw,matrix of math nodes] at ($(mB.south west)+(0.15, 0.6)$) {\mathbf{\Theta_{DC}} \\}; \matrix (mD) [draw,matrix of math nodes] at ($(mC.south west)+(0.15, 0.6)$) {\mathbf{V_{DC}}\\}; \draw[dashed](mA.north east)--(mD.north east); \draw[dashed](mA.north west)--(mD.north west); \draw[dashed](mA.south east)--(mD.south east); \end{tikzpicture} \\ \caption{Dataset $\mathbf{X}$} \label{X} \end{figure} \begin{figure}[h] \centering \includegraphics[trim={2cm 9cm 24cm 3.2cm},clip,scale=1,center]{X.pdf} \caption{The $k$-th sample of the dataset $\mathbf{X}$} \label{Xk} \end{figure} We train two 1D CNNs with the identical architecture (shown in Figure \ref{model}), both with $\mathbf{X}$ as the input --- one for producing hot-start bus voltage magnitudes (V model) and the other for bus voltage phases ($\Theta$ model). This approach is to ensure that the 1D CNN model parameter updates are computed based on the loss with respect to distinct targets, $\mathbf{V_{AC}}$ and $\mathbf{\Theta_{AC}}$, even though we offset $\mathbf{V_{AC}}$ to have near 0 mean. Additionally, we reshape $\mathbf{X}$ to add a ``width" dimension of length 1, in order to fit the Width -- Height -- Channel -- Samples format, which the machine learning library we used, Flux \cite{flux}, requires. \\ \section{CNN Training} \label{s4} \subsection{Loss Function and Performance Criteria} \label{s4-a} The loss function $\ell$ we use to train our model is the squared $\mathcal{L}_2$ norm of the difference between predicted values and ground truth: $\ell(\hat{y_i} , y_i) = \Vert \hat{y_i} - y_i \Vert^2_2$, where $\hat{y_i}$ is the prediction and $y_i$ is the ground truth for the $i$-th element. This loss is commonly used for optimization in regression problems, and we found that it outperforms the mean square error, another popular loss function in regression problems, in terms of $\Delta\mathcal{L}$ defined below. We do not have an accuracy measure since there is no robust and generalizable way of establishing such a criteria in our problem. The downside of using relative measurements (e.g. mean absolute relative error), in particular in this problem, is that the error would explode when the denominator is close or equal to 0, which is not uncommon in the voltage angle or offset voltage magnitude values. 
Instead, we compare the initial and final $\mathcal{L}_2$ norms on the test set to see how much the norm decreased at the end of training by $\Delta\mathcal{L}$ in Equation \ref{delL}, where $\mathcal{L}_i$ is the $\mathcal{L}_2$ norm between dc and ac power flow results for $\mathcal{H}$, and $\mathcal{L}_f$ is the $\mathcal{L}_2$ norm between the predicted hot-start conditions and true ac power flow results for $\mathcal{H}$ (both $\mathcal{L}_i$ and $\mathcal{L}_f$ are the average of voltage magnitude and phase results), which we computed for benchmarking purposes but will not be available in practice. \begin{equation}\label{delL} \Delta \mathcal{L} = \frac{\mathcal{L}_f}{\mathcal{L}_i} \times 100\% \end{equation} The effectiveness of the 1D CNNs, however, is best demonstrated by the performance of computing ACPF results for $\mathcal{H}$, i.e., by how much the 1D CNN produced bus voltage values can decrease the ACPF iterations and solution time (as discussed in Section \ref{s5}). \subsection{CNN Model Architecture and Hyperparameter Selection} \label{s4-b} We determine the 1D CNN model architecture and select the hyperparameters in the following way. (To see the meaning of the terms used here, please refer to the Appendix for a brief overview of CNNs.) For the IEEE 118-bus system, we first use a training set with $T=2000$, and a set of hyperparameters commonly seen in machine learning applications \cite{batchsizes, Goodfellow2015DL}: initial learning rate $\eta = 10^{-3}$, batch size of 64, and maximum epochs of 500. With these hyperparameters selected, we train multiple 1D CNNs with different combinations of number of convolutional layers, kernel sizes and number of channels. We then make a decision on these hyperparameters based on the validation set losses of each candidate architecture. We repeat this process for the \textsc{Pegase} 2869-bus system. Once we arrive at a satisfactory final validation loss, we go back to test our initial hyperparameter choices, similarly by the validation loss. In our case studies, we keep the initial learning rate $\eta$ and maximum epoch the same as initially chosen, and only change the batch size from 64 to 32. \begin{figure}[h] \includegraphics[trim={0cm 1cm 0cm .5cm},clip,scale=0.45,center]{model.pdf} \caption{1D CNN architectures. We train two identical models (one trained with $\mathbf{V}_{AC}$ as output target the other with $\mathbf{\Theta}_{AC}$ as target) for each case, and only one is shown here} \label{model} \end{figure} We will now discuss our chosen 1D CNN architectures in detail. Figure \ref{model} shows the \textit{Height} (which equals to the number of buses, $L$) and \textit{Channel} dimensions in each convolutional layer, and a single sample (as in Figure \ref{Xk}) as the input to each 1D CNN model. Each of the outputs in Figure \ref{model} is a vector containing the predicted voltage or phase values. Thus, the output would be a matrix when we feed more than one input samples into the 1D CNN. For the \textsc{Pegase} 2869-bus system, we use a deeper architecture with 5 convolutional layers and a final fully connected layer. The first convolutional layer has kernel size 7 with channel size 8, zero padding size 3 and stride 1, and identical remaining convolutional layers with kernel size 3, channel size 8, zero padding size 1 and stride 1. 
The CNN model for the smaller IEEE 118-bus system has three identical convolutional layers with kernel size 3, zero padding size 1 and stride 1, i.e., the same as the second to the fifth convolutional layers of the larger model. The fully connected layer for both architectures first reshapes the data to a vector of length $8L$ (for 1 sample), then produces the final bus voltage magnitude and phase values. Empirically, adding more than 8 channels to the convolutional layers result in worse $\Delta\mathcal{L}$ of the validation set. We apply the zero paddings and stride 1 to all convolutional layers, and omit pooling layers since we want to keep the hidden layer dimension the same as the input feature vector dimension ($L$, i.e., number of buses), throughout the architecture. The reason for this choice is that CNNs for the systems with odd number of buses, after convolution, pooling and up-sampling operations with strides larger than 1, will produce predictions with one extra or one fewer value, i.e., an extra or a missing bus. This is not to say that such architectures will never work in our case, since we can, for example, apply asymmetric padding to solve this off-by-1 caveat. However, operations such as pooling and upsampling, even if they improve the final predictions, will make the justification of our method more difficult. In particular, the pooling operation is lossy, since it downsamples the input data into a low-dimensional representation. Since the number of positive and negative values in our datasets are roughly equal, the nonlinear activation function $f$ we use has to account for both. We compared the performance and rate of convergence of multiple popular activation functions --- Rectified Linear Unit (ReLU) \cite{relu}, LeakyReLU \cite{leakyrelu}, and Exponential Linear Unit (ELU) \cite{elu}, and chose ELU with the default parameter $\alpha=1.0$, which has the piecewise definition and derivative in Equations \ref{ELU} and \ref{dELU}, respectively. \begin{equation}\label{ELU} f(x) = \left\{ \begin{array}{cl} x & \text{if } x > 0,\\ \alpha (e^x - 1) & \text{if } x \leq 0 \end{array} \right. \end{equation} \begin{equation}\label{dELU} f'(x) = \left\{ \begin{array}{cl} 1 & \text{if } x > 0,\\ \alpha e^x & \text{if } x \leq 0 \end{array} \right. \end{equation} \begin{figure}[H] \includegraphics[trim={2cm 0.5cm 2cm 1cm},clip,scale=0.4,center]{elu.eps} \caption{Exponential Linear Unit ($\alpha=1.0$) and its derivative} \label{case118_va} \end{figure} With the CNN architecture fixed for a particular system, the most important hyperparameter left in this problem is the size of $T$, i.e., we want to find the optimal trade-off point between training set size (with which the training time scales quasi-linearly, as Tables \ref{table-118} and \ref{table-2869} in the next section shows) and the quality of the trained model (i.e., how much its prediction can decrease solution time and iterations). Since the training and validation targets are the ACPF results, a smaller $T$ value means a smaller number of warm-start ACPF we have to run in data generation process, as mentioned in Section \ref{s3}. Since we also computed the ACPF results for $\mathcal{H}$ for benchmarking purposes, we have the ground truth values for all $N$ samples. Thus, we can use the true ACPF results of the load conditions in $\mathcal{H}$ as the test set (in practice, we would not have these data since we do not compute warm-start ACPF for all samples). 
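For concreteness, a PyTorch-style sketch of the two architectures described above is given below; our actual implementation uses Flux in Julia, so the layer objects, the placement of the ELU after every convolution, and the four input channels of the first layer should be read as illustrative assumptions consistent with the description, not as the exact implementation.
\begin{verbatim}
import torch.nn as nn

def make_model(num_buses, deep=False):
    # Input: (batch, 4 channels, num_buses); output: num_buses magnitudes (or phases).
    layers, in_ch = [], 4
    if deep:   # PEGASE 2869-bus variant: kernel-7 first layer, then four kernel-3 layers
        layers += [nn.Conv1d(4, 8, kernel_size=7, padding=3), nn.ELU()]
        in_ch, n_k3 = 8, 4
    else:      # IEEE 118-bus variant: three kernel-3 layers
        n_k3 = 3
    for _ in range(n_k3):
        layers += [nn.Conv1d(in_ch, 8, kernel_size=3, padding=1), nn.ELU()]
        in_ch = 8
    layers += [nn.Flatten(), nn.Linear(8 * num_buses, num_buses)]  # 8L -> L fully connected
    return nn.Sequential(*layers)

v_model = make_model(118)        # predicts offset bus voltage magnitudes
theta_model = make_model(118)    # identical architecture, predicts bus voltage phases
\end{verbatim}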
We separate $\mathcal{W}$ into training and validation sets, with a 90/10 split, i.e., the training set will have $0.9T$ number of samples and the validation set will have $0.1T$ number of samples. Since the dataset is generated with Gaussian distributions instead of gathered from real systems, the training and validation loss values throughout the training are extremely close, and as a result, we do not need a large validation set to adjust model hyperparameters, or use methods such as cross-validation during training. Since learning hot-start conditions is a rather uncommon application of CNNs, we cannot take advantage of previously trained models and use transfer learning \cite{transferlearning1, transferlearning2} to accelerate training. Therefore, we initialize the CNN parameters with the Xavier Initialization \cite{xavier}, and train the models from scratch with the Adam optimizer \cite{adam}. During training, we randomly shuffle the batch indices with a fixed random seed before each epoch starts. We also use the following learning rate decay policy \cite{alexnet}: if training set loss does not decrease for 5 consecutive epochs, we decrease the learning rate $\eta$ by a factor of 10 (until $\eta = 10^{-9}$) to encourage the model to jump out of local minimums. Training is terminated if the elapsed epochs reach maximum of 500 or $\Delta \mathcal{L}$ of the validation set becomes less than 0.01\%. \\ \section{Case Study} \label{s5} The dataset generation and power flow computations are done with MATLAB and \textsc{Matpower} on a local computer with Intel Core i7-8750H CPU and 32 GB RAM. The 1D CNN training is done on a Compute Canada cluster \cite{graham} using a single Intel Xeon Gold 5120 CPU and a single NVIDIA Tesla V100 GPU with 16 GB of GPU memory. We use Flux \cite{flux}, an open-source machine learning library developed in the Julia language \cite{julia}, for the 1D CNN implementation, training and inference. The following tables show the average solution time and average iteration count for $N=10000$ samples, all of which successfully converged. The first row contains the warm-start performance, where $t_{avg} = \Bar{t}_{AC,warm}$, i.e., the average time for warm-start ACPF with DCPF solutions as initial conditions. The rows below contain the hot-start performances as $T$, the number of samples used in CNN training, is varied. The hot-start average time is calculated by $t_{avg} = \Bar{t}_{DC} + \Bar{t}_{inf} + \Bar{t}_{AC,hot}$, where $\Bar{t}_{DC}$ is the average DCPF solution time, $\Bar{t}_{AC,hot}$ is the average hot-start ACPF solution time, and $\Bar{t}_{inf}$ is the average inference time (which is negligible compared to $\Bar{t}_{DC}$ or $\Bar{t}_{AC,hot}$). We report the hot-start results with 1000-sample increments for $T$. We start at $T=3000$ for the \textsc{Pegase} 2869-bus case, since using a smaller $T$ produced initial conditions causing non-convergence for some load conditions. The $\Delta\mathcal{L}$ is calculated as described by Equation \ref{delL}. In particular, $\Delta\mathcal{L} = 100\%$ for the warm-start results since $\mathcal{L}_i = \mathcal{L}_f = \frac{1}{2}\Vert V_{DC} - V_{AC}\Vert_2 + \frac{1}{2}\Vert \Theta_{DC} - \Theta_{AC}\Vert_2$. We also include the training times in the tables. These are extremely large compared to $t_{avg}$, but they can be amortized over the usable time of the model since it is a one-time only cost. 
Additionally, training can be performed in parallel on clusters with multiple GPUs, which can greatly reduce the time needed. For example, prior work \cite{multi-gpu1, multi-gpu2, multi-gpu3} has shown that the training throughput (number of samples processed per second) increases quasi-linearly with the number of GPUs. The main reason they are included here is to show the trade-off between the training time and the quality of output initial conditions. Finally, we highlight the chosen $T$ in bold for both cases in Tables \ref{table-118} and \ref{table-2869}. \subsection{IEEE 118-bus Results} From Table \ref{table-118}, we can see that $T=3000$ results in a good balance of $T$ (i.e., the number of warm-start executions required to train the 1D CNN) and the quality of hot-start conditions. Even though the $T=5000$ results have both lower final $\Delta\mathcal{L}$ and solution iterations than $T=3000$ results, it has a higher $t_{avg}$ and longer training time. Compared to the warm-start results, the proposed method provides a $33.56\%$ reduction in solution time, and a $66.47\%$ reduction in the average solution iterations required, with the chosen $T$. \begin{table}[h] \renewcommand{\arraystretch}{1.3} \caption{IEEE 118-bus Results} \label{table-118} \centering \begin{tabular}{|p{11mm} | p{11mm} | p{10mm} | p{6mm} | p{11mm} | p{13mm}|}\hline \multicolumn{2}{|l|}{} & $t_{avg}$ \newline (ms) & Avg. \newline Iter. & $\Delta\mathcal{L}$ & Training \newline Time (s) \\ \cline{1-6} \multicolumn{2}{|l|}{Warm Start} & 1.57971 & 3.000 & 100\% & N\slash A \\ \cline{1-6} \multirow{5}{*}{Hot Start} & {T=1000} & 1.22120 & 1.391 & 0.2185\% & 120.96713 \\ \cline{2-6} & {T=2000} & 1.06184 & 1.022 & 0.0929\% & 222.45857 \\ \cline{2-6} & \textbf{T=3000} & \textbf{1.04953} & \textbf{1.006} &\textbf{0.05937\%} & \textbf{326.47676} \\ \cline{2-6} & T=4000 & 1.05426 & 1.006 & 0.06306\% & 437.17291 \\ \cline{2-6} & T=5000 & 1.07008 & 1.000 & 0.03435\% & 544.79339\\ \cline{1-6} \end{tabular} \end{table} \subsection{\textsc{Pegase} 2869-bus Results} From Table \ref{table-2869}, $T=8000$ is a reasonably good choice for a larger study system, considering the rate of increase of the training time as $T$ grows, and the diminishing gain in the quality of hot-start conditions (between $T=8000$ and $T=9000$, the improvements in solution time and iterations are $0.18\%$ and $0.48\%$, respectively, but the training time increased by $1.33$ hours, or $24.65\%$). By choosing $T=8000$, $t_{avg}$ and the average ACPF iterations required are decreased by $30.06\%$ and $49.52\%$ compared to the warm-start results, respectively. \begin{table}[h] \renewcommand{\arraystretch}{1.3} \caption{Pegase 2869-bus Results} \label{table-2869} \centering \begin{tabular}{|p{11mm} | p{9mm} | p{10mm} | p{6mm} | p{11mm} | p{15mm}|}\hline \multicolumn{2}{|l|}{} & $t_{avg}$ \newline (ms) & Avg. \newline Iter. 
& $\Delta\mathcal{L}$ & Training \newline Time (s) \\ \cline{1-6} \multicolumn{2}{|l|}{Warm Start} & 29.71916 & 4.000 & 100\% & N\slash A\\ \cline{1-6} \multirow{7}{*}{Hot Start} & T=3000 & 27.99254 & 3.146 & 0.2452\% & 4098.02066 \\ \cline{2-6} & T=4000 & 27.11124 & 3.003 & 0.1688\% & 5450.61909 \\ \cline{2-6} & T=5000 & 24.46336 & 2.896 & 0.1395\% & 6475.0688 \\ \cline{2-6} & T=6000 & 22.86895 & 2.356 & 0.1038\% & 8952.58763 \\ \cline{2-6} & T=7000 & 23.40030 & 2.435 & 0.09801\% & 14539.66965\\ \cline{2-6} & \textbf{T=8000} & \textbf{20.78429} & \textbf{2.019} & \textbf{0.07374\%} & \textbf{19398.17254} \\\cline{2-6} & T=9000 & 20.73071 & 2.000 & 0.05416\% & 24179.44558 \\\cline{1-6} \end{tabular} \end{table} \section{Conclusion and Future Research Directions} In this paper, we propose a generalizable framework to obtain better initial conditions for Newton-Raphson based ACPF using 1D CNNs. The performance of the proposed method on the IEEE 118-bus and \textsc{Pegase} 2869-bus systems shows that it is capable of effectively decreasing both solution time and solution iterations. Although our proposed method is shown to be generalizable on both large and small systems, we acknowledge one limitation: systems with different topologies (specifically, different numbers of buses and/or connectivities) would require training different 1D CNNs to generate system-specific hot-start conditions. Consequently, we need to address the problem of long training time associated with this limitation. As we discussed in Sections \ref{s4-b} and \ref{s5}, the CNN training time can be amortized as it is a one-time cost for each system, and it can be further reduced by applying transfer learning, and by training in parallel on multiple GPUs. A potential research direction that builds on our proposed method is to incorporate the effect of ACPF solution times and/or iterations directly into the 1D CNN training stage, e.g., through a meta-optimization step similar to that discussed in \cite{opf-meta}. We can also include different contingency scenarios in the dataset generation step, and train 1D CNNs to produce initial conditions to perform contingency analysis more efficiently compared to the DCPF model. Finally, we could also try out different CNN architectures, e.g., deeper architectures such as the fully convolutional ResNet in \cite{fcresnet}, although they will require significantly longer training time and a much larger and more diverse dataset to be properly trained. \\
\section*{Abstract} In addition to the learning check testing results collected at each lecture, we have extended the set of factors examined in order to find the key dropout factors. Among them are the number of successes in the learning check testing, the number of attendances at the follow-up program classes, and so on. We have found the following key factors strongly related to students at risk. 1) Badly failed students (score range 0-39 in the final examination) tend to be absent from the regular classes, fail the learning check testing even when they attend, and are very reluctant to attend the follow-up program classes. 2) Successful students (score range 60-100 in the final examination) attend classes and obtain good scores in every learning check testing. 3) Students who failed, but not badly (score range 40-59 in the final examination), exhibit features of both the 0-39 group and the 60-100 group. Therefore, attending the lectures is crucial in order not to drop out. Students who failed the learning check testing more than half of the time almost always failed the final examination, which can lead to dropping out. Conversely, students who passed the learning check testing more than two thirds of the time obtained better scores in the final examination. \\[2mm] {\it Keywords: }learning check testing, placement test, follow-up program, item response theory, multiple linear regression, final examination. \section{Introduction} It is crucial to identify students at risk of failing courses and/or dropping out as early as possible, because a wide variety of students are now enrolled in universities and we teachers have to educate them all together. This circumstance prevents us from using conventional approaches such as mass education. Moreover, the numbers of staff and classes are limited. New assisting systems using ICTs should therefore be introduced to overcome this difficulty. To this end, we established online testing systems aimed at helping students who need further learning skills in mathematics subjects. Such systems include 1) the learning check testing (LCT), administered in every class to check whether students comprehend the contents of the lectures, 2) the collaborative working testing (CWT), for training skills with supporters and teachers, and 3) the follow-up program testing (FPT), to check whether the follow-up program class members understand the standard level of the lectures. The system has been operating successfully (see \cite{LTLE2016a}, \cite{LTLE2016b}), and some computational results were reported in \cite{LTLE2017}. In addition, other relevant cases were investigated in detail (see \cite{LTLE2016c}, \cite{BIC2016}, \cite{IEE2018}, \cite{PISM2018}, \cite{IJSCAI2017}, \cite{LTLE2016d}). Using the data accumulated in the database, we may find key factors strongly related to students at risk, as indicated in \cite{Elouazizi}, \cite{Siemens2015}, \cite{Siemens2012}, and \cite{Waddington2016}, if we pay attention to learning analytics. We may then be able to actively make appropriate decisions about better learning methods. As indicated in \cite{WiseShaffer}, it is also important to analyze the data theoretically. This paper is aimed at obtaining effective learning strategies for students at risk of failing courses and/or dropping out, using large-scale learning data accumulated from the follow-up program systems.
They consist of the placement scores, every LCT score, the numbers of FPT successes/failures, FPC attendances, etc. In this paper, we use the ability values for students' learning skills obtained from item response theory (IRT; e.g., see \cite{Ayala}, \cite{Hambleton91}, \cite{LindenHambleton}). Although the subjects we deal with are analysis basic (similar to calculus) and linear algebra, we present linear algebra as a typical case. \section{Success/Failure Responses and the LCT Ability Values} The LCT is a short test consisting of five questions, given at each lecture in the first semester of 2017. All the students in the regular classes take the LCT for ten minutes via the online testing system. All the questions are the same for every student, but the questions are presented to each student in a different order from the next student. We have fourteen lectures with one midterm and one final examination in the semester; thus, the number of LCTs is fourteen. We can estimate the students' abilities for each LCT using the item response theory (IRT) evaluation method. Each of the five items consists of multiple small questions. Students select appropriate answers to each small question from many choices. We adopt the two-parameter logistic function $P(\theta_i;a_j,b_j)$ shown below instead of the three-parameter logistic function including a pseudo-guessing parameter. \begin{eqnarray} P_{i,j}=P(\theta_i;a_j,b_j)={1 \over 1+\exp\{-1.7a_j(\theta_i-b_j)\} }=1-Q_{i,j}, \end{eqnarray} where $\theta_i$ expresses the ability of student $i$, and $a_j, b_j$ are constants in the logistic function for item $j$, called the discrimination parameter and the difficulty parameter, respectively. Then, the likelihood for all the examinees, $i=1,2,\dots,N$, and all the items, $j=1,2,\dots,n$, becomes \begin{eqnarray} L=\prod_{i=1}^N \prod_{j=1}^n \left(P_{i,j}^{\delta_{i,j}} \times Q_{i,j}^{1-\delta_{i,j}} \right), \end{eqnarray} where $\delta_{i,j}$ denotes the indicator such that $\delta=1$ for success and $\delta=0$ for failure. In a sense, $P_{i,j}$ in Equation (1) is a logistic probability distribution function with unknown parameters $a_j$ and $b_j$, and the random variable is $\theta_i$. However, $a_j$, $b_j$, and $\theta_i$ are all unknown here. We have to obtain the maximum likelihood estimates for $a_j$, $b_j$, and $\theta_i$ simultaneously by maximizing $L$ in Equation (2). However, as is easily imagined with such a small number of questions, the estimated ability values tend to be biased and their variances are large (see \cite{IJSCAI2017}, \cite{LTLE2016d}). It would be difficult to classify the students into a successful group and a failed group in the final examination using each LCT result alone. Thus, we first use all the LCT results for classification. Figure 1 shows the histogram of estimated LCT abilities for successful students overlaid with the histogram of estimated LCT abilities for failed students, in the case of linear algebra in the first semester of 2017. We can see that it would be difficult to find an optimal threshold discriminating successful from failed students. The number of successful students is 898 and the number of failed students is 145; the ratio of failed students to all students is $0.14$.
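As a small illustration of the model in Equations (1) and (2), the following Python sketch evaluates the two-parameter logistic probabilities and the corresponding log-likelihood on synthetic responses; the item parameters, the abilities, and the numbers of examinees and items are made up, and the joint maximization over $a_j$, $b_j$, and $\theta_i$ is not shown.
\begin{verbatim}
import numpy as np

def p_correct(theta, a, b):
    # Eq. (1): P(theta_i; a_j, b_j) = 1 / (1 + exp(-1.7 * a_j * (theta_i - b_j)))
    return 1.0 / (1.0 + np.exp(-1.7 * a[None, :] * (theta[:, None] - b[None, :])))

def log_likelihood(delta, theta, a, b):
    # Logarithm of Eq. (2); delta[i, j] = 1 for success and 0 for failure.
    P = p_correct(theta, a, b)
    return np.sum(delta * np.log(P) + (1.0 - delta) * np.log(1.0 - P))

rng = np.random.default_rng(0)
a = np.array([1.0, 0.8, 1.4, 0.6, 1.1])      # discrimination parameters (illustrative)
b = np.array([-0.5, 0.0, 0.3, 0.8, 1.2])     # difficulty parameters (illustrative)
theta = rng.normal(size=200)                 # abilities of 200 synthetic examinees
delta = (rng.random((200, 5)) < p_correct(theta, a, b)).astype(float)
print(log_likelihood(delta, theta, a, b))
\end{verbatim}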
\begin{figure}[htbp] \begin{center} \includegraphics[height=6.5cm]{LAASFhistogram2.pdf} \end{center} \caption{Histograms of estimated LCT abilities for successful students and for failed students (linear algebra in the first semester of 2017).} \end{figure} Except for very low estimates, the histograms look like normal distributions with different mean values (around $0.63$ for successful students and $-0.17$ for failed students); the lowest estimates, around $-3.0$ in both groups, resulted from absence from the testing. However, it seems very difficult to separate the students into two groups using a single ability threshold value. When we adopt the decision tree method, the most appropriate ability threshold value turns out to be $-0.1065$. The confusion matrix using this threshold is shown in Table 1. The misclassification rate for this confusion matrix is $0.11$. Restricted to failed students, the decision tree predicted that 107 students might fail, of whom 70 actually failed; the hit ratio is $65\%$. \begin{table}[htbp] \caption{Confusion matrix determined by the decision tree using the full response matrix.} \begin{center} \begin{tabular}{cc|ccc} \hline & & \multicolumn{3}{c}{predicted} \\ & & successful & failed & total \\ \hline & successful & 861 & 37 & 898 \\ observed & failed & 75 & 70 & 145\\ & total & 936 & 107 & 1043\\ \hline \multicolumn{4}{l}{\qquad \qquad \qquad \qquad \quad threshold $= -0.1065$} \end{tabular} \label{tab1} \end{center} \end{table} In addition to the LCT results, we have incorporated the placement test (PT) results taken at the very beginning of the first semester. We have two kinds of tests: one is a rather fundamental test and the other is an advanced test at the high school level. Using the fundamental PT and the LCT results, we plotted the correlation between these two tests for three groups in Figure 2, in the case of linear algebra in the first semester of 2017; the first group consists of the students successful in the final examination (score range 60-100, shown by green dots in the figure), the second group is the badly failed group (score range 0-39, shown by red dots), and the rest form the remaining group (score range 40-59, shown by yellow dots). The horizontal axis shows the ability values standardized to the standard normal distribution, and the vertical axis shows the fundamental PT score. Although this information is added, it is still hard to find boundaries that classify the students into the three groups, or into two successful/failed groups. In order to discriminate the successful and failed students more clearly, it is advisable to include another kind of information. \begin{figure}[htbp] \begin{center} \includegraphics[height=5.5cm]{PTLCTvsSuccessFail2.pdf} \end{center} \caption{Correlation between the LCT results and the placement test results for the three successful/failed groups (linear algebra in the first semester of 2017).} \end{figure} \section{Attendance to the Lectures and the Follow-up Program Classes} Attendance/absence at classes is another type of discrete information. Intuitively, we expect that the more frequently students attend the classes, the higher their scores in the final examination. Recently, attendance/absence information is often recorded in the database automatically by the electronic card attendance check system. However, the system does not work perfectly; some students may leave after presenting their cards. The LCT compensates for this defect: the attendance information is not credited unless the testing is completed.
Figure 3 shows the attendance/absence information classified into three groups: the first is for the score range 60-100, seen on the right in the figure, the second is for the score range 40-59, seen in the middle, and the third is for the score range 0-39, seen on the left. In these matrices, rows correspond to student ids and columns to question ids. Using the two kinds of attendance/absence information, from the electronic cards (expressed by $y$ below) and from the LCT results (expressed by $x$ below), the value of each element, $s$, is determined and colored according to the formula $s = 10 x + y$, where the meanings of $s$, $x$, and $y$ are indicated in Figure 4. That figure shows the scheme of the attendance/absence information and the LCT success/failure information. For example, $s=55$ means that a student was completely absent from the class, and $s=11$ means that a student fully attended the class; these values are also indicated in Figure 3. Since each element is colored from green to red as its $s$ value increases, red and orange indicate absence or failure in the LCT, and green indicates success in the LCT. The three groups can clearly be distinguished by these colors in the figure. This indicates that the attendance/absence information, in addition to the LCT results, may play a key role in determining the risk of a student. \begin{figure}[htbp] \begin{center} \includegraphics[height=9.5cm]{AttendanceVsSuccessFail3.pdf} \end{center} \caption{Three groups classified by using the attendance/absence information and LCT successful/failed information.} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[height=9cm]{AttendanceAbsenceInformation.pdf} \end{center} \caption{Scheme of the attendance/absence information and LCT successful/failed information.} \end{figure} \section{Finding the Important Factors for Risk} We first show the relationships among the factors we are concerned with in Figures 5 and 6. In these figures, for example, we see that there is a strong relationship between LCT successes and no requirement for the FPT (see the first column and sixth row in the figures), but it remains unclear which factors are the key ones for classifying the successful/failed groups. In this paper, however, we do not discuss the dimension reduction problem in depth; we are only interested in finding the key factors related to at-risk students in the final examination. Thus, a simpler method is adopted in the following. \begin{figure}[htbp] \begin{center} \includegraphics[height=8cm]{0-59old.pdf} \end{center} \caption{Relationships among the factors when the score range is 0-59.} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[height=8cm]{60-100old.pdf} \end{center} \caption{Relationships among the factors when the score range is 60-100.} \end{figure} Since we already know that the attendance/absence information may be effective for classifying the students into successful/failed groups in the final examination, we apply a multiple regression analysis of the form $Y=X\beta$ to find the key factors; candidate factors are shown in Figure 7, and a small illustrative sketch of such a regression is given below.
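The following is a minimal sketch in Python of how such a least-squares fit of $Y=X\beta$ might be set up. It is purely illustrative: the column names and the synthetic data are assumptions chosen to mirror the candidate factors of Figure 7, and the analysis reported in the next section was actually carried out in R.
\begin{verbatim}
import numpy as np

# Illustrative factor matrix X (one row per student) and response Y
# (final examination score). Column names are assumptions only.
columns = ["LCT_successes", "LCT_ability", "placement_score",
           "class_absences", "FPC_absences", "FPT_not_required"]
rng = np.random.default_rng(1)
X = rng.random((1043, len(columns)))          # placeholder data
Y = rng.random(1043) * 100                    # placeholder final scores

# Ordinary least squares for Y = X beta (intercept column added).
X1 = np.hstack([np.ones((X.shape[0], 1)), X])
beta, residuals, rank, sv = np.linalg.lstsq(X1, Y, rcond=None)
for name, coef in zip(["intercept"] + columns, beta):
    print(f"{name:>18s}: {coef: .4f}")
\end{verbatim}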
\begin{figure}[htbp] \begin{center} \includegraphics[height=7cm]{MLR.pdf} \end{center} \caption{Factors in the multiple regression analysis.} \end{figure} Applying the multiple linear regression to the accumulated learning data, e.g., estimated LCT ability values, placement scores, class attendance/absence, follow-up class attendance/absence, etc., we obtained the result shown in Figure 8. Symbols marked with asterisks indicate factors that are significant at the given $p$-values, computed using R \cite{R}, the statistical computing and graphics language and environment. The symbol FPTnotrequired means that a student took the LCT and was successful, resulting in no requirement to attend the follow-up class. That is, attendance/absence for the FPT is the most significant information for deciding between successful and failed students. \begin{figure}[htbp] \begin{center} \includegraphics[height=4.3cm]{regressionanalysis3.pdf} \end{center} \caption{Multiple linear regression analysis result.} \end{figure} Therefore, we next focus on this factor. Figure 9 shows the 2-dimensional relationship between the number of successes in the LCT and the number of absences from the follow-up classes for the three groups with final-examination score ranges 60-100, 40-59, and 0-39. At first glance, we can see a clear linear relationship between the number of successes in the LCT and the number of absences from the FPC when the score range is 0-39. We also see some similarity between the score range 40-59 and the score range 60-100 cases, but it is less clear. Since each dot represents a student, dots plotted at the same position overlap and hide the accumulated numbers of students. \begin{figure}[htbp] \begin{center} \includegraphics[height=8cm]{regression1.pdf} \end{center} \caption{2-dimensional relationship between the number of successes in the LCT and the number of absences from the FPC.} \end{figure} Figure 10 shows 3-dimensional bar charts representing the relationship between the number of successes in the LCT and the number of absences from the follow-up classes for the same three score-range groups. By looking at the figure, we find the following: 1) When the score range is 60-100, almost all the students show successful results in the LCT and a very small number of absences from the FPC (almost all are not required to attend the FPC). 2) When the score range is 0-39, we see a clear linear relationship between the number of successes in the LCT and the number of absences from the FPC, which means that almost all students who failed the LCT or were absent from the classes also ignored attendance at the FPC. 3) When the score range is 40-59, students show features of both the 0-39 and the 60-100 groups: some students made an effort to be successful, and some succeeded while unfortunately others did not. Therefore, we have found that students who failed the final examination were reluctant to attend the classes, showed failed LCT results, and in addition were unwilling to attend the FPC. As intuition suggests, the most crucial factor for success in the final examination is attendance at the classes.
\begin{figure}[htbp] \begin{center} \includegraphics[height=8cm]{3Dplot.pdf} \end{center} \caption{3-dimensional bar charts representing the relationship between the number of successes in the LCT and the number of absences from the FPC.} \end{figure} \section{Discussions} We have been looking at factors that classify successes and failures in the final examination. To investigate such factors more precisely, more detailed information may be required. Thus, we have divided the successful group into four subgroups, A+, A, B, and C, whose score ranges are 90-100, 80-89, 70-79, and 60-69, respectively. A plausible factor for discriminating these groups is the number of successful LCTs. Figure 11 shows the frequency bar charts of the number of successful LCTs for each group. Looking at the figure, we can see that students who failed the LCT more than seven times almost always failed the final examination, which could lead to dropping out. Also, students who passed the LCT more than ten times obtained better scores in the final examination. Since the total number of tests was 13 in this case, this means that students who failed the learning check testing more than half of all testing times almost always failed the final examination, and students who passed the learning check testing more than two thirds of all testing times obtained better scores in the final examination. \begin{figure}[htbp] \begin{center} \includegraphics[height=10cm]{LCTsuccess.pdf} \end{center} \caption{Frequency bar charts of the number of successful LCTs for each group (linear algebra in the first semester of 2017).} \end{figure} \section{Concluding Remarks} It is crucial to identify students at risk of failing courses and/or dropping out as early as possible, because students with a wide variety of backgrounds are now enrolled in universities and we teachers have to educate them all together. To overcome this, we established online testing systems aimed at helping students who need to strengthen their learning skills in mathematics subjects, including the learning check testing, the collaborative working testing, and the follow-up program testing. Using the data accumulated from these tests in the database, we aimed at obtaining effective learning strategies for students at risk of failing courses and/or dropping out. Although the subjects we deal with are basic analysis (similar to calculus) and linear algebra, we focused on the linear algebra case as a typical one. In this paper, we have found some key factors strongly related to students at risk. The findings are the following. 1) Badly failed students (score range 0-39 in the final examination) tend to be absent from the regular classes, fail the learning check testing even when they attend, and are very reluctant to attend the follow-up program classes. 2) Successful students (score range 60-100 in the final examination) attend classes and get good scores in every learning check testing. 3) Students who failed, but not badly (score range 40-59 in the final examination), show features of both the 0-39 and the 60-100 groups. Therefore, attending the lectures is crucial in order not to drop out. Students who failed the learning check testing more than half of all testing times almost always failed the final examination, which could lead to dropping out.
Also, students who passed the learning check testing more than two thirds of all testing times obtained better scores in the final examination. \section*{Acknowledgment} The author would like to thank the mathematics staff at Hiroshima Institute of Technology.
\section{Introduction} Network design models have found wide application in the planning, design and operations management of transportation, power \& energy distribution, supply chain logistics and telecommunications networks. Usually, they are based on mixed-integer programming models, and many such models have been developed over the decades for network design and expansion problems, see, e.g., \cite{Magnanti1984,Minoux1989,Bertsekas1998}. In telecommunications, for instance, network design models can be used to curb congestion and to provide an acceptable quality of service to the subscribers. Efforts to provide an acceptable service have resulted in capital expenditures of billions of USD in global telecoms investment. Optimization of investments has thus attained a key strategic role in this industry. Moreover, these decisions need to be made well ahead of time based on a forecast of future traffic demand. Unfortunately, traffic demand has proven difficult to predict accurately. In order to factor in this uncertainty and design a network that is immune to traffic variability, robust optimization approaches have been proposed. For this purpose, a number of uncertainty models have already been developed and investigated (see \cite{goerigk2016algorithm,Ben-Tal2009,Bertsimas2011}). The drawback of classic approaches, however, is that the uncertainty set is assumed to be given, i.e., the decision maker is expected to tell us how the uncertainty is shaped. Moreover, an inappropriate choice of uncertainty set may result in models that are too conservative or, in some cases, computationally intractable. As the decision maker cannot be expected to make this choice in practice, data-driven and learning approaches have recently been proposed (see \cite{Bertsimas2017,Chassein2018}). To the best of our knowledge, we follow this approach for the first time for network design problems, by comparing which uncertainty set actually fits real-world data. We compare two robust optimization approaches for a network capacity expansion model with outsourceable demand (see, e.g., \cite{Bertsekas1998,Bektas2009}). In this setting, we need to invest into the network infrastructure now, so that each commodity can be routed to satisfy its uncertain demand later. Demand that cannot be satisfied is outsourced, which is modeled through a linear penalty on its amount. The two approaches under consideration are (1) a discrete uncertainty set, which assumes that all demands are given in closed form; and (2) a polyhedral set with a wider range of possible scenarios, which results in a mixed-integer program that is solved heuristically for the resulting robust problem. These two are compared on real-world data taken from SNDlib, and also against the performance of a third model outside the robust framework, a simple stochastic optimization approach. The rest of this paper is organized as follows. \autoref{sec:literature} presents a literature review of related research. In \autoref{sec:problem}, we introduce the problem description of robust network capacity expansion with outsourcing, together with mathematical models for both the discrete and the polyhedral uncertainty sets and a detailed construction of the robust counterparts. Experimental results and main findings using data from the SNDlib (see \cite{Orlowski2010}) are discussed in \autoref{sec:computational}. Finally, \autoref{sec:conclusion} concludes our work and points out future research directions.
\section{Literature Review} \label{sec:literature} The study of uncertainties in decision problems has resulted in two broad areas of research, namely the \emph{stochastic} (see, e.g., \cite{Birge2011}) and the \emph{robust} (see, e.g., \cite{Ben-Tal2009}) optimization frameworks. While the stochastic approach usually assumes that a probability distribution of the uncertain data is known with precision, the robust approach assumes that the uncertain data lies within a predetermined set. The renewed interest in the latter can be attributed to the works of \cite{Ben_Tal_1999} and \cite{El_Ghaoui_1998} with many other collaborators. The two frameworks also have a dynamic context, where a part of the decision has to be made after the realization of the uncertain data. This is known as two-stage stochastic and robust optimization. Depending on the context, two-stage robust problems are also known as adjustable robust counterparts (ARC). Here, the decision variables are partitioned into two sets: the non-adjustable ones (``here and now'' decisions), which must be fixed in advance, before the realization of the uncertain data, and the adjustable ones (``wait and see'' decisions), which are computed after the uncertain parameters are revealed (\cite{Ben_Tal_2004}). As the ARC is more representative of real-life situations where decisions are made over multiple periods, this framework has attracted interest from the research community. However, its general form is known to be computationally intractable, which has led to an approximate model using affine decision rules \textit{(ADR)}. In this affine adjustable robust counterpart \textit{(AARC)}, the adjustable part of the decision is assumed to be an affine linear function of the uncertain data (\cite{Ben_Tal_2004}). This emulates a linear feedback controller that adjusts towards the desired output. As in many other fields, robust optimization has found increasing use and application in the network design area. \cite{Atamt_rk_2007} considered a two-stage robust network flow problem under demand uncertainty following the work of \cite{Ben_Tal_2004}, while \cite{Ouorou2007} introduced affine routing in their robust network capacity planning model. \cite{Ord_ez_2007} looked at network capacity expansion under both demand and cost uncertainty. \cite{Koster_2013} considered a robust network design problem with static routing in the setting of \cite{Bertsimas_2004}. \cite{Poss_2012} applied the AARC to robust network design with polyhedral uncertainty, and \cite{Babonneau2013} used a refined version of ADR in their robust capacity assignment for networks with uncertain demand. Recently, \cite{Pessoa_2015} used a cutting plane algorithm while taking into consideration the uncertainty in the cost of outsourcing unmet demand. Regarding uncertainty sets, polyhedral sets are most frequently used in robust network design, along with hose models from the works of \cite{Duffield1999,Fingerhut_1997}, budget uncertainty by \cite{Atamt_rk_2007}, cardinality-constrained uncertainty by \cite{Bertsimas_2004}, and interval uncertainty, among others. Little research compares these models. \cite{Atamt_rk_2007} compared their single-stage robust model using budget uncertainty with a scenario-based two-stage stochastic approach. \cite{Chassein2018} constructed different uncertainty sets from real-world data and compared in-sample and out-of-sample performance for shortest path problems.
Our focus is to compare the discrete and the polyhedral uncertainty sets in network capacity expansion, in order to determine which one better fits real-world data, while also comparing against the performance of a simple stochastic model that uses the mean demand. \section{Problem Description} \label{sec:problem} We consider a multi-commodity network flow design problem where incremental capacities are installed in response to uncertain traffic demand. The problem is modeled so as to allow a capacity expansion such that the traffic of the different commodities can be routed over the arcs, subject to design and network constraints, while minimizing the total cost involved. We refer to this model as the robust network capacity expansion problem \textit{(RNCEP)}. \subsection{The Basic RNCEP} \label{sec:basic problem} The network under consideration can be represented by a directed graph, $G=(\cV, \cA)$. Each of the arcs $a \in \cA$ has an original capacity $u_a$. The original capacity on each arc $ a $ can be upgraded at a cost $c_a$ per additional unit of capacity $x_a$. There is a set of commodities $\cK=\{1,\ldots,K\}=:[K]$ which need to be routed across the network, each commodity $k \in \cK$ consisting of a demand $d^k \ge 0$, a source node $s^k\in\cV$, and a sink node $t^k\in\cV$. Additionally, let $\sigma$ be the cost of not satisfying one unit of demand over the planning horizon (i.e., of outsourcing it). If all demands are known, the nominal network capacity expansion problem can then be formulated as follows: \begin{align} \min\ &\sum_{a\in \cA} c_a x_a + \sigma \sum_{k\in\cK} \left[ d_k - \sum_{a\in\delta^-(t^k)} f^k_a + \sum_{a\in\delta^+(t^k)} f^k_a \right]_+ \label{con1}\\ \text{s.t. } & \sum_{a\in \delta^-(v)} f^k_a - \sum_{a\in \delta^+(v)} f^k_a \ge 0 & \forall k\in \cK, v\in \cV\setminus\{s^k,t^k\} \label{con2} \\ & \sum_{k\in \cK} f^k_a \leq u_a + x_a & \forall a\in \cA \label{con3}\\ & f^k_a \ge 0 & \forall k\in\cK,a\in\cA \label{con4} \\ & x_a \geq 0 & \forall a\in\cA \label{con5} \end{align} Here, $[y]_+$ denotes $\max\{0,y\}$, while $\delta^+(v)$ and $\delta^-(v)$ are the sets of outgoing and incoming arcs at node $v \in \cV$, respectively. Variables $f^k_a$ denote the flow of commodity $k\in\cK$ along arc $a\in\cA$, while $x_a$ models the amount of capacity added to arc $a$. The objective function~\eqref{con1} minimizes the sum of capacity expansion costs and outsourcing costs. Constraints~\eqref{con2} are a variant of flow conservation constraints, where we allow an arbitrary amount of flow to leave the source node $s^k$. Through the objective, only the flow arriving at $t^k$ is counted. Flow is allowed to be diminished at nodes other than $s^k$ and $t^k$; note that there is an optimal solution in which this does not happen. We do not assume equality in Constraints~\eqref{con2}, so that we can apply our robust optimization approach in the following section. Finally, Constraints~\eqref{con3} model the capacity on each arc. The actual demand values $\pmb{d}$ are uncertain, and can take any value in a predetermined uncertainty set $\cU$. The two sets under consideration in this work are the \textit{discrete uncertainty set}, which can be represented as $\cU = \{\pmb{d}^1,\ldots,\pmb{d}^N\}$, and the \textit{polyhedral uncertainty set}, which can be represented as $\cU = \left\{ \pmb{d}\in\mathbb{R}^K_+ : V\pmb{d} \le \pmb{b}, d_k\in[\underline{d}_k,\overline{d}_k] \right\}$.
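Before turning to the robust versions, note that the nominal model (\ref{con1})--(\ref{con5}) can be implemented directly as a linear program once the $[\cdot]_+$ terms in the objective are linearized with auxiliary variables $h^k\ge 0$ (the same device used in the reformulations below). The following Python/PuLP sketch is purely illustrative: the data structures are assumptions, and it is not the Julia/Gurobi implementation used in our experiments.
\begin{verbatim}
import pulp

def nominal_ncep(nodes, arcs, u, c, commodities, sigma):
    # arcs: list of (i, j) pairs; u, c: dicts arc -> capacity / expansion cost
    # commodities: dict k -> (s_k, t_k, d_k); sigma: outsourcing penalty
    prob = pulp.LpProblem("nominal_NCEP", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", arcs, lowBound=0)          # added capacity
    f = pulp.LpVariable.dicts(
        "f", [(k, a) for k in commodities for a in arcs], lowBound=0)
    h = pulp.LpVariable.dicts("h", list(commodities), lowBound=0)  # [.]_+ part

    prob += (pulp.lpSum(c[a] * x[a] for a in arcs)
             + sigma * pulp.lpSum(h[k] for k in commodities))

    for k, (s, t, d) in commodities.items():
        # outsourced demand of commodity k: h_k >= d_k - net inflow at sink
        inflow = pulp.lpSum(f[(k, a)] for a in arcs if a[1] == t)
        outflow = pulp.lpSum(f[(k, a)] for a in arcs if a[0] == t)
        prob += h[k] >= d - inflow + outflow
        # flow may only be diminished at intermediate nodes, cf. (2)
        for v in nodes:
            if v not in (s, t):
                prob += (pulp.lpSum(f[(k, a)] for a in arcs if a[1] == v)
                         - pulp.lpSum(f[(k, a)] for a in arcs if a[0] == v)
                         ) >= 0

    for a in arcs:  # installed plus added capacity on each arc, cf. (3)
        prob += pulp.lpSum(f[(k, a)] for k in commodities) <= u[a] + x[a]

    prob.solve()
    return {a: x[a].value() for a in arcs}
\end{verbatim}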
The robust network capacity expansion problem then is to find a minimum installation cost of additional capacities while satisfying all potential traffic demands such that actual flows do not exceed cumulative link capacities whatever the realization of demands in $\cU$. Thus, the RNCEP is a two stage robust problem with recourse applying the general framework of \cite{Ben_Tal_2004}. The capacity expansion represented by variables $\pmb{x}$ is the first stage decision variable which has to be fixed before the realization of $\pmb{d} \in \cU$. Once the uncertain demand data is revealed, the traffic adjustment takes place by routing a multi-commodity flow with second stage variable $f^k_a(\pmb{d})$. This can be modeled as follows: \begin{align} \min\ &\sum_{a\in \cA} c_a x_a + \max_{d\in\cU} \sigma \sum_{k\in\cK} \left[ d_k - \sum_{a\in\delta^-(t^k)} f^k_a(\pmb{d}) + \sum_{a\in\delta^+(t^k)} f^k_a(\pmb{d}) \right]_+ \label{con1a}\\ \text{s.t. } & \sum_{a\in \delta^-(v)} f^k_a(\pmb{d}) - \sum_{a\in \delta^+(v)} f^k_a(\pmb{d}) \ge 0 & \forall k\in \cK, \pmb{d} \in \cU, v\in \cV\setminus\{s^k,t^k\} \label{con2a} \\ & \sum_{k\in \cK} f^k_a(\pmb{d}) \leq u_a + x_a & \forall \pmb{d} \in \cU, a\in \cA \label{con3a}\\ & f^k_a(\pmb{d}) \ge 0 & \forall k\in\cK, \pmb{d}\in\cU,a\in\cA \label{con4a} \\ & x_a \geq 0 & \forall a\in\cA \label{con5a} \end{align} Here, we have modified Constraints~(\ref{con1}-\ref{con5}) to take all scenarios into account. Being a robust model, we consider the worst-case costs in Objective~\eqref{con1a}, while all constraints need to hold for all scenarios $\pmb{d}\in\cU$. In the following, we reformulate the general model~(\ref{con1a}-\ref{con5a}) for specific uncertainty sets. \subsection{Robust Optimization with Discrete Uncertainty} \subsubsection{Model} Let $\cU=\{\pmb{d}^1, \dots, \pmb{d}^N\}$ be a discrete uncertainty set, where $N$ is the number of scenarios. In this case, variables $f^k_a(\pmb{d})$ become $f^{k,i}_a$ for all $i\in[N]$. The robust objective function~\eqref{con1a} is reformulated using additional variables $h^{k,i}:=[d^i_k - \sum_{a\in\delta^-(t^k)} f^{k,i}_a + \sum_{a\in\delta^+(t^k)} f^{k,i}_a]_+$ for $k\in\cK$, $i\in[N]$, and $\tau := \max_{i\in[N]} \sum_{k\in\cK} h^{k,i}$. The problem then becomes: \begin{align} \min\ &\sum_{a\in \cA} c_a x_a + \sigma \tau \label{disct1}\\ \text{s.t. } & \tau \ge \sum_{k\in\cK} h^{k,i} & \forall i\in[N] \label{disct3}\\ & h^{k,i} \ge d^i_k - \sum_{a\in\delta^-(t^k)} f^{k,i}_a + \sum_{a\in\delta^+(t^k)} f^{k,i}_a & \forall i\in[N], k\in\cK \label{disct4}\\ & \sum_{a\in \delta^-(v)} f^{k,i}_a - \sum_{a\in \delta^+(v)} f^{k,i}_a \ge 0 & \forall k\in \cK, i\in[N], v\in \cV\setminus\{s^k,t^k\} \label{disct2} \\ & \sum_{k\in \cK} f^{k,i}_a \leq u_a + x_a & \forall i\in[N], a\in \cA \label{disct5} \\ & f^{k,i}_a \ge 0 & \forall k\in\cK,i\in[N],a\in\cA \label{disct6}\\ & h^{k,i} \ge 0 & \forall k\in\cK,i\in[N] \label{disct7}\\ & x_a \geq 0 & \forall a\in\cA \end{align} Here, Constraints~\eqref{disct2} and~\eqref{disct5} correspond to Constraints~\eqref{con2a} and~\eqref{con3a}, whereas the additional Constraints~\eqref{disct3} and~\eqref{disct4} are used to ensure variables $\tau$ and $h^{k,i}$ have the intended effect. Note that, as we minimize, the maximum operator can be expressed by using $\ge$-constraints over the set. 
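The discrete robust counterpart just described differs from the nominal sketch above only in that the flow and outsourcing variables are duplicated per scenario and the worst case is tracked by the epigraph variable $\tau$. A correspondingly adapted, and again purely illustrative, sketch follows; only the here-and-now decision $\pmb{x}$ is kept, while the per-scenario flows play the role of the wait-and-see recourse.
\begin{verbatim}
import pulp

def discrete_robust_ncep(nodes, arcs, u, c, commodities, scenarios, sigma):
    # commodities: dict k -> (s_k, t_k); scenarios: list of dicts k -> d_k^i
    prob = pulp.LpProblem("discrete_RNCEP", pulp.LpMinimize)
    N = range(len(scenarios))
    x = pulp.LpVariable.dicts("x", arcs, lowBound=0)
    f = pulp.LpVariable.dicts(
        "f", [(k, i, a) for k in commodities for i in N for a in arcs],
        lowBound=0)
    h = pulp.LpVariable.dicts(
        "h", [(k, i) for k in commodities for i in N], lowBound=0)
    tau = pulp.LpVariable("tau", lowBound=0)  # worst-case outsourced demand

    prob += pulp.lpSum(c[a] * x[a] for a in arcs) + sigma * tau

    for i, d in enumerate(scenarios):
        prob += tau >= pulp.lpSum(h[(k, i)] for k in commodities)
        for k, (s, t) in commodities.items():
            inflow = pulp.lpSum(f[(k, i, a)] for a in arcs if a[1] == t)
            outflow = pulp.lpSum(f[(k, i, a)] for a in arcs if a[0] == t)
            prob += h[(k, i)] >= d[k] - inflow + outflow
            for v in nodes:
                if v not in (s, t):
                    prob += (
                        pulp.lpSum(f[(k, i, a)] for a in arcs if a[1] == v)
                        - pulp.lpSum(f[(k, i, a)] for a in arcs if a[0] == v)
                    ) >= 0
        for a in arcs:
            prob += pulp.lpSum(f[(k, i, a)] for k in commodities) <= u[a] + x[a]

    prob.solve()
    return {a: x[a].value() for a in arcs}
\end{verbatim}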
\subsubsection{Constructing Data-Based Discrete Uncertainty} To construct a data-based discrete uncertainty set, we assume that scenarios \[ \cR = \{ \pmb{r}^1, \ldots, \pmb{r}^N\} \] of real demands with $\pmb{r}^i \in\mathbb{R}^K_+$ are given, along with the respective source and sink nodes. The trivial approach would be to use $\cU=\cR$ directly. However, previous research (see \cite{Chassein2018}) has shown that this may result in overfitting to the available data. Instead, we consider different scalings. For a fixed commodity $k\in\cK$, let $N'\le N$ denote the absolute frequency that $r^{i,k} > 0$ over all $i\in[N]$. Then let \[ \hat{r}^k = \frac{1}{N'}\sum_{i\in[N]} r^{i,k} \] be the average of the demand scenarios for each $k\in\cK$. For a given $\lambda\in[0,1]$, we set $d^{i,k}(\lambda) = \lambda r^{i,k} + (1-\lambda) \hat{r}^k$ and \[\cU(\lambda) = \left\{ \pmb{d}^1(\lambda), \ldots, \pmb{d}^N(\lambda) \right\}. \] The case $\lambda = 0$ means that we ignore uncertainty and use the average case, while $\lambda=1$ uses the original demand scenarios $\cR$. \subsection{Robust Optimization with Polyhedral Uncertainty} \label{sec:polyhedral} \subsubsection{Model} We now assume the demand uncertainty is given through a general polyhedron of the form \[ \cU = \big\{ \pmb{d}\in\mathbb{R}^K_+ : V\pmb{d} \le \pmb{b}, d_k\in[\underline{d}_k,\overline{d}_k] \big\} \] where $V=(v_{ik})$ is a matrix in $\mathbb{R}^{M\times K}$ and $\pmb{b}$ is a vector in $\mathbb{R}^{M}$ (i.e., there are $M$ linear constraints on the demand vector). To find a tractable robust counterpart, we apply the framework of affine decision rules (ADR) by restricting the flow variables to be affine functions of the uncertainty, i.e., \[ f^k_a(\pmb{d}) = \phi^k_a + \sum_{\ell\in\cK} \Phi^{k,\ell}_a d_{\ell} \] with $\phi^k_a$ and $\Phi^{k,\ell}_a$ being unknown coefficients of the affine linear function in $\pmb{d}$. We now consider each constraint and the objective of problem~(\ref{con1a}-\ref{con5a}) and reformulate them using strong duality. By substituting for $f^k_a(\pmb{d})$, the flow constraints~\eqref{con2a} become: \[ \sum_{a\in\delta^-(v)} \left( \phi^k_a + \sum_{\ell\in\cK} \Phi^{k,\ell}_a d_{\ell} \right) - \sum_{a\in\delta^+(v)} \left(\phi^k_a + \sum_{\ell\in\cK} \Phi^{k,\ell}_a d_{\ell}\right) \ge 0 \qquad \forall k\in\cK, v\in\cV\setminus\{s^k,t^k\}, \pmb{d}\in\cU, \] which is equivalent to \begin{equation} \sum_{a\in\delta^-(v)}\phi^k_a - \sum_{a\in\delta^+(v)} \phi^k_a \ge \sum_{\ell\in\cK} \left(\sum_{a\in\delta^+(v)}\Phi^{k,\ell}_a - \sum_{a\in\delta^-(v)} \Phi^{k,\ell}_a\right) d_\ell \qquad \forall k\in\cK, v\in\cV\setminus\{s^k,t^k\}, \pmb{d}\in\cU.\label{con2b} \end{equation} For each $k\in\cK, v\in\cV\setminus\{s^k,t^k\}$ we can write the worst-case problem as \begin{align*} \max\ & \sum_{\ell\in\cK} \left(\sum_{a\in\delta^+(v)}\Phi^{k,\ell}_a - \sum_{a\in\delta^-(v)} \Phi^{k,\ell}_a\right) d_\ell \\ \text{s.t. } & \sum_{\ell\in\cK} v_{i\ell} d_\ell \le b_i & \forall i\in[M] && [\alpha^{k,v}_i] \\ & d_{\ell} \le \overline{d}_\ell & \forall \ell\in\cK && [ \overline{\beta}^{k,v}_\ell]\\ & -d_{\ell} \le -\underline{d}_{\ell} & \forall \ell\in\cK && [\underline{\beta}^{k,v}_{\ell}] \end{align*} We now consider the dual of this linear optimization problem. In brackets behind every constraint of the primal problem, we have listed the corresponding dual variable.
The dual problem then becomes \begin{align*} \min\ & \sum_{i\in[M]} b_i\alpha^{k,v}_i + \sum_{\ell\in\cK} ( \overline{d}_l\overline{\beta}^{k,v}_{\ell} - \underline{d}_\ell\underline{\beta}^{k,v}_\ell) \\ \text{s.t. } & \sum_{i\in[M]} v_{i\ell}\alpha^{k,v}_i + \overline{\beta}^{k,v}_\ell - \underline{\beta}^{k,v}_\ell \ge \sum_{a\in\delta^+(v)}\Phi^{k,\ell}_a - \sum_{a\in\delta^-(v)} \Phi^{k,\ell}_a &\forall \ell\in\cK \\ & \alpha^{k,v}_i \ge 0 & \forall i\in[M] \\ & \overline{\beta}^{k,v}_\ell \ge 0 & \forall \ell\in\cK \\ & \underline{\beta}^{k,v}_{\ell} \ge 0 & \forall \ell \in\cK. \end{align*} By applying strong duality, we can conclude that the optimal objective value of this dual problem is equal to the worst-case of the right-hand side of Constraint~\eqref{con2b}. Overall, \textbf{Constraint~\eqref{con2a}} is replaced by the following set of constraints and variables: \begin{align*} & \sum_{a\in\delta^-(v)}\phi^k_a - \sum_{a\in\delta^+(v)} \phi^k_a \ge \sum_{i\in[M]} b_i\alpha^{k,v}_i + \sum_{\ell\in\cK} ( \overline{d}_l\overline{\beta}^{k,v}_{\ell} - \underline{d}_\ell\underline{\beta}^{k,v}_\ell) & \forall k\in\cK, v\in\cV\setminus\{s^k,t^k\} \\ & \sum_{i\in[M]} v_{i\ell}\alpha^{k,v}_i + \overline{\beta}^{k,v}_\ell - \underline{\beta}^{k,v}_\ell \ge \sum_{a\in\delta^+(v)}\Phi^{k,\ell}_a - \sum_{a\in\delta^-(v)} \Phi^{k,\ell}_a &\forall k,\ell\in\cK, v\in\cV\setminus\{s^k,t^k\} \\ & \alpha^{k,v}_i \ge 0 & \forall i\in[M], k\in\cK, v\in\cV\setminus\{s^k,t^k\} \\ & \overline{\beta}^{k,v}_\ell \ge 0 & \forall k,\ell\in\cK, v\in\cV\setminus\{s^k,t^k\}\\ & \underline{\beta}^{k,v}_{\ell} \ge 0 & \forall k,\ell \in\cK, v\in\cV\setminus\{s^k,t^k\} \end{align*} We follow a similar procedure for the other constraints. Constraint~\eqref{con3a} can be rewritten as \[ \sum_{k\in\cK} \left( \phi^k_a + \sum_{\ell\in\cK} \Phi^{k,\ell}_a d_{\ell} \right) \le u_a + x_a \qquad \forall d\in\cU, a\in\cA \] The subproblem \begin{align*} \max\ &\sum_{\ell\in\cK} (\sum_{k\in\cK} \Phi^{k,\ell}_a ) d_\ell \\ \text{s.t. } & \pmb{d}\in\cU \end{align*} has the same structure as before. Using dual variables $\pi^a_i,\overline{\rho}^a_\ell,\underline{\rho}^a_\ell$, we can replace \textbf{Constraint~\eqref{con3a}} with the following: \begin{align*} & \sum_{k\in\cK} \phi^k_a + \sum_{i\in[M]} b_i \pi^a_i + \sum_{\ell\in\cK} (\overline{d}_\ell\overline{\rho}^a_\ell - \underline{d}_{\ell}\underline{\rho}^a_\ell) \le u_a + x_a & \forall a\in\cA \\ & \sum_{i\in[M]} v_{i\ell}\pi^a_i + \overline{\rho}^a_\ell -\underline{\rho}^a_\ell \ge \sum_{k\in\cK} \Phi^{k,\ell}_a & \forall \ell\in\cK, a\in\cA \\ & \pi^a_i \ge 0 & \forall i\in[M],a\in\cA \\ & \overline{\rho}^a_\ell \ge 0 & \forall a\in\cA,\ell\in\cK \\ & \underline{\rho}^a_\ell \ge 0 & \forall a\in\cA,\ell\in\cK \end{align*} We now consider the positivity constraint~\eqref{con4a}. 
This becomes \[ \phi^k_a + \sum_{\ell\in\cK} \Phi^{k,\ell}_a d_{\ell} \ge 0 \qquad \forall k\in\cK,a\in\cA,\pmb{d}\in\cU \] Using duality with variables $\xi^{k,a}_i,\overline{\zeta}^{k,a}_\ell,\underline{\zeta}^{k,a}_\ell$ we replace \textbf{Constraint~\eqref{con4a}} with the following: \begin{align*} & \phi^k_a \ge \sum_{i\in[M]} b_i\xi^{k,a}_i + \sum_{\ell\in\cK} (\overline{d}_\ell \overline{\zeta}^{k,a}_\ell - \underline{d}_\ell \underline{\zeta}^{k,a}_\ell) & \forall k\in\cK,a\in\cA \\ & \sum_{i\in[M]} v_{i\ell} \xi^{k,a}_i + \overline{\zeta}^{k,a}_\ell - \underline{\zeta}^{k,a}_\ell \ge - \Phi^{k,\ell}_a & \forall k,\ell\in\cK,a\in\cA \\ & \xi^{k,a}_i \ge 0 & \forall k\in\cK,a\in\cA,i\in[M] \\ &\overline{\zeta}^{k,a}_\ell \ge 0 & \forall k,\ell\in\cK,a\in\cA \\ &\underline{\zeta}^{k,a}_\ell \ge 0 & \forall k,\ell\in\cK,a\in\cA \end{align*} Finally, we consider the objective function~\eqref{con1a}. We need to solve the following problem: \begin{align*} \max\ &\sum_{k\in\cK} \left[ d_k - \sum_{a\in\delta^-(t^k)} \left(\phi^k_a + \sum_{\ell\in\cK} \Phi^{k,\ell}_a d_\ell \right) + \sum_{a\in\delta^+(t^k)} \left(\phi^k_a + \sum_{\ell\in\cK} \Phi^{k,\ell}_a d_\ell \right) \right]_+ \\ \text{s.t. } & \sum_{\ell\in\cK} v_{i\ell} d_\ell \le b_i & \forall i\in[M] \\ & d_\ell \le \overline{d}_\ell & \forall \ell\in\cK \\ & -d_\ell \le -\underline{d}_\ell & \forall \ell \in\cK \end{align*} We introduce new variables $z_k\in\{0,1\}$ to remove the positivity bracket from the objective. \begin{align*} \max \ & \sum_{k\in\cK} \left( d_k - \sum_{a\in\delta^-(t^k)} \left(\phi^k_a + \sum_{\ell\in\cK} \Phi^{k,\ell}_a d_\ell \right) + \sum_{a\in\delta^+(t^k)} \left(\phi^k_a + \sum_{\ell\in\cK} \Phi^{k,\ell}_a d_\ell \right) \right) z_k \\ \text{s.t. } & \sum_{\ell\in\cK} v_{i\ell} d_\ell \le b_i & \forall i\in[M] \\ & d_\ell \le \overline{d}_\ell & \forall \ell\in\cK \\ & -d_\ell \le -\underline{d}_\ell & \forall \ell \in\cK\\ & z_k\in\{0,1\} & \forall k\in\cK \end{align*} We set $z'_{k,\ell} := d_\ell z_k$ and get \begin{align*} \max\ & \sum_{k\in\cK} \left( z'_{kk} - \sum_{a\in\delta^-(t^k)} \left(\phi^k_az_k + \sum_{\ell\in\cK} \Phi^{k,\ell}_a z'_{k\ell} \right) + \sum_{a\in\delta^+(t^k)} \left(\phi^k_az_k + \sum_{\ell\in\cK} \Phi^{k,\ell}_a z_{k\ell} \right) \right) \\ \text{s.t. } & \sum_{\ell\in\cK} v_{i\ell} d_\ell \le b_i & \forall i\in[M] && [\mathfrak{q}_i]\\ & z'_{k\ell} \le d_\ell & \forall k,\ell\in\cK && [\mathfrak{r}_{k\ell}]\\ & z'_{k\ell} \le \overline{d}_\ell z_k & \forall k,\ell\in\cK && [\mathfrak{s}_{k\ell}]\\ & d_\ell + \overline{d}_\ell z_k - z'_{k\ell} \le \overline{d}_\ell & \forall k,\ell\in\cK && [\mathfrak{t}_{k\ell}] \\ & d_\ell \le \overline{d}_\ell & \forall \ell\in\cK && [\mathfrak{u}_{\ell}] \\ & -d_\ell \le -\underline{d}_\ell & \forall \ell \in\cK && [\mathfrak{v}_\ell] \\ & z_k\in\{0,1\} & \forall k\in\cK && [\mathfrak{w}_k] \\ & z'_{k\ell} \ge 0 & \forall k,\ell\in\cK \end{align*} By relaxing constraints $z_k\in\{0,1\}$ to $z_k\in[0,1]$ for a conservative approximation and dualizing the problem, we arrive at \begin{align*} \min\ & \sum_{i\in[M]} b_i\mathfrak{q}_i + \sum_{k\in\cK}\sum_{\ell\in\cK}\overline{d}_\ell\mathfrak{t}_{k\ell} + \sum_{\ell\in\cK} \overline{d}_\ell\mathfrak{u}_\ell - \sum_{\ell\in\cK} \underline{d}_\ell\mathfrak{v}_\ell + \sum_{k\in\cK} \mathfrak{w}_k \\ \text{s.t. 
} & \sum_{i\in[M]} v_{i\ell} \mathfrak{q}_i - \sum_{k\in\cK} \mathfrak{r}_{k\ell} + \sum_{k\in\cK} \mathfrak{t}_{k\ell} + \mathfrak{u}_\ell - \mathfrak{v}_\ell \ge 0 & \forall \ell\in\cK \\ & -\sum_{\ell\in\cK} \overline{d}_\ell \mathfrak{s}_{k\ell} + \sum_{\ell\in\cK}\overline{d}_\ell \mathfrak{t}_{k\ell} + \mathfrak{w}_k \ge \sum_{a\in\delta^+(t^k)} \phi^k_a - \sum_{a\in\delta^-(t^k)} \phi^k_a & \forall k\in\cK \\ & \mathfrak{r}_{k\ell} + \mathfrak{s}_{k\ell} - \mathfrak{t}_{k\ell} \ge 1_{k=\ell} + \sum_{a\in\delta^+(t^k)} \Phi^{k,\ell}_a - \sum_{a\in\delta^-(t^k)} \Phi^{k,\ell}_a & \forall k,\ell\in\cK \\ & \mathfrak{q}_i \ge 0 & \forall i\in[M] \\ & \mathfrak{r}_{k\ell},\mathfrak{s}_{k\ell},\mathfrak{t}_{k\ell} \ge 0 & \forall k,\ell\in\cK \\ & \mathfrak{u}_\ell,\mathfrak{v}_\ell,\mathfrak{w}_\ell \ge 0 & \forall \ell\in\cK \end{align*} Overall, we get the following affine adjustable robust counterpart to Problem~(\ref{con1a}-\ref{con5a}): \begin{align*} \min\ & \sum_{a\in\cA} c_ax_a + \sigma\left(\sum_{i\in[M]} b_i\mathfrak{q}_i + \sum_{k\in\cK}\sum_{\ell\in\cK}\overline{d}_\ell\mathfrak{t}_{k\ell} + \sum_{\ell\in\cK} \overline{d}_\ell\mathfrak{u}_\ell - \sum_{\ell\in\cK} \underline{d}_\ell\mathfrak{v}_\ell + \sum_{k\in\cK} \mathfrak{w}_k\right)\hspace*{-2.5cm} \\ \text{s.t. } & \sum_{i\in[M]} v_{i\ell} \mathfrak{q}_i - \sum_{k\in\cK} \mathfrak{r}_{k\ell} + \sum_{k\in\cK} \mathfrak{t}_{k\ell} + \mathfrak{u}_\ell - \mathfrak{v}_\ell \ge 0 & \forall \ell\in\cK \\ & -\sum_{\ell\in\cK} \overline{d}_\ell \mathfrak{s}_{k\ell} + \sum_{\ell\in\cK}\overline{d}_\ell \mathfrak{t}_{k\ell} + \mathfrak{w}_k \ge \sum_{a\in\delta^+(t^k)} \phi^k_a - \sum_{a\in\delta^-(t^k)} \phi^k_a & \forall k\in\cK \\ & \mathfrak{r}_{k\ell} + \mathfrak{s}_{k\ell} - \mathfrak{t}_{k\ell} \ge 1_{k=\ell} + \sum_{a\in\delta^+(t^k)} \Phi^{k,\ell}_a - \sum_{a\in\delta^-(t^k)} \Phi^{k,\ell}_a & \forall k,\ell\in\cK \\ & \sum_{a\in\delta^-(v)}\phi^k_a - \sum_{a\in\delta^+(v)} \phi^k_a \ge \sum_{i\in[M]} b_i\alpha^{k,v}_i + \sum_{\ell\in\cK} ( \overline{d}_l\overline{\beta}^{k,v}_{\ell} - \underline{d}_\ell\underline{\beta}^{k,v}_\ell) & \forall k\in\cK, v\in\cV\setminus\{s^k,t^k\} \\ & \sum_{i\in[M]} v_{i\ell}\alpha^{k,v}_i +\overline{\beta}^{k,v}_\ell - \underline{\beta}^{k,v}_\ell \ge \sum_{a\in\delta^+(v)}\Phi^{k,\ell}_a - \sum_{a\in\delta^-(v)} \Phi^{k,\ell}_a &\forall k,\ell\in\cK, v\in\cV\setminus\{s^k,t^k\} \\ & \sum_{k\in\cK} \phi^k_a + \sum_{i\in[M]} b_i \pi^a_i + \sum_{\ell\in\cK} (\overline{d}_\ell\overline{\rho}^a_\ell - \underline{d}_{\ell}\underline{\rho}^a_\ell) \le u_a + x_a & \forall a\in\cA \\ & \sum_{i\in[M]} v_{i\ell}\pi^a_i + \overline{\rho}^a_\ell - \underline{\rho}^a_\ell \ge \sum_{k\in\cK} \Phi^{k,\ell}_a & \forall \ell\in\cK, a\in\cA \\ & \phi^k_a \ge \sum_{i\in[M]} b_i\xi^{k,a}_i + \sum_{\ell\in\cK} (\overline{d}_\ell \overline{\zeta}^{k,a}_\ell - \underline{d}_\ell \underline{\zeta}^{k,a}_\ell) & \forall k\in\cK,a\in\cA \\ & \sum_{i\in[M]} v_{i\ell} \xi^{k,a}_i + \overline{\zeta}^{k,a}_\ell - \underline{\zeta}^{k,a}_\ell \ge - \Phi^{k,\ell}_a & \forall k,\ell\in\cK,a\in\cA \\ & x_a \ge 0 &\forall a\in\cA \\ & \mathfrak{q}_i \ge 0 & \forall i\in[M] \\ & \mathfrak{r}_{k\ell},\mathfrak{s}_{k\ell},\mathfrak{t}_{k\ell} \ge 0 & \forall k,\ell\in\cK \\ & \mathfrak{u}_\ell,\mathfrak{v}_\ell,\mathfrak{w}_\ell \ge 0 & \forall \ell\in\cK \\ & \alpha^{k,v}_i \ge 0 & \forall i\in[M], k\in\cK, v\in\cV\setminus\{s^k,t^k\} \\ & \overline{\beta}^{k,v}_\ell,\underline{\beta}^{k,v}_{\ell} \ge 0 & \forall 
k,\ell\in\cK, v\in\cV\setminus\{s^k,t^k\}\\ & \pi^a_i \ge 0 & \forall i\in[M],a\in\cA \\ & \overline{\rho}^a_\ell,\underline{\rho}^a_\ell \ge 0 & \forall a\in\cA,\ell\in\cK \\ & \xi^{k,a}_i \ge 0 & \forall k\in\cK,a\in\cA,i\in[M] \\ &\overline{\zeta}^{k,a}_\ell,\underline{\zeta}^{k,a}_\ell \ge 0 & \forall k,\ell\in\cK,a\in\cA \end{align*} \subsubsection{Constructing Data-Based Polyhedral Uncertainty} \label{polcon} Constructing a polyhedron that contains the demand scenarios $\cR$ can be considered as an optimization problem on its own. We would like to determine constraint coefficients $(v_{i1},\ldots,v_{iK},b_i)$ that determine a polyhedron $\cU$ such that the distance of $\mathcal{R}$ to the boundary of $\cU$ with respect to a norm $\|\cdot\|$ is as small as possible. Recall that the distance between a point $\pmb{p}$ and a hyperplane $(a_1,\ldots,a_K,b)$ is given through \[ \frac{ |\sum_{i\in[K]} a_i p_i - b| }{\|\pmb{a}\|^* } \] where $\|\cdot\|^*$ is the dual norm of $\|\cdot\|$. An optimization model to determine $\cU$ is hence: \begin{align*} \min\ & \sum_{i\in[N]} \min_{j\in[M]} (b_j - \sum_{k\in[K]} r^{i,k} v_{jk}) \\ \text{s.t. } & \sum_{k\in[K]} r^{i,k} v_{jk} \le b_j & \forall i\in[N],j\in[M] \\ & \| \pmb{v}_{j\cdot} \|^* = 1 \end{align*} where $\pmb{v}_{j\cdot}$ denotes the $j$th row of $V$. While such an approach is useful for low-dimensional data (i.e., few commodities $K$), it is less efficient for high-dimensional data. In fact, the additional lower and upper bounds $[\underline{d}_k,\overline{d}_k]$ may already suffice to determine a polyhedron where every point in $\cR$ is on its boundary. Therefore, we also consider randomly generated hyperplanes. To this end, we sample each $v_{ik}$ randomly uniformly from $[0,1]$. Then we set \[ b_i := \max_{j\in[N]} \sum_{k\in[K]} v_{ik} r^{j,k} \] to find a tight constraint. In particular, we always contain the sum-constraint where $v_{ik}=1/K$ for all $k\in[K]$. \subsection{Stochastic Optimization with Distribution Mean} \subsubsection{Model} Let $\overline{\pmb{d}}$ be the vector of mean demands of distributions fitted independently to every commodity using demand scenarios $\cR=\{\pmb{r}^1, \ldots, \pmb{r}^N\}$. We reformulate Problem~(\ref{con1}-\ref{con5}) using only this single mean demand scenario. To linearize the positivity brackets $[\cdot]_+$, we introduce variables $h^k$ for every commodity $k\in\cK$. The problem then becomes: \begin{align*} \min\ &\sum_{a\in \cA} c_a x_a + \sigma \sum_{k\in\cK} h^{k} \\ \text{s.t. } & h^{k} \ge \overline{d}^k - \sum_{a\in\delta^-(t^k)} f^{k}_a + \sum_{a\in\delta^+(t^k)} f^{k}_a & \forall k\in\cK \\ & \sum_{a\in \delta^-(v)} f^{k}_a - \sum_{a\in \delta^+(v)} f^{k}_a \ge 0 & \forall k\in \cK, v\in \cV\setminus\{s^k,t^k\} \\ & \sum_{k\in \cK} f^{k}_a \leq u_a + x_a & \forall a\in \cA \\ & f^{k}_a \ge 0 & \forall k\in\cK,a\in\cA \\ & h^{k} \ge 0 & \forall k\in\cK\\ & x_a \geq 0 & \forall a\in\cA \end{align*} \subsubsection{Generating Data-Based Distribution Mean} \label{sec:stochastic} The demand for the stochastic optimization model is generated from the demand scenarios $\cR$ using the mean of the zero-inflated uniform distribution in the following way. For a fixed commodity $k\in\cK$, let $N'\le N$ denote the absolute frequency that $r^{i,k} > 0$ over all $i\in[N]$. To fit a uniform distribution, set \[ r^k_{min} = \min_{i\in [N] : r^{i,k} > 0} r^{i,k} \text{ and } r^k_{max}= \max_{i\in [N]}\ r^{i,k} \] and the mean of the uniform distribution is $\bar{r}^k=1/2(r^k_{min}+r^k_{max})$. 
The remaining absolute frequency, $N - N'$, accounts for observing a zero demand, yielding the mean demand of the zero-inflated uniform distribution $\overline{d}^k = \overline{r}^k N'/N$. \section{Computational Experiments} \label{sec:computational} \subsection{Setup} The aim of our experiments is to determine which model gives the best solution to uncertain network design. On the one hand, the discrete uncertainty model is simpler than the polyhedral uncertainty model, and we can expect it to be solvable using more commodities, thus giving a more detailed description of the uncertainty. The polyhedral model, on the other hand, will use fewer commodities, but has a more complex description of the uncertainty available. As noted in the literature review (Section~\ref{sec:literature}), polyhedral models are popular in current research. We consider the following experimental setup to address our question. Using a data set of real-world scenarios, we separate it into a training set and an evaluation set. We construct different uncertainty sets only based on the training set, and solve the resulting robust (or stochastic) optimization problems. We then only keep the here-and-now part of the solution, i.e., the decision $\pmb{x}$ on the infrastructure investment. This investment is then assessed on the evaluation set by calculating optimal flows for each scenario. As the first-stage investment costs are already fixed, the flow problem only aims at minimizing the outsourced demand. We then compare investment costs and outsourced demand for all models. The experimental setup is summarized in \autoref{TabS}. The $\text{Discrete}_1$ experiment fixes two $\sigma$ values for varying values of $\lambda$, while the $\text{Discrete}_2$ experiment fixes $\lambda$ for varying values of $\sigma$. The $\text{Polyhedral}_1$ experiment uses a polyhedron with only one constraint (the sum-constraint) for all eleven values of $\sigma$, $\text{Polyhedral}_2$ uses a polyhedron with two hyperplanes and eight values for $\sigma$, while $\text{Polyhedral}_3$ uses a polyhedron with eight hyperplanes and seven possible $\sigma$ values. The reduced choice of $\sigma$ values with increasing number of hyperplanes was due to increased computation times. \begin{table}[tbp]\footnotesize \centering \caption{Experimental Setup} \begin{tabular}{rcrrrc} \toprule \textbf{Experiment} & \textbf{Nr of $\sigma$ values} &\textbf{$\sigma$ values} &\textbf{Nr of $\lambda$ values}&\textbf{$\lambda$ values} & \textbf{Nr of hyperplanes}\\ \toprule $\text{Discrete}_1$& 2 & 12,450 and 24,900&11&0.0 to 1.0& \\ $\text{Discrete}_2$& 11 & 0 to 24,900& 2& 0.5 and 1.0& \\ Stochastic& 11& 0 to 24,900& 1& 1.0& \\ $\text{Polyhedral}_1$& 11& 0 to 24,900& &&1\\ $\text{Polyhedral}_2$& 8 & 0 to 17,430& &&2\\ $\text{Polyhedral}_3$& 7 & 0 to 14,940& &&7\\ \bottomrule \end{tabular} \label{TabS} \end{table} The $\text{Discrete}_1$ experiment therefore has to solve $22$ optimization models, and each of these $22$ results was then evaluated on each of the demand scenarios from the evaluation set. The same was carried out for the other five experiments. The choice of $\sigma$, which represents the penalty for unmet demand, was a key consideration for these models and hence in the experimental setup.
If $\sigma$ is too small, there is an incentive to leave demand unmet: almost all demand is outsourced and no new capacity is installed in the network. With a large $\sigma$, in contrast, the incentive is to avoid unmet demand, which encourages the deployment of new network capacity. Several values of $\sigma$ were tested in a preliminary experiment using discrete uncertainty, see Table~\ref{Tab2}. Based on the outcomes, the value range for $\sigma$ was selected, taking the $95^{th}$ percentile of the capacity cost distribution into account. \begin{table}[tbp]\footnotesize \centering \caption{Impact of $\sigma$ on outsourced demand.} \begin{tabular}{rcrrrr} \toprule \textbf{Objective} & \textbf{Commodity} &\textbf{Capacity Added} &\textbf{Sol.\ Time}&\textbf{Outsourced Demand} & \textbf{Penalty ($\sigma$)}\\ \toprule 530,226.88& 400 & 0.00&199.56&127,254.45& 100\\ 5,302,268.77& 400 & 0.00&127.48&127,254.45& 1,000\\ 49,191,424.59& 400 & 2,191.83&443.11& 99,409.83&10,000\\ 68,676,799.21& 400 & 16,139.27&372.44& 13,262.58&20,000\\ 72,391,760.98& 400 & 18,151.57&575.90& 6,074.70&30,000\\ 74,113,211.23& 400 & 19,736.41&455.89& 2,335.66&40,000\\ 74,479,094.31& 400 & 20,873.46&378.29& 0.00&50,000\\ \bottomrule \end{tabular} \label{Tab2} \end{table} In total, over $34,000$ numerical experiments were carried out according to the setup. Models were implemented using Julia and Gurobi version 7.5 on a Lenovo desktop machine with 8 GB RAM and an Intel Core i5-65 CPU at 2.50 GHz, using Windows 10 OS 64-bit. In Gurobi, we used a time limit of $9000$s for each problem instance, and a solution is accepted as optimal once the optimality gap is below $0.01\%$. \subsection{Data} We tested the discrete, polyhedral and stochastic models using network data instances taken from the online SNDlib library\footnote{See \url{http://sndlib.zib.de}}, see \cite{Orlowski2010}. The particular network data considered in this work is Germany-50 with $50$ nodes and $176$ directed arcs (we also included arcs in opposite directions). Three levels of aggregation of real-world traffic measurement data are available: one full day (in 5 minute intervals), one full month (in 1 day intervals) and one whole year (in 1 month intervals). For our experiment, we focus on the full day dataset, consisting of $N=288$ scenarios. A peak demand of $7,649.83$ was recorded at 3pm in the demand profile, see Figure~\ref{FigDP}. We separate the scenarios into a training set consisting of 24 scenarios, which is generated by taking every 12th demand scenario (i.e., one scenario per hour), and the evaluation set consisting of the remaining 264 scenarios. We refer to the training set as MS-24. \begin{figure}[tbp] \begin{minipage} {.48\textwidth} \centering \includegraphics[width=\linewidth]{"Demand-Profile"} \caption{A full day demand profile.}\label{FigDP} \end{minipage} \hfill \begin{minipage} {.48\textwidth} \centering \includegraphics[width=\linewidth]{"Commodities"} \caption{A full day commodities profile.}\label{FigCP} \end{minipage} \end{figure} Each scenario has a different number of commodities, see Figure~\ref{FigCP}. Some of the demand values were found to be very small. While the $99^{th}$ percentile of all demand values is 0.415, some values are in the range of $10^{-6}$. To simplify the optimization problems, we sort the commodities in descending order of demand and then choose a fixed number of commodities for all demand scenarios that covers over 98\% of the original demand data, which is the case for 400 commodities.
\autoref{Tab1} shows different numbers of commodities against the percentage of original data captured in the streamlined data. This approach was implemented instead of allowing a varying number of commodities per demand scenario; it allows us to consider all significant demand values while discarding very low ones, thus significantly reducing the average number of commodities per demand scenario. \begin{table}[tbp]\footnotesize \centering \caption{Impact of choice of $K$ on the presorted original data.} \begin{tabular}{lcc} \toprule \textbf{Options with MS-24} & \textbf{Commodity} &\textbf{\% of Original Data Captured} \\ \toprule All Demand & 300 & 97.48\% \\ All Demand & 400 & 98.88\% \\ All Demand & 450 & 99.25\% \\ All Demand & 500 & 99.50\% \\ All Demand & 900 & 99.88\% \\ \bottomrule \end{tabular} \label{Tab1} \end{table} We observed that a model based on a polyhedron with 400 commodities computed from the training set demand matrix could not be solved in reasonable time. Hence, the polyhedron was generated for a reduced number of commodities, so that an optimal solution can be obtained in an amount of time that would encourage practical usage in industry. Instead, we work with $K=20$, which captures the top commodities in the training set. This reflects the fact that the more complex the model for the uncertainty, the harder the optimization model itself becomes, and the less data we can use for building our sets. This trade-off is investigated in our experiments. Additionally, our polyhedra were calculated using the random constraint sampling method from Section~\ref{polcon}, as lower and upper bounds already gave an optimal solution to the optimization approach for constructing polyhedra. \subsection{Computational Results} We consider the performance of the capacity expansion solutions on the evaluation scenarios. We used four metrics on these 264 scenarios: the average, the maximum, the average of the worst 10\% (known as the conditional-value-at-risk, or CVaR), and the standard deviation. Note that all these measures were calculated for scenarios that were not known to the models at the time of solution. We first of all note that all polyhedral models $\text{Polyhedral}_1$ to $\text{Polyhedral}_3$ gave the same results, so we do not differentiate between them in the following. In \autoref{Fig1} to \autoref{Fig4}, the four metrics are shown against the first-stage investment costs for two values of $\sigma$ (i.e., using $\text{Discrete}_1$). As expected, increasing $\sigma$ results in building more capacity in the network and hence reducing the amount of demand being outsourced. This is true for both the robust and the stochastic models. For the discrete uncertainties, the network capacity built increases with increasing value of $\lambda$ from 0 (ignoring uncertainty) to 1 (using the real demands) for a fixed $\sigma$ value. \begin{figure}[tbp] \begin{minipage}[t]{.48\textwidth} \centering \includegraphics[width=\textwidth]{"mean-OD-si"} \caption{Mean outsourced demand. Discrete model uses varying values of $\lambda$.}\label{Fig1} \end{minipage} \hfill \begin{minipage}[t]{.48\textwidth} \includegraphics[width=\textwidth]{"max-OD-si"} \caption{Maximum outsourced demand. Discrete model uses varying values of $\lambda$.}\label{Fig2} \end{minipage} \end{figure} \begin{figure}[tbp] \begin{minipage}[t] {.48\textwidth} \centering \includegraphics[width=\linewidth]{"CVar-OD-si"} \caption{CVaR of outsourced demand.
Discrete model uses varying values of $\lambda$.}\label{Fig3} \end{minipage} \hfill \begin{minipage}[t] {.48\textwidth} \includegraphics[width=\linewidth]{"SD-OD-si"} \caption{Standard deviation of outsourced demand. Discrete model uses varying values of $\lambda$.}\label{Fig4} \end{minipage} \end{figure} In \autoref{Fig5} to \autoref{Fig8}, varying penalty values $\sigma$ were considered for the three models while fixing $\lambda$ for the discrete uncertainty model (using $\text{Discrete}_2$). The outsourced demand $\tau$ decreases with an increase in the $\sigma$ value. The implication of a higher penalty is that overall risk is minimized by deploying additional network capacity rather than by outsourcing demand. In \autoref{Fig3} and \autoref{Fig7}, the CVaR was observed to decrease with increasing robustness of the models. \autoref{Fig2} seems to provide almost the same information as the CVaR; indeed, the two metrics are highly correlated, with a correlation coefficient of $0.9993$ and a gradient of approximately $1$, as shown in \autoref{Fig9}. Though this analysis was done for the discrete model, the same result is consistent with that from the other two models. \begin{figure}[tbp] \begin{minipage}[t]{.48\textwidth} \centering \includegraphics[width=\linewidth]{"mean-OD-ld"} \caption{Mean outsourced demand. All models use varying values of $\sigma$.}\label{Fig5} \end{minipage} \hfill \begin{minipage}[t]{.48\textwidth} \includegraphics[width=\linewidth]{"max-OD-ld"} \caption{Maximum outsourced demand. All models use varying values of $\sigma$.}\label{Fig6} \end{minipage} \end{figure} \begin{figure}[tbp] \begin{minipage}[t] {.48\textwidth} \centering \includegraphics[width=\linewidth]{"CVar-OD-ld"} \caption{CVaR of outsourced demand. All models use varying values of $\sigma$.}\label{Fig7} \end{minipage} \hfill \begin{minipage}[t] {.48\textwidth} \includegraphics[width=\linewidth]{"SD-OD-ld"} \caption{Standard deviation of outsourced demand. All models use varying values of $\sigma$.}\label{Fig8} \end{minipage} \end{figure} Ideally, a good solution is in the bottom left corner of these plots. We note that some of the points corresponding to polyhedral models are dominated, and so are the stochastic solutions. The discrete model produces the best trade-off solutions between investment and outsourcing. For instance, in \autoref{Fig7}, with the same link capacity investment of $\$40$ million, the stochastic model has a higher CVaR figure. The data-point line for the discrete model with $\lambda=0.5$ lies below that of the stochastic model, as can be seen in \autoref{Fig5} to \autoref{Fig8}. Hence, the discrete model provides the best compromise between a too simple and a too complex approach for the data under consideration. \begin{figure}[tbp] \centering \includegraphics[width=.48\linewidth]{"CVar-Max"} \caption{Correlation between the CVaR and the maximum of outsourced demand.}\label{Fig9} \end{figure} \section{Conclusion} \label{sec:conclusion} In the robust optimization literature, the shape of uncertainty is often an assumption made without any grounding in actually available data. This also holds for network expansion problems, where polyhedral models have been popular. In this paper, we considered the question of whether such an approach leads to solutions which perform well on unseen data, i.e., what kind of uncertainty sets are most appropriate for our model.
We developed robust (using discrete and polyhedral uncertainty sets) and stochastic approaches to a multi-commodity network capacity expansion problem with the option of demand outsourcing. These models were implemented for real-world network data taken from the SNDlib, and their results were subsequently compared. In the experimental setup, a number of penalty values for demand outsourcing were considered, while also varying the robustness of the discrete model with different sizes of the uncertainty set. Increasing the penalty results in additional capital expenditure on network capacity, as this reduces the amount of demand outsourced as well as the conditional-value-at-risk (CVaR). Of the three models, the robust model with the discrete uncertainty set produced the best trade-off solutions on all performance metrics. It was also observed that the discrete set is easy to generate (as expected, since the original data is already in this form), the model is simple, and it produces optimal results faster. The robust model with a polyhedral uncertainty set, on the other hand, is more complex, offers more options to describe the data, and results in computationally more challenging problems. In our case, the extra effort associated with the polyhedral model may not really be worth it in the end. Surprisingly, the simple stochastic optimization model which we used for benchmarking was relatively competitive, and thus might be appropriate for use in more complex situations in which the uncertainty-based robust models are computationally intractable.
\section{Introduction} In recent years, it has been exceedingly clear that theories of quantum gravity and their holographic duals are {\it maximally chaotic}. This idea leverages the striking insight that there is a universal upper bound on quantum chaos \cite{Maldacena:2015waa}. The Maldacena-Shenker-Stanford (MSS) bound \cite{Maldacena:2015waa} states that in thermal quantum systems with a large number of degrees of freedom, a class of out-of-time-order correlators (OTOCs), which measures chaos, cannot grow with time faster than $e^{\lambda_L t}$, with a Lyapunov exponent $\lambda_L\le 2\pi/\beta$. This observation provides a precise definition of the general idea of maximal chaos \cite{Sekino:2008he} in terms of OTOCs that saturate the MSS bound. However, it is also known that this description is incomplete since these OTOCs are also bounded and hence cannot grow indefinitely. This conflict implies that the rate of growth of all maximally chaotic OTOCs must deviate significantly from the MSS saturation as they approach the {\it scrambling time} $t_*$. In other words, maximal chaos is a statement about the leading order behavior of an OTOC, which requires an {\it analytic completion}. So, an important problem is to systematically study analytic completions of maximal chaos, finding universal features. One of the goals of this paper is to develop some tools to address this problem. Interestingly, the above problem is closely related to another more conceptual problem that has emerged from a new set of chaos bounds obtained in \cite{Kundu:2021qcx}. In particular, it was shown in \cite{Kundu:2021qcx} that the same class of OTOCs must satisfy an infinite set of constraints. The MSS bound, which is just one of these constraints, can be regarded as the leading chaos bound. However, there are infinitely many additional {\it subleading chaos bounds} that, in principle, can also be saturated. So, what could be more natural than to ask whether, and in what way, all these chaos bounds can be consistently saturated? It is almost unsurprising that there is a connection between this problem and the problem of analytic completions of maximal chaos. Let us now make these questions more precise. The MSS bound applies to all unitary chaotic systems with a large number of degrees of freedom and a simple Hamiltonian $H$. In such a system, consider the thermal OTOC at temperature $T=1/\beta$ \begin{equation}\label{eq:otoc} F(t)=\operatorname{tr} \[y V(0) y W(t)yV(0)yW(t)\]\ , \qquad y^4=\frac{e^{-\beta H}}{\operatorname{tr}\[e^{-\beta H}\]}\ \end{equation} of any two simple Hermitian local operators $V$ and $W$ with vanishing thermal one-point functions. This OTOC is an analytic function in the half-strip $\{t\in \mathbb{C}|\ \mbox{Re}\ t>0\ \text{and}\ |\mbox{Im}\ t|\le \frac{\beta}{4} \}$, obeying the Schwarz reflection condition $\(F(t)\)^*=F\(t^*\)$. The MSS bound was derived under an additional well-motivated assumption that this OTOC is also bounded $|F(t)|\le F_d$ in the half-strip by the factorized correlator\footnote{For a definition see equation (\ref{def:Fd}).} above the {\it factorization time scale} $\mbox{Re}\ t \ge t_0$, where the factorization time $t_0$ is larger than the dissipation time but much smaller than the scrambling time $t_*$. This imposes a sharp bound on the rate of growth $ \( \frac{2\pi}{\beta}- \partial_t\)\(F_d-F(t)\)\ge 0$ for $t\gg t_0$ \cite{Maldacena:2015waa}. 
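The growth-rate bound above is straightforward to probe numerically. The following minimal Python sketch, included purely for illustration (all parameter values are our own assumptions, not inputs from any particular system), evaluates $\frac{d}{dt}\(F_d-F(t)\)$ against $\frac{2\pi}{\beta}\(F_d-F(t)\)$ for a Lyapunov ansatz $F(t)=F_d-\frac{c_1}{{\cal N}}e^{\lambda_L t}$; the difference is strictly negative for a sub-maximal exponent and vanishes (up to finite-difference error) only when $\lambda_L=2\pi/\beta$.
\begin{verbatim}
import numpy as np

# Illustrative parameters (assumptions): beta = 2*pi so that 2*pi/beta = 1.
beta, F_d, c1, N = 2 * np.pi, 1.0, 2.0, 1.0e6
lambda_mss = 2 * np.pi / beta

def F(t, lam):
    # Lyapunov ansatz: F(t) = F_d - (c1/N) * exp(lam * t)
    return F_d - (c1 / N) * np.exp(lam * t)

t = np.linspace(5.0, 10.0, 501)               # t_0 << t << t_* regime
for lam in (0.5 * lambda_mss, lambda_mss):
    growth = np.gradient(F_d - F(t, lam), t)  # d/dt (F_d - F)
    bound = lambda_mss * (F_d - F(t, lam))    # (2*pi/beta) * (F_d - F)
    print(f"lambda_L = {lam/lambda_mss:.1f} * (2 pi/beta): "
          f"max of [d/dt(F_d - F) - bound] = {np.max(growth - bound):.2e}")
\end{verbatim}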
A quantum system is said to be maximally chaotic when OTOCs exhibit a period of Lyapunov growth saturating the MSS bound at the leading order \begin{equation}\label{intro:maximal} F(t)=F_d-\frac{c_1}{{\cal N}} e^{\frac{2\pi}{\beta} t}+\cdots \ , \qquad \text{for}\qquad t_0\ll t\ll t_*\ , \end{equation} where $c_1$ is a positive order one coefficient and ${\cal N}\gg 1$ is an effective measure of the number of degrees of freedom per site, determining the scrambling time $t_*=\frac{\beta}{2\pi}\ln {\cal N}$ \cite{Sekino:2008he}. However, the maximally chaotic OTOC (\ref{intro:maximal}), by itself, is neither bounded nor analytic in the entire half-strip $\S=\{t\in \mathbb{C}|\ \mbox{Re}\ t\ge t_0\ \text{and}\ |\mbox{Im}\ t|\le \frac{\beta}{4} \}$. Hence, the maximally chaotic OTOC must be accompanied by correction terms that make it consistent with the analyticity and the boundedness conditions in the entire half-strip $\S$. These correction terms, which are denoted by dots in (\ref{intro:maximal}), start to dominate at some time scale $t_{\rm eff}\le t_*$. In other words, the full OTOC $F(t)$ is analytic and bounded in $\S$, approaching the maximally chaotic OTOC (\ref{intro:maximal}) only at early times $t<t_{\rm eff}$. We can now ask the following question. What rigorous statements can we make about the full OTOC from basic principles and symmetries? This problem is worth exploring since this class of OTOCs will be of importance in quantum gravity. Recently, it was shown in \cite{Kundu:2021qcx} that the OTOC (\ref{eq:otoc}), under the same set of assumptions, must satisfy an infinite set of additional local constraints beyond the MSS bound. These new chaos bounds also constrain the correction terms in (\ref{intro:maximal}). In itself, this should not be too surprising since not every early-time expansion of $F(t)$ can resum into a function that is analytic and bounded even at late times. The chaos bounds of \cite{Kundu:2021qcx} provide a systematic realization of this fact by introducing a local {\it moment} $\mu_J(t)$ of the OTOC, as defined in (\ref{def:moments}). These bounds state that the moment $\mu_J(t)$, for integer $J\ge 0$, must be a positive, bounded, monotonically decreasing, log-convex function of $J$ for all real $t\ge t_0$ \cite{Kundu:2021qcx}.\footnote{An analogous but strictly weaker statement is that $\left[\prod_{I=1}^N \( \frac{2\pi(2I-1)}{\beta}- \partial_t\)\right]\(F_d-F(t)\)\ge 0$ for all integer $N\ge 1$ and $t\gg t_0$ \cite{Kundu:2021qcx}. Of course, the MSS bound is the special case $N=1$.} Importantly, this includes an infinite subset of bounds that allow saturation. So, from the perspective of these new bounds, the maximally chaotic OTOC (\ref{intro:maximal}) appears to be conceptually incomplete since it saturates only one of the infinitely many constraints. This leads to the idea of extremal chaos, which we explain next. \begin{figure} \centering \includegraphics[scale=0.4]{fig_intro.pdf} \caption{ \label{figure:intro} \small The extremally chaotic OTOC coincides with the maximally chaotic OTOC (red dashed line) only before the effective time scale $t_{\rm eff}$. The extremally chaotic OTOC has a minimum at $t=t_{\rm eff}$, irrespective of the scrambling time $t_*$. After $t=t_{\rm eff}$, the OTOC grows monotonically, approaching the factorized value (also the initial value) $F_d$.
} \end{figure} At this stage, we have compelling reasons to ask whether there are OTOCs that satisfy all of the following conditions: \begin{enumerate} \item{$F(t)$ saturates, in a consistent way, all the chaos bounds of \cite{Kundu:2021qcx} that allow saturation. } \item{$F(t)$ coincides with the maximally chaotic OTOC (\ref{intro:maximal}) at early times. } \item{$F(t)$, as a function of complex $t$, is analytic inside the entire half-strip $\{t\in \mathbb{C}|\ \mbox{Re}\ t\ge t_0\ \text{and}\ |\mbox{Im}\ t|< \frac{\beta}{4} \}$.}\footnote{Note that we are excluding the boundary of the half-strip $|\mbox{Im}\ t|=\frac{\beta}{4}$. There is no non-trivial solution of this set of conditions when the boundary is included. This fact has important implications, as we will discuss later.} \item{$F(t)$ is bounded by the factorized correlator $F_d$ on the real line $t\ge t_0$. } \end{enumerate} At first sight, the above conditions seem to be overconstraining. So, it is rather surprising that there is a unique solution to the above set of conditions \begin{equation}\label{eq:intro} F_{\rm ext}(t)= F_d - \frac{c_1}{{\cal N}} \mathcal{F}_{\rm ext}(t;t_{\rm eff})\ , \qquad \mathcal{F}_{\rm ext}(t;t_{\rm eff})=\frac{e^{\frac{2\pi}{\beta}t}}{1+ e^{\frac{4 \pi}{\beta} (t-t_{\rm eff})}} \end{equation} up to terms that decay exponentially for $t\gg t_0$. We will refer to this solution as the {\it extremally chaotic} (or {\it extremal}) OTOC.\footnote{Note that extremal chaos, as defined in this paper, should not be confused with chaos in extremal black holes, which is discussed in \cite{Poojary:2018esz,Banerjee:2019vff,Craps:2020ahu,Craps:2021bmz}.} In the above result, $t_{\rm eff}$ is an {\it effective} time scale at which correction terms in (\ref{intro:maximal}) become significantly large. The extremally chaotic OTOC reaches its minimum value at $t=t_{\rm eff}$ indicating maximal scrambling of the initial perturbation. So, in some sense, $t_{\rm eff}$ is the effective scrambling time. However, in general $t_{\rm eff}$ is independent of the traditional scrambling time $t_*$. The extremally chaotic OTOC grows monotonically for $t>t_{\rm eff}$, as shown in figure \ref{figure:intro}. So, the information of the initial perturbation is not completely lost. In particular, this information can be fully recovered in the limit $t\rightarrow \infty$ at which $F_{\rm ext}(t)\rightarrow F_d$. In contrast, thermal OTOCs of large $N$ holographic CFTs asymptote to zero for $t>t_*$, a fact that can be deduced for heavy operators from the large $c$ identity Virasoro block in 2d CFT \cite{Roberts:2014ifa} and from the elastic eikonal approximation in 3d gravity \cite{Shenker:2014cwa}. Interpretation of the extremal solution (\ref{eq:intro}), however, is more subtle. This is because the extremal OTOC has singularities at $t=t_{\rm eff}\pm i \frac{\beta}{4}$. From this non-analyticity, one could conclude that the extremal solution (\ref{eq:intro}) is unphysical, but this would be premature. The non-analyticity simply means that $F_{\rm ext}(t)$ should be interpreted as a distribution.\footnote{A closely related general statement is the Vladimirov’s theorem \cite{Vladimirov} (for a recent review see \cite{Kravchuk:2020scc}).} To give this statement a definite meaning, we next introduce a spectral representation of the OTOC (\ref{eq:otoc}). 
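Before introducing that representation, a minimal numerical sketch may help fix intuition for the shape of the extremal OTOC (\ref{eq:intro}). The snippet below is purely illustrative (the parameter values, in particular $\beta=2\pi$, $t_{\rm eff}=8$ and ${\cal N}=e^{9}$, are assumptions rather than inputs from any specific system); it confirms the early-time MSS growth rate, the global minimum at exactly $t=t_{\rm eff}$, and the monotonic recovery towards $F_d$ afterwards.
\begin{verbatim}
import numpy as np

# Illustrative parameters (assumptions).
beta, F_d, c1, t_eff = 2 * np.pi, 1.0, 2.0, 8.0
N = np.exp(9.0)                      # so that t_* = (beta/2 pi) ln N = 9

def F_ext(t):
    # Extremal OTOC of eq. (eq:intro).
    growth = np.exp(2 * np.pi * t / beta)
    return F_d - (c1 / N) * growth / (1 + np.exp(4 * np.pi * (t - t_eff) / beta))

t = np.linspace(2.0, 20.0, 1801)
f = F_ext(t)
rate = np.gradient(np.log(F_d - f), t)           # local growth exponent
print("early-time rate at t = 3:", rate[100], " (MSS rate 2 pi/beta = 1)")
print("minimum located at t =", t[np.argmin(f)], " (expected t_eff =", t_eff, ")")
print("monotonic recovery after t_eff:", bool(np.all(np.diff(f[t >= t_eff]) > 0)))
print("F_ext at late times:", F_ext(60.0), " -> F_d =", F_d)
\end{verbatim}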
In this paper, we will also argue that there exists a K\"{a}llen-Lehmann-like representation of the OTOC (\ref{eq:otoc}): \begin{equation}\label{intro:KL} F_d-F(t)=\int_{t_0}^\infty dt' \mathcal{F}_{\rm ext}(t;t')\rho(t')\ , \qquad 0\le \rho(t')\le \frac{8}{\beta}e^{-\frac{2\pi t'}{\beta}}F_d \end{equation} for $\mbox{Re}\ t\gg t_0$ and $|\mbox{Im}\ t|< \beta/4$, where $\mathcal{F}_{\rm ext}(t;t')$ is the extremal OTOC (\ref{eq:intro}). This is true even for OTOCs that are not maximally chaotic in any duration of time. Hence, $\mathcal{F}_{\rm ext}(t;t')$ has a natural interpretation as a universal distribution,\footnote{We cannot help but notice that the distribution $\mathcal{F}_{\rm ext}(t;t')$, as a function of $t'$, looks very similar to the Fermi-Dirac distribution, which is perhaps just a coincidence. Nevertheless, this observation enables us to utilize various mathematical tools available for the Fermi gas to analyze quantum chaos. } whereas $\rho(t')$ can be thought of as a theory-dependent {\it density function}. It is possible to write an inversion formula for the density function $\rho(t')=\frac{4}{\beta}e^{-\frac{2\pi t'}{\beta}}\(F_d-\mbox{Re}\ F\(t'+i\frac{\beta}{4}\)\)$ that implies the two-sided bound for the density function in (\ref{intro:KL}).\footnote{The inversion formula also implies that the density function $\rho(t')$ is a smooth function of class $C^\infty$.} The representation (\ref{intro:KL}) has a significant technical as well as conceptual advantage. Any OTOC written in the form (\ref{intro:KL}) is automatically consistent with all the chaos bounds. So, the representation (\ref{intro:KL}) provides a natural language to study the OTOC (\ref{eq:otoc}) in physical systems with many degrees of freedom. This framework is going to be particularly useful for studying analytic completions of maximal chaos. The extremal OTOC (\ref{eq:intro}) is well-behaved even after the scrambling time, however, it is not a true analytic completion of maximal chaos. The extremal OTOC is characterized by a density function $\rho(t')=\frac{c_1}{{\cal N}}\delta(t'-t_{\rm eff})$ that is at odds with the boundedness property (\ref{intro:KL}). This tension is a manifestation of the fact that the extremal OTOC has singularities at $t=t_{\rm eff}\pm i \frac{\beta}{4}$. Fortunately, these kinds of singularities are very familiar to us from quantum field theory (QFT). We adopt the standard $i\epsilon$-prescription of QFT and move these singularities outside the half-strip $\{t\in \mathbb{C}|\ \mbox{Re}\ t>0\ \text{and}\ |\mbox{Im}\ t|\le \frac{\beta}{4} \}$. This $i\epsilon$-regularization essentially replaces the delta function in $\rho(t')$ by a narrow distribution obeying the boundedness property (\ref{intro:KL}). The resulting regularized extremal OTOC can be computed exactly by using the representation (\ref{intro:KL}) even when $\epsilon$ is finite. The regularized OTOC differs only slightly from the extremal OTOC (\ref{eq:intro}) for real $t$, as shown in figure \ref{figure:intro2}. So in principle, a physical system, to a good approximation, can be extremally chaotic. \begin{figure}[h] \centering \includegraphics[scale=0.51]{figure_intro2.pdf} \caption{ \label{figure:intro2} \small Extremal chaos in physical systems. The extremally chaotic OTOC (dashed red) is non-analytic on the boundary: $\mbox{Im}\ t=\pm \beta/4$. This non-analyticity can be removed by a standard $i\epsilon$-shift. 
The associated regularized OTOC, which is shown in blue, differs only slightly from the unregularized OTOC. However, a late-time long-tailed correction to the density function $\rho(t>t_\Lambda)=\rho_\infty e^{-\frac{2 \pi }{\beta}t} $ does change the asymptotic behavior of the OTOC (shown in brown). Various relevant time scales are also shown in the figure: $t_d$= dissipation time $\sim \beta$, $t_0$= factorization time, $t_{\rm eff}=$ effective (or a gap) time scale, $t_*=$ scrambling time, and $t_\Lambda>t_*$ is a cut-off scale above which late-time corrections can become important. In the presence of a late-time long-tailed correction, a physical system can be extremally chaotic only approximately up to the cut-off scale $t_\Lambda$. } \end{figure} The $i\epsilon$-regularized extremal OTOC is a ``tree-level" analytic completion of maximal chaos (\ref{intro:maximal}), which asymptotes to the extremally chaotic OTOC (\ref{eq:intro}) away from the time scale $t_{\rm eff}$.\footnote{In theories of quantum gravity, the scrambling time is determined by the Newton constant $G_N$. The time scale $t_{\rm eff}$ can be thought of as the analog of the string scale.} So, it is expected that the density function, in general, will have higher-order $1/{\cal N}$ corrections. The extremal OTOC, regularized or unregularized, has the property that its general structure is insensitive to almost all of these corrections. Only a late-time {\it long-tailed correction}\footnote{Note that a density function cannot decay slower than $e^{-\frac{2 \pi }{\beta}t}$ because of the boundedness condition (\ref{intro:KL}).} to the density function $\delta\rho(t>t_\Lambda)=\rho_\infty e^{-\frac{2 \pi }{\beta}t}$, where $t_\Lambda>t_*$ is a cut-off scale, can affect the OTOC significantly by changing the asymptotic behavior as $t\gg t_\Lambda$. Interestingly, the resulting OTOCs typically have a familiar shape with a late-time plateau, as shown in figure \ref{figure:intro2}. We expect that higher-order $1/{\cal N}$ effects will generate such long-tailed corrections to the density function. This expectation is indeed realized in the Schwarzian theory, as we will show in appendix \ref{app:ST}. Finally, we come back to the question of general analytic completions of the maximally chaotic OTOC. In general, the OTOC (\ref{intro:maximal}) can have complicated analytic completions that differ significantly from the extremal OTOC. Nevertheless, we will argue that all analytic completions of a long period of maximal chaos, in the representation (\ref{intro:KL}), must be small deformations of extremal chaos. More precisely, the associated density function is a narrow distribution such that the integral $\int_{t_0}^\infty dt' \rho(t')$ is dominated by a small region $t_{\rm eff}-\Delta t_{\rm eff} \le t \le t_{\rm eff} +\Delta t_{\rm eff}$, where $\Delta t_{\rm eff} \ll t_{\rm eff}\lesssim t_*$. This observation enables a systematic analysis of OTOCs that analytically complete the maximally chaotic OTOC at early and late times. There is compelling evidence suggestive of the fact that chaos has a hydrodynamic origin in maximally chaotic systems \cite{Blake:2017ris, Blake:2021wqj}. It would be interesting to study the results of this paper in the hydrodynamic effective field theory (EFT) framework of \cite{Blake:2017ris,Blake:2021wqj}. In particular, the EFT approach might be a good guide in further understanding what extremal chaos physically means. 
There is also a related interesting story of pole-skipping \cite{Blake:2017ris,Grozdanov:2017ajz,Haehl:2018izb,Blake:2018leo,Grozdanov:2018kkt,Haehl:2019eae,Ahn:2019rnq,Ahn:2020bks,Ramirez:2020qer,Choi:2020tdj} as a way of understanding maximal chaos, which may provide additional insights. The rest of the paper is organized as follows. We begin with a review of the chaos bounds in section \ref{sec:chaos}. In section \ref{sec:extremal}, we derive the extremally chaotic OTOC (\ref{eq:intro}) and argue for its uniqueness. In section \ref{sec:KL}, we introduce the spectral representation (\ref{intro:KL}) and use it to regularize the extremally chaotic OTOC. Besides, we also study various corrections to extremal chaos. In section \ref{sec:maximal}, we argue that general analytic completions of maximal chaos are small deformations of extremal chaos and discuss its implications. Finally, we end with concluding remarks in section \ref{sec:conclusions}. \section{Bounds on Chaos}\label{sec:chaos} In this section, we review some general properties of chaos for thermal quantum systems with a large number of degrees of freedom. In such a system, consider any two simple Hermitian local operators $V$ and $W$ with vanishing thermal one-point functions. The OTOC (\ref{eq:otoc}) measures the effect of a small perturbation induced by the operator $V$ on another operator $W$ at a later time $t>0$. In recent years, this thermal OTOC has emerged as a good measure of quantum chaos since it has some rather nice features. First of all, the OTOC (\ref{eq:otoc}) is thermally regularized and hence it does not require additional regularization in quantum field theory to remove coincident point singularities. Secondly, this OTOC enjoys certain analyticity and boundedness properties providing us with some level of mathematical control. In this paper, we are interested in interacting unitary quantum systems with a large number of degrees of freedom in which the Hamiltonian is made out of finite products of simple operators. In such systems, for times larger than the dissipation time (also known as the local thermalization time) $t_d\sim \beta$ but before the onset of chaos, the OTOC (\ref{eq:otoc}) can be well approximated by the factorized correlator \begin{equation}\label{def:Fd} F_d= \operatorname{tr}\[y^2 V(0) y^2 V(0)\]\operatorname{tr}\[y^2 W(t)y^2W(t)\]>0\ , \end{equation} irrespective of the choice of operators. If the system is chaotic, the OTOC $F(t)$ starts decreasing rapidly for $t\gg t_d$. This leads to a new time scale relevant for quantum chaos, the {\it scrambling time} $t_*$, at which $F_d-F(t_*) \sim {\cal O}(1) F_d$. For this class of systems, it is expected that there is a parametric separation between these two time scales: $t_*\gg t_d$. More generally, it is expected that this class of systems satisfies all the assumptions made in the derivation of the MSS chaos bound in \cite{Maldacena:2015waa} and the subleading chaos bounds in \cite{Kundu:2021qcx}. In particular, the OTOC (\ref{eq:otoc}) has the following properties \cite{Maldacena:2015waa,Kundu:2021qcx}: \begin{itemize} \item[(i)] {\bf Analyticity}: $F(t)$ is an analytic function of $t$ in the half-strip (see figure \ref{figure:halfstrip}): \begin{equation}\label{halfstrip} \mbox{Re}\ t >0 \qquad \text{and} \qquad -\frac{\beta}{4}\le \mbox{Im}\ t \le \frac{\beta}{4}\ . \end{equation} \item[(ii)] {\bf Schwarz Reflection}: $F(t)$ obeys the Schwarz reflection condition \begin{equation}\label{eq:SR} \(F(t)\)^*=F\(t^*\)\ . 
\end{equation} \item[(iii)] {\bf Boundedness}: $F(t)$ is bounded in the half-strip (\ref{halfstrip}) \begin{equation}\label{eq:positive} |F(t)|\le F_d \qquad \text{for} \qquad t\ge t_0 \ . \end{equation} \end{itemize} The first two properties follow directly from the structure of the correlator (\ref{eq:otoc}) and hence they hold in general. On the other hand, property (iii) is more subtle. The time scale $t_0$ in equation (\ref{eq:positive}) is the {\it factorization time}, which is defined as follows. It is the minimum time above which the time ordered thermal correlator $\operatorname{tr} \[y^2 W(t) V(0) y^2 V(0)W(t)\]$ approximately factorizes \begin{equation}\label{eq:factorize} \operatorname{tr} \[y^2 W(t) V(0) y^2 V(0)W(t)\]\approx F_d \qquad \text{for}\qquad t\ge t_0 \end{equation} and at which \begin{equation}\label{eq:factorize2} |F(t)|\le F_d \qquad \text{for}\qquad \mbox{Re}\ t=t_0\ . \end{equation} For the class of systems we are considering, the factorization time $t_0$ is expected to be larger than the dissipation time but much smaller than the scrambling time: $t_d<t_0\ll t_*$. Of course, $t_0$ can depend on the choice of operators in a specific quantum system. Nevertheless, for any simple Hermitian local operators $V$ and $W$, two time scales $t_0$ and $t_d$, parametrically, are not very different \cite{Maldacena:2015waa}. It is very likely that properties (i)-(iii) are general enough to hold for even a larger class of chaotic systems. For all these quantum systems, Maldacena, Shenker, and Stanford proved a universal bound in \cite{Maldacena:2015waa} on the rate of growth of $F(t)$: \begin{equation}\label{eq:MSS} \frac{d}{dt}\(F_d-F(t)\) \le \frac{2\pi}{\beta}\(F_d-F(t)\) \qquad \text{for}\qquad t\gg t_0\ . \end{equation} For systems with a large separation between the factorization time and the scrambling time, a clear signature of chaos is that the OTOC exhibits Lyapunov behavior \begin{equation}\label{Lyapunov} F_d-F(t)=\frac{c_1}{{\cal N}} e^{\lambda_L t}+\cdots \ , \qquad \text{for}\qquad t_*\gg t\gg t_0, t_d \end{equation} where $c_1$ is a positive order one coefficient and ${\cal N}\gg 1$ is an effective measure of the number of degrees of freedom per site. Hence, the bound (\ref{eq:MSS}) translates into a bound on the Lyapunov exponent $\lambda_L$ \cite{Maldacena:2015waa} \begin{equation}\label{MSS:Lyapunov} \lambda_L \le \frac{2\pi}{\beta}\ . \end{equation} This surprising bound on chaos, however, is not a mathematical coincidence, but part of a more general set of constraints that are contained in properties (i)-(iii). We review these general bounds below. \subsection{Bounds} The MSS bound (\ref{eq:MSS}) does not fully utilize properties (i)-(iii). For example, there are other bounds on subleading growing terms that are present in the OTOC. All these constraints can be organized systematically by defining local {\it moments of the OTOC} \cite{Kundu:2021qcx} \begin{equation}\label{def:moments} \mu_J\(t\)=e^{\frac{4\pi J}{\beta}t} \int_{t-i \frac{\beta}{4}}^{t+i \frac{\beta}{4}} dt' e^{-\frac{2\pi }{\beta}\(t'-i \frac{\beta}{4}\)(2J+1)} \(F(t')-F_d\) \end{equation} for real $t\ge t_0$ and any $J$. Note that the Schwarz reflection condition (\ref{eq:SR}) implies that moments $\mu_J(t)$ are real for integer $J$. 
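To make the definition (\ref{def:moments}) concrete, the following Python sketch (ours; the parameter values and the simple trapezoidal quadrature are illustrative assumptions) evaluates the moment integral along the vertical segment for the leading Lyapunov term $F(t)-F_d=-\frac{c_1}{{\cal N}}e^{\frac{2\pi}{\beta}t}$ alone. One finds $\mu_0(t)\approx\frac{c_1\beta}{2{\cal N}}$, while $\mu_J(t)$ vanishes for integer $J\ge 1$; the tension of this result with the bounds below is precisely what forces correction terms to be present.
\begin{verbatim}
import numpy as np

beta, c1, N = 2 * np.pi, 2.0, 1.0e6      # illustrative values (assumptions)

def F_minus_Fd(tp):
    # Leading maximal-chaos term only: F(t') - F_d = -(c1/N) exp(2 pi t'/beta)
    return -(c1 / N) * np.exp(2 * np.pi * tp / beta)

def moment(J, t, n=4001):
    # mu_J(t) of (def:moments); the contour is t' = t + i*tau, tau in [-beta/4, beta/4].
    tau = np.linspace(-beta / 4, beta / 4, n)
    tp = t + 1j * tau
    w = np.exp(-(2 * np.pi / beta) * (tp - 1j * beta / 4) * (2 * J + 1))
    integrand = w * F_minus_Fd(tp)
    h = tau[1] - tau[0]
    integral = 1j * h * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
    return np.exp(4 * np.pi * J * t / beta) * integral

t = 5.0
# Tiny imaginary parts below are quadrature noise; mu_J is real for integer J.
print("mu_0 =", moment(0, t), "  expected c1*beta/(2N) =", c1 * beta / (2 * N))
for J in (1, 2, 3):
    print(f"mu_{J} =", moment(J, t), " (vanishes for integer J >= 1)")
\end{verbatim}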
It was shown in \cite{Kundu:2021qcx} that in interacting unitary quantum systems with a large number of degrees of freedom and a Hamiltonian which is made out of finite products of simple operators, these moments must obey positivity, monotonicity, and log-convexity conditions: \begin{align} &\mu_J(t)>0\ ,\qquad \mu_{J+1}<\mu_J(t)\ ,\label{bound1}\\ &\mu_{J+1}(t)^2\le \mu_J\(t\)\mu_{J+2}\(t\)\label{bound2}\ , \end{align} for $t\ge t_0$ and all integer $J\ge 0$.\footnote{Similar to the MSS bound, these bounds are actually valid, in general, for all OTOCs satisfying (i) analyticity, (ii) Schwarz reflection, and (iii) boundedness properties with a well-defined factorization time $t_0$. However, strictly speaking, the factorization condition (\ref{eq:factorize}) can break down for large $t\gg t_*$ because of Poincare recurrences. Hence, chaos bounds (\ref{bound1})-(\ref{bound1b}) are valid only up to some time scale which is smaller than the Poincare recurrence time of the system. This is true even for the MSS bound (\ref{eq:MSS}) \cite{Maldacena:2015waa}.} Moreover, the moments are also bounded \begin{equation}\label{bound1b} \mu_J(t)<\frac{2\beta F_d}{\pi(2J+1)}e^{-\frac{2\pi }{\beta}t} \end{equation} for real $t\ge t_0$ and integer $J\ge 0$. The MSS bound is contained in these more general constraints. In particular, the MSS bound is the leading constraint that one obtains from (\ref{bound1}). In addition, the above constraints also lead to bounds on subleading growths, as shown in \cite{Kundu:2021qcx}. We wish to emphasize that conditions (\ref{bound1})-(\ref{bound1b}) lead to two types of constraints on $F(t)$: (I) inequalities that can be saturated and (II) strict inequalities. This observation will play an important role when we derive the extremally chaotic OTOC (\ref{eq:intro}). \subsection{Maximal Chaos vs Extremal Chaos} A quantum system is said to be maximally chaotic when its OTOCs exhibit a period of Lyapunov growth (\ref{Lyapunov}) where the Lyapunov exponent saturates the MSS bound (\ref{MSS:Lyapunov}) \begin{equation}\label{maximal} F_d-F(t)=\frac{c_1}{{\cal N}} e^{\frac{2\pi}{\beta} t}+\cdots \ , \qquad \qquad \text{for}\qquad\qquad t\gg t_0, t_d\ . \end{equation} Theories of quantum gravity and their holographic duals are known to be maximally chaotic at the leading order in $1/{\cal N}$ \cite{Roberts:2014isa,Shenker:2013pqa,Shenker:2013yza,kitaev2014hidden,Shenker:2014cwa,Maldacena:2015waa}. Clearly, the OTOC must deviate significantly from (\ref{maximal}) for times comparable to the scrambling time $t_*=\frac{\beta}{2\pi} \ln {\cal N}$, since $|F(t)|$ is bounded from above (\ref{eq:positive}). In fact, there has to be corrections (however small) to the maximally chaotic OTOC (\ref{maximal}) even for $t\ll t_*$. In order to see that, we can compute the moments (\ref{def:moments}) associated with the OTOC (\ref{maximal}): \begin{align}\label{qg:moments} \mu_0(t)=\frac{c_1 \beta}{2{\cal N}}>0\ , \qquad \mu_{J\ge 1}(t)=0 \end{align} for integer $J$. This is at odds with (\ref{bound1}) implying that the term $e^{\frac{2\pi}{\beta} t}$ alone, in any time duration, is inconsistent with properties (i)-(iii) of the OTOC. So, there has to be correction terms. The maximally chaotic OTOC (\ref{maximal}) can be analytically completed in many different ways. We introduce the extremally chaotic OTOC as a very specific analytic completion of the maximally chaotic OTOC (\ref{maximal}), saturating all the chaos bounds obtained from (\ref{bound1}) and (\ref{bound2}) that can be saturated. 
As mentioned in the introduction, the extremal OTOC $F_{\rm ext}(t)$ is defined as the OTOC with the following properties: (1) it saturates all the chaos bounds obtained from (\ref{bound1}) and (\ref{bound2}) that allow saturation, (2) it coincides with (\ref{maximal}) in some duration within $t_0< t< t_*$, (3) it is an analytic function inside the entire half-strip $\{t\in \mathbb{C}|\ \mbox{Re}\ t\ge t_0\ \text{and}\ |\mbox{Im}\ t|< \frac{\beta}{4} \}$, obeying the Schwarz reflection property, and (4) it is bounded (\ref{eq:positive}) for real $t$ even after $t_*$. The constraints (\ref{bound1}) and (\ref{bound2}) lead to an infinite set of chaos bounds that allow saturation. To begin with, it is not obvious whether these bounds can all be saturated by any non-trivial OTOC in a consistent way. So, it is indeed satisfying to find that the OTOC (\ref{eq:intro}) saturates all these chaos bounds, both leading and subleading. This OTOC automatically satisfies conditions (3) and (4), even for times large compared to the scrambling time. Furthermore, the constraints (\ref{bound1}) and (\ref{bound2}) guarantee that $F_{\rm ext}(t)$ is unique up to terms that decay exponentially for $t\gg t_0$. \section{Extremal OTOC}\label{sec:extremal} In this section, we derive the extremal OTOC (\ref{eq:intro}) and argue for its uniqueness. First, we give a simple derivation by considering OTOCs that can be written as a sum of Lyapunov growths: $F(t)\sim \sum_i e^{\lambda_i t}$. Later, we will provide a more rigorous derivation. \subsection{A Lyapunov Expansion of the OTOC} As we have explained in the last section, the maximally chaotic OTOC (\ref{maximal}) must contain additional correction terms which we parametrize as follows: \begin{equation}\label{eq:para} F_d - F(t)= \frac{1}{{\cal N}}\(c_1e^{\frac{2\pi}{\beta} t}+ c_2 e^{\lambda_2 (t-t_f)}+\cdots\) \qquad t_0\ll \mbox{Re}\ t\le t_f\ , \end{equation} where $t_f$ is a new time scale the physical meaning of which will be clear later. The constraints (\ref{bound1}) now impose \cite{Kundu:2021qcx} \begin{equation}\label{MSS2} \lambda_2 \le \frac{6\pi}{\beta}\ . \end{equation} At first sight, it might be surprising that the above bound can be saturated since it follows from a set of strict inequalities (\ref{bound1}). So, the saturation of the bound $\lambda_2 = \frac{6\pi}{\beta}$ requires some discussion. In this case, one can check that $\mu_{J\ge 2}(t)=0$ for integer $J$. This is inconsistent with constraints (\ref{bound1}), however, that does not mean $\lambda_2 = \frac{6\pi}{\beta}$ is not allowed. It only means that the saturation of (\ref{MSS2}) necessarily requires additional correction terms in (\ref{eq:para}) that generate positive contributions for $\mu_{J\ge 2}(t)$. So, we impose that the OTOC (\ref{eq:para}) saturates the bound (\ref{MSS2}) as well, fixing $\lambda_2 = \frac{6\pi}{\beta}$. Similarly, we can repeat the preceding argument again and again by adding exponential correction terms. At each step, the upper bound on the Lyapunov exponent increases by $\frac{4\pi}{\beta}$ \cite{Kundu:2021qcx}. Hence, we write the extremal OTOC as an infinite series \begin{equation}\label{eq:specialcase} F_d - F(t)= \frac{1}{{\cal N}}e^{\frac{2\pi}{\beta} t}\sum_{n=0}^{\infty}c_{n+1} e^{\frac{4n\pi}{\beta} (t-t_f)}\ , \qquad t_0\ll \mbox{Re}\ t\le t_f\ . \end{equation} From the MSS bound, we expect that $|c_2|,|c_3|,\cdots<c_1$ because the leading growing term should not violate (\ref{MSS:Lyapunov}). 
However, to begin with, we will not assume any such restrictions on the $c$-coefficients. Rather, they will automatically follow from constraints (\ref{bound1}) and (\ref{bound2}). This should not be surprising since the MSS bound is contained in (\ref{bound1}). Let us emphasize that even though the OTOC (\ref{eq:specialcase}) saturates an infinite number of bounds, there are additional (infinitely many) constraints from conditions (\ref{bound1}) and (\ref{bound2}). Importantly, some of the remaining constraints also allow saturation. It was shown in \cite{Kundu:2021qcx} that the OTOC (\ref{eq:specialcase}) is consistent with bounds (\ref{bound1}) and (\ref{bound2}) for $ \mbox{Re}\ t\le t_f$ if and only if \begin{align} &(-1)^{n-1} c_{n}>0\ , \qquad |c_{n+1}|<|c_n|\ ,\label{bound3}\\ &c_{n+1}^2\le c_n c_{n+2}\ ,\label{bound4} \end{align} for all integer $n\ge 1$.\footnote{These constraints are closely related to analogous positivity, monotonicity, and log-convexity conditions of certain CFT Regge correlators \cite{Kundu:2020gkz,Kundu:2021qpi}. These CFT conditions follow directly from basic properties of Lorentzian correlators and can be regarded as causality constraints.} We obtain the extremally chaotic OTOC by saturating bounds (\ref{bound4}) without violating any of the strict inequalities (\ref{bound3}), yielding \begin{equation} F_{\rm ext}(t)= F_d - \frac{c_1 e^{\frac{2\pi}{\beta} t}}{{\cal N}}\sum_{n=0}^\infty (-1)^n \varepsilon^n e^{\frac{4n \pi}{\beta} (t-t_f)}\ , \end{equation} where $\varepsilon\equiv |c_2|/c_1<1$. The above sum converges only for $\varepsilon e^{\frac{4 \pi}{\beta} (t-t_f)}<1$; however, it can be analytically continued beyond the regime of convergence. In particular, after defining a new time scale $t_{\rm eff}= t_f-\frac{\beta}{4\pi} \ln \varepsilon>t_f$, the above expansion can be resummed, obtaining \begin{equation}\label{fmax1} F_{\rm ext}(t)= F_d - c_1 \frac{e^{\frac{2\pi}{\beta}( t-t_*)}}{1+ e^{\frac{4 \pi}{\beta} (t-t_{\rm eff})}}\equiv F_d -\frac{c_1}{{\cal N}} \mathcal{F}_{\rm ext}(t;t_{\rm eff})\ . \end{equation} The OTOC $F_{\rm ext}(t)$ is well-defined even when $t>t_{\rm eff},t_*$. So, the function $\mathcal{F}_{\rm ext}(t;t_{\rm eff})$ is completely fixed only up to a theory-dependent time scale $t_{\rm eff}$. Physically, $t_{\rm eff}$ represents the time of maximum scrambling of the initial perturbation since the extremal OTOC $F_{\rm ext}(t)$ has a global minimum at $t=t_{\rm eff}$. However, the time scale $t_{\rm eff}$, in general, is independent of the scrambling time $t_*$. Furthermore, $t_{\rm eff}$ is not required to be parametrically smaller than the scrambling time. Nevertheless, $t_{\rm eff}$ is not completely free of constraints. The boundedness condition (\ref{eq:positive}) on the real line does impose an upper bound: \begin{equation} t_{\rm eff}\le t_* +\frac{\beta}{2\pi}\ln \(\frac{4F_d}{c_1}\)\ . \end{equation} Clearly, the OTOC $F_{\rm ext}(t)$ is an analytic function obeying the Schwarz reflection condition (\ref{eq:SR}) inside the half-strip $\{t\in \mathbb{C}|\ \mbox{Re}\ t\ge t_0\ \text{and}\ |\mbox{Im}\ t|< \frac{\beta}{4} \}$. On the other hand, from our derivation, it is unclear whether $F_{\rm ext}(t)$ is unique. After all, this derivation relies heavily on our initial assumption that the extremally chaotic OTOC can be written as $F(t)\sim \sum_i e^{\lambda_i t}$. We will close this loophole next by providing a more rigorous argument.
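Before turning to that argument, a quick numerical sanity check of the resummation leading to (\ref{fmax1}) may be useful. The short Python sketch below is purely illustrative (the values of $\beta$, $\varepsilon$ and $t_f$ are assumptions): within the radius of convergence the partial sums of the series above approach the closed-form kernel $1/\(1+ e^{\frac{4 \pi}{\beta} (t-t_{\rm eff})}\)$ with $t_{\rm eff}= t_f-\frac{\beta}{4\pi}\ln\varepsilon$, which then provides the analytic continuation beyond it. The overall prefactor $\frac{c_1}{{\cal N}}e^{\frac{2\pi}{\beta}t}$ is common to both sides and has been stripped off; only the series is compared.
\begin{verbatim}
import numpy as np

# Illustrative choices (assumptions): beta = 2*pi, epsilon = 0.3, t_f = 8.
beta, eps, t_f = 2 * np.pi, 0.3, 8.0
t_eff = t_f - beta / (4 * np.pi) * np.log(eps)    # t_eff > t_f since eps < 1

def partial_sum(t, n_terms=200):
    # Sum over n of (-1)^n eps^n exp(4 n pi (t - t_f)/beta), truncated at n_terms.
    x = eps * np.exp(4 * np.pi * (t - t_f) / beta)
    n = np.arange(n_terms)
    return np.sum((-x) ** n)

def closed_form(t):
    return 1.0 / (1.0 + np.exp(4 * np.pi * (t - t_eff) / beta))

for t in (7.0, t_f, t_eff - 0.2):                 # all inside the radius of convergence
    print(f"t = {t:6.3f}:  partial sum = {partial_sum(t):.10f}"
          f",  closed form = {closed_form(t):.10f}")
\end{verbatim}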
\subsection{Uniqueness of the Extremal OTOC} We now show that the extremal OTOC (\ref{eq:intro}) is unique up to terms that decay for $t\gg t_0$ by using tools developed in \cite{Kundu:2021qcx}. We start with the dispersion relation of \cite{Kundu:2021qcx} \begin{align}\label{eq:late} F_d-F(t)= \frac{2}{\beta} e^{-\frac{2\pi (t-t_0)}{\beta}}& \int^{\frac{\beta}{4}}_{-\frac{\beta}{4}} d\tau \frac{e^{\frac{2\pi i \tau}{\beta}}\(F_d-F(t_0+i \tau)\)}{1- e^{\frac{4\pi i \tau}{\beta}}e^{-\frac{4\pi (t-t_0)}{\beta}}}- \frac{2}{\beta} e^{\frac{2\pi}{\beta}t}\int_{t_0}^\infty dt' \frac{\mu_0'(t')}{1+e^{\frac{4\pi }{\beta}(t-t')}}\ , \end{align} written in terms of the primary moment $\mu_0(t)$. Any OTOC that satisfies properties (i)-(iii) of section \ref{sec:chaos} can be written in this form for $\mbox{Re}\ t> t_0$ and $|\mbox{Im}\ t|< \beta/4$. One advantage of writing the OTOC in this form is that the first term decays as $e^{-\frac{2\pi (t-t_0)}{\beta}}$ for $\mbox{Re}\ t\gg t_0$. So, all we need to do is to determine $\mu_0(t)$ associated with the extremal chaotic OTOC. At this point, it is useful to saturate the log-convexity bound (\ref{bound2}) first. The most general $\mu_J(t)$ that saturates the log-convexity bound (\ref{bound2}) is \begin{equation}\label{eq:saturate} \mu_J(t)= \mu_0(t) e^{-\frac{4\pi J}{\beta} g(t)} \end{equation} for integer $J\ge 0$, where $g(t)$ is a real function independent of $J$. This automatically satisfies bounds (\ref{bound1}), provided both $\mu_0(t)$ and $g(t)$ are positive functions. Set of functions (\ref{eq:saturate}), for arbitrary $g(t)$, does not represent a well-defined set of moments. For example, the moments (\ref{def:moments}) must satisfy the consistency condition \cite{Kundu:2021qcx} \begin{equation}\label{eq:consistency} \mu_J'(t)-\frac{4\pi J}{\beta} \mu_J(t)=\mu_0'(t) \end{equation} for all positive integer $J$. This consistency condition is actually highly constraining. This becomes obvious when we apply it to (\ref{eq:saturate}): \begin{equation}\label{eq:consistency2} \mu_0'(t)e^{-\frac{4\pi J}{\beta} g(t)}- \frac{4\pi J}{\beta}e^{-\frac{4\pi J}{\beta}g(t) }\(g'(t)+1\)\mu_0(t)=\mu_0'(t)\ . \end{equation} The left-hand-side must be independent of $J$ implying that $g'(t)=-1$ when $\mu_0(t)$ is non-zero. So, we find \begin{equation} g(t)=t_{\rm eff}-t\ , \end{equation} where, $t_{\rm eff}$ is a constant. This constant should be regarded as the effective scrambling time (or a gap time scale), as we discuss below. Note that the consistency condition (\ref{eq:consistency2}) is still not satisfied, unless \begin{equation}\label{sol:mu0} \mu_0'(t)=0 \qquad \text{when}\qquad t\neq t_{\rm eff} \ . \end{equation} However, $\mu_0(t)$ cannot be the same everywhere. From the definition (\ref{def:moments}), we know that $ \mu_J(t)$ goes to zero as $t\rightarrow \infty$, for all integer $J\ge 0$. On the other hand, even a brief period of maximal chaos requires $\mu_0(t)$ to be non-zero in that interval. Hence, for extremal chaos, as defined in this paper, $\mu_0(t)$ must be a piecewise constant function of time. Alternatively, the relation (\ref{eq:consistency}) implies that \begin{equation} \mu_J( t)=- e^{\frac{4\pi J}{\beta}t}\int_{t}^\infty dt' e^{-\frac{4\pi J}{\beta}t'} \mu_0'(t')\ , \end{equation} where $\mu_0'(t)$ satisfies (\ref{sol:mu0}). The above integral can be non-zero for $J\ge 1$ only if $\mu_0'(t)\propto \delta(t-t_{\rm eff})$. 
Therefore, there is a unique solution that is consistent with (\ref{eq:saturate}) and (\ref{eq:consistency}): \begin{equation}\label{eq:extchaos} \mu_J(t)=\mu e^{\frac{4\pi J}{\beta}(t-t_{\rm eff})} \Theta\(t_{\rm eff}-t\)\ , \end{equation} where $\mu$ is a positive constant. Moreover, a period of maximal chaos requires $t_{\rm eff}\gg t_0$, implying $t_{\rm eff}$ can also be interpreted as a gap time scale. We now utilize the representation (\ref{eq:late}) to obtain the unique extremally chaotic OTOC for $t\gg t_0$: \begin{equation}\label{otoc:max} F_{\rm ext}(t)= F_d - \frac{2\mu}{\beta} \frac{e^{\frac{2\pi}{\beta}t}}{1+ e^{\frac{4 \pi}{\beta} (t-t_{\rm eff})}}+{\cal O}\(e^{-\frac{2\pi (t-t_0)}{\beta}}\) \end{equation} where we identify $\mu= \frac{\beta c_1}{2{\cal N}}$. This agrees with our previous derivation, upto terms that decay at late times. The extremal OTOC $F_{\rm ext}(t)$ decays fast for $t_0\ll t\ll t_{\rm eff}$, saturating the MSS bound. It reaches its minimum value at $t=t_{\rm eff}$. So, in some sense $t_{\rm eff}$ is an effective scrambling time, even though it is independent of the traditional scrambling time $t_*=\frac{\beta}{2\pi}\ln {\cal N}$. Importantly, $F_{\rm ext}(t)$ is well-defined even for $t>t_{\rm eff},t_*$, provided $|\mbox{Im}\ t|< \beta/4$. Above $t>t_{\rm eff}$, the OTOC $F_{\rm ext}(t)$ increases monotonically, indicating that information of the initial perturbation is not completely lost. The information can be fully recovered in the limit $t\rightarrow \infty$ at which $F_{\rm ext}(t)\rightarrow F_d$. Let us now compare the extremal OTOC (\ref{otoc:max}) with thermal OTOCs of large $N$ holographic CFTs. For example, in 2d CFTs with a large central charge $c$ (and a sparse spectrum), $F(t)$ can be computed beyond the leading $\frac{1}{c}$ term for certain heavy operators \cite{Roberts:2014ifa,Shenker:2014cwa}. These OTOCs asymptote to zero for $t>t_*$, indicating a loss of information. Similarly, $F(t)$ in the Schwarzian theory also asymptotes to zero for $t> t_*$ \cite{Maldacena:2016upp,Lam:2018pvp}. There is an important caveat that we must address. The extremal solution (\ref{eq:extchaos}) appears to be inconsistent with the bound (\ref{bound1}) for $t> t_{\rm eff}$. This inconsistency stems from the fact that $F_{\rm ext}(t)$ has singularities on the boundary of the half-strip (\ref{halfstrip}). We will argue that this non-analyticity simply means $F_{\rm ext}(t)$ should be interpreted as a distribution. \subsection{Non-Analyticity} \begin{figure} \centering \includegraphics[scale=0.55]{analytic.pdf} \caption{ \label{figure:halfstrip} \small $F(t)$, as a function of complex $t$, is analytic in the shaded blue region. However, the extremal OTOC (\ref{eq:intro}) has singularities at $t=t_{\rm eff}\pm i\beta/4$. These singularities can be removed by a simple $i \epsilon$-shift. } \end{figure} We notice that the OTOC (\ref{otoc:max}) has simple poles at $t=t_{\rm eff}\pm i\beta/4$. Strictly speaking, the expression (\ref{otoc:max}) is not valid on the boundary $|\mbox{Im}\ t|= \beta/4$, since it has been derived from the dispersion relation (\ref{eq:late}). Nevertheless, the value of the OTOC on the boundary of the half-strip (\ref{halfstrip}) can also be determined directly from the extremal solution (\ref{eq:extchaos}). 
In particular, from \cite{Kundu:2021qcx} (see appendix B) we find that for real $t$: \begin{align} & \mbox{Re}\ F_{\rm ext}\(t+i\frac{\beta}{4}\)=F_d-\frac{\beta c_1}{4{\cal N}}e^{\frac{2\pi}{\beta}t_{\rm eff}}\delta(t-t_{\rm eff})\ ,\label{boundary1}\\ &\mbox{Im}\ F_{\rm ext}\(t+i\frac{\beta}{4}\)= -\frac{c_1}{{\cal N}} \frac{e^{\frac{2\pi}{\beta}t}}{1-e^{\frac{4\pi }{\beta}(t-t_{\rm eff})}}+{\cal O}\(e^{-\frac{2\pi (t-t_0)}{\beta}}\)\ ,\label{boundary2} \end{align} implying that the extremal OTOC is indeed singular at $t=t_{\rm eff}\pm i\beta/4$. We are familiar with these kinds of singularities in QFT. Mathematically, these singularities can be easily removed by a standard $i \epsilon$-shift\footnote{Consequently, the extremal OTOC (\ref{otoc:max}) is not consistent with the boundedness condition (\ref{eq:positive}) near the boundary of the half-strip (\ref{halfstrip}). This can also be resolved by the $i\epsilon$-shift, leading to a lower bound on $\epsilon$.} \begin{equation}\label{reg:OTOC} F_{\rm ext}^\epsilon\(t\pm i\frac{\beta}{4}\)= F_d \mp i \frac{c_1}{{\cal N}} \frac{e^{\frac{2\pi}{\beta}t}}{1- e^{\frac{4 \pi}{\beta} (t-t_{\rm eff})\mp i \epsilon}}+{\cal O}\(e^{-\frac{2\pi (t-t_0)}{\beta}}\)\ , \end{equation} where $t\gg t_0$ is real and $\epsilon>0$ is small. We recover boundary values of the extremal OTOC (\ref{boundary1}) and (\ref{boundary2}) from $ F_{\rm ext}^\epsilon$ by taking the limit $\epsilon\rightarrow 0$. Equivalently, the role of the $i\epsilon$-shift can be stated in the following way. A well-behaved OTOC obeying properties (i)-(iii) of section \ref{sec:chaos} cannot have $\mu_0'(t)\propto \delta(t-t_{\rm eff})$ since $|F(t)|$ is bounded in the half-strip $\S=\{t\in \mathbb{C}|\ \mbox{Re}\ t\ge t_0\ \text{and}\ |\mbox{Im}\ t|\le \frac{\beta}{4} \}$. The $i\epsilon$-shift in (\ref{reg:OTOC}) replaces this delta function by a smooth but a narrow function. In fact, in physical systems there is a lower bound on $\epsilon$ which may not always be infinitesimally small. We will discuss this in section \ref{sec:physical}. Of course, the $i\epsilon$-shift (\ref{reg:OTOC}) is a very specific small deformation of the extremal OTOC $F_{\rm ext}(t)$, making it regular at $t=t_{\rm eff}\pm i\beta/4$. It is actually possible to study deformations, large or small, of the extremal OTOC in a very general way. This can be achieved by treating the extremal OTOC as a distribution, as we discuss next. \section{General Deformations of Extremal Chaos}\label{sec:KL} \subsection{Spectral Representation of OTOC} We begin this section with an observation that the extremally chaotic OTOC gives a K\"{a}llen-Lehmann-type representation of any general OTOC (not necessarily maximally chaotic in any duration). In particular, any OTOC obeying properties (i)-(iii) of section \ref{sec:chaos} can be written as \begin{equation}\label{eq:KL} F_d-F(t)=\int_{t_0}^\infty dt' \mathcal{F}_{\rm ext}(t;t')\rho(t')+{\cal O}\(e^{-\frac{2\pi (t-t_0)}{\beta}}\) \end{equation} for $\mbox{Re}\ t\gg t_0$ and $|\mbox{Im}\ t|< \beta/4$, where $\mathcal{F}_{\rm ext}(t;t')$ is the extremal function as defined in (\ref{eq:intro}). The function $\rho(t)$ parallels the spectral density of the original K\"{a}llen-Lehmann representation, and hence it can be thought of as a density function of chaos. The spectral representation (\ref{eq:KL}) follows directly from the dispersion relation (\ref{eq:late}) once we notice that the kernel in the second term of (\ref{eq:late}) is precisely $\mathcal{F}_{\rm ext}(t;t')$. 
Furthermore, the dispersion relation (\ref{eq:late}) provides a simple inversion formula for the density function \begin{equation}\label{eq:inversion} \rho(t)=-\frac{2}{\beta}\mu_0'(t)=\frac{4}{\beta}e^{-\frac{2\pi t}{\beta}}\(F_d-\mbox{Re}\ F\(t+i\frac{\beta}{4}\)\)\ . \end{equation} This relation implies that the density function must have the following properties for $t\ge t_0$: \begin{itemize} \item{It is real and non-negative: $\rho(t)\ge 0$.} \item{It is smooth (infinitely differentiable).} \item{It is bounded: \begin{equation}\label{rho:bound} \rho(t)\le \frac{8}{\beta}e^{-\frac{2\pi t}{\beta}}F_d \end{equation} and hence $\rho(t\rightarrow \infty)\rightarrow 0$. } \item{A period of maximal chaos over which the MSS bound is saturated necessarily requires $\rho(t)\approx 0$ over the same time duration.\footnote{Note that $\rho(t)$ can only have isolated zeroes since $F(t)$ is analytic in the domain (\ref{halfstrip}). On the other hand, if the OTOC saturates the MSS bound exactly, then $\rho(t)=0$. So, this again implies that a term $e^{\frac{2\pi t}{\beta}}$ in the OTOC always comes with correction terms (see figure \ref{figure:density}).} } \end{itemize} So, any OTOC can be thought of as a deformation of the extremal OTOC (\ref{eq:intro}) in a very precise way. Moreover, the representation (\ref{eq:KL}) has the important advantage that all the chaos bounds, leading or subleading, are automatically satisfied, provided $\rho(t)$ obeys the above conditions. Interestingly, the representation (\ref{eq:KL}) can be mapped to a statistical physics problem of a {\it Fermi gas}. In particular, $1-\exp(-\frac{2\pi t}{\beta})\mathcal{F}_{\rm ext}(t;t')\equiv f_{FD}(t;t')$ is exactly the {\it Fermi-Dirac distribution} once we substitute $t'\rightarrow E$, $t\rightarrow \mu$, and $\beta \rightarrow 2\pi k_B T$. Hence, the representation (\ref{eq:KL}) can be viewed as an integrals of the Fermi-Dirac distribution computing the number density in a Fermi gas. Conceptually, this is consistent with our interpretation of $\rho(t)$ as a density. More practically, this identification enables us to evaluate (\ref{eq:KL}), exactly or approximately, by utilizing various mathematical tools available for integrals of the Fermi-Dirac distribution. \begin{figure} \centering \includegraphics[scale=0.55]{figure_density.pdf} \caption{ \label{figure:density} \small Density functions (\ref{eq:inversion}) associated with various chaotic systems are shown here schematically. The dashed black line represents the bound (\ref{rho:bound}). Figure (a): The blue line is the density function for a period of Lyapunov growth with $0<\lambda_L< \frac{2\pi}{\beta}$. This density function becomes inconsistent with the bound (\ref{rho:bound}) for large $t$, where correction terms must become significantly large so that the density function decays faster. Figure (b): The blue line represents a typical density function associated with a period of maximal chaos. Over the same duration of time the density function $\rho(t)\approx 0$, however, it starts to grow around some time scale $t=t_{\rm eff}$. The red line represents the delta-function density associated with the extremally chaotic OTOC. } \end{figure} The density function $\rho(t)$ is obtained from a given OTOC-behavior. Typically, OTOCs exhibit various classes of functional behaviors: exponential (chaotic), power-law (not quite chaotic), constant (integrable). All these statements translate straightforwardly to some specific features of $\rho(t)$. 
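As a concrete numerical illustration of (\ref{eq:KL}) and of the inversion formula (\ref{eq:inversion}), the following Python sketch (ours; all parameter values are illustrative assumptions, and the lower limit $t_0$ is simply pushed far to the left, which only affects terms that decay as $e^{-\frac{2\pi (t-t_0)}{\beta}}$) takes a sub-maximal Lyapunov OTOC, reads off its density from (\ref{eq:inversion}), and verifies by quadrature that (\ref{eq:KL}) reproduces the original exponential growth. The same example is worked out in closed form in the next paragraph.
\begin{verbatim}
import numpy as np

# Illustrative parameters (assumptions): beta = 2*pi, lambda_1 = 0.6 * (2*pi/beta).
beta, F_d, c1, N = 2 * np.pi, 1.0, 2.0, 1.0e6
lam1 = 0.6 * (2 * np.pi / beta)

def rho(tp):
    # Density from (eq:inversion) applied to F(t) = F_d - (c1/N) exp(lam1 t).
    prefactor = (4 * c1 / (N * beta)) * np.cos(beta * lam1 / 4)
    return prefactor * np.exp((lam1 - 2 * np.pi / beta) * tp)

def kernel(t, tp):
    # Extremal kernel F_ext(t; t') of eq. (eq:intro).
    return np.exp(2 * np.pi * t / beta) / (1 + np.exp(4 * np.pi * (t - tp) / beta))

tp = np.linspace(-60.0, 120.0, 400001)            # effectively t_0 -> -infinity
dtp = tp[1] - tp[0]
for t in (5.0, 10.0, 15.0):
    quadrature = np.sum(kernel(t, tp) * rho(tp)) * dtp
    direct = (c1 / N) * np.exp(lam1 * t)
    print(f"t = {t}:  F_d - F from (eq:KL) = {quadrature:.6e},  direct = {direct:.6e}")
\end{verbatim}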
For example, consider a period of exponential growth \begin{equation}\label{Lyapunov} F(t)=F_d-\frac{c_1}{{\cal N}} e^{\lambda_1 t}+\cdots \ , \end{equation} for $t\gg t_0$, where $c_1>0$ and $\lambda_1< \frac{2 \pi }{\beta }$. As explained above, this OTOC can be written as (\ref{eq:KL}) where the density function is given by \begin{equation} \rho(t)=\frac{4 c_1}{{\cal N} \beta} e^{t \left(-\frac{2 \pi }{\beta }+\lambda_1 \right)}\cos \left(\frac{\beta \lambda_1 }{4}\right) \end{equation} which is real and positive. The density function decays with time, however, it does not decay fast enough when $\lambda_1>0$, as shown in figure \ref{figure:density}. In particular, this density function becomes inconsistent with the bound (\ref{rho:bound}) near the scrambling time, implying a breakdown of the approximation (\ref{Lyapunov}). \subsection{Extremal Chaos in Physical Systems}\label{sec:physical} In this language, extremal chaos is characterized by a ``single-particle state": $\rho(t)=\frac{c_1}{{\cal N}}\delta(t-t_{\rm eff})$ (see figure \ref{figure:density}). Clearly, this is in tension with the boundedness property (\ref{rho:bound}) of the density $\rho(t)$. This tension is a manifestation of the non-analyticity of the extremally chaotic OTOC on the boundary: $\mbox{Im}\ t=\pm \beta/4$. A natural resolution of this tension is to remove these singularities by performing the $i\epsilon$-regularization of (\ref{reg:OTOC}). By using (\ref{eq:inversion}) we find that this $i\epsilon$-regularization essentially replaces the delta function by a narrow distribution (see figure \ref{figure:reg}): \begin{equation}\label{rho:extremal} \rho(t)=\frac{4\tilde{c}_1}{{\cal N} \beta} \frac{e^{\frac{4 \pi}{\beta} (t-t_{\rm eff})}\sin \epsilon}{\(1- e^{\frac{4 \pi}{\beta} (t-t_{\rm eff})+i \epsilon}\)\(1- e^{\frac{4 \pi}{\beta} (t-t_{\rm eff})-i \epsilon}\)} \end{equation} up to terms that decay for $t\gg t_0$. Note that we are using a normalization $\tilde{c}_1\propto c_1$ such that the coefficient of the maximally chaotic term $e^{\frac{2\pi t}{\beta}}$ remains unchanged $c_1/{\cal N}$, even when $\epsilon$ is finite. So, the proportionality constant $\tilde{c}_1/ c_1$, which we will determine later, depends on $\epsilon$ and approaches 1 in the limit $\epsilon \rightarrow 0$.\footnote{Alternatively, one may normalize (\ref{rho:extremal}) by keeping the integral $\int_{t_0}^\infty dt \rho(t)$ fixed (and independent of $\epsilon$). These two normalizations, however, are equivalent since the coefficient of the term $e^{\frac{2\pi t}{\beta}}$ is exactly this integral. This is actually true in general, as we will discuss in section \ref{sec:maximal}.} The bound (\ref{rho:bound}) now imposes a lower bound on $\epsilon$: \begin{equation}\label{epsilon:bound} \tan \(\frac{\epsilon}{2}\) \ge \frac{\tilde{c}_1}{4F_d}e^{\frac{2\pi }{\beta}\(t_{\rm eff}-t_*\)}\ . \end{equation} In general, $t_{\rm eff}$ is independent of the scrambling time $t_*$. However, in a strongly chaotic system $F_d-F(t\sim t_{\rm eff})\sim {\cal O}(1) F_d$, implying these two time scales cannot be parametrically separated. So, physical systems can be extremally chaotic only approximately where the density function $\rho(t)$ is a narrow peak at $t=t_{\rm eff}\gg t_0$ with width $\Delta t_{\rm eff}\sim \epsilon \frac{\beta}{2\pi}$. The associated OTOC is now an analytic function everywhere in the half-strip (\ref{halfstrip}) that saturates the MSS bound in the regime $t_0\ll t\ll t_{\rm eff},t_*$. 
This represents a small deformation, even when $\epsilon$ is finite, since the moments $\mu_J(t)$ asymptote to the extremal solution (\ref{eq:extchaos}) away from the time scale $t_{\rm eff}$. As a consequence, the resulting OTOC can be well-approximated by the extremal OTOC (\ref{eq:intro}) in the regime $|t-t_{\rm eff}|\gg \frac{\beta}{2\pi}$. We can determine the exact OTOC even when $\epsilon$ is finite (and real). This is a straightforward exercise because of the spectral representation (\ref{eq:KL}). In particular, for $t,t_*, t_{\rm eff}\gg t_0$ we obtain \begin{equation}\label{otoc:reg} F_{\rm ext}^{\rm reg}(t)=F_d-\frac{\tilde{c}_1}{{\cal N}}e^{\frac{2\pi t}{\beta}} \mbox{Re}\(\frac{1-\frac{\epsilon}{\pi}-\frac{4 i }{\beta }(t-t_{\rm eff})}{1+ e^{\frac{4 \pi}{\beta} (t-t_{\rm eff})- i \epsilon}}\) \end{equation} for real $t$. When $\mbox{Im}\ t \neq 0$, the OTOC can be obtained from the spectral representation (\ref{eq:KL}) exactly the same way. This OTOC is now an analytic function everywhere in the half-strip (\ref{halfstrip}) obeying the Schwarz reflection (\ref{eq:SR}) and the boundedness (\ref{eq:positive}) conditions. Note that the regularized OTOC (\ref{otoc:reg}) has the same qualitative features as the extremal OTOC (\ref{otoc:max}), differing only slightly even when $\epsilon$ is order 1 (see figure \ref{figure:reg}). \begin{figure} \centering \includegraphics[scale=0.55]{fig_epsilon.pdf} \caption{ \label{figure:reg} \small The regularized density functions (a) and the associated OTOCs (b) are shown for various values of $\epsilon$. For the plot, we have chosen $\beta=2\pi, F_d=1, c_1=2, t_{\rm eff}=50$ and $t_*=51$. For this set of parameters, the bound (\ref{epsilon:bound}) requires that $\epsilon \ge 0.1415$. The dashed black lines represent the unregularized OTOC (\ref{eq:intro}). The regularized OTOC differs only slightly from the unregularized OTOC even when $\epsilon=1$.} \end{figure} We now focus on the regime $t_0\ll t\ll t_{\rm eff},t_*$, in which the regularized OTOC (\ref{otoc:reg}) is well-approximated by \begin{equation}\label{otoc:approx} F_{\rm ext}^{\rm reg}(t)=F_d-\frac{\tilde{c}_1 \(1-\frac{\epsilon}{\pi}\)}{{\cal N}}e^{\frac{2\pi t}{\beta}}\(1-e^{\frac{4 \pi}{\beta} (t-t_{\rm eff})}\(\cos \epsilon -\frac{4(t-t_{\rm eff})\sin \epsilon}{\beta \(1-\frac{\epsilon}{\pi}\)}\)+\cdots\)\ . \end{equation} From the above limit, we find \begin{equation} c_1=\tilde{c}_1 \(1-\frac{\epsilon}{\pi}\) \end{equation} such that the coefficient of the term $e^{\frac{2\pi t}{\beta}}$ remains unchanged $c_1/{\cal N}$. The OTOC (\ref{otoc:reg}) is an analytic completion of the leading Lyapunov behavior (\ref{maximal}). This analytic completion asymptotes to the extremally chaotic OTOC (\ref{eq:intro}) away from the time scale $t_{\rm eff}$. In theories of quantum gravity and their holographic duals, the scrambling time is determined by the Newton constant $G_N$. On the other hand, the time scale $t_{\rm eff}$, when independent, can be thought of as the analog of the string scale and the OTOC (\ref{otoc:reg}) can be interpreted as a ``tree-level" analytic completion of maximal chaos. \subsection{Late-Time Corrections from Long Tails} The $i\epsilon$ regularization makes the extremal OTOC analytic even on the boundary of the half-strip (\ref{halfstrip}) while preserving all the qualitative features on the real line. For example, the OTOC (\ref{otoc:reg}) asymptotes to $F=F_d$ in the limit $t\rightarrow \infty$ for all $\epsilon$. 
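The statements above are easy to verify numerically as well. The sketch below (ours; the parameter values mirror those quoted for figure \ref{figure:reg} and are otherwise arbitrary, and the quadrature grid is an assumption) integrates the regularized density (\ref{rho:extremal}) against the kernel of (\ref{eq:KL}) and compares the result with the unregularized extremal OTOC.
\begin{verbatim}
import numpy as np

# Parameters as in figure (figure:reg): beta = 2*pi, F_d = 1, c1 = 2, t_eff = 50,
# t_* = 51, epsilon = 0.5 (above the lower bound required by (epsilon:bound)).
beta, F_d, c1, t_eff, t_star, eps = 2 * np.pi, 1.0, 2.0, 50.0, 51.0, 0.5
N = np.exp(2 * np.pi * t_star / beta)       # t_* = (beta/2 pi) ln N
c1_tilde = c1 / (1 - eps / np.pi)           # normalization fixed in the text

def rho_eps(tp):
    # Regularized density (rho:extremal); the denominator is real and positive.
    u = np.exp(4 * np.pi * (tp - t_eff) / beta)
    return (4 * c1_tilde / (N * beta)) * u * np.sin(eps) / (1 - 2 * u * np.cos(eps) + u ** 2)

def kernel(t, tp):
    # Extremal kernel, rewritten in an overflow-safe form:
    # e^{2 pi t/b}/(1 + e^{4 pi (t-t')/b}) = 1/(e^{-2 pi t/b} + e^{2 pi (t - 2 t')/b}).
    return 1.0 / (np.exp(-2 * np.pi * t / beta) + np.exp(2 * np.pi * (t - 2 * tp) / beta))

tp = np.linspace(20.0, 90.0, 200001)
dtp = tp[1] - tp[0]
for t in (46.0, 50.0, 54.0, 70.0):
    F_reg = F_d - np.sum(kernel(t, tp) * rho_eps(tp)) * dtp       # quadrature of (eq:KL)
    grow = np.exp(2 * np.pi * t / beta)
    F_unreg = F_d - (c1 / N) * grow / (1 + np.exp(4 * np.pi * (t - t_eff) / beta))
    print(f"t = {t}: regularized = {F_reg:.6f}, unregularized extremal = {F_unreg:.6f}")
\end{verbatim}
Away from $t_{\rm eff}$ the two columns agree closely, and both indeed approach $F_d$ as $t$ grows.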
This fact depends heavily on the late-time behavior of the density function (\ref{rho:extremal}) \begin{equation}\label{densityfunction_late} \rho(t> t_{\rm eff}) \approx \frac{4\tilde{c}_1 \sin \epsilon}{{\cal N} \beta} e^{\frac{4 \pi}{\beta} (t_{\rm eff}-t)}\ . \end{equation} This density function becomes very small for $t>t_*$ and hence other effects can dominate at very late times. In particular, parametrically $ \rho(t)\ll 1/{\cal N}$ for $t-t_{\rm eff}\gg \beta$ and hence in this regime higher-order $1/{\cal N}$ corrections to (\ref{rho:extremal}) can be important. However, as we show next, the structure of the extremal OTOC is rather rigid and only a very specific late-time correction to the density function can change the OTOC (\ref{otoc:reg}) significantly for $t-t_{\rm eff}\gg \beta$. Let us introduce a new time scale $t_\Lambda>t_*$ above which late-time corrections to the density function (\ref{densityfunction_late}) is significantly large.\footnote{Since the density function (\ref{rho:extremal}) is also exponentially suppressed for $t\ll t_{\rm eff}$, one might consider higher order $1/{\cal N}$ corrections to (\ref{rho:extremal}) in this regime. However, contributions of such early-time corrections to the OTOC are always small and hence they can never affect the general structure of the extremally chaotic OTOC. Whereas, late-time corrections are interesting, as they can change the asymptotic form of the OTOC in the limit $t\rightarrow \infty$.} We can make general comments about the effects of such late-time ($t>t_\Lambda$) corrections on the asymptotic form of the OTOC. For simplicity, we approximate late-time corrections to the density function (\ref{rho:extremal}) that start becoming important for $t>t_\Lambda$ as follows \begin{equation}\label{eq:tk} \delta \rho(t) = \rho_\infty e^{-\frac{2 \pi a}{\beta}t}\Theta\(t-t_\Lambda\)\ . \end{equation} The exponent $a$ is constant but arbitrary, however it has a lower bound $a\ge 1$. This lower bound follows from the boundedness condition of the density function (\ref{rho:bound}). Moreover, we have assumed that $t_\Lambda>t_*$, especially for $a=1$, such that parametrically $\delta \rho(t)<1/{\cal N}$. Our goal is to show that only a very specific $\delta \rho(t)$ can significantly change the late-time behavior of the OTOC. The above approximation (\ref{eq:tk}) for the late-time correction is sufficient to establish that point. We now argue that the density function can alter the asymptotic value of the OTOC only when it has a {\it long tail} that saturates the bound $a\ge1$ in (\ref{eq:tk}). The contribution of a correction term (\ref{eq:tk}) to the OTOC can be computed analytically by using the representation (\ref{eq:KL}). At early times $t_\Lambda-t\gg \frac{\beta}{2\pi}$, we find that such contributions are exponentially suppressed \begin{equation} \delta F(t)=\frac{\rho_\infty \beta}{2\pi a} e^{\frac{2 \pi }{\beta}(t-t_*)}e^{\frac{2 \pi }{\beta}(t_*-a t_\Lambda)}+\cdots \end{equation} for any $a\ge 1$. Likewise, the contribution of a correction term (\ref{eq:tk}) to the OTOC, for $a>1$, decays exponentially fast even at late times $t-t_\Lambda\gg \frac{\beta}{2\pi}$ \begin{equation} \delta F(t) \propto \beta\rho_\infty e^{-\frac{2 \pi }{\beta}(t-t_\Lambda)}e^{-\frac{2 \pi }{\beta}(a-1) t_\Lambda}\ , \end{equation} implying the OTOC still asymptotes to $F=F_d$ in the limit $t\rightarrow \infty$. 
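The exponential suppression for $a>1$ can also be checked numerically. The sketch below (Python, assuming \texttt{numpy} and \texttt{scipy}) folds the correction (\ref{eq:tk}) against a Fermi-Dirac-type kernel, taking $\delta F(t)=-e^{\frac{2\pi t}{\beta}}\int dt'\,\delta\rho(t')/(1+e^{\frac{4\pi}{\beta}(t-t')})$; this particular form of the kernel is an assumption made here purely for illustration, consistent with the late-time expansion in section~\ref{sec:maximal}. With illustrative parameters, $\delta F$ comes out exponentially small both well before and well after $t_\Lambda$ whenever $a>1$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

beta, t_star, t_Lam, rho_inf = 2*np.pi, 51.0, 53.0, 0.5   # illustrative values

def delta_F(t, a):
    # delta rho(t') = rho_inf * exp(-2 pi a t'/beta) * Theta(t' - t_Lam), folded
    # against an assumed Fermi-Dirac kernel 1/(1 + exp(4 pi (t - t')/beta)).
    integrand = lambda tp: (rho_inf*np.exp(-2*np.pi*a*tp/beta)
                            / (1 + np.exp(4*np.pi*(t - tp)/beta)))
    val, _ = quad(integrand, t_Lam, np.inf)
    return -np.exp(2*np.pi*t/beta)*val

for a in (1.5, 2.0):
    # exponentially small both before and after t_Lam when a > 1
    print(a, delta_F(50.0, a), delta_F(58.0, a))
\end{verbatim}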
So, a correction term (\ref{eq:tk}) with $a>1$ can only control how the OTOC approaches its asymptotic value $F=F_d$, however, it cannot change the general structure of the extremal OTOC (\ref{eq:intro}). On the other hand, the same is not true when the density function has a long tail: $a=1$.\footnote{Note that now there is also a bound on $\rho_\infty$ from (\ref{rho:bound}): $\frac{8F_d}{\beta}\ge \rho_\infty\ge 0$.} In this case, we obtain \begin{equation}\label{eq:heavy} \delta F(t)=-\frac{\rho_\infty \beta}{4\pi }\(\pi -2 \tan ^{-1}\left(e^{\frac{2 \pi (t_\Lambda-t)}{\beta }}\right)\) \end{equation} implying that the full OTOC now asymptotes to \begin{equation} F(t\gg t_\Lambda)=F_d-\frac{\rho_\infty \beta}{4 }\ , \end{equation} irrespective of the value of $t_\Lambda$. So, in this case the regularized OTOC (\ref{otoc:reg}) provides a good approximation only up to the cut-off scale $t_\Lambda$. We wish to note that the connection between the asymptotic value of the OTOC and long tails of the density function is true in general. We will discuss this more in the next section. So, long-tailed corrections of (regularized or unregularized) extremally chaotic OTOCs are interesting since they can change the asymptotic structure of the extremal OTOC (\ref{eq:intro}). Furthermore, we expect that such long-tailed corrections are generated in theories of quantum gravity from higher-order $1/{\cal N}$ effects. For example, the density functions do exhibit long tails in the Schwarzian theory (see appendix \ref{app:ST}) and also in 2d CFT with a large central charge.\footnote{This fact can be deduced in 2d CFT directly from \cite{Roberts:2014ifa}.} \begin{figure} \centering \includegraphics[scale=0.5]{figure_heavytails.pdf} \caption{ \label{figure:tail} \small A long-tailed correction can change the asymptotic behavior of the extremally chaotic OTOC. This is a plot of the (regularized) extremally chaotic OTOC with long-tailed corrections $F_{\rm ext}^{\rm reg}(t)+ \delta F(t)$ for various values of $\rho_\infty$. For the plot, we have chosen $\beta=2\pi, F_d=1, c_1=2, t_{\rm eff}=50, \epsilon=0.5, t_*=51$ and $t_\Lambda=53$, where $t_\Lambda$ is the time scale at which long-tailed corrections start to become important. At early times $t_\Lambda-t\gg \frac{\beta}{2\pi}$, the OTOC remains unaffected. } \end{figure} It is, therefore, interesting that extremal OTOCs with a long-tailed correction \begin{equation}\label{QG} F(t)=F_{\rm ext}^{\rm reg}(t)+ \delta F(t)\ , \end{equation} where $ \delta F(t)$ is given in (\ref{eq:heavy}), have features that are qualitatively very similar (see figure \ref{figure:tail}) to the spectral form factor of the SYK model \cite{Cotler:2016fpe}. This suggests that there is universality in the ramp-and-plateau behavior at very late times. Perhaps, a similar analysis can also be performed for the spectral form factor, though we will have to leave this question for the future. \section{Analytic Completions of Maximal Chaos}\label{sec:maximal} The $i\epsilon$-regularized OTOC (\ref{otoc:reg}), with or without a long-tailed correction, is an example of a small deformation of the extremally chaotic OTOC (\ref{eq:intro}). One advantage of the spectral representation (\ref{eq:KL}) is that it enables us to study general small deformation of the extremal OTOC (\ref{eq:intro}). \subsection{Small Deformations of Extremal Chaos} We begin by providing a precise definition of a general small deformation. 
We define small deformations as follows: the density function $\rho(t)$ is a narrow distribution (however can be a complicated function) which is small everywhere outside a window $t_{\rm eff}-\Delta t_{\rm eff} \le t \le t_{\rm eff} +\Delta t_{\rm eff}$, where $\Delta t_{\rm eff} \ll t_{\rm eff}$. In particular, it obeys \begin{equation}\label{eq:narrow} \int_{t_{\rm eff}-\Delta t_{\rm eff}}^{t_{\rm eff}+\Delta t_{\rm eff}} dt' \rho(t')\gg \int_{t_0}^{t_{\rm eff}-\Delta t_{\rm eff}} dt' \rho(t')+\int_{t_{\rm eff}+\Delta t_{\rm eff}}^\infty dt' \rho(t')\ . \end{equation} Moreover, the density function must also obey all the properties that follow from (\ref{eq:inversion}) as discussed before. The resulting OTOCs have the following properties. In the regime $t_0\ll t\ll t_{\rm eff},t_*$, they saturate the MSS bound. Furthermore, they are analytic functions obeying the Schwarz reflection (\ref{eq:SR}) and the boundedness (\ref{eq:positive}) conditions in the half-strip (\ref{halfstrip}) (including the boundary). However, these OTOCs, in general, saturate (exactly or approximately) only a subset of all the chaos bounds in any time duration. Any analytic completion of a long period of maximal chaos (\ref{maximal}) must be a small deformation of extremal chaos. This becomes obvious once we rewrite (\ref{eq:KL}) as \begin{equation}\label{eq:FD} F_d-F(t)=\frac{c_1}{{\cal N}} e^{\frac{2\pi}{\beta} t}- e^{\frac{2\pi}{\beta} t}\int_{t_0}^\infty dt' f_{FD}(t;t')\rho(t')+{\cal O}\(e^{-\frac{2\pi (t-t_0)}{\beta}}\)\ , \end{equation} where $f_{FD}(t;t')\equiv 1- e^{-\frac{2\pi}{\beta} t} \mathcal{F}_{\rm ext}(t;t')$ and $\int_{t_0}^\infty dt' \rho(t')=\frac{c_1}{{\cal N}}$. The function $f_{FD}(t;t')$ is exactly the Fermi-Dirac distribution that asymptotes between $1$ for $t'\ll t$ and $0$ for $t'\gg t$. This provides some insight into the spectral representation of the OTOC. Let us now consider an OTOC which is maximally chaotic (\ref{maximal}) for $t_0\ll t\le t_i$. This necessarily requires that the second term in (\ref{eq:FD}) is small compared to the first term for $t\le t_i$, implying \begin{align} \int_{t_0}^\infty dt' \rho(t')\gg \int_{t_0}^\infty dt' f_{FD}(t_i;t')\rho(t')> \int_{t_0}^{t_i} dt' f_{FD}(t_i;t')\rho(t')\ . \end{align} The last integral has a two-sided bound \begin{equation} \frac{1}{2}\int_{t_0}^{t_i} dt' \rho(t')\le \int_{t_0}^{t_i} dt' f_{FD}(t_i;t')\rho(t')\le \int_{t_0}^{t_i} dt' \rho(t') \end{equation} that leads to \begin{equation}\label{max_con1} \int_{t_0}^\infty dt' \rho(t')\gg \int_{t_0}^{t_i} dt' \rho(t')\ . \end{equation} Moreover, we assume that the OTOC starts to deviate from the maximally chaotic OTOC (\ref{maximal}) only near the scrambling time scale. In other words, the difference between two time scales $t_i$ and $t_*$ is not too large compared to $\frac{\beta}{2\pi}$. On the other hand, the bound (\ref{rho:bound}) dictates that the density function is also small $\rho(t)\ll 1/{\cal N}$ for $t\ge t_f$, where $t_f=t_*+{\cal O}(1) \frac{\beta}{2\pi}$. In particular, for any such $t_f$ \begin{equation} \int^{\infty}_{t_f} dt' \rho(t')<\frac{4 F_d}{\pi}e^{-\frac{2\pi}{\beta} t_f}\ll \frac{1}{{\cal N}}\ . \end{equation} Hence, the above integral is exponentially suppressed compared to $\int_{t_0}^\infty dt \rho(t)=\frac{c_1}{{\cal N}}$, which is the coefficient of the term $e^{\frac{2\pi t}{\beta}}$ in (\ref{maximal}).\footnote{Throughout the paper, we are assuming that $F_d$ is order 1. This can be ensured by normalizing operators $V$ and $W$ appropriately. 
} This immediately implies (\ref{eq:narrow}) with $t_i\le t_{\rm eff} \le t_f$ and $\Delta t_{\rm eff}\le \frac{1}{2}(t_f-t_i)\sim {\cal O}(1) \frac{\beta}{2\pi}$. Note that $t_{\rm eff}$ and $t_*$ are not parametrically separated when we have a long period of maximal chaos. So, $t_{\rm eff}$ can simply be $t_*$ or it can also be an independent time scale. In any case, for strongly chaotic systems $F_d-F(t\sim t_*)\sim {\cal O}(1) F_d$, indicating that the difference between these two time scales is not very large compared to $\frac{\beta}{2\pi}$. So, we conclude that all analytic completions of a long period of maximal chaos are small deformations of extremal chaos.\footnote{The discussion of this section is valid even when the period of maximal chaos is short, {\it i.e.,}\ $t_i\ll t_*$. For such systems, the only difference is that $\Delta t_{\rm eff}$ can be large $\Delta t_{\rm eff}\gg \frac{\beta}{2\pi}$.} It should be noted that the resulting OTOCs can defer significantly from the extremal OTOC (\ref{eq:intro}), however, from the perspective of the density function they are always small deformations. As a consequence, these OTOCs have universal qualitative features far away from $t=t_{\rm eff}$, as we discuss next. \subsection{Early-Time Behavior} We first consider the regime $t\gg t_0$ and $t_{\rm eff}-t\gg \frac{\beta}{2\pi}$. The integral (\ref{eq:FD}) in this limit can be simplified, obtaining \begin{equation}\label{OTOC:early} F(t)=F_d-e^{\frac{2\pi t}{\beta}}\(\frac{c_1}{{\cal N}}-\int_{t_0}^{t_{\rm eff}-\Delta t_{\rm eff}} f_{FD}(t;t')\rho(t')dt' -c_2 e^{\frac{4\pi}{\beta} (t-t_{\rm eff})} \)+\cdots\ , \end{equation} where dots represent terms that are further exponentially suppressed ${\cal O}(e^{\frac{8\pi}{\beta} (t-t_{\rm eff})}, e^{\frac{2\pi}{\beta} (t_0-t)})$. Note that $\rho(t)$ is vanishingly small near $t=t_0$ and hence the above expression is independent of $t_0$. Moreover, $c_1$ and $c_2$ coefficients are given by \begin{equation} \frac{c_1}{{\cal N}}=\int_{t_0}^\infty \rho(t')dt'\ , \qquad c_2\approx \int_{t_{\rm eff}-\Delta t_{\rm eff}}^{t_{\rm eff}+\Delta t_{\rm eff}} \rho(t')e^{\frac{4\pi}{\beta} (t_{\rm eff}-t')}dt' \ . \end{equation} Both corrections to the maximally chaotic term in (\ref{OTOC:early}) are such that they slow down the initial $e^{\frac{2\pi t}{\beta}}$ growth of the OTOC.\footnote{Analyticity of $F(t)$ dictates that $\rho(t)$ cannot be exactly zero outside $t_{\rm eff}-\Delta t_{\rm eff} \le t \le t_{\rm eff} +\Delta t_{\rm eff}$. So, the integral in (\ref{OTOC:early}) is small but non-zero. Of course, this contribution still has a fixed sign since the density function is always non-negative.} This is true irrespective of the details of the density function $\rho(t)$, provided $t_*,t_{\rm eff}\gg t_0$. The expression (\ref{OTOC:early}) provides a physical interpretation of $\rho(t)$ in the early-time regime: $t\gg t_0$ and $t_{\rm eff}-t\gg \beta/2\pi$. If $\rho(t)$, in this regime, is a slowly varying function of time, one can use the Sommerfeld approximation trick to simplify \begin{equation} \int_{t_0}^{t_{\rm eff}-\Delta t_{\rm eff}} f_{FD}(t;t')\rho(t')dt' \approx \int_{t_0}^t dt' \rho (t')\ . \end{equation} One can interpret this subleading contribution in (\ref{OTOC:early}) as coming from a small correction to the Lyapunov exponent: $F_d-F(t) \sim \exp(\frac{2\pi}{\beta}t+\int dt \delta\lambda_L(t))$. 
Hence, at the leading order we obtain \begin{equation}\label{eq:epsilon} \delta\lambda_L(t)=-\frac{\rho(t)}{\int_{t_0}^\infty \rho(t')dt'}\ , \end{equation} implying that the Lyapunov exponent decreases monotonically from the MSS saturation value as one approaches the time scale $t_{\rm eff}$. It is well-known that holographic theories dual to Einstein gravity saturate the MSS bound, where $\frac{1}{{\cal N}}=\exp(\frac{2\pi}{\beta}t_*)$ is determined by the Newton constant $G_N$. However, if we include stringy correction to the Einstein gravity result of $\lambda_L=\frac{2\pi}{\beta}$, the OTOC no longer saturates the MSS bound \cite{Shenker:2014cwa}. A comparison with the above result (\ref{eq:epsilon}) suggests that at early times $\rho(t)=\rho_0\approx$ constant, which is determined by the string scale. In particular, from \cite{Shenker:2014cwa} we find that for theories dual to planar AdS$_{d+1}$ black holes \begin{equation} \rho_0= \frac{\pi d(d-1)}{2\beta}\(\frac{l_s}{l_{\rm AdS}}\)^2\frac{c_1}{{\cal N}}\ , \end{equation} where $l_s$ is the string length and $l_{\rm AdS}$ is the AdS radius. \subsection{Late-Time Behavior} We can perform a similar analysis at late times $t\gg t_{\rm eff}$, obtaining \begin{align}\label{OTOC:late} F(t)=F_d&-\frac{\beta}{4}\rho_\infty \nonumber\\ &-\int_{t_{\rm eff}+\Delta t_{\rm eff}}^\infty \frac{dt' }{1+e^{\frac{4\pi }{\beta}(t-t')}}\(\rho(t')e^{\frac{2\pi t}{\beta}}-\rho_\infty\)-\tilde{c}_2 e^{\frac{2\pi }{\beta}(t_{\rm eff}-t)}+\cdots\ , \end{align} where $\rho_\infty=\lim_{t\rightarrow \infty}e^{\frac{2\pi t}{\beta}}\rho(t)\ge 0$ and dots represent subleading terms. The coefficient $\tilde{c}_2$ is given by \begin{equation} \tilde{c}_2=\int_{t_0}^{t_{\rm eff}+\Delta t_{\rm eff}} dt' e^{\frac{4\pi }{\beta}(t'-t_{\rm eff})}\rho(t')\approx \int_{t_{\rm eff}-\Delta t_{\rm eff}}^{t_{\rm eff}+\Delta t_{\rm eff}} dt' e^{\frac{4\pi }{\beta}(t'-t_{\rm eff})}\rho(t') \end{equation} which is strictly positive. Clearly, the first line of (\ref{OTOC:late}) is the asymptotic value of the OTOC for large $t$. This asymptotic value depends entirely on whether the density function has a long tail. The second line of (\ref{OTOC:late}) contains the leading terms that control how the OTOC approaches its asymptotic value. Note that the second line of (\ref{OTOC:late}) does not have a fixed sign in general since the integral in (\ref{OTOC:late}) can have either sign when $\rho_\infty>0$. ~\\ Finally, we make some general comments. As stated before, the regularized extremally chaotic OTOC (\ref{otoc:reg}) can be regarded as a ``tree-level" analytic completion of the leading Lyapunov behavior (\ref{maximal}), where $e^{\frac{2\pi t_{\rm eff}}{\beta}}$ plays the role of an effective string scale. However, it should be noted that small deformations of extremal chaos, as described above, are more general. For example, they also capture analytic completions of the leading Lyapunov behavior (\ref{maximal}) by summing over higher order $1/{\cal N}$ contributions. In such a case, scales $t_*$ and $t_{\rm eff}$ are not independent since they are both determined by ${\cal N}$. This is exactly what happens in the Schwarzian theory, which describes 2D Jackiw-Teitelboim (JT) gravity \cite{Jackiw:1984je,Teitelboim:1983ux}, as we discuss in appendix \ref{app:ST}. 
In the Schwarzian theory, it is possible to compute the OTOC (\ref{eq:otoc}) as an expansion in $\beta/C = 1/{\cal N}$ (see \cite{Maldacena:2016upp,Lam:2018pvp}) that is analytic even in the regime $t>t_*$ where it asymptotes to zero. As a check, one can compute the associated density function which is indeed a narrow distribution (a single peak) around $t\sim t_*$. Likewise, we observe from \cite{Roberts:2014ifa} that 2d CFTs with a large central charge $c$ have the same qualitative features when we include $1/c$ corrections. This is perfectly consistent with our general discussion that any analytic completion of maximal chaos must be a small deformation of extremal chaos. Moreover, the density functions for $t> t_*$, in both of these cases, have long tails with $\rho_\infty=\frac{4F_d}{\beta}$. \section{Conclusions and Outlook}\label{sec:conclusions} It is always important to ask what general lessons we can learn from various models of quantum gravity. Quantum chaos provides a profound answer to this question. There are compelling reasons to believe that all theories of quantum gravity and their holographic duals are maximally chaotic. More precisely, in these theories, the OTOC (\ref{eq:otoc}) saturates the MSS bound on chaos (\ref{MSS:Lyapunov}) for $t_0\ll t\ll t_*$. In this paper, we showed that the extremally chaotic OTOC (\ref{eq:intro}) analytically completes the maximally chaotic OTOC inside the half-strip $\{t\in \mathbb{C}|\ \mbox{Re}\ t\gg t_0\ \text{and}\ |\mbox{Im}\ t|< \frac{\beta}{4} \}$, saturating even the subleading chaos bounds of \cite{Kundu:2021qcx}. Furthermore, we argued for its uniqueness. Interestingly, the extremal OTOC provides a spectral representation (\ref{intro:KL}) of all OTOCs that are analytic functions obeying the boundedness and the Schwarz reflection conditions in the half-strip (\ref{halfstrip}). A non-trivial implication of this representation is that all analytic completions of a long period of maximal chaos must be small deformations of extremal chaos from the perspective of the density function. Any physical system cannot be exactly extremally chaotic since the extremal OTOC (\ref{eq:intro}) is non-analytic on the boundary of the half-strip (\ref{halfstrip}). This problem can be resolved naturally by a standard $i\epsilon$-regularization, making the OTOC analytic everywhere in the half-strip (\ref{halfstrip}) obeying the Schwarz reflection (\ref{eq:SR}) and the boundedness (\ref{eq:positive}) conditions. The regularized OTOC, for real $t$, has the same qualitative features as the extremal OTOC, differing only slightly for $t>t_{\rm eff}$ even when $\epsilon$ is finite. Thus, a physical system, in principle, can be approximately extremally chaotic up to some time scale $t_\Lambda>t_*$.\footnote{Of course, it is completely self consistent to have $t_\Lambda=\infty$. } Unfortunately, we are not aware of any chaotic system which exhibits this behavior even qualitatively. It would be interesting to construct such systems. At this point, it is reasonable to make the conjecture that all theories of quantum gravity (and their holographic duals) are strongly chaotic systems that are small deformations of extremally chaotic systems. In particular, the associated density function $\rho(t)$ is a narrow distribution that is small everywhere outside a window around $t_{\rm eff}$ obeying (\ref{eq:narrow}). Of course, $\rho(t)$ can be a complicated function inside the narrow window. 
Moreover, the integral $\int_{t_0}^\infty dt \rho(t)=\frac{c_1}{{\cal N}}$ is exactly the coefficient of the term $e^{\frac{2\pi t}{\beta}}$ in (\ref{maximal}), determining the scrambling time scale $t_*$. This conjecture follows directly from the fast scrambling conjecture of \cite{Sekino:2008he} (or equivalently the more precise version in terms of the MSS bound \cite{Maldacena:2015waa}). A more tempting but speculative conjecture would be that OTOCs associated with quantum gravity have qualitative features similar to the extremally chaotic OTOC with a long-tailed correction (\ref{QG}). This conjecture perhaps can be checked in the SYK model by utilizing numerical tools developed in \cite{Kobrin:2020xms}. \section*{Acknowledgments} It is my pleasure to thank Diptarka Das, Thomas Hartman, Jared Kaplan, Arnab Kundu, and Douglas Stanford for helpful discussions and commenting on a draft. I was supported in part by the Simons Collaboration Grant on the Non-Perturbative Bootstrap. \begin{appendix} \section{Schwarzian Theory and Late-Time Long Tail}\label{app:ST} The Sachdev-Ye-Kitaev (SYK) model is maximally chaotic \cite{kitaev2014hidden,Sachdev:1992fk,Polchinski:2016xgd,Maldacena:2016hyu,Jevicki:2016bwu,Jevicki:2016ito,Cotler:2016fpe}. The SYK model also exhibits other features suggestive of the fact that it is the holographic dual of a 2D dilaton gravity theory on AdS. It was pointed out by Kitaev \cite{kitaev2014hidden,Kitaev:2017awl} that the IR dynamics of the SYK model is described by the Schwarzian theory of a single effective degree of freedom $t(u)$ \begin{equation} S=-C\int du \{t(u),u\}\ , \end{equation} where the Schwarzian derivative $\{t(u),u\}=\frac{t'''}{t'}-\frac{3}{2}\frac{t''^2}{t'^2}$. The Schwarzian theory also describes JT gravity \cite{Jackiw:1984je,Teitelboim:1983ux}, providing a precise connection between the SYK model and 2D dilaton gravity theory \cite{Almheiri:2014cka,Jensen:2016pah,Maldacena:2016upp,Engelsoy:2016xyb,Cvetic:2016eiv,Nayak:2018qej}. In the Schwarzian theory, it is possible to compute the OTOC (\ref{eq:otoc}) as an expansion in $\beta/C\equiv 1/{\cal N}$. In particular, the OTOC is given by the confluent hypergeometric $U$-function \cite{Maldacena:2016upp,Lam:2018pvp} \begin{equation}\label{otoc:sch} \frac{F(t)}{F_d}=\frac{1}{z^{2\Delta}}U\(2\Delta, 1, \frac{1}{z}\)\ , \qquad z=\frac{\beta}{16\pi C}e^{\frac{2\pi t}{\beta}}\equiv \frac{1}{16\pi }e^{\frac{2\pi }{\beta}(t-t_*)} \end{equation} in the limit $C/\beta\gg 1$, $e^{\frac{2\pi t}{\beta}}\gg 1$ with $z$ fixed, where operators $V$ and $W$ have the same scaling dimension $\Delta$. This OTOC is maximally chaotic for $t_*-t\gg \frac{\beta}{2\pi}$. Moreover, it is also well-behaved for $t>t_*$, where it asymptotes to zero. This is in sharp contrast with the extremally chaotic OTOC (\ref{eq:intro}), which asymptotes to $F_d$ for $t>t_*$. In the language of extremal chaos, the OTOC (\ref{otoc:sch}) is a special case in which two time scales: $t_*, t_{\rm eff}$ are not independent. Nevertheless, the discussion of this paper still applies implying that the OTOC (\ref{otoc:sch}) is a small deformation of the extremal OTOC. Besides, we will show that the OTOC (\ref{otoc:sch}) has a density function with a long tail. \begin{figure} \centering \includegraphics[scale=0.45]{figure_sch.pdf} \caption{ \label{contour} \small A generic OTOC and the associated density function (normalized by $C$) are shown for the Schwarzian theory. 
The density function $\rho(t)$ has a narrow peak near the scrambling time $t_*=\frac{\beta}{2\pi}\ln (C/\beta)$. So from the perspective of the density function, this is exactly a small deformation of extremal chaos where the time scale $t_{\rm eff}$ is also set by the scrambling time. Note that OTOCs of 2d CFTs with a large central charge also have the same qualitative features. }\label{fig:sch} \end{figure} We begin by noting the asymptotic behaviors of the OTOC (\ref{otoc:sch}). Initially, the leading growing term saturates the MSS bound \begin{equation} F(t)=F_d \(1-\frac{ \Delta ^2 }{4 \pi }e^{\frac{2 \pi (t-t_*)}{\beta }}+\frac{ (2 \Delta +1)^2 \Delta ^2 }{128 \pi ^2}e^{\frac{4 \pi (t-t_*)}{\beta }}+\cdots\)\qquad \frac{\beta}{2\pi}\ll t, t_*-t \end{equation} however, correction terms start to become important near the scrambling time. On the other hand, $F(t)$ decays exponentially at late times \begin{equation} F(t)=\frac{2\pi F_d}{\beta} \frac{(16\pi)^{2\Delta}}{\Gamma(2\Delta)}(t-t_*)e^{-\frac{4 \pi \Delta (t-t_*)}{\beta }}+\cdots \qquad t- t_*\gg \frac{\beta}{2\pi}\ . \end{equation} We can now obtain the density function $\rho(t)$ by using the inversion formula (\ref{eq:inversion}): \begin{equation} \rho(t)=\frac{4F_d}{\beta}e^{-\frac{2\pi t}{\beta}}\(1-\mbox{Re}\ \frac{e^{-i \pi \Delta}U\(2\Delta, 1,- \frac{i}{z}\)}{z^{2\Delta}}\) \qquad z= \frac{1}{16\pi }e^{\frac{2\pi }{\beta}(t-t_*)} \end{equation} which is a narrow distribution around $t\sim t_*$, as shown in figure \ref{fig:sch}. In particular, the density function is small away from the peak \begin{align} \rho(t)=F_d\frac{\Delta ^2 (2 \Delta +1)^2 }{32 \pi ^2 C}e^{\frac{2 \pi (t-t_*)}{\beta }}\ , \qquad \qquad \frac{\beta}{2\pi}\ll t, t_*-t\ . \end{align} Similarly at late times $t- t_*\gg \frac{\beta}{2\pi}$, the density function is given by \begin{equation} \rho(t)=\frac{4 F_d}{\beta}e^{-\frac{2\pi t}{\beta}}-\frac{8\pi F_d}{\beta^2} \frac{(16\pi)^{2\Delta}}{\Gamma(2\Delta)}e^{-\frac{2\pi t}{\beta}-\frac{4 \pi \Delta (t-t_*)}{\beta }}\((t-t_*)\cos(\pi \Delta)+\frac{\beta}{4}\sin(\pi \Delta)\) \end{equation} implying that the density function has a long tail. Note that $C \rho(t)/\beta$, for $t\gg t_*$, is exponentially suppressed. So, the long tail of $\rho(t)$, strictly speaking, is not a perturbative $\frac{1}{C}$ effect. All these results are perfectly consistent with our general discussion that any analytic completion of maximal chaos must be a small deformation of extremal chaos. For example, one can check that the above asymptotic expressions satisfy (\ref{OTOC:early}) and (\ref{OTOC:late}). However, the Schwarzian theory represents a very special case in which $F(t)$ goes to zero for $t\gg t_*$ indicating that the information about the initial perturbation is completely lost. Of course, the OTOC (\ref{otoc:sch}) is not valid in the limit $t\rightarrow \infty$. It is definitely possible that non-perturbative contributions to $F(t)$ are such that the exact OTOC asymptotes to some non-zero value in the limit $t\rightarrow \infty$. A similar analysis can also be performed for 2d CFTs with a large central charge $c$, where we also know $F(t)$ beyond the leading $\frac{1}{c}$ term \cite{Roberts:2014ifa}. The corresponding density function exhibits very similar features with exactly the same long tail. \end{appendix} \end{spacing} \bibliographystyle{utphys}
\section{Introduction} \label{secIntro} In many applications in physical reasoning and in computer graphics, shapes deform continuously. However, what kinds of functions from time to shapes count as ``continuous'' depends on the topology of the space of regions; and this, as we will discuss here, is not as clear-cut as one might suppose. Over the space of points in $\mathbb{E}^{n}$, there are a number of different metrics in common use: the standard Euclidean distance, the Manhattan distance, and more generally the Minkowski distance with parameter $p$. But all of these, except the discrete metric, are fundamentally similar, in the sense that they generate the same topology. If a sequence ${\bf x}_{1}, {\bf x}_{2}, \ldots$ converges to $\bf y$ in any of them, then it converges to $\bf y$ in all of them; and if a function $\phi({\bf x})$ is continuous in any of them, it is continuous in all of them. When one considers the space of regions in $\mathbb{E}^{n}$, however, the situation is very different. Here, again, there are many different possible natural metrics, with no obvious clear favorite, and these are fundamentally different in the sense that they generate different topologies (Davis 2001; Galton 2000). For instance (figure~\ref{figAreaVsHausdorff}), consider the sequence of regions in the plane ${\bf Q}_{1}, {\bf Q}_{2}, \ldots$ where ${\bf Q}_{i}=((0,1) \times (0,1)) \cup ((2,2+1/i) \times (0,1))$. Let ${\bf P} = (0,1) \times (0,1)$. If one measures the difference between two regions {\bf X} and {\bf Y} as the area of their symmetric difference \[V({\bf X,Y}) = \mbox{area}(({\bf X} \setminus {\bf Y}) \: \cup \: ({\bf Y} \setminus {\bf X})) \] then $V({\bf Q}_{i},{\bf P}) = 1/i$, so the sequence ${\bf Q}_{1}, {\bf Q}_{2}, \ldots$ converges to ${\bf P}$. If one measures it using the Hausdorff distance $H({\bf X},{\bf Y})$, then $H({\bf Q}_{i},{\bf P}) > 1$ for all $i$, so the sequence does not converge to {\bf P}. \begin{figure} \begin{center} \includegraphics[width=4in]{AreaVsHausdorff.png} \caption{A sequence of regions that converges in the symmetric-difference metric but not in the Hausdorff metric} \end{center} \label{figAreaVsHausdorff} \end{figure} In this paper, we consider limited classes of regions and well-known metrics that satisfy two specified well-behavedness conditions, and we study the relations between the topologies that these metrics generate over these classes. We prove results of two general flavors. First, in section~\ref{secConvex}, we show that, over the space of {\em convex\/} regions, there is only one natural metric topology. More precisely, we show that all metrics satisfying these well-behavedness conditions generate the same topology. Thus, for instance, there is no way to construct an example analogous to figure~\ref{figAreaVsHausdorff} using convex regions; if a sequence of convex open regions converges to a convex open region in the area metric, it also converges in the Hausdorff metric, and in any other well-behaved metric over regions. The second flavor of result shows that, as figure~\ref{figAreaVsHausdorff} illustrates, if one expands the space of regions under consideration to a broader class, then the different metrics we consider generate different topologies. Section~\ref{secNotation} will introduce notational conventions and basic functions. Section~\ref{secWellBehaved} will define our two well-behavedness conditions: a well-behaved topology ``supports continuous morphing'' and ``satisfies the region separation condition''.
Section~\ref{secMetrics} defines the metrics we will consider: \\ \hspace*{2em} A homeomorphism-based metric $M({\bf A,B})$; \\ \hspace*{2em} The Hausdorff metric $H({\bf A,B})$; \\ \hspace*{2em} The dual-Hausdorff metric $H^{d}({\bf A,B})$; \\ \hspace*{2em} The symmetric-difference metric $V({\bf A,B})$; and \\ \hspace*{2em} The family of Wasserstein metrics $W^{\psi}{\bf A,B})$ We demonstrate that: \begin{itemize} \item Over the space of convex regions, all five metrics, and indeed any metric that satisfies two general well-behavedness constraints, induce the same topology (section~\ref{secConvex}). \item Over the space of convex regions and unions of two separated convex regions, these five metrics are all ordered by ``strictly finer than" relations. In descending order of fineness, these are: the homeomorphism-based, the dual-Hausdorff, the Hausdorff, the Wasserstein, and the symmetric difference. Also, Wasserstein metrics are strictly ordered among themselves (section~\ref{secTwoConvex}). \item Over the space of star-shaped regions, the topologies induced by the Hausdorff metric, the symmetric difference metric, and the Wasserstein metrics are incomparable in terms of fineness (section~\ref{secStar}). \end{itemize} \subsection{Notation and basic concepts} \label{secNotation} $\mathbb{R}$ is the space of real numbers. $\mathbb{E}^{n}$ is $n$-dimensional Euclidean space. We will generally assume that $n \geq 2$; many of our concepts become vacuous or trivial in one-dimensional space, though some carry over. Real numbers and distances will be notated with italicized variables: $x,d$. Points in $\mathbb{E}$ will be notated with boldface lower-case variables: ${\bf p,q}$. In some of the proofs, it will be convenient to choose an origin and notate points as vectors: $\vec{p}, \vec{q}$. The standard Euclidean distance between points {\bf x} and {\bf y} will be denoted $d({\bf p,q})$. Subsets of $\mathbb{E}^{n}$ will be notated with boldface capital letters: $\bf P,Q$. A {\em region} will be a open subset of $\mathbb{E}^{n}$ that is bounded and equal to the interior of its closure (topologically regular). The class of all regions in $\mathbb{E}^{n}$ will be denoted $\mbox{$\mathcal R$}$ (the dimension of the space being left implicit). The closure of {\bf A} is denoted $\bar{\bf A}$. The topological boundary of region {\bf A} (i.e. the closure of {\bf A} minus {\bf A}) is denoted @{\bf A} = $\bar{\bf A} \setminus {\bf A}$. The $n$-dimensional volume of region {\bf A} is denoted $v({\bf A})$. The open ball of radius $d$ centered at point ${\bf p}$ is denoted ${\bf B}({\bf p},d) \subset \mathbb{E}^{n}$. The {\em radius} of region {\bf A} at point ${\bf o} \in {\bf A}$ is the radius of the largest spherical open ball that fits inside ${\bf A}$. The radius (1 argument) of region {\bf A} is its maximal radius. $\mbox{radius}({\bf A})=\max_{{\bf o} \in {\bf A}} \: \mbox{radius}({\bf A,o})$. The {\em diameter} of {\bf A} is the maximal distance between two points in $\bar{\bf A}$: $\mbox{diameter}({\bf A}) = \sup_{{\bf p,q} \in \bar{A}} d({\bf p,q})$. The {\em distance from point ${\bf p}$ to region {\bf Q}} is the distance from {\bf p} to the closest point in the closure of {\bf Q}. \[ d({\bf p},{\bf Q}) = \min_{{\bf q} \in \bar{\bf Q}} \: d({\bf p,q}) \] The {\em distance} between regions {\bf A} and {\bf B} is the smallest distance between points in their closure: $d({\bf A,B}) = \min_{{\bf a}\in \bar{\bf A},{\bf b} \in \bar{\bf B}} d({\bf a,b})$. 
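As a purely illustrative aside, the small Python sketch below (using the \texttt{shapely} package, which is not otherwise assumed in this paper) computes several of the quantities just defined for two axis-aligned rectangles.
\begin{verbatim}
import itertools
from shapely.geometry import Point, box

# P = (0,1) x (0,1) and Q = (2,3) x (0,1), as open rectangles.
P = box(0.0, 0.0, 1.0, 1.0)
Q = box(2.0, 0.0, 3.0, 1.0)
p = Point(0.5, 0.5)

print(p.distance(Q))           # d(p, Q) = 1.5: distance to the closest point of Q
print(P.distance(Q))           # d(P, Q) = 1.0: smallest distance between the closures
print(P.exterior.distance(p))  # radius(P, p) = 0.5: largest open ball about p inside P

# diameter(P): maximal distance between two points of the closure; for a convex
# polygon it is attained at a pair of vertices.
verts = list(P.exterior.coords)
print(max(Point(a).distance(Point(b)) for a, b in itertools.combinations(verts, 2)))
\end{verbatim}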
The distance $d({\bf A,B})$ is not, of course, a metric over regions. \begin{definition} Let {\bf P} be a region. Let $\delta > 0$. The {\em dilation} of {\bf P} by $\delta$ is the set of all points within $\delta$ of {\bf P}. \\ dilate(${\bf P},\delta$) = $\{ {\bf w} \: | \: d({\bf w,P}) \leq \delta \}$. The {\em erosion\/} of {\bf P} by $\delta$ is the set of all points more than $\delta$ from the complement of {\bf P}. \\ erode(${\bf P},\delta$) = $\{ {\bf x} \: | \: d({\bf x,P}^{c}) \geq \delta \}$. The {\em outer shell} of {\bf P} by $\delta$, ${\bf O}({\bf P},\delta) = \mbox{dilate}({\bf P},\delta) \setminus {\bf P}$. The {\em inner shell} of {\bf P} by $\delta$, ${\bf I}({\bf P},\delta) = {\bf P} \setminus \mbox{erode}({\bf P},\delta)$. \end{definition} The regularization of $\bf X \subset \mathbb{E}^{n}$ is the interior of the closure of $\bf X$. Boolean operators, as applied to regions, are implicitly regularized. For instance if ${\bf P}=(0,1) \times (0,1)$, ${\bf Q}=(1,2) \times (0,1)$, and ${\bf R}=(0,2) \times (0,1)$, then ${\bf P} \cup {\bf Q} = {\bf R}$ and ${\bf R} \setminus {\bf Q}={\bf P}$. Subsets of $\mbox{$\mathcal R$}$ -- that is, sets of subsets of $\mathbb{E}^{n}$ --- will be denoted using calligraphic letters: $\mathcal U$, $\mathcal V$. \\ In particular $\mathcal C$ is the set of all convex regions. \\ ${\mathcal D}^{2}$ is the set of all regions that are the union of two separated convex regions: \\ ${\mathcal D}^{2} = \{ {\bf X} \cup {\bf Y} \: | \: {\bf X,Y} \in {\mathcal C}, d({\bf X,Y}) > 0)$. \\ $\mathcal D$ will be the set of all regions that are either a single convex region or the union of two separated convex regions; thus ${\mathcal D} = {\mathcal C} \cup {\mathcal D}^{2}$. \\ ${\mathcal S}$ will be the set of all bounded, star-shaped regions. We will use $\mu: \mbox{$\mathcal R$} \times \mbox{$\mathcal R$} \mapsto \mathbb{R}$ to represent a generic metric over $\mbox{$\mathcal R$}$; that is $\mu({\bf A,B})$ is some measure of the difference between regions $\bf A$ and $\bf B$ that satisfies the standard axioms for metrics. We will use upper-case italic letters for specific metrics, as defined in section~\ref{secMetrics}; for instance, the Hausdorff distance is denoted $H({\bf P},{\bf Q})$. Otherwise, the font of function symbols will correspond to the type of the value returned by the function. In particular, the ball of radius $d$ relative to the metric $\mu$ centered at region $\bf P$ is denoted ${\mathcal B}_{\mu}({\bf P},d) = \{ {\bf Q} \: | \: \mu({\bf P},{\bf Q}) < d \}$ Finally $\Topo_{\mu}$ will be the topology generated by metric $\mu$ over $\mbox{$\mathcal R$}$; since a topology is a set of open sets, $\Topo_{\mu}$ is a set of sets of subsets of $\mathbb{E}^{n}$. Throughout this paper, the phrases ``$\Topo_{\alpha}$ is finer than $\Topo_{\beta}$'' or ``is coarser'', if unqualified, are to be interpreted as a non-strict relation; that is, as ``finer than or equal to'' or ``coarser than or equal to''. When a strict relation is intended, the phrases ``strictly finer/coarser'' will be used. The phrase ``$\Topo_{\alpha}$ is not finer/coarser than $\Topo_{\beta}$'' will mean ``It is not the case that $\Topo_{\alpha}$ is finer/coarser than $\Topo_{\beta}$.'' \section{Well-behaved topologies} \label{secWellBehaved} \begin{definition} Let $\mathcal U$ be a set of regions (a subset of $\mbox{$\mathcal R$}$). A {\em history} over $\mathcal U$ is a function $\phi: [0,1] \mapsto \mathcal U$. 
\end{definition} \begin{definition} A {\em morphing} over $\mathbb{E}^{n}$ is a uniformly continuous function $\psi : [0,1] \times \mathbb{E}^{n} \mapsto \mathbb{E}^{n}$ with the following properties: \begin{itemize} \item[a.] $\psi(0,\cdot)$ is the identity over $\mathbb{E}^{n}$ \item[b.] For $t \in [0,1]$, $\phi(t, \cdot$) is a homeomorphism of $\mathbb{E}^{n}$ to itself. \end{itemize} \end{definition} \begin{definition} \label{defCorrespondsMorphing} A history $\phi:\mathbb{R} \mapsto \mbox{$\mathcal R$}$ {\em corresponds to morphing $\psi$} if $\phi(t) = \psi(t,\phi(0))$. \end{definition} \begin{definition} A topology $\Topo$ over a subspace $\mathcal U$ of $\mbox{$\mathcal R$}$ {\em supports continuous morphing} if every history over $\mathcal U$ that corresponds to a morphing is continuous relative to $\Topo$. \end{definition} Intuitively, if you start with a spatial region {\bf A} and you morph it around continuously relative to the regular spatial topology, then its trajectory as a function of time is continuous in $\Topo$. This is an upper bound on the fineness of $\Topo$; the topology cannot be so fine that morphings are discontinuous. If $\Topo$ supports continuous morphing and $\Topo'$ is coarser than $\Topo$, then $\Topo'$ also supports continuous morphing. The following is an example of a metric that does not support continuous morphing. Let $\mathcal U$ be the set of regions in $\mathbb{E}^{2}$ with a finite perimeter. Define the metric over $\mathcal U$, $\mu({\bf X,Y}) = H({\bf X,Y}) + |\mbox{perimeter}({\bf X})-\mbox{perimeter}({\bf Y})|$. Then one can easily define a morphing in which $\phi(0)$ is the unit square and $\phi(t)$ is the unit square with a saw-toothed boundary, where the teeth are at $45^{\circ}$ and the length of the teeth is $t$. Then for all $t > 0$, the perimeter of $\phi(t)$ is approximately $4 \sqrt{2}$, so the morphing is not continuous relative to $\Topo_{\mu}$. \begin{definition} \label{defSeparates} A topology $\Topo$ over $\mbox{$\mathcal R$}$ {\em satisfies the region separation condition\/} if the following hold for any regions ${\bf P,Z} \in \mbox{$\mathcal R$}$: \begin{itemize} \item[i.] If ${\bf P} \cap {\bf Z} = \emptyset$, then in $\Topo$ there exists a neighborhood $\mathcal U$ of $\bf P$ such that no superset of ${\bf Z}$ is in $\mathcal U$. \item[ii.] If ${\bf P} \supset {\bf Z}$, then in $\Topo$ there exists a neighborhood $\mathcal U$ of $\bf P$ such that no region that is disjoint from ${\bf Z}$ is in $\mathcal U$. \end{itemize} \end{definition} \begin{lemma} \label{lemSuddenEmergence} Let $\Topo$ be a topology over $\mbox{$\mathcal R$}$ that satisfies the region separation condition. Let $\phi:\mathbb{R} \mapsto \mbox{$\mathcal R$}$ be a history that is continuous under $\Topo$. Let ${\bf Z} \in \mbox{$\mathcal R$}$ be any open region. Then there exists a neighborhood $U$ of 0 such that \begin{itemize} \item if ${\bf Z} \cap \phi(0) = \emptyset$ then there is no $t \in U$ such that ${\bf Z} \subset \phi(t)$; \item if ${\bf Z} \subset \phi(0)$ then there is no $t \in U$ such that ${\bf Z} \cap \phi(t) = \emptyset$. \end{itemize} \end{lemma} {\bf Proof:} Taking ${\bf P}=\phi(0)$, construct the set $\mathcal U$ to satisfy the conclusion of definition~\ref{defSeparates}. Take $U = \phi^{-1}({\mathcal U})$. By continuity, $U$ is open and by construction it satisfies the conditions of the theorem. \begin{definition} A topology is {\em well-behaved} if it supports continuous morphing and satisfies the region separation condition. 
\end{definition} It is immediate from the definitions that if a topology supports continuous morphing, then every coarser topology does; and that if a topology satisfies the region separation condition, then every finer topology does. \section{Metrics on the space of regions} \label{secMetrics} In this paper, we primarily consider five metrics, or families of metrics, over the space of regions: a homeomorphism-based metric $M({\bf A,B})$; the Hausdorff metric $H({\bf A,B})$; the dual-Hausdorff metric $H^{d}({\bf A,B})$; the symmetric-difference metric $V({\bf A,B})$; and the family of Wasserstein metrics $W^{\psi}({\bf A,B})$. Some other metrics will be discussed in passing at various points. \subsection{Homeomorphism-based metric} \label{secHomeoMetrics} There are a number of different ways of defining the difference between two regions {\bf A} and {\bf B} in terms of homeomorphisms between them or between their boundaries. Perhaps the oldest and the best known is the Fr\'{e}chet distance. In this paper we will use the {\em homeomorphism distance} $M({\bf A,B})$, defined as follows: Let {\bf A} and {\bf B} be two regions in $\mathbb{E}^{n}$. Let $\Gamma({\bf A},{\bf B})$ be the set of all homeomorphisms $\gamma$ of $\mathbb{E}^{n}$ to itself such that $\gamma({\bf A})={\bf B}$. Define the metric \[ M({\bf A,B}) = \inf_{\gamma \in \Gamma({\bf A,B})} \sup_{{\bf x}\in \mathbb{E}^{n}} d({\bf x},\gamma({\bf x})) \] (If $\Gamma({\bf A,B})=\emptyset$ --- that is, there are no homeomorphisms of the space that map {\bf A} to {\bf B} --- then $M({\bf A,B}) = \infty$.) In other words: for any $\gamma$ that is a homeomorphism from $\mathbb{E}^{n}$ to itself and that maps $\bf A$ to $\bf B$, we define a cost which is the maximum distance from $\bf x$ to $\gamma({\bf x})$ for any $\bf x$ in $\mathbb{E}^{n}$. We then define the metric $M({\bf A,B})$ as the smallest cost attained by any such $\gamma$ (more precisely, the infimum). \begin{theorem} \label{thmMContinuous} The topology $\Topo_{M}$ supports continuous morphings over $\mbox{$\mathcal R$}$. \end{theorem} {\bf Proof:} Immediate from the definition. A converse of theorem~\ref{thmMContinuous} would be the claim that if a history is continuous relative to $\Topo_{M}$ then it corresponds to a morphing. I suspect that this is true, but have not been able to prove it. \subsection{The Hausdorff and dual-Hausdorff metrics} The {\em one-sided Hausdorff distance\/} from region {\bf P} to {\bf Q} is the supremum over points {\bf p} in {\bf P} of the distance from {\bf p} to {\bf Q}. \[ H^{1}({\bf P,Q}) = \sup_{{\bf p} \in {\bf P}} d({\bf p,Q}) \] The {\em Hausdorff distance\/} between regions {\bf P} and {\bf Q} is the maximum of (the one-sided Hausdorff distance from {\bf P} to {\bf Q}) and (the one-sided Hausdorff distance from {\bf Q} to {\bf P}). \[ H({\bf P,Q}) = \max(H^{1}({\bf P,Q}), H^{1}({\bf Q,P})) \] The {\em dual-Hausdorff distance\/} (Davis 1995) is the maximum of (the Hausdorff distance between {\bf P} and {\bf Q}) and (the Hausdorff distance between the complements of {\bf P} and {\bf Q}). \[ H^{d}({\bf P,Q}) = \max(H({\bf P,Q}), H({\bf Q}^{c},{\bf P}^{c})) \] This metric is not discussed in (Deza and Deza 2006), but the proof that it is a metric over the space of regular regions is immediate. It is immediate from the definitions that for all regions, $H({\bf P,Q}) \leq H^{d}({\bf P,Q}) \leq M({\bf P,Q})$, and therefore $\Topo_{M}$ is finer than $\Topo_{H^{d}}$, which is finer than $\Topo_{H}$.
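As a concrete illustration (again using \texttt{shapely}; its \texttt{hausdorff\_distance} is a discrete, vertex-based computation, which suffices for the rectangles used here), the sketch below revisits the example of figure~\ref{figAreaVsHausdorff}: the symmetric-difference distance $V({\bf Q}_i,{\bf P})$ goes to zero as $i$ grows, while the Hausdorff distance $H({\bf Q}_i,{\bf P})$ stays above $1$.
\begin{verbatim}
from shapely.geometry import box
from shapely.ops import unary_union

P = box(0, 0, 1, 1)
for i in (1, 10, 100):
    Q_i = unary_union([box(0, 0, 1, 1), box(2, 0, 2 + 1.0/i, 1)])
    V = P.symmetric_difference(Q_i).area   # area of symmetric difference: 1/i
    H = P.hausdorff_distance(Q_i)          # Hausdorff distance: 1 + 1/i, stays > 1
    print(i, V, H)
\end{verbatim}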
\begin{theorem} \label{thmHausdorffContMorph} Topologies $\Topo_{H^{d}}$ and $\Topo_{H}$ support continuous morphing over $\mbox{$\mathcal R$}$. \end{theorem} {\bf Proof:} Immediate from theorem~\ref{thmMContinuous} together with the above. \begin{theorem} \label{thmHausdorffSeparation} The Hausdorff distance has the region separation property over $\mbox{$\mathcal R$}$. \end{theorem} {\bf Proof:} i. Let {\bf P, Z} be regions such that ${\bf P}\cap {\bf Z} =\emptyset$. Let ${\bf Y} \supset {\bf Z}$. Let {\bf z} be a point in {\bf Z}. Then $H({\bf Y,P}) \geq d({\bf z,P})$. So for $\epsilon < d({\bf z,P})$, the open ball ${\mathcal B}_{H}({\bf P},\epsilon)$ excludes all {\bf Z} and any superset of {\bf Z}. ii. Let {\bf P, Z} be regions such that ${\bf Z} \subset {\bf P}$. Let {\bf Y} be a region such that ${\bf Z}$ and {\bf Y} are disjoint. Let {\bf z} be a point in {\bf Z}. Then $H({\bf Y,P}) \geq \mbox{radius}({\bf P,z})$. So for $\epsilon < \mbox{radius}({\bf Z,z})$, the open ball ${\mathcal B}_{H}({\bf P},\epsilon)$ excludes all {\bf Y} and any subset of {\bf Y}. \begin{corollary} \label{corSeparation} The metrics $M({\bf P,Q})$ and $H^{d}({\bf P,Q})$ have the region separation property over $\mbox{$\mathcal R$}$. \end{corollary} {\bf Proof:} It is immediate that, if a topology has the property, then any finer topology also has the property. \subsection{The symmetric-difference metric} \label{secSymDiff} Define the function {\bf S(P,Q)}: $\mbox{$\mathcal R$} \times \mbox{$\mathcal R$} \mapsto \mbox{$\mathcal R$}$ as the symmetric difference of regions {\bf P} and {\bf Q}: \\ ${\bf S(P,Q)} = ({\bf P} \setminus {\bf Q}) \cup ({\bf Q} \setminus {\bf P})$ The {\em symmetric-difference} metric is the $n$-dimensional measure of the symmetric difference: \\ $V({\bf P,Q}) = v({\bf S(P,Q}))$ \begin{theorem} \label{thmTopoDHFinerVolume} Over the space $\mbox{$\mathcal R$}$, $\Topo_{H^{d}}$ is finer than $\Topo_{V}$. \end{theorem} {\bf Proof:} See (Davis 2001), corollary 8.2. \begin{theorem} \label{thmVolumeSupportsMorphing} $\Topo_{V}$ supports continuous morphings over $\mbox{$\mathcal R$}$. \end{theorem} {\bf Proof:} Immediate from theorem~\ref{thmHausdorffContMorph} and lemma~\ref{thmTopoDHFinerVolume}. \begin{theorem} \label{thmVolumeSeparation} $\Topo_{V}$ has the region separation property over $\mbox{$\mathcal R$}$. \end{theorem} {\bf Proof:} \\ i. Let {\bf P, Z} be regions such that ${\bf P}\cap {\bf Z} =\emptyset$. Let ${\bf Y} \supset {\bf Z}$. Then ${\bf Z} \subset S({\bf P,Y})$, $V({\bf P,Y}) \geq v({\bf Z})$. So for $\epsilon < v({\bf Z})$, the open ball ${\mathcal B}({\bf P},\epsilon)$ excludes {\bf Z} and any superset of {\bf Z}. ii. Let {\bf P,Z} be regions such that ${\bf Z} \subset {\bf P}$. Let {\bf Y} be a region such that ${\bf Z}$ and {\bf Y} are disjoint. Then again ${\bf Z} \subset S({\bf P,Y})$, So for $\epsilon < v({\bf Z})$, the open ball ${\mathcal B}({\bf P},\epsilon)$ excludes all sets disjoint from {\bf Z}. \subsection{Wasserstein metrics} \label{secWasserstein} The family of Wasserstein distances $W^{\psi}({\bf P,Q})$ are generalizations of the ``earth-movers'' metric often used in comparing probability distributions. 
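Before giving the formal definitions, here is a minimal one-dimensional illustration (Python, assuming \texttt{scipy}; the function \texttt{wasserstein\_distance} computes the case $\psi(x)=x$ for empirical distributions): for the uniform distributions on two unit intervals a distance $2$ apart, every bit of ``dirt'' must be moved a distance of about $2$, and that is the value returned.
\begin{verbatim}
import numpy as np
from scipy.stats import wasserstein_distance

# Uniform distributions on (0,1) and (2,3), represented by dense samples.
x = np.linspace(0.0, 1.0, 2001)
y = np.linspace(2.0, 3.0, 2001)
print(wasserstein_distance(x, y))   # ~2.0: all of the mass is shifted by 2
\end{verbatim}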
{\bf Definition} A function $\psi:\mathbb{R}^{\geq 0} \mapsto \mathbb{R}^{\geq 0}$ is a {\it Mulholland function} if it is continuous and monotonically increasing; $\psi(0) = 0$; $\lim_{x \mbox{$\rightarrow$} \infty} \psi(x) = \infty$; and $\psi$ satisfies the Mulholland (1949) inequality \[ \psi^{-1}(\sum_{i=1}^{n} \psi(x_{i}+y_{i})) \leq \psi^{-1}(\sum_{i=1}^{n} \psi(x_{i}))+\psi^{-1}(\sum_{i=1}^{n} \psi(y_{i})) \] The Minkowski inequality is the special case where $\psi(x)=x^{p}$. The Wasserstein distance corresponding to a Mulholland function $\psi$ is a metric over probability distributions. (It is usually defined using the particular function $\psi(x) = x^{p}$. However, since the only property of $x^{p}$ that is used in proving that the Wasserstein distance is a metric is that it satisfies the Mulholland inequality, one can generalize it to use any Mulholland function (Clement and Desch, 2008).) \begin{definition} \label{defWassersteinOverDist} Let $\psi$ be a Mulholland function. Let $\theta({\bf x})$ and $\zeta({\bf x})$ be probability densities over $\mathbb{E}^{n}$. Let $\gamma$ be a function from $\mathbb{E}^{n}$ to $\mathbb{E}^{n}$ such that, if random variable $X$ has density $\theta({\bf x})$, then $\gamma(X)$ has density $\zeta({\bf x})$. Define the integral \[ I(\gamma) = \int_{{\bf x} \in \mathbb{E}^{n}} \theta({\bf x}) \cdot \psi(d({\bf x}, \gamma({\bf x}))) \: d{\bf x} \] Let $\Gamma(\theta,\zeta)$ be the set of all such $\gamma$. Then the {\em Wasserstein distance between $\theta$ and $\zeta$ corresponding to $\psi$} is defined as follows: \[ W^{\psi}(\theta,\zeta) = \inf_{\gamma \in \Gamma(\theta,\zeta)} \psi^{-1}(I(\gamma)) \] \end{definition} We adapt the above definition to be a distance between regions ${\bf P}$ and ${\bf Q}$ by taking $\theta$ and $\zeta$ to be the uniform distributions over ${\bf P}$ and ${\bf Q}$. \begin{definition} For any region {\bf P}, $U_{\bf P}$ represents the uniform distribution over {\bf P}: \\ $U_{\bf P}({\bf x}) = 1/v({\bf P})$ for ${\bf x} \in {\bf P}$. \\ $U_{\bf P}({\bf x}) = 0$ for ${\bf x} \not \in {\bf P}$. \\ \end{definition} \begin{definition} \label{defWassersteinReg1} Let {\bf P} and {\bf Q} be regions in $\mbox{$\mathcal R$}$. Let $\psi$ be a Mulholland function. Define $W^{\psi}({\bf P,Q})$ to be $W^{\psi}(U_{\bf P},U_{\bf Q})$. \end{definition} We can reformulate this definition as follows: \begin{definition} Let $\bf P$ and $\bf Q$ be regions. Let $\gamma$ be a function from {\bf P} to {\bf Q}. We say that $\gamma$ is {\em uniform} if, for all ${\bf X} \subset {\bf P}$, $v(\gamma({\bf X})) = v({\bf X})\cdot v({\bf Q})/v({\bf P})$. That is, $\gamma$ preserves relative measure. Define the following two functions of $\gamma$ and ${\bf P}$: \[ I^{\psi}(\gamma,{\bf P}) = \frac{1}{v({\bf P})} \cdot \int_{{\bf x} \in {\bf P}} \psi(d({\bf x},\gamma({\bf x}))) \: \mbox{d}{\bf x} \] \[ C^{\psi}(\gamma,{\bf P}) = \psi^{-1}(I^{\psi}(\gamma,{\bf P})) \] Let $\Gamma({\bf P,Q})$ be the set of all uniform functions $\gamma$ from {\bf P} to {\bf Q}. Then $W^{\psi}({\bf P,Q}) = \inf_{\gamma \in \Gamma({\bf P,Q})} C^{\psi}(\gamma,{\bf P})$. \end{definition} In the case of the identity function $\psi(x)=x$, this can be given an intuitive motivation as follows: Suppose that you have dirt uniformly spread over $\bf P$ and you want to move it so that it is uniformly spread out over $\bf Q$. To move a small piece of dirt of mass $m$ from {\bf x} to {\bf y} will cost $m \cdot d({\bf x,y})$.
Then if you follow $\gamma$ as a guide for how to move the dirt, the total cost will be $C^{\psi}(\gamma,{\bf P})$. Thus the cost of the cheapest way of moving the dirt is $W^{\psi}({\bf P,Q})$. Hence this is known as the ``earth-mover's'' metric. \begin{lemma} \label{lemProbDist} Let {\bf P} be a bounded region; let $W^{\psi}$ be a Wasserstein metric; let $\zeta$ and $\theta$ be probability distributions that are zero outside {\bf P}. Let $p = \mbox{diameter}({\bf P})$. Let $m= \int_{{\bf x} \in \bf P} \max(0,\zeta({\bf x})-\theta({\bf x})) \: d{\bf x}$. Then $W^{\psi}(\zeta,\theta) \leq \psi^{-1}(m \cdot \psi(p))$. \end{lemma} {\bf Informal proof:} The amount of ``dirt'' that has to be moved in turning $\zeta$ into $\theta$ is \\ $ \int_{{\bf x} \in \bf P} \max(0,\zeta({\bf x})-\theta({\bf x})) \: d{\bf x} = m$. The distance that any piece of dirt can be moved is at most $p$. So there is a $\gamma$ that turns $\zeta$ into $\theta$ and moves only this much dirt, for which $I(\gamma) \leq m \cdot \psi(p)$. Then $W^{\psi}(\zeta,\theta) \leq \psi^{-1}(I(\gamma)) \leq \psi^{-1}(m \cdot \psi(p))$. \begin{lemma} \label{lemTopoDHFinerWasserstein2} Let ${\bf P,Q}$ be regions. Let $p=\mbox{diameter}({\bf P})$, $h=H({\bf P,Q})$, and $a=V({\bf P,Q})$. Assume that $a < v({\bf P})/2$ and that $h < p/2$. Let $\psi$ be a Mulholland function. Then $W^{\psi}({\bf P,Q}) \leq \psi^{-1}(4a\psi(p)/v({\bf P}))$. \end{lemma} {\bf Proof:} Let $\zeta = U_{\bf P}$ and $\theta = U_{\bf Q}$. Let ${\bf R} = {\bf P} \cup {\bf Q}$; thus $\zeta$ and $\theta$ are zero outside $\bf R$. Note that $v({\bf P})+a \geq v({\bf Q}) \geq v({\bf P})-a \geq v({\bf P})/2$, \\ so $|1/v({\bf P}) -1/v({\bf Q})| = |v({\bf Q})-v({\bf P})|/(v({\bf P})v({\bf Q})) \leq 2a/v^{2}({\bf P})$. \[ \int_{{\bf x} \in \bf R} \max(\zeta({\bf x})-\theta({\bf x}),0) \: d{\bf x} = \int_{{\bf x} \in {\bf P} \cap {\bf Q}} \max(\zeta({\bf x})-\theta({\bf x}),0) \: d{\bf x} + \int_{{\bf x} \in {\bf S}({\bf P},{\bf Q})} \max(\zeta({\bf x})-\theta({\bf x}),0) \: d{\bf x} \] But in the first integral in the sum, the volume of the region of integration is at most $v({\bf P})$ and the integrand is at most $|1/v({\bf Q})-1/v({\bf P})|$, so the value of the integral is at most $2a/v({\bf P})$. In the second integral, the region of integration is ${\bf S}({\bf P,Q})$, which has volume $a$, and the integrand is at most $1/\min(v({\bf P}),v({\bf Q})) \leq 2/v({\bf P})$, so the value of the integral is at most $2a/v({\bf P})$. Thus \[ \int_{{\bf x} \in \bf R} \max(\zeta({\bf x})-\theta({\bf x}),0) \: d{\bf x} \leq 4a/v({\bf P}) \] Using lemma~\ref{lemProbDist} it follows that $W^{\psi}({\bf P,Q}) \leq \psi^{-1}(4a\psi(p)/v({\bf P}))$. \begin{theorem} \label{thmTopoDHFinerWasserstein} For any Mulholland function $\psi$, the topology generated by the Wasserstein distance, $\Topo_{W^{\psi}}$, is coarser over $\mbox{$\mathcal R$}$ than the topology generated by the dual-Hausdorff distance, $\Topo_{H^{d}}$. \end{theorem} {\bf Proof:} Choose region {\bf P} and $\epsilon > 0$. Let $p=\mbox{diameter}({\bf P})$. Let $b = \psi(\epsilon) v({\bf P})/4\psi(p)$. Using theorem~\ref{thmTopoDHFinerVolume}, choose $\delta_{1}$ such that, for all regions $\bf Q$, if $H^{d}({\bf P,Q}) < \delta_{1}$ then $V({\bf P,Q}) < b$. Let $\delta = \min(\delta_{1},p/2)$. Then for any region {\bf Q} with $H^{d}({\bf P,Q}) < \delta$ we have $H({\bf P,Q}) < p/2$ and $V({\bf P,Q}) < b$, so by lemma~\ref{lemTopoDHFinerWasserstein2} it follows that $W^{\psi}({\bf P,Q}) \leq \psi^{-1}(4b\psi(p)/v({\bf P})) = \epsilon$. \begin{corollary} \label{corWassersteinMorphing} For any Mulholland function $\psi$, the Wasserstein distance $W^{\psi}$ supports continuous morphing over $\mbox{$\mathcal R$}$.
\end{corollary} {\bf Proof:} Immediate from theorems~\ref{thmTopoDHFinerWasserstein} and \ref{thmHausdorffContMorph}. \begin{theorem} \label{thmWassersteinSep} For any Mulholland function $\psi$, the Wasserstein distance $W^{\psi}$ satisfies the region separation condition over $\mbox{$\mathcal R$}$. \end{theorem} {\bf Proof:} {\bf Part 1:} Let ${\bf P, Z}$ be regions such that ${\bf P} \cap {\bf Z} = \emptyset$. Since {\bf Z} is a nonempty open set disjoint from {\bf P}, and the boundary @{\bf P} has empty interior, {\bf Z} contains a point outside $\bar{\bf P}$, and hence an open ball whose distance from {\bf P} is positive; any superset of {\bf Z} is a superset of this ball, so it suffices to prove the claim assuming $d({\bf P,Z}) > 0$. Let $c=d({\bf P,Z})/2$. Let ${\bf Q}=\mbox{dilate}({\bf P},c)$. Let {\bf Y} be any superset of {\bf Z}. The part of {\bf Y} that is more than $c$ from $\bf P$ includes at least $\bf Z$; the part of {\bf Y} that is less than $c$ from $\bf P$ is a subset of $\bf Q$. So the fraction of {\bf Y} that is more than $c$ from {\bf P} is at least $v({\bf Z})/(v({\bf Z})+v({\bf Q}))$. So, for any uniform function $\gamma$ from {\bf P} to {\bf Y}, $I^{\psi}(\gamma,{\bf P}) \geq (v({\bf Z})/(v({\bf Z})+v({\bf Q}))) \cdot \psi(c)$, so there is a positive lower bound on $W^{\psi}({\bf P,Y})$. The proof of Part 2 is analogous. \section{The topology of the space of bounded convex open regions} \label{secConvex} We show that there is a unique well-behaved topology over the space of convex regions. Since all of the metric topologies we consider are well-behaved over that space, it follows that they all generate the same topology. Shephard and Webster (1995) demonstrated that the Hausdorff metric and the symmetric-difference metric generate identical topologies over the space of convex regions, and that two further metrics, which they named the ``difference body metric'' and the ``homogeneous symmetric difference'', likewise generate the same topology. The latter two results are subsumed in theorem~\ref{thmConvex} below, though we do not prove that here. Groemer (2000) gives strong bounds between the relative size of the Hausdorff distance and the symmetric-difference distance between two convex regions. \begin{lemma} \label{lemIncreasing} Let $\bf A$ be a bounded, open, convex region in $\mathbb{E}^{n}$. Let ${\bf p} \in \bf A$, and let ${\bf q} \in @{\bf A}$. For $t \geq 0$, let ${\bf w}(t) = {\bf q}+t({\bf q}-{\bf p})$. Then, for $t \geq 0$, the function $f(t) = d({\bf w}(t), @{\bf A})$ is an increasing function of $t$. \end{lemma} {\bf Proof:} (Figure~\ref{figIncreasing}). Let $0 < t_{1} < t_{2}$. Let $\bf b$ be the point on $@{\bf A}$ closest to ${\bf w}(t_{2})$. Let $L$ be the line from {\bf p} to $\bf b$. Since $\bf A$ is convex, the portion of $L$ between $\bf b$ and ${\bf p}$ is entirely in $\bf A$. Let $M$ be the line through ${\bf w}(t_{1})$ parallel to the line ${\bf bw}(t_{2})$ and let $\bf c$ be the intersection of $L$ and $M$. Then the triangle $\bigtriangleup {\bf q}, {\bf w}(t_{1}),{\bf c}$ is similar to the triangle $\bigtriangleup {\bf q}, {\bf w}(t_{2}),{\bf b}$ and lies inside it. Hence \[ f(t_{1}) = d({\bf w}(t_{1}),{\bf A}) \leq d({\bf w}(t_{1}),{\bf c}) < d({\bf w}(t_{2}),{\bf b}) = f(t_{2}) \] \begin{figure} \begin{center} \includegraphics[width=4in]{Increasing.png} \end{center} \caption{Proof of lemma~\ref{lemIncreasing}} \label{figIncreasing} \end{figure} \begin{lemma} \label{lemBall} Let $\bf P$ and $\bf Q$ be bounded, convex, open sets, and let {\bf o} be a point in {\bf P}. Let $h=H({\bf P,Q})$ and $r=\mbox{radius}({\bf P,o})$. If $h < r$ then ${\bf B}({\bf o},r-h) \subset {\bf Q}$. \end{lemma} {\bf Proof:} For convenience, take $\vec{0}={\bf o}$. Let $\vec{x}$ be a point in ${\bf B}(\vec{0},r) \setminus {\bf Q}$. (If there is no such point, the conclusion is trivial.)
Then there is a hyperplane $\bf X$ through $\vec{x}$ such that ${\bf Q}$ lies on one side of $\bf X$. Let ${\bf C}$ be the intersection of {\bf X} with ${\bf B}(\vec{0},r)$. ($\bf C$ is an $n-1$-dimensional solid circular disk). Let $\vec{c}$ be the center of ${\bf C}$; thus $\vec{c}$ is the closest point to $\vec{0}$ on $\bf C$, so $|\vec{c}| \leq |\vec{x}|$. {\bf Q} must lie in the side of {\bf X} that contains $\vec{0}$; if it lies on the far side of {\bf X}, then its distance from the point in ${\bf B}(\vec{0},r)$ opposite $\vec{c}$ would be greater than $r$, which is impossible. Let $\vec{y} = r \cdot \vec{c}/|\vec{c}|$. Then $\vec{c}$ is the closest point on ${\bf B}(\vec{0},r)$ to $\vec{y}$. In particular $d(\vec{y},\vec{c}) \leq d(\vec{y},{\bf Q}) \leq h$. But $d(\vec{y},\vec{c}) = r-|\vec{c}| \geq r-|\vec{x}|$ so $|\vec{x}| \geq r-h$, so $\vec{x} \not \in {\bf B}(\vec{0},r-h)$. (Figure~\ref{figLemBall}) $\QED$ \begin{figure} \begin{center} \includegraphics[width=4in]{LemBall.png} \end{center} \caption{Lemma~\ref{lemBall}} \label{figLemBall} \end{figure} \begin{definition} \label{defStandardMorphing} Let ${\bf P,Q,W}$ be open convex bounded regions such that ${\bf P} \cap {\bf Q} \neq \emptyset$, $\bar{\bf P} \subset {\bf W}$ and $\bar{\bf Q} \subset {\bf W}$. That is, {\bf P} and {\bf Q} overlap, and {\bf W} contains them both, with some separation between ${\bf P} \cup {\bf Q}$ and the outside of {\bf W} (figure~\ref{figStandardMorphing}). Let {\bf o} be a point in ${\bf P} \cap {\bf Q}$. For convenience, let $\vec{0}=\bf o$ and $\vec{x} = {\bf x}-{\bf o}$. For any unit vector $\hat{v}$, let ${\bf R}(\hat{v})$ be the ray $\{ t\hat{v} \: | \: t \in (0, \infty) \}$. Let $\vec{p}(\hat{v})$, $\vec{q}(\hat{v})$, $\vec{w}(\hat{v})$ be the intersections of ${\bf R}(\hat{v})$ with $@{\bf P}$, $@\bf Q$, and $@{\bf W}$ respectively. Since {\bf P}, {\bf Q} and {\bf W} are convex, it is immediate that $\vec{p}(\hat{v})$ and $\vec{q}(\hat{v})$ and $\vec{w}(\hat{x})$ are uniquely defined (in any direction $\hat{v}$ there is only one such intersection for each) and are continuous functions of $\hat{v}$. The {\em standard morphing of {\bf P} into {\bf Q} within {\bf W} centered at {\bf o}, denoted $\Gamma_{\bf P,Q,W,o} : [0,1] \times \mathbb{E}^{n} \mapsto \mathbb{E}^{n}$} is defined as the following function: \begin{quote} For all $t \in [0,1]$, $\Gamma_{\bf P,Q,W,o}(t,\vec{0}) =\vec{0}$. For $\vec{x} \neq \vec{0}$, let $\hat{x} = \vec{x}/|\vec{x}|$. To simplify the expression, fix a direction of $\hat{x}$, and let $x = |\vec{x}|$. $p=|\vec{p}(\hat{x})|$, $q=|\vec{q}(\hat{x})|$, and $w=|\vec{w}(\hat{x})|$. Then, for any $\vec{x}$ in the ray ${\bf R}(\hat{x})$, \begin{itemize} \item If $x \leq p$, then $\Gamma(t,\vec{x}) = ((1-t)x + t(xq/p)) \cdot \hat{x}$. \item If $p < x < w$, then $\Gamma(t,\vec{x}) = ((1-t)x+t(q+(w-q)(x-p)/(w-p))) \cdot \hat{x}$. \item If $w \leq x$, then $\Gamma(t,\vec{x}) = \vec{x}$. \end{itemize} \end{quote} \end{definition} Thus, each ray ${\bf R}(\hat{x})$ is divided into three parts: the part inside {\bf P}, the part between part {\bf P} and {\bf W}, and the part outside {\bf W}. $\Gamma$ is a transformation, piecewise bilinear in both $t$ and $x$, which transforms the first part into the part of the ray inside $\bf Q$, the second part into the part of the ray between $\bf Q$ and $\bf W$, and is the identity outside $\bf W$. 
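When {\bf P}, {\bf Q}, and {\bf W} are all disks containing {\bf o}, the radial functions $\vec{p}(\hat{v})$, $\vec{q}(\hat{v})$, and $\vec{w}(\hat{v})$ have closed forms, so the definition is easy to exercise numerically. The following sketch (purely illustrative; the restriction to 2-D disks is an assumption made for simplicity, and the function names are ours, not part of the formal development) implements $\Gamma_{\bf P,Q,W,o}$ exactly as defined above, with {\bf o} at the origin.
\begin{verbatim}
import numpy as np

def radial_extent(center, radius, direction):
    # Distance from the origin o to the boundary of the disk B(center, radius)
    # along the unit vector `direction`; assumes o lies inside the disk.
    b = float(np.dot(direction, center))
    disc = radius ** 2 - float(np.dot(center, center)) + b ** 2
    return b + np.sqrt(disc)

def standard_morphing(t, x, P, Q, W):
    # Gamma_{P,Q,W,o}(t, x) for 2-D disks P, Q, W given as (center, radius)
    # pairs, with o at the origin; o must lie in P, Q, and W.
    x = np.asarray(x, dtype=float)
    r = np.linalg.norm(x)
    if r == 0.0:
        return x
    xhat = x / r
    p = radial_extent(*P, direction=xhat)
    q = radial_extent(*Q, direction=xhat)
    w = radial_extent(*W, direction=xhat)
    if r <= p:                       # inside P: first piece of the bilinear map
        s = (1 - t) * r + t * (r * q / p)
    elif r < w:                      # between P and W: second piece
        s = (1 - t) * r + t * (q + (w - q) * (r - p) / (w - p))
    else:                            # outside W: identity
        s = r
    return s * xhat

# Example: morph the unit disk P into an overlapping disk Q inside a large
# ambient disk W; at t = 1 a boundary point of P lands on the boundary of Q.
P = (np.array([0.0, 0.0]), 1.0)
Q = (np.array([0.3, 0.0]), 1.0)
W = (np.array([0.0, 0.0]), 5.0)
print(standard_morphing(1.0, [1.0, 0.0], P, Q, W))  # approximately (1.3, 0)
print(standard_morphing(0.5, [0.0, 2.0], P, Q, W))  # a point between P and W
\end{verbatim}
Applying the map to a sample of points on $@{\bf P}$ for intermediate values of $t$ traces out the intermediate regions of the morphing.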
\begin{figure} \begin{center} \includegraphics[width=4in]{StandardMorphing.png} \end{center} \caption{The standard morphing} \label{figStandardMorphing} \end{figure}
\begin{lemma} \label{lemExistsContMorph} Let $\bf P, Q, W, o$ be as in definition~\ref{defStandardMorphing}. Let $h = H({\bf P,Q})$, $r$ = radius({\bf P,o}), and $a=\mbox{diameter}({\bf P})$. If $r > h$, then the standard morphing $\Gamma_{\bf P,Q,W,o}$ has the following properties: \begin{itemize} \item[a.] $\Gamma$ is a continuous morphing. \item[b.] for all ${\bf x} \in \mathbb{E}^{n}$, $\Gamma(0,{\bf x})={\bf x}$. \item[c.] for all $t \in [0,1]$ and ${\bf x} \not \in {\bf W}$, $\Gamma(t,{\bf x})={\bf x}$. \item[d.] $\Gamma(1,{\bf P})={\bf Q}$; \item[e.] for all $t \in [0,1]$, $H(\Gamma(t,{\bf P}),{\bf P}) \leq h$; and \item[f.] for all ${\bf x} \in \mathbb{E}^{n}$ and $t \in [0,1]$, $d(\Gamma(t,{\bf x}),{\bf x}) \leq d({\bf x,o}) \cdot h/(r-h)$. \end{itemize} \end{lemma}
{\bf Proof:} Properties (a), (b), and (c) are immediate by construction. Let $\vec{0}=\bf o$; $\vec{x} = {\bf x}-{\bf o}$ and define $\hat{x}$, $\vec{p}(\hat{x})$, and $\vec{q}(\hat{x})$ as in definition~\ref{defStandardMorphing}.
For (d): for any point $\vec{p}(\hat{v}) \in @{\bf P}$, $\Gamma(0,\vec{p}(\hat{v})) = \vec{p}(\hat{v})$ and $\Gamma(1,\vec{p}(\hat{v})) = \vec{q}(\hat{v})$. Since {\bf P} and {\bf Q} are convex, it follows that $\Gamma(1,@{\bf P})=@{\bf Q}$ and therefore $\Gamma(1,{\bf P})={\bf Q}$.
Condition (e) of the lemma asserts that, for all $t$, $H({\bf P},\Gamma(t,{\bf P})) \leq H({\bf P},{\bf Q})$; that is, for all $\vec{x} \in \Gamma(t,{\bf P})$, $d(\vec{x},{\bf P}) \leq h$ and for all $\vec{x} \in {\bf P}$, $d(\vec{x},\Gamma(t,{\bf P})) \leq h$.
To prove this, let $\vec{x}$ be a point in $\Gamma(t,{\bf P})$, and let $\hat{x}=\vec{x}/|\vec{x}|$. Then the points $\vec{0}, \vec{x}, \vec{p}(\hat{x})$, and $\vec{q}(\hat{x})$ are collinear. If $|\vec{x}| < |\vec{p}(\hat{x})|$ then $\vec{x} \in {\bf P}$, so $d(\vec{x},{\bf P})=0$. If $|\vec{x}| \geq |\vec{p}(\hat{x})|$ then $|\vec{q}(\hat{x})| > |\vec{p}(\hat{x})|$ and $\vec{x}$ is on the line between $\vec{p}(\hat{x})$ and $\vec{q}(\hat{x})$ so, by lemma~\ref{lemIncreasing}, $d(\vec{x},{\bf P}) \leq d(\vec{q}(\hat{x}),{\bf P}) \leq H({\bf Q},{\bf P})$.
Now let $\vec{x}$ be a point in $\bf P$, and let $\hat{x}=\vec{x}/|\vec{x}|$. If $\vec{x} \in \Gamma(t,{\bf P})$ then $d(\vec{x},\Gamma(t,{\bf P}))=0$. If $\vec{x} \not \in \Gamma(t,{\bf P})$ then $\vec{x}$ must be on the line through $\vec{q}(\hat{x})$ and $\vec{p}(\hat{x})$ with $|\vec{q}(\hat{x})| < |\vec{x}| < |\vec{p}(\hat{x})|$. By lemma~\ref{lemIncreasing}, $d(\vec{x},{\bf Q}) \leq d(\vec{p}(\hat{x}),{\bf Q}) \leq H({\bf Q},{\bf P})$.
Condition (f) of the lemma asserts that for all $\vec{x} \in \mathbb{E}^{n}$ and $t \in [0,1]$, $d(\Gamma(t,\vec{x}),\vec{x}) \leq d(\vec{x},\vec{0}) \cdot h/(r-h)$. By construction, the relative displacement $d(\Gamma(t,\vec{x}),\vec{x})/|\vec{x}|$ along the ray $\{ s\hat{x} \: | \: s > 0 \}$ is largest for points with $|\vec{x}| \leq |\vec{p}(\hat{x})|$, so it suffices to prove the inequality for those points. For such $\vec{x}$, since
\[ \Gamma(t, \vec{x}) = \vec{x} \cdot \left(1 + \frac{t \cdot (|\vec{q}(\hat{x})|-|\vec{p}(\hat{x})|)}{|\vec{p}(\hat{x})|}\right) \]
we have
\[ d(\Gamma(t,\vec{x}),\vec{x}) = |\vec{x}| \cdot t \cdot \frac{\mbox{abs}(|\vec{q}(\hat{x})|-|\vec{p}(\hat{x})|)}{|\vec{p}(\hat{x})|} \]
Our goal, then, is to bound the above fraction as a function of $r$ and $h$. For convenience, since $\hat{x}$ will be fixed, we will drop the argument and just write $\vec{p}$ and $\vec{q}$. Consider first the case where $|\vec{q}| < |\vec{p}|$.
The ray ${\bf R} = \{ t \hat{x} | t \in (0, \infty) \}$ is thus divided into three parts: the segment from $\vec{0}$ to $\vec{q}$ is in both {\bf Q} and {\bf P}; the segment from $\vec{q}$ to $\vec{p}$ is in ${\bf P}$ but not $\bf Q$; and the segment past $\vec{p}$ is in neither. By lemma~\ref{lemBall}, the ball ${\bf B}(\vec{0},r-h) \subset {\bf Q}$. Construct the cone $\bf C$ with apex $\vec{q}$ that is tangent to ${\bf B}(\vec{0},r-h)$ (figure~\ref{figCone}). Since $\bf Q$ is convex, ${\bf C} \subset {\bf Q}$. Let $\bf C'$ be the reflection of {\bf C} through $\vec{q}$. Then $\bf C'$ must be disjoint from $\bf Q$. (For any point $\vec{w} \in {\bf C'}$ there are points $\vec{v}$ on the part of the ray {\bf R} past $\vec{q}$ and $\vec{u} \in {\bf C}$ such that $\vec{u},\vec{v}, \vec{w}$ are collinear in that order; since $\bf Q$ is convex, $\vec{u} \in {\bf Q}$ and $\vec{v} \not \in {\bf Q}$, it follows that $\vec{w} \not \in {\bf Q}$.)
Construct the sphere centered at $\vec{p}$ tangent to ${\bf C}'$. Let $z$ be the radius of the sphere. Since $\vec{p} \in @{\bf P}$ and the sphere is disjoint from $\bf Q$, we have $h \geq z$. Now let $\vec{a}$ be a point in $\bar{\bf B}(\vec{0},(r-h)) \cap {\bf C}$ and let $\vec{b}$ be a point in $\bar{\bf B}(\vec{p},z) \cap {\bf C}'$ such that $\vec{a}, \vec{q}, \vec{b}$ are collinear. Then the triangles $\bigtriangleup \vec{0},\vec{a},\vec{q}$ and $\bigtriangleup \vec{p},\vec{b},\vec{q}$ are similar right triangles. So $d(\vec{0},\vec{a})/d(\vec{0},\vec{q}) = (r-h)/|\vec{q}|$ and $d(\vec{p},\vec{b})/d(\vec{p},\vec{q}) = z/(|\vec{p}| - |\vec{q}|)$, and by similarity these two ratios are equal. Combining these with $z \leq h$ and rearranging, we get $(|\vec{p}|-|\vec{q}|)/|\vec{q}| \leq h/(r-h)$; since $|\vec{q}| < |\vec{p}|$, it follows that $(|\vec{p}|-|\vec{q}|)/|\vec{p}| \leq h/(r-h)$.
In the case where $|\vec{p}| < |\vec{q}|$, the analysis is exactly analogous, except that in that case you get the tighter bound $(|\vec{q}|-|\vec{p}|)/|\vec{p}| \leq h/r$. $\QED$
\begin{figure} \begin{center} \includegraphics[width=4in]{Cone.png} \caption{Proof of lemma~\ref{lemExistsContMorph}} \end{center} \label{figCone} \end{figure}
\begin{corollary} \label{corMapping} Let ${\bf P,Q}$ be convex regions such that ${\bf P} \cap {\bf Q} \neq \emptyset$, and let ${\bf o}$ be a point in ${\bf P} \cap {\bf Q}$. Let $h = H({\bf P,Q})$, $r$ = radius({\bf P,o}), and $a=\mbox{diameter}({\bf P})$. If $h < r$, then there is a homeomorphism $g$ of $\mathbb{E}^{n}$ to itself such that $g({\bf P}) = {\bf Q}$ and, for all ${\bf x} \in {\bf P}$, $d({\bf x},g({\bf x})) \leq ah/(r-h)$. \end{corollary}
{\bf Proof:} Find a convex region ${\bf W}$ such that $\bar{\bf P} \cup \bar{\bf Q} \subset {\bf W}$. Then by lemma~\ref{lemExistsContMorph} the function $\Gamma_{\bf P,Q,W,o}(1, \cdot)$ satisfies the conditions of the corollary.
It seems likely that this bound can be substantially tightened using a different morphing and in particular that the dependence on diameter({\bf P}) can be eliminated. But for the purposes of our analysis, this will suffice.
\begin{lemma} \label{lemContMorph} Let $\Topo$ be a topology over $\mbox{$\mathcal R$}$ that supports continuous morphing. Then, restricted to $\mbox{$\mathcal C$}$, $\Topo_{H}$, the topology induced by the Hausdorff metric, is at least as fine as $\Topo$. \end{lemma}
{\bf Proof} of the contrapositive: Suppose that $\Topo_{H}$ is not a refinement of $\Topo$. Then there exists a region ${\bf P} \in \mbox{$\mathcal C$}$ and a sequence of regions ${\bf Q}_{1}, {\bf Q}_{2} \ldots \in \mbox{$\mathcal C$}$ that converges to ${\bf P}$ in $\Topo_{H}$ but not in $\Topo$. Let $r = \mbox{radius}({\bf P}) > 0$.
Let $\epsilon_{i} = H({\bf Q}_{i},{\bf P})$; thus $\lim_{i \mbox{$\rightarrow$} \infty} \epsilon_{i} = 0.$ By renumbering we can assume that $\epsilon_{i} < r/2$ for all $i$. We are going to use lemma~\ref{lemExistsContMorph} to interpolate a continuous morphing $\phi$ that passes through the regions ${\bf Q}_{1}, {\bf Q}_{2}, {\bf Q}_{3} \ldots {\bf P}$ at times 1, 1/2, 1/3 \ldots 0.
Fix a center point ${\bf o} \in {\bf P}$ such that ${\bf B}({\bf o},r) \subset {\bf P}$. By lemma~\ref{lemBall}, ${\bf B}({\bf o},r/2) \subset {\bf B}({\bf o},r-H({\bf Q}_{i},{\bf P})) \subset {\bf Q}_{i}$. Let $q = 1 + \mbox{diameter}({\bf P})+\max_{i}H({\bf Q}_{i},{\bf P})$; then it is easily shown that the ball ${\bf R}={\bf B}({\bf o},q)$ contains $\bar{\bf P}$ and $\bar{\bf Q}_{i}$ for all $i$.
Define the function $f_{k} = \Gamma_{{\bf Q}_{k},{\bf Q}_{k+1}, {\bf R}, {\bf o}}$ as in definition~\ref{defStandardMorphing}. By lemma~\ref{lemExistsContMorph}, $f_{k}(t,{\bf x})$ is a continuous morphing, $f_{k}(0,\cdot)$ is the identity, and $f_{k}(1,{\bf Q}_{k}) ={\bf Q}_{k+1}$. \\ Define the function $g_{k}(t,{\bf x}) = f_{k}(k+1-k(k+1)t, {\bf x})$; thus $g_{k}(1/k,{\bf x}) = f_{k}(0,{\bf x})$ and $g_{k}(1/(k+1),{\bf x}) = f_{k}(1,{\bf x})$.
Now define the function $\phi: \mathbb{R} \times \mathbb{E}^{n} \mapsto \mathbb{E}^{n}$ as follows: \begin{itemize} \item Construct $f_{0} = \Gamma_{{\bf P},{\bf Q}_{1},{\bf R},{\bf o}}$, which satisfies lemma~\ref{lemExistsContMorph} for {\bf P} and ${\bf Q}_{1}$. For $t \geq 1$, define $\phi(t,{\bf x}) = f_{0}(1,{\bf x})$. \item For $k=1,2,3 \ldots$, for $t \in [1/(k+1),1/k)$ define $\phi(t,{\bf x}) = g_{k}(t,\phi(1/k,{\bf x}))$ \item for $t \leq 0$, $\phi(t,\cdot)$ is the identity function on $\mathbb{E}^{n}$ \end{itemize}
Note that $\phi(1,{\bf P}) = f_{0}(1,{\bf P}) = {\bf Q}_{1}$. \\ $\phi(1/2,{\bf P}) = g_{1}(1/2,\phi(1,{\bf P})) = f_{1}(1,{\bf Q}_{1}) = {\bf Q}_{2}$. \\ $\phi(1/3,{\bf P}) = g_{2}(1/3,\phi(1/2,{\bf P})) = f_{2}(1,{\bf Q}_{2}) = {\bf Q}_{3}$. \\ and in general $\phi(1/k,{\bf P}) = {\bf Q}_{k}$.
To show that $\phi$ is continuous: Spatial continuity is immediate by construction. Temporal continuity between times of the form $1/k$ is guaranteed by the continuity of $f_{k}$. Continuity at times of the form $1/k$ follows from the fact that $\phi(t,\cdot)$ consists in expansion along rays emanating from the fixed center point ${\bf o}$ and that the limit at time $t=1/k$, both from above and below, of the amount of expansion at point $\vec{x}$ is $|\vec{q}_{k}(\hat{x})| / |\vec{p}(\hat{x})|$, in the notation of lemma~\ref{lemExistsContMorph}, where $\vec{q}_{k}(\hat{x})$ is the intersection of $@{\bf Q}_{k}$ with the ray $\{ t \cdot \hat{x} \: | \: t > 0 \}$.
The continuity of $\phi$ at time $t=0$, which is, of course, the critical point, is guaranteed by the facts that, by lemma~\ref{lemExistsContMorph}, for all $t \in [1/(k+1),1/k]$, $d(\phi(t,\vec{x}),\phi(1/(k+1),\vec{x})) \leq 2H({\bf Q}_{k},{\bf Q}_{k+1})/r$, and that $d(\phi(1/(k+1),\vec{x}),\phi(0,\vec{x})) \leq 2H({\bf Q}_{k+1},{\bf P})/r$, and, by assumption, both of these Hausdorff distances go to zero as $k \mbox{$\rightarrow$} \infty$. $\QED$
\begin{lemma} \label{lemSequence} Let $\bf P$ be a bounded open region and let ${\bf Q}_{1}, {\bf Q}_{2} \ldots $ be an infinite sequence of convex, open regions. Then one of three things is true. \begin{itemize} \item[1.] $\lim_{i \mbox{$\rightarrow$} \infty} H({\bf P,Q}_{i}) = 0$. \item[2.] There is a region $\bf Z$ such that ${\bf Z} \subset {\bf P}$ and, for infinitely many ${\bf Q}_{i}$, ${\bf Z} \cap {\bf Q}_{i} = \emptyset$. \item[3.]
There is a region $\bf Z$ such that ${\bf Z} \cap {\bf P} = \emptyset$ and, for infinitely many ${\bf Q}_{i}$, ${\bf Z} \subset {\bf Q}_{i}$. \end{itemize} \end{lemma}
{\bf Proof:} If condition 1 does not hold, then there exists $c >0$ such that either (a) $H^{1}({\bf P,Q}_{i}) > c$ for infinitely many $i$, or (b) $H^{1}({\bf Q}_{i},{\bf P}) > c$ for infinitely many $i$.
Suppose that (a) holds. For each such ${\bf Q}_{i}$, there is a point ${\bf p}_{i} \in {\bf P}$ such that $d({\bf p}_{i},{\bf Q}_{i}) > c$. These ${\bf p}_{i}$ must have a cluster point ${\bf p}$ in the closure of {\bf P}. Choose $\epsilon$ so that $0 < \epsilon < c$, and let the infinite set of indices $I =\{ i \: | \: d({\bf p}_{i},{\bf p}) < \epsilon \}$. Then for $i \in I$, $d({\bf p},{\bf Q}_{i}) > c-\epsilon$. Therefore condition 2 of the lemma is satisfied for ${\bf Z} = {\bf P} \cap {\bf B}({\bf p},c-\epsilon)$.
Suppose that conditions 1 and 2 and (a) do not hold but (b) holds. Since ${\bf P}$ is open, there exists an open ball ${\bf B}({\bf o},r) \subset {\bf P}$. Let $0 < \epsilon < r$. Since (a) does not hold, $H^{1}({\bf P},{\bf Q}_{i}) < \epsilon$ for all but finitely many $i$. Ignore the $i$ where it does not happen. By lemma~\ref{lemBall}, ${\bf B}({\bf o},r-\epsilon) \subset {\bf Q}_{i}$. Let $r'=\min(c,r-\epsilon)$. Since ${\bf P}$ is bounded, let $s$ be such that ${\bf P} \subset {\bf B}({\bf o},s)$. Since case (b) holds, for each such ${\bf Q}_{i}$ there is a point ${\bf q}_{i} \in {\bf Q}_{i}$ such that $d({\bf q}_{i},{\bf P}) > c$. Let ${\bf H}_{i}$ be the convex hull of ${\bf B}({\bf o},r') \cup {\bf B}({\bf q}_{i},r')$. Thus ${\bf H}_{i}$ is a right spherical cylinder with spherical caps whose axis is the line from {\bf o} to ${\bf q}_{i}$. Since ${\bf B}({\bf o},r') \subset {\bf Q}_{i}$, ${\bf B}({\bf q}_{i},r') \subset {\bf Q}_{i}$, and ${\bf Q}_{i}$ is convex, ${\bf H}_{i} \subset {\bf Q}_{i}$.
Let ${\bf w}_{i} = {\bf o}+\min(1,(s+c)/d({\bf q}_{i},{\bf o})) \cdot ({\bf q}_{i}-{\bf o})$; that is, ${\bf w}_{i}$ is either ${\bf q}_{i}$, if ${\bf q}_{i}$ is less than distance $s+c$ from {\bf o}, or is the point on the line from ${\bf o}$ to ${\bf q}_{i}$ at distance $s+c$ from ${\bf o}$. In either case, ${\bf Z} = {\bf B}({\bf w}_{i},r')$ is disjoint from $\bf P$ and is a subset of ${\bf H}_{i}$ and therefore of ${\bf Q}_{i}$ (figure~\ref{figLemSequence}). Since all the ${\bf w}_{i}$ lie in the bounded region $\bar{\bf B}({\bf o},s+c)$, they have a cluster point $\bf w$. Thus, for any $t < r'$, ${\bf B}({\bf w},t)$ is a subset of infinitely many ${\bf Q}_{i}$ and is disjoint from {\bf P}. $\QED$
\begin{figure} \begin{center} \includegraphics[width=4in]{LemSequence.png} \end{center} \caption{Lemma~\ref{lemSequence}: Condition 3} \label{figLemSequence} \end{figure}
\begin{lemma} \label{lemConvexAtLeastAsFine} Let $\mu$ be a metric on $\mbox{$\mathcal R$}$ such that the topology $\Topo_{\mu}$ satisfies the region separation condition. Then over the space of convex open regions, $\Topo_{\mu}$ is at least as fine as $\Topo_{H}$, the topology of the Hausdorff metric. \end{lemma}
{\bf Proof by contradiction:} Suppose that $\Topo_{\mu}$ is not at least as fine as $\Topo_{H}$. Then there exists $\epsilon > 0$ and a region $\bf P$ such that no ball ${\mathcal B}_{\mu}({\bf P},\delta)$ in the $\mu$-metric topology is contained in the Hausdorff-metric ball ${\mathcal B}_{H}({\bf P},\epsilon)$.
Thus, there is a sequence of regions ${\bf Q}_{1}, {\bf Q}_{2} \ldots$ such that $\mu({\bf Q}_{i},{\bf P}) < 1/i$ but $H({\bf Q}_{i},{\bf P}) \geq \epsilon$ for all $i$. By lemma~\ref{lemSequence} either (a) there exists a region ${\bf Z} \subset \bf P$ such that $\bf Z$ is disjoint from ${\bf Q}_{i}$ for infinitely many ${\bf Q}_{i}$; or (b) there exists a region ${\bf Z}$ disjoint from $\bf P$ such that ${\bf Z} \subset {\bf Q}_{i}$ for infinitely many ${\bf Q}_{i}$. Let ${\mathcal U} \in \Topo_{\mu}$ satisfy the conditions of definition~\ref{defSeparates}. Then by that definition, infinitely many ${\bf Q}_{i}$ are not in $\mathcal U$; but that contradicts their construction above. \begin{theorem} \label{thmConvex} Let $\Topo_{\mu}$ be a well-behaved metric topology. Over the space $\mathcal C$ of convex open regions, $\Topo_{\mu}$ is equal to $\Topo_{H}$, the topology of the Hausdorff metric. \end{theorem} {\bf Proof:} This is just the combinations of lemmas~\ref{lemContMorph} and \ref{lemConvexAtLeastAsFine}. \begin{corollary} \label{corIdentical} Over the space $\mathcal C$ of convex open regions, the metrics $M,H,H^{d},V$ and $W^{\psi}$ all generate the identical topology. \end{corollary} {\bf Proof:} Immediate from theorem~\ref{thmConvex} together with theorems~\ref{thmMContinuous}, \ref{thmHausdorffContMorph}, \ref{thmHausdorffSeparation} \ref{thmVolumeSupportsMorphing} \ref{thmVolumeSeparation} \ref{thmWassersteinSep} and corollary\ref{corWassersteinMorphing}. \section{The space of two separated convex regions} \label{secTwoConvex} We now turn to, arguably, the next simplest class of regions: those that consist either of a single convex region or are the union of two separated convex regions. As we shall see, our metrics generate many different topologies for that space. Let $\mathcal D^{2}$ be the set of all unions of two separated convex regions: ${\mathcal D}^{2} = \{ {\bf X} \cup {\bf Y} \: | \: {\bf X,Y} \in {\mathcal C}, d({\bf X,Y}) > 0 \}$. \\ Let ${\mathcal D} = {\mathcal C} \cup {\mathcal D}^{2}$. \subsection{Well-behaved topologies over $\mathcal D$} We begin by establishing some properties of any well-behaved topology over $\mathcal D$. Let {\bf A} be a region in $\mathcal D$ and let $\Topo$ be a well-behaved topology over $\mathcal D$. Theorem~\ref{thmConvex} above showed that, informally, speaking, if {\bf A} is convex, the convex regions close to {\bf A} in $\Topo$ are those that are close in the Hausdorff distance. We will show in that, if {\bf A} is ${\mathcal D}^{2}$, then small neighborhoods of {\bf A} contain no convex regions (lemma~\ref{lemNoConvexClosetoNonConvex}) and that they contain exactly the regions in ${\mathcal D}^{2}$ that are close in the Hausdorff distance (theorem~\ref{thmNonConvexHausdorff}). The interesting question is, if {\bf A} is convex, what kinds of regions in ${\mathcal D}^{2}$ lie in its neighborhoods? As we will see, there are many different possible answers, depending on the metric. \begin{lemma} \label{lemNonConvexFarFromConvex} Let $\bf P$ be a region that is not convex. Then there exists $\epsilon>0$ such that, for every convex region $\bf Q$, radius({\bf S}({\bf P,Q})) $\geq$ $\epsilon$. \end{lemma} {\bf Proof:} Since $\bf P$ is not convex, let $\bf a,b,c$ be points such that {\bf b} lies on line {\bf ac}, ${\bf a,c} \in \bf P$ and ${\bf b} \not \in \bar{\bf P}$. 
Let $\epsilon_{1}> 0$ be such that ${\bf B}({\bf a},\epsilon_{1}) \subset {\bf P}$, ${\bf B}({\bf c},\epsilon_{1}) \subset {\bf P}$, and ${\bf B}({\bf b},\epsilon_{1})$ is disjoint from $\bar{\bf P}$.
If ${\bf B}({\bf a},\epsilon_{1}/2) \subset {\bf Q}$ and ${\bf B}({\bf c},\epsilon_{1}/2) \subset {\bf Q}$, then, since {\bf Q} is convex, ${\bf B}({\bf b},\epsilon_{1}/2) \subset {\bf Q}$; this ball is disjoint from $\bar{\bf P}$, so it is a subset of ${\bf S}({\bf P,Q})$. Otherwise, one of the two balls, say ${\bf B}({\bf a},\epsilon_{1}/2)$, contains a point ${\bf x} \not \in {\bf Q}$. Since {\bf Q} is convex, some hemisphere of ${\bf B}({\bf x},\epsilon_{1}/2)$ is not in {\bf Q}; this hemisphere is a subset of ${\bf B}({\bf a},\epsilon_{1}) \subset {\bf P}$ and contains a ball of radius $\epsilon_{1}/4$, which is therefore a subset of ${\bf S}({\bf P,Q})$. In either case, the conclusion is satisfied with $\epsilon = \epsilon_{1}/4$.
\begin{lemma} \label{lemNoConvexClosetoNonConvex} Let $\mu$ be either the Hausdorff metric, the symmetric difference metric, or a Wasserstein metric. Let $\bf P$ be a non-convex region. Then there exists $\epsilon > 0$ such that ${\mathcal B}_{\mu}({\bf P},\epsilon)$ does not contain any convex regions. \end{lemma}
{\bf Proof:} Immediate from lemma~\ref{lemNonConvexFarFromConvex}.
\begin{lemma} \label{lemMatchPieces} Let {\bf P}={\bf C} $\cup$ {\bf D} and {\bf Q}={\bf E} $\cup$ {\bf F}, where {\bf C, D, E,} and {\bf F} are convex, $d({\bf C,D}) > 0$, and $d({\bf E,F}) > 0$. Let $r_{C}$ and $r_{D}$ be the radii of ${\bf C}$ and {\bf D} respectively. Let $h = H({\bf P,Q})$. If $h < \min(r_{\bf C},r_{\bf D},d({\bf C,D})/2)$, then either \begin{itemize} \item[a.] radius(${\bf C} \cap {\bf E}$) $>$ $r_{C}- h$, $H({\bf C},{\bf E}) \leq h$, ${\bf C} \cap {\bf F} = \emptyset$, radius(${\bf D} \cap {\bf F}$) $>$ $r_{D}- h$, $H({\bf D},{\bf F}) \leq h$, and ${\bf D} \cap {\bf E}=\emptyset$; or \item[b.] radius(${\bf D} \cap {\bf E}$) $>$ $r_{D}- h$, $H({\bf D},{\bf E}) \leq h$, ${\bf D} \cap {\bf F} = \emptyset$, radius(${\bf C} \cap {\bf F}$) $>$ $r_{C}- h$, $H({\bf C},{\bf F}) \leq h$, and ${\bf C} \cap {\bf E}=\emptyset$ \end{itemize} In case (a), we say that {\em {\bf E} corresponds to {\bf C} and {\bf F} to {\bf D}}. \end{lemma}
{\bf Proof:} Let {\bf c} be a point such that ${\bf B}({\bf c},r_{\bf C}) \subset {\bf C}$. Since $H^{1}({\bf P,Q}) \leq h < r_{C}$, there is a point ${\bf q} \in {\bf Q}$ such that $d({\bf c,q}) < r_{C}$, so ${\bf q} \in {\bf C}$. Since ${\bf Q} = {\bf E} \cup {\bf F}$, it follows that ${\bf q} \in {\bf E}$ or ${\bf q} \in {\bf F}$; let us say in {\bf E}.
I claim that $d({\bf D,E}) > h$. Proof by contradiction. Suppose there are points ${\bf d} \in {\bf D}$ and ${\bf e} \in {\bf E}$ such that $d({\bf d,e}) \leq h$. Let ${\bf z}$ be the point in $\bar{\bf C}$ closest to {\bf e}; then $d({\bf e,C}) = d({\bf e,z})$. Also $d({\bf C,D}) \leq d({\bf z,d}) \leq d({\bf z,e})+d({\bf e,d})$. By assumption of the lemma, $2h < d({\bf C,D})$. Combining these we have $d({\bf e,C}) > h$.
For any point {\bf x} let $\phi({\bf x}) = d({\bf x,C})-d({\bf x,D})$. As you move on a straight line from {\bf q} to {\bf e}, the value of $\phi$ changes sign (it is negative at {\bf q} and positive at {\bf e}). Let {\bf y} be a point where $\phi({\bf y})=0$, so $d({\bf y,D}) = d({\bf y,C})$. Again we have the inequality $2h < d({\bf C,D}) \leq d({\bf y,C}) + d({\bf y,D})$, so $d({\bf y,P}) = \min(d({\bf y,C}), d({\bf y,D})) > h$. Since $H^{1}({\bf E,P}) \leq h$, that means that {\bf y} is not in {\bf E}. But since {\bf E} is convex, and {\bf q} and {\bf e} are in {\bf E}, {\bf y} must be in {\bf E}. That completes the contradiction.
Since $H^{1}({\bf D,Q}) \leq h$ and $d({\bf E},{\bf D}) > h$, it must be that $H^{1}({\bf D,F}) \leq h$. It follows from lemma~\ref{lemBall} that radius(${\bf F} \cap {\bf D}$) $\geq$ $r_{\bf D}-h$.
The same arguments show that $d({\bf E,D}) > h$ and that radius(${\bf E} \cap {\bf C}$) $\geq$ $r_{\bf C}-h$. $\QED$ \begin{lemma} \label{lemH1OfConvexHull} Let {\bf P} be a convex region; let {\bf Q} be a region; and let {\bf R} be the convex hull of ${\bf P} \cup {\bf Q}$. Then $H^{1}({\bf R,P}) = H^{1}({\bf Q,P})$ \end{lemma} {\bf Proof:} Let {\bf r} be the point in $\bar{\bf R}$ that is furthest from {\bf P}. There exists points ${\bf u,v} \in \bar{\bf P} \cup \bar{\bf Q}$ such that $\bf r$ is on the line {\bf uv}. Let {\bf w,x} be the points in $\bar{\bf P}$ closest to {\bf u,v} respectively. Since {\bf P} is convex, the line {\bf wx} is in {\bf P}. It is always the case that, given two lines {\bf uv} and {\bf wx} and a point {\bf r} on {\bf uv}, $d({\bf r,wx}) \leq \max(d({\bf u,w}),d({\bf v,x}))$. (The distance squared is a convex quadratic function, whose maximum over any interval is reached at one of the extrema.) So we have $H^{1}({\bf R,P}) = d({\bf r,P}) \leq d({\bf r,wx}) \leq \max(d({\bf u,w}),d({\bf v,x}))) \leq H^{1}({\bf Q,P})$. The reverse inequality is trivial. \begin{lemma} \label{lemExistsContMorphConvex2} (Analogous to lemma~\ref{lemExistsContMorph}). Let $\bf P, Q$ be regions in ${\mathcal D}^{2}$. Let ${\bf C,D,E,F}$ be convex regions such that ${\bf P}={\bf C} \cup {\bf D}$; ${\bf Q}={\bf E} \cup {\bf F}$; {\bf E} corresponds to {\bf C} and {\bf F} corresponds to {\bf D}. Let $h = H({\bf P,Q})$. Let $r = \min(\mbox{radius}({\bf C}),\mbox{radius}({\bf D}))$ and let $p = \max(\mbox{diameter}({\bf C}),\mbox{diameter}({\bf D}))$. If $h < d({\bf C,D})/2$ then there exists a continuous morphing $f:[0,1] \times R^{n} \mapsto R^{n}$ such that: \begin{itemize} \item[a.] for all ${\bf x} \in \mathbb{E}^{n}$ $f(0,{\bf x})={\bf x}$. \item[b.] $f(1,{\bf P})={\bf Q}$; \item[c.] for all $t \in [0,1]$, $H(f(t,{\bf P}),{\bf P}) \leq h$; and \item[d.] for all ${\bf x} \in \mathbb{E}^{n}$ and $t \in [0,1]$, $d(f(t,{\bf x}),{\bf x}) \leq d({\bf x,o}) \cdot ph/(r-h)$. \end{itemize} \end{lemma} {\bf Proof:} Let {\bf W} be the convex hull of ${\bf C} \cup {\bf E}$ and let {\bf X} be the convex hull of ${\bf D} \cup {\bf F}$. By lemma~\ref{lemH1OfConvexHull} $H^{1}({\bf W,C}) \leq h$ and $H^{1}({\bf X,D}) \leq h$. Let $\epsilon = d({\bf C,D})-2h > 0$. Let $\bf R$ and $\bf S$ be the expansions of {\bf W} and {\bf X} by $\epsilon$; that is ${\bf R} = \{ {\bf r} \: | \: d({\bf r},{\bf W}) < \epsilon \}$ and` ${\bf S} = \{ {\bf r} \: | \: d({\bf r},{\bf X}) < \epsilon$. It is easily shown that {\bf R} and {\bf S} are convex and disjoint. Choose points ${\bf c} \in {\bf C}$, ${\bf d} \in {\bf D}$ such that ${\bf B}({\bf c},r) \subset {\bf C}$, ${\bf B}({\bf d},r) \subset {\bf D}$. Clearly the maximal distance from {\bf c} to a point on @{\bf C} and the maximal distance from {\bf d} to a point on @{\bf D} are at most $p$. We can use definition~\ref{defStandardMorphing} to construct functions $\Gamma_{\bf C,E,R,c}$ and $\Gamma_{\bf D,F,S,d}$. Define $f(t,{\bf x})$ as \[ f(t,{\bf x}) = \left\{ \begin{array}{ll} \Gamma_{\bf C,E,R,c}(t,{\bf x}) & \mbox{if } {\bf x} \in {\bf R} \\ \Gamma_{\bf D,F,S,d}(t,{\bf x}) & \mbox{if } {\bf x} \in {\bf S} \\ {\bf x} & \mbox{otherwise} \end{array} \right. \] The stated properties then follow immediately from the properties of $\Gamma$ in lemma~\ref{lemExistsContMorph}. \begin{theorem} \label{thmNonConvexHausdorff} Let $\Topo_{\mu}$ be a well-behaved metric topology. 
Then the restriction of $\Topo_{\mu}$ to ${\mathcal D}^{2}$ is equal to $\Topo_{H}$, the topology of the Hausdorff metric. \end{theorem}
{\bf Proof:} Identical to the proof of theorem~\ref{thmConvex}, replacing the use of lemma~\ref{lemExistsContMorph} with lemma~\ref{lemExistsContMorphConvex2}.
Thus, in view of theorems~\ref{thmConvex} and \ref{thmNonConvexHausdorff} and lemma~\ref{lemNoConvexClosetoNonConvex}, if $\Topo_{\mu}$ is the Hausdorff, the symmetric-difference, or a Wasserstein metric topology over $\mathcal D$, then every sufficiently small neighborhood of a region in ${\mathcal D}^{2}$ consists of regions, all in ${\mathcal D}^{2}$, that are close in the Hausdorff metric; while the convex regions in the neighborhood of a convex region are those that are close in the Hausdorff distance. All that remains, therefore, is to characterize the non-convex regions that lie in the neighborhood of a convex region. We now explore how that works out in the various metrics we are studying.
\subsection{The homeomorphism-based topology in $\mathcal D$}
Over the space $\mathcal D$, the topology $\Topo_{M}$ is uninteresting. The distance between a region in $\mathcal C$ and a region in ${\mathcal D}^{2}$ is always infinite, so a basis for the topology over $\mathcal D$ is (the open sets of the Hausdorff topology over $\mathcal C$) union (the open sets of the Hausdorff topology over ${\mathcal D}^{2}$). In other words, the question, ``What regions in ${\mathcal D}^{2}$ are close to a convex region in $\mathcal C$?'' has the most boring possible answer: None at all.
\subsection{The dual-Hausdorff metrics in ${\mathcal D}$}
The dual-Hausdorff metric topology is strictly coarser than the homeomorphism metric topology over $\mathcal D$. In particular, a history in which a growing second piece emerges from the surface of a convex region is continuous under $H^{d}$. Thus, histories 1.0 and 1.1 are continuous under $H^{d}$ but not under $M$.
{\bf History 1.0:} In $\mathbb{E}^{2}$ let $\phi(0) = (0,1) \times (0,1)$. For $t > 0$, let $\phi(t) = (0,1) \times (0,1) \cup (1+t,1+2t) \times (0,t)$ (figure~\ref{figHistory1.0}).
\begin{figure} \begin{center} \includegraphics[width=4in]{FigHistory1-0.png} \end{center} \caption{History 1.0} \label{figHistory1.0} \end{figure}
{\bf History 1.1:} In $\mathbb{E}^{2}$ let $\phi(0) = (0,1) \times (0,1)$. For $t > 0$, let $\phi(t) = (0,1) \times (0,1) \cup (1+t,1+2t) \times (0,1)$ (figure~\ref{figHistory1.1}).
\begin{figure} \begin{center} \includegraphics[width=4in]{FigHistory1-1.png} \end{center} \caption{History 1.1} \label{figHistory1.1} \end{figure}
It seems somewhat plausible that, for some purpose, one might consider history 1.0 to be continuous, but not history 1.1. This can be achieved in $\mathbb{E}^{2}$ as follows: Let perimeter($\bf P$) be the perimeter of region $\bf P$ (i.e. the arc length of @{\bf P}). Define a metric $\mu$ as follows: $\mu({\bf P,Q}) = H^{d}({\bf P,Q}) + \mbox{abs}(\mbox{perimeter}({\bf P})-\mbox{perimeter}({\bf Q}))$.
History 1.1, which involves a discontinuous change at time $t=0$ from a total perimeter of 4 to a total perimeter of 6, is thus discontinuous under $\mu$; history 1.0, whose total perimeter is $4+4t$, remains continuous under $\mu$. Over the space ${\mathcal D}$, $\Topo_{\mu}$ supports continuous morphing; this is equivalent to saying that the perimeter is a continuous function in the Hausdorff metric topology $\Topo_{H}$. Over the larger space $\mbox{$\mathcal R$}$, $\Topo_{\mu}$ does not support continuous morphing, as discussed above in section~\ref{secWellBehaved}.
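As a quick numerical check of how $\mu$ separates histories 1.0 and 1.1, the following sketch (purely illustrative; it assumes $t > 0$ so that the second piece is present, and the helper names are ours, not part of the formal development) tabulates the perimeter term of $\mu$ for the two histories.
\begin{verbatim}
def perimeter_history_1_0(t):
    # phi(t) = unit square plus a separated t-by-t square (History 1.0).
    return 4.0 + (4.0 * t if t > 0 else 0.0)

def perimeter_history_1_1(t):
    # phi(t) = unit square plus a separated t-by-1 rectangle (History 1.1).
    return 4.0 + (2.0 * (1.0 + t) if t > 0 else 0.0)

# The perimeter term of mu is |perimeter(phi(t)) - perimeter(phi(0))|.
for t in [0.5, 0.1, 0.01, 0.001]:
    jump_1_0 = abs(perimeter_history_1_0(t) - perimeter_history_1_0(0.0))
    jump_1_1 = abs(perimeter_history_1_1(t) - perimeter_history_1_1(0.0))
    print(t, jump_1_0, jump_1_1)
# The jumps for history 1.0 go to 0, so it is mu-continuous at t = 0;
# the jumps for history 1.1 tend to 2, so it is mu-discontinuous there.
\end{verbatim}
The dual-Hausdorff term of $\mu$ goes to zero for both histories, so it is the perimeter term alone that separates them.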
In $\mathbb{E}^{n}$ for $n>2$, one might have pieces of any dimensionality $k<n$ peel off from the side: {\bf History 3.k} ($k=0 \ldots n-1$): In $\mathbb{E}^{n}$ let $\phi_{0} = (0,1)^{n}$ and let $\phi(t) = (0,1)^{n} \cup(0,1)^{k} \times (1+t,1+2t)^{n-k}$. The metric $H^{d}$ takes these all to be continuous. The metric $M$ takes them all to be discontinuous. If one defines a metric $\mu({\bf P,Q})$ as the sum of $H_{d}({\bf P,Q})$ plus the absolute value of the difference of the kth order quermassintegrals, then history 3.k will be continuous for all $k<j$ and discontinuous for all $k\geq j$. In ${\mathcal D}$, histories such as 3.k for $k > 0$ can only be constructed starting if part of $\phi_{0}$ is a $k$-dimensional flat surface. If $\phi_{0}$ is strongly convex, then only the analogue of history 3.0 can be constructed. Equivalently, over the space of regions whose closure is strictly convex, the metrics defined above all define the same topology for all values of $k$. \subsection{The Hausdorff metric in ${\mathcal D}$} The Hausdorff distance $H({\bf P,Q})$ is always greater than or equal to the dual-Hausdorff distance; hence the topology it generates is coarser. Indeed over the space $\mathcal D$ it is strictly coarser, as history 4 illustrates (figure~\ref{figHistory4}) {\bf History 4:} \\ $\phi(0) = (0,2) \times (0,2)$. \\ $\phi(t) = (0,1-t) \times (0,2) \cup (1+t,2) \times (0,2).$ For $t>0$, $H(\phi(t),\phi(0)) = t$; every point of $\phi(t)$ is in $\phi(0)$ and every point in $\phi(0)$ is within $t$ of $\phi(t)$. On the other hand for all $t$ $H^{d}(\phi(t),\phi(0)) = 1$; the point $\la 1,1 \ra$ is in $\phi(t)^{c}$ but is distance 1 from any point in $\phi(0)^{c}$. Thus History 4 is continuous at time $t=0$ under the Hausdorff distance but discontinuous over the dual-Hausdorff distance. \begin{figure} \begin{center} \includegraphics[width=4in]{FigHistory4.png} \end{center} \caption{History 4} \label{figHistory4} \end{figure} \subsection{The symmetric-difference metric in ${\mathcal D}$} \begin{lemma} \label{lemDilationHausdorff} Let {\bf P} and {\bf Q} be regions such that $H^{1}({\bf Q,P}) \leq \delta$. Let ${\bf W}(\delta)$ be the dilation of {\bf P} by $\delta$. Then ${\bf Q} \subset {\bf W}(\delta)$. \end{lemma} {\bf Proof:} Immediate from the definitions. \begin{lemma} \label{lemConvexErosion} Let {\bf P} and {\bf Q} be convex regions. Let $\delta > H({\bf P,Q})$ Then $\mbox{erode}({\bf P},\delta) \subset {\bf Q}$. \end{lemma} {\bf Proof:} of the contrapositive. Suppose that point ${\bf x} \in \mbox{erode}({\bf P},\delta)$ and that ${\bf x} \not \in {\bf Q}$. Since $\bf Q$ is convex, there is a plane $\bf Z$ through {\bf x} such that {\bf Q} lies on one side of {\bf Z}. Let $\bf H$ be the hemisphere of $\bar{B}({\bf x},\delta)$ on the far side of {\bf Z} from {\bf Q} and let {\bf c} be the apex of {\bf H}. Then ${\bf c} \in \bar{{\bf P}}$ and $d({\bf c,Q}) \geq \delta$, so $H({\bf P,Q}) \geq \delta$. \begin{corollary} \label{corSymDiffInShell} If $\bf P$ and $\bf Q$ are convex then the symmetric difference of {\bf P} and {\bf Q} is a subset of the union of the inner and outer shells of {\bf P} by the Hausdorff distance. \\ ${\bf S}({\bf P,Q}) \subset {\bf O}({\bf P},H({\bf P,Q})) \cup {\bf I}({\bf P},H({\bf P,Q}))$ \end{corollary} {\bf Proof:} Immediate from lemmas~\ref{lemDilationHausdorff} and \ref{lemConvexErosion}. \begin{lemma} \label{lemShell} Let $\bf P$ be any bounded open region. 
Then for any $\epsilon > 0$, there exists $\delta > 0$ such that $v({\bf O}({\bf P},\delta)) < \epsilon$ and $v({\bf I}({\bf P},\delta)) < \epsilon$. \end{lemma} {\bf Proof:} Easily shown from the definition of measure as a limit. \begin{lemma} \label{lemSmallerShell} Let {\bf P} be a convex region, let $\epsilon > 0$. and let ${\bf Q}$ be a convex region such that $\mbox{dilate}({\bf Q}, \epsilon) \subset {\bf P}$. Then $v({\bf O}({\bf Q},\epsilon)) \leq v({\bf O}({\bf P},\epsilon))$. \end{lemma} {\bf Proof:} Let ${\bf X} = \mbox{dilate}({\bf Q},\epsilon)$. Let ${\bf Z} \subset {\bf X}$ be a convex polytope such that $v({\bf X} \setminus {\bf Z}) < \alpha$. Let ${\bf Y}_{1} \ldots {\bf Y}_{m}$ be the faces of {\bf Z}. For $i=1 \ldots m$: let ${\bf C}_{i}$ be the prism where one face is ${\bf Y}_{i}$, the axis has length $\epsilon$, is orthogonal to ${\bf Y}_{i}$ and extends inward into ${\bf Z}$. I claim that $\bigcup_{i=1}^{m} {\bf C}_{i} \supset {\bf Z} \cap {\bf O}({\bf Q},\epsilon)$. Proof: Let $\bf z$ be a point in ${\bf Z} \cap {\bf O}({\bf Q},\epsilon)$. Let {\bf a} be the closest point to {\bf z} on $@{\bf X}$. Let {\bf b} be the intersection of the line {\bf az} with $@{\bf Z}$. Let {\bf c} be the closest point to {\bf z} on $@{\bf Z}$. Let ${\bf Y}_{i}$ be the face of {\bf z} containing {\bf c}. Then $\epsilon \geq d({\bf z,a}) \geq d({\bf z,b}) \geq d({\bf z,c})$. Moreover the line {\bf zc} is orthogonal to ${\bf Y}_{i}$, so ${\bf z} \in {\bf C}_{i}$. Therefore $v({\bf O}({\bf Q},\epsilon)) \leq v(\bigcup_{i=1}^{m} {\bf C}_{i}) + v({\bf X} \setminus {\bf Z}) \leq v(\bigcup_{i=1}^{m} {\bf C}_{i}) + \alpha \leq \sum_{i=1}^{m} v({\bf C}_{i})$. Now extend each prism ${\bf C}_{i}$ outward from @{\bf Z}. Let ${\bf D}_{i}$ be the intersection of each such extended prism with ${\bf O}({\bf P},\epsilon)$. Since {\bf Z} is convex, no two of these intersect. Moreover, each ${\bf D}_{i}$ contains a right prism with cross section ${\bf Y}_{i}$ and with length at least $\epsilon$, so $v({\bf D}_{i}) \geq v({\bf C}_{i})$. \\ So $v({\bf O}({\bf P},\epsilon)) \geq \sum_{i=1}^{m} v({\bf D}_{i}) \geq \sum_{i=1}^{m} v({\bf C}_{i}) \geq v({\bf O}({\bf Q},\epsilon)) - \alpha$. Since $\alpha$ can be made arbitrarily small, we have $v({\bf O}({\bf P},\epsilon)) \geq v({\bf O}({\bf Q},\epsilon))$. $\QED$ \begin{corollary} \label{corUniformShell} Let ${\bf P}$ be a convex region and let $\epsilon >0$. Then there exists $\delta>0$ such that, for any convex region ${\bf Q} \subset {\bf P}$, $v({\bf O}({\bf Q},\delta)) < \epsilon$. \end{corollary} {\bf Proof:} Choose $\delta_{1} > 0$. Let ${\bf W} = {\bf O}({\bf P},\delta_{1})$. Using lemma~\ref{lemShell}, choose $\delta_{2}$ so that $v({\bf O}({\bf W},\delta_{2})) < \epsilon$. Let $\delta= \min(\delta_{1},\delta_{2})$. Then since $\mbox{dilate}({\bf Q},\delta) \subset {\bf W}$, by lemma~\ref{lemSmallerShell}, $v({\bf O}({\bf Q},\delta)) < \epsilon$. \begin{theorem} \label{thmVolumeCoarserThanHausdorff} $\Topo_{V}$ is strictly coarser than $\Topo_{H}$ over $\mathcal D$. \end{theorem} {\bf Proof:} We first prove that $\Topo_{H}$ is at least as fine as $\Topo_{V}$ over $\mathcal D$. We need to show that, for any region ${\bf P} \in \mathcal D$ and $\epsilon > 0$ there exists $\delta > 0$ such that, if ${\bf Q} \in \mathcal D$ and $H({\bf Q,P}) < \delta$ then $V({\bf Q,P}) < \epsilon$. Choose {\bf P} and $\epsilon > 0$. There are two cases: Case 1: {\bf P} is convex. 
Using lemma~\ref{lemShell}, choose $\delta_{1}$ such that $v({\bf O}({\bf P},\delta_{1})) < \epsilon/4$ and $v({\bf I}({\bf P},\delta_{1})) < \epsilon/4$. Then, by corollary~\ref{corSymDiffInShell}, for every convex $\bf Q$, if $H({\bf P,Q}) < \delta_{1}$ then $v({\bf S}({\bf P,Q})) < \epsilon/2$.
Let ${\bf W} =\mbox{dilate}({\bf P},\delta_{1})$. Using corollary~\ref{corUniformShell} choose $\delta_{2}$ such that, for every convex subset {\bf X} of {\bf W}, $v({\bf O}({\bf X},\delta_{2})) < \epsilon/4$. Let $\delta =\min(\delta_{1},\delta_{2})$.
Suppose that ${\bf Q} \in {\mathcal D}^{2}$ is such that $H({\bf Q,P}) < \delta$. Let ${\bf Q} = {\bf C} \cup {\bf D}$ where ${\bf C}$ and ${\bf D}$ are convex. Since $H^{1}({\bf Q,P}) < \delta$ it follows that ${\bf Q} \subset {\bf W}$. Hence $v({\bf Q} \setminus {\bf P}) \leq v({\bf W} \setminus {\bf P}) < \epsilon/2$. Since $H^{1}({\bf P,Q}) < \delta$ it follows that ${\bf P} \subset \mbox{dilate}({\bf Q},\delta) = \mbox{dilate}({\bf C},\delta) \cup \mbox{dilate}({\bf D},\delta)$. \\ Hence ${\bf P} \setminus {\bf Q} \subset (\mbox{dilate}({\bf C},\delta) \cup \mbox{dilate}({\bf D},\delta)) \setminus {\bf Q} \subset {\bf O}({\bf C},\delta) \cup {\bf O}({\bf D},\delta)$. \\ But ${\bf C}$ and ${\bf D}$ are both convex subsets of {\bf W}, so $v({\bf O}({\bf C},\delta)) \leq \epsilon/4$ and $v({\bf O}({\bf D},\delta)) \leq \epsilon/4$. So $v({\bf P} \setminus {\bf Q}) < \epsilon/2$ and $v({\bf S}({\bf P,Q})) < \epsilon$.
Case 2: ${\bf P} \in {\mathcal D}^{2}$. By lemma~\ref{lemNoConvexClosetoNonConvex} there exists $\delta_{1} > 0$ such that there are no convex regions ${\bf Q}$ with $H({\bf P,Q}) < \delta_{1}$. Let ${\bf P} = {\bf C} \cup {\bf D}$ where ${\bf C}$ and {\bf D} are convex. By lemma~\ref{lemMatchPieces} there exists $\delta_{2}> 0$ such that, for any ${\bf Q} \in {\mathcal D}^{2}$, if $H({\bf P,Q}) < \delta_{2}$ then {\bf Q} can be divided into convex components {\bf E} and {\bf F} such that $H({\bf C,E}) < \delta_{2}$ and $H({\bf D,F}) < \delta_{2}$. Clearly ${\bf S}({\bf P,Q}) = {\bf S}({\bf C,E}) \cup {\bf S}({\bf D,F})$. Using theorem~\ref{thmConvex} one can choose $\delta_{3}$ such that, if $H({\bf C,E}) < \delta_{3}$ and $H({\bf D,F}) < \delta_{3}$, then $v({\bf S}({\bf C,E})) < \epsilon/2$ and $v({\bf S}({\bf D,F})) < \epsilon/2$. Thus if $H({\bf P,Q}) < \min(\delta_{1},\delta_{2},\delta_{3})$ then $V({\bf P,Q}) < \epsilon$.
To show that $\Topo_{V}$ is strictly coarser than $\Topo_{H}$, note that histories 5.1 and 5.2 below are continuous in $\Topo_{V}$ but not in $\Topo_{H}$. In history 5.1 for $t > 0$, $V(\phi(t),\phi(0)) = t^{2}$ while $H(\phi(t),\phi(0)) = 1+t$. $\QED$
{\bf History 5.1:} \\ $\phi(0) = (0,1) \times (0,1).$ \\ $\phi(t) = (0,1) \times (0,1) \cup (2,2+t) \times (0,t)$ for $t > 0$.
{\bf History 5.2:} \\ $\phi(0) = (0,1) \times (0,1).$ \\ $\phi(t) = (0,1) \times (0,1) \cup (2,2+t) \times (0,1)$ for $t > 0$.
Analogously to histories 3.k, in $\mathbb{E}^{n}$ one can define $n$ qualitatively different histories, depending on the dimensionality of the new piece.
{\bf History 6.k} ($k=0 \ldots n-1$) In $\mathbb{E}^{n}$ let $\phi_{0} = (0,1)^{n}$ and let $\phi(t) = (0,1)^{n} \cup (0,1)^{k} \times (2,2+2t)^{n-k}$.
As with histories 3.k, if one defines a metric $\mu({\bf P,Q})$ as the sum of $V({\bf P,Q})$ plus the absolute value of the difference of the $j$th-order quermassintegrals, then history 6.k will be continuous for all $k<j$ and discontinuous for all $k\geq j$.
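Both quantities appearing in these histories are easy to estimate numerically. The following sketch (illustrative only; it samples the regions on a finite grid, uses SciPy's \texttt{directed\_hausdorff}, and the helper names are ours) reproduces the values quoted above for history 5.1.
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def box_points(x0, x1, y0, y1, step=0.02):
    # Grid sample of the open box (x0,x1) x (y0,y1).
    xs = np.arange(x0, x1, step)
    ys = np.arange(y0, y1, step)
    return np.array([(x, y) for x in xs for y in ys])

def history_5_1(t):
    # phi(t) of history 5.1: the unit square plus a t-by-t square near x = 2.
    pts = box_points(0, 1, 0, 1)
    if t > 0:
        pts = np.vstack([pts, box_points(2, 2 + t, 0, t)])
    sym_diff_area = t * t          # exact symmetric difference with phi(0)
    return pts, sym_diff_area

phi0, _ = history_5_1(0.0)
for t in [0.5, 0.2, 0.1]:
    phit, v = history_5_1(t)
    h = max(directed_hausdorff(phi0, phit)[0],
            directed_hausdorff(phit, phi0)[0])
    print(t, v, round(h, 3))
# V(phi(t),phi(0)) = t^2 shrinks to 0, while the sampled Hausdorff
# distance stays near 1 + t, so the history is continuous in the
# symmetric-difference topology but not in the Hausdorff topology.
\end{verbatim}
The same computation applied to history 5.2 (replace the $t$-by-$t$ box by a $t$-by-1 box) gives $V$ of order $t$ and $H$ again near $1+t$.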
Unlike histories 3.k, these multiple types of histories are possible even if $\phi_{0}$ is strictly convex. (Define $\phi(t)$ as $\phi(0)$ union a disjoint ellipsoid with $k$ axes of length 1 and $n-k$ axes of length $t$.)
\subsection{Wasserstein metrics in ${\mathcal D}$}
To compare the topologies generated by the Wasserstein distances, we consider the following infinite collection of histories:
{\bf History 7}.$\psi$ (figure~\ref{figHistory7}). Let $\psi : \mathbb{R} \mapsto \mathbb{R}$ be a continuous function such that $\psi(0)=0$ and $\lim_{x \mbox{$\rightarrow$} \infty} \psi(x) = \infty$. \\ Define the history $\phi^{\psi}: \mathbb{R} \mapsto \mathbb{E}^{n}$ as: \\ $\phi^{\psi}(0)=(0,1)^{n}$. \\ $\phi^{\psi}(t) = \phi(0) \cup [(0,t)^{n-1} \times (\psi^{-1}(t^{-n}), \psi^{-1}(t^{-n})+t)]$.
\begin{figure} \begin{center} \includegraphics[height=3in]{FigHistory7.png} \end{center} \caption{History 7.$\psi$, where $\psi(t)=t^{2}$} \label{figHistory7} \end{figure}
The idea is that at time $t>0$, the unit box $(0,1)^{n}$ is joined by another box of volume $t^{n}$, growing from zero size, and heading inward from infinitely far away. The trade-off between the size of the box and its distance is governed by the function $\psi$ (the specific time dependence does not matter).
\begin{lemma} \label{lemWasserstein1} Let $\beta$ be a Mulholland function. Let $\alpha(x)$ be a continuous function such that $\alpha(0)=0$ and $\lim_{x \mbox{$\rightarrow$} \infty} \alpha(x) = \infty$. \\ Let $\phi^{\alpha}(t)$ be as in History 7.$\alpha$. Then \[ \lim_{t \mbox{$\rightarrow$} 0^{+}} W^{\beta}(\phi^{\alpha}(t),\phi^{\alpha}(0)) = \left\{ \begin{array}{ll} 0 & \mbox{if } \lim_{x \mbox{$\rightarrow$} \infty} \beta(x)/\alpha(x) = 0 \\ \infty & \mbox{if } \lim_{x \mbox{$\rightarrow$} \infty} \beta(x)/\alpha(x) = \infty \end{array} \right. \] \end{lemma}
{\bf Proof} (somewhat informal): The value of the integral in the definition of the Wasserstein distance $W^{\beta}$ is dominated by the cost of moving the quantity $t^{n}$ of material a distance $d(t)=\alpha^{-1}(t^{-n})$. By definition of the Wasserstein distance, that cost is $c(t) \approx \beta(d(t)) \cdot t^{n} = \beta(\alpha^{-1}(t^{-n})) \cdot t^{n}$. The Wasserstein distance is $W^{\beta}(\phi(0),\phi(t)) \approx \beta^{-1}(c(t)).$ So if $\beta(x) \ll \alpha(x)$ as $x \mbox{$\rightarrow$} \infty$, then, as $t \mbox{$\rightarrow$} 0^{+}$, $\beta(\alpha^{-1}(t^{-n})) \ll t^{-n}$, so $c(t)$ and $W^{\beta}(\phi(0),\phi(t))$ go to 0; if $\beta(x) \gg \alpha(x)$, then, as $t \mbox{$\rightarrow$} 0^{+}$, $\beta(\alpha^{-1}(t^{-n})) \gg t^{-n}$, so $c(t)$ and $W^{\beta}(\phi(0),\phi(t))$ go to $\infty$.
\begin{lemma} \label{lemWasserstein2} Let $\alpha$, $\beta$ be two Mulholland functions. If $\alpha(x) \ll \beta(x)$ as $x \mbox{$\rightarrow$} \infty$ then, over $\mathcal D$, topology $\Topo_{{W}^{\alpha}}$ is not finer than the topology $\Topo_{{W}^{\beta}}$. \end{lemma}
{\bf Proof:} Let $\zeta(x) = \sqrt{\alpha(x)\beta(x)}$. By lemma~\ref{lemWasserstein1}, $\phi^\zeta(t)$ is continuous relative to $\Topo_{W^{\alpha}}$ but discontinuous with respect to $\Topo_{W^{\beta}}$.
\begin{lemma} \label{lemWasserstein3} Let $\alpha$, $\beta$ be two Mulholland functions. If $\alpha(x) \ll \beta(x)$ as $x \mbox{$\rightarrow$} \infty$ then, over $\mbox{$\mathcal R$}$, topology $\Topo_{{W}^{\beta}}$ is at least as fine as topology $\Topo_{{W}^{\alpha}}$. \end{lemma}
{\bf Proof:} The intuition of the proof is this: Suppose that ${\bf Q}_{i}$ is close to $\bf P$ in the measure $W^{\beta}$.
Let $\gamma$ be a mapping of $\bf P$ to ${\bf Q}_{i}$ such that $C^{\beta}(\gamma,{\bf P},{\bf Q}_{i})$ is close to $W^{\beta}({\bf P},{\bf Q}_{i})$. Divide {\bf P} into two parts: the points that $\gamma$ is moving only a short distance, and the points that it is moving a long distance. If you now consider the integral using $\alpha$: the first part is moving only a small distance, so it makes a small contribution to the integral in $W^{\alpha}$. Over the second part, the integral using $\alpha$ can't be very much larger than the integral using $\beta$, so it also makes a small contribution to $W^{\alpha}$.
Formally: We need to show that, for any region {\bf P} and sequence ${\bf Q}_{1}, {\bf Q}_{2} \ldots$, if $W^{\beta}({\bf Q}_{i},{\bf P})$ converges to 0, then $W^{\alpha}({\bf Q}_{i},{\bf P})$ also converges to 0. Since $\alpha(x)$ and $\beta(x)$ go to 0 as $x$ goes to 0, in view of the definition of $W^{\phi}$, it clearly suffices to show that, for any $\epsilon > 0$ there exists $\delta > 0$ such that, for any ${\bf Q}_{i}$ and uniform function $\gamma$ from ${\bf P}$ to ${\bf Q}_{i}$, if $I^{\beta}(\gamma,{\bf P}) < \delta$ then $I^{\alpha}(\gamma,{\bf P}) < \epsilon$, where $I^{\psi}$ is the integral defined earlier.
Choose $\epsilon > 0$. Let $M = \sup_{x \in [\alpha^{-1}(\epsilon/2),\infty)} \alpha(x)/\beta(x)$. Since $\alpha(x) \ll \beta(x)$ as $x \mbox{$\rightarrow$} \infty$, this supremum exists and is finite. Let $\delta = \epsilon/(2M)$. Assume that $I^{\beta}(\gamma,{\bf P}) < \delta$.
Partition {\bf P} into two subsets (either may be empty): \\ ${\bf P}_{1} = \{ {\bf x} \: | \: d({\bf x},\gamma({\bf x})) < \alpha^{-1}(\epsilon/2) \}$. \\ ${\bf P}_{2} = \{ {\bf x} \: | \: d({\bf x},\gamma({\bf x})) \geq \alpha^{-1}(\epsilon/2) \}$. \\ Clearly
\[ I^{\alpha}(\gamma,{\bf P}) = \frac{1}{v({\bf P})} \int_{{\bf x} \in {\bf P}} \alpha(d({\bf x},\gamma({\bf x}))) \: \mbox{d}{\bf x} = \frac{1}{v({\bf P})} \int_{{\bf x} \in {\bf P}_{1}} \alpha(d({\bf x},\gamma({\bf x}))) \: \mbox{d}{\bf x} + \frac{1}{v({\bf P})} \int_{{\bf x} \in {\bf P}_{2}} \alpha(d({\bf x},\gamma({\bf x}))) \: \mbox{d}{\bf x} \]
But for ${\bf x} \in {\bf P}_{1}$, $\alpha(d({\bf x},\gamma({\bf x}))) \leq \epsilon/2$, so
\[ \frac{1}{v({\bf P})} \int_{{\bf x} \in {\bf P}_{1}} \alpha(d({\bf x},\gamma({\bf x}))) \: \mbox{d}{\bf x} \leq \frac{v({\bf P}_{1})}{v({\bf P})}(\epsilon/2) \leq \epsilon/2 \]
And for ${\bf x} \in {\bf P}_{2}$, $\alpha(d({\bf x}, \gamma({\bf x}))) \leq M \beta(d({\bf x}, \gamma({\bf x})))$, so
\[ \frac{1}{v({\bf P})} \int_{{\bf x} \in {\bf P}_{2}} \alpha(d({\bf x},\gamma({\bf x}))) \: \mbox{d}{\bf x} \leq \frac{1}{v({\bf P})} \int_{{\bf x} \in {\bf P}_{2}} M \beta(d({\bf x},\gamma({\bf x}))) \: \mbox{d}{\bf x} \leq M I^{\beta}(\gamma,{\bf P}) < \epsilon/2 \]
$\QED$
\begin{theorem} Let $\alpha$, $\beta$ be two Mulholland functions. If $\alpha(x) \ll \beta(x)$ as $x \mbox{$\rightarrow$} \infty$ then, over $\mbox{$\mathcal R$}$ and over $\mathcal D$, topology $\Topo_{{W}^{\beta}}$ is strictly finer than topology $\Topo_{{W}^{\alpha}}$. \end{theorem}
{\bf Proof:} Immediate from lemmas~\ref{lemWasserstein2} and \ref{lemWasserstein3}. $\QED$
With a slight modification of the proof of lemma~\ref{lemWasserstein3} we can show that, if one considers only regions contained in a fixed bounded subset of $\mathbb{E}^{n}$, then any two Wasserstein distances give the identical topology.
In other words, to construct an example like History 7.$\psi$ that is continuous relative to one Wasserstein distance and discontinuous relative to another, one has to use a similar construction in which, as $t \mbox{$\rightarrow$} 0^{+}$, smaller and smaller regions appear further and further out.
\begin{theorem} \label{thmWassersteinBounded} Let $\bf U$ be a bounded region in $\mathbb{E}^{n}$. Let $\mathcal V$ be any collection of sub-regions of $\bf U$. Let $\alpha$, $\beta$ be two Mulholland functions. Then over $\mathcal V$, $\Topo_{W^{\alpha}} = \Topo_{W^{\beta}}$. \end{theorem}
{\bf Sketch of proof:} Suppose that ${\bf Q}_{1}, {\bf Q}_{2} \ldots$ converges to {\bf P} with respect to $W^{\beta}$, where these are all subsets of $\bf U$. As in the proof of lemma~\ref{lemWasserstein3}, divide {\bf P} into two parts: ${\bf P}_{1}$, the points that are being moved a short distance, and ${\bf P}_{2}$, the points that are being moved a long distance. The integral over ${\bf P}_{1}$ is necessarily small using any Mulholland function. Since the integral over ${\bf P}_{2}$ is small using $\beta$, and since the distance that points are being moved is not small, the volume of ${\bf P}_{2}$ itself must be small. But the distance that they are being moved cannot be more than diameter({\bf U}). Therefore the integrand is not greater than $\alpha$(diameter({\bf U})), and since this is being taken over a small volume, the result is also small.
As with histories 3, 5, and 6, one can add another parameter $k$, which is the dimensionality of the new piece that appears.
{\bf History 7}.$\psi.k$: Let $\psi : \mathbb{R} \mapsto \mathbb{R}$ be a continuous function such that $\psi(0)=0$ and $\lim_{x \mbox{$\rightarrow$} \infty} \psi(x) = \infty$. Let $k$ be an integer between 0 and $n-1$. \\ Define the history $\phi^{\psi}: \mathbb{R} \mapsto \mathbb{E}^{n}$ as: \\ $\phi^{\psi}(0)=(0,1)^{n}$. \\ $\phi^{\psi}(t) = \phi(0) \cup [(0,1)^{k} \times (0,t)^{n-k-1} \times (\psi^{-1}(t^{k-n}), \psi^{-1}(t^{k-n})+t)]$.
It is easily seen that, if one considers metrics which are the sum of a Wasserstein distance $W^{\alpha}$ plus the absolute value of the difference of the $k$th-order quermassintegrals, then, for any two such metrics, given by $\alpha,k$ and $\beta,m$ with $k,m$ between 0 and $n-1$, if either $\alpha$ and $\beta$ have different growth rates or $k \neq m$, one can construct a history $\phi$ of this form which is continuous with respect to one metric and discontinuous with respect to the other. Thus any two such metrics generate different topologies. The distinction between different values of $k$ can be achieved even if the space of regions is limited to subsets of a bounded region.
\begin{lemma} \label{lemWassersteinHausdorffConvex2A} Let $\alpha$ be a Mulholland function. Then over $\mathcal D$, the corresponding Wasserstein metric topology $\Topo_{W^{\alpha}}$ is not finer than the Hausdorff metric topology $\Topo_{H}$. \end{lemma}
{\bf Proof:} Consider history 5.1 above: \\ $\phi(0) = (0,1) \times (0,1).$ \\ $\phi(t) = (0,1) \times (0,1) \cup (2,2+t) \times (0,t)$ for $t > 0$. \\ It is easily shown that $H(\phi(0),\phi(t)) = 1+t$ but that, for any $\alpha$, $W^{\alpha}(\phi(0),\phi(t)) \approx \alpha^{-1}(t^{2})$, which goes to 0 as $t \mbox{$\rightarrow$} 0^{+}$. Thus, $\phi$ is continuous at $t=0$ in the Wasserstein topology but discontinuous in the Hausdorff-metric topology.
\begin{lemma} \label{lemHausdorffFinerWasserstein} Let $\alpha$ be a Mulholland function.
Then over $\mathcal D$, the corresponding Wasserstein metric topology $\Topo_{W^{\alpha}}$ is coarser than the Hausdorff metric topology $\Topo_{H}$. \end{lemma}
{\bf Proof:} Choose region ${\bf P} \in {\mathcal D}$ and $\epsilon > 0$. Let $p=\mbox{diameter}({\bf P})$. Let $a=\alpha(\epsilon)v({\bf P})/4\alpha(p)$. Using theorem~\ref{thmVolumeCoarserThanHausdorff}, choose $\delta_{1}$ such that, for all ${\bf Q} \in {\mathcal D}$, if $H({\bf P,Q}) < \delta_{1}$ then $V({\bf P,Q}) < a$. Let $\delta= \min(\delta_{1},p/2)$. Then by lemma~\ref{lemTopoDHFinerWasserstein2}, if ${\bf Q} \in {\mathcal D}$ and $H({\bf P,Q}) < \delta$, then $W^{\alpha}({\bf P,Q}) < \epsilon$.
\begin{theorem} \label{thmHausdorffFinerWasserstein} Over the space $\mathcal D$, the Hausdorff metric topology is strictly finer than any Wasserstein metric topology. \end{theorem}
{\bf Proof:} This is the combination of lemmas~\ref{lemWassersteinHausdorffConvex2A} and \ref{lemHausdorffFinerWasserstein}.
\begin{lemma} \label{lemSymDifNotFiner} Over $\mathcal D$, the symmetric difference topology $\Topo_{V}$ is not finer than any Wasserstein metric topology $\Topo_{W^{\alpha}}$. \end{lemma}
{\bf Proof:} Let $\psi(x) = \sqrt{\alpha(x)}$. Then History 7.$\psi$ is continuous relative to $\Topo_{V}$, since its symmetric-difference volume at time $t$ is $t^{n}$; but since $\alpha(x)/\psi(x) \mbox{$\rightarrow$} \infty$ as $x \mbox{$\rightarrow$} \infty$, it is not continuous relative to $\Topo_{W^{\alpha}}$, by lemma~\ref{lemWasserstein1}.
\begin{lemma} \label{lemDiffOfConvexA} Let {\bf P} be a convex region and $\epsilon > 0$. Then there exists $\delta > 0$ such that, for any convex {\bf Q}, if $v({\bf P} \setminus {\bf Q}) > \epsilon$ then there exists a point {\bf p} such that ${\bf B}({\bf p},\delta) \subset {\bf P} \setminus {\bf Q}$. \end{lemma}
{\bf Proof:} Using lemma~\ref{lemShell}, choose $\delta_{1}$ such that $v({\bf I}({\bf P},\delta_{1})) < \epsilon$. Let ${\bf R} = \mbox{erode}({\bf P},\delta_{1}).$ Let ${\bf Q}$ be a convex region such that $v({\bf P} \setminus {\bf Q}) > \epsilon$. Clearly $\bf R$ is not a subset of ${\bf Q}$, since $v({\bf P} \setminus {\bf R}) < \epsilon$. Let ${\bf r}$ be a point in {\bf R} but not in {\bf Q}. Since ${\bf r} \in {\bf R}$ it follows that ${\bf B}({\bf r},\delta_{1}) \subset {\bf P}$; since {\bf Q} is convex, there is at least a hemisphere of ${\bf B}({\bf r},\delta_{1})$ that is not in {\bf Q}. Therefore there is a ball of radius $\delta_{1}/2$ in ${\bf P} \setminus {\bf Q}$, and the conclusion is satisfied with $\delta = \delta_{1}/2$.
\begin{lemma} \label{lemDiffOfConvexB} Let {\bf P} be a convex region and $\epsilon > 0$. Then there exists $\delta > 0$ such that, for any ${\bf Q} \in {\mathcal D}^{2}$, if $v({\bf P} \setminus {\bf Q}) > \epsilon$ then there exists a point {\bf p} such that ${\bf B}({\bf p},\delta) \subset {\bf P} \setminus {\bf Q}$. \end{lemma}
{\bf Proof:} Choose $\bf P$ and $\epsilon > 0$. Using corollary~\ref{corUniformShell} choose $\delta_{1} > 0$ such that, for all convex ${\bf X} \subset {\bf P}$, $v({\bf O}({\bf X},\delta_{1})) < \epsilon/2$. Let $\delta = \delta_{1}/2$. Let ${\bf Q}$ be any region in ${\mathcal D}^{2}$ such that $v({\bf P} \setminus {\bf Q}) > \epsilon$. Let {\bf C} and {\bf D} be the two components of {\bf Q}. Let ${\bf C}' = {\bf C} \cap {\bf P}$ and ${\bf D}' = {\bf D} \cap {\bf P}$. If either of these is empty, then the result follows from lemma~\ref{lemDiffOfConvexA}, so assume that neither is empty. Let ${\bf X}$ be a hyperplane dividing ${\bf C}'$ from ${\bf D}'$. Then {\bf X} divides ${\bf P}$ into two parts, $\bf E$ containing ${\bf C}'$ and {\bf F} containing ${\bf D}'$.
Clearly {\bf E} and {\bf F} are convex and ${\bf P} \setminus {\bf Q} = ({\bf E} \setminus {\bf C}) \cup ({\bf F} \setminus {\bf D})$. Therefore either $v({\bf E} \setminus {\bf C}) > \epsilon/2$ or $v({\bf F} \setminus {\bf D}) > \epsilon/2$. Assume the former. By the same argument as in lemma~\ref{lemDiffOfConvexA}, there exists a point ${\bf r}$ such that ${\bf B}({\bf r},\delta) \subset {\bf E} \setminus {\bf C}$. \begin{lemma} \label{lemDiffOfConvexC} Let {\bf P} be a region in $\mathcal D$ and $\epsilon > 0$. Then there exists $\delta_{1}, \delta_{2} > 0, $ such that, for any region {\bf Q}, if $v({\bf Q} \setminus {\bf P}) > \epsilon$ then there is a subset ${\bf W} \subset {\bf Q}$ such that $d({\bf P,W}) > \delta_{1}$ and $v({\bf W})/v({\bf Q}) > \delta_{2}$. \end{lemma} {\bf Proof:} Using lemma~\ref{lemShell}, choose $\delta_{1}$ such that $v({\bf O}({\bf P},\delta_{1})) < \epsilon/2$. Let ${\bf R} = \mbox{dilate}({\bf P},\delta_{1}).$ Let ${\bf Q}$ be a region such that $v({\bf Q} \setminus {\bf P}) > \epsilon$. Let ${\bf W} = {\bf Q} \setminus {\bf R}$. Then ${\bf Q} \setminus {\bf P} \subset {\bf W} \cup ({\bf R} \setminus {\bf P})$ so $v({\bf W}) > \epsilon/2$. So the conclusion is satisfied with $\delta_{2} = \epsilon/(\epsilon + v({\bf R}))$. \begin{lemma} \label{lemDiffOfConvexD} Let {\bf P} be a region in $\mathcal D$ and $\epsilon > 0$. Then there exists $\delta_{1}, \delta_{2} > 0, $ such that, for any region ${\bf Q} \in {\mathcal D}$, if $V({\bf P,Q}) > \epsilon$ then there is a subset ${\bf W} \subset {\bf Q}$ such that $d({\bf P,W}) > \delta_{1}$ and $v({\bf W})/v({\bf Q}) > \delta_{2}$. \end{lemma} {\bf Proof:} $V({\bf P,Q}) = v(({\bf P} \setminus {\bf Q}) \cup ({\bf Q} \setminus {\bf P}))$, so if $V({\bf P,Q}) > \epsilon$ then either $v({\bf P} \setminus {\bf Q}) > \epsilon/2$ or $v({\bf Q} \setminus {\bf P}) > \epsilon/2$. Using lemmas~\ref{lemDiffOfConvexA} and \ref{lemDiffOfConvexB}, we can find $\delta_{A}$ such that, for all ${\bf Q} \in \mathcal D$, if $v({\bf Q} \setminus {\bf P}) < \epsilon/2$ and $v({\bf P} \setminus {\bf Q}) > \epsilon/2$, then there is a point {\bf r} such that ${\bf B}({\bf r},\delta_{A}) \subset {\bf Q} \setminus {\bf P}$, so in this case, we can choose ${\bf W}={\bf B}({\bf r},\delta_{A}/2)$. Let $s= v({\bf B}({\bf r},\delta_{A}))$, the volume of the $n$-dimensional sphere of radius $\delta_{A}$. Then $v({\bf W})/v({\bf Q}) \geq s/(v({\bf P})+\epsilon/2)$. Using lemma~\ref{lemDiffOfConvexB} we can find $\delta_{B}, \delta_{C}$ such that, for all regions ${\bf Q}$, if $v({\bf Q} \setminus {\bf P}) > \epsilon/2$ then there exists a subset ${\bf W} \subset {\bf Q} \setminus {\bf P}$ such that $d({\bf W,P}) > \delta_{B}$ and $v({\bf W})/v({\bf Q}) > \delta_{C}$. So if we take $\delta_{1} = \min(\delta_{A}/2,\delta_{B})$ and $\delta_{2} = \min(s/(v({\bf P})+\epsilon/2),\delta_{C})$, the conclusion of the lemma is satisfied. \begin{lemma} \label{lemWassersteinFinerSymDiff} Over $\mathcal D$, the symmetric difference topology $\Topo_{V}$ is coarser than any Wasserstein metric topology $\Topo_{W^{\psi}}$. \end{lemma} {\bf Proof:} We need to show that, for any Mulholland function $\psi$, for any ${\bf P} \in {\mathcal D}$ and $\epsilon > 0$ there exists $\delta >0$ such that, for any ${\bf Q} \in {\mathcal D}$, if $W^{\psi}({\bf P,Q}) < \delta$ then $V({\bf P,Q}) < \epsilon$. 
Given $\psi, {\bf P}, \epsilon$ as above, by lemma~\ref{lemDiffOfConvexD} there exist $\delta_{1}, \delta_{2}$ such that, for all ${\bf Q} \in \mathcal D$, if $V({\bf P,Q}) > \epsilon$ then there exists a region ${\bf W} \subset {\bf Q}$ such that $d({\bf W,P}) > \delta_{1}$ and $v({\bf W})/v({\bf Q}) > \delta_{2}$. Let $\gamma$ be any uniform mapping from $\bf Q$ to $\bf P$. Then \[ I^{\psi}(\gamma) = \int_{{\bf x} \in \bf Q} \psi(d({\bf x},\gamma({\bf x}))) \: d{\bf x} > \int_{{\bf x} \in \bf W} \psi(d({\bf x},\gamma({\bf x}))) \: d{\bf x} > \int_{{\bf x} \in \bf W} \psi(\delta_{1}) \: d{\bf x} > \delta_{2}v({\bf Q}) \psi(\delta_{1}) \] So $W^{\psi}({\bf P,Q}) = \inf_{\gamma} \psi^{-1}\bigl(I^{\psi}(\gamma)/v({\bf Q})\bigr) \geq \psi^{-1}(\delta_{2}\psi(\delta_{1}))$. So the conclusion is satisfied with $\delta =\psi^{-1}(\delta_{2}\psi(\delta_{1}))$. $\QED$ \begin{theorem} \label{thmWassersteinFinerSymDiff} Over the space ${\mathcal D}$, any Wasserstein-metric topology is strictly finer than the symmetric-difference-metric topology. \end{theorem} {\bf Proof:} From lemmas \ref{lemSymDifNotFiner} and \ref{lemWassersteinFinerSymDiff}. \section{Star-shaped regions} \label{secStar} Over the space $\mathcal S$ of star-shaped regions centered at the origin, the situation is very different. As we shall show, the Hausdorff metric, the Wasserstein metrics, and the symmetric-difference metric all yield topologies that are incomparable in terms of fineness. For simplicity, we will demonstrate our results in $\mathbb{E}^{2}$, but the generalizations to $\mathbb{E}^{n}, n>2$ are obvious. It will be convenient to define a generalized wedge function: \begin{definition} Let $\alpha \in [0,2 \pi)$, $\beta \in (0,\pi/4), b \in (0,1), l \in (0, \infty)$. The {\em wedge centered at $\alpha$ of width $\beta$ with base $b$ and length $l$,} denoted ${\bf G}(\alpha,\beta,b,l)$, is the set of all points whose polar coordinates $\la r,\theta \ra$ satisfy $b <r < l$, $\alpha-\beta/2 < \theta < \alpha+\beta/2$. \end{definition} Note that $v({\bf G}(\alpha,\beta,b,l)) = \beta(l^{2}-b^{2})/2.$ \begin{theorem} Over $\mathcal S$, the symmetric-difference metric and the Wasserstein metrics are not finer than the Hausdorff metric. \end{theorem} {\bf Proof:} Consider the following history $\phi(t)$: \\ {\bf History}.8 \\ $\phi(0) = {\bf B}(\vec{0},1)$. \\ $\phi(t) = {\bf B}(\vec{0},1) \cup {\bf G}(0,t,1,2)$. Then $H(\phi(t),\phi(0)) = 1$. $V(\phi(t),\phi(0)) = 3t/2$. It is easy to show, using lemma~\ref{lemProbDist}, that for any $\psi$, $\lim_{t \mbox{$\rightarrow$} 0^{+}} \: W^{\psi}(\phi(t),\phi(0))=0$. Thus $\phi$ is continuous with respect to $V$ and to $W^{\psi}$ but not with respect to $H$. \begin{theorem} \label{thmStarHausdorffSymDiff} Over $\mathcal S$, the Hausdorff metric is not finer than either the symmetric-difference metric or the Wasserstein metrics. \end{theorem} {\bf Proof:} Let ${\bf P} = {\bf B}(\vec{0},2)$. For $k=1,2 \ldots$ let ${\bf Q}_{k} = {\bf B}(\vec{0},1) \cup \bigcup_{i=1}^{k} {\bf G}(2 \pi i/k, 2\pi/k^{2},1,2)$ (Figure~\ref{figPorcupine1}). That is, ${\bf Q}_{k}$ is the unit ball plus $k$ evenly spaced wedges of width $2\pi/k^{2}$ in the annulus between radius 1 and radius 2. As $k$ goes to infinity, the wedges get denser and denser within the ball of radius 2, but the total area of the wedges is $3\pi/k$. Thus $H({\bf P},{\bf Q}_{k})$ is of order $1/k$, but $V({\bf P},{\bf Q}_{k}) = 3\pi - 3\pi/k$. Thus the sequence ${\bf Q}_{k}$ converges to $\bf P$ with respect to the Hausdorff metric but not with respect to the symmetric-difference metric.
To show that $W^{\psi}({\bf Q}_{k},{\bf P})$ does not converge to 0, note that the fraction of the area of ${\bf Q}_{k}$ that is in the central ball is $\pi/(\pi+3\pi/k)$. Thus as $k \mbox{$\rightarrow$} \infty$, any uniform function $\gamma$ from ${\bf Q}_{k}$ to {\bf P} must essentially spread the central ball out over all of {\bf P}; the wedges become increasingly irrelevant. So $\lim_{k \mbox{$\rightarrow$} \infty} W^{\psi}({\bf Q}_{k},{\bf P}) = W^{\psi}({\bf B}(\vec{0},1),{\bf P})$. \begin{figure} \begin{center} \includegraphics[width=4in]{Porcupine1.png} \end{center} \caption{Proof of theorem~\ref{thmStarHausdorffSymDiff}} \label{figPorcupine1} \end{figure} \begin{theorem} \label{thmStarHausdorffWasserstein} Over $\mathcal S$, no Wasserstein metric is finer than the symmetric-difference metric. \end{theorem} {\bf Proof:} We modify the example from the proof of theorem~\ref{thmStarHausdorffSymDiff} by making the central circle much smaller than the wedges. Let ${\bf P} = {\bf B}(\vec{0},2)$. For $k=1,2 \ldots$ let ${\bf Q}_{k} = {\bf B}(\vec{0},1/k) \cup \bigcup_{i=1}^{k} {\bf G}(2 \pi i/k, 2\pi/k^{2},1/k,2)$ (Figure~\ref{figPorcupine2}). The combined area of the wedges is approximately $4\pi/k$, while the area of the central circle is $\pi/k^{2}$. Define the mapping $\gamma$ from ${\bf Q}_{k}$ to {\bf P} so that, on the central circle, $\gamma$ is the identity, and, on the wedges, $\gamma$ spreads out the wedges uniformly in concentric circles so that the entire circle {\bf P} is covered. \begin{quote} For ${\bf x} \in {\bf B}(\vec{0},1/k)$, $\gamma({\bf x})={\bf x}$. For ${\bf x} \in {\bf G}(2 \pi i/k, 2\pi/k^{2},1/k,2)$, if ${\bf x}$ has polar coordinates $\la r,\theta \ra$, then $\gamma({\bf x})$ has polar coordinates $\la r, 2 \pi i/k + k (\theta - 2 \pi i/k) \ra$. \end{quote} Let $\Gamma$ be the distribution generated by $\gamma$. Almost all the mass in ${\bf Q}_{k}$ is in the wedges; in $\Gamma$ this mass is distributed evenly over the annulus $1/k < r < 2$. The density of $\Gamma$ over the inner circle ${\bf B}(\vec{0},1/k)$ is much larger, but that circle is small, so the total mass there is small. Therefore, using lemma~\ref{lemProbDist}, the distribution $\Gamma$ is close in Wasserstein distance to $U_{\bf P}$. Moreover, $\gamma$ moves each point a distance of at most order $1/k$; hence $W^{\psi}(U_{{\bf Q}_{k}},\Gamma)$ is small. So for every $\psi$, $W^{\psi}({\bf Q}_{k},{\bf P})$ converges to 0 as $k \mbox{$\rightarrow$} \infty$. However, $V({\bf Q}_{k},{\bf P}) = 4\pi - v({\bf Q}_{k})$, which converges to $4\pi$. \begin{figure} \begin{center} \includegraphics[width=4in]{Porcupine2.png} \end{center} \caption{Proof of theorem~\ref{thmStarHausdorffWasserstein}} \label{figPorcupine2} \end{figure} To compare Wasserstein functions over $\mathcal S$, we define a history analogous to {\bf History}.7.$\psi$. {\bf History}.9.$\psi$. Let $\psi(x)$ be a continuous function such that $\psi(0)=0$ and $\lim_{x \mbox{$\rightarrow$} \infty} \psi(x) = \infty$. \\ Let $\zeta$ be the inverse of $\psi$. Define the history $\phi^{\psi}(t)$ as follows: $\phi^{\psi}(0) = {\bf B}(\vec{0},1)$. \\ $\phi^{\psi}(t) = {\bf B}(\vec{0},1) \cup {\bf G}(0,t/\zeta^{2}(1/t),1,\zeta(1/t))$ (figure~\ref{figHistory9}). \begin{figure} \begin{center} \includegraphics[height=3in]{History9.png} \end{center} \caption{History 9.$\psi$, with $\psi(t)=|t|$} \label{figHistory9} \end{figure} \begin{lemma} \label{lemWassersteinStar1} Let $\beta$ be a Mulholland function.
Let $\alpha(x)$ be a continuous function such that $\alpha(0)=0$ and $\lim_{x \mbox{$\rightarrow$} \infty} \alpha(x) = \infty$. \\ Let $\phi^{\alpha}(t)$ be as in History 9.$\alpha$. Then \[ \lim_{t \mbox{$\rightarrow$} 0^{+}} W^{\beta}(\phi^{\alpha}(t),\phi^{\alpha}(0)) = \left\{ \begin{array}{ll} 0 & \mbox{if } \lim_{x \mbox{$\rightarrow$} \infty} \beta(x)/\alpha(x) = 0 \\ \infty & \mbox{if } \lim_{x \mbox{$\rightarrow$} \infty} \beta(x)/\alpha(x) = \infty \end{array} \right. \] \end{lemma} {\bf Proof:} (Informal, analogous to the proof of lemma~\ref{lemWasserstein1}.) A function $\gamma_{t}({\bf x})$ that transforms $\phi(0)$ into $\phi(t)$ involves, to order of magnitude, moving a total of $t$ mass a distance of $\alpha^{-1}(1/t)$. Therefore the integral $I^{\beta}(\gamma_{t})$ is roughly $t \cdot \beta(\alpha^{-1}(1/t))$. The Wasserstein distance is $W^{\beta}(\phi(0),\phi(t)) \approx \beta^{-1}(t \cdot \beta(\alpha^{-1}(1/t)))$. So if $\beta(x) \ll \alpha(x)$ as $x \mbox{$\rightarrow$} \infty$, then, as $t \mbox{$\rightarrow$} 0^{+}$, $\beta(\alpha^{-1}(1/t)) \ll 1/t$, so $I^{\beta}(\gamma_{t})$ and $W^{\beta}(\phi(0),\phi(t))$ go to 0; if $\beta(x) \gg \alpha(x)$, then, as $t \mbox{$\rightarrow$} 0^{+}$, $\beta(\alpha^{-1}(1/t)) \gg 1/t$, so $I^{\beta}(\gamma_{t})$ and $W^{\beta}(\phi(0),\phi(t))$ go to $\infty$. \begin{lemma} \label{lemStarWasserstein} Let $\alpha$ and $\beta$ be Mulholland functions with $\alpha(x) \ll \beta(x)$ as $x \mbox{$\rightarrow$} \infty$. Then over $\mathcal S$, $W^{\alpha}$ is not finer than $W^{\beta}$. \end{lemma} {\bf Proof:} Let $\zeta(x) = \sqrt{\alpha(x)\beta(x)}$. By lemma~\ref{lemWassersteinStar1}, $\phi^{\zeta}(t)$ is continuous relative to $\Topo_{W^{\alpha}}$ but discontinuous with respect to $\Topo_{W^{\beta}}$. \begin{theorem} \label{thmStarWasserstein} Let $\alpha$ and $\beta$ be Mulholland functions with $\alpha(x) \ll \beta(x)$ as $x \mbox{$\rightarrow$} \infty$. Then over $\mathcal S$, $W^{\beta}$ is strictly finer than $W^{\alpha}$. \end{theorem} {\bf Proof:} Immediate from lemmas~\ref{lemWasserstein3} and \ref{lemStarWasserstein}. \begin{theorem} \label{thmStarWassersteinSymDiff} Over $\mathcal S$, for any Mulholland function $\alpha$, the symmetric-difference metric is not finer than the Wasserstein metric $W^{\alpha}$. \end{theorem} {\bf Proof:} Using lemma~\ref{lemWassersteinStar1}, if $\psi = \sqrt{\alpha}$ then the history $\phi^{\psi}$ defined in {\bf History}.9.$\psi$ is continuous relative to the symmetric-difference metric (by construction, $V(\phi^{\psi}(t),\phi^{\psi}(0))$ is of order $t$) but not with respect to the metric $W^{\alpha}$. \subsection*{Acknowledgements} Thanks to Giorgio Stefani for helpful information; in particular, for drawing my attention to the quermassintegral as a useful measure.
\section{Introduction} Despite some disagreement over the last few decades, many physicists have maintained that absolute time was displaced by Einstein's relative time. Contrary to that tradition, William Lane Craig has long argued that relativity should be supplanted by an alternative, (supposedly) empirically indistinguishable, theory called \emph{Neo-Lorentzianism} (\cite{Craig_Rel:1990, Craig:1999, Craig:2001, craig_2001, Craig_balashov_reply, Craig:2008, Craig_Bergson}).\footnote{The case that relativity and the A-theory of time are not compatible has been presented in various places, but see \cite{Rietdijk:1966, Putnam:1967}; \cite[201, 303-304]{penrose:1989}; \cite{Petkov:2006, RomeroPerez:2014}. For work by physicists discussing Neo-Lorentzian theories, see \cite{Builder:1958, Prokhovnik:1963, Prokhovnik:1964a, Prokhovnik:1964b, Prokhovnik:1973, Prokhovnik:1986, Bell:1976, MacielTiomno:1985, MacielTiomno:1989a, MacielTiomno:1989b}. Balashov and Janssen (\citeyear{BalashovJanssen:2003}) have offered a masterful reply to Craig's Neo-Lorentzianism, to which Craig replies in his \citeyear{Craig_balashov_reply}. For a recent critical discussion of the view that absolute time is better accommodated by General Relativity than by Special Relativity because absolute time can be associated with cosmic time, as maintained by Craig and certain other A-theorists, see \cite{Read:2020}.} Elsewhere, Craig and his sometime co-author James Sinclair have argued that contemporary physical cosmology strongly supports the conclusion that the universe began to exist at a finite time in the past (\cite{Craig:1979, Craig:1992, Craig_Smith_1993A, Craig_1993_Criticism, CraigSinclair:2009, CraigSinclair:2012, CarrollCraig:2016}). Moreover, they have argued that \emph{beginning to exist} -- in their sense of the phrase -- is an irreducibly tensed notion, so that the universe could have begun to exist only if the A-theory of time -- that is, the view that there are objectively and irreducibly tensed facts -- is true (\cite[183-184]{CraigSinclair:2009}, \cite{Craig:2007}, \cite[337-338]{Craig_Rel:1990}); this conclusion is shared by many other philosophers and theologians, including William Godfrey-Smith (\citeyear{Godfrey-Smith:1977}), Bradley Monton (\citeyear[94]{Monton:2009}), David Oderberg (\citeyear[146]{Oderberg:2003}), Ryan Mullins (\citeyear[135-136, 143, 147]{Mullins:2016}; \citeyear[43]{Mullins:2011}), and Felipe Leon (\citeyear[62]{Leon:2019}). Other authors, e.g., \cite[11]{Reichenbach:1971}, hold that B-theory entails that nothing objectively begins or changes and so are implicitly committed to a view close to Craig and Sinclair's. For Craig, in light of the evidence supporting relativity, Neo-Lorentzianism is the only plausible choice for the A-theorist. If so, whether the universe began to exist at a finite time in the past stands or falls with Neo-Lorentzianism. I will refer to the conjunction of the A-theory and Neo-Lorentzianism as ANL. Here, I argue that when we take ANL much more seriously than many of Craig and Sinclair's interlocutors have in the past, we find that ANL invites a form of skepticism in tension with the view that present physical cosmology supports a beginning of the universe. After unpacking and clarifying ANL, I consider two families of cosmological models.
The first family of cosmological models are singular cosmologies (as explicated below); Craig and Sinclair have argued that singular cosmological models depict the universe as having had an \emph{ex nihilo} beginning at a finite time in the past. I argue that ANL invites skepticism in tension with Craig and Sinclair's treatment of singular cosmological models for two reasons. First, ANL severs the connection between our universe's matter-energy content and chronogeometric structure in such a way that we can no longer justifiably infer that our universe began to exist. Second, Craig and Sinclair have failed to adequately justify the adoption of a specific metric of absolute time. I offer an alternative metric for the duration of absolute time that is at least as good as (and possibly superior to) the one Craig and Sinclair favor. On the alternative metric, past time is infinite. For Craig and Sinclair, that the universe began to exist requires that past absolute time is finite. Without a way to adjudicate which of the two metrics -- if either -- corresponds to absolute time, we cannot infer whether past absolute time is finite. What these two epistemic challenges share is the realization that a beginning of the universe is not a directly observable feature of the universe; for that reason, the inference that the universe began to exist requires the conjunction of observation and a substantive physical theory. By displacing orthodox relativity with an underspecified alternative, Craig and Sinclair have effectively ripped away the ability they might otherwise have had to infer that the universe began. Along the way, we will see that there are additional reasons to think that while the conjunction of observation and a substantive physical theory may be necessary for inferring that the universe began, they may not be sufficient. In thinking about singular cosmological models, I develop a set of criteria that friends of ANL can deploy to determine whether, according to any given cosmological model, physical reality had a beginning in the finite past. When I turn to a second family of cosmological models -- bounce cosmologies -- I utilize those tools in considering whether they represent physical reality as having begun to exist in the finite past. According to the orthodox interpretation of bounce cosmologies, a prior contracting universe birthed our expanding universe (e.g., \cite{Brandenberger2017, AshtekarSingh:2011, AgulloSingh:2017, ShtanovSahni:2003, Cai:2014jla, IjjasSteinhardt:2017, Ijjas_2018, Ijjas:2019pyf, Steinhardt:2002, steinhardt_2007, Corda:2011, Odintsov:2015, Oikonomou:2015}). Craig and Sinclair have argued for the reinterpretation of bounce cosmologies as the ex nihilo birth of two universes (a ``double bang'').\footnote{\cite{Huggett:2018} likewise argued that a variety of bounce cosmologies, particularly those developed utilizing loop quantum gravity, should be interpreted to involve a double bang and not a bounce. However, Huggett and Wüthrich's interpretation, as well as their argument for their interpretation, relies on the view that, in loop quantum gravity, time is explicable in terms of yet more fundamental physical entities. Craig and Sinclair's view involves a distinct metaphysics of time in which time is absolute and cannot be explicated in terms of yet more fundamental physical entities. 
Moreover, given that the non-fundamentality of time in quantum gravity plausibly commits one to some variety of eternalism (see, e.g., \cite{Bihan:2020}), Huggett and Wüthrich's interpretation is plausibly incompatible with the universe having a beginning in Craig and Sinclair's sense.} In light of Craig's Neo-Lorentzianism, I extend arguments recently offered in \cite{Linford:2020a, Linford:2020b} against Craig and Sinclair's interpretation of bounce cosmologies. As I argue, just as ANL strips away the ability to infer that singular cosmologies include a beginning of the universe, so, too, ANL blocks the inference that bounce cosmologies depict physical reality as having begun to exist. \section{The Beginning of the Universe, the Omphalos Objection, and ABIDO} According to what J. Brian Pitts (\citeyear{Pitts:2008}) calls the \emph{metrical conception} of the beginning of the universe, the universe began to exist if space-time is finite to the past. Following Smith (\citeyear{Smith:1985}), Craig and Sinclair endorse the metrical conception: \begin{quote} [...] we can say plausibly that time begins to exist if for any arbitrarily designated, non-zero, finite interval of time, there are only a finite number of isochronous intervals earlier than it; or, alternatively, time begins to exist if for some non-zero, finite temporal interval there is no isochronous interval earlier than it \cite[99]{CraigSinclair:2012}. \end{quote} However, Craig and Sinclair elsewhere indicate that, since beginning to exist is a tensed notion, the universe could only have begun to exist if the A-theory of time is true. Presentists, like Craig, maintain that future events do not yet exist, but come into being by becoming present, and past events become past by going out of existence. As Fay Dowker describes, ``The universe com[ing] into being [...] corresponds to the passage of time'' \cite[133]{Dowker:2020}. In contrast, B-theorists maintain that the distinction between the past, present, and future is perspectival and so future states of affairs do not come into being by becoming present. For this reason, B-theory has sometimes been described as the view that space-time is objectively unchanging and eternal.\footnote{Of course, there are tenseless theories of change and perhaps one could utilize a tenseless theory of change to construct a tenseless conception of the beginning of time. Whatever one might think of the attempt to develop an account of the beginning of time utilizing a tenseless theory of change, my point is only that \emph{A-theorists} typically think objective change, and so the universe coming into being, requires the A-theory of time. Moreover, I take it that the A-theoretical conception of a beginning is particularly salient for theology. On many versions of A-theory, part of what it means for time to pass is (roughly) that each moment (somehow) produces a successor moment, with caveats for the possibility that time is continuous. And on the view that each moment must be produced -- either by another moment or by something else -- we can ask what could have produced the first segment of time, since the first segment of time could not have been produced by another time.
Barring backwards causation -- which may not be possible on the A-theory anyway -- only some entity external to the temporal order could have produced the first segment of time.} As Craig and Sinclair write, \begin{quote} On a B-Theory of time, the universe does not in fact come into being or become actual at the Big Bang; it just exists tenselessly as a four-dimensional space-time block that is finitely extended in the earlier than direction. If time is tenseless, then the universe never really comes into being, and therefore, the quest for a cause of its coming into being is misconceived \cite[183-184]{CraigSinclair:2009}. \end{quote} Elsewhere, Craig writes, ``The doctrine of creation involves an important metaphysical feature which is under-appreciated: it commits one to a tensed or A-theory of time. For if one adopts a tenseless or B-theory of time, then things do not literally come into existence. [...] The universe does not come into being on a B-theory of time, regardless of whether it has a finite or an infinite past relative to any time'' \cite[319]{Craig:2007}. In fact, Craig and Sinclair define \emph{beginning to exist} as a tensed fact. Quoting from the definition that they offer in their \cite[197]{CraigSinclair:2009}: \begin{enumerate} \item $x$ begins to exist at $t$ iff $x$ comes into being at $t$. \item $x$ comes into being at $t$ iff \begin{enumerate} \item $x$ exists at $t$, and the actual world includes no state of affairs in which $x$ exists timelessly, \item $t$ is either the first time at which $x$ exists or is separated from any $t* < t$ at which $x$ existed by an interval during which $x$ does not exist, and \item $x$’s existing at $t$ is a tensed fact.\footnote{Craig \citeyear[99]{Craig:2002}; \citeyear[318]{Craig:2007} offer similar definitions; also see the discussion in \cite[337-338]{Craig_Rel:1990}.} \end{enumerate} \end{enumerate} Consequently, Craig and Sinclair implicitly accept that, in order to establish that the universe began to exist in Craig and Sinclair's sense, one must establish three claims. First, one must establish that the A-theory of time is true. Second, one must establish that we have good reason for identifying a past (closed or open) boundary to the universe, such that the universe did not exist before the boundary. And, third, one must establish that the span of absolute time between the past boundary and the present is finite. I do not claim that these three are sufficient for establishing that the universe began to exist, though Craig and Sinclair have not suggested any additional conditions, and I do not claim that there is no other conception of the beginning of the universe.\footnote{For a defense of the view that the universe beginning to exist does not require tensed facts, see \cite{Loke:2017}.} Instead, I claim that if these three desiderata cannot be satisfied, then the universe lacks a beginning in the sense that Craig and Sinclair have defended. In what follows, I examine the three criteria in order beginning with a somewhat lengthy review of ANL. Readers who are familiar with Neo-Lorentzianism may want to skip to section \ref{IDing_the_boundary} and refer back to section \ref{A-theory} only when necessary. \subsection{\label{A-theory}The A-theory of Time and Neo-Lorentzianism} Craig motivates his acceptance of A-theory on the grounds that the ``objectivity of tense and the reality of temporal becoming'' is a ``properly basic belief''. 
Craig goes on to write that ``belief in the reality of tense and temporal becoming enjoys such powerful positive epistemic status for us that not only can we be said to know that tense and temporal becoming are real, but also that this belief constitutes an intrinsic defeater-defeater which overwhelms the objections brought against it'' \cite[138]{Craig:2000}. For Craig, that time passes is self-evident, immediate, and undeniable. Craig also maintains that only A-theory can successfully explain the directionality and anisotropy of time (\cite{Craig:1999}). Craig has argued that presentism is the only consistent version of A-theory and that presentism is incompatible with any plausible understanding of Special Relativity. Craig thinks that the only plausible way to save the A-theory of time, given the empirical adequacy of relativity within the relevant domain, is to accept a view I will call `Neo-Lorentzianism'. To introduce Neo-Lorentzianism, and as a warmup exercise, let's first consider how Neo-Lorentzianism relates to orthodox Special Relativity. I will turn to General Relativity later, though I should note that, despite their interest in cosmology, neither Craig nor his co-authors have offered (or cited) a full-fledged Neo-Lorentzian alternative theory to General Relativity. In pre-relativistic physics, velocities are relative to reference frames. James Clerk Maxwell's discovery that his equations entail a unique value for the speed of light presented late nineteenth century physicists with two options. Either Maxwell's Equations do not hold in all reference frames or the measured value of the speed of light is the same in all reference frames. If the speed of light is the same in all reference frames, the Galilean transformations need to be replaced with Lorentz transformations (or something more complex). In turn, there are two ways to motivate the Lorentz transformations. First, in Special Relativity, one can accept the Lorentz transformations as primitives. Lengths, temporal durations, and simultaneity are relativized and no longer considered absolute; nonetheless, as Hermann Minkowski pointed out, the unification of space and time in a four-dimensional structure -- space-time -- can be considered absolute. Alternatively, following Lorentz, one can introduce a new set of forces that act on moving bodies to mimic the appearance of Special Relativity. Lorentzians agree that orthodox relativity is empirically adequate within its domain of application,\footnote{Some authors have argued that at least some forms of Neo-Lorentzianism can be empirically tested and so can be empirically distinguished from Special Relativity; see \cite{MacielTiomno:1985, MacielTiomno:1989a, MacielTiomno:1989b, Petkov:2006}.} but deny Minkowski's metaphysics. Their goal has been to restore absolute simultaneity, with the consequence that space and time are independent absolutes.\footnote{Neo-Lorentzianism does not, in itself, provide a conclusive answer to the debate between substantivalists and relationalists. As \cite[1]{Read:2020} explain, absolute time is ``defined as enshrining a universal present moment, objective temporal passage, and tensed facts''. 
Consequently, the existence of absolute space and absolute time does not entail that time is substantival, but does entail that spatial and temporal relations are observer independent.} The trouble between presentism and Special Relativity arises because a moment of time, e.g., the present, as usually understood, consists of a collection of events, that is, space-time points, all of which are objectively simultaneous. Special Relativity appears to entail that there are no objectively simultaneous events. Therefore, if the objective present must consist of more than one event, Special Relativity appears to entail that there is no objective present and so to entail that presentism is false. Craig concludes that relativity should be supplanted by an alternative theory, i.e., Neo-Lorentzianism, that restores an absolute present.\footnote{To be sure, Neo-Lorentzianism is one among a number of different strategies that A-theorists might pursue in reply and A-theory might still be true if Neo-Lorentzianism is false; for recent surveys of possible replies, see (\cite{GilmoreCostaCalosi:2016, Baron:2018}). Nonetheless, Craig has argued against alternative A-theoretic strategies so that, on his view, the A-theorist has no plausible choice but to opt for Neo-Lorentzianism and so for ANL.} For present purposes, let's set aside adjudicating the debate between A-theorists in order to focus on ANL. Neo-Lorentzians draw inspiration from Newton's \emph{Principia}. Newton maintained that absolute space should be distinguished from any physical measure of space; the change in spatial representation from one reference frame to another reflected differences in physical measures and not differences in space, as space is in itself. Likewise, Newton maintained a distinction between absolute time and physical measures of time \cite[6]{Newton:1974}; even though Newton did not consider the possibility that time would be measured differently from distinct reference frames, we can expect Newton to have said that differences in physical measures of time do not entail differences in how time is in itself. Neo-Lorentzians argue that, in order to maintain space and time as independent absolutes, we can re-deploy Newton's strategy with Einstein's coordinate transformations. Poincar{\'e} considered an important thought experiment that helps to illustrate this strategy. Poincar{\'e} imagined the interior of a sphere inhabited by creatures who carry rulers (\citeyear[55-57]{Poincare:2001b}). According to the laws of the world the creatures inhabit, as any object $O$ approaches the boundary of the sphere, the extension of $O$ in the direction of the sphere's radius shrinks to zero. As Poincar{\'e} points out, the creatures would mistakenly infer their world to be infinitely large. The lesson is that any attempt to measure the geometry of one's world will involve a decision about which effects are due to geometry and which effects are due to forces acting on one's measuring instruments. By carefully selecting a set of forces, measurements that appear to confirm Minkowski space-time can be rendered consistent with Newtonian absolute space and time. Lorentz considered the possibility that forces conspire to obscure our world's true chronogeometry in such a way that the speed of light appears the same from every reference frame.
For example, in order to maintain equilibrium with the surrounding electromagnetic field, a moving spherical distribution of charge would need to foreshorten in precisely the way that Einstein predicted objects foreshorten when they move close to the speed of light. Neo-Lorentzians employ different mechanical models that they think might explain relativistic phenomena. For example, Simon Prokhovnik (\citeyear[79-89]{Prokhovnik:1986}), an important proponent of Neo-Lorentzianism (\citeyear{Prokhovnik:1963, Prokhovnik:1964a, Prokhovnik:1964b, Prokhovnik:1973, Prokhovnik:1986}), points out that if Newtonian gravitation is altered to include the fact that gravitational influences propagate at the speed of light (as previously suggested in, e.g., \cite{bastin:1960}), then moving bodies would need to foreshorten to maintain equilibrium with the gravitational field. Prokhovnik argues that the other predictions made by Special Relativity -- including the appearance of the relativity of simultaneity -- can similarly be recovered from gravitational phenomena. More recent work has shown that while one can always redescribe relativistic effects as the result of a universal force, the forces that one must postulate have fairly exotic (arguably, implausible) properties (\cite{WeatherallManchak:2014}).\footnote{An anonymous reviewer has brought to my attention that teleparallel gravity may offer another approach to redescribing relativistic effects as the result of forces, e.g., \cite{Knox:2011}.} Although Craig consulted Prokhovnik while writing his book-length defense of ANL (\citeyear{Craig:2001}) and cites Prokhovnik throughout, Craig sets aside whether Prokhovnik's explanation in terms of gravitation is correct \cite[181]{Craig:2001}. Following \cite{MacielTiomno:1989a}, Craig thinks of Neo-Lorentzianism as a family of theories: \begin{quote} A theory may be classified as [Neo-]Lorentzian just in case it affirms (i) physical objects are $n$-dimensional spatial objects which endure through time, (ii) the round trip vacuum propagation of light is isotropic in a preferred (absolute) reference frame $R_o$ (with speed $c=1$) and independent of the velocity of the source, and (iii) lengths contract and time rates dilate in the customary special relativistic way only for systems in motion with respect to $R_o$ \cite[178]{Craig:2001}. \end{quote} On Craig's brand of Neo-Lorentzianism, physical objects are three-dimensional entities that endure through time. The present can be approximated as a three-dimensional hypersurface that changes as time passes.\footnote{There are some additional nuances in Craig's view, since Craig denies the existence of instants. We might have expected Craig to endorse the existence of instants given his presentism, but Craig has long maintained that instants do not exist. For example, Craig argues that ``only intervals of time are real or present and that the present interval (of arbitrarily designated length) may be such that there is no such time as `the present' \emph{simpliciter}; it is always `the present hour', `the present second', etc. The process of division is potentially infinite and never arrives at instants'' \cite[260]{Craig_1993_Criticism}; also see \cite[179-180]{Craig:2000_extent}. Craig maintains that time is gunky, i.e., that every interval of time has proper sub-intervals, and he maintains that time cannot be decomposed into instants one of which is the present. I confess that I find the conjunction of presentism and gunky time difficult to understand.
For example, if presentism is the thesis that the only things that exist simpliciter are present and that there is no present simpliciter -- as Craig's gunky time seems to entail -- does nothing exist simpliciter?} \subsection{\label{IDing_the_boundary}Identifying the Universe's Past Boundary} Having reviewed the first criterion for establishing that the universe began -- viz., ANL -- let's turn to the second criterion. The second criterion involves a past boundary for the universe beyond which the universe's past cannot be extended. In pre-relativistic physics, the universe's contents cannot be used to determine when (or whether) time began. Consider, for example, an acorn. Suppose that God created the acorn \emph{ex nihilo} last Thursday. In that case, we'd have no indication that the acorn did not exist before last Thursday because the acorn's existence instead suggests the existence of a pre-existing tree that gave rise to the acorn. Likewise, the entire universe could have been created last Thursday, even though we have memories of prior times. This suggests that a beginning of the universe would make no difference to the universe's matter-energy content and so would be empirically undetectable. These considerations give rise to a general skeptical problem: \begin{enumerate} \item No matter when the universe began, the universe's contents suggest a prior history. \item If no matter when the universe began, the universe's contents suggest a prior history, then we cannot empirically determine when the universe began. \item Therefore, we cannot empirically determine when the universe began. \end{enumerate} In \citeyear{Gosse:1857}, Philip Gosse voiced a related argument in his book \emph{Omphalos: An Attempt to Untie the Geological Knot}; therefore, call this worry the \emph{Omphalos Objection}.\footnote{For Gosse, the view that we cannot empirically determine whether -- or when -- the universe began was part of a strategy to render a literal reading of the Bible compatible with evidence from geology that the Earth is older than the Biblical narrative appears to indicate. Nonetheless, Gosse's strategy has never enjoyed popularity -- as Russell wrote, ``nobody can believe it'' (\citeyear[68]{Russell:1961}) -- and, in any case, is straightforwardly incompatible with any attempt to empirically determine the age of either the Earth or the universe.} Perhaps one could object that even if we do not know when the universe began, one may still be able to infer that the universe began. For example, perhaps one can produce a rationalist argument that any temporal (or causal) series must have a finite length. However, this would be an admission that, at least for friends of ANL, an \emph{empirical} case for a beginning of the universe does not succeed and so would be a concession to the arguments that I offer in this paper. I have difficulty conceiving of a situation in which we have \emph{empirical} grounds for inferring that the universe began to exist but we do not have empirical grounds for inferring \emph{when} the universe began to exist. In the case of the Earth, we have evidence that the Earth began to exist precisely because we have evidence for when the Earth began to exist. If we accept Craig and Sinclair's analysis of \emph{beginning to exist}, an empirical case for the beginning of the universe would presumably share this feature with the empirical case for the Earth's beginning. Relativistic physics appears to resolve the Omphalos Objection for two reasons.
First, among relativists, a common working assumption -- that is, the assumption that space-time is maximally extended -- prohibits truncated space-times (e.g., \cite[32]{Earman:1995}). As Earman has argued, allowing space-time to be prematurely truncated reintroduces a version of the Omphalos Objection and results in a skeptical catastrophe \cite[119-122]{Earman:1977}. Second, although we cannot observe the chronogeometric structure of space-time directly, we can observe the distribution of our universe's matter-energy content. In turn, General Relativity ties the matter-energy distribution to the geometry of space-time in such a way that, by observing the matter-energy content, we may be able to infer that our space-time has a past open boundary beyond which space-time cannot be extended.\footnote{A similar point was previously raised in, e.g., (\cite{Weingard:1979}).} Arguably, this is precisely the case in classical models of the Big Bang. For example, in Friedmann-Lema{\^i}tre-Robertson-Walker (FLRW) models, the scale factor is zero for some value of the cosmic time, thereby resulting in a corresponding divergence in the energy density. The divergence in the energy density corresponds to a divergence in the Ricci scalar. The result is a space-time singularity,\footnote{A divergence in the various curvature parameters is neither necessary nor sufficient for space-time to be singular (\cite{Earman:1995, Curiel:1999, Curiel:2021, Joshi:2014}). For example, so-called conical singularities are well-known examples of singularities without associated curvature pathology. However, the singularity that represents the ``Big Bang'' in cosmologically relevant space-times is associated with a divergence in the Ricci scalar and the energy density.} i.e., an open boundary beyond which space-time cannot be extended. Thus, so long as we realistically construe the relationship between the universe's matter-energy content and the geometry of space-time, and assume that space-time is maximally extended, then there is hope that a survey of the universe's matter-energy content would indicate a past open boundary to space-time. (I say `hope' because, as we will see below, there is reason to doubt that this hope can be fulfilled.) My description of Craig and Sinclair's views would be incomplete without remarking on the fact that most physicists view divergences in physical theories with suspicion.\footnote{While the predictions of a theory within a specific domain may provide some inductive evidence that the theory will apply to neighboring domains, no one should have confidence that the theory will apply to domains that are arbitrarily distant. Consider approaching a point $p$ where the energy density diverges. As one approaches $p$, one encounters arbitrarily large energy densities and so one inevitably encounters energy densities which surpass the domain of applicability of General Relativity before one reaches $p$. For that reason, ceteris paribus, we should doubt the predictions made by General Relativity within the vicinity of curvature singularities.} For that reason, most physicists think we should not draw conclusions about the beginning of the universe from singular cosmological models. Craig and Sinclair maintain instead that singular cosmological models \emph{do} provide strong evidence for a beginning of the universe because they argue that there will be features in a future quantum gravity theory that correspond to the cosmological singularities in FLRW models. 
As they write, ``There may be no such things as singularities per se in a future quantum gravity formalism, but the phenomena that [General Relativity] incompletely strives to describe must nonetheless be handled by the refined formalism, if that formalism has the ambition of describing our universe'' \cite[106]{CraigSinclair:2012}. I don't find this reply convincing. However, whether or not we should look upon divergences in physical theories as suspicious has been discussed at length elsewhere and I set the issue aside for the purposes of this paper. For the sake of clarity, the argument that I am considering here -- which rules out extendable space-times in order to avoid the Omphalos Objection -- is distinct from another set of arguments that have been offered by John Earman and criticized by J.B. Manchak. For example, Earman (\citeyear[32-33]{Earman:1995}) has argued that extendable space-times can be thrown away by invoking metaphysical principles such as the principle of sufficient reason. Manchak rejects Earman's metaphysical argument. Moreover, Manchak (\citeyear{Manchak:2011}) has argued that we cannot determine, on empirical grounds, whether the space-time we inhabit is maximally extended. In any case, neither argument is the argument that I am considering here; the argument under consideration does not rule out extendable space-times for metaphysical or empirical reasons. Instead, the argument that I am considering rules out extendable space-times on epistemic grounds; that is, if we allow for extendable space-times, then a skeptical catastrophe results. For example, if space-time can be arbitrarily truncated, then all of our memories -- including our memories of whatever scientific experiments or observations we take to support our best theory of space-time -- could have been created ex nihilo last Thursday. As Bertrand Russell wrote, ``We may all have come into existence five minutes ago, provided with ready made memories, with holes in our socks and hair that needed cutting'' (\citeyear[68]{Russell:1961}).\footnote{An anonymous reviewer suggests that we may have reasons for trusting our memory that differ from the reasons we have for trusting other sorts of records of the past. For example, perhaps we have some reason to think that God would ensure our memory of the past is veridical. If so, we may have reason to avoid truncating the past during any period that any person remembers without invoking a general prohibition on truncated space-times, so that we may be able to avoid a skeptical catastrophe without invoking a prohibition on truncated space-times. Nonetheless, we ordinarily take our empirical access to the past to be based on more than memories. Importantly, successful cosmological science requires ampliative inferences to time periods no mere mortal could remember, e.g., the early universe. Perhaps we could once more invoke God to secure the veridicality of our records or of other ampliative inferences we might make to the past, but this begins to look like yet another prohibition on truncated space-times, albeit a prohibition invoking theological premises. Craig and Sinclair are unlikely to pursue this route, for they utilize cosmological science in defense of God's existence, and naturalists are unlikely to find a theological defense of a truncation prohibition principle convincing.} And, if we recognize this as a genuine possibility, we would have undermined whatever support we take ourselves to have for our best theory of space-time in the first place. 
Thus, if we accept as a live possibility that space-time could be arbitrarily truncated, we undercut the evidence that we have for our best theory of space-time. And if the evidence we have for our best theory of space-time is undercut, we lose whatever support we would otherwise have had for our best theory of space-time. To avoid the skeptical catastrophe, we should rule out extendable space-times.\footnote{Parallel arguments have been used for including the Past Hypothesis as a fundamental physical principle \cite[116]{albert_2000} and to rule out cosmologies in which Boltzmann brains dominate (\cite{Carroll:2021}).} I have said that General Relativity may provide \emph{hope} for resolving the skeptical catastrophe posed by postulating a beginning of the universe in the finite past. However, these hopes are problematized -- and possibly dashed -- by a series of recent results formulated by Manchak. As I've said, the skeptical catastrophe is supposedly avoided by throwing away space-times that are not maximally extended. However, the failure of a space-time $(M, g_{\mu\nu})$ -- where $M$ is a manifold and $g_{\mu\nu}$ a pseudo-Riemannian metric on $M$ -- to be maximally extended is likely not the only condition that would render $(M, g_{\mu\nu})$ physically unreasonable. If the A-theory of time is true, then the A-theory of time is likely necessarily true, or at least true at all of the physically possible worlds. And the A-theory of time -- or at least the version of A-theory that Craig has defended -- appears to require that space-time is (for example) globally hyperbolic. Suppose, then, that all physically reasonable space-times are globally hyperbolic. In that case, let's say that $(M, g_{\mu\nu})$ is maximally GH-extended just in case $(M, g_{\mu\nu})$ has no globally hyperbolic extensions. Following Manchak (\citeyear{Manchak:2021}), consider that Misner space-time is not globally hyperbolic and so, if only globally hyperbolic space-times are physically reasonable, Misner space-time would not be considered physically reasonable. But there is a region of Misner space-time that is globally hyperbolic. Consider then a model of space-time consisting only of this region excised from Misner space-time; this new space-time, though not maximally extended, is globally hyperbolic. Moreover, as Manchak has argued, this new space-time has no globally hyperbolic extensions. Consequently, the new space-time is maximally GH-extended even though the new space-time is not maximally extended. So: if we say that `maximally GH-extended space-time' and `physically reasonable space-time' are co-extensive, we will have picked out a different collection of space-times than if we say that `maximally extended space-time' and `physically reasonable space-time' are co-extensive. Furthermore, a similar argument can be used to show that if we select some proper subclass of maximally GH-extended space-times satisfying (say) property $P$, then there will generally be space-times that are maximally $P$-extended but not maximally GH-extended. If, for example, `maximally GH-extended space-time' and `physically reasonable space-time' are co-extensive, then the globally hyperbolic region we excised from Misner space-time has a boundary. Therefore, upon discovering local evidence that one inhabits a Misner space-time (for example), whether one would then have grounds for inferring that one's universe has a boundary will depend upon how physical reasonableness should be understood.
Obviously, this creates a conceptual difficulty for how we should understand the relationship between the extendability of space-time and the resolution of the Omphalos Objection. Work is ongoing and there is, as yet, no consensus as to what features make a space-time physically reasonable. The relevant question for Craig and Sinclair is whether a new set of criteria for physical reasonableness can be successfully developed and agreed upon that would allow one to identify a past boundary to space-time by examining the mass-energy-momentum distribution within space-time. For example, if global hyperbolicity does turn out to be the relevant ``mark of physical reasonableness'' and one could show, from the matter-energy distribution, that our space-time cannot be GH-extended beyond some boundary in the finite past, then Craig and Sinclair would have one of the necessary ingredients for a successful empirical case for a beginning of the universe. As yet, such a result has failed to materialize. While Manchak has produced a series of results problematizing the notion of physical reasonableness, Manchak and others have simultaneously produced a series of results casting doubt on our ability to determine the global structure of space-time. Roughly, space-time $(M, g_{\mu\nu})$ is \emph{observationally indistinguishable} from some numerically distinct space-time $(M', g'_{\mu\nu})$ just in case any observer at any arbitrarily chosen point in $(M, g_{\mu\nu})$ cannot determine, from \emph{any} of the data in their past light cone, which of the two space-times they inhabit \cite[412]{Manchak:2011}. As the Manchak-Malament theorem has established (\cite{Malament:1977, Manchak:2009}; also see \cite{Manchak:2021, Norton:2011, Beisbart:2009, Butterfield:2014}), the members of a fairly broad class of space-times are observationally indistinguishable from numerically distinct space-times. To pose an epistemological challenge, one requires a significantly weaker condition than observational indistinguishability, at least as previously defined. Observational indistinguishability imposes a condition on \emph{any} arbitrarily chosen observer and consequently at any space-time point. For us to be unable to empirically distinguish our space-time from a space-time with quite different global properties requires only a condition on the data that we can gather from within our cosmological horizon and within some reasonable proper time. And \emph{this} entails that our space-time is indistinguishable -- as far as we are concerned -- from an even larger class of space-times. In order to distinguish this form of observational indistinguishability from those catalogued in \cite{Malament:1977}, I will denote it \emph{super weak observational indistinguishability}. What is relevant for my purposes in this paper is that singular FLRW space-times -- that is, FLRW space-times such that all time-like or null curves are inextendable beyond some boundary located in the finite past -- are super weakly observationally indistinguishable from space-times containing at least one time-like or null curve that is not bounded to the past. For example, the Borde-Guth-Vilenkin (BGV) theorem (\cite{Borde:2003}) has sometimes been interpreted to show that the universe has a past singular boundary. However, the BGV theorem actually shows that if the average value of a specific generalization of the Hubble parameter along a time-like or null geodesic congruence is positive, then that congruence must be incomplete to the past.
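Schematically -- and suppressing the details of Borde, Guth, and Vilenkin's construction, with $H_{av}$, $\lambda$, and $C$ used here only as placeholder labels rather than the theorem's own notation -- the result can be glossed as a bound of roughly the form
\begin{equation}
H_{av}\,\lambda \leq C,
\end{equation}
where $H_{av}$ is the averaged (generalized) expansion rate along a past-directed time-like or null geodesic, $\lambda$ is that geodesic's proper time or affine length to the past, and $C$ is a constant fixed by the geodesic's kinematic data rather than by the universe's matter content. If $H_{av} > 0$, then $\lambda$ must be finite; that is the sense in which the geodesic is incomplete to the past.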
The resulting congruence can be isometrically embedded either into a space-time with or without a global past boundary. That is, supposing that we inhabited such a congruence, we might see a boundary to our past, even though other observers in the same space-time would have an infinite and unbounded past. So, there is reason to doubt, first, that space-time extendability's relationship to a beginning of the universe has been successfully characterized and, second, that, even given a correct characterization, we could have epistemic access to sufficient data to determine important global properties of the space-time we inhabit, including whether our space-time has a global boundary in the finite past. Nonetheless, we also have reason to think that if a beginning of the universe in the finite past could be empirically established with the resources of relativistic cosmology, then empirically establishing a beginning of the universe in the finite past would draw upon results concerning maximal extendability, or whatever the appropriate notion of extendability turns out to be. Any such result depends upon an inference from the distribution of the matter-energy contents of space-time to the chronogeometry of space-time. That is, results about extendability, in whatever the appropriate sense turns out to be, are likely to be necessary, but not sufficient, for using the resources of relativistic cosmology to establish a beginning of the universe in the finite past. And as I argue in the next section, insofar as there is tension between ANL and our being able to empirically infer the actual chronogeometry of our universe, then there is tension between ANL and the use of extendability results in resolving the Omphalos Objection. \subsubsection{Tension between ANL and the resolution of the Omphalos Objection} I've argued that relativistic cosmology may help in resolving the Omphalos Objection. So long as we realistically construe the relationship between the universe's matter-energy content and space-time curvature, and assume that space-time is maximally extended (in whatever sense turns out to be appropriate), then there is hope that a survey of the universe's matter-energy content would indicate a past boundary to space-time. We've also seen that there may be some reason to think that this hope has been put in doubt. But insofar as relativistic cosmology can be utilized to empirically determine whether our universe began to exist in the finite past, such a determination can be made only by utilizing the observed matter-energy content and the relationship relativity provides us between that content and chronogeometric structure. Friends of ANL do not realistically construe the relationship between the universe's matter-energy content and space-time curvature. They maintain that our epistemic situation with respect to absolute space and absolute time resembles the epistemic situation of Poincar{\'e}'s creatures in that our world's true chronogeometry is hidden from us. If our world's true chronogeometry is hidden from us, then there is the possibility that the universe only appears to have a past boundary. What we need to know is not whether our universe merely \emph{appears} to have a past boundary when measured by physically embodied observers, but enough about physical reality’s relationship to absolute time to know that the universe \emph{really does} have a past boundary. 
To put the point another way, consider a classic argument for space-time conventionalism that builds on Poincar{\'e}'s previously mentioned thought experiment. Poincar{\'e}, Reichenbach, Gr{\"u}nbaum, and other space-time conventionalists maintain that significant chronogeometric features, such as simultaneity, are conventional. For Reichenbach, space-time conventionalism can be supported by noting that any empirical determination of chronogeometry will need to adopt a convention as to which effects are due to chronogeometry and which effects are due to forces that universally act on the objects populating space-time \cite[30-34, 118-119]{Reichenbach:1958}. Philosophy of science has come a long way since the logical positivists. For that reason, although space-time conventionalism still retains proponents, philosophers will tend to see Reichenbach as having moved too quickly; the fact that we cannot empirically determine which effects are due to forces and which effects are due to chronogeometry -- if it is a fact -- does not suffice for the conclusion that there is no fact of the matter concerning which effects are due to forces and which effects are due to chronogeometry. Those who take a realistic interpretation of relativity set the universal forces to zero. In contrast, Craig and Sinclair believe we have independent reason to maintain absolute time and, for that reason, Craig and Sinclair endorse the existence of non-zero universal forces whose effect is to make our world appear as though relativity were true. The trouble is that the past boundary postulated by relativistic cosmology is just yet more chronogeometric structure; what Craig and Sinclair need to provide us is a way to infer, from empirical observations, enough about absolute time so that we can infer that absolute time has a beginning in the finite past. But this doesn't seem possible. As John Norton writes, universal forces are ``entities protected from evidential scrutiny by careful contrivance'' (\cite{Norton:2020}). As in Poincar{\'e} and Reichenbach's original argument, without being able to determine the universal forces operating on bodies, we are likewise left without a way to determine the fundamental chronogeometry. There are various routes Craig and Sinclair might pursue to resolve the tension between inferring that absolute time began in the finite past and their view that the actual chronogeometry is protected from evidential scrutiny. For example, some A-theorists have maintained that space and time have the structure postulated by relativity conjoined with some additional structure that they attribute to absolute time; call this view $GR^+$. Because $GR^+$ is logically stronger than General Relativity, any space-time that is singular with respect to General Relativity is likewise singular with respect to $GR^+$. (Likewise, any space-time that is inextendable with respect to General Relativity for some other reason -- for example, any space-time that has no GH-extension -- will likewise be inextendable with respect to $GR^+$.) Unfortunately, Craig and Sinclair do not maintain $GR^+$. Friends of $GR^+$ maintain that space-time fundamentally has the structure postulated by General Relativity and therefore claim that General Relativity has some identifiable significance for fundamental metaphysics.
In contrast, Craig and Sinclair maintain that space and time do not have the structure postulated by General Relativity; instead, space and time have some altogether different structure, appearing to satisfy General Relativity only because some collection of universal forces -- left unspecified -- distort all of our measuring instruments in just the right way so as to render General Relativity empirically adequate. Whatever the underlying space-time may be, Craig and Sinclair can always postulate some set of universal forces that appropriately distorts chronogeometric measurements. (A similar point was previously made in, e.g., \cite[15]{Read:2020}.) In some sense, this was the point originally made by the space-time conventionalists, that is, our measurements can be rendered consistent with any chronogeometry whatsoever so long as one is sufficiently creative with the forces that one postulates. ANL thereby severs the connection from physical measures of space and time to the true chronogeometry. On the resulting instrumentalist interpretation of length contraction, time dilation, and the relativity of simultaneity, rulers and clocks provide systematically spurious results due to the influence of universal forces. For that reason, rulers and clocks are no help in determining our world's true chronogeometry. Furthermore, length contraction, time dilation, the relativity of simultaneity, and other consequences of Special Relativity are ordinarily thought to be consequences, at least in part, of the metric $g_{\mu\nu}$; after all, one way to \emph{derive} the various consequences of Special Relativity begins with $g_{\mu\nu}$. If length contraction, time dilation, the relativity of simultaneity, and other relativistic effects are to be treated instrumentally because they are subject to the influence of universal forces, we should say that $g_{\mu\nu}$ is merely an apparent metric that affords empirically adequate predictions and so does not reflect the true metric of the underlying chronogeometry. That is, $g_{\mu\nu}$ is afforded an instrumental interpretation in Special Relativity. But if $g_{\mu\nu}$ is afforded an instrumental interpretation in Special Relativity, $g_{\mu\nu}$ should equally be afforded an instrumental interpretation in General Relativity. Likewise, the motion of test particles cannot help Craig and Sinclair in determining the true chronogeometric structure because the motion of test particles will be subject to the same universal forces distorting our measuring instruments. Consider that, in classical electrodynamics, the electric and magnetic fields are invisible. In order to ``see'' the electric and magnetic fields, we need to utilize the fact that test bodies, e.g., iron filings and small charges, couple to the electric and magnetic fields via the Lorentz force law. In turn, the dynamics of the electric and magnetic fields are described by Maxwell's equations. Analogously, on a realist interpretation of General Relativity, test masses reveal chronogeometric structure because the trajectories of test masses are described by the geodesic equation. 
That is, the way in which test masses couple to space-time is an important auxiliary hypothesis for inferring chronogeometry from observational data.\footnote{\label{aux-hyp}Perhaps one can object that, in General Relativity, other mathematical relationships can describe the way that matter-energy couples to chronogeometry than the geodesic equation, for example, by the source term in the Einstein Field Equation or the Raychaudhuri equation. And, arguably, the Raychaudhuri equation is more important for inferring singular behavior from the matter-energy distribution. But analogous conclusions follow; whatever auxiliary hypothesis one uses, so long as the auxiliary hypothesis follows from General Relativity, an inference to the actual, and not merely apparent, chronogeometry requires a realistic construal of General Relativity.} As Misner, Thorne, and Wheeler famously quipped, ``Space tells matter how to move. Matter tells space how to curve'' (\citeyear[5]{MisnerThorneWheeler:1973}). This is the insight that, on a realistic construal of General Relativity, allows us to infer invisible chronogeometric structure from the visible distribution of matter. On a realistic construal, the Einstein Field Equations (together with the geodesic equation) express a relationship between the matter-energy distribution and space-time curvature, i.e., \begin{equation} G_{\mu \nu} = 8\pi G T_{\mu \nu} \end{equation} Here, $G_{\mu \nu}$ is the Einstein tensor and is comprised by the Ricci curvature tensor and the Ricci curvature scalar while $T_{\mu \nu}$ is the stress-energy tensor, expressing the matter-energy distribution. In order to compute trajectories of test bodies, one can utilize the geodesic equation: \begin{equation} \frac{d^2 x^{\mu}}{ds^2} + \Gamma^{\mu}_{\hphantom{\mu}\alpha\beta} \frac{d x^{\alpha}}{ds} \frac{d x^{\beta}}{ds} = 0 \end{equation} $x^\mu$ is the set of coordinates specifying a trajectory parametrized by $s$ and $\Gamma^{\mu}_{\hphantom{\mu}\alpha\beta}$ is the Christoffel symbol computed from the relevant metric as obtained from the Einstein Field Equations.\footnote{The exact logical relationship between the Einstein Field Equations and the geodesic equation has been the matter of some dispute. For example, the Geroch-Jang theorem, as well as various related results, show that, at least for space-times and matter satisfying a small set of realistic conditions, the motion of small massive bodies (e.g., test masses) satisfies the geodesic equation. See \cite{GerochJang:1975, EhlersGeroch:2004, Brown:2005, Weatherall:2011, Weatherall:2019, Malament:2012}. 
For my purposes, the point is that General Relativity provides us with a set of mathematical principles, whatever their interrelationship might be, which, when realistically construed, allow us to infer chronogeometric structure from the mass-energy distribution.} On the implementation of universal forces described by Michael Friedman (\citeyear[298]{Friedman:1983}), the geodesic equation is modified to accommodate the universal force $F^{\mu}$ and an associated metric compatible with a connection given by $\tilde{\Gamma}^{\mu}_{\hphantom{\mu}\alpha\beta}$: \begin{equation} \frac{d^2 x^{\mu}}{ds^2} + \tilde{\Gamma}^{\mu}_{\hphantom{\mu}\alpha\beta} \frac{d x^{\alpha}}{ds} \frac{d x^{\beta}}{ds} = F^{\mu} \end{equation} In turn, $F^{\mu}$ can be calculated in terms of the original and modified connections: \begin{equation} F^{\mu} = (\tilde{\Gamma}^{\mu}_{\hphantom{\mu}\alpha\beta} - \Gamma^{\mu}_{\hphantom{\mu}\alpha\beta})\frac{d x^{\alpha}}{ds} \frac{d x^{\beta}}{ds} \end{equation} Since the modified geodesic equation is trivially equivalent to the original geodesic equation, it will result in the same trajectories as the original. Given the tremendous range of freedom in the specification of $F^{\mu}$, we have a corresponding freedom in how we specify $\tilde{\Gamma}^{\mu}_{\hphantom{\mu}\alpha\beta}$ and therefore in how we specify the metric. Furthermore, Friedman's proposal represents merely one way, out of a myriad of possibilities, for specifying universal forces. More radical proposals might replace the geodesic equation altogether. An instrumental interpretation accepts the observable matter-energy distribution and accepts the Einstein Field Equation and geodesic equation, but only as useful calculational devices. Instrumentalists deny that the Einstein Field Equation and geodesic equation have ontological import for inferring unobservable chronogeometric structure. Thereby, instrumentalists sever the inference from the matter-energy distribution to the \emph{real} chronogeometry.\footnote{Craig provides us with another reason for thinking that the fundamental chronogeometric structure is decoupled from the motion of test masses. Though Craig does not endorse Prokhovnik's views about gravity, Craig does maintain an instrumentalist interpretation of the relationship between the distribution of matter-energy and chronogeometric structure. For example, Craig writes that the ``geometrization of gravitation'' is only ``a heuristic device'' \cite[189]{Craig:2001}. Elsewhere, Craig and Sinclair explicitly deny the ``view that gravity \emph{just is} the curvature of an objectively real space-time'' (emphasis is Craig and Sinclair's) and instead argue that gravity is a force that operates between bodies situated in space \cite[104]{CraigSinclair:2012}. If, as they write, the ``geometrization of gravity'' is a mere ``heuristic device'' and gravity is instead a force operating between bodies, then the dynamics of the matter-energy distribution should ultimately be explained in terms of a force instead of the coupling of the matter-energy distribution to space-time curvature demanded by the Einstein Field Equation. I'm not sure what sort of forces Craig and Sinclair would put in place of space-time curvature; they never offer a fully worked out and mathematically precise alternative to General Relativity.
\cite[189]{Craig:2001} cites \cite[vii]{Weinberg:1972}, but Weinberg alternately states that his focus on geometry is a pedagogical strategy instead of a denial that gravity is the curvature of space-time (\citeyear[viii]{Weinberg:1972}) and that he is otherwise ambivalent concerning the metaphysical upshot of General Relativity: ``The important thing is to be able to make predictions on the astronomers' photographic plates, frequencies of spectral lines, and so on, and it simply doesn't matter whether we ascribe these predictions to the effects of gravitational fields on the motion of planets and photons or to a curvature of space and time.'' In other words, Weinberg's attitude -- at least as of 1972 -- was that, instead of trying to determine the metaphysics of space-time, we should ``shut up and calculate''. This is obviously not an attitude that friends of ANL can adopt. Moreover, Weinberg is no friend of Craig's approach to relativity. Weinberg's anti-metaphysical interpretation of relativity is likely the result of a wholesale anti-metaphysical attitude that would reject appeals to absolute time and absolute space. Elsewhere, Weinberg has argued that we should ``score'' physical theories against whether they satisfy Lorentz invariance (\citeyear[85]{Weinberg:2003}) -- whereas ANL only \emph{appears} to, but does not actually, satisfy Lorentz invariance -- and that we should think Einstein was rightly victorious in his debate with Lorentz (\citeyear[68, 85]{Weinberg:2003}).} Singular space-times are typically identified in virtue of geodesic incompleteness. Realists utilize the Einstein Field Equations, the geodesic equation, and the observed matter-energy distribution (or some other auxiliary hypothesis; see footnote \ref{aux-hyp}) to infer that space-time is geodesically incomplete to the past. Since geodesic incompleteness is a bit of chronogeometric structure to which the instrumentalist is not metaphysically committed, by endorsing an instrumental interpretation of the Einstein Field Equations and related mathematical relationships, Craig and Sinclair cannot justifiably use the Einstein Field Equations, the geodesic equation, or other auxiliary hypotheses from General Relativity, and the observed matter-energy distribution to infer that space-time really is geodesically incomplete to the past. In addition to geodesic incompleteness, singular FLRW space-times are characterized by a divergent Ricci scalar curvature. The realist can utilize the Einstein Field Equations and the matter-energy distribution to infer that the curvature was arbitrarily large in the past. The instrumentalist affords the various curvature parameters -- and so curvature pathology -- an instrumental interpretation and so is not committed to the ontological reality of curvature pathology. For that reason, Craig and Sinclair cannot infer that the Ricci scalar (or other curvature parameters) was arbitrarily large in the past.\footnote{Perhaps I've moved too quickly here. As an anonymous reviewer points out, while Craig and Sinclair cannot infer that the Ricci scalar, or other curvature parameters, qua curvature of space-time, was arbitrarily large in the past, they may be able to infer that the Ricci scalar, or other curvature parameters, construed as some mixture of geometry and universal forces, was arbitrarily large. Two comments can be made in reply. First, any argument from curvature pathology to singular behavior is weak because curvature pathology does not ensure a space-time singularity.
Part of what matters for a boundary to space-time is that there is some ``location'' beyond which paths cannot be reasonably extended and yet ``[...] no species of curvature pathology we know how to define is either necessary or sufficient for the existence of incomplete paths'' (\cite{Curiel:2021}). For this reason, space-time singularities are now typically understood in terms of geodesic incompleteness and not in terms of curvature pathology. Second, a physical field can exhibit singular behavior without a corresponding boundary to time. For example, in classical electrodynamics, electric charges are singularities in the electric field. Classical electrodynamics is well-defined on Minkowski space-time, for which there is no past boundary. If we understand $g_{\mu\nu}$ as a physical field defined on a background absolute space and time, then, instead of attributing curvature pathology to an objectively real temporal boundary, curvature pathology can be attributed to a divergence in a physical field. In that case, we come to the analogy that I construct between ANL and the theory considered by Feynman, Pitts, and Schieve.} \begin{comment} In sum, by endorsing instrumentalism, Craig and Sinclair have severed the inference from the matter-energy distribution, conjoined with General Relativistic auxiliary hypotheses, to a past boundary. Consequently, they find themselves once more without the partial solution to the Omphalos Objection suggested above. \end{comment} There is an intriguing analogy between ANL and a theory considered by Richard Feynman (\citeyear{Feynman:2003}) and by J. Brian Pitts and W. C. Schieve (\citeyear{PittsSchieve:2003, PittsSchieve:2004, PittsSchieve:2007}; also see \cite{Pitts:2019}).\footnote{\cite{PittsSchieve:2007} and \cite{Pitts:2019} consider another similar theory that is, in principle, empirically distinguishable from standard General Relativity. The theory approximates standard General Relativity arbitrarily well given a sufficiently small graviton mass.} According to the theory Feynman, Pitts, and Schieve consider, physicists have been wrong to think of the metric $g_{\mu\nu}$ appearing in General Relativity as a description of space-time; instead, Pitts and Schieve consider the possibility that, like the electromagnetic field, $g_{\mu\nu}$ is a gravitational field (i.e., the field of a spin-2 boson) defined on a background Minkowski (flat) space-time equipped with a metric $\eta_{\mu\nu}$. Therefore, although Craig and Sinclair's brand of ANL maintains a background absolute space and absolute time, and Minkowski space-time differs from absolute space and absolute time, both theories postulate that $g_{\mu\nu}$ is a physical field defined on a background space-time. And, like Prokhovnik's view, the theory Pitts and Schieve consider entails that rulers and clocks are systematically distorted by the gravitational field in such a way that observers will conclude they inhabit a curved relativistic space-time \cite[1318]{PittsSchieve:2003}. In fact, Feynman (\citeyear{Feynman:2003}) and Michael Lockwood (\citeyear[335-336]{Lockwood:2007}) have utilized Poincar{\'e}'s creatures to explicate the theory. There are multiple ways that a general relativistic metric can be laid on top of a Minkowski space-time. Pitts and Schieve argue that, for every point of the underlying Minkowski space-time, the gravitational field should have a well-defined value.
In classical models of the Big Bang, there is no defined value for $g_{\mu\nu}$ prior to the cosmological singularity. So, to avoid postulating space-time points where the gravitational field is undefined, Pitts and Schieve lay $g_{\mu\nu}$ on top of the Minkowski space-time in such a way that the cosmological singularity is relegated to infinitely far in the past. There are no space-time points to the past of past time-like infinity, so there are no points of the underlying space-time where the gravitational field is undefined. If an analogous argument is applied to ANL, the cosmological singularity is again relegated to past time-like infinity. In that case, the universe would not have begun to exist at any time in the finite past. Craig and Sinclair might reply that the gravitational field has a well-defined value at every point of the underlying space-time if the underlying space-time is truncated where the gravitational field becomes undefined; in that case, Craig and Sinclair would have reason to think that the underlying absolute time has a boundary. The trouble for this sort of view is two-fold. First, as I've discussed, there is a well-known and widely adopted principle according to which space-time should be maximally extended and that forbids the premature truncation of space-time. This was the principle that, in conjunction with relativistic cosmology, one might have hoped would help to overcome the Omphalos Objection in the first place. Second, Craig and Sinclair would still need a principled reason for choosing a specific metric for absolute time. Pitts and Schieve argue that the only physically sensible way to overlay an FLRW metric as a gravitational field on a background Minkowski space-time banishes the cosmological singularity to the infinite past (at least as recorded by the metric of the underlying Minkowski space-time). Consequently, the cosmological singularity ``disappears'' \cite[1321]{PittsSchieve:2003}. For Craig and Sinclair to conclude that singular cosmological models depict the universe as having begun in the finite past according to absolute time, they must demonstrate that a similar verdict will not follow for ANL. Here, I turn to the third desideratum for establishing that the universe began to exist in the sense that Craig and Sinclair have defended, that is, that the duration of past absolute time over which the universe has existed is finite. In the next subsection, I develop a set of desiderata that Neo-Lorentzian accounts need to adequately satisfy in order to determine the direction and duration of time in a given cosmological model. I explicitly evaluate the classic big bang model in terms of each desideratum because, so far as I can tell, friends of ANL have not previously explicitly evaluated their project in terms of the desiderata. Moreover, I will re-use the results that I gather from evaluating the desiderata when I turn to bounce cosmologies in a later section. \subsection{Identifying the Span of Past Absolute Time} In addition to the A-theory of time and a past boundary of the universe, in order to establish that the universe began, one must show that the span of absolute time since the past boundary to the present is finite. 
If one can only establish that the past universe has an open boundary,\footnote{Or some other pathology in virtue of which space-time is not further extendable to the past in whatever sense turns out to be appropriate.} then that open boundary may be located infinitely far into the past, in which case, at least in Craig and Sinclair's sense, the universe might not have begun to exist after all. That is, in addition to resolving the Omphalos Objection, one must resolve the Absolute Infinite Duration Objection, or ABIDO: \begin{enumerate} \item If we do not know whether the absolute duration between the absolute present and the past boundary is infinite, then we do not know whether the universe began to exist in the finite past. \item We do not know whether the absolute duration between the absolute present and the past boundary is infinite. \item Therefore, we do not know whether the universe began to exist in the finite past. \end{enumerate} There are four steps that friends of ANL should take to overcome ABIDO. First, friends of ANL should identify the requisite preferred foliation. Second, friends of ANL should determine a way to order the hypersurfaces in that foliation from the objective past to the objective future. Third, friends of ANL should identify a labeling of the hypersurfaces in the preferred foliation that corresponds to absolute time. And, fourth, friends of ANL should show that the total past duration of absolute time -- as measured by differences in the labeling of the hypersurfaces -- is finite. Although Craig and Sinclair do not explicitly evaluate these four steps, I will show that the first two steps can plausibly be adequately addressed. However, I argue that Craig, Sinclair, and other friends of ANL who endorse a beginning of the universe in the finite past have not adequately addressed the third step. And since they have not adequately addressed step three, we will not be able to move on to step four. Without an adequately supported objective labeling of the hypersurfaces in their preferred foliation, friends of ANL cannot infer that the past had an objectively finite duration and so cannot infer that the universe began to exist a finite time ago. \subsubsection{Step 1: Identify the preferred foliation} Craig favors the view that space-time should be foliated into hypersurfaces of Constant Mean (extrinsic) Curvature (CMC). (For a non-technical introduction to the CMC foliation, see \cite[118-120]{Lockwood:2007}.) Consider a monotonically expanding FLRW space-time. Proper time, as recorded by observers who are co-moving with the universe's expansion, can be used to label the CMC hypersurfaces. This labeling is called the cosmic time. (However, the labeling is not unique; for example, the CMC hypersurfaces could instead be labeled with the scale factor.) The choice of the CMC foliation as the preferred foliation can be defended in several ways; collectively, these considerations render the CMC foliation a plausible choice of preferred foliation for friends of ANL. In passing, I note that the CMC foliation is unique only for closed universes. In the case of an open universe, there is an infinite collection of distinct foliations \cite[120]{Lockwood:2007}. Moreover, Michael Lockwood has argued that evidence for black hole decay is evidence that the actual universe has no CMC foliation \cite[152]{Lockwood:2007}. Here, I set these objections to one side in order to examine the case for accepting one of the CMC foliations as the preferred foliation.
Craig (\citeyear[236]{Craig:2001}) offers three arguments in support of his identification of a CMC foliation as the preferred foliation. First, Craig claims that the CMC foliation is ``natural'' because the foliation is defined by the global distribution of the universe's matter-energy content. Second, Craig draws upon an analogy with Newtonian spacetime. In Newtonian spacetime, the laws of motion assume a particularly simple form in inertial frames. For this reason, although we cannot identify which inertial frame corresponds to absolute space and time, one might argue that absolute space and time corresponds to one of the inertial frames. Likewise, in FLRW spacetimes, motion has a particularly simple form with respect to the CMC foliations and so one might surmise that one of the CMC foliations corresponds to absolute time.\footnote{An anonymous reviewer objects that the CMC foliation might not be the foliation picked out by the cosmic microwave background, as stipulated by Craig and other authors. As the reviewer notes, the CMB picks out a foliation for which the density of a scalar field -- representing the CMB -- is roughly spatially homogeneous, whereas a CMC foliation picks out a time-slicing so that the Hubble expansion is spatially homogeneous. Consequently, the two procedures could pick out distinct foliations. Supposing the two procedures did pick out distinct foliations, we would have yet another criticism of Craig's arguments and therefore further support to my own case against Craig's ANL. Nonetheless, I can see two reasons to think that the two procedures do pick out the same -- or approximately the same -- foliation. First, we are discussing FLRW space-times, that is, space-times that are exactly homogenous and isotropic. Suppose that there were a space-time in which the CMB were not isotropic so that an observer co-moving with the universe's expansion would observe the CMB as being significantly ``hotter'' in one direction as compared with other directions. If the CMB were hotter in one direction than in another, then this would presumably be the result of an anisotropy in the matter-energy distribution. And if the matter-energy distribution is anisotropic, then the universe is not an FLRW space-time. So, while I agree that one could have had a CMB density that did not pick out a CMC foliation, I don't see how that would have been possible in an FLRW space-time. And given that the universe we inhabit is well approximated by the FLRW ansatz on cosmological scales, if everything else I've said in this paragraph is correct, we have that the CMB density picks out a foliation that is at least well approximated by the CMC foliation. Second, in the case of FLRW space-times, space-time is ``naturally'' foliated in a way that locally corresponds to observers at rest with respect to the universe's expansion. This is the foliation that can be labeled with the cosmic time. And, as it turns out, at least in FLRW space-times, the surfaces of constant cosmic time are also surfaces of constant extrinsic scalar curvature. That is, the surfaces labeled by the cosmic time just are the CMC surfaces \cite[75]{Callender:2017}. But then the surfaces labeled by the cosmic time are just those that are uniform with respect to the CMB.} Third, in universes that approximate perfect homogeneity and isotropy on large scales, the Cosmic Background Radiation will appear, to a high degree of approximation, isotropic for the rest frames of CMC foliations. 
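Craig's first argument, and the coincidence between constant-cosmic-time hypersurfaces and CMC hypersurfaces noted in the footnote above, can be made concrete with a minimal sketch. Assume, purely for illustration, a spatially flat FLRW model with line element $ds^2 = -dt^2 + a(t)^2(dx^2 + dy^2 + dz^2)$ and the sign convention on which the mean extrinsic curvature of a constant-$t$ hypersurface is the divergence of its future-directed unit normal $n^{\mu} = (1, 0, 0, 0)$. Then \begin{equation} K = \nabla_{\mu} n^{\mu} = \frac{1}{\sqrt{-g}}\partial_{\mu}\left(\sqrt{-g}\, n^{\mu}\right) = \frac{1}{a^{3}}\frac{d}{dt}\left(a^{3}\right) = 3\frac{\dot{a}}{a} = 3H. \end{equation} Since the Hubble parameter $H$ depends only on $t$, $K$ takes a constant value on each constant-$t$ hypersurface, so, at least in FLRW models, the hypersurfaces of constant cosmic time just are CMC hypersurfaces; and because the Friedmann equation ties $H$ to the energy density, the foliation is in this sense fixed by the global matter-energy distribution. (On the opposite sign convention, $K = -3H$; nothing in what follows turns on that choice.)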
Simon Saunders has also argued that choosing a CMC foliation as the preferred foliation carries a number of theoretical advantages \cite[290]{Saunders:2002}. For example, the notion that one moment of time produces the next sits comfortably with the view that time objectively passes. Since CMC hypersurfaces are Cauchy surfaces, the full state of the world on one CMC hypersurface suffices for determining the state of the world on any other CMC hypersurface in the same foliation.\footnote{As Roser describes, \begin{quote} [...] the initial data can only be given on a slice of constant scalar extrinsic curvature [that is, a CMC hypersurface], or equivalently of constant $T$. If we take the idea of a theory of gravity described by three-dimensional space whose geometry evolves through time (rather than the four-covariant `spacetime' picture) seriously, then the [fact that the initial data can only be given on a CMC hypersurface] strongly suggests that slices of constant $T$ are slices of constant time, so that the foliation on which the initial-value problem can be solved is indeed the foliation that corresponds to stacking of spaces at consecutive instances. For if physical time corresponded to a different time variable, that is, if the reconstruction of spacetime from the space-through-time theory were not a reconstruction from a constant-mean-curvature foliation, then as a consequence initial data could not be specified at a single instance in time. This would pose a major conundrum for the notion of what determines the dynamics of a physical system \cite[49]{Roser:2016}. \end{quote} } Consequently, if the A-theorist's preferred foliation is one of the CMC foliations, then the A-theorist can imagine the state on one CMC hypersurface producing the state on a subsequent CMC hypersurface. Craig has argued that effects which lie outside the domain of applicability of classical Special Relativity, e.g., quantum mechanics and cosmology, pick out a preferred foliation \cite[219-234]{Craig:2001}. Other friends of a preferred foliation agree. For example, Monton has argued that presentism may find a friendly home in quantum gravity approaches utilizing a fixed foliation into CMC hypersurfaces (\cite{Monton:2006}, though see the responses by W{\"u}thrich, i.e., \citeyear{Wuthrich:2010, Wuthrich:2013}). In addition, there are quantum gravity theories that violate Lorentz invariance by postulating a cut-off scale for the energy.\footnote{A cut-off scale does not necessarily imply Lorentz invariance violation; see, e.g., (\cite{RovelliSpeziale:2002vp}).} In turn, Lorentz invariance violation can result in a preferred reference frame, e.g., \cite[624-627]{Baron:2017}. Likewise, Ho\v{r}ava-Lifshitz gravity violates Lorentz invariance at high energy (\cite{Horava:2009, NilssonCzuchry:2019, TawfikDahab:2017}, \cite[132-134]{Koperski:2015}) and the CMC foliation has been suggested as the preferred foliation for Ho\v{r}ava-Lifshitz gravity (e.g., \cite{Afshordi:2009}). However, Ho\v{r}ava-Lifshitz gravity's relevance for Craig and Sinclair's project is unclear because some versions prevent singularities and lead to a bounce in the early universe (\cite{TawfikDahab:2017, Brandenberger2017}). Thomas Crisp (\citeyear{Crisp:2008, Crisp:2012}) and Pitts (\citeyear{Pitts:2004}) have both proposed presentist theories that are closely related to General Relativity and that utilize a CMC foliation.
For example, Crisp has suggested that Julian Barbour's shape dynamics program can be adapted into a version of presentism that treats contemporary physics respectfully. However, Crisp and Pitts's proposals are incompatible with Craig and Sinclair's project, since Crisp and Pitts's proposals do not allow for an objective labeling of the CMC hypersurfaces that the theories pick out. Some interpretations of quantum mechanics, e.g., Bohmian mechanics, violate Lorentz invariance, e.g., \cite[160-1]{Albert:1992}, \cite[59]{Valentini:1996}, and require a preferred foliation. Antony Valentini has suggested adopting a CMC foliation (and the York time for labeling hypersurfaces, as described below) in building a Bohmian cosmology \cite[60]{Valentini:1996}. The point to take from this discussion is that if one is going to pick out a preferred foliation of our space-time as the one that corresponds to absolute time, then the CMC foliation is a particularly natural choice. Therefore, when Craig and Sinclair look for a preferred foliation of our space-time, they are right to pick out the CMC foliation as a suitable candidate. \subsubsection{Step 2: Identify the ordering of the hypersurfaces\label{OrderOfTime}} In order for Neo-Lorentzians to say that singular cosmologies (or, at any rate, cosmologies featuring space-times that are inextendable in whatever sense turns out to be appropriate) depict the universe as having begun to exist, they will need to show that our objective past has an open boundary represented by a singularity. And in order to show that our universe's objective past is bounded, they will need to provide an objective ordering of the hypersurfaces in the preferred foliation. If it should turn out that, according to absolute time, the cosmic singularity resides in our objective future, then the singularity provides no reason to think the absolute past is bounded. Some authors have thought that the direction of time shares a reductive explanation with the entropic arrow of time and so identify the direction of time with the direction of the entropy gradient. I will turn back to that view later, but, for now, note that authors who endorse Neo-Lorentzianism because of a prior commitment to A-theory, that is, most, perhaps all, friends of ANL, should not endorse the view that the direction of time has a reductive explanation. Instead of utilizing the entropy gradient as an indication of the direction of time, Neo-Lorentzians can determine the direction of time from a relativistic description of chronogeometric structure. As Matthews (\citeyear{Matthews:1979}) and Castagnino (\citeyear{Castagnino:2003}) describe, a relativistic space-time -- which they denote $(M, g)$ -- admits of a global direction of time if $(M, g)$ satisfies three conditions (quoted from \cite[889--890]{Castagnino:2003}): \begin{enumerate} \item $(M, g)$ is temporally orientable; \item For some $x \in M$, $(M, g)$ has a direction of time at $x$, that is, there is a non-arbitrary way of choosing the future lobe $C_x^+$ of the null cone $C_x$ at $x$; \item For all $x, y \in M$ such that $(M, g)$ has a direction of time at both $x$ and $y$, if the timelike vector $u$ lies inside $C_x^+$ and the timelike vector $v$ lies inside $C_y^+$ , then $u$ and $v$ have the same direction, that is, the vector resulting from parallel transport of $v$ to $x$ lies inside $C_x^+$. 
\end{enumerate} As shown in, e.g., \cite{Matthews:1979} and \cite{Castagnino:2003}, space-times satisfying the FLRW ansatz, i.e., those that are isotropic and homogeneous so that they can be described by the line element $ds^2 = -dt^2 + a(t)^2d\Sigma^2$, and that do not have pathological topological features (e.g., closed time-like curves) admit a global direction of time in this sense. However, that a space-time \emph{admits} of a global direction of time is merely to say that the space-time is compatible with a global direction of time. In order to identify the objectively correct ordering of the hypersurfaces in a given foliation, we would need a non-arbitrary method for choosing which of the two light cones at a given space-time point is the future (or past) cone. Unfortunately, relativity is inadequate, in itself, for determining which of the two cones is the future cone because the Einstein Field Equations are symmetric with respect to time. I offer two interrelated arguments for determining the direction of time from the relativistic chronogeometry that presuppose only that relativity is empirically adequate. First, the \emph{argument from empirical adequacy}. Note that for a theory to be \emph{empirically adequate}, the theory needs to make accurate predictions. Consequently, if relativity is empirically adequate within a given domain, all observations within that domain conform to the restrictions provided by the null-cone structure predicted from the Einstein Field Equations. On the A-theory, signals can be received only from the objective past and sent only to the objective future. Thus, \begin{enumerate} \item If relativity is empirically adequate and A-theory is true, then any given local observer $o$ can receive signals only from points within $o$'s past light cone and transmit signals only to points within $o$'s future light cone. \end{enumerate} By definition, friends of ANL are committed to: \begin{enumerate}[resume] \item Relativity is empirically adequate. \item A-theory is true. \end{enumerate} Friends of ANL can thereby conclude: \begin{enumerate}[resume] \item Therefore, any given local observer $o$ can receive signals only from points within $o$'s past light cone and transmit signals only to points within $o$'s future light cone. \end{enumerate} But if $o$ can receive signals only from points within $o$'s past light cone and transmit signals only to points within $o$'s future light cone, then $o$ can use facts about their light cones to determine which light cone is objectively future and which is objectively past: \begin{enumerate}[resume] \item If any given local observer $o$ can receive signals only from points within $o$'s past light cone and transmit signals only to points within $o$'s future light cone, then $o$ can use facts about which points $o$ can receive signals from or send signals to in order to determine which of $o$'s light cones is objectively past and which is objectively future. \item Therefore, $o$ can use facts about which points $o$ can receive signals from or send signals to in order to determine which of $o$'s light cones is objectively past and which is objectively future. \end{enumerate} Moreover, A-theorists think that we have immediate access to the passage of time. If we do have immediate access to the passage of time, then our immediate experience provides a non-arbitrary way of choosing the future lobe $C_x^+$ of the null cone $C_x$ at any given point $x$.
And having picked out the future direction at $x$, we can parallel transport a future-directed time-like vector $u$ from $x$ to any other point $y$; if $u$ remains future-directed according to the locally defined direction of time at $y$, then the space-time has a global direction of time. All FLRW space-times, with the possible exception of those with sufficiently bizarre topological features, have a globally definable direction of time in this sense. Consequently, our phenomenology of time's passage provides us with an additional argument. First, let IMPAPT stand for ``immediate phenomenological access to the passage of time''. Then: \begin{enumerate} \item For any local observer $o$, if $o$ has IMPAPT, then which of $o$'s two light cones is objectively past and which is objectively future can be determined from $o$'s IMPAPT. \item If which of $o$'s two light cones is objectively past and which is objectively future can be determined from $o$'s IMPAPT, then which of the time-like tangent vectors along $o$'s worldline is future-directed can be determined using $o$'s IMPAPT. \item Therefore, if $o$ has IMPAPT, which of the time-like tangent vectors along $o$'s worldline is future-directed can be determined using $o$'s IMPAPT. \end{enumerate} Call this the \emph{argument from IMPAPT}. According to friends of ANL, we have IMPAPT. Consequently, friends of ANL should be committed to the view that which of our time-like tangent vectors points into our future can be determined using our IMPAPT. Moreover, FLRW space-times admit of a global direction of time. Once a set of, e.g., future-directed time-like tangent vectors has been determined for $o$ at $p_1$, those vectors can be parallel transported to any other point $p_2$ in that space-time. As long as the space-time has a global direction of time in the sense defined above, the objective direction of time at $p_2$ is then the direction indicated by the parallel transported time-like tangent vector at $p_2$. Consequently, friends of ANL should be committed to the view that our IMPAPT allows us to determine a global direction of time.\footnote{An anonymous reviewer suggested a problem for the argument from IMPAPT. The argument from IMPAPT supposed that we could parallel transport, with respect to the Levi-Civita connection, future-directed tangent vectors from one space-time location to another. But, given the ANL proponent's interpretation of the Levi-Civita connection, we might have reason to doubt the veridicality of results drawn from parallel transporting, with respect to the Levi-Civita connection, vectors to arbitrary space-time points. To put the point another way, the argument from IMPAPT assumed that General Relativity is empirically adequate for a sufficiently broad class of potential (and not necessarily actual) observers. But if General Relativity is not empirically adequate for a sufficiently broad class of potential observers, the argument from IMPAPT would not establish the global direction of time. Similar worries may likewise endanger the cogency of the argument from empirical adequacy. Nonetheless, without a mathematically and empirically sufficient formulation of a Neo-Lorentzian successor to General Relativity, I have difficulty seeing how friends of ANL could establish the global direction of time in any other way.
In order to be generous to Craig and Sinclair, I will assume that General Relativity is empirically adequate for a sufficiently broad class of potential observers, so that the size of the class of observers for whom General Relativity is empirically adequate is no problem for establishing a global direction of time using the arguments from empirical adequacy and IMPAPT.} The arguments from empirical adequacy and from IMPAPT allow one to identify an objective ordering for the hypersurfaces in the CMC foliation of an FLRW space-time. I will return to these two arguments when I turn to bounce cosmologies below. \subsubsection{Step 3: Identify a labeling of the preferred foliation} I've argued that, given ANL, Craig and Sinclair may be able to defend the CMC foliation as the preferred foliation\footnote{Caveats apply since, for example, for FLRW space-times, the CMC foliation is unique only for closed universes.} and that they can plausibly identify the objective ordering of the hypersurfaces in the preferred foliation. Nonetheless, as I show in this section, supposing that the difficulties in identifying a past boundary to the universe discussed in section \ref{IDing_the_boundary} can be overcome, I have difficulty seeing how Craig and Sinclair can adequately justify an objective measure of time from that past boundary to the present. For Neo-Lorentzians, each of the hypersurfaces in the preferred CMC foliation should be assigned a label that corresponds to the absolute time at which the events on that hypersurface take place. For Craig, the ``cosmic time plausibly coincides with God's metaphysical time, that is, with Newton's absolute time'' \cite[213]{Craig:2001}. And Craig has suggested that it is with respect to cosmic time that the universe can be said to have had an \emph{ex nihilo} beginning a finite time ago \cite[204]{Craig:2001}. Craig frequently moves back and forth between the CMC foliation and cosmic time as if the cosmic time labeling and the CMC foliation were equivalent.\footnote{For example, on page 220 of Craig's \citeyear{Craig:2001_Eternity}, Craig cites \cite{QadirWheeler:1985} in support of Craig's comments on cosmic time. While Qadir and Wheeler use the phrase `cosmic time' in their paper, Qadir and Wheeler's `cosmic time' is York time. Despite Qadir and Wheeler's placement of the Big Bang at past time-like infinity, Craig states -- on the same page! -- that cosmic time places the Big Bang at approximately fifteen billion years ago.} They are not. Foliations do not uniquely determine a labeling of the hypersurfaces that they pick out. Unfortunately, Craig's arguments for identifying the cosmic time as the objectively correct labeling of the preferred CMC hypersurfaces either do not uniquely pick out the cosmic time or else are not obviously stronger than arguments for other possible labelings. Consider a space-time in which the line element is given by the FLRW ansatz, i.e., $ds^2 = -dt^2 + a(t)^2 d\Sigma^2$, where $t$ is the cosmic time, $a(t)$ is the scale factor, and $d\Sigma^2$ is the spatial part of the line element (e.g., for flat space, $d\Sigma^2 = dx^2 + dy^2 + dz^2$). Suppose that $a(t)$ is a monotonic function, as is true for expanding or contracting universes. In that case, each of the CMC hypersurfaces in the preferred foliation can be labeled with $t$, $a(t)$, or with any bijective and order-preserving function of $t$, for example, the negative of an inverse power of $a(t)$, i.e., $\tau = -a(t)^{-n}$.
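To make the significance of the choice of labeling concrete, consider a minimal worked example, assuming purely for illustration a matter-dominated, spatially flat FLRW model with $a(t) \propto t^{2/3}$ (any expanding power-law scale factor would serve equally well). Cosmic time assigns the singularity the label $t = 0$, a finite labeling-distance from the present $t_{0}$. Relabeling the very same hypersurfaces with $\tau$ instead gives \begin{equation} \tau(t) = -a(t)^{-n} \propto -t^{-2n/3}, \end{equation} which tends to $-\infty$ as $t \rightarrow 0^{+}$ for any fixed $n > 0$, while the present receives the finite label $\tau(t_{0})$. Similarly, the Hubble parameter $H = \dot{a}/a = 2/(3t)$ diverges as $t \rightarrow 0^{+}$, so any label proportional to $-H$ (such as the York time discussed below) likewise sends the singularity to $-\infty$. Nothing about the foliation itself discriminates between these labelings; they agree about which hypersurface comes before which and disagree only about the metrical amount of past time.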
(The mapping must be order-preserving in order to be consistent with the results of step 2.) Note that $\tau$ labels the cosmological singularity with $-\infty$ while assigning the present a finite value. Hans Halvorson and Helge Kragh argue that this raises doubts about whether the past finitude of the universe has ``intrinsic physical or theological significance'' (\cite{sep-cosmology-theology}). Several cosmologists have offered similar verdicts (\cite{Milne:1948, Misner:1969, Roser:2016, RoserValentini:2017}). On Charles Misner's view, the ``clock'' that we should use at ``early'' times is given by $\Omega = -\log(V^{1/3})$, where $V$ is the ``volume'' of the universe calculated as a power of the scale factor $a(t)$ \cite[1332]{Misner:1969}. Since the volume shrinks to zero as one approaches the cosmological singularity, $\Omega$ maps the cosmological singularity to infinitely far in the past. Misner concluded (emphasis his), ``\emph{The Universe is meaningfully infinitely old}'' \cite[186]{Misner:1969}. Misner's cosmological model is outmoded. However, contemporary physicists have suggested labeling the CMC hypersurfaces with a quantity termed the \emph{York time} (\cite{York:1972}; \cite[49]{Roser:2016}). The York time is the trace of the extrinsic curvature $K$ of each CMC hypersurface. In the case of a model satisfying the FLRW ansatz, the York time is proportional to the negative Hubble parameter, i.e., $\mathrm{Tr}\, K \propto -H$. Consequently, in FLRW models, there is an order-preserving bijection between the York time and the cosmic time. The point for our purposes is two-fold. First, as physicist Philipp Roser notes, ``even if there are other options [for a labeling that coincides with absolute time], York time must be considered a favourite among them based only on a few theoretical principles'' \cite[52]{Roser:2016}. Second, in FLRW space-times, York time varies inversely with $a(t)$ and, like $\tau$, the York time labels the cosmological singularity with negative infinity. As Roser describes, ``[...] just as in the case of Misner's parameter [...] the `beginning' lies in the infinite past [...] The York-time approach to quantum gravity gives no explanation of a beginning because the universe simply has none. It is infinitely old'' (\cite[58--61]{Roser:2016}; also see \cite{RoserValentini:2017}). Importantly, if York time is at least as good a choice for absolute time as cosmic time, then we have reason to endorse the second premise in ABIDO, i.e., we do not know whether the absolute duration between the present and the past boundary is infinite. Craig does try to articulate some advantages of identifying absolute time with cosmic time. However, each of the supposed advantages of cosmic time that Craig recounts fails to uniquely pick out cosmic time. In part, this is because, as mentioned earlier, Craig does not consistently distinguish the preferred foliation from his labeling of the preferred foliation with cosmic time. For example, Craig tells us that observers whose clocks measure the cosmic time will record the CMB as isotropic. Nonetheless, the CMB is also isotropic according to, e.g., York time or with respect to any other labeling of the CMC hypersurfaces. One might think that cosmic time has the advantage that physically embodied observers whose clocks measure cosmic time are easier to come by than physically embodied observers whose clocks measure York time. True enough, but this cannot be an advantage for friends of absolute time.
On their view, absolute time is independent of the time recorded by any local (or even physically constituted) clock. As Craig, channeling Newton, would tell us, local clocks need not record God's absolute time. Perhaps, as an anonymous reviewer has objected, I've moved too fast and the fact that physically embodied observers possess clocks that record the cosmic time is an advantage for cosmic time. For example, if phenomenal conservatism is true, then, unless an argument to the contrary is available, we should assume that what seems to be true really is true. Local clocks seem to measure the time, so, unless we have an argument to the contrary, perhaps we should think that local clocks do measure the absolute time. And since local clocks approximate the cosmic time, we have reason to associate absolute time with cosmic time. I am not persuaded by the phenomenal conservative argument. To start, note that although Craig thinks we have phenomenological access to the passage of time, there is a distinction between time's passage and the rate at which time passes. Local clocks do not actually measure cosmic time; instead, only the local clocks of observers who are co-moving with the universe's expansion approximate cosmic time. Supposing that absolute time should be identified with cosmic time, the rate of temporal passage experienced by any observer moving with respect to the universe's expansion is illusory. In fact, Craig argues that we are moving with respect to the universe's expansion \cite[56-57]{craig_2001}, so that Craig implicitly admits that, if cosmic time is absolute time, the rate at which we experience the passage of time is illusory. If the rate at which most observers experience the passage of time turns out to be a widespread illusion, then we have reason not to trust the rate at which time seems to pass. Furthermore, the phenomenal conservative strategy suggests that, in our ordinary experience, clocks that can be physically constructed seem to measure time, so that perhaps we should think our local clocks measure absolute time. The suggestion was that this gives us some reason to favor cosmic time as the measure of absolute time. However, we also have some seemings from attempts to construct quantum gravity theories or from attempts to construct an ontology for quantum theory that have elsewhere been argued to count in favor of York time as a candidate for absolute time. (Roser goes so far as to note that cosmic time is a ``highly unnatural choice of time parameter when discussing the very early universe'', \cite[58]{Roser:2016}.) Why should a metaphysics of absolute time favor the former seemings over the latter seemings? I don't see a clear way to settle the dispute between the two conflicting sets of seemings in favor of one set that would be considered widely attractive to all disputants. And without a way to settle the dispute, all else being equal and provided absolute time exists, I am inclined to agnosticism concerning the finitude of the duration of past absolute time. And that agnosticism provides sufficient justification for endorsing ABIDO. York time may have some advantages over cosmic time as a candidate for absolute time. Cosmic time can be used to label a CMC foliation of our universe provided our universe satisfies the FLRW ansatz, that is, if our universe is exactly homogeneous and isotropic.
Our universe is not homogeneous and isotropic; instead, our universe only appears to be homogeneous and isotropic when one averages the mass-energy content over sufficiently large spatio-temporal scales. As Gerald Whitrow writes, ``cosmic time is essentially a statistical concept, like the temperature of a gas'' (\citeyear[246]{Whitrow:1961}; also see \cite[117-118]{Lockwood:2007}). As Daniel Saudek notes, this leads to the consequence that cosmic time is defined only in a coarse-grained way -- that is, for what Saudek calls ``stages'' of cosmological evolution -- and is inadequate for defining a total ordering over space-like separated localized events (\citeyear[56]{Saudek:2020}), as should be expected from a good candidate for absolute time. Thus, cosmic time can only be used as an approximate label of a CMC foliation of our universe. York time does not require homogeneity or isotropy; in fact, York time plausibly requires no averaging at all and so can be used as an exact label.\footnote{There may be worries about quantum indeterminacy. For example, perhaps there is no precise matter-energy distribution, with the consequence that the York time becomes ambiguous on small scales and so that no labeling could be exact. If so, we would be unable to define any candidate for the absolute time based on chronogeometric structure on sufficiently small spatio-temporal scales. Nonetheless, this would be an objection that applies equally to all possible candidates for absolute time and not to any specific candidate.} Beyond our cosmological horizon, we have no reason to expect space-time to be homogeneous or isotropic and so no reason for the FLRW ansatz to hold. For example, in an inflationary multiverse scenario, the distinct universes are proper parts of one space-time. Some -- though perhaps not all -- of the universes can be approximated using the FLRW ansatz so that \emph{a} cosmic time can be approximately defined for each universe.\footnote{George Ellis and Rituparno Goswami have promoted a generalization of the cosmic time -- the proper time co-moving gauge -- as a candidate for the absolute time (\citeyear[250]{EllisGoswami:2014}). Ellis and Goswami's proposal would apply to inhomogeneous or anisotropic space-times. However, Ellis and Goswami's proposal labels a distinct foliation from the CMC foliation. Moreover, while one might have expected that any surface in the foliation labeled by absolute time is a space-like surface, Ellis's proposal has the bizarre consequence that, in inhomogeneous space-times, some surfaces in the foliation it picks out may be time-like surfaces. Therefore, if the proper time co-moving gauge corresponds to absolute time, some moments of time are time-like surfaces, which is implausible. For this reason, the York time parametrization of the CMC foliation is arguably superior or at least not inferior.} However, the FLRW ansatz cannot be used to describe the \emph{entire} space-time, so that cosmic time cannot be defined as a global parameter for the entire multiverse. Nonetheless, supposing that the entire multiverse is a globally hyperbolic space-time manifold -- as is presumably required for ANL -- York time \emph{can} be defined as a global parameter.\footnote{Craig Callender and Casey McCoy object that in the de Sitter phase of an inflationary multiverse, all CMC surfaces have the same value of the York time (\cite{CallendarMcCoy:2021}).
If all of the CMC surfaces in the de Sitter phase have the same value of the York time, and the York time is identified with absolute time, then the awkward consequence follows that all of the CMC surfaces in the de Sitter phase are (somehow) \emph{simultaneous with} one another. Nonetheless, Roser points out that an actual inflationary phase is only approximately de Sitter and that the York time really is ``increasing during this cosmological period'' (\cite[50]{Roser:2016}; also see \cite[76-79]{Roser:2016}).} Presentists maintain that only present objects exist simpliciter. There doesn't seem to be a coherent way for presentists to maintain that there are multiple presents, one present for each space-time region where the FLRW ansatz approximately holds. Therefore, insofar as we either have no reason to expect that space-time satisfies the FLRW ansatz beyond our cosmological horizon or even possess evidence that we inhabit an inflationary multiverse (or some other anisotropic space-time), the presentist should favor a globally definable parameter as a candidate for absolute time. The York time is a globally definable parameter for a far broader class of space-times than cosmic time and is, for that reason, preferable by the presentist's own lights. \section{Bounce Cosmologies} I've issued two epistemological challenges to authors who endorse both ANL and the view that singular cosmological models support the conclusion that the universe had a beginning in the finite past. I now turn to considering how those who endorse Neo-Lorentzianism and the A-theory of time should interpret bounce cosmologies, that is, cosmological models according to which our expanding universe was birthed from a prior contracting universe. Bounce cosmologies can be constructed in classical General Relativity, but more sophisticated bounce cosmologies can be constructed in quantum gravity theories, such as Ho\v{r}ava-Lifshitz gravity (as already mentioned), loop quantum gravity (\cite{AshtekarSingh:2011, Cai:2014jla, AgulloSingh:2017}), string theory (\cite{IjjasSteinhardt:2017, Ijjas_2018, Ijjas:2019pyf, Steinhardt:2002, steinhardt_2007, ShtanovSahni:2003}), and $f(R)$ gravity (\cite{Corda:2011, Odintsov:2015, Oikonomou:2015}). All four result in modifications to the Einstein Field Equations and corresponding modifications to the FLRW equations. As discussed in \cite{Linford:2020b}, some bounce cosmologies depict our universe as having been produced from a black hole in another universe; in this paper, I've set aside cosmologies in which universes are born from ``bouncing'' through black holes, though analogous conclusions apply. \subsection{The Thermodynamic Arrow of Time and Bounce Cosmologies} Bounce cosmologies lack a singular boundary to time. For that reason, one might have thought that bounce cosmologies depict physical reality as not having a beginning, at least in Craig and Sinclair's sense. In order to defend the view that physical reality did have an absolute beginning, Craig and Sinclair set about trying to show either that bounce cosmologies are implausible or that, contrary to orthodoxy, bounce cosmologies depict physical reality as having an absolute beginning after all. Here, I set aside whether bounce cosmologies are empirically successful (or are otherwise plausible) in order to focus on the interpretation of bounce cosmologies. Following the previous section's discussion, let's consider a CMC foliation of a bounce cosmology.
In many bounce cosmologies, the entropy obtains a minimum value at the CMC hypersurface (herein, the \emph{interface}) joining the two universes. On each of the hypersurfaces before the interface, the entropy decreases and, on each hypersurface after the interface, the entropy increases. In other contexts, the direction of increasing entropy has been termed the ``thermodynamic arrow of time'' because the direction in which entropy increases is (often) the future direction. Since the entropy increases in both directions away from the interface, Craig and Sinclair have argued that the future lies in both directions away from the interface. In other words, on their view, the interface is a closed boundary indicating the beginning of time for two universes. Craig and Sinclair write, ``The boundary that formerly represented the `bounce' will now [be interpreted to] bisect two symmetric, expanding universes on either side'' \cite[122]{CraigSinclair:2012}. Elsewhere, Craig and Sinclair state, ``The last gambit [in trying to avoid an absolute beginning], that of claiming that time reverses its arrow prior to the Big Bang, fails because the other side of the Big Bang is \emph{not} the past of our universe'' \cite[158]{CraigSinclair:2009}. And they conclude, ``Thus, the [universe on the other side of the interface] is not our past. This is just a case of a double Big Bang. Hence, the universe \emph{still} has an origin'' \cite[180-181]{CraigSinclair:2009}. Nonetheless, as Daniel Linford (\citeyear{Linford:2020a, Linford:2020b}) has recently argued, there is tension between the views that the entropy gradient indicates the future direction and that the direction of time is irreducible. According to reductionists, such as David Albert (\citeyear{albert_2000, Albert:2015}) and Barry Loewer (\citeyear{Loewer_2007, Loewer:2012, Loewer:2019}), there is no microphysical distinction between past and future directions. Instead, the future appears to lie in a distinct direction from the past because the various macrophysical time asymmetries share a common reductive explanation with the entropy gradient. However, as Craig tells us, ``From the standpoint of a classical A-theorist like Isaac Newton, the failure of reductionistic attempts to explain the asymmetry of time is patent, since physical processes are at best mere sensible measures of time, not constitutive of time itself'' \cite[351]{Craig:1999}. Any attempt to reduce the direction of time is ``misconceived'' because God could have created a``universe lacking any of the typical thermodynamic, cosmological or other arrows of time'' but in which God ``experiences the successive states of the universe in accord with the lapse of His absolute time'' \cite[162]{craig_2001}. We don't need to adopt Craig's theology; as Poincar{\'e} wrote, ``the atheists [can imagine] put[ting] themselves in the place where God would be if he existed'' \cite[217]{Poincare:2001a}. One way that the universe could lack the typical arrows of time would be if the entropy decreased in the direction in which God experiences the successive states of the universe. We can imagine a God's eye view of the world in which eons pass as the entropy decreases and absolute time inexorably flows forward. Thus, Craig's metaphysics of time entails that the alignment between the direction of time and the thermodynamic arrow is not metaphysically necessary. 
In fact, Craig has argued that the ``physically reductionist accounts of temporal direction and/or anisotropy have little to commend themselves'' precisely because ``the physical arrows are neither necessary nor sufficient for time's having a direction and/or anisotropy'' \cite[352]{Craig:1999}. Craig has suggested that the second law of thermodynamics may render the alignment between the direction of time and the entropic arrow nomologically necessary \cite[78]{CarrollCraig:2016}. Nonetheless, the second law of thermodynamics is a statistical regularity that admits of exceptions; violations of the second law of thermodynamics are not nomologically impossible. In some places, Craig appears to admit that there is no nomologically necessary connection between the entropy gradient and the direction of time.\footnote{For example, Craig has considered a thought experiment in which the universe is a vast equilibrium gas with small, localized fluctuations from equilibrium. As Craig notes, for his reductionist interlocutors, there may be no sense in which a fluctuation at one time is either before or after a fluctuation at another distinct time. Craig thinks that an advantage of his anti-reductionism is that there would be a fact about which fluctuation is first regardless of how the universe's entropy changes in the interim \cite[354]{Craig:1999}. As Craig (\citeyear[355]{Craig:1999}) writes, ``The fact that entropy states of a process range in value between higher and lower numbers tells us nothing about which values exist later''.} Thus, on ANL, the alignment between the thermodynamic arrow and the direction of time is not logically, metaphysically, or nomologically necessary and Craig and Sinclair's interpretation of the interface as a double big bang cannot be adequately justified. The points that I've raised in this section are important and much more can be said. For now, let's move on by reminding ourselves of the resources that ANL supplies for determining the direction of time. \subsection{The Global Direction of Time in Bounce Cosmologies} We can imagine a CMC foliation of the bounce cosmology's space-time and a congruence of time-like curves passing through the foliation. And recall the procedures that we used in the Argument from IMPAPT and in the Argument from Empirical Adequacy. In both cases, we locally determined the direction of time -- either from our IMPAPT or from our ability to send or receive signals -- and then constructed the global direction of time by projecting the local direction via parallel transport. To determine whether or not time reverses at the interface, we can parallel transport future-directed time-like tangent vectors along a time-like geodesic congruence back through the interface and examine what the vectors do as they cross the interface. If we find that the future-directed, time-like tangent vectors become past-directed -- so that the direction of the objective past and the direction of the objective future flips at the interface -- then we would have reason to think that the interface is the beginning of two universes. And if the future-directed, time-like tangent vectors do not reverse their direction then the objective direction of time, as endorsed by Neo-Lorentzians, does not change at the interface. In that case, the other universe would be situated in our objective past and, contra Craig and Sinclair, bounce cosmologies should not be interpreted as having a beginning. 
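To make the projection procedure explicit (the notation in this paragraph is mine and is only a schematic rendering of the standard machinery): let $\gamma$ be a time-like curve with tangent $u^{a}$, and let $V^{a}$ be a tangent vector parallel transported along $\gamma$, that is,
\begin{equation}
u^{a}\nabla_{a}V^{b}=0.\notag
\end{equation}
Parallel transport preserves $g_{ab}V^{a}V^{b}$, so a time-like vector remains time-like, and in a time-orientable space-time the inner product of $V^{a}$ with a continuous, globally defined time-like field is continuous and never zero (two time-like vectors are never orthogonal in a Lorentzian metric); its sign therefore cannot flip along $\gamma$. This is the sense in which a locally determined future direction can be carried along a curve and compared at distant events.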
As I've said, bounce cosmologies can be constructed using either standard or modified FLRW equations. All standard and modified FLRW space-times, with the exception of those that allow some exotic topological features that are independently rejected by presentists (e.g., space-times featuring closed time-like curves), can be assigned a globally consistent direction of time because they satisfy the FLRW ansatz previously discussed. If we accept ANL's commitment to an absolute distinction between the past and the future in the current universe, then, not only does there exist a consistent time-ordering, but the time-ordering is uniquely determined by our IMPAPT and by the direction in which we can receive or transmit signals. In standard or modified FLRW space-times, the parallel transport of any future-directed time-like tangent vector along any time-like curve in any of the relevant FLRW space-times will never result in a past-directed time-like tangent vector. Therefore, when the direction of time is projected to the other universe via parallel transport, we find that the other universe is to our past. Therefore, contra Craig and Sinclair, friends of ANL should not interpret the interface as the birth of two universes. One might object to the view that I've offered in this section on the grounds that, given the mismatch between the entropic arrow in the other universe, according to some bounce cosmologies, events in the other universe would happen in reverse order. One needs to be careful because, as \cite{Linford:2020b} points out, some bounce cosmologies maintain that the entropy per unit volume decreases as one approaches the bounce and entropy becomes hidden behind a cosmological horizon, even though the ``total'' entropy does not decrease. In that case, there may still be sensible thermodynamic development along a given time-like curve from the prior universe into the current universe. However, other bounce cosmologies predict anti-thermodynamic behavior as one approaches the bounce. Assuming the supervenience of mental states on brain states, observers might form memories in the reverse direction; if we could communicate with them -- and two-way communication does not seem likely -- they might tell us that they experience the passage of time in reverse. Perhaps we should listen to our trans-universal interlocutors and conclude, with Craig and Sinclair, that the other universe is not to our past after all. Note that this argument is distinct from a closely related objection to some bounce cosmologies; according to the objection, we never observe a global reversal of the entropic arrow and this provides us good inductive reason to doubt bounce cosmologies in which there is a global reversal of the entropic arrow.\footnote{Nonetheless, as argued in \cite{Linford:2020b}, not all bounce cosmologies do include a global reversal of the entropic arrow of the sort that would be undermined by an inductive inference of this kind.} Whatever one might think of the merits of this objection, I am concerned with how friends of ANL should interpret bounce cosmologies and not with whether any bounce cosmology is plausible. Consider a family of debunking arguments against the A-theory of time according to which the passage of time makes no difference to the physical world. 
Assuming mental states supervene on brain states, the same sequence of phenomenal states would be realized regardless of whether the A-theory of time is true \cite[14-15]{Price:1997}, \cite{Prosser:2000, Prosser:2007, Prosser:2013}.\footnote{Kristie Miller (\citeyear{Miller:2017}) has recently offered a related argument. Miller argues that, independent of the supervenience of mental states on brain states, all versions of A-theory on offer suggest that the passage of time can make no relevant difference to our phenomenal experience. If she is right, then our experience of temporal passage does not provide a reliable guide to the direction in which time passes.} On behalf of A-theorists, Sam Baron (\citeyear{Baron:2017}) argues that we have no way to rule out the possibility that the passage of time will make a difference to future physics and so (possibly) make a difference to brain states. The A-theorist can apply a corresponding view to bounce cosmologies; perhaps observers in the other universe would feel as though time passes in the direction towards the bounce, even if our current physics indicates that physical processes will happen in the ``reverse direction'' with respect to the bounce. If so, our trans-universal interlocutors would not say that their experience of time is reversed from ours. Perhaps other A-theorists would say that experiencing temporal passage in reverse is not metaphysically possible, though Craig seems unbothered by the possibility of observers unable to successfully detect the passage of absolute time \cite[354]{Craig:1999}. In any case, thinking about how to respond to our trans-universal interlocutors has limited utility. We can compare (i) how friends of ANL should interpret a given cosmological model and (ii) what friends of ANL should say if the model turns out to correctly describe the world. If our world turns out to be correctly described by a bounce cosmology, and if there turn out to be creatures inhabiting the other universe who experience temporal passage in reverse, then the A-theorist would be left in an epistemic position similar to that of someone who discovers that some members of their world are brains-in-vats and who begins to wonder how they could know that they are not envatted themselves. Recall that, according to Craig, our IMPAPT provides an intrinsic defeater-defeater which ``overwhelms the objections brought against'' the ``objectivity of tense and the reality of temporal becoming'' \cite[138]{Craig:2000}. Perhaps Craig would likewise say that we have an intrinsic defeater-defeater for any objections brought against our access to the \emph{direction} of temporal becoming. If so, then, as with any agent who possesses an intrinsic defeater-defeater against global skepticism, observers who find themselves in a universe described by a bounce cosmology should retain their confidence that they have successfully identified the direction of time and so retain their confidence that the other universe is to our past. In any case, in this paper, I am considering cosmological models ``from the outside'', that is, how friends of ANL should \emph{interpret} bounce cosmological models.
Here, friends of ANL should conclude that, \emph{according to the model}, first, denizens of the prior universe may either experience anti-thermodynamic phenomena (if the supervenience of the mental on the physical fails) or else be subject to a widespread illusion about the direction of temporal passage and that, second, we, as denizens of the posterior universe, may either retain our confidence in the direction of temporal passage -- provided we have an intrinsic defeater-defeater for our access to the direction of temporal passage -- or else we would have to be much less confident about our experience of temporal passage than we can be in the actual world. And perhaps friends of ANL should be content that we currently have no evidence that we inhabit a bounce cosmology. Either way, we cannot conclude that the interface represents the \emph{ex nihilo} beginning of two universes. \section{Conclusion} Craig and Sinclair are not friends of metaphysical skepticism. They maintain thick metaphysical positions, \emph{viz}., the existence of God, a thick conception of the passage of time, a global present pervading the entire universe, and so on. Furthermore, they have frequently critiqued traditional interpretations of relativity on the grounds that those interpretations require positivist principles now largely considered \emph{pass\'{e}} in anglophone philosophy of science. But despite their attempts to skirt metaphysical skepticism, Craig and Sinclair have endorsed positions that, as I have argued, invite a form of metaphysical skepticism antithetical to their larger project. The beginning of the universe, as understood by Craig and Sinclair, cannot be directly observed. For that reason, empirical arguments for a beginning of the universe require us to conjoin observational data with a robust physical theory. In the case of singular (or appropriately inextendable) FLRW cosmologies, the conjunction of observational data and General Relativity may allow us to infer that the universe had an open boundary in the finite past, provided the difficulties discussed in section \ref{IDing_the_boundary} can be overcome. But, for Craig and Sinclair, an open boundary to space-time is insufficient for inferring that the universe began. When observational data is instead conjoined with Craig and Sinclair's Neo-Lorentzian alternative to relativity, we find ourselves unable to convincingly formulate the inferences that the universe has a past boundary or that the past boundary resides in the finite past. Craig and Sinclair have also offered an alternative interpretation of bounce cosmologies on which space-time has a closed boundary in the finite past. As I've shown, careful examination of bounce cosmologies reveals that Craig and Sinclair's Neo-Lorentzianism leaves us without reason to infer that bounce cosmologies include a closed boundary in the finite past. \section{Acknowledgements} Thanks to Levi Greenwood, Philipp Roser, George Gale, Felipe Leon, Jeffrey Brower, Martin Curd, and Jacqueline Mariña's dissertation seminar for helpful discussions or feedback on this article. \singlespacing \bibliographystyle{plainnat}
\section{Proof of the inequalities in Theorem \ref{T1}} \ \ Let us start with following lemma from [FM2], in order to clarify the equivalence between exponential inequalities on sets $\{x\in \mathbb{R}^n\ :\ |Tf(x)|\geq 1\}$ and regularized exponential inequalities over $\mathbb{R}^n$. \bigskip \noindent\textup{\bf Lemma A ([FM2, Lemma 9])} \emph{Let $(N,\nu)$ be a measure space and $1<p<\infty$, $a>0$. Then for every $u\in {L}^p(N)$ we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \int_{\{|u|\geq 1\}}e^{a|u|^{p'}}dx-e^a||u||^p_p\leq \int_{N} \bigg(e^{a|u|^{p'}}-\sum_{k=0}^{\lceil p-2\rceil}\frac{a^k|u|^{kp'}}{k!}\bigg)dx\leq \int_{\{|u|\geq 1\}}e^{a|u|^{p'}}dx+e^a||u||^p_p \label{l3a}\ee and also \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \int_{\{|u|\geq 1\}}\frac{e^{a|u|^{p'}}}{1+|u|^{p'}}dx-e^a||u||^p_p\leq \int_{N} \frac{e^{a|u|^{p'}}-\sum_{k=0}^{\lceil p-2\rceil}\frac{a^k|u|^{kp'}}{k!}}{1+|u|^{p'}}dx\leq \int_{\{u\geq 1\}}\frac{e^{a|u|^{p'}}}{1+|u|^{p'}}dx+e^a||u||^p_p. \label{l3b}\ee In particular, the following three inequalities are equivalent:\\ \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \int_{N} \frac{\exp_{\lceil p-2\rceil}{[a|u|^{p'}}]}{1+|u|^{p'}}dx\leq C||u||^p_p,\label{l3cc}\ee \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \int_{\{|u|\geq 1\}}\frac{e^{a|u|^{p'}}}{1+|u|^{p'}}dx\leq C||u||^p_p,\label{l3dd}\ee \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \int_{E} \frac{e^{a|u|^{p'}}}{1+|u|^{p'}}dx\leq C(||u||^p_p+|E|)\label{l3ee}\ee for every measurable set $E$ with finite measure.} \bigskip In order to prove \eqref{1b}, it is enough to show that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned}\int_{\{|Tf|\geq 1\}}\frac{\exp\bigg[\dfrac{1}{A_g}|Tf|^{\frac{n}{n-\alpha}}\bigg]}{1+|Tf|^{\frac{ n}{n-\alpha}}}dx \leq C||Tf||_{n/\alpha}^{n/\alpha}. \label{1c}\ee Let \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} t_0=\big|\big\{x\ :\ |Tf|\geq 1\big\}\big|.\nt\ee Note that by this definition we have $(Tf)^*(t)\geq 1$ for $0<t < t_0$ and $(Tf)^*(t)<1$ for $t>t_0$.\par Now we will show that \eqref{1c} is equivalent to \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \int_{0}^{t_0}\frac{\exp\bigg[\dfrac{1}{A_g}\big((Tf)^*(t)\big)^{\frac{n}{n-\alpha}}\bigg]}{1+\big((Tf)^*(t)\big)^{\frac{ n}{n-\alpha}}}dt \leq C||Tf||_{n/\alpha}^{n/\alpha}. \label{1d}\ee Let us denote the rearrangement of $Tf$ with respect to a measurable set $E$ as $(Tf)_E^*$ and its corresponding maximal function as $(Tf)_E^{**}$. 
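For the reader's convenience, recall the standard definitions (the conventions are the usual ones and are assumed to agree with those fixed earlier in the paper): for a measurable function $u$,
\begin{equation}
u^*(t)=\inf\big\{s\ge 0\ :\ \big|\{x\ :\ |u(x)|>s\}\big|\le t\big\},\qquad u^{**}(t)=\frac1t\int_0^t u^*(s)\,ds,\qquad t>0,\notag
\end{equation}
so that $u^*$ is the nonincreasing rearrangement of $u$ and $u^{**}$ is the maximal function of $u^*$.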
Clearly \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} (Tf)_E^*(t)=\big((Tf)\chi_E\big)^*(t),\qquad 0<t\le |E|.\label{r16}\ee Let \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} F(z)=\dfrac{e^{\frac{1}{A_g}z^{n/(n-\alpha)}}}{1+z^{n/(n-\alpha)}}\label{r15}\ee and $E=\big\{x\ :\ |Tf|\geq 1\big\}$, then the LHS of \eqref{1c} can be written as \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} \int_{E}F(|Tf|)dx=\int_{0}^{t_0}F((Tf)_E^*(t))dt =\int_{0}^{t_0}F((Tf)^*(t))dt,\end{aligned}\label{r17}\ee The first equality holds since for $F$ non-negative and measurable on $[0,\infty)$, for $g$ measurable on $\mathbb{R}^n$ and if $E$ is a level set of $g$, we have $ \int_{E}F\circ |g|dx= \int_{0}^{|E|}F\circ g_E^*dt$ (see for example [K, Theorem 1.1.1]). To prove the second equality, note that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} |(Tf)\chi^{}_{E}(x)|\geq ||(Tf)\chi^{}_{E^c}||_\infty,\qquad\text{for a.e.}\ x\in E,\nt\ee hence by Lemma \ref{l0} we get \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} (Tf)_{E}^*(t)=(Tf)^*(t),\qquad\text{for}\ 0<t < t_0.\nt\ee \ \ To estimate $(Tf)^*$, we first define $1$-parameter families of sets $E_\tau,\ F_\tau$ (depending on $f$) as follows. For $\tau>0$, let $E_\tau$ be the set such that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \bc &|E_{\tau}|=\tau\\ &\{x\ :\ |Tf(x)|>(Tf)^*(\tau)\}\subseteq E_{\tau}\subseteq \{x\ :\ |Tf(x)|\geq(Tf)^*(\tau)\}. \ec \label{d2}\ee In order to show that such $E_\tau$ exists, we denote $V_1=\{x\ :\ |Tf(x)|>(Tf)^*(\tau)\}$ and $V_2=\{x\ :\ |Tf(x)|\geq(Tf)^*(\tau)\}.$ By definition of rearrangement, we have that $\mu(V_1)\le \tau$ and $\mu(V_2)\geq \tau.$ If $\mu(V_2)=\tau$, we take $E_\tau=V_2$. Otherwise, consider the continuous function $g(r)=\mu(V_1)+\mu(B_r\cap (V_2\setminus V_1))$ for $r\geq 0$, where $B_r=B(0,r)$ is the ball centered at $0$ with radius $r$. It is clear that $g(0)=\mu(V_1)\le \tau$, and $g(r)\to\mu(V_2)$ as $r\to\infty$. Since $\mu(V_2)>\tau$, there exists a $r$ such that $g(r)=\tau$, and $E_\tau=V_1\cup(B_r\cap V_2\setminus V_1)$ is a measurable set that satisfies the condition \eqref{d2}. \bigskip Similarly, let $F_{\tau}$ be the set such that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \bc &|F_{\tau}|=\tau\\ &\{x\ :\ |f(x)|>f^*(\tau)\}\subseteq F_{\tau}\subseteq \{x\ :\ |f(x)|\geq f^*(\tau)\}. 
\ec \label{d3}\ee Let \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} f_{\tau}=f\chi^{}_{F_{\tau}},\ \ \ f_\tau'=f\chi^{}_{F^c_{\tau}} \nt\ee \noindent and $r(\tau)=(\tau/|B_1|)^{1/n}$ so that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} |B(0,r(\tau))|=\tau.\label{rtau}\ee \begin{rk} If $f$ and $K$ are radially decreasing, then both $E_\tau$ and $F_\tau$ are either open or closed balls of volume $\tau$.\end{rk} Next, for all $x \in E_\tau$ define \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} W(\tau,x)=\int_\tau^{2\tau}k_1^*(u)(f_\tau'\chi^{}_{B(x,r(\tau))})^*(u-\tau)du \label{d4} \ee \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} M(\tau,x)=|T(f_\tau'\chi^{}_{B^c(x,r(\tau))})(x)|. \label{d5}\ee \noindent Lastly, for fixed $\tau>0$, take the essential supremum in \eqref{d4} and \eqref{d5}, and let \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} W_\tau=\supess_{x\in E_\tau} W(\tau,x) \label{d7} \ee \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} M_\tau=\supess_{x\in E_\tau} M(\tau,x). \label{d8}\ee We want to point out that for all $\tau>0$ we have $W_\tau<\infty$ and $M_\tau<\infty$, by the fact that $f$ is compactly supported and $||f||_{n/\alpha}\leq 1$. Also, note that for each $x$ and $\tau$, \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} f=f_\tau+f'_\tau=f_\tau+f_\tau'\chi^{}_{B(x,r(\tau))}+f_\tau'\chi^{}_{B^c(x,r(\tau))}\ \ \ \nt \ee and \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} Tf(x)=Tf_\tau(x)+T(f_\tau'\chi^{}_{B(x,r(\tau))})(x)+T(f_\tau'\chi^{}_{B^c(x,r(\tau))})(x).\label{d6}\ee From now on we will use the following notation:\begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} q=\frac{n}{\alpha},\qquad\qquad q'=\frac{n}{n-\alpha}.\nt\ee Recall that the O'Neil functional is defined as \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} Uf(t)=C_0t^{-\frac{1}{q'}} \int_0^{t}{f}^*(u)du+\int_t^{\infty}k_1^*(u){f}^*(u)du. \nt\ee Our first step toward a proof of \eqref{1d} is to establish the following estimate: \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} (Tf)^{*}(t)\leq (Tf)^{**}(t)\leq Uf_\tau(t)+W_\tau+M_\tau\ \ \ \ \ \textup{for}\ 0<t\leq\tau.\label{e1}\ee Recall the definition of $(Tf)_E^*$ in \eqref{r16}, for any measurable set $E$ . 
The definition of $E_\tau$ implies that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} |(Tf)\chi^{}_{E_\tau}(x)|\geq ||(Tf)\chi^{}_{E^c_\tau}||_\infty,\qquad\text{for a.e.}\ x\in E_\tau,\nt\ee hence we can apply Lemma \ref{l0} to get \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} (Tf)_{E_\tau}^{**}(t)=(Tf)^{**}(t),\qquad\text{for}\ 0<t\le\tau.\label{TE1}\ee Let \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} f_{x,\tau}=f_\tau+f_\tau'\chi^{}_{B(x,r(\tau))},\nt\ee note that by the definition of $M_\tau$ in \eqref{d8} and the decomposition of $Tf$ in \eqref{d6}, \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} |(Tf)\chi^{}_{E_\tau}(x)|\le |(Tf_{x,\tau})\chi^{}_{E_\tau}(x)|+M_\tau,\qquad \text{for} \ x\in E_\tau.\nt\ee Due to subadditivity of $(\cdot)^{**}$ (see [BS, Chapter 2 inequality (3.12)]) and \eqref{TE1}, we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} (Tf)^*(t)\le(Tf)^{**}(t)= (Tf)_{E_\tau}^{**}(t)\le (Tf_{x,\tau})_{E_\tau}^{**}(t)+M_\tau,\qquad \ 0<t\le\tau.\label{TE2}\ee Therefore in order to prove \eqref{e1} it is enough to show the following: \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} (Tf_{x,\tau})_{E_\tau}^{**}(t)\leq Uf_\tau(t)+W_\tau\qquad \text{for}\ 0<t\le \tau.\label{e101}\ee In other words, we only need to show the rearrangement of $Tf_{x,\tau}(x)$ over the set $E_\tau$ satisfies \eqref{e101}. Let us apply the improved O'Neil Lemma (Lemma \ref{l}) with $N=E_\tau,\ M=\mathbb{R}^n,\ q=n/\alpha,\ f_x=f_{x,\tau}$ , and \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \overline{f}=|f_\tau+f_\tau'|=|f| \label{e2}\ee so that \eqref{l2} holds. For $x\in E_\tau,$ let \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} T'f(x)=Tf_{x,\tau}(x)=\int_{\mathbb{R}^n}K(x-y)(f_\tau+f_\tau'\chi_{B(x,r(\tau))})(y)dy. \label{e3}\ee By Lemma 9 in [FM1], we have that \eqref{A1}, \eqref{A3} implies the condition \eqref{l1} in Lemma \ref{l}. From now on we will use $k^*$ to denote $k_1^*$ since $k(x,y)=K(x-y)$ is a convolution kernel, and $k^*(t)=k_1^*(t)$. 
We obtain \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned}\begin{aligned} (T'f)_{E_\tau}^{**}(t)&\leq C_0t^{-\frac{1}{q'}}\int_0^t {f}^*(u)du+\supess_{x\in E_\tau}\int_t^{\infty}k_1^*(x,u)f_{x,\tau}^*(u)du\cr &=C_0t^{-\frac{1}{q'}}\int_0^t {f}^*(u)du+\supess_{x\in E_\tau}\int_t^{\infty}k^*(u)f_{x,\tau}^*(u)du.\end{aligned}\label{e4}\ee By definition of $F_\tau, f_\tau$ and \eqref{e2}, we apply Lemma \ref{l0} to get \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} {f}^*(u)=(f_\tau+f_\tau')^*(u)=f_\tau^*(u)\ \ \text{if} \;\; 0<u<\tau\label{e5} \ee and \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} f_{x,\tau}^*(u)=(f_\tau+f_\tau'\chi_{B(x,r(\tau))})^*(u)=\begin{cases}f_\tau^*(u) &\text{if} \;\; 0<u<\tau\\ \big(f_\tau'\chi_{B(x,r(\tau))}\big)^*(u-\tau)& \text{if} \;\;u>\tau.\end{cases}\label{e6} \ee Therefore, \eqref{e4} can rewritten as \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned}(T'f)_{E_\tau}^{**}(t)&\leq C_0t^{-\frac{1}{q'}}\int_0^t f_\tau^*(u)du+\int_t^{\tau}k^*(u)f_\tau^*(u)du\cr &+\supess_{x\in E_\tau}\int_\tau^{2\tau}k^*(u)\big(f_\tau'\chi_{B(x,r(\tau))}\big)^*(u-\tau)du\cr &=Uf_\tau(t)+W_\tau.\end{aligned} \label{e7}\ee Hence \eqref{e101} is proved and \eqref{e1} follows. \bigskip Next we consider the following inequality (also in [MS2, the inequality below (4.7)], with slightly different form) \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} (a+b)^p\leq \lambda^{1-p}a^p+(1-\lambda)^{1-p}b^p\qquad\quad a,b\geq0,\;0<\lambda<1,\; p>1\label{1*}, \ee which can be proved by writing $a+b$ as $(a\lambda^{-1/p'})\lambda^{1/p'}+(b(1-\lambda)^{-1/p'})(1-\lambda)^{1/p'}$ and apply Holder inequality. Then we use estimation \eqref{e1} and apply the above inequality to the integrand in \eqref{1d}, with \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \ p=q',\ a=(Uf_\tau)^*(t),\ \textup{and}\ b=W_\tau+M_\tau.\nt\ee We get \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned}\begin{aligned} &\frac{\exp\bigg[\dfrac{1}{A_g}\big((Tf)^*(t)\big)^{q'}\bigg]}{1+\big((Tf)^*(t)\big)^{q'}} \leq C\frac{\exp\bigg[\dfrac{1}{A_g}\big(Uf_\tau(t)+W_{\tau}+M_{\tau}\big)^{q'}\bigg]}{1+\big(Uf_\tau(t)+W_{\tau}+M_{\tau}\big)^{q'}} \cr &\leq C \frac{\exp\bigg[\dfrac{(1-\lambda)^{1-q'}}{A_g}\big(W_{\tau}+M_{\tau}\big)^{q'}\bigg]}{1+\big(W_{\tau}+M_{\tau}\big)^{q'}}\cdot{\exp\bigg[\dfrac{\lambda^{1-q'}}{A_g}\big(Uf_\tau(t)\big)^{q'}\bigg]}. \end{aligned} \label{main1}\ee To get the first inequality in \eqref{main1}, let $F(z)$ be defined as in \eqref{r15}. Note that $F(z)\geq C>0$ on $[0,\infty)$ and is increasing in $z$ for $z\geq 1+A_g^{(n-a)/n}$. Also recall that for $0<t < t_0$ we have $(Tf)^*(t)\geq 1$. We consider two cases. If $1\le (Tf)^*(t)\le 1+A_g^{(n-a)/n}$, then we have $F((Tf)^*(t))\le C$ and $F(Uf_\tau(t)+W_{\tau}+M_{\tau})\geq C>0$. 
If $(Tf)^*(t)\geq 1+A_g^{(n-a)/n}$, then by \eqref{e1} and the fact that $F(z)$ is increasing, the first inequality follows.\par Let $t_1>0$ be the number such that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \dfrac{\int_0^{t_1}f^*(u)^{q}du}{||f||_{q}^{q}}=\frac{1}{4}\nt\ee and \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned}\epsilon_\tau=\min\left\{\dfrac{1}{4},\ \dfrac{\int_0^{\tau}f^*(u)^{q}du}{||f||_{q}^{q}}\right\}.\nt\ee We estimate \eqref{main1} using the following two lemmas. The first one is an integral estimate (essentially the Adams inequality): \begin{lemma}\label{lemmaI2} If we define \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} I_2(\tau,t,\lambda)={\exp\bigg[\dfrac{\lambda^{1-q'}}{A_g}\big(Uf_\tau(t)\big)^{q'}\bigg]}, \qquad \tau>0,\ t>0,\ \lambda>0,\nt\ee then \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \int_0^{\tau}I_2(\tau,t,\epsilon_\tau)dt\le C\tau,\qquad 0<\tau\le t_1. \label{c1bbb}\ee \end{lemma} \noindent {\bf Proof of Lemma \ref{lemmaI2}:} First note that when $\tau\le t_1$, we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \epsilon_\tau=\dfrac{\int_0^{\tau}f^*(u)^{q}du}{||f||_{q}^{q}}.\nt\ee If we let \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \widetilde{f}:=\frac{f_{\tau}}{\epsilon_{\tau}^{1/q}}, \nt\ee then we have that $\widetilde{f}$ has measure of support $\mu(\textup{supp} \widetilde{f})\leq \tau$ with \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} ||\widetilde{f}||_q\leq1,\nt\ee also assumption \eqref{A1} implies the estimate \eqref{l1} on $k^*$ (See [FM1, Lemma 9]). That is, conditions \eqref{t0a} and \eqref{t0b} are satisfied. Therefore by Theorem A, the Adams inequality for the O'Neil functional, we obtain \eqref{c1bbb}.\hfill{\setlength{\fboxsep}{0pt}\setlength{\fboxrule}{0.2pt}\fbox{\rule[0pt]{0pt}{1.3ex}\rule[0pt]{1.3ex}{0pt}}}\\ \par The estimation for $I_1(\tau,\lambda)$ which is stated in the following lemma is essential for the rest of the proof. Let us assume the lemma for now, and its proof will be given in sections 4-5. \begin{lemma}\label{le1} Let $0<\tau\leq t_0$, $0\le\lambda<1$. Define \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} I_1(\tau,\lambda)=\frac{\exp\bigg[\dfrac{(1-\lambda)^{1-q'}}{A_g}\big(W_{\tau}+M_{\tau}\big)^{q'}\bigg]}{1+\big(W_{\tau}+M_{\tau}\big)^{q'}}.\nt\ee Then there exists constant $C>0$ such that for all $x_1,x_2\in E_\tau$ \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} I_1(\tau,\epsilon_\tau)\leq \frac{C}{\tau}||Tf||^{q}_{q} \label{l1a} \ee where $C=C(n,\alpha,K).$ \end{lemma} Assuming Lemma \ref{le1}, let $ \tau_0=\min\{t_0,t_1\}$. 
To prove \eqref{1d} it is enough to show that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \int_{0}^{\tau_0}\frac{\exp\bigg[\dfrac{1}{A_g}\big((Tf)^*(t)\big)^{q'}\bigg]}{1+\big((Tf)^*(t)\big)^{q'}}dt\leq C||Tf||_q^q \label{c2a} \ee and then show that if $t_1<t_0$, \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \int_{t_1}^{t_0}\frac{\exp\bigg[\dfrac{1}{A_g}\big((Tf)^*(t)\big)^{q'}\bigg]}{1+\big((Tf)^*(t)\big)^{q'}}dt\leq C||Tf||_q^q. \label{c2b} \ee To prove \eqref{c2a}, we take $\tau=\tau_0$ in \eqref{l1a} and \eqref{c1bbb} to get \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} I_1(\tau_0,\epsilon_{\tau_0})\leq \frac{C}{\tau_0}||Tf||^{q}_{q}\qquad\text{and}\qquad \int_0^{\tau_0}I_2(\tau_0,t,\epsilon_{\tau_0})dt \leq C\tau_0. \label{main4} \ee Therefore, using \eqref{main1} it is immediate that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \int_{0}^{\tau_0}\frac{\exp\bigg[\dfrac{1}{A_g}\big((Tf)^*(t)\big)^{q'}\bigg]}{1+\big((Tf)^*(t)\big)^{q'}}dt\leq\int_0^{\tau_0}C I_1(\tau_0,\epsilon_{\tau_0})I_2(\tau_0,t,\epsilon_{\tau_0})dt\le C||Tf||^{q}_{q}\label{main6}\ee and \eqref{c2a} follows.\\ Next, to show \eqref{c2b}, we take $\tau=t$ for $t_1\le t\le t_0$, and $\lambda=\frac{1}{8}$ in the definition of $I_2$ in Lemma \ref{lemmaI2}. Then by definition of O'Neil's operator and the fact that the support $f_t$ has measure less than or equal $t$, \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} Uf_t(t)=C_0t^{-\frac{1}{q'}}\int_0^t f_t^*(u)du \leq C||f_t||_q\leq C.\nt\ee So we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} I_2\bigg(t,t,\frac{1}{8}\bigg)\le C.\label{main7}\ee Since $t_1\le t\le t_0$ and $\epsilon_{t_1}=\frac{1}{4}$, by definition $\epsilon_t=\frac{1}{4}$. Take $\theta=(6/7)^{\frac{\alpha}{n-\alpha}}<1$. Hence \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} I_1\bigg(t,\frac{1}{8}\bigg)&= \frac{\exp\bigg[ \dfrac{(7/8)^{1-q'} }{A_g}\big(W_t+M_t\big)^{q'}\bigg]}{1+(W_t+M_t)^{q'}}=\frac{\exp\bigg[(7/6)^{1-q'} \dfrac{(3/4)^{1-q'} }{A_g}\big(W_t+M_t\big)^{q'}\bigg]}{1+(W_t+M_t)^{q'}}\cr &=\frac{\left(\exp\bigg[ \dfrac{(3/4)^{1-q'} }{A_g}\big(W_t+M_t\big)^{q'}\bigg]\right)^\theta}{1+(W_t+M_t)^{q'}}\le \Bigg(\frac{\exp\bigg[ \dfrac{(3/4)^{1-q'} }{A_g}\big(W_t+M_t\big)^{q'}\bigg]}{1+(W_t+M_t)^{q'}}\Bigg)^\theta\cr &=I_1^{\theta}\bigg(t,\frac{1}{4}\bigg)=I_1^{\theta}(t,\epsilon_t)\le \frac{C}{t^\theta}||Tf||^{\theta q}_q. 
\end{aligned} \label{main8}\ee Using \eqref{main7}, \eqref{main8} we get \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} \int_{t_1}^{t_0}CI_1\bigg(t,\frac{1}{8}\bigg)I_2\bigg(t,t,\frac{1}{8}\bigg)dt&\leq C\int_{t_1}^{t_0}\frac{1}{t^\theta}||Tf||^{\theta q}_qdt\le Ct_0^{1-\theta}||Tf||^{\theta q}_{q}\cr &\leq C||Tf||^{(1-\theta){q}}_{q}||Tf||^{\theta q}_{q}=C||Tf||_q^q, \end{aligned} \label{c2e}\ee where the last inequality is by the fact that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} ||Tf||_q^q\geq t_0\label{c2f}\ee since $(Tf)^*(t)\geq 1$ for all $t< t_0$, by the definition of $t_0$. So \eqref{c2b} follows from \eqref{main1}. In order to complete the proof of Theorem \ref{T1}, we are left to prove Lemma \ref{le1}. \section{Proof of Lemma \ref{le1}} It is enough to show that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \frac{\exp\bigg[\dfrac{(1-\epsilon_\tau)^{-\frac{\alpha}{n-\alpha}}}{A_g}(W(\tau,x_2)+M(\tau,x_1))^{\frac{n}{n-\alpha}}\bigg]}{1+(W(\tau,x_2)+M(\tau,x_1))^{\frac{n}{n-\alpha}}}\leq C\frac{||Tf||_q^q}{\tau}\label{le1aaa}\ee for all $x_1,\ x_2\in E_\tau.$ Now let us state the following key lemma in [MS1]-[MS3], [LT]. \begin{lemma}\label{lo} Given any sequence $\displaystyle a=\{a_k\}_{k\geq 0}.$ Let $q>1 $ and\begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} ||a||_1=\sum_{k=0}^{\infty} |a_k|,\ ||a||_q=\big(\sum_{k=0}^{\infty} |a_k|^q\big)^{1/q}\label{lo1}\ee define \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \mu_d(h)=\inf \{\sum_{k=0}^{\infty} |a_k|^{q}e^{qk}: \ ||a||_1=h,\ ||a||_q \leq 1\}.\nt\ee Then for $h>1$, we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} C_1(q)\frac{\exp\big[{qh^{q'}}\big]}{h^{ q'}}\leq \mu_d(h)\leq C_2(q) \frac{\exp\big[{qh^{q'}}\big]}{h^{ q'}}.\label{lo2}\ee\end{lemma} As a consequence of the above optimal growth lemma, we deduce that for any $q>1$ and any $\mu>0,\,h>1$ there is $C=C(q)$ such that for any sequence $\{a_k\}$ satisfying \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned}\sum_{k=0}^\infty |a_k|=h \qquad \sum_{k=0}^\infty|a_k|^q\le \mu\label{le1a}\ee we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \frac{\exp\big[{q\mu^{1-q'}h^{q'}}\big]}{h^{q'}}\le C\mu^{-q'}\sum_{k=0}^\infty |a_k|^q e^{qk}.\label{le1b}\ee The next task is to find a number $h_1$, depending on $f$ and $x_1$, and a sequence $a=\{a_k\}$, also depending on $f$ and $x_1$, such that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} q^{-\frac 1{q'}}A_g^{-\frac1 {q'}} (W(\tau,x_2)+M(\tau,x_1))\le h_1\label{le1c}\ee \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \sum_{k=0}^{\infty}|a_k|=h_1,\qquad\quad \sum_{k=0}^{\infty}|a_k|^q\le 1-\epsilon_\tau,\qquad\quad \sum_{k=0}^{\infty}|a_k|^qe^{qk}\le \frac {C}{\tau}||Tf||_q^q.\label{le1d} \ee Clearly \eqref{le1aaa} follows from \eqref{le1a}-\eqref{le1d}, with $\mu=1-\epsilon_\tau\geq 
\frac{3}{4}$ and $h=h_1$.\par From now on, throughout the proof of Lemma \ref{le1}, we fix $0<\tau\leq t_0$, and $x_1,x_2\in E_\tau$ as defined in \eqref{d2}. First let us introduce some notation. Recall that $r(\tau)$ is the number such that $|B(0,r(\tau))|=\tau$. Define for each $j=0,1,2...$ \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} r_j= r(\tau) e^{\frac{q}{n}j},\qquad D_j=B(x_1,\ r_j) \nt\ee \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \alpha_j=||f_\tau'\chi_{D_{j+1}\setminus D_{j}}^{}||_q,\qquad \alpha_{-1}=||f_\tau'\chi_{D_0}||_q\nt\ee \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \ab_j=\max{\{\alpha_{-1},\alpha_0,...,\alpha_j\}},\qquad \beta}\def\ab{{\overline\a}_j=||f_\tau'\chi_{D^c_{j}}^{}||_q. \nt\ee Notice that for any $j$ $$\alpha_j\le\beta}\def\ab{{\overline\a}_j\le1.$$ Clearly $\beta}\def\ab{{\overline\a}_j$ is decreasing, and it vanishes for all $j$ large enough, since $f$ has compact support. In particular, there is an integer $N$ so that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} {\rm supp }}\def\N{{\mathbb N} f\subseteq D_N=B(x_1,r_N). \nt\ee Now we are ready to state the main estimates on $M(\tau,x_2)$ and $W(\tau,x_1)$: \begin{prop}\label{cl} There exist constants $C_2,C_3$ independent of $f$ and an integer $J$ such that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} q^{-\frac 1{q'}}A_g^{-\frac1 {q'}} (W(\tau,x_2)+M(\tau,x_1))\le\sum_{j=0}^{J}\alpha_j+C_2\ab_J+C_2\beta}\def\ab{{\overline\a}_J \label{cl1}\ee and \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \sum_{j=0}^{J} \alpha_j^qe^{qj}+\beta}\def\ab{{\overline\a}_J^qe^{qJ}\leq \frac{C_3}{\tau}||Tf||_q^q.\label{cl2}\ee \end{prop} The proof of Proposition \ref{cl} will be given in section 5. Assuming the proposition, we now show how to derive \eqref{l1a}, and hence finish the proof of Lemma~\ref{le1}, using \eqref{le1a}-\eqref{le1d} together with Proposition \ref{cl}. \par Our goal is to find a number $h_1$ and a sequence $a=\{a_k\}$ that satisfies \eqref{le1c} and \eqref{le1d}. Let \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} h_1=\sum_{j=0}^J \alpha_j+C_2\,\ab _J+C_2\beta}\def\ab{{\overline\a}_J.\label{le1e}\ee Clearly, by Proposition \ref{cl}, we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} q^{-\frac 1{q'}}A_g^{-\frac1 {q'}} (W(\tau,x_2)+M(\tau,x_1))\le h_1.\label{le1f}\ee Let $J^*$ be the smallest integer such that $\alpha_{J^*}=\ab_{J}$. It is clear that $J^*\le J$. To construct the sequence $a$ that satisfies \eqref{le1d}, let us first define $N_i,\ i=1,...,4$ as follows: \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} N_1&=J^*\cr N_2&=N_1+\lceil(1+C_2)^{q'}\rceil\cr N_3&=N_2+J-1-J^*\cr N_4&=N_3+\lceil(1+C_2)^{q'}\rceil. 
\end{aligned} \label{le1g}\ee Let $a=\{a_k\}$ be the following: \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} a_k= \begin{cases} \alpha_{k-1}\;\text{if}\ J^*\ne -1; \ 0 \ \text{if}\ J^*=-1 \; &\text{for}\ \ k=0,...,N_1\cr \dfrac{(1+C_2)\ab_{J}}{N_2-N_1}\;\text{if}\ J^*\ne -1; \ \dfrac{C_2\ab_{J}}{N_2-N_1} \ \text{if}\ J^*=-1 \ &\text{for}\ \ k=N_1+1,...,N_2\cr \alpha_{k-N_2+J^*}\;\;&\text{for}\ \ k=N_2+1,...,N_3\cr \dfrac{\alpha_{J}+C_2\beta}\def\ab{{\overline\a}_{J}}{N_4-N_3}\;\; &\text{for}\ \ k=N_3+1,...,N_4. \cr \ec \label{le1h}\ee With this definition of $a_k$ we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned}||a||_1=\sum_{k=0}^{N_4}|a_k|=\sum_{k=0}^{N_4}a_k=h_1.\label{le1i}\ee If $J^*\ne -1$, \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned}\begin{aligned} \sum_{k=0}^{N_4}|a_k|^q&=\sum_{k=0}^{N_1}\alpha_{k-1}^q+\sum_{k=N_1+1}^{N_2}\left(\frac{(1+C_2)\ab_{J}}{N_2-N_1}\right)^q+\sum_{k=N_2+1}^{N_3}\alpha_{k-N_2+J^*}^{q}\cr &+\sum_{k=N_3+1}^{N_4}\left(\frac{\alpha_{J}+C_2\beta}\def\ab{{\overline\a}_{J}}{N_4-N_3}\right)^q\cr &\leq \sum_{k=0}^{J^*-1}\alpha_{k-1}^q+\ab^q_{J}+\sum_{k=J^*+1}^{J-1}\alpha_k^q+\beta}\def\ab{{\overline\a}^q_{J}\cr &=||f_\tau'||_q^q\leq (1-\epsilon_\tau)||f||^q_q\leq 1-\epsilon_\tau. \end{aligned} \label{le1j}\ee Likewise for $J^*=-1$, \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned}\begin{aligned} \sum_{k=0}^{N_4}|a_k|^q&=\sum_{k=N_1+1}^{N_2}\left(\frac{C_2\ab_{J}}{N_2-N_1}\right)^q+\sum_{k=N_2+1}^{N_3}\alpha_{k-N_2+J^*}^{q}\cr &+\sum_{k=N_3+1}^{N_4}\left(\frac{\alpha_{J}+C_2\beta}\def\ab{{\overline\a}_{J}}{N_4-N_3}\right)^q\cr &\leq \alpha_{-1}^q+\sum_{k=0}^{J-1}\alpha_{k}^q+\beta}\def\ab{{\overline\a}^q_{J}\cr &=||f_\tau'||_q^q\leq (1-\epsilon_\tau)||f||^q_q\leq 1-\epsilon_\tau. \end{aligned} \label{le1k}\ee And using \eqref{cl2} in Proposition \ref{cl}, we also have, if $J^*\ne-1$ \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned}\begin{aligned} \sum_{k=0}^{N_4}|a_k|^qe^{qk}&=\sum_{k=0}^{N_1}\alpha_{k-1}^qe^{qk}+\sum_{k=N_1+1}^{N_2}\left(\frac{(1+C_2)\ab_{J}}{N_2-N_1}\right)^qe^{qk}+\sum_{k=N_2+1}^{N_3}\alpha_{k-N_2+J^*}^{q}e^{qk}\cr &+\sum_{k=N_3+1}^{N_4}\left(\frac{\alpha_{J}+C_2\beta}\def\ab{{\overline\a}_{J}}{N_4-N_3}\right)^qe^{qk}\cr &\leq \sum_{k=0}^{J^*-1}\alpha_{k-1}^qe^{qk}+C\ab^q_{J}e^{q(J^*+C_4)}+\sum_{k=J^*+1}^{J-1}\alpha_k^qe^{q(k+C_4)}+\beta}\def\ab{{\overline\a}^q_{J}e^{q(J+2C_4)}\cr &\leq Ce^{2C_4}\left(\alpha_{-1}^q+\sum_{k=0}^{J}\alpha_{k}^qe^{qk}+\beta}\def\ab{{\overline\a}_J^{q}e^{qJ}\right)\le Ce^{2C_4}\left(C+\sum_{k=0}^{J}\alpha_{k}^qe^{qk}+\beta}\def\ab{{\overline\a}_J^{q}e^{qJ}\right)\cr &=Ce^{2C_4}\left(C\frac{\tau}{\tau}+\sum_{k=0}^{J}\alpha_{k}^qe^{qk}+\beta}\def\ab{{\overline\a}_J^{q}e^{qJ}\right)\le \frac{C}{\tau}||Tf||_q^q \end{aligned} \label{le1l}\ee where $C_4$ in the above inequality is $C_4=\lceil(1+C_2)^{q'}\rceil$, and in the last inequality we used the fact that $\tau\le ||Tf||_q^q$ since $(Tf)^*(t)\geq1$ for $0<t\le \tau< t_0$. 
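For completeness, the bound $\tau\le ||Tf||_q^q$ invoked in the last step is immediate from the equimeasurability of $Tf$ and $(Tf)^*$ together with the definition of $t_0$:
\begin{equation}
||Tf||_q^q=\int_0^\infty \big((Tf)^*(u)\big)^q\,du\ \ge\ \int_0^{\tau} \big((Tf)^*(u)\big)^q\,du\ \ge\ \tau,\notag
\end{equation}
since $(Tf)^*(u)\ge 1$ on $(0,\tau)\subseteq(0,t_0)$.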
Similarly for $J^*=-1$,we also have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned}\begin{aligned} \sum_{k=0}^{N_4}|a_k|^qe^{qk}\leq Ce^{2C_4}\left(\alpha_{-1}^q+\sum_{k=0}^{J}\alpha_{k}^qe^{qk}+\beta}\def\ab{{\overline\a}_J^{q}e^{qJ}\right)\le \frac{C}{\tau}||Tf||_q^q. \end{aligned} \label{le1m}\ee Finally, \eqref{le1j}-\eqref{le1m} shows that the sequence $a$ satisfies \eqref{le1c} and \eqref{le1d}. Hence \eqref{l1a} follows and the proof is concluded.\hfill{\setlength{\fboxsep}{0pt}\setlength{\fboxrule}{0.2pt}\fbox{\rule[0pt]{0pt}{1.3ex}\rule[0pt]{1.3ex}{0pt}}} \section{Proof of Proposition \ref{cl}} In the following proof we will set for any measurable function $\phi:\mathbb{R}^m\to \mathbb{R}$ \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} S_j \phi=\phi\chi_{D_j^c}^{}=\phi\chi_{\{|y-x_1|\ge r_j\}}^{}. \nt\ee With this notation we then have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned}( S_j-S_{j+1})f_\tau'=f_\tau'\chi_{D_{j+1}\setminus D_j}^{}=f_\tau'\chi_{\{r_j\le |y-x_1|<r_{j+1}\}}^{} \nt\ee and \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \alpha_j=\|( S_j-S_{j+1})f_\tau'\|_q,\qquad \beta}\def\ab{{\overline\a}_j=\|S_jf_\tau'\|_q. \nt\ee \def{\rm supp }}\def\N{{\mathbb N}{{\rm supp }}\def\N{{\mathbb N}} Also note that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} f_\tau'=f_\tau'\chi^{}_{B(x_1,r(\tau))}+f_\tau'\chi^{}_{B^c(x_1,r(\tau))}=f_\tau'\chi^{}_{D_0}+S_0f_\tau'.\label{de1}\ee For the rest of the proof we assume that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} TS_0f_\tau'(x_1) \geq 0.\label{de2}\ee If, on the other hand, \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} TS_0f_\tau'(x_1) < 0,\label{de22}\ee we replace $T$ by $-T$, and the proof is exactly the same.\par We first give some preliminary estimates on $W(\tau,x_2)$ and $M(\tau,x_1).$ We have that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} W(\tau,x_2)&=\int_\tau^{2\tau}k_1^*(u)(f_\tau'\chi^{}_{B(x,r(\tau))})^*(u-\tau)du\cr &\leq C\int_\tau^{2\tau}u^{-\frac{1}{q'}}(f_\tau'\chi^{}_{B(x_2,r(\tau))})^*(u-\tau)du\leq C||(f_\tau'\chi^{}_{B(x_2,r(\tau))})^*||_q\cr&= C||f_\tau'\chi^{}_{B(x_2,r(\tau))}||_q. 
\end{aligned} \label{la7}\ee Since $f$ is supported in $D_N$, we also have that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} {\rm supp }}\def\N{{\mathbb N}\ f_\tau'\chi^{}_{B(x_2,r(\tau))} \subseteq D_N=\bigcup_{j=0}^{N-1}(D_{j+1}\setminus D_j)\cup D_0 \nt\ee by the definition of $D_j$ it is clear that $B(x_2,r(\tau))$ can only have nonempty intersection with at most two elements in the set \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \big\{D_0,\ D_{j+1}\setminus D_j,\ \textup{for}\ j=0,1,...,N-1\big\},\nt\ee therefore we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} ||f_\tau'\chi^{}_{B(x_2,r(\tau))}||_q\leq \alpha_{j_1}+\alpha_{j_2} \label{la8}\ee for some $j_1,j_2\in \{-1,0,1,...,N\}$. Then by the definitions of $\alpha,\ab,\beta}\def\ab{{\overline\a}$, we have for any $J\in\{0,1,...,N\}$ \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \bc \alpha_{j}\leq \ab_{J}\ \ \ \text{if}\ J\geq j\\ \alpha_{j}\leq \beta}\def\ab{{\overline\a}_{J}\ \ \ \text{if}\ J\leq j\ec\ \ \ \ j=j_1,j_2\label{la9}\ee so that by combining \eqref{la7},\eqref{la8} and \eqref{la9}, we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} W(\tau,x_2)\leq C\ab_J+C\beta}\def\ab{{\overline\a}_J\label{la2}\ee where $C=C(n,\alpha,K).$ Next, recall that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} M(\tau,x)=|T(f_\tau'\chi^{}_{B^c(x,r(\tau))})(x)|. \ee By \eqref{de1} and \eqref{de2}, we can write, for any $J\in\{0,1,...,N\}$ \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} M(\tau,x_1)&=|TS_0f_\tau'(x_1)|={T}S_0f_\tau'(x_1) \cr &=\sum_{j=0}^J\big({T}S_jf_\tau'(x_1)-{T}S_{j+1}f_\tau'(x_1)\big)+{T}S_{J+1}f_\tau'(x_1)\cr &=\sum_{j=0}^J{T}\big(S_jf_\tau'-S_{j+1}f_\tau'\big)(x_1)+{T}S_{J+1}f_\tau'(x_1). \end{aligned} \nt\ee Next, for any integer $j$, we have the estimate \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned}\begin{aligned} &{T}\big(S_jf_\tau'-S_{j+1}f_\tau'\big)(x_1)\le |T(S_jf_\tau'-S_{j+1}f_\tau')(x_1)|\cr&\le \bigg(\mathop\int\limits_{r_j\le|y|<r_{j+1}}|K(y)|^{q'}dy\bigg)^{1/q'}\|S_jf_\tau'-S_{j+1}f_\tau'\|_q.\end{aligned}\label{la3} \ee Using \eqref{A1}, \eqref{A4} and the inequality $(a+b)^\beta\le a^\beta}\def\ab{{\overline\a}+\beta}\def\ab{{\overline\a} 2^{\beta}\def\ab{{\overline\a}-1}(a^{\beta}\def\ab{{\overline\a}-1}b+b^\beta}\def\ab{{\overline\a})$ for $\beta}\def\ab{{\overline\a}>1$ (see Adams [A1, inequality (17)] or use mean value theorem) we get \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} |K(y)|^{q'}\le |g(y^*)|^{q'}|y|^{-n}+C\min\{|y|^{-n+\delta_1}, |y|^{-n-\delta_2}\} \label{la4}\ee for some $C>0,\ C=C(n,\alpha,H_1,H_2,B,\delta_1,\delta_2)$. 
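Before continuing, it may help to record where the leading constant in the next display comes from; this is only an unpacking of the computation, writing $y^*=y/|y|$ as in the kernel assumptions and recalling that $r_{j+1}/r_j=e^{q/n}$. Integrating the first term of \eqref{la4} over the annulus in polar coordinates gives
\begin{equation}
\int_{r_j\le|y|<r_{j+1}}|g(y^*)|^{q'}|y|^{-n}dy=\bigg(\int_{S^{n-1}}|g(\omega)|^{q'}d\omega\bigg)\log\frac{r_{j+1}}{r_j}=\frac qn\int_{S^{n-1}}|g(\omega)|^{q'}d\omega,\notag
\end{equation}
and, assuming (as is usual in this setting) that $A_g$ denotes $\frac1n\int_{S^{n-1}}|g(\omega)|^{q'}d\omega$, this is exactly the term $qA_g$ appearing below.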
Since $r_{j+1}=e^{\frac qn}r_j$,
\begin{equation}
\begin{aligned}
{T}\big(S_jf_\tau'-S_{j+1}f_\tau'\big)(x_1)&\le \big(qA_g+C\min\{r_j^{\delta_1}, r_j^{-\delta_2}\}\big)^{1/q'}\alpha_j\cr
&\le \left(q^{\frac1{q'}}A_g^{\frac1{q'}}+C\min\{r_j^{\delta_1/q'}, r_j^{-\delta_2/q'}\}\right)\alpha_j.
\end{aligned}
\label{la5}
\end{equation}
Using \eqref{la5}, we then get
\begin{equation}
\begin{aligned}
M(\tau,x_1)&\leq \sum_{j=0}^J\left(q^{\frac1{q'}}A_g^{\frac1{q'}}+C\min\{r_j^{\delta_1/q'}, r_j^{-\delta_2/q'}\}\right)\alpha_j+{T}S_{J+1}f_\tau'(x_1)\cr
&\le q^{\frac1{q'}}A_g^{\frac1{q'}} \sum_{j=0}^{J}\alpha_j+ C\overline{\alpha}_J \sum_{j=0}^\infty \min\{r_j^{\delta_1/q'}, r_j^{-\delta_2/q'}\}+ TS_{J+1}f_\tau'(x_1)\cr
&\le q^{\frac1{q'}}A_g^{\frac1{q'}} \sum_{j=0}^{J}\alpha_j+ C\overline{\alpha}_J \sum_{j=0}^\infty \big(e^{-\frac{q}{n}\frac{\delta_1}{q'}j}+e^{-\frac{q}{n}\frac{\delta_2}{q'}j}\big)+ TS_{J+1}f_\tau'(x_1)\cr
&= q^{\frac1{q'}}A_g^{\frac1{q'}} \sum_{j=0}^{J}\alpha_j+ C\overline{\alpha}_J + TS_{J+1}f_\tau'(x_1).
\end{aligned}
\label{la1}
\end{equation}
\par Note that \eqref{la9} and \eqref{la1} are true for any $J\in\{0,1,...,N\}$. The main task now is to prove that there exists $J\in\{0,1,...,N\}$ such that
\begin{equation}
TS_{J+1}f_\tau'(x_1)\le C\overline{\alpha}_J+C\beta_J.
\label{clcl2}
\end{equation}
This will be effected by a double stopping time argument, which will simultaneously yield \eqref{cl2} in Proposition \ref{cl}.\par
Recall that $N$ is an integer such that ${\rm supp}\, f\subseteq D_N$. Let $J_1\in\{1,...,N\}$ be such that
\begin{numcases}{}
\beta_{j+1}^q\le \left(\avint_{D_{j+1}\setminus D_{j}} |Tf(x)|dx\right)^q & {\text{for }} $j=0,...,J_1-1$ \label{j1a}\\
\beta_{J_1+1}^q>\left(\avint_{D_{J_1+1}\setminus D_{J_1}} |Tf(x)|dx\right)^q. \label{j1b}
\end{numcases}
If condition \eqref{j1a} is never satisfied we let $J_1=0$, and if \eqref{j1b} is never satisfied we let $J_1=N+1$. Next, let $J_2\in\{1,...,N\}$ be such that
\begin{numcases}{}
{T}S_{j+1}f_\tau'(x_1)\geq \left(\frac{e^{q-1}+1}{2e^{q-1}} \right) {T}S_{j}f_\tau'(x_1) & {\text{for }} $j=0,...,J_2-1$ \label{j2a}\\
{T}S_{J_2+1}f_\tau'(x_1)< \left(\frac{e^{q-1}+1}{2e^{q-1}} \right) {T}S_{J_2}f_\tau'(x_1). \label{j2b}
\end{numcases}
As in the definition of $J_1$, we let $J_2=0$ if condition \eqref{j2a} is never satisfied, and let $J_2=N+1$ if \eqref{j2b} is never satisfied. \par
We will first prove \eqref{clcl2}, and hence \eqref{cl1}, in three cases depending on $J_1,J_2$, and then we will show that \eqref{cl2} holds with the chosen $J$ in each case.
\bigskip
\noindent{\bf \underline{Case 1:}} $J_2\leq J_1\leq N+1$ and $J_2\ne N+1$.
\bigskip
\noindent{\bf \underline{Case 2:}} $J_2\geq J_1+1$.
\bigskip
\noindent{\bf \underline{Case 3:}} $J_1=J_2=N+1$.
\bigskip \noindent {\underline{\emph{Proof of \eqref{clcl2} in the case $J_2\leq J_1\leq N+1$ and $J_2\ne N+1$:}}} \bigskip In this case, by \eqref{j2b} we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} {T}S_{J_2+1}f_\tau'(x_1)&< \left(\frac{e^{q-1}+1}{2e^{q-1}} \right){T}S_{J_2}f_\tau'(x_1) \cr &= \left(\frac{e^{q-1}+1}{2e^{q-1}} \right) \big({T}S_{J_2+1}f_\tau'(x_1)+{T}(S_{J_2}-S_{J_2+1})f_\tau'(x_1)\big)\cr &\leq \left(\frac{e^{q-1}+1}{2e^{q-1}} \right) \big({T}S_{J_2+1}f_\tau'(x_1)+\big|{T}(S_{J_2}-S_{J_2+1})f_\tau'(x_1)\big|\big)\cr &\le \left(\frac{e^{q-1}+1}{2e^{q-1}} \right) \big({T}S_{J_2+1}f_\tau'(x_1)+C\alpha_{J_2}\big)\cr \end{aligned}\label{cl3}\ee where the last inequality is by \eqref{la5}. So we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} {T}S_{J_2+1}f_\tau'(x_1)<C\left(\frac{e^{q-1}+1}{2e^{q-1}} \right)\alpha_{J_2}=C\alpha_{J_2}.\label{cl4}\ee Hence by taking $J=J_2$ in \eqref{la1}, we obtain \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} q^{-\frac 1{q'}}A_g^{-\frac1 {q'}} (W(\tau,x_2)+M(\tau,x_1))&\le\sum_{j=0}^{J_2}\alpha_j+C\ab_{J_2}+C\beta}\def\ab{{\overline\a}_{J_2}+{T}S_{J_2+1}f_\tau'(x_1)\cr &\le\sum_{j=0}^{J_2}\alpha_j+C\ab_{J_2}+C\beta}\def\ab{{\overline\a}_{J_2}+C\alpha_{J_2}\cr &\le \sum_{j=0}^{J_2}\alpha_j+C\ab_{J_2}+C\beta}\def\ab{{\overline\a}_{J_2}. \end{aligned}\label{cl5}\ee Therefore we get \eqref{cl1} with $J=J_2$ and $C_2=C$ in the above inequality.\\ \noindent {\underline{\emph{Proof of \eqref{clcl2} in the case $J_2\geq J_1+1$:}}} \bigskip We will need the following lemma to handle this case. Let us state it here, and its proof will be postponed to the Appendix. \begin{lemma}\label{lb} There is a constant $C_1=C_1(n,\alpha,K)$ such that for any $J\le N-1$ \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \avint_{D_{J+1}\setminus D_{J}} |Tf_\tau(x)|dx\leq C_1\left(\frac{1}{e^{q-1}}\right)^J,\label{lb1}\ee \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \bigg| \avint_{D_{J+1}\setminus D_{J}} {T}S_{J+2}f_\tau'(x)dx- {T}S_{J+1}f_\tau'(x_1)\bigg|\le C_1\beta}\def\ab{{\overline\a}_{J+1},\label{lc1}\ee \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \bigg|\avint_{D_{J+1}\setminus D_{J}} {T}(S_0-S_{J+2})f_\tau'(x)dx\bigg| \le C_1\ab_{J+1},\label{ld1}\ee and \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \bigg|\avint_{D_{J+1}\setminus D_{J}} T(f_\tau'\chi_{D_0})(x)dx\bigg| \le C_1\alpha_{-1}.\label{ld2}\ee \end{lemma} % Assuming Lemma \ref{lb}, let us first make a reduction. Recall that $0<\tau\le t_0.$ We will assume that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} M(\tau,x_1)\geq\max\{4C_1,1\}. \label{rdt1}\ee where $C_1$ is the constant which is defined in Lemma \ref{lb}. 
If the above is not true, then we have that $M(\tau,x_1)\le C$, and on the other hand, by \eqref{la7} \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} W(\tau,x_2)\le C||f_\tau'\chi^{}_{B(x_2,r(\tau))}||_q\le C. \end{aligned} \label{la777}\ee Therefore, $W(\tau,x_2)+M(\tau,x_1)\le C$, and hence \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \frac{\exp\bigg[\dfrac{(1-\epsilon_\tau)^{-\frac{\alpha}{n-\alpha}}}{A_g}(W(\tau,x_2)+M(\tau,x_1))^{\frac{n}{n-\alpha}}\bigg]}{1+(W(\tau,x_2)+M(\tau,x_1))^{\frac{n}{n-\alpha}}}\leq C= C\frac{\tau}{\tau}\le C\frac{||Tf||_q^q}{\tau}\label{rdt2}\ee which is \eqref{le1aaa}, and the last inequality is by \eqref{c2f}.\\ By \eqref{lc1} in Lemma \ref{lb} and recalling that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} f=f_\tau+f_\tau'=f_\tau+f_\tau'\chi^{}_{D_0}+f_\tau'\chi^{}_{D_{J_1+2}\setminus D_0}+f_\tau'\chi^{}_{D_{J_1+2}^c},\nt\ee we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} & {T}S_{J_1+1}f_\tau'(x_1)\leq \avint_{D_{J_1+1}\setminus D_{J_1}} {T}S_{J_1+2}f_\tau'(x)dx+C\beta}\def\ab{{\overline\a}_{J_1+1}\cr &\leq \bigg|\avint_{D_{J_1+1}\setminus D_{J_1}} {T}S_{J_1+2}f_\tau'(x)dx\bigg|+C\beta}\def\ab{{\overline\a}_{J_1+1}\cr &=\bigg|\avint_{D_{J_1+1}\setminus D_{J_1}}\big(Tf-Tf_\tau-T(f_\tau'\chi^{}_{D_0}) -T(S_0-S_{J_1+2})f_\tau'\big)(x)dx\bigg|+C\beta}\def\ab{{\overline\a}_{J_1+1}\cr &\le\bigg|\avint_{D_{J_1+1}\setminus D_{J_1}} Tf(x)dx\bigg|+\bigg|\avint_{D_{J_1+1}\setminus D_{J_1}} Tf_\tau(x)dx\bigg|+\bigg|\avint_{D_{J_1+1}\setminus D_{J_1}} T(f_\tau '\chi^{}_{D_0})(x)dx\bigg|\cr&+\bigg|\avint_{D_{J_1+1}\setminus D_{J_1}} T(S_0-S_{J_1+2})f_\tau'(x)dx\bigg|+C\beta}\def\ab{{\overline\a}_{J_1+1}\cr &\le \avint_{D_{J_1+1}\setminus D_{J_1}} |Tf(x)|dx+\avint_{D_{J_1+1}\setminus D_{J_1}} |Tf_\tau(x)|dx+C\alpha_{-1}+C\ab_{J_1+1}+C\beta}\def\ab{{\overline\a}_{J_1+1} \end{aligned}\label{cl8}\ee where the second last inequality is by Lemma \ref{lb} \eqref{ld1},\eqref{ld2}. To estimate the second integral, note first that by reduction \eqref{rdt1} we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} M(\tau,x_1)={T}S_0f_\tau'(x_1)\geq 4C_1. \label{cl7}\ee Using \eqref{lb1} in Lemma \ref{lb}, and condition \eqref{j2a}, we get \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned}\avint_{D_{J_1+1}\setminus D_{J_1}} |Tf_\tau(x)|dx&\leq C_1\left(\frac{1}{e^{q-1}}\right)^{J_1} \leq \frac{1}{4}\left(\frac{1}{e^{q-1}}\right)^{J_1}{T}S_0f_\tau'(x_1)\cr&\le \frac{1}{4}\left(\frac{1}{e^{q-1}}\right)^{J_1}\left(\frac{2e^{q-1}}{e^{q-1}+1}\right)^{J_1+1}{T}S_{J_1+1}f_\tau'(x_1)\cr &= \frac{1}{4}\left(\frac{2}{e^{q-1}+1}\right)^{J_1}\left(\frac{2e^{q-1}}{e^{q-1}+1}\right){T}S_{J_1+1}f_\tau'(x_1)\le \frac{1}{2}{T}S_{J_1+1}f_\tau'(x_1). \end{aligned}\label{cl7}\ee Hence we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} {T}S_{J_1+1}f_\tau'(x_1)\leq \avint_{D_{J_1+1}\setminus D_{J_1}} |Tf(x)|dx+\frac{1}{2}{T}S_{J_1+1}f_\tau'(x_1)+C\ab_{J_1+1}+C\beta}\def\ab{{\overline\a}_{J_1+1}. 
\end{aligned}\nt\ee So the above inequality along with the condition \eqref{j1b} give us \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} {T}S_{J_1+1}f_\tau'(x_1)&\leq 2\avint_{D_{J_1+1}\setminus D_{J_1}} |Tf(x)|dx+2C\ab_{J_1+1}+2C\beta}\def\ab{{\overline\a}_{J_1+1}\cr &\le 2C\ab_{J_1+1}+(2C+2)\beta}\def\ab{{\overline\a}_{J_1+1}.\end{aligned}\label{cl9}\ee By taking $J=J_1$ in \eqref{la1}, we get \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} q^{-\frac 1{q'}}A_g^{-\frac1 {q'}} (W(\tau,x_2)+M(\tau,x_1))&\le\sum_{j=0}^{J_1}\alpha_j+C\ab_{J_1}+C\beta}\def\ab{{\overline\a}_{J_1}+{T}S_{J_1+1}f_\tau'(x_1)\cr &\le\sum_{j=0}^{J_1}\alpha_j+C\ab_{J_1+1}+C\beta}\def\ab{{\overline\a}_{J_1+1}\cr&\le \sum_{j=0}^{J_1}\alpha_j+C\ab_{J_1}+C\beta}\def\ab{{\overline\a}_{J_1} \end{aligned}\label{cl10}\ee where the last inequality is by the fact that $\ab_{J+1}\le\ab_J+\alpha_{J+1}\le \ab_J+\beta}\def\ab{{\overline\a}_{J+1}\le \ab_J+\beta}\def\ab{{\overline\a}_J$. Therefore we get \eqref{cl1} with $J=J_1$. \bigskip \noindent {\underline{\emph{Proof of \eqref{clcl2} in the case $J_1=J_2=N+1$:}}} \bigskip In this case, we will simply write the entire series, that is, we will take $J=N$. Since ${T}S_{N+1}f_\tau'(x_1)=0$ we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned}\begin{aligned} q^{-\frac 1{q'}}A_g^{-\frac1 {q'}} (W(\tau,x_2)+M(\tau,x_1))&\le\sum_{j=0}^{N}\alpha_j+C\ab_{N}+C\beta}\def\ab{{\overline\a}_{N}+{T}S_{N+1}f_\tau'(x_1)\cr &\le \sum_{j=0}^{N}\alpha_j+C\ab_{N}.\end{aligned}\label{cl12}\ee % To check \eqref{cl2}, note that we take $J=J_2$ in case 1, $J=J_1$ in case 2 and $J=N$ in case 3. Assume first that $J_1\neq 0$. Then in all the cases we have that \eqref{j1a} is true for all $j\leq J_1-1$, so \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} \sum_{j=0}^{J} \alpha_j^qe^{qj}+\beta}\def\ab{{\overline\a}_{J}^qe^{qJ}&\leq 3e^{2q}\sum_{j=0}^{J_1-1} \beta}\def\ab{{\overline\a}_j^qe^{qj}\le C\sum_{j=0}^{J_1-1} \left(\avint_{D_{j+1}\setminus D_{j}} |Tf(x)|dx\right)^qe^{qj}\cr &\le C\sum_{j=0}^{J_1-1} \left(\frac{1}{r_j^n} \int_{D_{j+1}\setminus D_{j}} |Tf(x)|dx\right)^qe^{qj}\cr &\le C\sum_{j=0}^{J_1-1} r_j^{-qn}\left( \int_{D_{j+1}\setminus D_{j}} |Tf(x)|^qdx\right)\left( \int_{D_{j+1}\setminus D_{j}} dx\right)^{q/q'}e^{qj}\cr &=C\sum_{j=0}^{J_1-1} r_j^{-qn}r_j^{(n-\alpha)q}\left( \int_{D_{j+1}\setminus D_{j}} |Tf(x)|^qdx\right)e^{qj}\cr &\le C\sum_{j=0}^{J_1-1}r_j^{-n}e^{qj}\int_{D_{j+1}\setminus D_{j}} |Tf(x)|^qdx\le \frac{C}{\tau}||Tf||_q^q \end{aligned}\label{cl6}\ee where in the last inequality we used the fact that $r_j^n=r_0^ne^{qj}$ and $\tau=|B(x_1,r_0)|$, so $\tau=Cr_0^n$.\\ If $J_1=0$, then we just need to check \eqref{cl2} for $J=0$: \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \alpha_0^q+\beta}\def\ab{{\overline\a}_{0}^q\leq 2\beta}\def\ab{{\overline\a}_0^q\le C=C\frac{\tau}{\tau}\le \frac{C}{\tau}||Tf||_q^q, \label{cl666}\ee where the last inequality is by \eqref{c2f}. Proposition \ref{cl} is proved. 
\hfill{\setlength{\fboxsep}{0pt}\setlength{\fboxrule}{0.2pt}\fbox{\rule[0pt]{0pt}{1.3ex}\rule[0pt]{1.3ex}{0pt}}} \\ \clearpage \section{Appendix: Proof of Lemma \ref{lb}} \vskip 0.2in \noindent{\bf Proof of \eqref{lb1}:} Using \eqref{A2} we get \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} \avint_{D_{J+1}\setminus D_{J}} |Tf_\tau(x)|dx &\leq \frac{C}{|D_{J+1}\setminus D_J|}\int _{\mathbb{R}^n}|f_\tau(y)|\int_{D_{J+1}\setminus D_J}|x-y|^{\alpha-n}dxdy\cr&\leq \frac{C}{r_J^n}\int _{\mathbb{R}^n}|f_\tau(y)|r_J^\alpha dy. \end{aligned}\label{lb2}\ee Here the second inequality above is by the straightforward computation: \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} &\int_{D_{J+1}\setminus D_J}|x-y|^{\alpha-n}dx\cr &=\int_{\{|x-y|\leq r_{J}\}\cap (D_{J+1}\setminus D_J) }|x-y|^{\alpha-n}dx+\int_{\{|x-y|> r_{J}\}\cap (D_{J+1}\setminus D_J)}|x-y|^{\alpha-n}dx\cr &\le \int_{\{|x|\leq r_{J}\}}|x|^{\alpha-n}dx+\int_{D_{J+1}\setminus D_J}r_J^{\alpha-n}dx\le Cr_J^\alpha+Cr_J^{\alpha-n}r_J^n=Cr_J^\alpha. \end{aligned}\label{lb222}\ee Recalling that $|F_\tau|=\tau=|D_0|$, we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} &\frac{C}{r_J^n}\int _{\mathbb{R}^n}|f_\tau(y)|r_J^\alpha dy=Cr_J^{\alpha-n}\int _{\mathbb{R}^n}|f_\tau(y)|dy=Cr_J^{\alpha-n}\int _{\mathbb{R}^n}|f|\chi^{}_{F_\tau}dy\cr &\le Cr_J^{\alpha-n}|F_\tau|^{1/q'} ||f||_q\leq Cr_J^{\alpha-n}r_0^{n-\alpha}=C_1\left(\frac{r_0}{r_J}\right)^{n-\alpha}=C_1\left(\frac{1}{e^{q-1}}\right)^J.\end{aligned}\label{lb3}\ee \\ \noindent{\bf Proof of \eqref{lc1}:} First write \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} {T}S_{J+2}f_\tau'(x)-{T}S_{J+1}f_\tau'(x_1)={T}S_{J+2}f_\tau'(x)-{T}S_{J+2}f_\tau'(x_1)-{T}\big(S_{J+1}-S_{J+2})f_\tau'(x_1)\nt\ee so \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} &| {T}S_{J+2}f_\tau'(x)- {T}S_{J+1}f_\tau'(x_1)|\cr &\le|{T}S_{J+2}f_\tau'(x)-{T}S_{J+2}f_\tau'(x_1)|+|{T}\big(S_{J+1}-S_{J+2})f_\tau'(x_1)|. \end{aligned}\label{lc2}\ee Using a similar argument as in \eqref{la3}, we get \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} |{T}\big(S_{J+1}-S_{J+2})f_\tau'(x_1)|&=\bigg|\int_{D_{J+2}\setminus D_{J+1}}K(x_1-y)f_\tau'(y)dy\bigg|\cr &\leq C\bigg|\int_{D_{J+2}\setminus D_{J+1}}|x_1-y|^{\alpha-n}f_\tau'(y)dy\bigg|\cr&\leq Cr_{J+1}^{\alpha-n}\int_{D_{J+2}\setminus D_{J+1}}|f_\tau'(y)|dy\le Cr_{J+1}^{\alpha-n}r_{J+1}^{n-\alpha}\alpha_{J+1}\le C\beta}\def\ab{{\overline\a}_{J+1}.
\end{aligned}\label{lc3}\ee Next, by the regularity assumption \eqref{A3}, since $x_1\in D_0$, we have for $x\in D_{J+1}\setminus D_{J}$ and $y\in D^c_{J+2}$ \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} |K(x-y)-K(x_1-y)|\leq C|x-x_1|(e^{q/n})^{n+1-\alpha}|x_1-y|^{\alpha-n-1}.\nt\ee Hence \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} |{T}S_{J+2}f_\tau'(x)-{T}S_{J+2}f_\tau'(x_1)| & =\bigg|\int_{D^c_{J+2}}(K(x-y)-K(x_1-y))f_\tau'(y)dy\bigg|\cr &\leq C|x-x_1|\int_{D^c_{J+2}}|f_\tau'(y)||x_1-y|^{\alpha-n-1}dy\cr &\leq C|x-x_1|\beta}\def\ab{{\overline\a}_{J+2}\left(\int_{D^c_{J+2}}|x_1-y|^{-n-\frac{n}{n-\alpha}}dy \right)^{\frac{n-\alpha}{n}}\cr &\leq Cr_{J+1}\beta}\def\ab{{\overline\a}_{J+2}\frac{C}{r_{J+2}}\leq C\beta}\def\ab{{\overline\a}_{J+2}\le C\beta}\def\ab{{\overline\a}_{J+1}.\end{aligned}\label{lc4}\ee So we have \eqref{lc1} by \eqref{lc2}-\eqref{lc4}. \bigskip \noindent {\bf Proof of \eqref{ld1} and \eqref{ld2}:} Let $j\in\{0,1,...,J\}$, then \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned}\bigg|\avint_{D_{J+1}\setminus D_{J}} T(S_j-S_{j+1})f_\tau'(x)dx\bigg|&=\bigg|\avint_{D_{J+1}\setminus D_{J}} \int_{D_{j+1}\setminus D_{j}} K(x-y)f_\tau'(y)dydx\bigg|\cr &\le C\avint_{D_{J+1}\setminus D_{J}} \int_{D_{j+1}\setminus D_{j}} |x-y|^{\alpha-n}|f_\tau'(y)|dydx\cr &=C\int_{D_{j+1}\setminus D_{j}} |f_\tau'(y)|\avint_{D_{J+1}\setminus D_{J}} |x-y|^{\alpha-n}dxdy\cr &\le Cr_J^{\alpha-n}\int_{D_{j+1}\setminus D_{j}} |f_\tau'(y)|dy\le Cr_J^{\alpha-n} r_j^{n-\alpha}\alpha_j\cr&=Ce^{\frac{q}{n}(j-J)}\alpha_j \end{aligned} \label{ld3}\ee where the second inequality is by \eqref{lb222}. Therefore, \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} \bigg|\avint_{D_{J+1}\setminus D_{J}}{T}(S_0-S_{J+2})f_\tau'(x)dx\bigg|&=\bigg|\avint_{D_{J+1}\setminus D_{J}} \left(\sum_{j=0}^{J+1}T(S_j-S_{j+1})f_\tau'(x)\right)dx\bigg|\cr &\le \sum_{j=0}^{J+1}\bigg|\avint_{D_{J+1}\setminus D_{J}} T(S_j-S_{j+1})f_\tau'(x)dx\bigg|\cr &\le C\sum_{j=0}^{J+1} e^{\frac{q}{n}(j-J)}\alpha_j\le C\ab_{J+1}. \end{aligned} \label{ld4}\ee So we have \eqref{ld1}. For \eqref{ld2}, by similar calculations as in \eqref{ld3}, we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} \bigg|\avint_{D_{J+1}\setminus D_{J}} T(f_\tau'\chi_{D_0})(x)dx\bigg| &=\bigg|\avint_{D_{J+1}\setminus D_{J}} \int_{D_0} K(x-y)f_\tau'(y)dydx\bigg|\cr &\le C\avint_{D_{J+1}\setminus D_{J}} \int_{D_0} |x-y|^{\alpha-n}|f_\tau'(y)|dydx\cr &=C\int_{D_0}|f_\tau'(y)|\avint_{D_{j+1}\setminus D_{j}} |x-y|^{\alpha-n}dxdy\cr &\le Cr_J^{\alpha-n}\int_{D_0}|f_\tau'(y)|dy\le Cr_J^{\alpha-n} r_0^{n-\alpha}\alpha_{-1}\cr &\le C\alpha_{-1}. \end{aligned} \label{ld5}\ee \hfill\QEDopen \section{Improved O’Neil Lemma, O’Neil functional and Adams inequality}\label{ON} \ \ \ Suppose that $(M,\mu)$ and $(N,\nu)$ are measure spaces. 
Given a measurable function $f\ :\ M\rightarrow \ [-\infty,\infty]$ its distribution function will be denoted by \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} m_f(s)=\mu(\{x\in M:\ |f(x)|>s\}),\ \ \ s\geq 0.\nt\ee Assume that the distribution function of $f$ is finite for $s\geq 0$. The decreasing rearrangement of $f$ will be denoted by \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} f^*(t)=\inf\{s\geq0:\ m_f(s)\leq t\},\ \ \ t>0\nt\ee and we define \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} f^{**}(t)=\frac{1}{t}\int_0^tf^*(u)du,\ \ \ t>0\nt\ee which is sometimes called the \emph{maximal function} of $f^*$. \par Given a $\nu\times\mu$-measurable function $k: N\times M\rightarrow [-\infty,\infty]$, assume that the level sets of $k(x,\cdot)$ and $k(\cdot,y)$ have finite measure for all $x\in N$ and all $y\in M$. Let $k_1^*(x,t)$ and $k_2^*(y,t)$ be the decreasing rearrangement of $k(x,y)$ with respect to the variable $y$(resp. $x$) for fixed $x$(resp. $y$), and define \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} &k_1^*(t)=\sup_{x\in N}k_1^*(x,t)\cr &k_2^*(t)=\sup_{y\in M}k_2^*(y,t). \end{aligned} \nt\ee Lastly, let $T$ be an integral operator defined as \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} Tf(x)=\int_M k(x,y)f(y) d\mu(y). \ee One of the main tools used in the proof is the following version of O'Neil lemma, whose proof was given in ([Q, Section 1.2.3]). Its proof is based on the proof of lemma 9 in [FM3] with some small modifications. \begin{lemma}\label{l}\textup{\textbf{(Improved O'Neil lemma)}}\\ Let $k: N\times M\rightarrow [-\infty,\infty]$ be measurable and \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} k_1^*(t)\leq Dt^{-\frac{1}{q'}},\ \ \ \ k_2^*(t)\leq Bt^{-\frac{1}{q'}},\ \ \ t>0\label{l1}\ee with $q'>1$. Let $f(x,y):\ N\times M\to\mathbb{R}$ be a measurable function on $N\times M$. For each $x\in N$, let $f_x:\ M\to\mathbb{R}$ be defined as $f_x(y)=f(x,y)$ on $M$ and assume $f_x\in L^q(M)$. $(q')^{-1}+q^{-1}=1$. Suppose there is a measurable function $\overline{f}:M\rightarrow [-\infty, \infty]$ , $\overline{f}\in L^q(M)$ such that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \text{For all}\ x\in N,\ \ \overline{f}(y)\geq |f_x(y)| \ \ \mu - a.e.\ y\in M\label{l2}\ee Let \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} T'f(x)=Tf_x(x)=\int_M k(x,y)f_x(y) d\mu(y), \label{l3}\ee then there is a constant $C_0=C_0(D,B,q)$ such that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} (T'f)^{**}(t)\leq C_0\max\{\tau^{-\frac{1}{q'}},t^{-\frac{1}{q'}}\}\int_0^{\tau} \overline{f}^*(u)du +\supess_{x\in N} \int_\tau^\infty f_x^*(u)k_1^*(x,u)du,\ \ \ \forall t,\tau>0. \label{l4}\ee \end{lemma} Note that in this Lemma we consider the rearrangement of $Tf_x(x)$, where the function $f_x$ depends on $x$. 
This makes it different from the other improved O'Neil lemma in [FM3], which estimated the rearrangement of $Tf$ for a fixed function $f$.\\ In order to apply the above Lemma \ref{l} to the proof of our main theorem, we also need the following lemma regarding the rearrangement of the sum of two functions whose supports are mutually disjoint. \bigskip \begin{lemma}\label{l0} Let $f_1,f_2:\ M\rightarrow [-\infty,\infty]$ be measurable functions. Suppose that the supports of $f_1$ and $f_2$ are mutually disjoint, $\mu(\textup{supp}f_1)=z$ and \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} |f_1|\geq ||f_2||_{\infty} \ \ \ \mu-a.e.\ x\in \{x: f_1(x)\ne 0\}\nt\ee Then we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} (f_1+f_2)^*(u)=\begin{cases}f_1^*(u) &\text{if} \;\; 0<u< z\\ f_2^*(u-z)& \text{if} \;\;u> z\end{cases},\label{l0a} \ee and \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} (f_1+f_2)^{**}(u)=f_1^{**}(u)\quad \text{for}\ 0<u\le z.\label{I0a1}\ee \end{lemma} \noindent{\bf Proof of lemma \ref{l0}:} Given the assumptions on $f_1,f_2$ we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} m_{f_1+f_2}(s)=m_{f_1}(s)+m_{f_2}(s)\label{I0b1}\ee and \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{cases}m_{f_2}(s)=0 & \text{if}\; m_{f_1}(s)<z\\ m_{f_1}(s)=z & \text{if} \; m_{f_2}(s)>0.\end{cases}\label{l0b} \ee For $u<z$, by \eqref{I0b1}, \eqref{l0b} we have $m_{f_1+f_2}(s)\le u$ if $m_{f_1}(s)\le u$. It is also clear that $m_{f_1}(s)\le u$ whenever $m_{f_1+f_2}(s)\le u$. So $m_{f_1}(s)\leq u$ if and only if $m_{f_1+f_2}(s)\leq u$. We get \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} (f_1+f_2)^*(u)=\inf\{s\geq 0:m_{f_1+f_2}(s)\le u\}=\inf\{s\geq 0: m_{f_1}(s)\le u\}=f_1^*(u). \end{aligned} \nt\ee Let $u>z$. If $m_{f_2}(s)\le u-z$ then $m_{f_1+f_2}(s)\le u$. We will show that $m_{f_1+f_2}(s)\le u$ implies $m_{f_2}(s)\le u-z$. Suppose there exists $s\geq 0$ such that $m_{f_1+f_2}(s)\le u$ and $m_{f_2}(s)>0$. Then by \eqref{l0b} we have $m_{f_1}(s)=z$ and hence \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} m_{f_2}(s)=m_{f_1+f_2}(s)-m_{f_1}(s)\le u-z.\nt\ee Therefore, \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} (f_1+f_2)^*(u)=\inf\{s\geq 0: m_{f_1+f_2}(s)\le u\}=\inf\{s\geq 0: m_{f_2}(s)\le u-z\}=f_2^*(u-z). \end{aligned} \nt\ee Lastly, \eqref{I0a1} holds by \eqref{l0a} and the definition of the maximal function. \hspace*{\fill} {\setlength{\fboxsep}{0pt}\setlength{\fboxrule}{0.2pt}\fbox{\rule[0pt]{0pt}{1.3ex}\rule[0pt]{1.3ex}{0pt}}}\\ It is clear that as a consequence of Lemma \ref{l}, if $f_x=f$ for all $x\in N$, the O'Neil estimate takes the following form: \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} (Tf)^{**}(t)\le C_0t^{-\frac{1}{q'}} \int_0^{t}{f}^*(u)du+\int_t^{\infty}k_1^*(u){f}^*(u)du. 
\nt\ee Under the hypothesis that $m(f,s)<\infty$ for $s\geq 0$, we are able to define the O'Neil functional $U$ as follows: \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} Uf(t)=C_0t^{-\frac{1}{q'}} \int_0^{t}{f}^*(u)du+\int_t^{\infty}k_1^*(u){f}^*(u)du \nt\ee where $C_0$ is the constant in the improved O'Neil lemma \ref{l}. \par\medskip We now state an Adams inequality due to Fontana and Morpurgo ([FM3, Corollary 2]) in terms of the O'Neil functional. Although the original theorem in their paper is not stated in this form, it is clear from the proof in [FM3] that everything also works for the O'Neil functional instead of the original operator. This result plays a crucial role in the proof of our main result. \bigskip \noindent \textup{\textbf{Theorem A ([FM3, Corollary 2])}} \emph{Suppose $\nu(N)<\infty$, $\mu(M)<\infty$, and that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} k_1^*(t)\leq A^{\frac{1}{q'}}t^{-\frac{1}{q'}}\big(1+H(1+|\log t|)^{-\gamma}\big),\ \ \ 0<t\leq\mu(M)\label{t0a}\ee \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} k_2^*(t)\leq Bt^{-\frac{1}{q'}}.\ \ \ 0<t\leq \nu(N)\label{t0b}\ee Then there exists a constant $C=C(q,\gamma,A,B,H)$ such that for each $f\in L^q(M)$ with $||f||_q\leq 1$ \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \int_0^{\nu(N)} \exp\bigg[\frac{1}{A}\big(Uf(t)\big)^{q'}\bigg]dt\leq C\big(\nu(N)+\mu(M)\big).\label{t0c}\ee} \section{Proof of Corollary \ref{T3}} Assume that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned}||f||_{n/\alpha}\leq 1.\label{T3a0}\ee Let $q=n/\alpha.$ It is enough to show that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \int_{\{|Tf|\geq 1\}}\exp\bigg[\frac{\theta}{A_g}|Tf(x)|^{\frac{n}{n-\alpha}}\bigg]dx\leq \frac{C}{1-\theta}||Tf||_{q}^{q} \label{T3a}\ee since \eqref{3c} is then a direct consequence of the exponential regularization Lemma A. To show \eqref{T3a}, write \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} &\exp\bigg[\frac{\theta}{A_g}|Tf(x)|^{\frac{n}{n-\alpha}}\bigg]\cr &=\frac{\exp\big[\frac{1}{A_g}|Tf(x)|^{\frac{n}{n-\alpha}}\big]}{1+|Tf|^{\frac{n}{n-\alpha}}}\frac{1+|Tf|^{\frac{n}{n-\alpha}}}{\exp\big[\frac{1-\theta}{A_g}|Tf(x)|^{\frac{n}{n-\alpha}}\big]}. \end{aligned} \label{T3c}\ee Observe that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \frac{1+y}{e^{(1-\theta)y/{A_g}}} \leq \frac{C}{1-\theta},\ \ \ \text{for} \ y\geq 0\nt\ee So by Theorem \ref{T1}, \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned}\begin{aligned} \int_{\{|Tf|\geq 1\}}\exp\bigg[\frac{\theta}{A_g}|Tf(x)|^{\frac{n}{n-\alpha}}\bigg]dx &\le \frac{C}{1-\theta} \int_{\{|Tf|\geq 1\}}\frac{\exp\bigg[\frac{1}{A_g}|Tf(x)|^{\frac{n}{n-\alpha}}\bigg]}{1+|Tf|^{\frac{n}{n-\alpha}}}dx\cr &\le \frac{C}{1-\theta}||Tf||_q^q. 
\end{aligned} \label{T3d}\ee Obviously \eqref{T3d} also follows under the more restrictive condition \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned}||f||^{pn/\alpha}_{n/\alpha}+||Tf||^{pn/\alpha}_{n/\alpha}\leq 1,\ \ \ p<\infty.\nt\ee The proof of sharpness is the same as in [FM2]. We use the family of functions $\psi_{\epsilon,r}$ in section \ref{sharpness}, and choose \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} r^{n}=\frac{A_g}{2C_4}\bigg(\log{\frac{1}{\epsilon^n}}\bigg).\nt\ee \section{Proof that Theorem \ref{T1} implies \eqref{4c}}\label{rufinq} It is enough to show that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \int_{\{|Tf|\geq 1\}}\exp\bigg[\frac{1}{A_g}|Tf(x)|^{\frac{n}{n-\alpha}}\bigg]dx\leq C \label{T4a}\ee under the Ruf condition \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} ||f||^{n/\alpha}_{n/\alpha}+||Tf||^{n/\alpha}_{n/\alpha}\leq 1.\nt\ee Let $\tau=||Tf||_q^q$. Clearly we can assume that $\tau\in(0,1)$. We consider two cases: \bigskip \underline{\bf Case 1:} $\tau\geq 1-({2}/{3})^{q-1}.$ \bigskip \underline{\bf Case 2:} $\tau<1-({2}/{3})^{q-1}.$ \bigskip \underline{\emph{Proof of \eqref{T4a} in case 1:}} In this case, \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} ||f||_q^q\leq 1-\tau\le \big(\frac{2}{3}\big)^{q-1},\label{T4b}\ee so letting $\widetilde{f}=f/(\frac{2}{3})^{\frac{q-1}{q}}=\big(\frac{3}{2}\big)^{\frac{q-1}{q}}f$ gives $||\widetilde{f}||_q^q\le 1$. We can write \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \int_{\{|Tf|\geq 1\}}\exp\bigg[\frac{1}{A_g}|Tf(x)|^{\frac{n}{n-\alpha}}\bigg]dx=\int_{\{|Tf|\geq 1\}}\exp\bigg[\dfrac{2}{3A_g}|T\widetilde{f}(x)|^{\frac{n}{n-\alpha}}\bigg]dx. \label{T4c}\ee So by taking $\theta=\dfrac{2}{3}$ in Adachi-Tanaka result, we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \int_{\{|Tf|\geq 1\}}\exp\bigg[\frac{2}{3A_g}|T\widetilde{f}(x)|^{\frac{n}{n-\alpha}}\bigg]dx\le \frac{C}{1-2/3}||T\widetilde{f}||_q^q=3{(\frac{3}{2})^{q-1}C}||T{f}||_q^q\le C.\label{T4d}\ee Combining \eqref{T4c} and \eqref{T4d} finishes the proof in case 1. \bigskip \underline{\emph{Proof of \eqref{T4a} in case 2:}} In this case, \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} ||f||_q^q\le 1-\tau \in\bigg(\big(\frac{2}{3}\big)^{q-1},1\bigg).\label{T4e}\ee Let $p>1$ be such that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} p(1-\tau)^{\frac{\alpha}{n-\alpha}}=1. 
\nt\ee Rewrite \eqref{T4a} and apply Holder's inequality, \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned}&\int_{\{|Tf|\geq 1\}}\exp\bigg[\frac{1}{A_g}|Tf(x)|^{\frac{n}{n-\alpha}}\bigg]dx= \int_{\{|Tf|\geq 1\}} \frac{\exp\bigg[\dfrac{1}{A_g}|Tf(x)|^{\frac{n}{n-\alpha}}\bigg]}{1+|Tf|^{\frac{n}{(n-\alpha)p}}}\bigg(1+|Tf|^{\frac{n}{(n-\alpha)p}}\bigg)dx\cr &\le \left(\int_{\{|Tf|\geq 1\}} \left(\frac{\exp\bigg[\dfrac{1}{A_g}|Tf(x)|^{\frac{n}{n-\alpha}}\bigg]}{1+|Tf|^{\frac{n}{(n-\alpha)p}}}\right)^p dx\right)^{\frac{1}{p}} \left(\int_{\{|Tf|\geq 1\}} \bigg(1+|Tf|^{\frac{n}{(n-\alpha)p}}\bigg)^{\frac{p}{p-1}}dx\right)^{\frac{p-1}{p}}\cr &=I'\cdot I''. \end{aligned}\label{T4f}\ee Let \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \widetilde{f}=\frac{f}{(1-\tau)^{1/q}}\nt\ee so that $||\widetilde{f}||_q^q\le 1$. Applying Theorem \ref{T1} gives \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} I'&\le \frac{C}{(1-\tau)^{1/q}}\left(\int_{\{|Tf|\geq 1\}} \left(\frac{\exp\bigg[\dfrac{(1-\tau)^{\frac{\alpha}{n-\alpha}}}{A_g}|T\widetilde{f}(x)|^{\frac{n}{n-\alpha}}\bigg]}{1+|T\widetilde{f}|^{\frac{n}{(n-\alpha)p}}}\right)^p dx\right)^{\frac{1}{p}}\cr &\le C\left(\int_{\{|Tf|\geq 1\}} \frac{\exp\bigg[\dfrac{1}{A_g}|T\widetilde{f}(x)|^{\frac{n}{n-\alpha}}\bigg]}{1+|T\widetilde{f}|^{\frac{n}{(n-\alpha)}}} dx\right)^{\frac{1}{p}}\cr &\le C||T\widetilde{f}||_q^{\frac{q}{p}}\le \frac{C}{1-\tau}||Tf||_q^{\frac{q}{p}}\le C||Tf||_q^{\frac{q}{p}}. \end{aligned}\label{T4g}\ee Next, to estimate $I''$ we start with the following Adachi-Tanaka inequality: \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \int_{\{|Tf|\geq 1\}}\exp\bigg[\frac{1}{2A_g}|T{f}(x)|^{\frac{n}{n-\alpha}}\bigg]dx\le C||T{f}||_q^q.\label{T4h}\ee Let \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} P=\frac 1{p-1},\ P_0=\bigg\lceil\frac{1}{p-1}\bigg\rceil-1,\ P_1=\bigg\lceil\frac{1}{p-1}\bigg\rceil\nt\ee and define \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} F(x):=\bc Tf(x) \ \ &\text{if}\ |Tf(x)|\geq 1\\ 0\ \ &\text{otherwise}.\ec\ee By the power series expansion of the exponential function, we have that the inequality \eqref{T4h} implies that for any integer $N\geq 1$, \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} \int_{\{|Tf|\geq 1\}}{|Tf(x)|^{\frac{nN}{n-\alpha}}}&=||F||_{q'N}^{q'N}\le {(2A_g)^{N}N!} \int_{\{|Tf|\geq 1\}}\exp\bigg[\frac{1}{2A_g}|T{f}(x)|^{\frac{n}{n-\alpha}}\bigg]dx\cr &\le C{(2A_g)^{N}N!}||T{f}||_q^q\le C(2A_g)^{N}N^{N}||T{f}||_q^q .\label{T4i}\end{aligned}\ee By \eqref{T4e}, we have $P_0,P_1\geq 1$, hence \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} ||F||_{q'P_0}^{q'}&\le 2A_gC^{1/P_0}P_0||Tf||_q^{q/P_0}\le CP_0||Tf||_q^{q/P_0}\cr ||F||_{q'P_1}^{q'}&\le 2A_gC^{1/P_1}P_1||Tf||_q^{q/P_1}\le CP_1||Tf||_q^{q/P_1}.\end{aligned}\nt\ee Let $a\in[0,1]$ be the number such that 
\begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \frac{1}{P}=\frac a {P_0}+\frac{1-a}{P_1}.\nt\ee By interpolation [Fol, Proposition 6.10] we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} ||F||_{q'P}^{q'}&\le ||F||_{q'P_0}^{q'a}||F||_{q'P_1}^{q'(1-a)}\le CP_0^aP_1^{1-a}||Tf||_q^{q(a/P_0)}||Tf||_q^{q(1-a)/P_1}\cr &= CP_0^aP_1^{1-a}||Tf||_q^{q/P}\le CP||Tf||_q^{q/P}.\end{aligned}\ee Hence, since $p>1$ and $1+|Tf|^{\frac{n}{(n-\alpha)p}}\le 2|Tf|^{\frac{n}{(n-\alpha)p}}$ on the set $\{|Tf|\geq 1\}$, \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned}\begin{aligned} I''&\le 2\left(\int_{\{|Tf|\geq 1\}}{|Tf(x)|^{\frac{n}{n-\alpha}\frac{1}{p-1}}}dx\right)^{\frac{p-1}{p}}=2||F||_{q'P}^{q'/p}\le C\bigg(\frac{1}{p-1}\bigg)^{1/p}||T{f}||_q^{q(\frac{p-1}{p})}\cr&\le \frac{C}{p-1}||T{f}||_q^{q(\frac{p-1}{p})}. \end{aligned}\label{T4j}\ee So combining \eqref{T4g} and \eqref{T4j}, we get \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} I'\cdot I''\le C\frac{1}{p-1}||T{f}||_q^{q}=C\frac{\tau}{p-1}\le C\frac{\tau}{1-(1-\tau)^{\frac{\alpha}{n-\alpha}}}\le C\label{T4k}\ee where the second inequality is by \eqref{T4e}. \section{Introduction and main results} \par \ \ \ The Moser-Trudinger inequality with exact growth condition on $\mathbb{R}^n$ takes the form \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \int_{\mathbb{R}^n} \frac{\exp_{\lceil\frac{n}{\alpha}-2\rceil}\left[\beta}\def\ab{{\overline\a}(\alpha,n)|u|^{\frac{n}{n-\alpha}}\right]}{1+|u|^{\frac{n}{n-\alpha}}}dx\le C||u||^{n/\alpha}_{n/\alpha}\ \ \quad \text{for all }\ u\in W^{\alpha,\frac{n}{\alpha}}(\mathbb{R}^n),\ \ ||\nabla^\alpha u||_{n/\alpha}\le 1\label{in17}\ee where $\lceil x\rceil$ denotes the ceiling of $x$, i.e. the smallest integer greater than or equal to $x$, and where $\exp_N$ is the regularized exponential, that is \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \exp_N(t)=e^t-\sum_{k=0}^{N}\frac{t^k}{k!}\nt\ee and where for $\alpha\in(0,n)$ an integer the higher order gradient $\nabla^\alpha$ is defined as \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \nabla^\alpha u=\bc (-\Delta)^{\frac \alpha 2}u\ \ \ &\text{if}\ \alpha\ \text{is even}\\ \nabla (-\Delta)^{\frac {\alpha-1} 2}u\ \ \ &\text{if}\ \alpha\ \text{is odd.}\ec\nt\ee Such an inequality was first proved by Ibrahim, Masmoudi and Nakanishi [IMN] for $n=2$ and $\alpha=1$, followed by Masmoudi and Sani who dealt with the cases $n=4$, $\alpha=2$ in [MS1], any $n$ and $\alpha=1$ in [MS2], and any $n$ and any integer $\alpha$ in [MS3]. In [LT] Lu and Tang dealt with the case $\alpha=2$, for any $n$.
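\par For orientation, in the simplest case $n=2$, $\alpha=1$ one has $\lceil\frac{n}{\alpha}-2\rceil=0$, $\exp_0(t)=e^t-1$, and $\beta(1,2)=4\pi$ is the classical Moser constant, so that \eqref{in17} reads
\begin{equation}
\int_{\mathbb{R}^2}\frac{e^{4\pi u^2}-1}{1+u^2}\,dx\le C\|u\|_2^2,\qquad \|\nabla u\|_2\le 1,\notag
\end{equation}
which is the inequality obtained in [IMN].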
In all these results, the explicit sharp exponential constant $\beta(\alpha,n)$ (see \eqref{5c}) is the same as the sharp exponential constant in the classical Moser-Trudinger inequality on bounded domains due to Adams [A1]: \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \int_{\Omega} \exp\left[\beta}\def\ab{{\overline\a}(\alpha,n)|u|^{\frac{n}{n-\alpha}}\right]dx\le C|\Omega|\ \ \quad \text{for all } u\in W_0^{\alpha,\frac{n}{\alpha}}(\Omega),\ \ \|\nabla^\alpha u\|_{n/\alpha}\le 1\label{Adams}\ee where $|\Omega|$ denotes the Lebesgue measure of $\Omega$. Recall that the exponential constant is sharp in the sense that it cannot be replaced by a larger constant. The main new result behind the proof of \eqref{in17} in [IMN], [MS1] ($\alpha=1$) is what the authors call ``optimal descending growth condition'' (ODGC). In essence, this result gives an optimally adjusted exponential growth of {\it radial} functions outside balls of radius $R$, given the $L^n$ norms of their gradients. In [MS2] and [LT] the same result is proven for radial functions under $L^{n/2}$ norm conditions on their Laplacians, and in [MS3] under Lorentz $L^{n/2,q}$ norm conditions on their Laplacians. The key initial step that allowed the authors to only consider the radial Sobolev functions was the application of well-known, powerful symmetrization inequalities, specifically the P\'olya-Szeg\"o and Talenti's inequalities. There are a few other types of sharp Moser-Trudinger inequalities in the whole $\mathbb{R}^n$. The most common one states that for all $u\in W^{\alpha,\frac{n}{\alpha}}(\mathbb{R}^n)$ satisfying the {\it Ruf condition} \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \|u\|_{n/\alpha}^{n/\alpha}+\|\nabla^\alpha u\|_{n/\alpha}^{n/\alpha}\le 1\label{ruf}\ee the following estimate holds \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \int_{\mathbb{R}^n}\exp_{\lceil\frac{n}{\alpha}-2\rceil}\left[\beta}\def\ab{{\overline\a}(\alpha,n)|u|^{\frac{n}{n-\alpha}}\right]dx\le C.\label{MT1}\ee This result was first derived by Ruf [R] for $\alpha=1$ in dimension $n=2$ and later extended to all dimensions by Li-Ruf [LR]. The general case was settled by Fontana-Morpurgo in [FM2], where the authors prove \eqref{MT1} under \eqref{ruf} for arbitrary $n$ and integer $\alpha$, but also for fractional powers of $\Delta$, and for homogeneous elliptic operators with constant coefficients. Under norm conditions weaker than \eqref{ruf} estimate \eqref{MT1} is in general false, but it becomes true if one lowers the exponential constant. For example under the condition \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \max\big\{\|u\|_{n/\alpha}, \,\|\nabla^\alpha u\|_{n/\alpha}\big\}\le 1\label{at}\ee inequality \eqref{MT1} holds with exponential constant $\theta\beta(\alpha,n)$, for any $\theta\in (0,1)$. This result was originally derived for $\alpha=1$ by Cao [C] and Panda [P] in dimension 2 and by Do \'O [D] in any dimension. Later Adachi-Tanaka [AT] re-proved the result and cast it in a dilation invariant form.
In [FM2] the authors derived the general case as a corollary of \eqref{MT1} under \eqref{ruf}, and showed that under either \eqref{at} or under \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \|u\|_{n/\alpha}^{rn/\alpha}+\|\nabla^\alpha u\|_{n/\alpha}^{rn/\alpha}\le 1,\qquad r>1\label{at1}\ee for any $\theta\in(0,1)$ \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \int_{\mathbb{R}^n}\exp_{\lceil\frac{n}{\alpha}-2\rceil}\left[\theta\beta}\def\ab{{\overline\a}(\alpha,n)|u|^{\frac{n}{n-\alpha}}\right]dx\le C(1-\theta)^{-1+1/r},\label{MT2}\ee where $r=\infty$ under \eqref{at}. It is important to point out that the Masmoudi-Sani result is the strongest one to date, in the sense that it directly implies \eqref{MT1} under the Ruf condition (see [MS1], [MS2], [MS3]). Our initial goal was to derive the sharp Adams inequality with exact growth condition for the Riesz potential $$I_\alpha f(x)=\int_{\mathbb{R}^n} |x-y|^{\alpha-n} f(y)dy,$$ that is \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned}\int_{\mathbb{R}^n}\frac{\exp_{\lceil\frac{n}{\alpha}-2\rceil}\left[\dfrac{1}{|B_1|}|I_\alpha f|^{\frac{n}{n-\alpha}}\right]}{1+|I_\alpha f|^{\frac{ n}{n-\alpha}}}dx \leq C||I_\alpha f||_{n/\alpha}^{n/\alpha},\qquad ||f||_{\frac{n}{\alpha}}\le 1, \label{in18}\ee where $|B_1|$ is the volume of the unit ball of $\mathbb{R}^n$ and where the exponential constant is sharp. Note that the exponential constant $|B_1|^{-1}$ is the same constant as in the original inequality due to Adams [A1]: \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \int_{\Omega} \exp\left[\dfrac{1}{|B_1|}|I_\alpha f(x)|^{\frac{n}{n-\alpha}}\right]dx\le C|\Omega|\ \ \quad \text{for all } f\in L^{\frac{n}{\alpha}}(\Omega),\ \ \|f\|_{n/\alpha}\le 1\label{Adams1}.\ee Clearly \eqref{in18} implies \eqref{in17}, in the same way that \eqref{Adams1} implies \eqref{Adams} due to the fact that $I_\alpha$ is the inverse of $(-\Delta)^{\alpha/2}$ on smooth, compactly supported functions. In this paper we prove that \eqref{in18} is true, and not only for the Riesz kernel but for a subclass of the Riesz-like kernels introduced by Fontana-Morpurgo, which we call \emph{strictly Riesz-like kernels.}\par \bigskip To describe our result let us recall the definition given in [FM2]: \begin{de}\label{rzlike} A measurable function $K\ :\ \mathbb{R}^n\setminus \{0\}\to \mathbb{R}$ is a Riesz-like kernel of order $\alpha\in(0,n)$ if it satisfies the following properties: \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} K(x)=g(x^*)|x|^{\alpha-n}+O(|x|^{\alpha-n+\delta_1}) \ ,\qquad x^*=\frac{x}{|x|},\qquad 0<|x|\le B\label{A1}\tag{A1}\ee \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned}|K(x)|\leq H_1|x|^{\alpha-n}\label{A2}\tag{A2}\ee \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned}|K(z_1)-K(z_2)|\leq H_2|z_1-z_2|\max \{{|z_1|}^{\alpha-n-1},{|z_2|}^{\alpha-n-1}\},\ \ z_1, z_2\neq0 \label{A3}\tag{A3}\ee where $g\ :\ S^{n-1}\to\mathbb{R}$ is a measurable function and $\delta, H_1,H_2,B$ are positive constants. 
\end{de} If we add an additional condition \eqref{A4} as below, we have more restrictive control of the kernel $K$ when $|x|$ is large: \begin{de}\label{riesz2} A measurable function $K\ :\ \mathbb{R}^n\setminus \{0\}\to \mathbb{R}$ is a strictly Riesz-like kernel of order $\alpha\in(0,n)$ if it is Riesz-like and satisfies the following property: \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} |K(x)|\le |g(x^*)||x|^{\alpha-n}+O(|x|^{\alpha-n-\delta_2}) \ ,\ \ |x|>B \label{A4}\tag{A4}\ee where $g\ :\ S^{n-1}\to\mathbb{R}$ is a measurable function and $B, \delta_1,\delta_2, H_1,H_2$ are positive constants. \end{de} Here the ``big O'' notation in \eqref{A1} means that $|O(|x|^{\alpha-n+\delta_1})|\le C|x|^{\alpha-n+\delta_1}$ for all $x$ such that $0<|x|\le B$. And the same notation in \eqref{A4} means that $|O(|x|^{\alpha-n-\delta_2})|\le C|x|^{\alpha-n-\delta_2}$. It is clear that \eqref{A3} implies that $g$ is Lipschitz. Also, \eqref{A1},\eqref{A3} and \eqref{A4} imply \eqref{A2}. Clearly, any kernel of type $g(x^*)|x|^{\alpha-n}$ with $g$ Lipschitz on the sphere, provides an example of strictly Riesz-like kernel. \par For $m\in \mathbb{N}$, a kernel $K$ is called \emph{m-regular} if $K\in C^m(\mathbb{R}^n\setminus \{0\})$ and \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} |D_x^h K(x)|\le C|x|^{\alpha-n-|h|},\ \ \ x\ne 0,\ |h|\le m\nt\ee where $h=(h_1,...,h_n)$ is a multi-index with $|h|=h_1+...+h_n$ and where $D_x^h K$ denotes the $h$-th derivative of $K$ with respect to $x$. Clearly the Riesz kernel is $m$-regular for all $m$, and any $1$-regular $K$ satisfies condition \eqref{A3}. \par Let us denote $T$ the convolution operator with kernel $K$: \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} Tf(x)=K\ast f(x)=\int_{\mathbb{R}^n} K(x-y)f(y)dy.\nt\ee For vector valued functions $K=(K_1,...,K_m),\ ; f=(f_1,...,f_m)$ we define $Tf$ in the same way with \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} Kf=K_1f_1+...+K_mf_m,\ \ \ |f|=(f_1^2+...+f_m^2)^{1/2}.\nt\ee \par The results and proofs in this paper apply to both scalar and vector cases, so we will not distinguish between these two cases, except in the proof of sharpness.\par \bigskip The main result of this paper is the following: \begin{theorem}\label{T1} Let $0<\alpha <n$, and $K$ is strictly Riesz-like. There exists $C=C(n,\alpha,K)>0$ such that for all compactly supported $f$ with $$ ||f||_{\frac{n}{\alpha}}\leq 1$$ we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned}\int_{\mathbb{R}^n}\frac{\exp_{\lceil\frac{n}{\alpha}-2\rceil}\bigg[\dfrac{1}{A_g}|Tf|^{\frac{n}{n-\alpha}}\bigg]}{1+|Tf|^{\frac{ n}{n-\alpha}}}dx \leq C||Tf||_{n/\alpha}^{n/\alpha}, \label{1b}\ee where \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} A_g=\dfrac 1 n \int_{S^{n-1}}|g(x^*)|^{\frac{n}{n-\alpha}}dx^*.\label{ag1}\ee If $K$ is $n$-regular, then the exponential constant $A_g^{-1}$ in \eqref{1b} cannot be replaced by a larger number. Furthermore, if $K$ is $n$-regular, then \eqref{1b} cannot hold if the power $\frac{n}{n-\alpha}$ in the denominator is replaced by any smaller power. 
\end{theorem} Here $dx^*$ is the surface measure of the unit sphere $S^{n-1}$, induced by the Lebesgue measure. \smallskip \newpage As pointed out in [FM2] Adams type estimates involving an integral of the regularized exponential over the whole space, have equivalent ``local" formulations in terms of the standard exponential. Via the exponential regularization lemma (Lemma A in section 3) estimate \eqref{1b} is equivalent to the following local version \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned}\int_{E}\frac{\exp\bigg[\dfrac{1}{A_g}|Tf|^{\frac{n}{n-\alpha}}\bigg]}{1+|Tf|^{\frac{ n}{n-\alpha}}}dx \leq C\big(|E|+||Tf||_{n/\alpha}^{n/\alpha}\big) \label{1c}\ee valid for all measurable $E$ with finite measure, and under $ ||f||_{\frac{n}{\alpha}}\leq 1$.\par We mention that inequality \eqref{1b} still holds if we have ``$\le$'' instead of ``$=$'' in condition \eqref{A1}, that is, if \eqref{A1} is replaced by \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} |K(x)|\le |g(x^*)||x|^{\alpha-n}+|O(|x|^{\alpha-n+\delta_1})| ,\qquad 0<|x|\le B.\label{A1'}\tag{A1'}\ee But in order to have sharpness in the exponential constant $A_g^{-1}$ we need to assume condition \eqref{A1}.\par We point out that for Theorem \ref{T1} to hold it is not enough to assume that $K$ be only Riesz-like. It is relatively easy to find an example of a Riesz-like kernel such that the inequality in Theorem \ref{T1} cannot hold, but the one in [FM2, Theorem 5] holds. In section \ref{sharpness} remark \ref{example}, we will address this example, which indicates that it is necessary for us to strengthen our assumption for large $|x|$, i.e. \eqref{A4}, so that $K$ has same behavior near the origin and at infinity. \par One of the main difficulties we had to overcome toward a proof of Theorem 1, even for the Riesz potential as in \eqref{in18}, was to find a suitable replacement of the optimal growth condition result for the potential $Tf$, under norm conditions on $f$. Clearly, in this context no tools such as the P\'olya-Szeg\"o or Talenti's inequalities are available, which makes an initial reduction to radial functions impossible, even in the case of the Riesz potential. The way we bypass this problem is by carefully splitting the function $f$ and by making use of an improved O'Neil inequality. Very loosely speaking, we will consider a suitable 1-parameter family of sets $F_\tau$ depending on $f$, and with measure $\tau$, and we will split $f$ as $f=f_\tau+f_\tau'$, with $f_\tau=f\chi_{F_\tau}^{}.$ By use of an improved O'Neil inequality we will prove an estimate of type \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} (Tf)^*(t)\le U f_\tau(t)+ U' f_\tau' (\tau), \;\;0<t\le \tau \label{estimate}\ee where $(Tf)^*$ is the symmetric decreasing rearrangement of $Tf$, and where $U, U'$ are two suitable, real-valued (nonlinear) functionals stemming from the O'Neil inequality (see estimate \eqref{e1}). The first term in \eqref{estimate} is handled by an Adams inequality for sets of finite measure due to Fontana-Morpurgo (see Theorem A). The challenging part is the proof of an optimal descending growth condition for the function $U'f_\tau'(\tau)$ (see Proposition 1). In [IMN], [MS1], [MS2], [LT] a version ODGC was first proved for sequences, followed by a suitable discretization of radial Sobolev functions. 
We will also make use of the discrete ODGC for sequences (see Lemma \ref{lo}, Section 4); however, the discretization of $U'f'_\tau(\tau)$ turns out to be rather involved (see Proposition~1 and its proof, given in Section~5). \bigskip \par As a consequence of Theorem \ref{T1} we derive the following general Adachi-Tanaka type inequality: \begin{cor}\label{T3} If $K$ is a strictly Riesz-like kernel, then for any $\theta\in(0,1)$ there exists $C$ independent of $\theta$ such that for all compactly supported $f$ with \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} ||f||_{n/\alpha}\le 1, \label{3a} \ee we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \int_{\mathbb{R}^n}\exp_{\lceil\frac{n}{\alpha}-2\rceil}\bigg[\frac{\theta}{A_g}|Tf(x)|^{\frac{n}{n-\alpha}}\bigg]dx\leq \frac{C}{1-\theta}||Tf||_{n/\alpha}^{n/\alpha} \label{3c}\ee where $A_g$ is the same as in \eqref{ag1}. If $K$ is $n$-regular and $K\notin L^{\frac{n}{n-\alpha}}(|x|\geq 1)$ then inequality \eqref{3c} is sharp, in the sense that the exponential integrals cannot be uniformly bounded if $\theta=1$. \end{cor} Estimate \eqref{3c} improves the one obtained in [FM2, Theorem 6], which does not have $\|Tf\|_{n/\alpha}$ on the right hand side, and which has $(1-\theta)^{-1}$ only in the case $K$ homogeneous. At the level of Moser-Trudinger inequalities, Theorem 1 implies almost immediately the Masmoudi-Sani result \eqref{in17}, for integer powers $\alpha$. Similarly, as a consequence of Theorem 1, we will obtain a Moser-Trudinger inequality with exact growth condition for the fractional Laplacian $(-\Delta)^{\alpha/2}$, for any $\alpha\in (0,n)$, and also for general homogeneous elliptic operators.\par To describe this result, recall that the Sobolev space $ W^{\alpha,p}({\mathbb{R}^n})$ consists of all locally summable functions $u\ :\ {\mathbb{R}^n}\to \mathbb{R}$ such that for each multi-index $h$ with $|h|\le \alpha$, the $h$-th weak partial derivative of $u$ exists and belongs to $L^p({\mathbb{R}^n}).$ For non-integer $\alpha$, the space $W^{\alpha,\frac{n}{\alpha}}(\mathbb{R}^n)$ will denote the Bessel potential space \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} W^{\alpha,p}(\mathbb{R}^n)=\{u\in \mathcal{S}'\ :\ (I-\Delta)^{\alpha/2}u\in L^p(\mathbb{R}^n)\}=\{G_\alpha\ast f,\ f\in L^p(\mathbb{R}^n)\},\label{r2}\ee where $G_\alpha$ is the kernel of the Bessel potential $(I-\Delta)^{-\alpha/2}$ and its Fourier transform is $(1+4\pi^2|\xi|^2)^{-\alpha/2}$. We also recall that a homogeneous elliptic differential operator of even order $\alpha<n$ with real constant coefficients has the form \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} Pu=\sum_{|k|=\alpha}a_kD^ku\label{5e}\ee for $u\in C_c^\infty(\mathbb{R}^n)$, with symbol \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} p_\alpha(\xi)=P(2\pi i \xi)=(2\pi)^\alpha(-1)^{\alpha/2}\sum_{|k|=\alpha}a_k\xi^k,\ \ |p_\alpha(\xi)|\geq c_0|\xi|^\alpha,\ \ \xi\in\mathbb{R}^n\nt\ee for some $c_0>0$.
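\par For instance, to fix ideas, for $P=\Delta$ in dimension $n>2$ (so that $\alpha=2$) one has $a_k=1$ for $k=2e_j$, $j=1,...,n$, and $a_k=0$ otherwise, hence $p_2(\xi)=\sum_{j=1}^n(2\pi i\xi_j)^2=-4\pi^2|\xi|^2$, and the ellipticity condition holds with $c_0=4\pi^2$.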
The fundamental solution of $P$ is given by a convolution operator with kernel $g_P:$ \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} g_P(x)=\int_{\mathbb{R}^n} \frac{e^{-2\pi i x\cdot \xi}}{p_\alpha(\xi)}d\xi \label{5f}\ee in the sense of distributions. Since $p_\alpha$ is homogeneous of order $\alpha$, the kernel $g_P$ is also homogeneous with order $\alpha-n$. \begin{theorem}\label{T5} For $0<\alpha<n, $ let $P$ be either $(-\Delta)^{\frac{\alpha}{2}}$, $\nabla(-\Delta)^{\frac{\alpha-1}{2}}$ for $\alpha$ odd, or a homogeneous elliptic operator of even order $\alpha<n$ with constant coefficients. Then there exists $C=C(\alpha,n,P)>0$ such that for every $u\in W^{\alpha,\frac{n}{\alpha}}(\mathbb{R}^n)$ with \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned}||Pu||_{n/\alpha}^{n/\alpha}\leq 1\label{5a}\ee we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \int_{\mathbb{R}^n}\frac{\exp_{\lceil\frac{n}{\alpha}-2\rceil}\big[\gamma(P)|u(x)|^{\frac{n}{n-\alpha}}\big]}{1+|u(x)|^{\frac{n}{n-\alpha}}}dx\leq C||u||_{n/\alpha}^{n/\alpha} \label{5b}\ee where \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \gamma(P)= \begin{cases} \dfrac{c_\alpha^{-\frac{n}{n-\alpha}}}{|B_1|},\ \ \ &\text{if} \ P=(-\Delta)^{\frac{\alpha}{2}}\\ \dfrac{((n-\alpha-1)c_{\alpha+1})^{-\frac{n}{n-\alpha}}}{|B_1|},\ \ &\text{if} \ P=\nabla(-\Delta)^{\frac{\alpha-1}{2}}\ and \ \alpha\ odd, \end{cases} \label{5c} \ee with \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} c_\alpha=\frac{\Gamma(\frac{n-\alpha}{2})}{2^\alpha \pi^{n/2}\Gamma(\frac{\alpha}{2})}\label{calpha}\ee and where \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \gamma(P)=\frac{n}{\int_{S^{n-1}}|g^{}_P(x^*)|^{\frac{n}{n-\alpha}}dx^*}\nt\ee if $P$ elliptic and $\alpha$ even. The exponential constant $\gamma(P)$ is sharp. Moreover, the above inequality \eqref{5b} cannot hold if the power $\frac{n}{n-\alpha}$ in the denominator is replaced by any smaller power. \end{theorem} As an immediate consequences of Theorem \ref{T5}, we have the following Corollary: \begin{cor}\label{corT5} Let \ $\Omega$ be a bounded and open set in $\mathbb{R}^n$, $0<\alpha<n$ an integer. There exists $C>0$ such that for all $u\in W_0^{\alpha,\frac{n}{\alpha}}(\Omega)$ with $||\nabla^\alpha u||_{\frac{n}{\alpha}}\le 1$ we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \int_{\Omega}\frac{\exp_{\lceil\frac{n}{\alpha}-2\rceil}\big[\gamma(P)|u(x)|^{\frac{n}{n-\alpha}}\big]}{1+|u(x)|^{\frac{n}{n-\alpha}}}dx\leq C||u||_{n/\alpha}^{n/\alpha}. \label{cT5a}\ee The exponential constant $\gamma(P)$ is sharp. Furthermore, the above inequality \eqref{cT5a} cannot hold if the power $\frac{n}{n-\alpha}$ in the denominator is replaced by any smaller power. \end{cor} Although the proof of \eqref{cT5a} uses Adams inequality on $\Omega$, it is still not an easy direct consequence from Adams [A1]. We also mention that the above inequality \eqref{cT5a} is different from the inequalities in [A1] because of the norm of $u$ in the RHS. 
For example, take $\alpha=1,\ n=2$, then by Corollary \ref{corT5} we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \int_{\Omega}\frac{e^{4\pi u^2}-1}{1+u^2}dx\leq C||u||_2^2 \label{cT5b}\ee and [A1] gave \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \int_{\Omega}{e^{4\pi u^2}}dx\leq C|\Omega| \label{cT5c}\ee So for fixed $\Omega$, we can see that, if $||u||_2^2$ becomes very small, then so is the LHS of \eqref{cT5b}, but this point may not be reflected by the second inequality \eqref{cT5c}.\par \def{\cal L}{{\cal L}} As we mentioned earlier, Riesz-like kernels were introduced in [FM2], where the authors proved, among other things, that if $K$ is a Riesz-like kernel, then under the Ruf condition \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} ||f||^{n/\alpha}_{n/\alpha}+||Tf||^{n/\alpha}_{n/\alpha}\leq 1\label{4a}\ee the following Adams inequality holds: \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \int_{\mathbb{R}^n}\exp_{\lceil{\frac{n}{\alpha}-2\rceil}}\bigg[\frac{1}{A_g}|Tf(x)|^{\frac{n}{n-\alpha}}\bigg]dx\leq C \label{4c}\ee where $A_g$ is as in \eqref{ag1}. and where the exponential constant $A_g^{-1}$ is sharp if the kernel is $n-$regular. In section \ref{rufinq} we will prove that such result is implied by Theorem~\ref{T1} if $K$ is strictly Riesz-like. \section{Proof of Theorem \ref{T5} and Corollary \ref{corT5}} \ \ \ In Theorem \ref{T1} we assume that the functions $f$ are compactly supported, with both $f$ and $Tf$ in the space $L^q(\mathbb{R}^n)$. We denote this space of functions by $D_0$: \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} D_0:=\{f\in L^q(\mathbb{R}^n)\ :\ {\rm supp }}\def\N{{\mathbb N} f\ \text{is compact and} \ Tf\in L^q(\mathbb{R}^n)\}.\nt\ee In the following Theorem [FM2, Theorem 7] we see that $T$ has a smallest closed extension, which enables us to extend Theorem \ref{T1} to all functions in the domain of the extension. In particular, Theorem \ref{T5} is a consequence of the following Theorem: \bigskip {\textup{\bf Theorem B ([FM2, Theorem 7])}.} \emph{If $K$ is a Riesz-like kernel, then the operator $T\ :\ D_0(T)\rightarrow L^q(\mathbb{R}^n)$ is closable, and its smallest closed extension (still denoted $T$) has domain \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} D(T)=\{f\in L^q(\mathbb{R}^n)\ :\ \exists \{f_k\}\subseteq D_0(T), \exists h\in L^q(\mathbb{R}^n) \ \text{with} \ f_k\xrightarrow[]{L^q} f, Tf_k\xrightarrow[]{L^q} h\}\label{T5a}\ee and \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} Tf=h.\nt\ee In the case of Riesz potential we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} W^{\alpha,q}(\mathbb{R}^n)=\{I_\alpha f, f\in D(I_\alpha)\} \label{T5b}\ee and the operator $(-\Delta)^{\frac{\alpha}{2}}$ is a bijection between $ W^{\alpha,q}(\mathbb{R}^n)$ and $D(I_\alpha)$, with inverse $c_\alpha I_\alpha.$} \bigskip By using the above Theorem B and Fatou's lemma, we easily deduce that Theorem \ref{T1} is still valid for all functions $f$ in $D(T)$. 
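For completeness, we sketch this approximation argument: given $f\in D(T)$ with $||f||_{n/\alpha}\le 1$, choose $f_k\in D_0(T)$ such that $f_k\to f$ and $Tf_k\to Tf$ in $L^q$, and pass to a subsequence along which $Tf_k\to Tf$ almost everywhere. Applying \eqref{1b} to $f_k/\max\{1,||f_k||_{n/\alpha}\}$ and letting $k\to\infty$, Fatou's lemma on the left hand side and the $L^q$ convergence on the right hand side give \eqref{1b} for $f$.\par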
Also, \eqref{T5b} tells us that the image of the extended domain $D(I_\alpha)$ under the Riesz potential is the space $W^{\alpha,q}(\mathbb{R}^n)$. Therefore, since the inverse of $(-\Delta)^{\frac{\alpha}{2}}$ is $c_\alpha I_\alpha$, we have \eqref{5b}.\par In the case of an elliptic operator, by the formula \eqref{5f} the kernel of the integral operator is homogeneous of order $\alpha-n$, and therefore we have \eqref{5b}.\par In the remaining cases $\alpha$ is an integer, so it is enough to assume $u\in C_c^\infty(\mathbb{R}^n)$. For $P=\nabla(-\Delta)^{\frac{\alpha-1}{2}}$ with $\alpha$ an odd integer, since $u=c_{\alpha+1}I_{\alpha+1}(-\Delta)^{\frac{\alpha+1}{2}}u$, we can write \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} u(x)=\int_{\mathbb{R}^n} c_{\alpha+1}(n-\alpha-1)|x-y|^{\alpha-n-1}(x-y)\cdot f(y)\,dy,\ \ \ f=\nabla(-\Delta)^{\frac{\alpha-1}{2}}u.\label{T5c}\ee Clearly the kernel in the above formula satisfies our assumptions \eqref{A1}-\eqref{A4}, so \eqref{5b} follows.\par For the proof of Corollary \ref{corT5}, it is clear that the inequality \eqref{cT5a} is a direct consequence of \eqref{5b} since $\Omega\subseteq \mathbb{R}^n$. \bigskip {\bf Proof of sharpness:} To prove the sharpness, let $\psi_{\epsilon,r}$ be the function from the sharpness proof in Section \ref{sharpness}. If $P=(-\Delta)^{\frac{\alpha}{2}}$, consider the functions \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} u_{\epsilon,r}=c_\alpha I_\alpha\psi_{\epsilon,r}. \nt\ee Similarly, for $P$ an elliptic operator, let $u_{\epsilon,r}=g_P\ast \psi_{\epsilon,r}$. \par Lastly, we construct the extremal family of functions that proves sharpness for the case $P=\nabla(-\Delta)^{\frac{\alpha-1}{2}}$ in Theorem \ref{T5}, as well as sharpness for Corollary \ref{corT5}. Note that in all these cases $\alpha$ is an integer. We use the same extremal functions as in Adams ([A1], see also [FM1], [FM2], [MS2]). Let $\varphi\in C^\infty([0,1])$ be such that $\varphi^{(k)}(0)=0$ for $0\le k\le \alpha-1$, and $\varphi(1)=\varphi'(1)=1$, $\varphi^{(k)}(1)=0$ for $2\le k\le m-1$. 
Let $\epsilon$ be small enough, and define \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} v_\epsilon(y)=\bc 0\qquad &\text{for}\ |y|\geq \frac{3}{4}\cr \varphi(\log\frac{1}{|y|})&\text{for}\ \frac{1}{2}\le|y|\le\frac{3}{4}\cr \log\frac{1}{|y|}&\text{for}\ 2\epsilon\le|y|\le\frac{1}{2}\cr\log\frac{1}{\epsilon}-\varphi(\log\frac{|y|}{\epsilon}) &\text{for}\ \epsilon\le|y|\le 2\epsilon\cr\log\frac{1}{\epsilon}&\text{for}\ |y|\le\epsilon.\ec\nt\ee Then we have that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} ||v_\epsilon||_{q}\le C,\qquad ||\nabla^\alpha v_\epsilon||_{q}^{q'}=\frac{\gamma(P)}{n}(\log\frac{1}{\epsilon})^{q'-1}+O(1).\nt\ee Let \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} u_\epsilon=\frac{v_\epsilon}{||\nabla^\alpha v_\epsilon||_{q}}.\nt\ee Then it is clear that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} ||\nabla^\alpha u_\epsilon||_q\le 1,\qquad ||u_\epsilon||^{ q}_{q}\le C(\log\frac{1}{\epsilon})^{-1},\label{mtshp1}\ee and \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} |u_\epsilon|^{q'}\geq \gamma(P)^{-1}\log\frac{1}{\epsilon^n},\qquad |y|\le\epsilon.\nt\ee For the sharpness of the exponential constant, we take $\theta >1$ and estimate \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} \int_{\mathbb{R}^n}\frac{\exp_{\lceil\frac{n}{\alpha}-2\rceil}\big[\theta\gamma(P)|u_\epsilon|^{\frac{n}{n-\alpha}}\big]}{1+|u_\epsilon|^{\frac{ n}{n-\alpha}}}dy&\geq \int_{|y|\le\epsilon}\frac{\exp\big[\theta\gamma(P)|u_\epsilon|^{\frac{n}{n-\alpha}}\big]}{1+|u_\epsilon|^{\frac{ n}{n-\alpha}}}dy\geq C\epsilon^n\frac{\exp\bigg[\theta\log\dfrac{1}{\epsilon^n}+C\bigg]}{1+C\log\dfrac{1}{\epsilon}}\cr&= C\frac{\epsilon^{(1-\theta)n}}{1+C\log\dfrac{1}{\epsilon}}\to\infty \end{aligned}\nt\ee as $\epsilon\to0^+.$\par For the sharpness of the power of the denominator, we take $\theta <1$ and get \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} \int_{\mathbb{R}^n}\frac{\exp_{\lceil\frac{n}{\alpha}-2\rceil}\big[\gamma(P)|u_\epsilon|^{\frac{n}{n-\alpha}}\big]}{1+|u_\epsilon|^{\frac{\theta n}{n-\alpha}}}dy&\geq \int_{|y|\le\epsilon}\frac{\exp\big[\gamma(P)|u_\epsilon|^{\frac{n}{n-\alpha}}\big]}{1+|u_\epsilon|^{\frac{\theta n}{n-\alpha}}}dy\geq C\epsilon^n\frac{\exp\bigg[\log\dfrac{1}{\epsilon^n}+C\bigg]}{1+C(\log\dfrac{1}{\epsilon})^{\theta}}\cr&\geq C(\log\frac{1}{\epsilon})^{-\theta}. \end{aligned}\nt\ee Hence by \eqref{mtshp1} we have that the quotient of the above integral by the norm $||u_\epsilon||_{ q}^{ q}$ goes to infinity as $\epsilon\to0^+$, so the sharpness follows.\par \section{Proof of Theorem \ref{T2}} \section{Proofs of the sharpness statements in Theorem~\ref{T1}}\label{sharpness} \par \ \ \ We will make use of the extremal family of functions constructed in [FM2, Section 6], which the authors used to prove the sharpness of the exponential constants in \eqref{4c} and \eqref{3c}. 
Specifically, under the hypothesis that $K$ is $n$-regular, the authors produced a family of compactly supported functions $\psi_{\epsilon,r}\in L^q(B(0,r))$ such that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \max\{||\psi_{\epsilon,r}||_q^{q}\ ,\ ||T\psi_{\epsilon,r}||_{ q}^{ q}\}\le 1\nt\ee \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned}\begin{aligned} |T{\psi}_{\epsilon,r}(x)|^{q'}&\geq A_g\log{\frac{1}{(\epsilon r)^n}}+b_r\bigg(1-\frac{C}{\log{\frac{1}{\epsilon^n}}}\bigg)-C,\qquad |x|\le \epsilon r/2,\end{aligned}\label{shp15}\ee \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} ||T\psi_{\epsilon,r}||_{ q}^{q}\le Cr^n(\log{\frac{1}{\epsilon^n}})^{-1},\label{normtf2}\ee where \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} b_r:=\int_{1\le |y|\le r }|K(y)|^{q'}dy\nt\ee and \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} 1\le r^{n}\le \frac{A_g}{2C_4}\bigg(\log{\frac{1}{\epsilon^n}}\bigg).\label{shp13}\ee Note that by the assumptions \eqref{A1}, \eqref{A4}, we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} b_r\le A_g\log r^n+C.\label{sp9}\ee Note also that for $\epsilon$ small \eqref{shp15} and \eqref{shp13} imply \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} |T\psi_{\epsilon,r}(x)|\ge 1,\qquad \forall x\in B_{\epsilon r/2}. \label{TF}\ee To prove that the exponential constant sharp, i.e. it cannot be replaced by a larger constant, pick $$r^{n}= \frac{A_g}{2C_4}\bigg(\log{\frac{1}{\epsilon^n}}\bigg)$$ and for any fixed $\theta>1$ estimate \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} &\int_{\{|T\psi_{\epsilon,r}|\ge1\}}\frac{\exp\bigg[\dfrac{\theta}{A_g}|T\psi_{\epsilon,r}(x)|^{\frac{n}{n-\alpha}}\bigg]}{1+|T\psi_{\epsilon,r}(x)|^{\frac{ n}{n-\alpha}}}dx\ge \int_{B_{\epsilon r/2}}\frac{\exp\bigg[\dfrac{\theta}{A_g}|T\psi_{\epsilon,r}(x)|^{\frac{n}{n-\alpha}}\bigg]}{1+|T\psi_{\epsilon,r}(x)|^{\frac{ n}{n-\alpha}}}dx\cr&\qquad \geq |B_{\epsilon r/2}|\frac{\exp\bigg[\theta\log{\dfrac{1}{(\epsilon r)^n}}+\dfrac{\theta b_r}{A_g}\bigg(1-\dfrac{C}{\log{\frac{1}{\epsilon^n}}}\bigg)-\theta C\bigg]}{1+A_g\log{\dfrac{1}{(\epsilon r)^n}}+Cb_r-C}\geq \dfrac{C(\epsilon r)^{-(\theta-1)n}}{\log{\dfrac{1}{(\epsilon r)^n}}}\rightarrow\infty\end{aligned}\label{S17}\ee as $\epsilon\to 0^+$, and where the last inequality is by the estimate of $b_r$ in \eqref{sp9}.\\ Using exponential regularization, Lemma A, we get, for any $\theta>1$ \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned}\lim_{\epsilon\to0^+}\int_{\mathbb{R}^n} \frac{\exp_{\lceil\frac n\alpha -2\rceil}\bigg[\dfrac{\theta}{A_g}|T\psi_{\epsilon,r}(x)|^{\frac{n}{n-\alpha}}\bigg]}{1+|T\psi_{\epsilon,r}(x)|^{\frac{ n}{n-\alpha}}}dx=+\infty\ee which proves the sharpness of the exponential constant. \medskip To show the sharpness of the power of the denominator take $r=1$, so that $b_r=0$. 
For any fixed $\theta<1$ we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} \int_{B_{\epsilon/2}}\frac{\exp\bigg[\dfrac{1}{A_g}|T\psi_{\epsilon,1}(x)|^{\frac{n}{n-\alpha}}\bigg]}{1+|T\psi_{\epsilon,1}(x)|^{\frac{\theta n}{n-\alpha}}}dx&\geq C\epsilon^n\frac{\exp\bigg[\log{\dfrac{1}{\epsilon^n}}-C\bigg]}{1+\Big(A_g\log{\dfrac{1}{\epsilon^n}}-C\Big)^{\theta}}\cr &\geq \frac{C}{1+\Big(\log{\dfrac{1}{\epsilon^n}}\Big)^{\theta}}\geq C\Big(\log{\dfrac{1}{\epsilon^n}}\Big)^{-\theta}.\end{aligned}\label{S18}\ee Therefore by the estimation \eqref{normtf2} on the $ q$-th norm of $T\psi_{\epsilon,1}$ we have, for any $\theta<1$, $$\lim_{\epsilon\to0^+}\|T\psi_{\epsilon,1}\|_q^{-q}\int_{\mathbb{R}^n} \frac{\exp_{\lceil\frac n\alpha -2\rceil}\bigg[\dfrac{1}{A_g}|T\psi_{\epsilon,1}(x)|^{\frac{n}{n-\alpha}}\bigg]}{1+|T\psi_{\epsilon,1}(x)|^{\frac{\theta n}{n-\alpha}}}dx=+\infty.$$ \bigskip \begin{rk}\label{example}{An example where the inequality in Theorem \ref{T1} fails but \eqref{4c} holds.}\end{rk} \ \ \ For an example that Theorem \ref{T1} cannot hold merely under the assumption that $K$ is a Riesz-like kernel, we can take $0<\alpha<\frac{n}{2}$ and let $K\in C^1(\mathbb{R}^n\setminus 0)$ be such that \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} K(x)=\bc |x|^{\alpha-n}\ \ &\text{if}\ |x|\le 1\\ 2|x|^{\alpha-n}\ \ &\text{if}\ |x|\geq 2.\ec\nt\ee Note that we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} 2^{q'}|B_1|\log r^n-C\le b_r\le 2^{q'}|B_1|\log r^n+C.\nt\ee Choose \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} r^{n}=\frac{A_g}{2C_4}\bigg(\log{\frac{1}{\epsilon^n}}\bigg),\label{shp130}\ee which satisfies \eqref{shp13}. Hence we have \begin{equation}}\def\ee{\end{equation}}\def\nt{\notag}\def\bc{\begin{cases}}\def\ec{\end{cases}}\def\ba{\begin{aligned} \begin{aligned} \int_{B_{\epsilon r/2}}\frac{\exp\bigg[\dfrac{1}{|B_1|}|T\psi_{\epsilon,r}(x)|^{\frac{n}{n-\alpha}}\bigg]}{1+|T\psi_{\epsilon,r}(x)|^{\frac{ n}{n-\alpha}}}dx&\geq |B_{\epsilon r/2}|\frac{\exp\bigg[\log{\dfrac{1}{(\epsilon r)^n}}+\dfrac{ b_r}{|B_1|}\bigg(1-\dfrac{C}{\log{\frac{1}{\epsilon^n}}}\bigg)-C\bigg]}{1+C\log{\dfrac{1}{(\epsilon r)^n}}+b_r-C}\cr &\geq C\frac{r^{2^{q'}n}}{1+Cr^n}\to\infty\end{aligned}\label{S20}\ee as $\epsilon\rightarrow 0^+$.\par On the other hand, since $K$ is a Riesz-like kernel, the inequality \eqref{4c} [FM2, Theorem 5] holds under the Ruf condition. \chapter*{Acknowledgements} \begin{abstract} We derive sharp Adams inequalities with exact growth condition for the Riesz potential as well as more general Riesz-like potentials on $\mathbb{R}^n$. We also obtain Moser-Trudinger inequalities with exact growth condition for the fractional Laplacian, and for general homogeneous elliptic differential operators with constant coefficients. \end{abstract} \numberwithin{equation}{section} \input{chapters/introduction1} \input{chapters/background} \input{chapters/3.1} \input{chapters/3.2-1} \input{chapters/3.3} \input{chapters/sharpness1} \input{chapters/FMAdachi} \input{chapters/FMRieszkernel} \input{chapters/MTinq} \input{chapters/appendix2} \input{ly_references} \end{document}
\section{Proofs for Section~\ref{sec:exact_sub}}\label{app:exact_sub} \begin{proof}[Proof of Theorem~\ref{thm:bad_sdp}] We first show that the SDP admits exactly one feasible point. Indeed, let $X \succeq 0$ be feasible for the SDP. Then, constraints $n$ and $n+1$ imply $\ip{A_{n+1} - A_{n}}{X} = 0$. That is, the trace of the principal submatrix of size $n-1$ of $X$ is zero. Since this submatrix is also positive semidefinite, it is zero. Constraints 1 to $n-1$ further show that all entries but $X_{nn}$ are zero. Finally, constraints $n$ and ${n+1}$ force $X_{nn} = \frac{5(n-1)}{3}$. This $X$ has rank~1 and is necessarily optimal. We now show that the proposed $\bar U$ is suboptimal for $L$. To this end, build $\tilde U \in \mathbb{R}^{n\times k}$ with the last row having squared 2-norm equal to $\frac{5(n-1)}{3}$, and all other rows are zero. Clearly, $\tilde U\tilde U^T$ is feasible for the SDP, so that $L(\tilde U) = 0$: this is optimal. On the other hand, $L(\bar U) = \frac{5}{18}(n-1)^2\epsilon^2 > L(\tilde U)$. Finally, we check stationarity of $\bar U$. Let $\mathcal{A} \colon \Sym{n\times n} \to \mathbb{R}^m$ be the linear operator such that $\mathcal{A}(X)_i = \ip{A_i}{X}$, and define the residue function $\mathbf{r}(U) = \mathcal{A}(UU^T) - b$. The cost function and its derivatives take the following forms: \begin{align*} L(U) & = \frac{1}{2} \|\mathbf{r}(U)\|_2^2, \\ \nabla L(U) & = 2\mathcal{A}^*(\mathbf{r}(U))U, \\ \nabla^2 L(U)[\dot U] & = 2\mathcal{A}^*(\mathbf{r}(U))\dot U + 2\mathcal{A}^*(\mathcal{A}(U\dot U^T + \dot U U^T))U. \end{align*} Simple computations show that $\mathcal{A}(\bar U \bar U^T) = (0, \ldots, 0, (n-1)\epsilon, 2(n-1)\epsilon)^T$, so that $\mathcal{A}^*(\mathbf{r}(\bar U)) = -\frac{n-1}{3} \epsilon^2 \cdot e_n^{} e_n^T$: only the bottom-right entry is non-zero. Consequently, $\nabla L(\bar U) = 0$: $\bar U$ is an FOSP To show second-order stationarity, we must also show that $\nabla^2 L(\bar U)$ is positive semidefinite. That is, we must show the inequalities: \begin{align*} 0 \leq \ip{\dot U}{\nabla^2 L(U)[\dot U]} & = 2\ip{\dot U \dot U^T}{\mathcal{A}^*(\mathbf{r}(U))} + \left\|\mathcal{A}(U \dot U^T + \dot U U^T)\right\|_2^2 \end{align*} for all $\dot U \in \mathbb{R}^{n\times k}$. Let \begin{align*} \dot U & = \begin{bmatrix} \textrm{---} &\dot u_1^T & \textrm{---} \\ & \vdots& \\ \textrm{---} &\dot u_n^T& \textrm{---} \end{bmatrix}, \quad \textrm{ with } \quad \dot u_1, \ldots, \dot u_n \in \mathbb{R}^{k} \textrm{ arbitrary.} \end{align*} Then, $\mathcal{A}(\bar U\dot U^T + \dot U\bar U^T) = (2\dot u_n^T, q_1, q_2)^T$ for some values $q_1, q_2$, so that: \begin{align*} \ip{\dot U}{\nabla^2 L(\bar U)[\dot U]} & = -2\frac{n-1}{3} \epsilon^2 \|\dot u_n\|_2^2 + 4 \|\dot u_n\|_2^2 + q_1^2 + q_2^2 \geq \left(4-2\frac{n-1}{3} \epsilon^2 \right) \|\dot u_n\|_2^2. \end{align*} Under our condition on $\epsilon$, this is indeed always nonnegative: $\bar U$ is an SOSP. \end{proof} \section{Proofs for Section~\ref{sec:exact}}\label{app:exact} \begin{proof}[Proof of Lemma~\ref{lem:global}] Necessary and sufficient optimality conditions for~\eqref{eq:prob_fx} are: $\nabla f(X) \succeq 0$ and $\nabla f(X)X = 0$. Let $U$ be an SOSP for~\eqref{eq:prob_fx_U} with $\operatorname{rank}(U) < k$ and define $X = UU^T$. Then, $\nabla g(U) = 2\nabla f(UU^T)U = 0$ and $\nabla^2 g(U) \succeq 0$. The first statement readily shows that $\nabla f(X)X = 0$. 
The Hessians of $f$ and $g$ are related by \begin{align*} \frac{1}{2}\nabla^2 g(U)[\dot U] & = \nabla f(UU^T)\dot U + \nabla^2 f(UU^T)[U\dot U^T + \dot U U^T]U. \end{align*} Since $\operatorname{rank}(U) < k$, there exists a vector $z \in \mathbb{R}^{k}$ such that $Uz = 0$ and $\|z\|_2 = 1$. For any $x \in \mathbb{R}^{n}$, set $\dot U = xz^T$ so that $U\dot U^T + \dot U U^T = 0$. Using second-order stationarity of $U$, we find: \begin{align*} 0 \leq \frac{1}{2}\ip{\dot U}{\nabla^2 g(U)[\dot U]} & = \ip{xz^T}{\nabla f(UU^T)xz^T} = x^T \nabla f(UU^T) x. \end{align*} This holds for all $x \in \mathbb{R}^{n}$, hence $\nabla f(UU^T) \succeq 0$ and $X = UU^T$ is optimal for~\eqref{eq:prob_fx}. Since~\eqref{eq:prob_fx} is a relaxation of~\eqref{eq:prob_fx_U}, it follows that $U$ is optimal for~\eqref{eq:prob_fx_U}. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:rank_deficient}] Let $U$ be any FOSP of~\eqref{eq:penalty_factored} and consider the linear operator $\mathcal{A} \colon \Sym{n\times n} \to \mathbb{R}^{m}$ defined by $\mathcal{A}(X)_i = \ip{A_i}{X}$. By first-order stationarity, we have: \begin{align*} \nabla L_\mu(U) & = 2\left( C + 2\mu \mathcal{A}^*(\mathcal{A}(UU^T) - b) \right)U = 0. \end{align*} Hence, the nullity of $C + 2\mu \mathcal{A}^*(\mathcal{A}(UU^T) - b)$ (the dimension of its kernel) satisfies: \begin{align} \operatorname{rank}(U) \leq \operatorname{null}(C + 2\mu \mathcal{A}^*(\mathcal{A}(UU^T) - b)) \leq \max_{y \in \mathbb{R}^{m}} \operatorname{null}(C + \mathcal{A}^*(y)). \label{eq:maxnulliny} \end{align} The maximum over $y$ is indeed attained since the function $\operatorname{null}$ takes integer values in $0, \ldots, n$. Say the maximum evaluates to $\ell$. Then, for some $y$, $M \triangleq C + \mathcal{A}^*(y)$ has nullity $\ell$. Hence, \begin{align*} C & = M - \mathcal{A}^*(y) \in \mathcal{N}_\ell + \operatorname{im} \mathcal{A}^*, \end{align*} where $\mathcal{N}_\ell$ is the manifold of symmetric matrices of size $n$ and nullity $\ell$, $\operatorname{im} \mathcal{A}^*$ is the range of $\mathcal{A}^*$ and the plus is a set-sum. More generally, assuming the maximum in~\eqref{eq:maxnulliny} is $p$ or more, then \begin{align*} C & \in \mathcal{M}_p \triangleq \bigcup_{\ell = p, \ldots, n} \mathcal{N}_\ell + \operatorname{im} \mathcal{A}^*. \end{align*} The manifold $\mathcal{N}_\ell$ has dimension $\frac{n(n+1)}{2} - \frac{\ell(\ell+1)}{2}$~\citep[Prop.~2.1(i)]{helmke1995matrixlsq}, while $\operatorname{im} \mathcal{A}^*$ has dimension at most $m$. Hence, $$\dim \mathcal{M}_p \leq m + \max_{\ell = p, \ldots, n} \dim \mathcal{N}_\ell = m + \frac{n(n+1)}{2} - \frac{p(p+1)}{2}.$$ Since $C$ is in $\Sym{n\times n}$ and $\dim \Sym{n\times n} = \frac{n(n+1)}{2}$, almost no $C$ lives in $\mathcal{M}_p$ if $\dim\mathcal{M}_p < \dim \Sym{n\times n}$, which is the case if $\frac{p(p+1)}{2} > m$. Stated differently: $\operatorname{rank}(U) \leq p$, and for almost all $C \in \Sym{n\times n}$, $\frac{p(p+1)}{2} \leq m$. To conclude, require that $k$ is strictly larger than any $p$ which satisfies $\frac{p(p+1)}{2} \leq m$. \end{proof} \section{Proofs for Section~\ref{sec:gd}}\label{apdx:gd} \begin{proof}[Proof of Lemma \ref{lem:gd_param}.] We start by showing that the gradient is $l$-Lipschitz continuous. The gradient is given by: $$\nabla {\widehat L_\mu}(U) = \left[2(C+G) + 4\mu\mathcal{A}^*(\mathbf{r})\right]U,$$ where $\mathbf{r} = \mathbf{r}(U) = \mathcal{A}(UU^T) - \mathbf{b}$. 
Hence, for $U_1, U_2 \in \mathbb{R}^{n\times k}$, with notation $\mathbf{r}_1 = \mathbf{r}(U_1), \mathbf{r}_2 = \mathbf{r}(U_2)$, \begin{align*} \norm{ \nabla {\widehat L_\mu}(U_1) -\nabla {\widehat L_\mu}(U_2)}_F & \leq \norm{ 2(C+G)(U_1-U_2)}_F + 4\mu \norm{\mathcal{A}^*(\mathbf{r}_1)U_1 - \mathcal{A}^*(\mathbf{r}_2)U_2 }_F \\ & \leq 2\|C+G\|_2 \| U_1 -U_2\|_F + 4 \mu \norm{\mathcal{A}^*(\mathbf{r}_1) (U_1- U_2) }_F \\ & \quad \quad + 4 \mu \norm{\mathcal{A}^*(\mathbf{r}_1 - \mathbf{r}_2) U_2}_F \\ & \leq \left(2\|C+G\|_2 + 4 \mu \left\| \mathcal{A}^*(\mathbf{r}_1) \right\|_2 \right) \| U_1 -U_2\|_F \\ & \quad \quad + 4 \mu \norm{\mathcal{A}^*(\mathbf{r}_1 - \mathbf{r}_2) U_2}_F. \end{align*} This further simplifies using the norm of $\mathcal{A}$~\eqref{eq:normofA}: $\left\| \mathcal{A}^*(\mathbf{r}_1) \right\|_2 \leq \|\mathcal{A}\| \|\mathbf{r}_1\|_2$ and $\|\mathbf{r}_1\|_2 \leq \|\mathcal{A}\| \|U_1\|_F^2 + \|\mathbf{b}\|_2$, so that if $\|U_1\|_F \leq \tau$: \begin{align*} \left\| \mathcal{A}^*(\mathbf{r}_1) \right\|_2 & \leq (\tau^2\|\mathcal{A}\|+\|\mathbf{b}\|_2)\|\mathcal{A}\|. \end{align*} Similarly, using $\|U_2\|_F \leq \tau$ as well: \begin{align} \norm{\mathcal{A}^*(\mathbf{r}_1 - \mathbf{r}_2) U_2}_F & \leq \|\mathcal{A}^*(\mathcal{A}(U_1^{}U_1^T - U_2^{}U_2^T))\|_2 \|U_2\|_F\nonumber\\ & \leq \tau \|\mathcal{A}\|^2 \|U_1^{}U_1^T - U_2^{}U_2^T\|_F \nonumber\\ & = \tau \|\mathcal{A}\|^2 \|U_1^{}U_1^T - U_1^{}U_2^T + U_1^{}U_2^T - U_2^{}U_2^T\|_F \nonumber\\ & \leq \tau \|\mathcal{A}\|^2 \left( \|U_1^{}(U_1 - U_2)^T \|_F + \|(U_1^{} - U_2^{})U_2^T\|_F \right) \nonumber\\ & \leq 2\tau^2 \|\mathcal{A}\|^2 \|U_1 - U_2\|_F. \label{eq:Astarr1r2} \end{align} Combining, we find \begin{align*} \norm{ \nabla {\widehat L_\mu}(U_1) -\nabla {\widehat L_\mu}(U_2)}_F & \leq \left(2\|C+G\|_2 + 4 \mu \|\mathcal{A}\| (\tau^2\|\mathcal{A}\|+\|\mathbf{b}\|_2) \right) \| U_1 -U_2\|_F \\ & \quad \quad + 8 \mu \tau^2 \|\mathcal{A}\|^2 \|U_1 - U_2\|_F, \end{align*} which establishes the Lipschitz constant for $\nabla {\widehat L_\mu}$. We now show that the Hessian is $\rho$-Lipschitz continuous in operator norm, that is, we must show that for any $U_1$ and $U_2$ with norms bounded by $\tau$, \begin{align*} \underset{\|{\dot U}\|_F \leq 1}{\max}\ip{\nabla^2 {\widehat L_\mu}(U_1)[{\dot U}] -\nabla^2 {\widehat L_\mu}(U_2)[{\dot U}]}{{\dot U}} \leq \rho \|U_1 -U_2\|_F. \end{align*} Recall from~\eqref{eq:HessianLmuip} that \begin{align*} \ip{\nabla^2 {\widehat L_\mu}(U)[{\dot U}]}{{\dot U}} = 2\ip{C+G+2\mu \mathcal{A}^*(\mathbf{r})}{{\dot U}{\dot U}^T} + 2\mu \|\mathcal{A}(U{\dot U}^T+{\dot U}U^T)\|_2^2. \end{align*} Hence, \begin{multline*} \ip{\nabla^2 {\widehat L_\mu}(U_1)[{\dot U}]}{{\dot U}} -\ip{\nabla^2 {\widehat L_\mu}(U_2)[{\dot U}]}{{\dot U}} \\ = 4\mu\ip{\mathcal{A}^*(\mathbf{r}_1 - \mathbf{r}_2)}{{\dot U}{\dot U}^T} + 2\mu \left(\|\mathcal{A}(U_1{\dot U}^T+{\dot U}U_1^T)\|_2^2 - \|\mathcal{A}(U_2{\dot U}^T+{\dot U}U_2^T)\|_2^2\right). \end{multline*} On one hand, following the same reasoning as in~\eqref{eq:Astarr1r2}, we have \begin{align*} \ip{\mathcal{A}^*(\mathbf{r}_1 - \mathbf{r}_2)}{{\dot U}{\dot U}^T} & \leq \|\mathcal{A}^*(\mathbf{r}_1 - \mathbf{r}_2)\|_F \|{\dot U}{\dot U}^T\|_F \\ & \leq 2\tau \|\mathcal{A}\|^2 \|U_1 - U_2\|_F \|\dot U\|_F^2. 
\end{align*} On the other hand, using that for any two vectors $u, v$ we have \begin{align*} \|u\|_2^2 - \|v\|_2^2 = \ip{u+v}{u-v} \leq \|u+v\|_2 \|u-v\|_2 \leq (\|u\|_2 + \|v\|_2)\|u - v\|_2, \end{align*} we can find: \begin{align*} \|\mathcal{A}(U_1{\dot U}^T+{\dot U}U_1^T)\|_2^2 - \|\mathcal{A}(U_2{\dot U}^T+{\dot U}U_2^T)\|_2^2 & \leq 4 \tau \|\mathcal{A}\|^2 \|U_1 - U_2\|_F \|\dot U\|_F^2. \end{align*} For this, we used $\|\mathcal{A}(U\dot U^T + \dot U U^T)\|_2 \leq \|\mathcal{A}\| \|U\dot U^T + \dot U U^T\|_F \leq \tau \|\mathcal{A}\|\|\dot U\|_F$ when $\|U\|_F \leq \tau$ and \begin{align*} \|\mathcal{A}(U_1{\dot U}^T+{\dot U}U_1^T - U_2{\dot U}^T - {\dot U}U_2^T)\|_2 & \leq \|\mathcal{A}\| \left( \|(U_1-U_2)\dot U^T\|_F + \|\dot U(U_1-U_2)^T\|_F \right) \\ & \leq 2\|\mathcal{A}\|\|\dot U\|_F \|U_1 - U_2\|_F. \end{align*} Overall, this shows $\rho = 16\mu \tau \|\mathcal{A}\|^2$ is an appropriate Lipschitz constant. \end{proof} \section{Proof of Lemma~\ref{lem:eigenvalue_main}: lower-bound for smallest singular values}\label{apdx:proofNguyen} First we state a special case of Corollary 1.17 from~\citep{nguyen2017repulsion}. Let $N_I(X) $, denote the number of eigenvalues of $X$ in the interval $I$. \begin{corollary}\label{cor:Nguyen} Let $M'$ be a deterministic symmetric matrix in $\Sym{n\times n}$. Let $G'$ be a random symmetric matrix with entries $G'_{ij}$ drawn i.i.d.\ from $\mathcal{N}(0,1)$ for $i \geq j$ (in particular, independent of $M'$.) Then, for given $0 < \gamma < 1$, there exists a constant $c = c(\gamma)$ such that for any $\epsilon > 0$ and $k \geq 1$, with $I$ being the interval, $[-\frac{\epsilon k}{\sqrt{n}}, \frac{\epsilon k}{\sqrt{n}}]$, $$ \Pr{N_I(M'+G') \geq k } \leq n^k \left(\frac{c\epsilon}{\sqrt{2\pi}}\right)^{(1-\gamma)k^2/2}. $$ \end{corollary} We can use the above corollary to prove Lemma~\ref{lem:eigenvalue_main}. \begin{proof} In our case, entries of $G$ have variance $\sigma_G^2$. Thus, set $G = \sigma_G G'$, and set $\bar M = \sigma_G M'$. From Corollary~\ref{cor:Nguyen}, we get \begin{align*} N_{\sigma_G I}(\bar M+G) = N_I(M'+G') < k \end{align*} with probability at least $1 - n^k \left(\frac{c\epsilon}{\sqrt{2\pi}}\right)^{(1-\gamma)k^2/2}$. In this event, $\sigma_{n-(k-1)}(\bar M+G) \geq \frac{\epsilon k}{\sqrt{n}} \sigma_G$. Choose $\gamma =\frac{1}{2}$, and $\epsilon = \frac{1}{2 c}$. Substituting this we get with probability at least $1 - \exp\left( - \frac{k^2}{8} \log( 8 \pi) + k \log (n)\right)$ that $$ \sigma_{n-(k-1)}(\bar M+G) \geq \frac{k}{2c \sqrt{n}}\sigma_G. $$ Hence, $\sum_{i=1}^{k} \sigma_{n-(i-1)}\left(\bar M+G\right)^2 \geq \sigma_{n-(k-1)}\left(\bar M+G\right)^2 \geq \frac{k^2}{c_0 n} \sigma_G^2$, for some absolute constant $c_0 = 4c^2$. \end{proof} \section{Proofs for Section~\ref{sec:smooth}}\label{app:smooth} \begin{proof}[Proof of Lemma \ref{lem:compact_optimal_approx}] The gradient and Hessian of $L_\mu$~\eqref{eq:penalty_factored}, with $\mathbf{r} \triangleq \mathbf{r}(U) = \mathcal{A}(UU^T)-b$, are: \begin{align} \nabla L_\mu(U) & = 2\left( C + 2\mu\mathcal{A}^*(\mathbf{r}) \right)U,\label{eq:gradLmu}\\ \nabla^2 L_\mu(U)[\dot U] & = 2\left( C + 2\mu\mathcal{A}^*(\mathbf{r}) \right)\dot U + 4\mu\mathcal{A}^*(\mathcal{A}(\dot U U^T + U\dot U^T))U. 
\label{eq:HessianLmu} \end{align} Since $U$ is an $(\epsilon, \gamma)$-SOSP, it holds for all $\dot U \in \mathbb{R}^{n\times k}$ with $\|\dot U\|_F = 1$ that: \begin{align} -\frac{\gamma\sqrt{\epsilon}}{2} & \leq \frac{1}{2} \ip{\dot U}{\nabla^2 L_\mu(U)[\dot U]} = \ip{C + 2\mu\mathcal{A}^*(\mathbf{r})}{\dot U \dot U^T} + \mu \left\|\mathcal{A}(\dot U U^T + U \dot U^T)\right\|_2^2. \label{eq:HessianLmuip} \end{align} We now construct specific $\dot U$'s to exploit the fact that $U$ is almost rank deficient. Let $z \in \mathbb{R}^k$ be a right singular vector of $U$ such that $\|Uz\|_2 =\sigma_k(U)$ (that is, $z$ is associated to the least singular value of $U$ and $\|z\|_2 = 1$.) For any $x \in \mathbb{R}^{n}$ with $\|x\|_2 = 1$, introduce $\dot U = xz^T$ in~\eqref{eq:HessianLmuip}: \begin{align*} -\frac{\gamma\sqrt{\epsilon}}{2} & \leq x^T(C + 2\mu\mathcal{A}^*(\mathbf{r}))x + \mu \left\|\mathcal{A}(\dot U U^T + U \dot U^T)\right\|_2^2. \end{align*} The last term is easily controlled: \begin{align*} \left\|\mathcal{A}(\dot U U^T + U \dot U^T)\right\|_2 \leq 2 \|\mathcal{A}\| \|U\dot U^T\|_F = 2 \|\mathcal{A}\| \|Uzx^T\|_F \leq 2 \|\mathcal{A}\|\|Uz\|_2 \|x\|_2 = 2 \|\mathcal{A}\|\sigma_k(U). \end{align*} Let $x$ be an eigenvector of $C + 2\mu\mathcal{A}^*(\mathbf{r})$ associated to its least eigenvalue and combine the last two statements together with the assumption on $\sigma_k(U)$ to find: \begin{align} \lambda_{\min}(C + 2\mu\mathcal{A}^*(\mathbf{r})) \geq -\frac{\gamma\sqrt{\epsilon}}{2} - 4\mu\|\mathcal{A}\|^2\sigma_k^2(U) \geq -\gamma\sqrt{\epsilon}. \label{eq:lambdaminC2mu} \end{align} This inequality is key to bound the optimality gap. For this part, we rely on the fact that $L_\mu(U) = F_\mu(UU^T)$ and $F_\mu$ is convex on $\Sym{n\times n}$~\eqref{eq:penalty_sdp}. Specifically, let $\tilde X$ be a global optimum for $F_\mu$ (assuming it exists), and set $X = UU^T$. Then, $\nabla F_\mu(X) = C + 2\mu\mathcal{A}^*(\mathbf{r}), \nabla L_\mu(U) = 2\nabla F_\mu(X)U$ and: \begin{align*} F_\mu(\tilde X) - F_\mu(X) & \geq \ip{\nabla F_\mu(X)}{\tilde X - X} = \ip{C+2\mu\mathcal{A}^*(\mathbf{r})}{\tilde X} - \frac{1}{2}\ip{\nabla L_\mu(U)}{U} \\ & \geq -\gamma \sqrt{\epsilon} \operatorname{Tr}(\tilde X) - \frac{1}{2}\epsilon \|U\|_F. \end{align*} In the last step, we used~\eqref{eq:lambdaminC2mu} as well as approximate first-order stationarity . \end{proof} \begin{proof}[Proof of Proposition \ref{prop:compactSDPconsraintA0}] One direction is elementary: if there exists $A_0 \succ 0$ and $b_0 \geq 0$ such that $\ip{A_0}{X} = b_0$ for all $X \in \mathcal{C}$, then, \begin{align*} \forall X \in \mathcal{C}, \quad \trace{X} = \ip{I_n}{X} \leq \lambda_{\min}(A_0)^{-1} \ip{A_0}{X} = \lambda_{\min}(A_0)^{-1} b_0. \end{align*} Thus, the trace of $X \succeq 0$ is bounded, and it follows that $\mathcal{C}$ is compact. Furthermore: if $b_0 = 0$, then $\mathcal{C} = \{0\}$; and if $b_0 > 0$, then $0 \notin \mathcal{C}$. To prove the other direction, assume $\mathcal{C}$ is non-empty and compact. If $\mathcal{C} = \{0\}$, let $A_0 = I_n, b_0 = 0$. Now assume $\mathcal{C} \neq \{0\}$. The SDP comes in a primal-dual pair: \begin{align*} \min_{X\in\Sym{n\times n}} \ip{C}{X} \quad & \textrm{ s.t. } \quad \mathcal{A}(X) = b, \ X \succeq 0, \tag{P} \label{eq:proofP}\\ \max_{y\in\mathbb{R}^m} \ip{b}{y} \quad & \textrm{ s.t. } \quad C \succeq \mathcal{A}^*(y). 
\tag{D} \label{eq:proofD} \end{align*} It is well known that if~\eqref{eq:proofD} is infeasible, then~\eqref{eq:proofP} is unbounded or infeasible~\citep[Thm.~4.1(a)]{wolkowicz1981optimization}. Since we assume $\mathcal{C}$ is non-empty, this simplifies to: if~\eqref{eq:proofD} is infeasible, then~\eqref{eq:proofP} is unbounded. The contrapositive states: if~\eqref{eq:proofP} is bounded, then~\eqref{eq:proofD} is feasible. By our compactness assumption on $\mathcal{C}$, we know that~\eqref{eq:proofP} is bounded for all $C \in \Sym{n\times n}$. Thus,~\eqref{eq:proofD} is feasible for any $C$. In particular, take $C = -I_n$: there exists $-y \in \mathbb{R}^m$ such that $A_0 \triangleq \mathcal{A}^*(y) \succeq I_n$. Furthermore, \begin{align*} \forall X \in \mathcal{C}, \quad \ip{A_0}{X} = \ip{\mathcal{A}^*(y)}{X} = \ip{y}{\mathcal{A}(X)} = \ip{y}{b} \triangleq b_0. \end{align*} Since there exists $X \neq 0$ in $\mathcal{C}$, it follows that $b_0 > 0$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:compact_eps_fosp}] Using~\eqref{eq:gradLmu}, $U$ is an $\epsilon$-FOSP of the perturbed problem if and only if $\| (M+G)U\|_F \leq \frac{\epsilon}{2}$, where $M = C + 2\mu\mathcal{A}^*(\mathbf{r})$. Let $U = P \Sigma Q^T$ be a thin SVD of $U$ ($P$ is $n\times k$ with orthonormal columns; $Q$ is $k\times k$ orthogonal). Then, \begin{align*} \| (M+G)U\|_F & = \| (M+G) P \Sigma \|_F \\ & \geq \sigma_k(U)\| (M+G) P\|_F \\ & \geq \sigma_k(U) \sqrt{\sum_{i=1}^k \sigma_{n-(i-1)}(M+G)^2}. \end{align*} Hence, we control the smallest singular value of $U$ in terms of $\epsilon$ and the $k$ smallest singular values of $M+G$: \begin{align} \sigma_k(U) & \leq \frac{\epsilon}{2\sqrt{\sum_{i=1}^{k}\sigma_{n-(i-1)}(M+G)^2}}. \label{eq:keyboundsigmak} \end{align} The next lemma helps lower-bound the denominator---it follows from Theorem 1.16 and Corollary 1.17 in \citep{nguyen2017repulsion}; see proof in Appendix~\ref{apdx:proofNguyen}. \begin{lemma}\label{lem:eigenvalue_main} Let $\bar M$ be a fixed symmetric matrix of size $n$. Let $G$ be a symmetric Gaussian matrix of size $n$, independent of $\bar M$, with diagonal and upper-triangular entries sampled independently from $\mathcal{N}(0,\sigma_G^2)$. There exists an absolute constant $c_0$ such that: \begin{align*} \Pr{\sum_{i=1}^{k} \sigma_{n-(i-1)}\left(\bar M+G\right)^2 < \frac{k^2}{c_0 n} \sigma_G^2 } \leq \exp\left( - \frac{k^2}{8} \log(8 \pi) + k \log (n)\right). \end{align*} \end{lemma} We cannot use Lemma~\ref{lem:eigenvalue_main} directly, as in our case $M$ is not statistically independent of $G$. Indeed, $M$ depends on $U$ through the residue $\mathbf{r} = \mathbf{r}(U)$ and $U$ is an $\epsilon$-FOSP: a feature that depends on $G$. To resolve this, we cover the set of possible $M$'s with a net, under the assumption that $\mathbf{r}$ is bounded. Lemma~\ref{lem:eigenvalue_main} provides a bound for each $\bar M$ in this net. This can be extended to hold for all $\bar M$'s in the net simultaneously via a union bound. By taking a sufficiently dense net, we can then infer that $M$ is necessarily close to one of these $\bar M$'s, and conclude. Let $\mathcal{E}$ be the event (on $G$) that $\|\mathbf{r}\|_2 \leq B$ for all $\epsilon$-FOSPs of the perturbed problem. Conditioned on $\mathcal{E}$, we have \begin{align*} \|M - C\|_F & = 2\mu\|\mathcal{A}^*(\mathbf{r})\|_F \leq 2\mu B \|\mathcal{A}\|, \end{align*} where $\|\mathcal{A}\|$ is defined in~\eqref{eq:normofA}. 
As a result, $M$ lies in a ball of center $C$ and radius $2 \mu B\|\mathcal{A}\|$ in an affine subspace of dimension $\operatorname{rank}(\mathcal{A})$. A unit-ball in Frobenius norm in $d$ dimensions admits an $\varepsilon$-net of $(1+2/\varepsilon)^d$ points~\citep[Cor.~4.2.13]{vershynin2016high}. Thus, we can pick a net with $\left( 1 + \frac{4 \mu B\|\mathcal{A}\|}{\sigma_G} \sqrt{\frac{4c_0 n}{k^2}} \right)^{\operatorname{rank}(\mathcal{A})}$ points in such a way that, independently of $\mathbf{r}$, there exists a point $\bar M$ in the net satisfying: \begin{align} \| \bar M - M \|_F \leq \sqrt{\frac{k^2}{4c_0 n}} \sigma_G = \frac{k}{2\sqrt{c_0 n}}\sigma_G. \label{eq:epscover} \end{align} Let $T \colon \Sym{n\times n} \to \mathbb{R}^{k}$ be defined by $T_q(A) = (\sigma_{n-q+1}(A), \ldots, \sigma_n(A))^T$, that is: $T$ extracts the $q$ smallest singular values of $A$, in order. Then, \begin{align*} \| \bar M - M \|_F & = \| (\bar M+G) - (M+G) \|_F \\ & \geq \| T_n(\bar M+G) - T_n(M + G) \|_2 \\ & \geq \| T_k(\bar M+G) - T_k(M + G) \|_2 \\ & \geq \| T_k(\bar M+G) \|_2 - \| T_k(M + G) \|_2, \end{align*} where the first inequality follows from~\citep[Ex.~IV.3.5]{bhatia2013matrix}. Hence, \begin{align} \sqrt{\sum_{i=1}^{k}\sigma_{n-(i-1)}(M+G)^2} \geq \sqrt{\sum_{i=1}^{k}\sigma_{n-(i-1)}(\bar M+G)^2} - \| \bar M - M \|_F. \label{eq:foo} \end{align} Now, taking a union bound for $\mathcal{E}$ and for Lemma~\ref{lem:eigenvalue_main} over each $\bar M$ in the net, we get~\eqref{eq:epscover} and \begin{align} \sqrt{ \sum_{i=1}^{k} \sigma_{n-(i-1)}\left(\bar M+G\right)^2 } \geq \frac{k}{\sqrt{c_0 n}} \sigma_G \label{eq:bar} \end{align} with probability at least $$ 1 - \exp \left( - \frac{k^2}{8}\log(8\pi) + k\log(n) + \operatorname{rank}(\mathcal{A}) \cdot \log\left( 1 + \frac{4 \mu B\|\mathcal{A}\|}{\sigma_G} \sqrt{\frac{4c_0 n}{k^2}} \right) \right) - \delta. $$ Inside the log, we can safely replace $k$ with 1, as this only hurts the probability. Then, the result holds with probability at least $$ 1 - \exp \left( - \frac{k^2}{8}\log(8\pi) + k\log(n) + \operatorname{rank}(\mathcal{A}) \cdot \log\left( 1 + \frac{8 \mu B\|\mathcal{A}\|}{\sigma_G} \sqrt{c_0 n} \right) \right) - \delta. $$ We aim to pick $k$ so as to ensure \begin{align*} \exp \left( - \frac{k^2}{8}\log(8\pi) + k\log(n) + \operatorname{rank}(\mathcal{A}) \cdot \log\left( 1 + \frac{8 \mu B\|\mathcal{A}\|}{\sigma_G} \sqrt{c_0 n} \right) \right) \leq \delta'. \end{align*} This is a quadratic condition of the form \begin{align*} -ak^2 + bk + c \leq \log(\delta') \end{align*} for some $a, b > 0$, $c \geq 0$. Since $k$ is positive we get, $k \geq \frac{b+\sqrt{a (c+\log(1/\delta'))}}{a}$, which is satisfied for, $$ k \geq 3\left[\log\left(\frac{n}{\delta'}\right) + \sqrt{ \operatorname{rank}(\mathcal{A}) \log\left( 1 + \frac{8 \mu B\|\mathcal{A}\| \sqrt{c_0 n}}{\sigma_G } \right) } \right].$$ Combining~\eqref{eq:keyboundsigmak},~\eqref{eq:epscover},~\eqref{eq:foo} and~\eqref{eq:bar}, we find: \begin{align*} \sigma_k(U) \leq \frac{ \epsilon}{\sigma_G } \frac{2\sqrt{c_0 n}}{k} \end{align*} with probability at least $1-\delta-\delta'$. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:residues_compact}] If $U = 0$, the bounds clearly hold: assume $U \neq 0$ in what follows. Using $\nabla \tilde L_\mu(U) = 2( C + 2\mu\tilde \mathcal{A}^*(\tilde \mathbf{r}) )U$, the definition of $\epsilon$-FOSP reads: \begin{align*} \frac{\epsilon}{2} \geq \left\| \left( C + 2\mu\tilde \mathcal{A}^*(\tilde \mathbf{r}) \right) U \right\|_F. 
\end{align*} Combining this with $\|A\|_F \geq \frac{1}{\|B\|_F} \ip{A}{B}$ for $B\neq 0$ (Cauchy--Schwarz) gives: \begin{align*} \frac{\epsilon}{2} \geq \frac{1}{\|U\|_F} \ip{\left( C + 2\mu\tilde \mathcal{A}^*(\tilde \mathbf{r}) \right) U}{U}. \end{align*} This can be further developed as: \begin{align} \frac{\epsilon \|U\|_F}{2} & \geq \ip{C + 2\mu\tilde \mathcal{A}^*(\tilde \mathbf{r})}{UU^T} \nonumber\\ & = \ip{C}{UU^T} + 2\mu \ip{\tilde\mathbf{r}}{\tilde\mathcal{A}(UU^T)} \nonumber\\ & = \ip{C}{UU^T} + 2\mu \ip{\tilde \mathcal{A}(UU^T) - \tilde \mathbf{b}}{\tilde \mathcal{A}(UU^T)}. \label{eq:residueboundintermediate} \end{align} At this point, we separate the constraint $(A_0, b_0)$ from the rest, using the usual definition for $(\mathcal{A}, \mathbf{b})$ which capture constraints $1, \ldots, m$: \begin{align*} \frac{\epsilon \|U\|_F}{2} & \geq \ip{C}{UU^T}+2\mu\left( \ip{\mathcal{A}(UU^T)-\mathbf{b}}{\mathcal{A}(UU^T)}+ \left(\ip{A_0}{UU^T}-b_0\right)\ip{A_0}{UU^T} \right) \\ & \geq \ip{C}{UU^T} + 2\mu\left( \|\mathcal{A}(UU^T)\|_2^2-\|\mathbf{b}\|_2 \|\mathcal{A}(UU^T)\|_2 + \left(\ip{A_0}{UU^T}-b_0\right)\ip{A_0}{UU^T} \right). \end{align*} Let $y =\|\mathcal{A}(UU^T)\|_2$. Then the above inequality holds when $$ y^2 -\|\mathbf{b}\|_2 y +\frac{1}{2\mu}\left( \ip{C}{UU^T}-\frac{\epsilon \|U\|_F}{2} \right)+ \left(\ip{A_0}{UU^T}-b_0\right)\ip{A_0}{UU^T} \leq 0. $$ For this to happen we need the above quadratic to have real roots. This requires: \begin{align*} \frac{1}{4}\|\mathbf{b}\|_2^2 & \geq \frac{1}{2\mu}\left( \ip{C}{UU^T}-\frac{\epsilon \|U\|_F}{2} \right)+ (\ip{A_0}{UU^T}-b_0)\ip{A_0}{UU^T} \\ & \geq\frac{1}{2\mu}\left( -\|CU\|_F \|U\|_F-\frac{\epsilon \|U\|_F}{2} \right)+ \lambda_{\min}(A_0)^2 \|U\|_F^4-b_0 \lambda_{\max}(A_0) \|U\|_F^2 \\ & \geq \lambda_{\min}(A_0)^2 \|U\|_F^4 -\frac{\|C\|_2}{2\mu}\|U\|_F^2 - b_0 \lambda_{\max}(A_0) \|U\|_F^2 - \frac{\epsilon}{4\mu}\|U\|_F, \end{align*} where we used that for any two matrices $A$ and $B$, it holds that $\|AB\|_F \leq \|A\|_2 \|B\|_F$. Focus on the last two terms of the last inequality. We distinguish two cases. Either \begin{align*} b_0 \lambda_{\max}(A_0) \|U\|_F^2 + \frac{\epsilon}{4\mu}\|U\|_F \geq \frac{3}{2}b_0 \lambda_{\max}(A_0)\|U\|_F^2, \end{align*} in which case $\|U\|_F \leq \frac{\epsilon}{2\mu b_0 \lambda_{\max}(A_0)}$ (assuming $b_0 > 0$). Or the opposite holds, and: \begin{align*} \frac{1}{4} \|\mathbf{b}\|_2^2 & \geq \lambda_{\min}(A_0)^2 \|U\|_F^4 - \left(\frac{\|C\|_2}{2\mu} + \frac{3}{2}b_0 \lambda_{\max}(A_0)\right)\|U\|_F^2. \end{align*} This is a quadratic inequality in $y = \|U\|_F^2$ of the form $ay^2 - by - c \leq 0$ with coefficients $a> 0$ and $b, c \geq 0$. Such a quadratic always has at least one real root, so that $y \leq \frac{b + \sqrt{b^2 + 4ac}}{2a}$. Furthermore, $\sqrt{b^2 + 4ac} \leq \sqrt{b^2 + (\sqrt{4ac})^2 + 2b\sqrt{4ac}} = b + \sqrt{4ac}$. Hence, $y \leq \frac{b}{a} + \sqrt{\frac{c}{a}}$, which means: \begin{align*} \|U\|_F^2 & \leq \frac{1}{\lambda_{\min}(A_0)^2} \left(\frac{\|C\|_2}{2\mu} + \frac{3}{2}b_0 \lambda_{\max}(A_0)\right) + \frac{\|\mathbf{b}\|_2}{2\lambda_{\min}(A_0)}. \end{align*} Accounting for the two distinguished cases, we find: \begin{align*} \|U\|_F^2 & \leq \max\left\{ \left(\frac{\epsilon}{2\mu b_0 \lambda_{\max}(A_0)}\right)^2, \frac{1}{\lambda_{\min}(A_0)^2} \left(\frac{\|C\|_2}{2\mu} + \frac{3}{2}b_0 \lambda_{\max}(A_0)\right) + \frac{\|\mathbf{b}\|_2}{2\lambda_{\min}(A_0)} \right\}. 
\end{align*} We now bound the residues (generically) in terms of $\|U\|_F$, using submultiplicativity for $\|UU^T\|_F \leq \|U\|_F^2$ and the definition of $\|\mathcal{A}\|$~\eqref{eq:normofA}: \begin{align*} \|\mathbf{r}\|_2 = \| \mathcal{A}(UU^T) - \mathbf{b} \|_2 \leq \|\mathcal{A}\| \|UU^T\|_F + \|\mathbf{b}\|_2 \leq \|\mathcal{A}\| \|U\|_F^2 + \|\mathbf{b}\|_2. \end{align*} Evidently, the same bound holds for $\tilde\mathcal{A}, \tilde \mathbf{b}, \tilde \mathbf{r}$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:optimal_approx_compact}] By Lemma~\ref{lem:residues_compact}, for a problem perturbed with $G$, the residues of all $\epsilon$-FOSPs, $\|\tilde \mathbf{r}\|_2$, are bounded as: \begin{align*} \|\tilde \mathcal{A}\| \max\left\{ \left(\frac{\epsilon}{2\mu b_0 \lambda_{\max}(A_0)}\right)^2, \frac{1}{\lambda_{\min}(A_0)^2} \left(\frac{\|C+G\|_2}{2\mu} + \frac{3}{2}b_0 \lambda_{\max}(A_0)\right) + \frac{\|\mathbf{b}\|_2}{2\lambda_{\min}(A_0)} \right\} + \|\tilde \mathbf{b}\|_2 \end{align*} With probability at least $1 - \delta$, $\|C + G\|_2 \leq \|C\|_2 + 3\sigma_G\left(\sqrt{n} + \sqrt{2\log(1/\delta)}\right)$. Hence, Theorem~\ref{thm:compact_eps_fosp} applies with this $\delta$ and \begin{align*} B = \|\tilde \mathcal{A}\| \max\left\{ \left(\frac{\epsilon}{2\mu b_0 \lambda_{\max}(A_0)}\right)^2, \frac{1}{\lambda_{\min}(A_0)^2} \left(\frac{\|C\|_2 + 3\sigma_G\sqrt{n}}{2\mu} + \frac{3}{2}b_0 \lambda_{\max}(A_0)\right) + \frac{\|\mathbf{b}\|_2}{2\lambda_{\min}(A_0)} \right\} + \|\tilde \mathbf{b}\|_2. \end{align*} Hence, with $k$ as prescribed in that theorem for a given $\delta' =\delta \in (0, 1)$, with probability at least $1 - 2\delta$, it holds that \begin{align*} \sigma_k(U) \leq \frac{ 2\epsilon}{\sigma_G } \frac{\sqrt{c_0 n}}{k} \end{align*} for any $\epsilon$-FOSP. Lemma~\ref{lem:compact_optimal_approx} requires $\sigma_k^2(U) \leq \frac{\gamma \sqrt{\epsilon}}{8 \mu\|\mathcal{A}\|^2}$. Hence, we choose: $\epsilon \leq \left(\frac{\gamma k^2 \sigma_G^2 }{ 32c_0 n \mu \|\mathcal{A}\|^2}\right)^{\sfrac{2}{3}}$, and with probability at least $1-2\delta$ hypothesis of Lemma \ref{lem:compact_optimal_approx} is satisfied. Let $\tilde X$ be a global optimum for $\tilde F_\mu$, then the optimality gap obeys: \begin{align*} \tilde F_\mu(UU^T) - \tilde F_\mu(\tilde X) & \leq \gamma \sqrt{\epsilon} \operatorname{Tr}(\tilde X) + \frac{1}{2}\epsilon \|U\|_F. \end{align*} \end{proof} \subsection{Proof of section 4.2} \begin{proof}[Proof of Lemma \ref{lem:residues}] With probability at least $1-\delta$, $\sigma_1(G) \leq 3\sigma_G\sqrt{n} $. In that event, for $\sigma_G \leq \frac{\lambda_{n}(C)}{6\sqrt{n \log(n/\delta)}}$, we have $C+G \succeq \frac{\lambda_{n}(C)}{2}I$. $U$ is an $\epsilon$-FOSP of \eqref{eq:smoothed} implies $\|2(C+G+2\mu \mathcal{A}^*(\mathbf{r}))U\|_F \leq \epsilon$. \begin{align*} \frac{\epsilon}{2} &\geq \left\| \left(C+G+2\mu \mathcal{A}^*(\mathbf{r}) \right)U \right\|_F \\ &\geq \frac{1}{\|U\|_F} \ip{C+G+2\mu \mathcal{A}^*(\mathbf{r})}{UU^T} . \end{align*} Hence, \begin{align*} \frac{\epsilon \|U\|_F}{2} &\geq \ip{C+G}{UU^T} + 2\mu\ip{ \mathcal{A}^*(\mathbf{r})}{UU^T} \\ &\geq \frac{\lambda_n(C)}{2}\|U\|_F^2+2\mu\ip{\mathbf{r}}{\mathcal{A}(UU^T)} \\ &\geq \frac{\lambda_n(C)}{2}\|U\|_F^2+2\mu( \|\mathcal{A}(UU^T)\|_2^2 - \|\mathbf{b}\|_2\|\mathcal{A}(UU^T)\|_2). 
\end{align*} The above inequality is a quadratic in $y=\|\mathcal{A}(UU^T)\|_2$: $y^2 -y \|\mathbf{b}\|_2 + \frac{1}{2\mu} \left(\frac{\lambda_n(C)}{2}\|U\|_F^2 -\frac{\epsilon \|U\|_F}{2}\right) \leq 0$. If $\frac{\epsilon \|U\|_F}{2} \geq \frac{\lambda_n(C)}{4}\|U\|_F^2$, then $\|U\| _F \leq \frac{2\epsilon} {\lambda_n(C)}$. Else, for the above inequality to hold we need the quadratic to have real roots. \begin{align*} \|\mathbf{b}\|_2^2 &\geq 4 \cdot 1 \cdot \frac{1}{2\mu} \left( \frac{\lambda_n(C)}{2}\|U\|_F^2 -\frac{\epsilon \|U\|_F}{2} \right) \\ &\geq \frac{2}{\mu} \frac{\lambda_n(C)}{4}\|U\|_F^2. \end{align*} The last inequality follows from $\frac{\epsilon \|U\|_F}{2} \leq \frac{\lambda_n(C)}{4}\|U\|_F^2$. Hence, $\|U\|_F^2 \leq \max \left \{ \left( \frac{2\epsilon} {\lambda_n(C)}\right)^2, \frac{2\mu}{\lambda_n(C)} \|\mathbf{b}\|_2^2 \right \}$. Hence, \begin{multline*} \|\mathbf{r}\|_2 = \| \mathcal{A}(UU^T) -\mathbf{b}\|_2 \leq \|\mathcal{A}(UU^T)\|_2 +\|\mathbf{b}\|_2 \leq \|\mathcal{A}\| \|UU^T\|_F +\|\mathbf{b}\|_2 \\ \leq \|\mathcal{A}\| \max \left \{ \left( \frac{2\epsilon} {\lambda_n(C)}\right)^2, \frac{2\mu}{\lambda_n(C)} \|\mathbf{b}\|_2^2 \right \}+\|\mathbf{b}\|_2. \end{multline*} \end{proof} \section{Applications}\label{sec:applications} In this section, we present applications of our results to two SDPs: Max-Cut and matrix completion, both of which are important problems in the learning domain and have been studied extensively. Interest has grown to develop efficient solvers for these SDPs~\citep{arora2007combinatorial, pmlr-v65-mei17a, hardt2013understanding, bandeira2016low}. This work differs from previous efforts in at least two ways. First, we aim to demonstrate that Burer--Monteiro-style approaches, which are often used in practice, can indeed lead to provably efficient algorithms for general SDPs. We believe that building upon this work, it should be possible to improve the time-complexity guarantees of such factorization-based algorithms. Second, we note that several problems formulated as SDPs in fact necessitate low-rank solutions, for example because of memory concerns (as is the case in matrix completion), and factorization approaches provide a natural means to control rank. \subsection{Max-Cut} We first consider the popular Max-Cut problem which finds applications in clustering related problems. In a seminal paper, \cite{goemans1995improved} defined the following SDP to solve the Max-Cut problem: $\min_{X\in \mathbb{R}^{n\times n}} \ip{C}{X}, \mbox{s.t. } X_{ii} = 1 \; \forall \; 1 \leq i \leq n, X \succeq 0 $, where $n$ is the number of vertices in the given graph and $C$ is its adjacency matrix. Since the constraint set also satisfies $\trace{X}=n$, we consider the following penalized, non-convex version of the problem. \begin{align} \widehat{L}_{\mu}(U) \triangleq \ip{C+G}{U\trans{U}} + \mu\left(\left(\ip{I}{U\trans{U}}-n\right)^2 + \sum_{i=1}^{n}\left(\ip{e_i \trans{e_i}}{U\trans{U}}-1\right)^2\right),\label{eqn:maxcut} \end{align} where $G$ is a random symmetric Gaussian matrix. Let $\widehat F_{\mu}(UU^T) = \widehat L_{\mu} (U)$. After some simplifying computations, we have the following corollary of Theorem~\ref{thm:optimal_approx_compact}. \begin{corollary}\label{cor:maxcut} There exists an absolute numerical constant $c_1$ such that the following holds. 
With probability greater than $1-\delta$, every $(\epsilon, \gamma)$-SOSP $U$ of the perturbed Max-Cut problem $\widehat{L}_{\mu}(U)$~\eqref{eqn:maxcut} with: \begin{align*}\epsilon \leq \frac{1}{c_1} \left(\frac{\gamma \sigma_G^2}{\mu n}\right)^{2/3},~~ \text{ and } ~~ k = \tilde{\Omega} \left( \sqrt{n \log\left(\frac{\mu^2 \sqrt{n}}{\sigma_G}\right)}\right), \end{align*} satisfies $ \widehat{F}_{\mu}(UU^T) - \widehat{F}_{\mu}(X^*) \leq \gamma \sqrt{\epsilon} \trace{X^*} +\frac{1}{2} \epsilon \frob{U}$, where $X^*$ is a global optimum of $\widehat{F}_{\mu}(X)$. \end{corollary} The above result states that for the penalized version of the perturbed Max-Cut SDP, the Burer--Monteiro approach finds an approximate global optimum as soon as the factorization rank is $k = \tilde{\Omega}(\sqrt{n})$. Existing results for Max-Cut using this approach either handle only exact SOSPs~\citep{boumal2016non}, require $k=n+1$~\citep{boumal2016globalrates}, or require $k$ depending on $\frac{1}{\epsilon}$~\citep{pmlr-v65-mei17a}. Moreover, the complexity per iteration scales only linearly with the number of edges in the graph. \subsection{Matrix Completion} In this section we specialize our results to the matrix completion problem \cite{candes2009exact}. The goal of the matrix completion problem is to find a low-rank matrix $M$ using only a small number of its entries, with applications in recommender systems. To ensure that the computed matrix is low-rank and generalizes well, one typically imposes nuclear-norm regularization, which leads to the following SDP: \begin{minipage}{0.2\linewidth} \begin{align*} \min &\quad \trace{W_1} + \trace{W_2}\\ \text{s. t. }&\quad X_{ij} =M_{ij}, (i,j) \in \mathcal{S} \\ &\quad \begin{bmatrix}W_1 & X \\ X^T & W_2\end{bmatrix} \succeq 0. \end{align*} \end{minipage} \begin{minipage}{0.05\linewidth} \begin{align*} \equiv \\ \end{align*} \break \end{minipage} \begin{minipage}{0.6\linewidth} \begin{align*} \min & \quad \ip{I}{Z} \nonumber \\ \text{s. t. }&\quad \frac{1}{2}\ip{e_{i}e_{j+n}^T + e_{j+n} e_{i}^T}{Z} = M_{ij}, (i,j) \in \mathcal{S} \nonumber \\ &\quad Z \succeq 0. \end{align*} \end{minipage} \noindent Here $\mathcal{S}$ is the set of observed indices of $M$ and $Z\triangleq \begin{bmatrix}W_1 & X \\ X^T & W_2\end{bmatrix}$. Let \begin{align} \widehat{L}_{\mu}(U) = \ip{I+G}{UU^T} + \mu \sum_{(i,j) \in \mathcal{S}} \left(\frac{1}{2}\ip{e_{i}e_{j+n}^T + e_{j+n} e_{i}^T}{UU^T} - M_{ij} \right)^2 \label{eq:matcomp} \end{align} be the corresponding penalty objective. Let $\widehat F_{\mu}(UU^T) = \widehat L_{\mu} (U)$. The cost matrix $C=I$ is positive definite, with $\lambda_1(C)=\lambda_n(C)=1$. Also, since $\mathcal{A}$ is a sub-sampling operator, $\|\mathcal{A}\| \leq 1$. Finally, for $\epsilon^2 \leq \frac{\mu}{2}\sqrt{\sum_{(i,j) \in \mathcal{S}} M_{ij}^2}$, the residues are bounded by: \begin{align*} B&=\|\mathcal{A}\| \max \left \{ \left( \frac{2\epsilon} {\lambda_n(C)}\right)^2, \frac{2\mu}{\lambda_n(C)} \|\mathbf{b}\|_2^2 \right \}+\|\mathbf{b}\|_2 \leq 3\mu \sqrt{\sum_{(i,j) \in \mathcal{S}} M_{ij}^2}. \end{align*} \noindent Applying Theorem~\ref{thm:optimal_approx} to this setting gives the following corollary. \begin{corollary}\label{cor:mc_optimal}There exists an absolute numerical constant $c_2$ such that the following holds. 
With probability greater than $1-\delta$, every $(\epsilon, \gamma)$-SOSP $U$ of the perturbed matrix completion problem $\widehat{L}_{\mu}(U)$~\eqref{eq:matcomp} with: \begin{align*}\sigma_G \leq \frac{1}{4\sqrt{n \log(n/ \delta)}},~~ \epsilon \leq \frac{1}{c_2}\left(\frac{\gamma \abs{\mathcal{S}} \sigma_G^2 }{ n \mu }\right)^{\sfrac{2}{3}}, ~\text{ and }~ k = \tilde{\Omega} \left( \sqrt{ \abs{\mathcal{S}} \log\left(\frac{\mu^2 \sqrt{n} \sqrt{\sum_{(i,j) \in \mathcal{S}} M_{ij}^2}}{\sigma_G}\right) } \right),\end{align*} satisfies $\widehat{F}_{\mu}(UU^T) - \widehat F_{\mu}(X^*) \leq \gamma \sqrt{\epsilon} \trace{X^*} + \frac{1}{2} \epsilon \|U\|_F$, where $X^*$ is a global optimum of $\widehat{F}_{\mu}(X)$. \end{corollary} \noindent This result shows that for the matrix completion problem with $m$ observations, for rank $\tilde{\Omega}(\sqrt{m})$, any approximate local minimum of the factorized and penalized problem is an approximate global minimum. Most of the existing results on matrix completion require strong distributional assumptions on $\mathcal{S}$ and incoherence assumptions on $M$ to recover a low-rank solution \citep{candes2009exact, jain2013low}. The standard nuclear norm minimization algorithms are not guaranteed to converge to low-rank solutions without these assumptions, which implies that the entire matrix would need to be stored for prediction, which is infeasible in practice. Similarly, generalization error bounds \citep{foygel2011concentration} as well as differential privacy guarantees depend on recovery of a low-rank solution. Our result guarantees finding a rank-$\tilde{\Omega}(\sqrt{m})$ solution without any statistical assumptions on the sampling or the matrix. The tradeoff is that our results do not guarantee finding a lower (potentially constant) rank solution, even if one exists for a given problem. \section{Proofs of results in Section~\ref{sec:pd}} \section{Counterexample for the constrained case} Fix the dimension $n$, the number of constraints $m = n-1$, and the rank $k= n-2$. Define the following matrices: \begin{align*} C &= \begin{bmatrix} \mathbf{I}_{n-2 \times n-2} & \mathbf{1}_{n-2 \times 2} \\ \mathbf{1}_{2 \times n-2} & \begin{array}{cc} 2n & 0 \\ 0 & 0 \end{array} \end{bmatrix}, \mbox{ and } \\ A_i &= \begin{bmatrix} \mathbf{0}_{n-2\times n-2} & \begin{array}{c} \mathbf{0}_{i-1 \times 2} \\ \mathbf{1}_{1 \times 2} \\ \mathbf{0}_{n-2-i \times 2} \end{array} \\ \begin{array}{ccc} \mathbf{0}_{2 \times i-1} & \mathbf{1}_{2 \times 1} & \mathbf{0}_{2 \times n-2-i } \end{array} & \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \end{bmatrix}, \mbox{ for } i = 1,\cdots, n-2, \end{align*} where $\mathbf{0}$, $\mathbf{1}$, and $\mathbf{I}$ denote the all-zeros, all-ones, and identity matrices of appropriate sizes, respectively. The counterexample problem is \begin{align} &\min_{U \in \mathbb{R}^{n\times k}} \trace{\trans{U} C U} \nonumber \\ & \mbox{ s.t. } \trace{\trans{U} A_i U} = 0 \mbox{ for } i = 1,\cdots, n-2 \nonumber \\ &\qquad \trace{U \trans{U}} = n-2. \label{eq:counter-constrained} \end{align} The following lemma shows that, on the constraint manifold, $U = \mathbf{I}_{n \times k}$ is a suboptimal second-order optimal point. \begin{lemma} Let $k=n-2$. Then $U = \mathbf{I}_{n \times k}$ satisfies second-order optimality on the constraint manifold and is suboptimal for problem \eqref{eq:counter-constrained}. \end{lemma} \begin{proof} \noindent \textbf{Suboptimality of $U$}: For $U = \mathbf{I}_{n \times k}$, $UU^T$ is an $n\times n$ matrix with a $k \times k$ identity block. 
Since the first $n-2$ constraints have $0$ in the top $k \times k$ block, we get that $\ip{A_i}{UU^T}=0, 1 \leq i \leq n-2$. Also $\trace{UU^T}$ is $k=n-2$. Hence, $UU^T$ is a feasible point for the above SDP. Computing the objective value of~\eqref{eq:counter-constrained} at $U$, we see that it is equal to $n-2$. Define $w \triangleq \trans{\begin{bmatrix} \mathbf{1}_{1\times n-2} & 0 & -1 \end{bmatrix}}$. Consider now a different matrix $\displaystyle X_0 \triangleq \frac{n-2}{n} \left(w \trans{w} + \begin{bmatrix} \mathbf{0}_{n-1 \times n-1} & \mathbf{0}_{n-1 \times 1} \\ \mathbf{0}_{1 \times n-1} & 1 \end{bmatrix}\right)$. It is easy to see that $X_0$ is PSD and satisfies all the constraints in~\eqref{eq:counter-constrained}. Furthermore, we see that the objective value at $X_0$, $$\displaystyle \ip{C}{X_0} = \frac{n-2}{n} \cdot \trans{w}Cw = \frac{(n-2)^2}{n} < n-2 =\ip{C}{UU^T}.$$ This proves that $UU^T$ is suboptimal to the above SDP.\\ \noindent \textbf{Second order optimality:} The space orthogonal to the constraint manifold at $U$ is given by $\displaystyle \mathscr{P} \triangleq \linspan{U, A_i U; i\in[n-2]}$. The first order optimality conditions are satisfied since the gradient of the objective, $CU = U + \sum_{i=1}^{n-2} A_i U,$ is in $\mathscr{P}$, and is orthogonal to the constraint manifold. We will now show that for any perturbation along the tangent space at $U$, the objective value strictly increases, thereby showing that $U$ satisfies second order optimality conditions. Any perturbation $\triangle U$ in the tangent space at $U$, by definition is orthogonal to $\mathscr{P}$ ($\triangle U \perp \mathscr{P}$). Let $\triangle U$ be given by $$\triangle U \triangleq \begin{bmatrix} {\triangle \overline{U}}_{n-2 \times n-2} \\ \triangle u_1 \; \triangle u_2 \; \cdots \; \triangle u_{n-2} \end{bmatrix},$$ where each $\triangle u_i$ is a $2\times1$ vector. Since $\triangle u \perp A_i U$ for every $i$, we see that $\trans{\mathbf{1}_{2\times 1}}\triangle u_i = 0$ for every $i \in [n-2]$. Hence for any point $U+\triangle U$ on the constraint manifold, \begin{align} &\trace{\trans{\left(U + \triangle U\right)} A_i \left(U + \triangle U\right)} = \trace{\trans{U} A_i U} \nonumber \\ \Rightarrow & 2 \trace{\trans{\triangle U} A_i U} + \trace{\trans{\triangle U} A_i \triangle U} = 0. \label{eq:constraints-unchanged} \end{align} Similarly the $\trace{UU^T}=n-2$ constraint gives us, $2\trace{\trans{\triangle U}U}+\trace{\trans{\triangle U} \triangle U}=0$. Further, the change in objective value is, \begin{align*} \trace{\trans{\left(U + \triangle U \right)} C \left(U + \triangle U \right)} - \trace{\trans{U}C U} &= 2 \trace{\trans{\triangle U} C U} + \trace{\trans{\triangle U} C \triangle U} \\ &= 2 \trace{\trans{\triangle U} \left(U + \sum_i A_i U\right) } + \trace{\trans{\triangle U} C \triangle U} \\ &\stackrel{\zeta_1}{=} -\trace{\trans{\triangle U} \left(\triangle U + \sum_i A_i \triangle U\right) } + \trace{\trans{\triangle U} C \triangle U} \\ &= \trace{\trans{\triangle U} \left(C - \sum_i A_i - \mathbf{I} \right) \triangle U} \\ &= \sum_{i=1}^{n-2} \trans{\triangle u}_i D \triangle u_i, \end{align*} where $D\triangleq \begin{bmatrix} 2n-1 & 0 \\ 0 & -(n-1) \end{bmatrix}$. $\zeta_1$ follows from \eqref{eq:constraints-unchanged}. Since all perturbations in the tangential space satisfy $\trans{\mathbf{1}_{2 \times 1}}\triangle u_i=0$, we see that $\triangle u_i \in \linspan{\begin{bmatrix} 1 \\ -1 \end{bmatrix}}$. 
Since $\begin{bmatrix} 1 & -1 \end{bmatrix} D \begin{bmatrix} 1 \\ -1 \end{bmatrix} = n > 0$, this shows that the objective value strictly increases for any perturbation along the tangential space at $U$. \end{proof} \section{Gradient Descent}\label{sec:gd} In previous sections, we have seen that for the perturbed penalty objective~\eqref{eq:smoothed}, under some technical conditions on the SDP, with high probability and upon appropriate choice of the parameters, every approximate SOSP is approximately optimal. Second-order methods such as cubic regularization and trust regions~\citep{nesterov2006cubic,cartis2012complexity} converge to an approximate SOSP in polynomial time. While gradient descent with random initialization can take exponential time to converge to an SOSP~\citep{du2017gradient}, a recent line of work starting with~\citet{ge2015escaping} has established that perturbed gradient descent (PGD)\footnote{This is vanilla gradient descent but with additional random noise added to the updates when the gradient magnitude becomes smaller than a threshold.} converges to an SOSP as efficiently as second-order methods in the worst case, with high probability. In particular, we have the following \emph{almost dimension-free} convergence rate for PGD from~\citep{jin2017escape}. \begin{theorem}[Theorem 3 of \citet{jin2017escape}] Let $f$ be $l$-smooth (that is, its gradient is $l$-Lipschitz) and have a $\rho$-Lipschitz Hessian. There exists an absolute constant $c_{\max}$ such that, for any $\delta \in (0, 1)$, $\epsilon \leq \frac{l^2}{\rho}$, $\Delta_f \geq f(X_0) -f^*$, and constant $c \leq c_{\max}$, $PGD(X_0,l,\rho,\epsilon,c,\delta,\Delta_f)$ applied to the cost function $f$ outputs an $(\epsilon,\rho^2)$-SOSP with probability at least $1-\delta$ in $$O \left( \frac{(f(X_0)-f^*)l}{\epsilon^2} \log^4 \left( \frac{nkl \Delta_f}{\epsilon^2 \delta} \right) \right)$$ iterations. \end{theorem} The above theorem requires the function $f$ to be smooth and Hessian-Lipschitz. The next lemma states that the perturbed penalty objective~\eqref{eq:smoothed} satisfies these requirements---proof in Appendix~\ref{apdx:gd}. \begin{lemma}\label{lem:gd_param} In the region $\{U \in \mathbb{R}^{n\times k} : \|U\|_F \leq \tau \}$ for some $\tau > 0$, the cost function $\hat L_\mu(U)$ in~\eqref{eq:smoothed} is $l$-smooth and its Hessian is $\rho$-Lipschitz with: \begin{itemize} \item $l \leq 2\|C+G\|_2 + 4 \mu \|\mathcal{A}\| \|\mathbf{b}\|_2 + 12 \mu \tau^2 \|\mathcal{A}\|^2$, and \item $\rho \leq 16\mu \tau \|\mathcal{A}\|^2$. \end{itemize} Here, $\|\mathcal{A}\|$ is as defined in~\eqref{eq:normofA}. Notice furthermore that, with high probability, $\|G\|_2 \leq 3\sigma_G\sqrt{n}$. In that event, $\|C+G\|_2 \leq \|C\|_2 + 3\sigma_G\sqrt{n}$. \end{lemma} Combining this lemma with the above theorem shows that the perturbed gradient method converges to an $(\epsilon, \rho^2)$-SOSP in $\widetilde{O}(\frac{1}{\epsilon^2})$ steps (ignoring all other problem parameters). This can be improved to $\widetilde{O}(\frac{1}{\epsilon^{1.75}})$ using a variant of Nesterov's accelerated gradient descent~\citep{jin2017accelerated}. Moreover, if the objective function is (restricted) strongly convex in the vicinity of the local minimum, then we can further improve the rates to $\textrm{poly} \log\left(\frac{1}{\epsilon}\right)$~\citep{jin2017escape}.
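To make the procedure concrete, the following is a minimal sketch of PGD applied to the penalized objective $\widehat L_\mu(U)$ (an illustrative reconstruction, not the implementation analysed in this paper): it assumes the data are given as dense numpy arrays and uses the simplest noise-injection rule, adding isotropic Gaussian noise whenever the gradient norm falls below a threshold.
\begin{verbatim}
import numpy as np

def penalized_objective(U, C, G, A, b, mu):
    # L_mu(U) = <C+G, UU^T> + mu * sum_i (<A_i, UU^T> - b_i)^2
    X = U @ U.T
    r = np.array([np.sum(Ai * X) for Ai in A]) - b
    return np.sum((C + G) * X) + mu * np.sum(r ** 2)

def penalized_gradient(U, C, G, A, b, mu):
    # grad_U L_mu = 2 (C+G) U + 4 mu sum_i (<A_i, UU^T> - b_i) A_i U
    X = U @ U.T
    r = np.array([np.sum(Ai * X) for Ai in A]) - b
    grad = 2.0 * (C + G) @ U
    for ri, Ai in zip(r, A):
        grad += 4.0 * mu * ri * (Ai @ U)
    return grad

def perturbed_gradient_descent(U0, C, G, A, b, mu, eta=1e-3,
                               grad_thresh=1e-3, noise_std=1e-4,
                               iters=100000, seed=0):
    """Simplified PGD: plain gradient steps, with Gaussian noise injected
    whenever the gradient is small, to help escape strict saddle points."""
    rng = np.random.default_rng(seed)
    U = U0.copy()
    for _ in range(iters):
        g = penalized_gradient(U, C, G, A, b, mu)
        if np.linalg.norm(g) < grad_thresh:
            U = U + noise_std * rng.standard_normal(U.shape)
            g = penalized_gradient(U, C, G, A, b, mu)
        U = U - eta * g
    return U
\end{verbatim}
For sparse constraint matrices, storing each $A_i$ in a sparse format makes the products $A_iU$ cost $O(\mathrm{nnz}(A_i)\,k)$, which is the source of the $O(Zk)$ term in the gradient cost discussed below.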
This restricted strong convexity property is satisfied for problems where $\mathcal{A}$ either satisfies restricted isometry conditions or pertains to a uniform sampling of incoherent matrices~\citep{agarwal2010fast, negahban2012restricted,sun2014guaranteed}. See~\citep{bhojanapalli2016dropping} for further discussion of restricted strong convexity close to the global optimum. The complexity of the algorithm is given by Gradient-Computation-Time $\times $ Number of iterations. Computing the gradient in each iteration requires $O\left(Zk+nk^2 + mnk \right)$ arithmetic operations, where $Z$ is the number of non-zeros in $C$ and the constraint matrices. For dense problems, this becomes $O\left( mn^2k \right)$. However, most practical problems tend to have a certain degree of sparsity in the constraint matrices, so that the computational complexity of such a method can be significantly smaller than the worst-case bound. \subsection{Main results} The main contributions of this work are: \begin{itemize} \item We propose a simple penalty version of the factored SDP~\eqref{eq:factored} and show that, for almost all cost matrices $C$, any exact SOSP of the rank-constrained formulation~\eqref{eq:penalty_factored} is a global optimum for rank ${\Omega}(\sqrt{m})$---see Corollary~\ref{cor:exactpenaltyfactorized}. This result removes the smooth manifold requirement of~\citep{boumal2016non}, though it applies to~\eqref{eq:penalty_sdp}, not~\eqref{eq:sdp}. \item We show that there indeed exists a compact, feasible SDP with a worst-case $C$ for which the penalized, factorized problem admits a suboptimal SOSP (see Theorem~\ref{thm:bad_sdp}), even for rank almost as big as the dimension. \item We show that by perturbing the objective function slightly and by performing a smoothed analysis on the resulting problem, we can guarantee that every approximate SOSP of the perturbed problem is an approximate global optimum of the perturbed and penalized SDP. Hence, we can use standard techniques~\citep{cartis2012complexity,ge2015escaping} to find approximate SOSPs and guarantee global optimality---see Theorem~\ref{thm:optimal_approx_compact}. \end{itemize} In summary, we show that for a class of SDPs with bounded solutions, we can find a low-rank solution that is close to the global optimum of the penalty objective. We believe that the factorization technique can be leveraged to design faster SDP solvers; any looseness in the current bounds is an artifact of our proof, which we hope can be tightened in future work. \subsection{Prior work} Fast solvers for SDPs have garnered interest in the optimization and theoretical computer science communities for a long time. Most of the existing results for SDP solvers can be categorized into direct (convex) methods and factorization methods. \\ \noindent \textbf{Convex methods:} Classical techniques such as interior point methods~\citep{ nesterov1989self, nesterov1988polynomial, alizadeh1995interior} and cutting plane methods~\citep{anstreicher2000volumetric,krishnan2003properties} enjoy geometric convergence, but their computational complexity per iteration is high. As a result, it is hard to scale these methods to SDPs with a large number of variables. With the goal of speeding up the computation, many works have considered: i) a specific and important class of SDPs, namely, SDPs with a trace constraint ($\trace{X}=1$), and ii) methods with sub-linear convergence.
For these SDPs, \citet{arora2005fast} proposed a multiplicative weights method that provides faster techniques for some graph problems, with running time scaling as $O(\frac{1}{\epsilon^2})$ and depending on the \textit{width} of the problem. \citet{hazan2008sparse} proposed a Frank--Wolfe-type algorithm with a complexity of $\tilde{O}( \frac{Z}{\epsilon^{3.5}})$, where $Z$ is the sparsity of $C$ and the $A_i$'s. \citet{garber2016sublinear, garber2016faster} proposed faster methods that either remove the dependence on $Z$ (sub-linear time) or improve the dependence on $\epsilon$. While these methods improve the per-iteration complexity, they still need significant memory, as the rank of the solutions for these methods is not bounded and scales at least at the rate of $O(\frac{1}{\epsilon})$. An exception to this is the work by \citet{yurtsever2017sketchy}, which uses sketching techniques in combination with the conditional gradient method to maintain a low-rank representation. However, this method is guaranteed to find a low-rank optimum only if the conditional gradient method converges to a low-rank solution. \\ \noindent \textbf{Factorization methods:} \citet{burer2003nonlinear, burer2005local} proposed a different approach to speed up computations, namely by searching for solutions with smaller rank. Even though all feasible compact SDPs have at least one solution of rank $O(\sqrt{m})$ \citep{barvinok1995problems, pataki1998rank}, it is not an easy task to optimize directly on the rank-constrained space because of non-convexity. However, \citet{burer2003nonlinear, burer2005local} showed that any \emph{rank-deficient local minimum} is optimal for the SDP; \citet{journee2010low} extended this result to any rank-deficient SOSP under restrictive conditions on the SDP. However, these results cannot guarantee that SOSPs are rank deficient, or at least that rank-deficient SOSPs can be computed efficiently (or even exist). \citet{boumal2016non} address this issue by showing that for a particular class of SDPs satisfying some regularity conditions, and for almost all cost matrices $C$, any SOSP of the rank-constrained problem with $k = \Omega(\sqrt{m})$ is a global optimum. Later, \citet{pmlr-v65-mei17a} showed that for SDPs with elliptic constraints (similar to the Max-Cut SDP), any rank-$k$ SOSP gives a $(1-\frac{1}{k-1})$ approximation to the optimum value. Both these results are specific to particular classes of SDPs and do not extend to general problems. In a related setup, \citet{keshavan2010matrix, jain2013low} have shown that rank-constrained matrix completion problems can be solved using smart initialization strategies followed by local search methods. Following this, many works have identified interesting statistical conditions under which certain rank-constrained matrix problems have no spurious local minima \citep{ sun2016geometric, bandeira2016low, ge2016matrix, bhojanapalli2016global, park2017non, ge2017no, zhu2017global, ge2017optimization}. These results are again for specific problems and do not extend to general SDPs. In contrast, our result holds for a large class of SDPs in penalty form, without strong assumptions on the constraint matrices $A_i$ and for a large class of cost matrices $C$. We avoid degenerate SDPs with spurious local minima by perturbing the problem and then using a smoothed analysis, which is one of the main contributions of this work. \subsection*{Notation} For a smooth function $f(X)$, we refer to first-order stationary points $X$ as FOSPs.
Such points satisfy $\nabla f(X) = 0$ (zero gradient). We refer to second-order stationary points as SOSPs. Such points are FOSPs and furthermore satisfy $\nabla^2 f(X) \succeq 0$, i.e., the Hessian is positive semidefinite. The set of symmetric matrices of size $n$ is $\Sym{n\times n}$. $\sigma_i()$ and $\lambda_i()$ denote the $i$th singular value and eigenvalue, respectively, in decreasing order. \subsection{SDPs with positive definite cost}\label{sec:pd} We now consider a second class of SDPs: ones where the cost matrix $C$ is positive definite. The feasible set of these SDPs need not be compact. However, FOSPs for these SDPs are bounded; hence we will be able to show results similar to those in Section~\ref{sec:compact}. Consider the penalty formulation of the perturbed problem, \begin{align} \underset{U \in \mathbb{R}^{n \times k}}{\text{minimize}} ~ \widehat L_{\mu}(U) = \ip{C+G}{UU^T}+\mu \sum_{i=1}^m \left(\ip{A_i}{UU^T} - b_i\right)^2, \label{eq:smoothed} \end{align} where $G$ is a symmetric random matrix with $G_{ij} \stackrel{i.i.d.}{\sim} \mathcal{N}(0,\sigma_G^2)$ for $i\leq j$. Let $\widehat F_{\mu}(UU^T) = \widehat L_{\mu} (U)$. To prove an optimality result for this problem, we first show a residue bound for any $\epsilon$-FOSP of $\widehat L_{\mu}(U)$. \begin{lemma}\label{lem:residues} Consider~\eqref{eq:smoothed} with a positive definite cost matrix $C$. Let $\sigma_G \leq \frac{\lambda_{\min}(C)}{6\sqrt{n \log(n/ \delta)}}$. Then, with probability at least $1-\delta$, at any $\epsilon$-FOSP $U$ of~\eqref{eq:smoothed}, the residue obeys: $$ \|\mathbf{r}\|_2 =\|\mathcal{A}(UU^T) -\mathbf{b}\|_2 \leq \|\mathcal{A}\| \max \left \{ \left( \frac{2\epsilon} {\lambda_{\min}(C)}\right)^2, \frac{2\mu}{\lambda_{\min}(C)} \|\mathbf{b}\|_2^2 \right \}+\|\mathbf{b}\|_2. $$ \end{lemma} Using this, we get the following result from Lemma~\ref{lem:compact_optimal_approx} and Theorem \ref{thm:compact_eps_fosp}, along the same lines as Theorem \ref{thm:optimal_approx_compact}. Let $$B \triangleq \|\mathcal{A}\| \max \left \{ \left( \frac{2\epsilon} {\lambda_{\min}(C)}\right)^2, \frac{2\mu}{\lambda_{\min}(C)} \|\mathbf{b}\|_2^2 \right \}+\|\mathbf{b}\|_2.$$ \begin{theorem}[Global optimality.]\label{thm:optimal_approx} Let $\delta \in (0, 1)$ and $c_0$ be a universal constant. Given an SDP \eqref{eq:sdp} with positive definite objective matrix $C$, let $\tilde X $ be a global optimum of the perturbed problem \eqref{eq:smoothed}, and let $\sigma_G \leq \frac{\lambda_{\min}(C)}{4\sqrt{n \log(n/ \delta)}}$. Let $U$ be an $(\epsilon, \gamma)$-SOSP of the perturbed problem \eqref{eq:smoothed} with: \begin{align*} \epsilon \leq \left(\frac{\gamma k^2 \sigma_G^2 }{ 32c_0 n \mu \|\mathcal{A}\|^2}\right)^{\sfrac{2}{3}} ~\text{ and }~ k \geq 3\left[\log\left(\frac{n}{\delta}\right) + \sqrt{ \operatorname{rank}(\mathcal{A}) \log \left(1 +\frac{8 \mu B\|\mathcal{A}\|\sqrt{c_0 n} }{\sigma_G } \right) } \right]. \end{align*} Then, with probability at least $1-O(\delta)$, \begin{align*} \widehat F_{\mu}(UU^T) - \widehat F_{\mu}(\tilde X) \leq \gamma \sqrt{\epsilon} \operatorname{Tr}(\tilde X) +\frac{1}{2}\epsilon \|U\|_F. \end{align*} \end{theorem} This result shows that even though the feasible set of the SDP is not compact, as long as the objective is positive definite, all approximate SOSPs of the perturbed objective are approximately optimal. Without the positive definite condition, SDPs can have unbounded solutions (see Section 2.4 of \citet{gartner2012approximation}).
We also require a bound on the magnitude of the perturbation ($\sigma_G$), as otherwise the objective ($C+G$) can be indefinite with (too) high probability, which may result in unbounded solutions. \section{Introduction} \label{sec:intro} \input{intro_reorg} \section{Exact second-order points typically are optimal} \label{sec:exact} \input{exact_optimal_reorg} \section{Exact second-order points sometimes are suboptimal} \label{sec:exact_sub} \input{exact_suboptimal_reorganized} \section{Approximate second-order points: smoothed analysis} \label{sec:smooth} \input{smoothed_analysis_reorganized} \subsection{Compact SDPs} \label{sec:compact} \input{compact_SDP_reorganized} \input{pd_reorganized} \input{applications} \input{gradient_descent} \section{Conclusions and perspectives}\label{sec:conc} In this paper we considered the Burer--Monteiro factorization to solve SDPs~\eqref{eq:factored}. In addition to dimensionality reduction, one advantage of such formulations is that algorithms for them necessarily produce positive semidefinite solutions of rank at most $k$. An ideal theorem would state that some polynomial-time algorithm computes approximate optima for~\eqref{eq:factored} in all cases with reasonably small $k$. In this regard, we now review what we achieved, what seems impossible and what remains to be done. Because problem~\eqref{eq:factored} has nonlinear constraints, our first step was to move to a penalized formulation~\eqref{eq:penalty_factored}. For simplicity, we chose to work with a quadratic penalty. Quadratic penalties may require pushing $\mu$ to infinity to achieve constraint satisfaction at the optimum. Taking $\mu$ large may prove challenging numerically. In practice, it is known that augmented Lagrangian formulations (ALM) behave better in this respect~\citep{birgin2014ALM}. Thus, a first direction of improvement for the present work is to tackle ALM formulations instead. Second, we established in Section~\ref{sec:exact} that for almost all SDPs, all exact SOSPs of~\eqref{eq:penalty_factored} are global optima which map to global optima of the penalized SDP~\eqref{eq:penalty_sdp} provided $\frac{k(k+1)}{2} > m$, where $m$ is the number of constraints. It should not be possible to improve the dependence on $k$ by much since certain SDPs admit a unique solution of rank $r$ such that $\frac{r(r+1)}{2} = m$. We showed in Section~\ref{sec:exact_sub} that for certain SDPs the penalty formulation~\eqref{eq:penalty_factored} admits suboptimal SOSPs. This suggests that even in the ideal statement stated above one may need to exclude some SDPs. Third, we showed in Section~\ref{sec:smooth} that upon perturbing the cost matrix $C$ randomly (to avoid pathological cases), with high probability and provided $k = \tilde \Omega(\sqrt{m})$ (which is the right order though constants and dependence on other parameters could certainly be improved), when SOSPs have bounded residues (which is the case for positive definite cost matrices and for compact SDPs up to a technical modification), all SOSPs of the factored, penalized and perturbed problem are approximately optimal for that problem. This is achieved through smoothed analysis, which we believe is an appropriate tool to deal with the pathological cases exhibited above. These results can be further improved by deducing approximate constraint satisfaction and optimality for the original SDP~\eqref{eq:sdp}---which we currently do not do---and by further relaxing conditions on the SDP. 
Finally, we studied the applicability of our results to two applications: Max-Cut and matrix completion. While these particularizations do not always improve over the specialized solvers for these problems, we believe that the work done here in studying low-rank parameterization of SDPs will be a helpful step towards building up to faster methods. \section*{Acknowledgment} NB thanks Dustin Mixon for many interesting conversations on the applicability of smoothed analysis to low-rank SDPs. NB was supported in part by NSF award DMS-1719558. \bibliographystyle{abbrvnat}
2023-04-23T08:18:17.948Z
2018-03-02T02:05:59.000Z
redpajama/arxiv
arxiv_0000
1,548
10,833
23187769886cf3c53274a609c7450717064c8285
\section{Introduction} Understanding the structure, phase behaviour and dynamics of ionic liquids and concentrated electrolytes, both in the bulk and near interfaces, is a longstanding challenge. Since the pioneering work of Helmholtz \cite{helmholtz1853}, Debye and H\"{u}ckel \cite{debye1923}, Onsager \cite{onsager1934deviations} and many others, much recent progress has been made using the statistical mechanics tools of the theory of classical liquids \cite{hansen2013theory}. A large body of ``exact'' results and sum-rules was established \cite{martin1988sum}, while the Ornstein-Zernike (OZ) formalism \cite{ornstein1914accidental} and classical density functional theory \cite{percus1962approximation} became the basis of numerous approximate theories of the structure, including non-linear integral equations for the pair correlation functions \cite{hansen2013theory}; amongst these theories the mean spherical approximation (MSA) plays an important role, since it allows for analytic solutions of simple, semi-realistic models of ionic liquids \cite{waisman1972mean,waisman1972mean1}, which will be used in the present paper. The OZ formalism was also put to good use to examine the asymptotic decay of pair correlation functions and density profiles at interfaces \cite{attard1993asymptotic,leote1994decay,attard1996electrolytes}. Although different analytical or numerical theories predict different dependences of the correlation (or screening) length on ion concentration, the theoretical predictions converge on two qualitative features: (1) the decay of the correlation is exponential; and (2) the longest correlation length in a concentrated electrolyte is of the same order of magnitude as the (mean) ion diameter. However, recent experiments suggest an ``underscreening'' phenomenon, namely the existence of an anomalously large decay length which is incongruent with the above mentioned theoretical predictions \cite{gebbie2013ionic,gebbie2015long,smith2016electrostatic}. Surface force balance experiments reveal that the force acting between negatively charged mica surfaces immersed in an electrolyte decays exponentially with surface separation $L$, but the decay (or screening) length $\lambda_S$ scales as \cite{lee2017scaling}: \begin{equation} \frac{\lambda_S}{\lambda_D} \sim \begin{cases} 1, & a/\lambda_D \ll 1 \\ (a/\lambda_D)^3, & a/\lambda_D \gg 1, \end{cases} \label{scaling_exp} \end{equation} with \begin{equation} \lambda_D = \frac{1}{\sqrt{4 \pi l_B (\rho_+z_+^2 + \rho_- z_-^2) }} \end{equation} the Debye length, $\rho_\pm$ ($z_\pm$) the number density (valence) of the cations/anions, $a$ the ionic radius, while $l_B = e^2/(4 \pi \epsilon \epsilon_0 k_B T)$ is the Bjerrum length with $\epsilon$ the dielectric constant of the electrolyte, which depends on ion concentration. This scaling relation has been verified for various electrolyte chemistries, ranging from pure ionic liquids (e.g. room temperature molten salts) and ionic liquid-organic solvent mixtures to aqueous alkali halide solutions. A scaling theory has been proposed, based on identifying solvent molecules as effective charge carriers, with an effective charge determined by thermal fluctuations \cite{lee2017scaling,lee2017underscreening}.
More recently, a first-principles analysis based on Landau fluctuation theory and the MSA has been put forward, which confirms that $\lambda_S/\lambda_D$ has a power law dependence on $a/\lambda_D$, albeit with a considerably smaller exponent compared to the experimental findings summarised in Equation (\ref{scaling_exp}) \cite{rotenberg2017underscreening}. In this paper, we explore an additional mechanism of force generation in confined systems, namely the classical counterpart of the celebrated quantum Casimir effect of an electromagnetic field fluctuation-induced force acting between the confining surfaces \cite{casimir1948attraction}. The classical Casimir effect is observed in high temperature confined systems, where the thermal fluctuations now play the role of quantum field fluctuations. Restrictions on the possible Fourier components (or modes) of thermal fluctuations imposed by spatial confinement generate the classical Casimir force. Large amplitude critical fluctuations in a fluid close to a thermodynamic critical point strongly enhance the classical Casimir effect, where the universality of critical scaling laws entails a corresponding universality of the Casimir force \cite{fisher1978} (for a recent review of the classical Casimir force, see \cite{gambassi2009casimir}). We examine the possibility of an observable Casimir force in confined ionic fluids under conditions inspired by the aforementioned experimental setups \cite{gebbie2013ionic,gebbie2015long,smith2016electrostatic}. No critical fluctuations are involved, but the infinite range of the Coulombic interactions is expected to significantly affect the resulting Casimir force. This question has already been explored in the high temperature limit within Debye-H\"{u}ckel theory of point ions, for a variety of boundary conditions, and using a microscopic description of the confining metallic or dielectric media \cite{jancovici2004screening,jancovici2005casimir,buenzli2005casimir,hoye2009casimir}. This paper describes an attempt to go beyond the point ion description by considering finite size ions to account for excluded volume effects which are crucial for concentrated electrolytes. In Section~\ref{sec:freeenergy}, we first consider the point ion limit using a systematic approach inspired by a paper dealing with Casimir force in confined non-equilibrium systems \cite{brito2007generalized}, while excluded volume effects are included within the MSA in Section~\ref{sec:msa}. Some concluding remarks are made in Section~\ref{sec:conclusion}. \section{The free energy of fluctuation modes} \label{sec:freeenergy} We begin our analysis by expressing the free energy in terms of fluctuation modes. Let $F = F(\rho_+,\rho_-)$ be the free energy of a bulk electrolyte with cation density $\rho_+$ and anion density $\rho_-$. We expand around the mean density, i.e. 
$\rho_\alpha = \rho_\alpha^{0} + \delta \rho_\alpha$, and write \begin{equation} F = F(\rho_+^{0},\rho_-^{0}) + \sum_{\alpha=\pm} \delta \rho_\alpha \frac{\partial F}{\partial \rho_\alpha} \Bigg|_{\rho_+^{0},\rho_-^{0}}+ \frac{1}{2} \sum_{\alpha,\beta =\pm } \delta \rho_\alpha \delta \rho_\beta \frac{\partial^2 F}{\partial \rho_\alpha \partial \rho_\beta} \Bigg|_{\rho_+^{0},\rho_-^{0}}. \end{equation} Defining $\Delta F = F -F(\rho_+^{0},\rho_-^{0})$, and noting that $\left< \delta \rho_\alpha \right> = 0$, where $\left< \cdot\right>$ denotes a thermal average, we obtain \begin{equation} \Delta F = \frac{1}{2} \sum_{\alpha,\beta = \pm} \left< \delta \rho_\alpha \delta \rho_\beta \right> \frac{\partial^2 F}{\partial \rho_\alpha \partial \rho_\beta}\Bigg|_{\rho_+^{0},\rho_-^{0}} = \frac{1}{2} \sum_{\alpha,\beta = \pm} \left< \delta \rho_\alpha \delta \rho_\beta \right> \chi^{-1}_{\alpha \beta}, \end{equation} where we have defined the partial response functions \cite{rotenberg2017underscreening} \begin{equation} \frac{\partial^2 F}{\partial \rho_\alpha \partial \rho_\beta} \Bigg|_{\rho_+^{0},\rho_-^{0}} = \chi_{\alpha \beta}^{-1}. \label{part_res} \end{equation} We can express the fluctuations in terms of Fourier modes $ \delta \rho_\alpha( \mathbf{r}) = \frac{1}{V} \sum_\mathbf{k} e^{- i \mathbf{k} \cdot \mathbf{r}} \delta \rho_{\alpha,\mathbf{k}} $, and the correlations of the fluctuations are related to the structure factors $S_{\alpha \beta}(\mathbf{k}) = \left< \delta \rho_{\alpha,\mathbf{k}} \delta \rho_{\beta,\mathbf{-k}} \right>/V$, which are in principle experimentally measurable using techniques such as neutron scattering. We now consider an electrolyte solution confined between two infinite charged walls separated by a distance $L$. For a strongly charged surface, one might imagine that the concentration fields of the cations and anions are pinned on the surface, or at the very least the surface anchors the fields and significantly reduces the magnitude of fluctuations. Assuming that the fields are pinned at the walls (\emph{i.e.} $\delta \rho_\alpha = 0$ at the walls), the wavenumber of the fluctuation modes normal to the surfaces can only take discrete values $k_n = n \pi/L$. Therefore, the fluctuation energy inside the slit is given by \begin{equation} \Delta F_{\mathrm{in}} = \int \frac{\mathrm{d}^2 \mathbf{k}}{(2\pi)^2} \left[ \frac{\pi}{L} \sum_{n = 1}^{\infty} \sum_{\alpha,\beta = \pm} \chi^{-1}_{\alpha \beta} S_{\alpha \beta}\left(\sqrt{k^2 + \left(\frac{n \pi}{L}\right)^2 } \right) - \int_{0}^{\infty} \mathrm{d}p \sum_{\alpha,\beta = \pm} \chi^{-1}_{\alpha \beta} S_{\alpha \beta}\left(\sqrt{k^2 + p^2 } \right)\right], \label{fluct_energy} \end{equation} where we have subtracted the energy in the limit when $L \rightarrow \infty$, and exploited the symmetry of the summand and integrand with respect to negative $n$ and $p$. We note that the $n=0$ term is irrelevant since it is independent of $L$. The resulting Casimir force is simply the derivative of the fluctuation energy with respect to the surface separation \begin{equation} f_{\mathrm{Casimir}} = - \frac{\partial \Delta F_{\mathrm{in}} }{\partial L}. \label{casimir_force1} \end{equation} Note that we have implicitly assumed that charged surfaces do not affect the structure factors $S_{\alpha \beta}(k)$ -- this assumption restricts the validity of our analysis to the far field limit when the walls are far apart.
Equations (\ref{fluct_energy})-(\ref{casimir_force1}) relate the bulk response functions and the structure factors to the Casimir force. We next turn to estimating those quantities for a two-component electrolyte. Following ref.~\cite{rotenberg2017underscreening}, we introduce the wavenumber-dependent partial response functions $\hat{\chi}_{\alpha \beta}$, defined by \begin{equation} \hat{\chi}^{-1}_{\alpha \beta}(k) = \frac{\delta_{\alpha \beta}}{\rho_\alpha} - \hat{c}_{\alpha \beta} (k), \label{k_susceptibility} \end{equation} where $\hat{c}_{\alpha \beta} (k)$ is the Fourier transform of the OZ direct correlation function. Using the definition of the structure factor in terms of the total correlation function $\hat{h}_{\alpha \beta}(k)$, \begin{equation} S_{\alpha \beta}(k) = \rho_\alpha \delta_{\alpha \beta} + \rho_\alpha \rho_\beta \hat{h}_{\alpha \beta}(k), \end{equation} it can be shown \cite{hansen2013theory,rotenberg2017underscreening} that \begin{equation} \begin{pmatrix} S_{++} & S_{+-} \\ S_{-+} & S_{--} \end{pmatrix} = \frac{1}{\hat{\chi}^{-1}_{++} \hat{\chi}^{-1}_{--} - \hat{\chi}^{-1}_{+-}\hat{\chi}^{-1}_{-+}} \begin{pmatrix} \hat{\chi}^{-1}_{++} & -\hat{\chi}^{-1}_{+-} \\ -\hat{\chi}^{-1}_{-+} & \hat{\chi}^{-1}_{--} \end{pmatrix}. \label{struct_fact_matrix} \end{equation} To make further progress, we can split the direct correlation functions into the Coulomb part and the short-range part: \begin{equation} \hat{c}_{\alpha \beta}(k) = -\frac{4 \pi z_\alpha z_\beta l_B}{k^2} + \hat{c}^{s}_{\alpha \beta}(k). \label{decomp} \end{equation} The Random Phase Approximation (RPA) assumes that $\hat{c}^{s}_{\alpha \beta}(k)=0$, and in this limit Equation (\ref{k_susceptibility}) can be substituted into Equation (\ref{struct_fact_matrix}) to yield analytical expressions for the structure factors. The partial response functions, Equation (\ref{part_res}), can be evaluated by noting that the free energy density of an electrolyte in the random phase approximation reads \begin{equation} \frac{F}{k_B T V}= \rho_+ \left[ \log(a^3 \rho_+) -1\right] + \rho_- \left[ \log(a^3 \rho_-) -1 \right] - \frac{1}{12 \pi \lambda_D^3} . \end{equation} Taking derivatives with respect to $\rho_+$ and $\rho_-$, we thus arrive at \begin{equation} \chi^{-1}_{\alpha \beta} = V k_B T \left( \frac{\delta_{\alpha \beta}}{\rho_\alpha} - \pi \lambda_D l_B^2 z^2_\alpha z^2_\beta \right) \; . \end{equation} For a $1:1$ electrolyte, $z_+ =-z_- = 1$, $\rho_+ =\rho_- = \rho/2$, and the sum of structure factors can be written as \begin{eqnarray} \sum_{\alpha,\beta = \pm} \chi^{-1}_{\alpha \beta} S_{\alpha \beta}\left(k \right) &= & \frac{V k_B T}{2} \left[ \left(2 - \frac{l_B}{4 \lambda_D}\right) \frac{2 \lambda_D^2 k^2+1}{\lambda_D^2 k^2+1} - \frac{l_B}{4 \lambda_D} \frac{1}{1 + \lambda_D^2 k^2} \right] \nonumber \\ & = & V k_B T \left( 2- \frac{l_B}{4 \lambda_D} - \frac{1}{1+\lambda_D^2 k^2} \right). \label{structure_fact} \end{eqnarray} We first note that the constant term drops out of the Casimir force, as the sum and the integral cancel out, \begin{equation} \sum_{n = 1}^{\infty} \frac{\pi}{L} - \int_{0}^{\infty} \mathrm{d}p = \sum_{n = 1}^{\infty} \left(\frac{\pi}{L} - \int_{\frac{(n-1) \pi}{L}}^{\frac{n \pi}{L}} \mathrm{d}p\right) = 0.
\label{const_term} \end{equation} The crucial step of our analysis is to note that \begin{equation} \frac{\pi}{L} \sum_{n = 1}^{\infty} \frac{1}{1+\lambda_D^2 k^2 +\lambda_D^2 \left( \frac{n \pi}{L}\right)^2} = \frac{\pi}{L} \frac{1}{2 (1 + \lambda_D^2 k^2 )} \left[ \frac{L}{\lambda_D} \sqrt{1 + \lambda_D^2 k^2} \coth \left( \frac{L}{\lambda_D} \sqrt{1 + \lambda_D^2 k^2} \right) - 1 \right] \label{sum_discrete} \end{equation} and \begin{equation} \int_{0}^{\infty} \mathrm{d}p \frac{1}{1+\lambda_D^2 k^2 + \lambda_D^2 p^2} = \frac{\pi}{2 \lambda_D \sqrt{1 + \lambda_D^2 k^2} }. \label{sum_integral} \end{equation} Substituting the difference between Equations (\ref{sum_discrete}) and (\ref{sum_integral}) into Equation (\ref{fluct_energy}), and multiplying by $L$, we obtain the free energy per unit area (instead of volume): \begin{equation} \frac{\Delta F_{\mathrm{in}}}{A k_B T} = \frac{1}{4} \left( \int_0^{\infty} \frac{k}{1+\lambda_D^2 k^2} \mathrm{d}k - L \int_0^{\infty} k \frac{\coth \left( \frac{L}{\lambda_D} \sqrt{1+ k^2 \lambda_D^2 }\right) -1}{\lambda_D \sqrt{1 + \lambda_D^2 k^2} } \mathrm{d}k \right), \label{RPA_calculation} \end{equation} with $A$ the plate area. While the first term of Equation (\ref{RPA_calculation}) diverges logarithmically, it is $L$-independent and therefore does not contribute to the disjoining force. The second term can be integrated analytically to give \begin{equation} \frac{\Delta F_{\mathrm{in}}}{A k_B T} = - \frac{1}{4} \left[ \frac{L}{\lambda_D^3} - \frac{1}{\lambda_D^2} \log\left( 2 \sinh \frac{L}{\lambda_D} \right) \right]. \end{equation} Therefore, the Casimir force per unit area is \begin{equation} \frac{f_{\mathrm{Casimir}}}{A} = \frac{k_B T}{4\lambda_D^3} \left(1- \coth \frac{L}{\lambda_D} \right). \label{casimir_force} \end{equation} Perhaps surprisingly, Equation (\ref{casimir_force}) reveals that the Casimir force is \emph{attractive}, and has an asymptotic decay length of $\lambda_D/2$. \section{Hard core repulsion and the Mean-Spherical Approximation} \label{sec:msa} The RPA ignores hard-core interactions and assumes point-like ions. This approximation is unreasonable in dense ionic systems such as ionic liquids and concentrated electrolytes. To include hard-core interactions, we consider the Mean Spherical Approximation (MSA). The MSA direct correlation function for a two-component hard-sphere electrolyte with cations and anions having equal diameters $\sigma$ has been derived in pioneering papers \cite{wertheim1963exact,thiele_equation_1963,waisman1972mean,waisman1972mean1}, and reads \begin{align} \hat{c}^{s}_{\alpha\beta}(k) &= \frac{4 \pi \sigma^3}{(k\sigma)^6} \left[ 24 d_{\alpha\beta} - 2 b_{\alpha\beta} (k\sigma)^2 + e_{\alpha\beta} (k\sigma)^4 \right. \nonumber \\ & - \left \{ 24 d_{\alpha\beta} - 2(b_{\alpha\beta} +6 d_{\alpha\beta})(k\sigma)^2 +(a_{\alpha\beta} +b_{\alpha\beta} +d_{\alpha\beta} +e_{\alpha\beta} )(k\sigma)^4 \right \} \cos (k\sigma) \nonumber \\ & \left.
+ \left \{ -24d_{\alpha\beta} (k\sigma) + (a_{\alpha\beta}+2 b_{\alpha\beta}+4 d_{\alpha\beta})(k\sigma)^3 \right \} \sin (k\sigma)\right] \label{MSA_dca} \end{align} where \begin{equation*} a_{\alpha\beta} = - \frac{(1 + 2 \eta)^2}{(1-\eta)^4} - 2 B\left(\frac{\sigma}{\lambda_D}\right) \frac{l_B}{\sigma} z_\alpha z_\beta, \end{equation*} \begin{equation*} b_{\alpha\beta} = - \frac{6 \eta (1 + \frac{\eta}{2})^2}{(1-\eta)^4} + \left[ B\left(\frac{\sigma}{\lambda_D}\right) \right]^2 \frac{l_B}{\sigma} z_\alpha z_\beta, \end{equation*} \begin{equation*} d_{\alpha\beta} = - \frac{ \eta (1 + 2 \eta)^2}{2 (1-\eta)^4}, \end{equation*} \begin{equation*} e_{\alpha\beta} = \frac{l_B}{\sigma} z_\alpha z_\beta, \end{equation*} \begin{equation*} B(x) = \frac{x^2 + x - x \sqrt{1+ 2x} }{x^2}, \end{equation*} with $\eta = (\pi/6)\sum_\alpha \rho_\alpha \sigma^3$ the total packing fraction. Substituting Equation (\ref{MSA_dca}) into Equation (\ref{decomp}) yields the full direct correlation function. In contrast to the RPA, the hard-core repulsion causes the MSA structure factor to be oscillatory, decaying to a constant in the $k \rightarrow \infty$ limit. To proceed further, we first evaluate numerically the difference between the sum and the integral \begin{equation} G_{\alpha\beta}(k_{\parallel}, L) = \frac{\pi}{L} \left( \sum_{n=1}^{\infty} S_{\alpha\beta}\left(\sqrt{k_{\parallel}^2 + \left(\frac{n \pi}{L}\right)^2} \right) - \int_0^{\infty} \; S_{\alpha\beta}\left(\sqrt{k_{\parallel}^2 + \left(\frac{n \pi}{L}\right)^2 } \right) \mathrm{d}n \right), \label{sum_difference} \end{equation} and note that both the sum and the integral are convergent, since the structure factors decay asymptotically as \begin{equation} \sigma^3 S_{\alpha\beta}(k) \sim \frac{3 \eta}{\pi} \delta_{\alpha \beta} - \frac{36 \eta^2}{ \pi} (a_{\alpha \beta} + b_{\alpha \beta}+d_{\alpha \beta} + e_{\alpha \beta}) \frac{\cos (k\sigma) }{(k\sigma)^2}, \; \mathrm{when} \; k \rightarrow \infty. \end{equation} As we showed in Equation (\ref{const_term}), a constant term has no bearing on the Casimir force and can be ignored. Using the Euler-Maclaurin formula, we can expand $G_{\alpha\beta}(k_{\parallel}, L)$ asymptotically in $1/L$: \begin{equation} G_{\alpha\beta}(k_{\parallel}, L) = - \frac{1}{2} \frac{\pi}{L} S_{\alpha\beta}\left(k_{\parallel} \right) + o(L^{-1}). \label{EM_expansion} \end{equation} Since the relevant quantity is the Casimir energy per unit area, we need to multiply Equation (\ref{sum_difference}) by $L$ at the end of the calculation, such that the first term in (\ref{EM_expansion}) actually becomes a (diverging) constant independent of $L$ (cf. the first term in Equation (\ref{RPA_calculation})). As such, we must subtract it before numerically integrating over $k_{\parallel}$.
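As an illustration of this numerical procedure, the following minimal sketch (assuming a user-supplied, vectorized callable \texttt{S} that evaluates a single partial structure factor $S_{\alpha\beta}(k)$ from the MSA expressions above, and with purely illustrative cutoffs) forms the truncated mode sum, subtracts the corresponding integral, removes the leading Euler-Maclaurin term of Equation (\ref{EM_expansion}), and finally integrates over $k_{\parallel}$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad, trapezoid

def G_ab(S, k_par, L, n_max=2000):
    """(pi/L) * [ sum_{n=1}^{n_max} S(sqrt(k_par^2 + (n*pi/L)^2))
                  - int_0^{n_max} S(sqrt(k_par^2 + (t*pi/L)^2)) dt ];
    the same cutoff n_max is used for the sum and the integral so that
    the truncation errors largely cancel."""
    n = np.arange(1, n_max + 1)
    mode_sum = np.sum(S(np.sqrt(k_par**2 + (n * np.pi / L)**2)))
    integrand = lambda t: S(np.sqrt(k_par**2 + (t * np.pi / L)**2))
    integral, _ = quad(integrand, 0.0, n_max, limit=400)
    return (np.pi / L) * (mode_sum - integral)

def F_ab(S, L, k_max=40.0, n_k=400):
    """(1/(2 pi)^2) * int_0^{k_max} 2 pi k [ G_ab(k, L) + pi*S(k)/(2L) ] dk;
    adding pi*S(k)/(2L) removes the leading Euler-Maclaurin term, which
    would otherwise give an L-independent constant after multiplying by L."""
    ks = np.linspace(1e-4, k_max, n_k)
    vals = np.array([G_ab(S, k, L) + np.pi * S(k) / (2.0 * L) for k in ks])
    return trapezoid(2.0 * np.pi * ks * vals, ks) / (2.0 * np.pi)**2
\end{verbatim}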
All in all, the Casimir energy (per unit volume) reads \begin{equation} E_{\mathrm{Casimir}}(L) = \sum_{\alpha \beta} \chi_{\alpha \beta}^{-1} F_{\alpha \beta}(L) \label{casimir_energy_msa} \end{equation} where \begin{equation} F_{\alpha \beta}(L) =\frac{1}{(2 \pi)^2} \int_0^{\infty}2 \pi k_{\parallel} \left[ G_{\alpha\beta}(k_{\parallel}, L) + \frac{\pi}{2L} S_{\alpha\beta}\left(k_{\parallel} \right) \right] \mathrm{d}k_{\parallel} \label{decay_integrate} \end{equation} and \begin{align} \chi^{-1}_{\alpha \beta} &= \chi^{-1}_{\alpha \beta, RPA} + \hat{c}^{s}_{\alpha \beta}(0) \nonumber \\ & = V k_B T \left[ \frac{\delta_{\alpha \beta}}{\rho_\alpha} - \pi \lambda_D l_B^2 z^2_\alpha z^2_\beta - \frac{\pi}{3} \sigma^3 (4 a_{\alpha \beta} + 3 b_{\alpha \beta} + 2 d_{\alpha \beta} + 6 e_{\alpha \beta} ) \right]. \end{align} We note that although the structure factor has a slow $\cos (k\sigma)/(k\sigma)^2$ decay, the integrand $ G_{\alpha\beta}(k_{\parallel}, L) + \frac{\pi}{2L} S_{\alpha\beta}\left(k_{\parallel} \right) $ decays rapidly with $k_{\parallel}$, making the numerical integration in Equation (\ref{decay_integrate}) particularly easy. We also note that the integral over $k_{\parallel}$ must be performed last since the divergent part needs to be subtracted off by exploiting the asymptotic expansion of the difference between a Riemann sum and the integral provided by the Euler-Maclaurin formula. Finally, the force per unit area is obtained by numerically differentiating Equation (\ref{casimir_energy_msa}) with respect to $L$. As an illustration, we consider aqueous sodium chloride solutions, and use the ion diameter and dielectric constant estimates from ref.~\cite{smith2016electrostatic}. Figure \ref{no_underscreening} shows that the predicted Casimir force is attractive at low concentrations, confirming the RPA result, but oscillates between attraction and repulsion as a function of surface separation for concentrated electrolytes. Figure \ref{no_underscreening}b shows that the decay length close to saturation concentration is still comparable to the ion diameter, and at 4.9 M the screening length is $\approx 0.32 \sigma$, well below experimentally measured values \cite{smith2016electrostatic}. \begin{figure} \centering \subfigure[]{ \includegraphics[height=0.4\textwidth]{fig1a.pdf}} \subfigure[]{ \includegraphics[height=0.4\textwidth]{fig1b.pdf}} \caption{The electrolyte Casimir force for concentrated electrolytes oscillates between attraction and repulsion as a function of surface separation due to hard core repulsion. (a) The predicted electrolyte Casimir force for aqueous sodium chloride solutions. The ion diameter and dielectric constant estimates are taken from ref.~\cite{smith2016electrostatic}. (b) The main panel shows the electrolyte Casimir force at 4.9M, a concentration close to saturation, plotted on a log scale. The blue (red) portions denote repulsion (attraction), while the dashed line indicates the RPA result at the same concentration. The inset shows the Casimir force at 0.1M. } \label{no_underscreening} \end{figure} \section{Conclusion} \label{sec:conclusion} We have used a second order expansion of the free energy of a binary ionic liquid, confined between two charged insulating surfaces, in powers of the fluctuating ion density modes, for a given spacing $L$ between the surfaces. The resulting Casimir force acting between the surfaces is (minus) the derivative of this free energy with respect to $L$ (cf. Equation~(\ref{casimir_force1})).
The required input is provided by the partial structure factors $S_{\alpha\beta}(k)$. We have examined two cases: \begin{itemize} \item[(a)] When the ions are assumed to be point charges, which amounts to the RPA, valid for very low ion concentrations only, the calculations can be carried out analytically, leading to the result in Equation~(\ref{casimir_force}); the Casimir force is attractive, and decreases with a decay length equal to one half the Debye length. This prediction agrees with earlier calculations based on a different, fully microscopic Debye-H\"uckel approach~\cite{jancovici2004screening, jancovici2005casimir,buenzli2005casimir}. \item[(b)] At higher concentrations, finite size (excluded volume) effects become predominant; we have included them within the MSA, which includes a short-range contribution to the partial direct correlation functions, as shown in Equation~(\ref{MSA_dca}). The resulting expressions for the free energy and Casimir force must now be evaluated numerically. The results for concentrated aqueous NaCl solutions, within an implicit solvent model of oppositely charged hard spheres, are summarized in Fig.~\ref{no_underscreening}. Instead of the exponential decay of the Casimir force predicted by the RPA (point charges), the force now exhibits a striking, exponentially damped oscillatory decay as a function of $L$ at the highest, physically relevant concentrations. The periodicity of the oscillations is comparable to the mean ion diameter, reflecting the structural ordering of the ions. To the best of our knowledge, no such oscillatory Casimir force in electrolyte solutions has been reported before, although oscillatory Casimir forces have been theoretically predicted for active matter systems with a non-monotonic energy fluctuation spectrum \cite{lee2017fluctuation}. \end{itemize} It must be stressed, however, that the Casimir force reported here is not directly related to the ``underscreening" phenomenon discovered recently in experiments~\cite{gebbie2013ionic,gebbie2015long,smith2016electrostatic, lee2017scaling,lee2017underscreening}. Note that the ``first principles" theory of this phenomenon~\cite{rotenberg2017underscreening} is based on the same microscopic model and on the same theoretical tools employed in this paper. The present calculations of the Casimir force can be readily extended to asymmetric electrolytes (ions of different valences and diameters), as well as to models of ionic solutions with explicit solvent~\cite{rotenberg2017underscreening}, within the same theoretical framework presented in Sections~\ref{sec:freeenergy} and~\ref{sec:msa}. Work along these lines is in progress. As a final remark, we note that the electrolyte fluctuation induced force discussed here has to be considered even in the absence of a mean-field interaction arising from surface charges, and that other forces induced by surface-charge fluctuations may also have to be taken into account under confinement by conducting walls in or out of equilibrium~\cite{drosdoff_charge-induced_2016,dean_nonequilibrium_2016}. \begin{acknowledgements} B.R. acknowledges financial support from the French Agence Nationale de la Recherche (ANR) under grant ANR-15-CE09-0013-01. A.A.L acknowledges support from the Winton Program for the Physics of Sustainability. This work is dedicated to Daan Frenkel on the occasion of his 70$^{th}$ birthday, as a token of appreciation of the authors' wonderful interactions with him -- over a range of time-scales. In particular, J.-P.H. 
wishes to express his sincere gratitude for an inspirational friendship and constant support over more than forty years. \end{acknowledgements}
2023-04-23T08:18:18.560Z
2018-03-02T02:02:18.000Z
redpajama/arxiv
arxiv_0000
1,572
3,968
5a0b28e04314cd21e9e035d09f47ca496a674a95
\section*{Introduction} The hydrodynamic instabilities of ordinary viscous fluids are a cornerstone of fluid mechanics, governing the breakdown of laminar flow and the transition to turbulence \cite{DrazinReid,Charru}, and are of great importance across fluid motion in engineering, meteorology, oceanography, astrophysics and geophysics. The modern era has seen the advent of superfluids, realised in the laboratory in the form of superfluid helium \cite{Leggett1999}, ultracold atomic gases (Bose-Einstein condensates (BECs) \cite{Dalfovo1999} and degenerate Fermi gases \cite{Giorgini2008}) and quantum fluids of light \cite{Carusotto2013}. The macroscopic quantum behaviour leads to several key distinctions from ordinary fluids \cite{Tsubota2013b}. Firstly, viscosity is absent in the quantum fluid. Secondly, when the fluid velocity exceeds a critical magnitude, the flow is dissipated through elementary excitations. Thirdly, vorticity is constrained to exist only as discrete filaments with quantised circulation. Given these deep apparent differences, an ongoing direction of research is to establish whether the paradigm instabilities of ordinary fluids have analogs in superfluids, how they manifest and in what ways they are similar \cite{Tsubota2013}. Considerable attention has been given to the instability of laminar superfluid flow past obstacles and surfaces, revealing quantum analogs of the classical wakes including the von K\'arm\'an vortex street \cite{Sasaki2010,Stagg2014,Shin2016} and the boundary layer \cite{Stagg2017}. Systems of two immiscible BECs are predicted to exhibit the Rayleigh-Taylor instability of the interface between them \cite{Sasaki2009, Gautum2010,Bezett2010,Kobyakov2011,Jia2012,Kadokura2012, Kobyakov2014}. Meanwhile, the presence of magnetic dipolar atomic interactions leads to instabilities analogous to those found in ferrofluids, including the Rosensweig instability \cite{Saito2009,Kadau2016} and the fingering instability \cite{Xi}. The Kelvin-Helmholtz (KH) instability is one of the most elementary hydrodynamic instabilities, first formulated by Helmholtz \cite{Helmholtz} and Kelvin \cite{Kelvin} in the nineteenth century, and describes the instability of the interface between two parallel fluid streams with different velocities. Under suitable conditions, the interface undergoes a dynamical instability characterised by exponential growth of perturbations. The interface tends to roll up, destroying the steady laminar flow, and often initiating a transition to turbulence. The simplest flow that supports the KH instability is for two streams within a single inviscid incompressible fluid, for which the instability occurs for all values of the relative speed. The KH instability also arises for two streams of different fluids with different densities, in which case the KH instability can become superposed with the buoyancy-driven Rayleigh-Taylor instability \cite{Charru}. To date, superfluid analogs of the KH instability have been considered at the interface of two distinct superfluids. The KH instability between the A and B phases of superfluid $^3$He has been detected experimentally under rotation \cite{Blaauwgeers} and analysed theoretically \cite{Volovik1,Volovik2,Finne}. The KH instability between nuclear superfluids in a neutron star has been proposed as the trigger for pulsar glitches \cite{Mastrano}.
It has also been discussed at the interfaces between the normal fluid and superfluid \cite{Henn,Korshunov}, between $^3$He and $^4$He \cite{Burmistrov}, and at the interface between two components of an immiscible binary BEC \cite{Takeuchi,Suzuki,Kobyakov2014}. In these cases, the presence of two distinct fluids complicates the behaviour, including buoyancy effects \cite{Kobyakov2014} and a crossover to a counterflow instability if there is significant overlap of the fluids at the interface \cite{Suzuki}. The goal of this paper is to demonstrate that the KH instability can be realized within a single component superfluid and that this prototypical incarnation of the KH instability is achievable with current experimental technologies. We will also see that the KH instability leads to the formation of vortex clusters which, when coarse-grained, mimic a viscous shear layer. This facilitates a measurement of the effective viscosity in a system with a moderate number of vortices, well within the limits of current experimental systems. \section*{Model} We model a weakly-interacting atomic superfluid BEC in two dimensions through a macroscopic wavefunction $\Psi(x,y,t)$, which evolves according to the Gross-Pitaevskii equation (GPE) \cite{Pethick,Pitaevskii,Barenghi}, \begin{equation} i \hbar \frac{\partial\Psi }{\partial t}=\left[ -\frac{\hbar^2}{2m}\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right) +V(x,y,t)+g\left|\Psi \right|^{2}\right]\Psi. \label{eq:GPE} \end{equation} Here $m$ is the atomic mass and $g$ is a nonlinear coefficient arising from the contact-like atomic interactions. As is typical of most BEC experiments (to guarantee the stability of the BEC against collapse), we consider repulsive atomic interactions, $g>0$. The 2D atomic density follows from the wavefunction as $n(x,y,t)=|\Psi(x,y,t)|^2$ and the fluid velocity as ${\bf v}(x,y,t)=(\hbar/m)\nabla \theta(x,y,t)$, where $\theta(x,y,t)$ is the phase distribution of $\Psi$. Time-independent solutions of the GPE satisfy $i \hbar {\Psi}_t= \mu \Psi$, where $\mu$ is the chemical potential of the condensate. Advanced techniques using optical and magnetic fields now allow for almost arbitrary spatial and temporal control over the external potential $V$ experienced by the atoms \cite{Henderson}. We non-dimensionalise the GPE based on ``natural units'' \cite{Barenghi} in which the unit of length is the healing length $\xi=\hbar/\sqrt{m \mu}$, the unit of time is $\hbar/\mu$, the unit of energy is $\mu$, and the unit of density is $n_0=\mu/g$. The corresponding unit of speed is the speed of sound $c=\sqrt{\mu/m}$. We proceed by performing numerical simulations of the GPE (non-dimensionalised using the above units). We consider the domain $-D_x \leq x \leq D_x$, $-D_y \leq y \leq D_y$, with $D_x=128$ and $D_y=64$. Space is discretized onto an $N_x \times N_y=512 \times 256$ uniform Cartesian mesh, spatial derivatives are approximated by a $6^{\rm th}$-order finite difference scheme, and a $3^{\rm rd}$-order Runge-Kutta scheme is used for time evolution, with time-step $\delta t = 5 \times 10^{-3}$. Periodic boundaries are taken in the $x$ (streamwise) direction and zero boundaries in the $y$ (transverse) direction, although our choice of potential means our system is effectively independent of the choice of boundary conditions in the transverse direction. \begin{figure}[t] \centering% \includegraphics[width=0.8\linewidth]{./fig1.pdf} \caption{A schematic of the initial configuration.
An atomic superfluid confined to a channel is divided by a central barrier; the superfluid on either side flows in opposite directions. The central barrier is then lowered in time to create a region of high shear.} \label{fig1} \end{figure} We choose a potential $V$ so as to create an overall channel aligned along $x$ which is separated into two sub-channels by a central barrier, as depicted in Fig.~\ref{fig1}. The potential we take is uniform along $x$, and along $y$ it is a combination of a box potential and a central Gaussian potential, \begin{equation} V(x,y,t) = V_B \, \mathcal{H} \left(|y|-\frac{L}{2}\right)+V_G(t) \exp\left(-\frac{y^2}{\sigma^2} \right), \end{equation} where $\mathcal{H}$ denotes the Heaviside function. The box is taken to be $L=60$ wide and the potential walls are sufficiently high $V_B \gg 1$ to be effectively infinite. Such box potentials can be realized experimentally using appropriately-shaped optical or electromagnetic fields \cite{boxes}. The Gaussian potential is taken to have width $\sigma = 1.28$ and time-dependent amplitude, $V_G(t)$. Such potentials can be created using focussed laser beams, with the amplitude controlled through the laser intensity. Initially $V_G=5$ such that the superfluid in each channel is separate from the other. After numerically obtaining the condensate ground state (by imaginary-time propagation of the GPE \cite{Barenghi}), we impose a linearly-decreasing phase profile along $x$ in the $y>0$ sub-channel with total phase winding number $\mathcal{W}$. This induces a uniform flow in the negative $x$ direction with speed $\pi \mathcal{W}/D_x$. Similarly, we impose an equal and opposite flow in the $y<0$ sub-channel. Note that the equal and opposite flow arrangement is simply taken for convenience: our findings hold for any relative streamwise flow between the two sub-channels. If the potential barrier is maintained, the two fluids undergo persistent flow. However, we choose to ramp the barrier down with time so as to merge the counter-propagating fluids and create a narrow region of large shear flow. We ramp the barrier down according to the function $V_G(t)=\max(0,5-0.1t)$, although our findings are robust to changing the rate of this ramp-down. \section*{Results} \begin{figure*} \centering% \includegraphics[width=0.245\linewidth]{fig2a.pdf}\hfill \includegraphics[width=0.245\linewidth]{fig2c.pdf}\hfill \includegraphics[width=0.245\linewidth]{fig2e.pdf}\hfill \includegraphics[width=0.245\linewidth]{fig2g.pdf}\\ \includegraphics[width=0.245\linewidth]{fig2b.pdf}\hfill \includegraphics[width=0.245\linewidth]{fig2d.pdf}\hfill \includegraphics[width=0.245\linewidth]{fig2f.pdf}\hfill \includegraphics[width=0.245\linewidth]{fig2h.pdf}\\ \caption{Snapshots of the evolution of a simulation with winding number $\mathcal{W}=1$, in the absence of noise. Panels (a) and (b) show the density, $|\Psi|^2$, and phase, $\theta$ (for clarity only plotted where $|\Psi|^2>0.01$), at $t=0$. Panels (c)--(f) show the evolution of the density at times $45<t<150$ as the central barrier is lowered and two topological defects emerge. A final steady state density (g) and phase (h) are plotted at $t=600$.} \label{fig2} \end{figure*} \begin{figure}[t] \centering% \includegraphics[width=0.8\linewidth]{./fig3.png} \caption{Schematic of the formation of circulating flow at points (red dots) along the interface between the two flows. 
The flow velocity is illustrated by black arrows and the phase difference across the interface $\Delta \theta$ by the blue line.} \label{fig3} \end{figure} Figure \ref{fig2} shows the results for a winding number $\mathcal{W}=1$. As the barrier drops, the two persistent currents come into contact, exciting strong density perturbations. We see the formation of two topological defects, quantised vortices with the same sign of circulation, which remain in a stable configuration throughout the rest of the simulation. We have carried out simulations with up to $\mathcal{W}=30$ and all simulations result in the same end-state: a line of quantised vortices with some background phonon excitations. It is clear that the number of vortices produced is simply $2\mathcal{W}$. The explanation for the formation of $2 \mathcal{W}$ like-signed vortices is straightforward and illustrated in Fig. \ref{fig3}. Across the $y=0$ interface there exists a discontinuity in the condensate phase $\theta$. This phase difference, defined as $\Delta \theta=\theta(x,y=0^+)-\theta(x,y=0^-)$ (mod $2\pi$), has a saw-tooth profile along the channel (due to the wrapping of the phase between $-\pi$ and $\pi$). Now recall that the fluid velocity is proportional to the gradient of the phase, and hence this gives rise to a saw-tooth-profile velocity component along $x$. At points along the interface where the phase jumps by $2\pi$, this velocity component discontinuously switches direction. There are exactly $2\mathcal{W}$ such points along the interface. When coupled with the imposed flows for $y>0$ and $y<0$, this gives rise to a circulating flow around these points on the interface, which hence immediately evolve into quantised vortices. \begin{figure*} \centering \includegraphics[width=0.32\linewidth]{fig4a.pdf} \includegraphics[width=0.32\linewidth]{fig4c.pdf} \includegraphics[width=0.32\linewidth]{fig4e.pdf}\\ \includegraphics[width=0.32\linewidth]{fig4b.pdf} \includegraphics[width=0.32\linewidth]{fig4d.pdf} \includegraphics[width=0.32\linewidth]{fig4f.pdf} \caption{Snapshots of the evolution of the density with $\mathcal{W}=20$, with a small amount of noise added to the initial condition, for (a) $t=100$; (b) $t=200$; (c) $t=300$; (d) $t=400$; (e) $t=500$; (f) $t=700$. Note the formation of a quantum vortex sheet which `rolls up' via a Kelvin-Helmholtz instability.} \label{fig4} \end{figure*} What is produced is the quantum analogue of a classical vortex sheet. Whereas a classical vortex sheet is a continuous curve along which the fluid vorticity is non-zero, the quantisation of vorticity in a superfluid prevents this and instead supports a line of quantised vortices. It is interesting to note that in studies of classical vorticity, vortex sheets are often computed as collections of point vortices along a curve \cite{Krasny}; thus the quantum vortex sheet is a direct realization of this mathematical abstraction. We also note that vortex sheets have been predicted in two-component BECs \cite{Kasamatsu}. Subject to perturbations we would expect the quantum vortex sheet to roll up via a KH instability. In our simulations we find that, without an internal or external perturbation, the quantum vortex sheet remains stable, at least for as long a time period as it is feasible to integrate. However, the addition of a small amount of white noise (whose magnitude is less than 1\% of the background wavefunction) to our initial configuration is sufficient to realise a KH instability in a single component superfluid.
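For concreteness, the following is a minimal sketch of such an initial state (an illustrative reconstruction under stated assumptions, not the exact code used for the simulations reported here): starting from a precomputed ground-state wavefunction \texttt{psi0}, it imposes the counter-propagating phase windings $\pm\pi\mathcal{W}/D_x$ in the two sub-channels and adds complex white noise at the $1\%$ level.
\begin{verbatim}
import numpy as np

# Grid matching the parameters quoted above: -Dx <= x <= Dx, -Dy <= y <= Dy
Dx, Dy, Nx, Ny = 128.0, 64.0, 512, 256
x = np.linspace(-Dx, Dx, Nx, endpoint=False)
y = np.linspace(-Dy, Dy, Ny, endpoint=False)
X, Y = np.meshgrid(x, y, indexing="ij")

def seed_initial_state(psi0, winding=20, noise_level=0.01, seed=0):
    """Impose opposite phase windings in the two sub-channels
    (flow speed pi*winding/Dx along -x for y > 0 and along +x for y < 0)
    and add a small complex white-noise perturbation to seed the
    instability; psi0 is an assumed precomputed (Nx, Ny) ground state."""
    rng = np.random.default_rng(seed)
    phase = -(np.pi * winding / Dx) * X * np.sign(Y)
    psi = psi0 * np.exp(1j * phase)
    noise = rng.standard_normal(psi.shape) + 1j * rng.standard_normal(psi.shape)
    return psi + noise_level * np.abs(psi0).max() * noise
\end{verbatim}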
In a real system, such noise will be present from a variety of sources, including thermal, quantum and mechanical effects.
\begin{figure} \begin{center} \includegraphics[width=0.98\columnwidth]{fig5.pdf} \caption{The evolution of the spatial extent of the vortex clusters, $\mathcal{L}$, in time.} \label{fig5} \end{center} \end{figure}
Figure \ref{fig4} shows the evolution of a simulation with $\mathcal{W}=20$, with a small amount of noise added to the initial condition. As expected from above, the interface rapidly evolves into a quantum vortex sheet of 40 like-sign vortices. The vortex line then visibly destabilises. The vortices first tend to bunch up into small clusters of 2--4 vortices, which co-rotate. Over time, the clusters merge with neighbouring clusters, forming progressively bigger clusters. This process is the quantum analogue of the progressive roll-up of a classical vortex sheet, in other words, the KH instability. It is worth noting that the clusters of many like-signed vortices act to mimic classical patches of vorticity.
To monitor the effective cluster size we first integrate the atomic density in the transverse direction, defining \begin{equation}\label{eq:navg} \rho(x,t)=\langle n(x,y,t) \rangle_y=\frac{1}{2 D_y} \int_{-D_y}^{D_y} n(x,y,t) dy, \end{equation} where we interpret $\rho$ as a coarse-grained density field. Denoting by $P(k,t)$ the Fourier transform of $\rho$, the typical spatial extent of a cluster can be estimated as $\mathcal{L}(t)=D_x/(2 |\hat{k}(t)|)$, where \[ \hat{k}(t)=\argmax_{|k|>0} P(k,t). \] Figure \ref{fig5} shows the evolution of $\mathcal{L}$ in time. The step-wise increase of the cluster size $\mathcal{L}$ is consistent with the progressive merger of smaller clusters into larger ones of approximately double the size. This process ceases when the clusters become comparable in size to half the length of the channel. Note how the merging becomes slower as the cluster size increases. Up until this point the vortices are predominantly of the same circulation, which is the circulation of the vortices in the initial quantum vortex sheet. The KH instability is interrupted as the vortices try to form a single large cluster.
Angular momentum is not conserved in our system due to the presence of the external potential. The potential exerts a torque on the gas, which manifests itself through the appearance of negatively signed vortices which are created at the edge of the condensate and penetrate into the bulk. Over time this collection of vortices of positive and negative sign evolves into a quasi-steady state composed of clusters of like-sign vortices, see Fig.~\ref{fig6}. These clusters are consistent with negative-temperature Onsager vortex clusters, which were originally predicted to be the preferred state of high-energy two-dimensional turbulence of point vortices \cite{Onsager}. More recently, these states have been shown to arise in atomic BECs \cite{Simula2014,Billam2014}, including recent experimental observations \cite{Johnstone2018,Gauthier2018}.
To further study how this flow mimics its classical counterpart, we next examine the coarse-grained momentum of the fluid, integrated along the channel. Indeed, we can readily compute the (dimensionless) momentum directly from the wavefunction via $n \mathbf{v}=i\left(\Psi \nabla \Psi^*-\Psi^* \nabla \Psi\right)$.
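As an illustration of these two diagnostics, a minimal sketch (again Python/NumPy, following the assumed grid above; the finite-difference gradients and the use of integer Fourier mode numbers are illustrative choices, not a description of the actual numerics) could read:
\begin{verbatim}
# Illustrative sketch of the coarse-grained diagnostics described above.
import numpy as np

def cluster_extent(psi, Dx):
    """Estimate L(t) = Dx / (2 |k_hat|) from the transversely averaged
    density rho(x,t) of Eq. (eq:navg), with k_hat the dominant nonzero
    Fourier mode (taken here as an integer mode number)."""
    n = np.abs(psi)**2
    rho = n.mean(axis=1)              # average over the transverse (y) direction
    P = np.abs(np.fft.rfft(rho))      # spectrum over modes k = 0, 1, 2, ...
    k_hat = 1 + np.argmax(P[1:])      # dominant mode with |k| > 0
    return Dx/(2.0*k_hat)

def momentum_density(psi, dx, dy):
    """Dimensionless momentum density n*v = i(Psi grad Psi* - Psi* grad Psi)."""
    dpsi_dx = np.gradient(psi, dx, axis=0)
    dpsi_dy = np.gradient(psi, dy, axis=1)
    nvx = np.real(1j*(psi*np.conj(dpsi_dx) - np.conj(psi)*dpsi_dx))
    nvy = np.real(1j*(psi*np.conj(dpsi_dy) - np.conj(psi)*dpsi_dy))
    return nvx, nvy
\end{verbatim}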
We integrate the streamwise component of the momentum along the channel, denoting this quantity $V_\parallel=\langle n v_x\rangle_x$, where $\langle \cdot \rangle_x$ represents averaging over the streamwise dimension (see Eq.~(\ref{eq:navg}) for the precise definition of the averaging represented by the angled brackets). Figure \ref{fig7}(a) shows the evolution of this quantity, computed at three snapshots as the simulation progresses. Before lowering the central barrier, $V_\parallel$ is simply a superposition of the profiles in the two independent sub-channels, where each flow has $V_\parallel\approx \pm 0.5$. However, once the barrier is lowered we see a smoothing of the transition, which is approximately linear in the vicinity of $y=0$. This is akin to a classical viscous shear layer between two regions of fluid under relative motion. While in the classical case shear layers are supported by shear forces and viscosity, in the viscosity-free superfluid this analogous behaviour is generated by the collective action of the many quantised vortices. Similarly, the superfluid analogue of a boundary layer was recently predicted in the form of the collective behaviour of many vortices close to the surface \cite{Stagg2017}.
This analogy to 1D classical shear flow provides a means of estimating the effective viscosity of the quantum fluid. Effective viscosity, $\nu'$, is a widely used concept in both experimental and theoretical studies of superfluid helium \cite{Walmsley,Stagg}, where it is used to interpret the dissipation of (incompressible) kinetic energy. At extremely low temperatures where thermal dissipation mechanisms are negligible, dissipation arises from phonon emission when vortices accelerate (due to the influence of other vortices, density inhomogeneities or boundaries) \cite{vinen2001,barenghi2005} or reconnect/annihilate with each other \cite{zippy2001,stagg2015,baggaley2018}.
\begin{figure} \begin{center} \includegraphics[width=0.4\textwidth]{fig6a} \\ \includegraphics[width=0.4\textwidth]{fig6b} \\ \caption{Snapshots of the late-time evolution of the system with $\mathcal{W}=20$, at times (a) $t=1800$ and (b) $t=5000$. Red circles and blue squares mark the location of vortices with positive and negative circulation respectively.} \label{fig6} \end{center} \end{figure}
In this 1D limit the Navier-Stokes equation for a classical viscous fluid reduces to a simple diffusion equation, and so we can estimate the effective viscosity, $\nu'$, by comparing our evolution of $V_\parallel$ to solutions of the 1D diffusion equation, \begin{equation}\label{eq:diff} \frac{\partial V_\parallel}{\partial t}=\nu' \frac{\partial^2 V_\parallel}{\partial y^2}. \end{equation} If we assume the initial form for $V_\parallel$ to be \[ V_\parallel(y,0) =\begin{cases} \pi\mathcal{W}/D_x&y<0\\ -\pi\mathcal{W}/D_x&y>0 \end{cases}, \] then the solution to Eq.~(\ref{eq:diff}) is simply \begin{equation}\label{eq:diffsoln} V_\parallel=-\frac{\pi\mathcal{W}}{D_x}\, \mathrm{erf}\left(\frac{y}{\sqrt{4 \nu' t}}\right). \end{equation} We obtain $\nu'$ by fitting this analytic solution to $V_\parallel(y)$ from our coarse-grained simulations, with the fit shown in Fig.~\ref{fig7}(b). Hence we estimate $\nu' \approx 0.1$. Given that our non-dimensional quantum of circulation is $\kappa=2\pi$, we estimate $\nu'/\kappa \approx 0.015$, which we can compare to values from the literature.
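For reference, extracting $\nu'$ in this way is a one-parameter least-squares fit. A minimal sketch (assuming SciPy is available; the function and variable names are illustrative rather than those used in our analysis) is:
\begin{verbatim}
# Illustrative sketch only: fit the error-function profile of
# Eq. (eq:diffsoln) to the measured coarse-grained momentum V_par(y)
# at a single time t, treating nu' as the only free parameter.
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

def fit_effective_viscosity(y, V_par, t, W, Dx, nu_guess=0.1):
    """Least-squares estimate of nu' from
    V_par(y) = -(pi*W/Dx) * erf(y / sqrt(4*nu'*t))."""
    def model(yv, nu):
        return -(np.pi*W/Dx)*erf(yv/np.sqrt(4.0*nu*t))
    popt, _ = curve_fit(model, y, V_par, p0=[nu_guess])
    return popt[0]

# e.g. nu_eff = fit_effective_viscosity(y, V_par_measured, t=1000.0, W=20, Dx=256.0)
\end{verbatim}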
\begin{figure} (a) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \\ \includegraphics[width=0.95\linewidth]{fig7a.pdf}\\ (b) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \\ \includegraphics[width=0.95\linewidth]{fig7b.pdf} \caption{The evolution of the coarse-grained streamwise component of the momentum along the channel, $V_\parallel=\langle n v_x\rangle_x$. (a) The solid line shows the two counter-flowing streams at $t=0$; the dashed line shows the existence of a superfluid shear flow at $t=500$; its subsequent evolution (due to effective viscosity) is shown at $t=1000$ via the dot-dashed line. (b) The solid line plots $V_\parallel$ at $t=1000$ computed from the simulations, while the dashed line displays the solution to the diffusion equation, Eq.~(\ref{eq:diffsoln}), with $\nu'=0.1$.} \label{fig7} \end{figure}
Before proceeding it is important to note that the estimates of $\nu'$ to date come from three-dimensional studies of superfluid helium, a system very different from the one considered in this study. With that caveat in mind, the most complete compilation of $\nu'$ to date is found in \cite{Walmsley}, who show that, in the limit of zero temperature, $\nu'$ approaches two different limiting values depending on the form of the turbulence. Ultraquantum or Vinen turbulence is the simplest form of quantum turbulence, where (in three dimensions) there is a nearly random tangle with an apparent lack of large-scale motions in the velocity field. In contrast, in the quasi-classical regime there is some structure to the quantised vortices, and at large length-scales (i.e.\ much larger than the typical intervortex spacing) non-zero coarse-grained velocity and vorticity fields exist; one would expect these coarse-grained fields to be continuous functions, and so a classical-like description becomes possible. For these two different forms of quantum turbulence in the limit of zero temperature, it has been found that $\nu'_{\rm UQ}/\kappa \sim \mathcal{O}(0.1)$ for the ultraquantum regime and $\nu'_{\rm QC}/\kappa \sim \mathcal{O}(0.01-0.1)$ for the quasi-classical regime \cite{Walmsley}. Within the context of the GPE, it has been estimated that $\nu'/\kappa \sim \mathcal{O}(0.1)$ for three-dimensional ultraquantum turbulence \cite{Stagg} and $\nu'/\kappa \sim \mathcal{O}(1)$ for three-dimensional quasi-classical turbulence in a superfluid boundary layer \cite{Stagg2017}. The larger values of $\nu'$ within the GPE compared to superfluid helium have been attributed to the fact that the vortices are many orders of magnitude closer to each other (relative to their core size) in GPE simulations. Our value $\nu'/\kappa \approx 0.015$ is clearly much lower than these previous GPE-based results, despite comparable intervortex distances. This leads us to conclude that the difference is due to the dimensionality of the flow. To our knowledge, this is the first estimate of the effective viscosity for a 2D superfluid, and we hope that future studies will provide comparative estimates of this quantity.
\begin{figure} \centering \includegraphics[width=0.49\linewidth]{fig8a.pdf}\hfill \includegraphics[width=0.49\linewidth]{fig8c.pdf}\\ \includegraphics[width=0.49\linewidth]{fig8b.pdf}\hfill \includegraphics[width=0.49\linewidth]{fig8d.pdf} \caption{Snapshots of the evolution of the density with $\mathcal{W}=20$, in a circular channel, with a small amount of noise added to the initial condition, for (a) $t=400$; (b) $t=800$; (c) $t=1200$; (d) $t=2500$.
Note that the KH instability is observed in an experimentally feasible system. } \label{fig8} \end{figure}
Before we close we turn to an experimentally feasible means to realize the KH instability in an atomic BEC. A ring-trap geometry provides a natural setup to replicate our periodic channel, motivated by the experimental use of ring traps to study the superfluid dynamics of atomic BECs \cite{rings}. In one such experiment, a Laguerre-Gauss beam was used to controllably impart angular momentum to the atoms, which served to phase-imprint winding numbers up to $10$ \cite{Moulder2012}. Figure \ref{fig8} shows the dynamics when the condensate is now confined to a ring-shaped channel (simulated in a square domain, $D_x=D_y=256$). As in the straight-channel simulations, we impose counter-propagating flows in the outer/inner halves of the channel, and use a narrow barrier to initially separate the flows. Following removal of the barrier, we see the establishment of a line of vortices which proceed to `roll up' in a qualitatively similar manner to the simulation presented in Fig.~\ref{fig4}. Note that it may be more convenient in practice to create the relative flow by initially phase imprinting the outer half of the ring-shaped channel while keeping the inner half in shadow (and thus stationary) by means of an optical mask. Note also that our method of estimating the effective viscosity is experimentally achievable. While it is not possible to directly measure the fluid velocity, it is now possible to experimentally identify both the positions and the circulations of the vortices \cite{Powiss2014,Seo2017,Johnstone2018}. From this information the velocity field, and hence the coarse-grained momentum across the channel, can be readily reconstructed.
\section*{Conclusions} In conclusion, we have demonstrated the analogue of the famous classical Kelvin-Helmholtz instability in an atomic superfluid gas. Two adjacent regions of the fluid, initially in relative motion, develop a line of quantised vortices along their interface. This quantum vortex sheet is unstable, and rolls up into small clusters of same-sign vortices. Over time these clusters merge to create larger clusters. When coarse-grained, this flow mimics a classical shear flow, allowing an effective viscosity to be estimated. Once the cluster size becomes comparable to the channel width, secondary vortices of opposite sign are nucleated, mixing into the turbulent flow, and the end state is the segregation of the vortices into clusters of like-sign vortices. These dynamics are experimentally accessible within ring-trapped atomic BECs.
\section*{Acknowledgements} N.P. acknowledges support from the Engineering and Physical Sciences Research Council (Grant No. EP/R005192/1).